
RAA: Applying “the power of the ask” to social media websites?

RAA stands for: Research Article Analysis

Paper discussed:

Wash, R., & Lampe, C. (2012). The power of the ask in social media. Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work, CSCW  ’12 (pp. 1187–1190). New York, NY, USA: ACM. doi:10.1145/2145204.2145381


I’ve always wondered what motivates people to post comments on social media websites. Contributing, fun, or self-presentation? Probably most readers are just like me: I usually feel lazy and am seldom deeply involved in online discussions. This CSCW 2012 paper comes from Dr. Cliff Lampe at the University of Michigan and tries to apply “the power of the ask” to encourage more comments on social media websites. Let’s see whether the authors achieved this goal.


1. Purpose of the research:
Test a UI design grounded in “the power of the ask” strategy from philanthropy, to see whether it can induce users to contribute to a social media website.
The foundation of this research goal is that “charities and social media systems are both instances of what economists call public goods”. Voluntary contributors are needed, but it is always hard to motivate people to become contributors. The authors claim these two systems face two similar issues that keep people from contributing:
  • Which website or charity organization should one contribute to?
  • When should the contribution happen? People procrastinate, intending to contribute later, and often never do.
The power of the ask is a method widely used in charitable fundraising to address these issues: when explicitly asked to donate, people can react to the request and give money immediately (when) to the person who made the request (to whom). The authors therefore applied this method to a social media website, based on the similar nature of the two systems.


2. Methods:
The authors carried out a randomized field experiment on an existing social media system, the Great Lakes Echo, a WordPress-based news service run by the Knight Center for Environmental Journalism. During the 10-week experiment, users were randomly assigned to 3 conditions: no ask, immediate ask, and reminder. The no-ask condition showed the default WordPress interface, with the existing comments and a commenting textbox at the end of the article. The immediate-ask and reminder conditions both showed a popup window 500ms after the page was fully loaded, with two buttons: No Thanks and Leave a Comment. The difference is that the immediate-ask popup contained a commenting box so readers could comment right away, while the reminder popup asked readers to comment after reading the article; clicking “Leave a Comment” in the reminder condition automatically scrolled the page down to the comments area.
Each reader was randomly assigned to only one of the conditions, and the assignment was stored in a browser cookie so that he/she always encountered the same condition during the experimental period. In addition, a reader could only see comments posted by other users in the same experimental condition.
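
To make this mechanism concrete, here is a minimal Python sketch of how a persistent random assignment and condition-scoped comment visibility could work. This is only my illustration: the function names and the cookie-like dictionary are hypothetical, and the paper’s actual system was built on WordPress, not Python.

    import random

    CONDITIONS = ["no_ask", "immediate_ask", "reminder"]

    def get_condition(cookies):
        """Assign a reader to a condition at random on the first visit and
        persist it (modeled here as a cookie-like dict), so the reader sees
        the same condition for the whole experimental period."""
        if "experiment_condition" not in cookies:
            cookies["experiment_condition"] = random.choice(CONDITIONS)
        return cookies["experiment_condition"]

    def visible_comments(all_comments, condition):
        """Readers only see comments left by users in the same condition."""
        return [c for c in all_comments if c["condition"] == condition]

    # A returning reader keeps the same condition across visits.
    cookies = {}
    assert get_condition(cookies) == get_condition(cookies)

The 500ms popup delay in the immediate-ask and reminder conditions would live in client-side code (a timer started after page load) and is not modeled in this sketch.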


3. Main Findings:
A total of 266 comments were generated during this 10-week period.
  • The no-ask and immediate-ask conditions performed similarly, generating 83 and 81 comments respectively, while the reminder condition generated more: 102.
  • The effectiveness of the popups dropped off over time, with the 3 conditions converging to approximately the same average number of comments.
  • The popups did not improve the quality of the comments.


4. Takeaways:
I like the way this article borrows an idea from another area for good reasons and tests it with a field study, which makes it quite interesting to read. However, I found several pitfalls (in my view) that I think compromise the study results.
  • The popup windows were shown 500ms after the article was fully loaded. The authors made a good argument for why other solutions would not work, why this one is the cleanest, and why it is worth trying “at the expense of some amount of external validity”. However, by the time the window pops up, most readers have barely started reading, which basically renders the “immediate ask” condition useless: who can leave a comment when he/she has only just started reading the article? So I expected “no ask” and “immediate ask” to show no difference. I am curious what percentage of users clicked the “No Thanks” button under this condition; it was not reported in the paper. Similarly, I am curious what percentage of users clicked “No Thanks” under the “reminder” condition.
  • In the results section, it was claimed that 179 out of 209 commenters contributed only a single comment during the study, which nearly rules out the possibility that a single individual contributed enough comments to alter the results. However, it is unclear whether the remaining 30 commenters were uniformly distributed across the 3 conditions. With only 266 comments in total, if most of these 30 repeat commenters happened to cluster in one condition, it could bias the results considerably (see the rough illustration below).
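
To illustrate the concern in the second bullet, here is a rough back-of-envelope sketch in Python. The even split of the single comments across conditions is an assumption I made for illustration; the paper does not report how the repeat commenters’ comments were distributed.

    # Rough illustration with assumed numbers: how far could the per-condition
    # totals drift depending on where the 30 repeat commenters ended up?
    TOTAL_COMMENTS = 266
    SINGLE_COMMENTS = 179                                 # 179 commenters left one comment each
    REPEAT_COMMENTS = TOTAL_COMMENTS - SINGLE_COMMENTS    # 87 comments from 30 people

    # Assume the 179 single comments split roughly evenly across the 3 conditions.
    base = SINGLE_COMMENTS / 3                            # ~59.7 comments per condition

    even_spread = [base + REPEAT_COMMENTS / 3] * 3
    clustered = [base + REPEAT_COMMENTS, base, base]      # all repeat comments in one condition

    print([round(x) for x in even_spread])                # ~[89, 89, 89]
    print([round(x) for x in clustered])                  # ~[147, 60, 60]

Even a much milder clustering than this extreme case could account for the observed gap between 81 and 102 comments, which is why I would have liked to see the per-condition breakdown of the repeat commenters.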

RAA: A recent study on the credibility of tweets

RAA stands for: Research Article Analysis

Paper discussed:

Morris, M. R., Counts, S., Roseway, A., Hoff, A., & Schwarz, J. (2012). Tweeting is believing?: understanding microblog credibility perceptions. Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work, CSCW  ’12 (pp. 441–450). New York, NY, USA: ACM. doi:10.1145/2145204.2145274


As I was writing a class paper on Twitter use and self-presentation on Twitter, I found this newly published article quite interesting. In the age of information explosion, people rely more and more on personalized, rapidly updated information channels for fresh news. Twitter, combined with various search platforms, has become an ideal medium for providing useful information. Meanwhile, credibility issues arise as people consume more and more tweets. This study looked into the elements that affect the perceived credibility of tweets.


1. Purpose of the research:
Understand the features that affect readers’ perceived credibility of tweets.


2. Methods:
A mix of survey and experimental studies was conducted to achieve the research goal. A survey was first used to capture Twitter users’ general perceptions of tweet credibility. Experiments were then carried out to test the 3 core and most visible features that surfaced in the survey results (message topic, user name, and user image).


3. Main Findings:
People were poor at judging the truthfulness of tweets based on content alone; instead, they were inclined to use available heuristics, such as user names and user images, to assess a tweet’s credibility. For example, a default Twitter user image decreased both the credibility of the tweet content and the credibility of the author, while a topically related user name (e.g., LabReport) increased credibility compared to a traditional internet name (e.g., Pickles_92). These findings have important implications both for individual Twitter users who want to enhance their credibility and for the UI design of search engines, which also aim to increase the perceived credibility of search results.


4. Takeaways:
Besides the research findings themselves, there are 2 points that I found interesting and useful for my future research:
(1) A very clear and persuasive background section
This paper provided a very clear and strong argument for why the study was needed. The background on credibility research on Twitter was composed of 3 main parts:
  • Concerns about credibility do exist, but no one has studied which features contribute to it. — serves as a gap that needs to be filled.
  • A prior study on Twitter user names examined the relationship between user name and how interesting tweets are perceived to be. — serves as a stepping stone this research can build on.
  • There are systems that automatically or semi-automatically classify tweet credibility through a combination of crowdsourcing and machine learning. — serves as an application this research can inform.
These 3 arguments triangulate one another, building a solid case for the need and value of this study.
(2) Snowball sampling in social computing research
In the experimental study, the authors claimed that recruiting participants by advertising to their own followers was undesirable due to the drawbacks of snowball sampling. This made me curious: although I knew the definition of snowball sampling, I had never used it before and did not know its drawbacks either. I looked up the citation the authors gave here: [Bernstein, M. S., Ackerman, M. S., Chi, E. H., & Miller, R. C. (2011). The trouble with social computing systems research. Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems, CHI EA  ’11 (pp. 389–398).]. In this CHI 2011 paper, the authors offer a theoretical framework to aid social computing systems research. Regarding snowball sampling, the paper acknowledges its weakness: “the first participants will have a strong impact on the sample, introducing systematic and unpredictable bias into the results”. However, the main point of the paper is to suggest that researchers embrace snowball sampling as “inevitable”, for 3 reasons:
  • The nature of social computing is that information spreads through social channels.
  • Random sampling is an impossible standard for social computing research, because influential users will always bias the sample.
  • Many social computing platforms are beyond the researcher’s ability to recruit a random sample from.

Thus, we might acknowledge that snowball sampling is not an ideal strategy but is, in some sense, inevitable in CHI research. We should be fully aware of its danger of producing a biased sample and use it wisely (the small simulation sketch below illustrates how the choice of seed participants can skew a snowball sample). In this credibility paper, the authors recruited participants from Microsoft and Carnegie Mellon University, the organizations they belong to. This sample does include some degree of diversity but also has its own biases; as the authors pointed out, some other demographics that consume tweets were not covered by this recruitment method. Overall, biased sampling may be inevitable in social computing research; it is the researchers’ call to choose among sampling methods based on their research questions and to minimize the bias as much as possible with respect to answering those questions.
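
To see concretely why the first participants matter, here is a small, self-contained Python simulation I wrote (a toy model of my own, not taken from either paper): a population of two equally sized communities in which most ties stay within a community, sampled once uniformly at random and once by snowballing outward from a single seed.

    import random
    from collections import deque

    random.seed(0)

    # Toy population: two equal communities, A and B, with mostly within-community ties.
    N = 1000
    group = {i: ("A" if i < N // 2 else "B") for i in range(N)}
    neighbors = {i: set() for i in range(N)}
    for i in range(N):
        for _ in range(8):
            same_community = random.random() < 0.9        # 90% of ties are within-community
            if (group[i] == "A") == same_community:
                j = random.randint(0, N // 2 - 1)         # pick a contact from community A
            else:
                j = random.randint(N // 2, N - 1)         # pick a contact from community B
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)

    def snowball(seed, size):
        """Breadth-first 'snowball': each recruit invites the people they know."""
        sampled, seen, queue = [], {seed}, deque([seed])
        while queue and len(sampled) < size:
            person = queue.popleft()
            sampled.append(person)
            for friend in neighbors[person]:
                if friend not in seen:
                    seen.add(friend)
                    queue.append(friend)
        return sampled

    def share_of_A(sample):
        return sum(group[p] == "A" for p in sample) / len(sample)

    print(share_of_A(random.sample(range(N), 100)))       # close to 0.5
    print(share_of_A(snowball(seed=0, size=100)))         # typically well above 0.5

Starting the snowball from a seed in community B flips the bias the other way, which matches the warning from Bernstein et al. that the first participants shape the whole sample.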