RAA: A recent study on the credibility of tweets

RAA stands for: Research Article Analysis

Paper discussed:

Morris, M. R., Counts, S., Roseway, A., Hoff, A., & Schwarz, J. (2012). Tweeting is believing?: Understanding microblog credibility perceptions. Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW ’12) (pp. 441–450). New York, NY, USA: ACM. doi:10.1145/2145204.2145274


As I was working on a class paper about Twitter use and self-presentation on Twitter, I found this newly published article quite interesting. In the age of information explosion, people rely more and more on personalized, fast-updating information channels to feed themselves fresh news. Twitter, combined with multiple search platforms, has become an ideal medium for delivering useful information. Meanwhile, credibility issues arise as people consume more and more tweets. This study looked into the features that affect the credibility of tweets.


1. Purpose of the research:
Understand the features that affect readers’ perceived credibility of tweets.


2. Methods:
A mix of survey and experimental studies was conducted to achieve the research goal. A survey was first used to capture Twitter users’ general perceptions of tweet credibility. Experiments were then carried out to test the 3 core, most visible features surfaced by the survey results: message topic, user name, and user image.
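
To make the experimental setup concrete, here is a minimal, hypothetical sketch of how three such features could be fully crossed into conditions. The specific topic and image levels are my assumptions for illustration (the paper’s actual levels differ); the two user names echo the paper’s own examples.

```python
# Hypothetical sketch: fully crossing 3 manipulated features into
# experimental conditions. Topic and image levels are assumed for
# illustration; the two user names echo the paper's examples.
from itertools import product

topics = ["science", "politics", "entertainment"]   # assumed topic levels
user_names = ["LabReport", "Pickles_92"]            # topical vs. internet name
user_images = ["photo", "topical icon", "default"]  # assumed image levels

conditions = list(product(topics, user_names, user_images))
print(len(conditions), "conditions, e.g.:", conditions[0])
# -> 18 conditions, e.g.: ('science', 'LabReport', 'photo')
```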


3. Main Findings:
People were poor at judging the truthfulness of tweets based on content alone; instead, they were inclined to use available heuristics, such as user names and user images, to assess a tweet’s credibility. For example, a default Twitter user image decreased both the credibility of the tweet’s content and that of its author, while a topically related user name (e.g., LabReport) increased credibility compared to a traditional internet name (e.g., Pickles_92). These findings have great implications both for individual Twitter users who want to enhance their credibility and for the UI design of search engines, which also aim to increase the perceived credibility of search results.


4. Takeaways:
Besides the research findings themselves, there are 2 points that I found interesting and useful for my future research:
(1) A very clear and persuasive background section
This paper provided a very clear and strong argument for the need for the study. The background on credibility research on Twitter was mainly composed of 3 parts:
  • Concerns about credibility do exist, but no one had studied which features contribute to it; this serves as a gap that needs to be filled.
  • A prior study on Twitter user names examined the relationship between user name and how interesting tweets were perceived to be; this serves as a stepping stone that this research builds upon.
  • There are systems that automatically or semi-automatically classify tweet credibility through a combination of crowdsourcing and machine learning (sketched after this list); this serves as an application that this research can inform.
These 3 arguments triangulate one another, building solid ground for the necessity and value of this study.
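
For context on that third point, here is a minimal, hypothetical sketch of what a crowdsourcing-plus-machine-learning pipeline for tweet credibility might look like: crowd workers supply credibility labels, and a simple text classifier learns from them. The tweets, labels, and model choice are all invented for illustration and are not the systems the authors cite.

```python
# Hypothetical sketch of a crowdsourcing + machine learning pipeline for
# tweet credibility: crowd-provided labels train a simple text classifier.
# Data and model choice are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in for crowd-labeled tweets: (text, majority-vote credibility label).
crowd_labeled = [
    ("Officials confirm evacuation routes after the flood warning", 1),
    ("u wont BELIEVE this one weird trick!!! click now", 0),
    ("New study links sleep loss to memory decline, researchers say", 1),
    ("FREE iphone giveaway totally real click here 2 win!!!", 0),
]
texts, labels = zip(*crowd_labeled)

# TF-IDF features + logistic regression: a common first baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["City officials confirm road closures after the storm"]))
```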
(2) Snowball sampling in social computing research
In the experimental study, the authors claimed that recruiting participants by advertising to their own followers was undesirable because of the drawbacks of the snowball sampling strategy. This piqued my curiosity: although I knew the definition of snowball sampling, I had never used it before and didn’t know its drawbacks either. I followed the citation the authors gave here, which is [Bernstein, M. S., Ackerman, M. S., Chi, E. H., & Miller, R. C. (2011). The trouble with social computing systems research. Proceedings of the 2011 Annual Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’11) (pp. 389–398).]. In this CHI 2011 paper, the authors offered a theoretical framework to support social computing systems research. Regarding snowball sampling, the paper does acknowledge its weakness: “the first participants will have a strong impact on the sample, introducing systematic and unpredictable bias into the results” (see the toy simulation after the list below). However, the paper’s main point was to suggest that researchers embrace snowball sampling because it is “inevitable”, for 3 reasons:
  • The nature of social computing is that information spreads through social channels.
  • Random sampling is an impossible standard for social computing research, because influential users exist who bias any sample.
  • On many social computing platforms, recruiting a random sample is simply beyond the researcher’s ability.
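
To see why the first participants matter so much, here is a toy simulation, entirely made up rather than drawn from either paper: a population split evenly between two communities, with mostly within-community ties, is snowball-sampled from seeds in a single community. The resulting sample is dominated by the seeds’ community even though the population is balanced.

```python
# Toy simulation (made up for illustration): snowball sampling from seeds
# in one of two equally sized communities. Ties are mostly within-community,
# so recruitment rarely crosses over and the seeds' community dominates.
import random

random.seed(0)
N = 1000                                   # 500 people per community

def community(node):
    return 0 if node < N // 2 else 1

# Build a random friendship graph: ~10 ties each, 90% within-community.
neighbors = {i: set() for i in range(N)}
for i in range(N):
    while len(neighbors[i]) < 10:
        same = random.random() < 0.9
        side = community(i) if same else 1 - community(i)
        j = random.randrange(N // 2) + side * (N // 2)
        if j != i:
            neighbors[i].add(j)
            neighbors[j].add(i)

def snowball(seeds, sample_size=100):
    """Recruit by contagion: every recruit invites their neighbors."""
    sample, frontier = set(seeds), list(seeds)
    while frontier and len(sample) < sample_size:
        person = frontier.pop(0)
        for friend in neighbors[person]:
            if friend not in sample and len(sample) < sample_size:
                sample.add(friend)
                frontier.append(friend)
    return sample

# Seed entirely from community 0: the sample skews heavily toward it,
# even though the underlying population is split 50/50.
sample = snowball(seeds=random.sample(range(N // 2), 3))
share = sum(community(p) == 0 for p in sample) / len(sample)
print(f"Share of sample from the seeds' community: {share:.0%}")  # well above 50%
```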

Thus, we might acknowledge that snowball sampling is not an ideal strategy but is, in some sense, inevitable in CHI research. We should be fully aware of its danger of producing a biased sample, and use it wisely. In this credibility paper, the authors recruited participants from Microsoft and Carnegie Mellon University, the organizations they belonged to. This sample does include some degree of diversity, but it also has its own bias: as the authors pointed out, some other demographics that consume tweets were not covered by this recruitment method. Overall, a biased sample might be inevitable in social computing research; it is the researchers’ call to choose among sampling methods based on their research questions and to minimize the bias as much as possible with respect to answering those questions.