Fake news on Twitter during the 2016 U.S. presidential election

Paper by Nir Grinberg, Kenneth Joseph, Lisa Friedland, Briony Swire-Thompson, and David Lazer

The spread of fake news on social media became a public concern in the United States after the 2016 presidential election. We examined exposure to and sharing of fake news by registered voters on Twitter and found that engagement with fake news sources was extremely concentrated. Only 1% of individuals accounted for 80% of fake news source exposures, and 0.1% accounted for nearly 80% of fake news sources shared. Individuals most likely to engage with fake news sources were conservative leaning, older, and highly engaged with political news. A cluster of fake news sources shared overlapping audiences on the extreme right, but for people across the political spectrum, most political news exposure still came from mainstream media outlets.
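As a concrete illustration of how concentrated such engagement is, here is a minimal Python sketch (using synthetic, heavy-tailed counts rather than the study's Twitter panel) of the statistic behind the headline numbers: the share of all exposures attributable to the top 1% of users.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-user exposure counts drawn from a heavy-tailed distribution
exposures = rng.pareto(1.2, size=100_000)

def top_share(counts, top_frac):
    """Fraction of total volume accounted for by the top `top_frac` of users."""
    counts = np.sort(counts)[::-1]
    k = max(1, int(len(counts) * top_frac))
    return counts[:k].sum() / counts.sum()

print(f"top 1% of users account for {top_share(exposures, 0.01):.0%} of all exposures")
```

With real engagement data in place of the synthetic draws, the same calculation yields the kind of concentration figure the paper reports.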

Link: http://science.sciencemag.org/content/363/6425/374

Continue Reading

Betweenness to assess leaders in criminal networks: New evidence using the dual projection approach

A recent article, “Betweenness to assess leaders in criminal networks: New evidence using the dual projection approach,” by Rosanna Grassi, Francesco Calderoni, and Monica Bianchi compares how well different betweenness centrality measures identify criminal leaders in a meeting participation network. Each measure yields a different ranking of leaders, and dual-projection-based approaches perform better than traditional betweenness or flow-based measures.
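For readers who want to experiment with the idea, here is a minimal Python sketch (using networkx and a made-up attendance list, not the paper's data). It contrasts betweenness computed on the full bipartite person-meeting graph, which keeps both modes in play as the dual-projection approach does, with betweenness on the usual one-mode projection:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical meeting participation data: (person, meeting) pairs
attendance = [("A", "m1"), ("B", "m1"), ("C", "m1"),
              ("B", "m2"), ("D", "m2"),
              ("C", "m3"), ("D", "m3"), ("E", "m3")]

B = nx.Graph(attendance)
people = {person for person, _ in attendance}

# Betweenness on the bipartite graph itself retains information from
# both node sets (people and meetings)
bc_bipartite = nx.betweenness_centrality(B)

# Traditional route: project onto people, weighting ties by the number
# of shared meetings, then compute betweenness on the projection
P = bipartite.weighted_projected_graph(B, people)
bc_projected = nx.betweenness_centrality(P)

for person in sorted(people):
    print(person, round(bc_bipartite[person], 3), round(bc_projected[person], 3))
```

Even on a tiny example like this, the two rankings need not coincide, which is the paper's point about the choice of measure.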

Continue Reading

How algorithmic popularity bias hinders or promotes quality

By Giovanni Luca Ciampaglia, Azadeh Nematzadeh, Filippo Menczer & Alessandro Flammini

Algorithms that favor popular items are used to help us select among many choices, from top-ranked search engine results to highly-cited scientific papers. The goal of these algorithms is to identify high-quality items such as reliable news, credible information sources, and important discoveries—in short, high-quality content should rank at the top. Prior work has shown that choosing what is popular may amplify random fluctuations and lead to sub-optimal rankings. Nonetheless, it is often assumed that recommending what is popular will help high-quality content “bubble up” in practice. Here we identify the conditions in which popularity may be a viable proxy for quality content by studying a simple model of a cultural market endowed with an intrinsic notion of quality. A parameter representing the cognitive cost of exploration controls the trade-off between quality and popularity. Below and above a critical exploration cost, popularity bias is more likely to hinder quality. But we find a narrow intermediate regime of user attention where an optimal balance exists: choosing what is popular can help promote high-quality items to the top. These findings clarify the effects of algorithmic popularity bias on quality outcomes, and may inform the design of more principled mechanisms for techno-social cultural markets.
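The model is easy to play with. Below is a toy simulation loosely inspired by it (my own simplification, not the authors' exact specification): each user either copies a popular item or explores a random one, an item is adopted with probability equal to its intrinsic quality, and we check how well final popularity tracks quality as the exploration rate varies.

```python
import random

def simulate(n_items=50, n_users=20_000, explore_prob=0.2, seed=0):
    """Toy cultural market: returns the mean quality of the top-10 items."""
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_items)]   # intrinsic quality
    popularity = [1] * n_items                         # start with uniform counts
    for _ in range(n_users):
        if rng.random() < explore_prob:
            i = rng.randrange(n_items)                              # explore
        else:
            i = rng.choices(range(n_items), weights=popularity)[0]  # copy the crowd
        if rng.random() < quality[i]:                  # adoption filtered by quality
            popularity[i] += 1
    top = sorted(range(n_items), key=popularity.__getitem__, reverse=True)[:10]
    return sum(quality[i] for i in top) / len(top)

for p in (0.0, 0.2, 0.8):
    print(f"explore_prob={p}: mean quality of top-10 items = {simulate(explore_prob=p):.2f}")
```

Sweeping explore_prob in a sketch like this is one way to look for the intermediate regime the authors describe.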

Article published in Scientific Reports
https://www.nature.com/articles/s41598-018-34203-2
Continue Reading

Complexity Explorables by Dirk Brockmann

Complexity Explorables is a website where visitors can playfully explore examples of complex systems through interactive models.

For example, “I herd you!” lets you explore how different network structures affect the spread of a disease through a population. Along the way, you come to understand a phenomenon called “herd immunity”: a disease can be eradicated even if not the entire population is immunized. The webpage is simple, yet very informative.
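The same effect can be reproduced in a few lines of code. Here is a minimal Python sketch (my own, not code from the site) that immunizes a growing fraction of a random network, seeds a single infection, and measures how far the outbreak spreads:

```python
import random
import networkx as nx

def outbreak_size(G, immunized_frac, p_transmit=0.3, seed=0):
    """Spread an SIR-style infection from one seed; return the number ever infected."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    immune = set(rng.sample(nodes, int(immunized_frac * len(nodes))))
    susceptible = [n for n in nodes if n not in immune]
    infected = {rng.choice(susceptible)}
    recovered = set()
    while infected:
        node = infected.pop()
        recovered.add(node)
        for nbr in G.neighbors(node):
            if (nbr not in immune and nbr not in recovered
                    and nbr not in infected and rng.random() < p_transmit):
                infected.add(nbr)
    return len(recovered)

G = nx.erdos_renyi_graph(2000, 0.005, seed=1)  # random network, mean degree ~10
for f in (0.0, 0.4, 0.8):
    print(f"immunized {f:.0%}: outbreak reaches {outbreak_size(G, f)} people")
```

Past a certain immunization level, outbreaks stay tiny even though many individuals remain susceptible, which is herd immunity in a nutshell.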

If you’re interested, there are many other examples and models; check them out at http://www.complexity-explorables.org/explorables/

Continue Reading

Multidimensional Understanding of Tie Strength

An article, “The weakness of tie strength,” in the current issue of Social Networks unpacks three elements of tie strength: capacity, frequency, and redundancy. A case study of an email network shows that the three elements are not highly correlated and likely reflect different dimensions of ties. This multidimensional view may explain some unexpected empirical findings. For example, Garg and Telang (forthcoming in Management Science) found that strong ties in online social networks play a significant role in job search, while weak ties are ineffective: weak ties may surface some job information, but only strong ties lead to actions such as referrals.
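As a rough illustration of this multidimensional view (using my own simple operationalizations, not the paper's exact definitions), two of the dimensions, frequency and redundancy, can be computed directly from an email log:

```python
from collections import Counter
import networkx as nx

# Hypothetical email log as (sender, recipient) pairs
emails = [("ann", "bob"), ("ann", "bob"), ("bob", "cat"),
          ("ann", "cat"), ("cat", "dan"), ("ann", "bob"),
          ("bob", "dan"), ("ann", "dan")]

freq = Counter(tuple(sorted(pair)) for pair in emails)  # messages per tie
G = nx.Graph(list(freq))                                # underlying contact network

for (u, v), f in sorted(freq.items()):
    redundancy = len(list(nx.common_neighbors(G, u, v)))  # shared contacts
    print(f"{u}-{v}: frequency={f}, redundancy={redundancy}")
```

Ties that score high on one dimension need not score high on the other, which is exactly the kind of decoupling the article reports.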

Continue Reading

How network theory predicts the value of Bitcoin

Recent research by Spencer Wheatley at ETH Zurich in Switzerland and colleagues shows that the key measure of value for cryptocurrencies is the network of people who use them. What’s more, they say, once Bitcoin is valued in this way it becomes possible to see when it is overvalued, and perhaps even to spot the telltale signs that a market crash is imminent.
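The paper's central tool is a generalized Metcalfe's law: the network's value grows as C * n^beta for n active users. Here is a minimal sketch of that fit using made-up numbers (Wheatley et al. fit real Bitcoin user and market-cap series; their data and fitted parameters differ):

```python
import numpy as np

# Made-up (active users, market cap in USD) observations, for illustration only
users = np.array([1e5, 5e5, 1e6, 5e6, 1e7])
market_cap = np.array([2e7, 4e8, 1.5e9, 3e10, 1.2e11])

# Fit log(value) = log(C) + beta * log(n), a generalized Metcalfe's law
beta, log_c = np.polyfit(np.log(users), np.log(market_cap), 1)
print(f"fitted exponent beta = {beta:.2f}")  # Metcalfe's original law has beta = 2

# The ratio of observed to predicted value flags potential overvaluation
predicted = np.exp(log_c) * users ** beta
print("valuation ratio (observed / predicted):", market_cap / predicted)
```

When the observed market cap runs far above the fitted curve, that is the kind of overvaluation signal the authors look for.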

Read the complete article here: https://arxiv.org/abs/1803.05663

And the article published by MIT Technology Review here: https://www.technologyreview.com/s/610614/how-network-theory-predicts-the-value-of-bitcoin/

Continue Reading

Computational Social Science ≠ Computer Science + Social Data

Hanna Wallach published a thought piece on what computational social science is, written from her vantage point as a computer scientist. With computational social science in mind, she lays out the differences between computer science and social science in terms of goals, models, data, and challenges:

  • Goals: Prediction vs. explanation — “[C]omputer scientists may be interested in finding the needle in the haystack—such as […] the right Web page to display from a search—but social scientists are more commonly interested in characterizing the haystack.”
  • Models: “Models for prediction are often intended to replace human interpretation or reasoning, whereas models for explanation are intended to inform or guide human reasoning.”
  • Data: “Computer scientists usually work with large-scale, digitized datasets, often collected and made available for no particular purpose other than “machine learning research.” In contrast, social scientists often use data collected or curated in order to answer specific questions.”
  • Challenges: Datasets capturing social phenomena raise ethical concerns regarding privacy, fairness, and accountability — “they may be new to most computer scientists, but they are not new to social scientists.”

She concludes her article by saying that “we need to work with social scientists in order to understand the ethical implications and consequences of our modeling decisions.”

The article is available here.

Continue Reading