Networks in the News – Latent Space Models for Cognitive Social Structures

A recent article by Daniel Sewell develops a new approach to Cognitive Social Structures (CSS) data. Although several models for CSS already exist, his latent space models better capture the micro-structures of CSS and provide insights into respondents’ perceptions.

If you’re interested in the paper, it’s available:

Networks in the News – Simple vs. Complex Secrets

A recent American Journal of Sociology article by Georg Rilinger develops a relational theory of complex secrets that explains how corporate crimes often remain secret even after critical information has been revealed. The author argues that, unlike simple secrets (where discovering a fact reveals the secret), complex secrets cannot be identified as “things.” Rather, they require those who discover them to (a) find whole sets of information and then (b) assemble them properly according to a guiding conception. Rilinger demonstrates the case of complex secrets using Insull’s Ponzi scheme in the 1920s and 1930s. In this scandal, there were four FTC investigations, and the early ones failed. He shows that although all the investigations had the same sets of information, the early ones relied on a misguided conception, which prevented them from successfully uncovering the complex secrets.

If you’re interested in the article, go visit:

Networks in the News – The Determinants of Sharing Strategy in a Wi-Fi Sharing Game

A new study by the Human Nature Lab at Yale University explored how people allocate a limited, but personally usable, resource (e.g., unused Wi-Fi bandwidth) to their neighbors. Based on results from a Wi-Fi sharing game that the authors developed, the study found that (a) network density (i.e., the extent to which people in the network are connected with each other) affects the inequality of Wi-Fi sharing, and (b) those who benefit most from Wi-Fi sharing tend to have many neighbors who in turn have few neighbors.
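Two quantities in that summary can be made concrete with a short sketch. The snippet below is an illustration of the general concepts, not the authors’ code: it computes network density and a Gini coefficient, one common way to quantify inequality in how a shared resource like bandwidth ends up allocated.

```python
def density(n_nodes, n_edges):
    """Fraction of all possible undirected ties that are actually present."""
    return 2 * n_edges / (n_nodes * (n_nodes - 1))

def gini(values):
    """Gini coefficient: 0 = perfectly equal, approaching 1 = one node gets everything."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

print(density(10, 18))                # 0.4 -- 18 of the 45 possible ties exist
print(round(gini([5, 5, 5, 5]), 3))   # 0.0 -- bandwidth shared perfectly equally
print(round(gini([0, 0, 0, 20]), 3))  # 0.75 -- one node hoards the bandwidth
```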

If you’re interested in the study, it is available:

Complexity Explorables by Dirk Brockmann

Complexity Explorables is a website where visitors can easily explore examples of complex systems by playing with fun, interactive models.

For example, “I herd you!” lets you explore how different network structures affect the spread of a disease through a population. In doing so, you can understand the phenomenon called “herd immunity,” the idea that “a disease can be eradicated even if not the entire population is immunized.” The webpage is simple, yet very informative.
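The herd-immunity effect that the explorable demonstrates can be sketched in a few lines. The simulation below is a rough illustration with made-up parameters, not the code behind “I herd you!”: it seeds infections in a random network and compares average outbreak sizes with and without vaccinating 75% of the population.

```python
import random

def make_graph(n, avg_deg, rng):
    """Erdos-Renyi-style random graph, stored as a dict of neighbor sets."""
    p = avg_deg / (n - 1)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def outbreak_size(adj, vaccinated, transmit_p, seed, rng):
    """SIR-style spread: each infected node gets one chance to infect each
    susceptible neighbor with probability transmit_p; returns outbreak size."""
    if seed in vaccinated:
        return 0
    infected, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in infected and v not in vaccinated and rng.random() < transmit_p:
                    infected.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(infected)

rng = random.Random(42)
g = make_graph(400, 8, rng)

# Average outbreak size over 20 seed nodes, with no vaccination at all...
no_vax = sum(outbreak_size(g, set(), 0.4, s, rng) for s in range(20)) / 20

# ...versus vaccinating 75% of nodes, above the rough herd-immunity
# threshold 1 - 1/R0 for these parameters.
vaccinated = set(rng.sample(range(400), 300))
seeds = [s for s in range(400) if s not in vaccinated][:20]
with_vax = sum(outbreak_size(g, vaccinated, 0.4, s, rng) for s in seeds) / 20

print(no_vax, with_vax)  # outbreaks stay far smaller in the vaccinated population
```

Even though a quarter of the population is never immunized, the outbreaks in the vaccinated network fizzle out, which is exactly the “herd immunity” point the explorable makes interactively.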

If you’re interested, there are many other examples and models. Check them out!

Noshir Contractor and Kyosuke Tanaka presented their research at #ICA2018

Noshir Contractor and Kyosuke Tanaka presented their research at the 68th Annual Conference of the International Communication Association in Prague, Czech Republic:

Sid Jha and Matt Nicholson present at Northwestern Computational Research Day

Sid Jha will give a lightning talk, “A Computational Platform to Evaluate the Ability to Perceive Social Connections,” at the 2018 Computational Research Day on April 10, 2018. Moreover, Sid and Matt (both Undergraduate Research Assistants at SONIC) will each present a poster:

  • Creating a Framework for Evaluating the Effectiveness of Various Search Strategies in the Small-World Phenomenon (by Matt)
  • Network Acuity: Social Perceptions in a Small-World Experiment (by Sid)

Both abstracts and posters are available:

Computational Social Science ≠ Computer Science + Social Data

Hanna Wallach published a thought piece on what computational social science is, written from her computer science point of view. With computational social science in mind, she laid out the differences between computer science and social science in terms of goals, models, data, and challenges:

  • Goals: Prediction vs. explanation — “[C]omputer scientists may be interested in finding the needle in the haystack—such as […] the right Web page to display from a search—but social scientists are more commonly interested in characterizing the haystack.”
  • Models: “Models for prediction are often intended to replace human interpretation or reasoning, whereas models for explanation are intended to inform or guide human reasoning.”
  • Data: “Computer scientists usually work with large-scale, digitized datasets, often collected and made available for no particular purpose other than “machine learning research.” In contrast, social scientists often use data collected or curated in order to answer specific questions.”
  • Challenges: Datasets capturing social phenomena raise ethical concerns regarding privacy, fairness, and accountability — “they may be new to most computer scientists, but they are not new to social scientists.”


She concludes the article by saying that “we need to work with social scientists in order to understand the ethical implications and consequences of our modeling decisions.”

The article is available here.

Cooperation, clustering, and assortative mixing in dynamic networks

A recent study by David Melamed and his colleagues examined whether the emergent structures that promote cooperation are driven by reputation or can emerge purely via network dynamics. To answer this question, they recruited 1,979 Amazon Mechanical Turk workers to play an iterated prisoner’s dilemma game, randomly assigning the participants to one of 16 experimental conditions. The results show that dynamic networks yield high rates of cooperation even without reputational knowledge. Additionally, the study found that the targeted choice condition in static networks yields cooperation rates as high as those in dynamic networks.
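To make the setup concrete, here is a minimal sketch (my illustration, not the authors’ experimental platform) of one round of a networked prisoner’s dilemma followed by a dynamic-network step in which cooperators cut ties to defectors:

```python
import random

# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play_round(adj, strategy):
    """Every pair of neighbors plays once; returns each node's total payoff."""
    score = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            if u < v:  # count each undirected edge once
                pu, pv = PAYOFF[(strategy[u], strategy[v])]
                score[u] += pu
                score[v] += pv
    return score

def rewire(adj, strategy, rng):
    """Dynamic-network step: each cooperator drops one tie to a defector."""
    for u in list(adj):
        if strategy[u] == "C":
            defectors = [v for v in sorted(adj[u]) if strategy[v] == "D"]
            if defectors:
                v = rng.choice(defectors)
                adj[u].discard(v)
                adj[v].discard(u)

# Tiny example: a triangle (0, 1, 2) plus node 3 tied to node 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
strategy = {0: "C", 1: "C", 2: "D", 3: "C"}

scores = play_round(adj, strategy)
rewire(adj, strategy, random.Random(0))
print(scores)  # the defector scores highest in a single round...
print(adj[2])  # ...but ends up isolated after rewiring: set()
```

The intuition the sketch captures is that when ties are dynamic, defection pays in the short run but costs partners, which is one mechanism by which cooperation can persist without reputational knowledge.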

The original article is available here.

Scale-free Networks Are Rare

Aaron Clauset and his colleague recently shared a new study, “Scale-free networks are rare.” Based on statistical analyses of nearly 1,000 network datasets across different domains, they found that scale-free network structure is far less prevalent than commonly assumed. In particular, their results indicate that only 4% of the datasets show the strongest-possible evidence of scale-free structure, while 52% show only the weakest-possible evidence.
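The statistical machinery behind this kind of test starts from fitting a power law to a degree distribution by maximum likelihood. The snippet below shows the standard estimator for the exponent (the approach Clauset and colleagues popularized); the synthetic degree sequence and parameter choices are my own illustration, not data from the paper.

```python
import math
import random

def powerlaw_alpha(degrees, kmin):
    """MLE for the power-law exponent alpha, using only degrees >= kmin
    (continuous approximation with the usual -0.5 correction for integers)."""
    tail = [k for k in degrees if k >= kmin]
    return 1 + len(tail) / sum(math.log(k / (kmin - 0.5)) for k in tail)

# Synthetic degree sequence drawn from a continuous power law with
# alpha = 2.5 via inverse-CDF sampling, then rounded to integers.
rng = random.Random(1)
alpha_true = 2.5
degs = [max(1, round((1 - rng.random()) ** (-1 / (alpha_true - 1))))
        for _ in range(20000)]

alpha_hat = powerlaw_alpha(degs, kmin=2)
print(round(alpha_hat, 2))  # close to the true exponent of 2.5
```

Recovering the exponent is only the first step, though: the paper’s point is that one must then ask whether the power law actually fits better than alternatives (e.g., a log-normal), and by that standard most real networks fall short.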

Additionally, this study has sparked intense conversation on Twitter. For instance, Laszlo Barabasi retweeted Aaron Clauset’s tweet, saying, “Every 5 years someone is shocked to re-discover that a pure power law does not fit many networks. True: Real networks have predictable deviations. Hence forcing a pure power law on these is like…fitting a sphere to the cow. Sooner or later the hoof will stick out.”

Link to the paper:

Link to Barabasi’s retweet: