May 1, 2020

What lessons haven’t we learned since 2016? Lesson 3: News Aggregation

Since the realization that foreign and domestic disinformation played a central role in driving certain narratives during the 2016 elections, many Americans have tried to grapple with what this means, how it affects us and our communities, and what we can do about it. 

This series explores those lessons we haven’t learned well enough since 2016, and what we should learn from them to be better information citizens and break patterns of how disinformation works. 


Lesson 3: “News aggregator” accounts and personalities play a role in sculpting information environments and building networks on social media — and sometimes also in promoting disinformation and coercive narratives.

We need to understand the sometimes outsized role that these accounts play in our timelines, and their purpose. How we associate with those networks and narratives can make us unwitting accomplices in amplifying disinformation, or targets of future campaigns meant to exploit our biases. 

The way we participate in information on social media platforms associates us with certain networks — identifies us as part of them, both algorithmically and actually. We can “RT≠endorsement” all we want, but this is meaningless in terms of how algorithms will identify our interests, or how our interactions with accounts and topics can be mapped and tracked. This means that our association with certain networks can make us targets for certain information campaigns, including disinformation narratives — and if we aren’t careful, we can become unwitting participants in promoting disinformation.

Disinformation narratives fail when they don’t find audiences — networks — to promote them. Identifying the right targets and building networks that reach and instrumentalize those targets effectively is a defining aspect of disinformation success — building communities online in which this information can thrive with little resistance. 

With this in mind, it’s important to look at how anonymous accounts and named personalities that “curate” news and information for others help build this architecture within social media, so that we can make better decisions about which information networks we choose to participate in. 

This is perhaps best introduced by discussing an example: the case of ZeroHedge. 

I was a longtime resister of social media, for many of the data and privacy-related issues that have come into focus since 2016. But I joined Twitter in March 2014, just after the invasion of Crimea. There was a frenzy of Kremlin information operations against Ukraine, and against the West; Twitter was one front in this battle. As I followed more people and learned how to use Twitter as a resource for news and information, one account kept popping into my timeline over and over, retweeted or linked to by others trying to make sense of what Russia was doing: ZeroHedge. At first, it seemed harmlessly snarky. Over time, it became clear it was a gateway to propaganda and disinformation on specific topics. 

ZeroHedge, using avatars and pseudonyms from the movie Fight Club, is a website and social media content generator presenting itself as edgy financial news and analysis. There is some original content and some reposted material, all from “anons” (anonymous writers) with alleged “insider information.” After gaining a following in financial circles in the years after the 2008 financial collapse, it started posting a fair bit on foreign affairs and geopolitical issues. Its hot takes on Twitter gained attention and amplification. The foreign affairs opinions were sympathetic to Putin, Assad, and Tehran. 

By mixing this pro-Kremlin, pro-Assad bias into its financial newsfeed, ZeroHedge exposed many people who didn’t normally track those issues (but who had identified themselves as open to anti-establishment thinking by following it) to that bias at the height of two wars, Syria and Ukraine, in which disinformation was a first-line effort.

ZeroHedge normalized these views as “sensible” and “pragmatic,” targeting an audience inclined to believe that expedient economic interests should come before geopolitical and ideological concerns — crowds who wanted to believe they were reading about “risk analysis.” Over time, ZeroHedge also offered commentary in support of Trump and WikiLeaks, and then attacks on the Mueller investigation, all alongside ongoing promotion of other Kremlin conspiracies. 

ZeroHedge — which is registered in Bulgaria and has many odd features — is sophisticated disinformation and narrative warfare, executed with purpose and intent, generating stories that “frequently echo the Kremlin line.” It doesn’t matter whether it is directed by some Kremlin interest or is merely a fellow traveler: in network analyses of 2016 disinformation campaigns and beyond, ZeroHedge’s Twitter account was typically a central node in pushing conspiracies, with its content frequently posted by Kremlin trolls and far-right accounts in the US alike. 

None of this mattered to Twitter, of course: ZeroHedge was ultimately kicked off the platform only this year, for promoting conspiracies about COVID-19. But how it operated is representative of a core set of Kremlin tactics in information operations that many others have learned to mirror — tactics refined over decades of Soviet propaganda and disinformation work. Embed in a community, emulate it, and, over time, introduce new content and narratives that exploit the identity of the network. 

Curating news and information for a network on social media is an effective means to accomplish this. Once you establish that you are part of a community, your content is accepted with lower vetting standards. For example, even after ZeroHedge was identified as a purveyor of problematic narratives, people who treat it as a font of useful information on financial issues continue to swear by it, saying they know to ignore the kooky content (which isn’t how any of this works: all the content targets the same psychology, which is why it is successful). 

Only about a third of the news we read online comes directly from the publisher’s site; the rest is filtered and curated in some way as it finds its way to us through social media and internet searches. Part of what we like about social media is the ability to follow and track what we want to see — to curate our own experiences on social media platforms based on our interests. We rely on the accounts we choose to follow to help put the information we want to see in front of us. Especially on Twitter, high-volume posters can dominate your timeline.

But this leaves us vulnerable to the selectivity and bias of other accounts. Bad actors on social media understand how to exploit the weaknesses we create for ourselves, including the tendency to outsource the curation of the information we want to read. Enter: accounts that aggregate news for interest groups, and named personas who are high-volume posters and commenters on news and current events, whose motivations we may not understand. (Not all such accounts are bad actors, but more later on how they can still have an outsized influence on what we see.)

To dig into this, it’s important to understand how Kremlin accounts actually worked and gained influence. Skeptics of Russian influence still dismiss the impact of the Kremlin information campaigns as “a handful of ads,” but analysis of accounts identified as run by the Kremlin’s troll farm provides a lot of information about what they actually did and why it was successful. A few of those tactics are relevant in understanding why curation can be a disinformation accelerator. They are: 

Exploit our trust in local news:

Americans trust local news more than other sources. The Kremlin operators exploited this by creating imposter local news feeds that were, among other things, highly selective in the polarizing content they posted. Appearing as local sources lowered our defenses to untrustworthy information. This tactic has also been emulated by domestic right-wing information campaigns. A lot of this is about building an audience and identifying targets. These types of “news” accounts established a presence within networks focused on local issues, which leads to —

Embed in established communities:

Research shows that the trolls weren’t trolling — that is, arguing or posting denigrating comments — most of the time. Mostly they were flattering and ingratiating themselves in replies and retweets within the communities they wanted to “infiltrate.” The imposter news feeds did the same, essentially, by posting other local content. The goal of all of this was to look a lot like the community you wanted to have influence in and to gain acceptance. The Kremlin accounts also rewarded outsiders who engaged with their narratives with additional followers and amplification. This behavioral engineering helped to — 

Build networks:

Individual accounts may gain a ton of followers and influence — but they can also be taken down quickly if they violate content rules or are identified as bad actors. Networks built between accounts and among communities build resilience into information architecture, and they are harder to disrupt or erase once established. A good example of this from the data on the Kremlin accounts is the “media mirage” they built between parallel accounts targeting the black community in America; the reach came in part from the holistic establishment of a network of accounts. Across the board, Kremlin accounts interacted with other, real accounts, creating networked content models that erased the lines between foreign and domestic. These networks, and the non-Kremlin accounts built up by Kremlin trolls, persisted even after the Kremlin accounts were purged. Breaking those networks is vital to disrupting disinformation — and so far it is a task the platforms have not been able to address. 

Fast-forward to 2020: why do I think this is important to look back on? Because some accounts have more influence on what we see than others, and we need to be more consciously aware of how they act on and influence us, regardless of their provenance or goals, which could be entirely innocuous. 

Let’s look first at accounts that appear as news aggregators. Take, for example, BNONews, a Twitter feed with a linked website that has little original content and no bylines (it is not original reporting). What is it? I have no idea, and it doesn’t actually matter. It is incorporated in the Netherlands and relies heavily on references to a past project that no longer exists. Mostly it just posts tweets that copy the headlines of other news outlets without linking to those outlets or sources — things like ‘BREAKING: random disaster headline with no verifiable source’ — which indicates this is about its own revenue model more than anything else. It has hundreds of thousands of followers on Twitter and is reposted by verified and influential accounts. 

If the stories are true, who cares, right? As we have seen time and again, these “commercial projects” can find themselves fueling less savory information tactics. BNO selects what it posts, which introduces bias. It doesn’t really explain its purpose, beyond “breaking news.” And the posting of unsourced headlines allows for the introduction of rumor or unverified content — it could say a headline is from CNN, but is it? What “officials”? Does anyone bother checking before retweeting? Even if the account itself is commercially driven and not a bad actor, the unsourced nature of the information and statistics it posts provides an opportunity and a resource for bad actors: a report on accounts spreading COVID-19 disinformation found that BNO’s coronavirus tracker was linked to almost 2,000 times by those accounts. 

The point is: accounts like this exploit our lazy information habits (someone else is collecting information, yay!), and lower our defenses against malign actors doing the same with more focused narrative goals. We become used to them, then accept what they tell us without scrutiny. It reinforces the worst habits of spreading information on the internet. There are better ways to get and spread information. Beware aggregation. 

We should also be aware of the role that news-aggregating personalities play in our Twitter feeds. There are plenty of verified and unverified accounts that have gained extensive followings by essentially making themselves curators of important news. Some of these are TV producers or other video content producers or journalists; some are just interested devotees. But we should be aware of how their high-volume posting sculpts and changes our timelines on social media. Because they curate content quickly, we rely on them to flag news for us, to find that clip of someone saying something and post it immediately, and they litter our timelines with reposts of the same videos and stories. Sometimes we think we understand their motivations, but we don’t always. Beyond that, they post so much that it can start to dominate the information we see. The same clip is posted over and over because everyone wants to engage with it, and it seems like the only story, or a big story, when it isn’t. Try muting or unfollowing these accounts as an experiment, and you will see how much it alters your timeline and reduces selectivity bias. You’ll still see the same content, but without the sense of rocket boosters. 

So, what can you do about it? 

  • Understand that the power of social media is in building networked communities — for good and bad. The way you participate in information identifies you as part of networks. This makes you a target of certain information campaigns. Be aware of this as you follow accounts and engage with content. Do you want to be in that community?
  • Understand the role you play as a node in disinformation networks and amplification. How you position yourself in networks influences the information that will find you. Be conscious of groups that seem to be building narrative communities with purposes you don’t quite understand. Don’t stay in a network that is amplifying corrosive stories and narratives, or that engages in trolling or abuse as much as anything else. Mute, block, or unfollow to dissociate yourself from negative narratives and campaigns. 
  • News-aggregating accounts and individuals carry both risk and reward. Understand that balance, watch for bias, and never let them dominate the curation of information in your feed. Rely on accounts that link to (or at least screenshot) original sources and represent content accurately. Follow diverse sources, including outlets, journalists, and experts directly. 