After the January 6, 2021, Capitol riot, as researchers and journalists were seeking answers about what happened, how, and why, a trove of helpful information almost magically appeared. Through inept cybersecurity, the right-wing social media site Parler had accidentally made all of its data public to anyone who knew how to find it.
Journalists pounced, immediately combing through millions of posts, pictures, and videos to report on what people inside the Capitol did that day. Now, a year later, a team of researchers with the left-leaning think tank New America is releasing its findings after a deep, retrospective dive through an estimated 183 million now-public posts. The report, titled “Parler and the Road to the Capitol Attack” and written by Candace Rondeaux, Ben Dalton, Cuong Nguyen, Michael Simeone, Thomas Taylor, and Shawn Walker, draws a clearer picture of what took place on the platform ahead of the riot.
“One of the biggest takeaways is that Parler was designed, either intentionally or because of neglect, to make it easy to run coordinated inauthentic campaigns,” Rondeaux told me in a phone call, referring to the practice of artificially boosting certain content and perspectives on social media platforms. The concept is best known in connection with efforts by countries like Russia to use social media to influence American political discourse, but it has also been used to describe how right-wing propagandist James O’Keefe’s Project Veritas boosted its videos spreading disinformation.
Rondeaux was careful to note that the team was unable to pin down the precise nature and provenance of the coordinated inauthentic behavior on Parler, but said that in their research they repeatedly saw accounts posting in ways that strongly suggested the accounts weren’t real.
“One user we saw was posting at a rate of 1,000 posts per hour, almost entirely of QAnon content,” Rondeaux said. That’s over 16 posts a minute, every minute, for hours on end—a behavior she says was not uncommon. That suggests Parler is not just an echo chamber, as it is often described, but a funhouse of contorted representations of discourse willfully bent and manipulated. “It would be impossible for anyone to open Parler and see a real-world view of anything,” Rondeaux explained.
The co-authors write that they noticed sharp spikes in activity around “significant political events and demonstrations,” including large Trump and MAGA rallies and anti-lockdown protests, as well as the murder of George Floyd and the subsequent racial justice protests and right-wing counterdemonstrations. While these spikes were probably organic, Rondeaux pointed out that people using the platform during those moments of political turmoil were learning about events through a distorted information ecosystem.
The researchers also found that people who came to Washington on the day of the riot were likely to have posted about previously attending other right-wing rallies across the country, suggesting those earlier rallies served as a gateway to farther-right politics.
The data the research was based on comes from a version of Parler that no longer exists: the site was knocked offline shortly after the riot, when Amazon pulled its web hosting. It’s unclear how much the platform has changed since, though its initial vulnerability to manipulation and its poor data security suggest it had a lot to improve on.
Parler users often boast of having been banned from places like Twitter and Facebook, a practice that the authors warn “raises troubling questions about the unintended consequences and efficacy of content moderation schemes on mainstream platforms.”
“In fact, it could be persuasively argued that the intransparency of mainstream platform algorithms and rather ad hoc approach to content moderation by behemoths like Twitter, with its hundreds of millions of users, and Facebook, with its billions of users, drove the market for alt-tech platforms like Parler in the first place,” the authors conclude.
Still, Rondeaux isn’t calling on companies like Facebook and Twitter to address such issues. “It’s not on any of these companies. It’s on regulators,” she said. “There’s no evidence that major technology platforms have the incentive or desire to properly regulate themselves. It’s up to Congress. It’s up to Congress and lawmakers to make decisions about what matters.”