AUSTIN BOMBINGS 

Russian bots and the Austin bombings: Can fact-checking offset division, misinformation?    

Posted March 28th, 2018

As fear of a serial bomber gripped the Austin metro area last week, Russia-affiliated social media accounts began to push false narratives online and sow discord.

The Alliance for Securing Democracy, a bipartisan nonprofit that monitors 600 Twitter accounts linked to Russian influence agencies, reports that Russian bots and trolls were tweeting heavily about the Austin bombings at the height of the manhunt last week. The words “Austin,” “Texas” and “Austin bombings” were among the top trending subjects on the alliance’s website.

The site also reported a spike of almost 2,000 tweets by the Russia-linked accounts on March 20, when a fifth package bomb exploded at a FedEx facility near San Antonio, and just before the bomber, 23-year-old Mark Anthony Conditt, blew himself up in Round Rock as law enforcement officials closed in.

Russian bots were helping push false narratives that the bombings weren’t being covered by the news media because the first four bombing victims were black and Hispanic, according to the alliance.

The online discord also included discussion of whether law enforcement had taken the bombings seriously before they involved white victims, and whether Conditt, who was white and was reported to be of Christian background, was being treated differently than a person of color or a Muslim would have been. It’s not clear how much of that talk was being driven by Russia-tied accounts.

The bombings were the latest example of Russian bots and trolls attempting to influence discussion around major U.S. events, an issue that has been at the center of public debate since U.S. intelligence officials revealed a campaign by Russia-linked bots during the 2016 U.S. presidential election.

On social media, bots are automated software programs that operate accounts. They generate posts automatically, typically attempt to pass as real people and are often used to promote particular social or political ideas.
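To make that concrete, a minimal sketch of such a bot in Python might look like the following. Everything here is hypothetical: post_to_platform stands in for a real platform API call made with an account’s credentials, and the canned messages are placeholders.

```python
import random
import time

# Canned messages the operator wants amplified (placeholders).
TALKING_POINTS = [
    "Example talking point A #SomeHashtag",
    "Example talking point B #SomeHashtag",
]

def post_to_platform(account, text):
    # Hypothetical stand-in for a real platform API call made with
    # this account's credentials.
    print(f"[{account}] {text}")

def run_bot(account, interval_seconds=600):
    # Post a canned message on a schedule, adding random jitter so the
    # account's activity pattern looks less mechanical.
    while True:
        post_to_platform(account, random.choice(TALKING_POINTS))
        time.sleep(interval_seconds + random.randint(0, 120))
```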

As artificial intelligence technology for bots has evolved, experts say, so has the threat of people using it maliciously on social media, although a growing effort by watchdog organizations and researchers is attempting to counteract the threat.

Bot networks

Last fall, executives with Twitter told congressional investigators that they found more than 1.4 million tweets from more than 36,000 Russian bots during the U.S. presidential race, while Facebook reported that about 126 million of its users could have seen content from accounts run by Russian trolls.

In this context, “trolls” refers to organized operations in which users generate online traffic aimed at influencing public opinion and spreading misinformation and disinformation. Online trolls, whether foreign or domestic, typically target divisive or controversial issues such as illegal immigration or gun laws. According to the alliance, other recent targets have included the school shooting in Parkland, Fla.


People who seek to sow discord online are most likely using bot networks, said Matt Buck, a software engineer and co-founder of Austin-based chatbot firm Voxable, which this year built a Facebook Messenger chatbot for South by Southwest’s Facebook account.

According to Buck, tools to build bot networks, which can simultaneously tweet the same information and then interact with each other to boost a tweet’s visibility, can easily be purchased on the dark web or built by an engineer.
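A rough sketch of that coordination, under the same hypothetical setup as above, might look like this; _post and _boost are stand-ins for platform API calls, not any real tool’s interface.

```python
class BotNetwork:
    """One operator driving many accounts that post the same text
    and then boost one another's copies of it."""

    def __init__(self, accounts):
        self.accounts = accounts  # handles for the controlled accounts

    def blast(self, text):
        # Every account posts the same message at roughly the same time...
        post_ids = {acct: self._post(acct, text) for acct in self.accounts}
        # ...then each account boosts (retweets/likes) the others' copies,
        # inflating the message's apparent popularity.
        for acct in self.accounts:
            for other, post_id in post_ids.items():
                if other != acct:
                    self._boost(acct, post_id)

    def _post(self, account, text):
        print(f"[{account}] POST: {text}")  # hypothetical API call
        return f"{account}:post-id"

    def _boost(self, account, post_id):
        print(f"[{account}] BOOST: {post_id}")  # hypothetical API call
```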

In the case of the Austin bombings, Buck said, a single person could have been controlling a bot network, intervening only to respond to replies to the tweets it published, since building a bot that can convincingly respond to a human is still difficult.

“Links are easy enough to spot. If you can search what an account just tweeted, and you can find other accounts that tweeted the same thing, then that ran through a bot network,” Buck said. “Tracking the actual humans that are actually responding is obviously more difficult.”
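The first test Buck describes, identical text appearing across many distinct accounts, is simple to express in code. The sketch below assumes tweets arrive as plain (account, text) pairs; the function name and threshold are illustrative, not part of any real monitoring tool.

```python
from collections import defaultdict

def flag_coordinated_texts(tweets, min_accounts=5):
    """Group tweets by exact text and flag any text posted verbatim
    by at least `min_accounts` distinct accounts.

    `tweets` is an iterable of (account_id, text) pairs; the data
    shape and threshold here are illustrative assumptions.
    """
    accounts_by_text = defaultdict(set)
    for account_id, text in tweets:
        accounts_by_text[text.strip()].add(account_id)
    # Identical wording from many unrelated accounts is the signature
    # Buck describes for a message run through a bot network.
    return {text: accounts
            for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts}
```

As Buck notes, the replies written by the humans behind such networks admit no comparably simple test.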

As artificial intelligence advances, it will be easier to make bots that seamlessly respond to human interactions, making the threats by Russia and others more difficult to manage, Buck said.

JAY JANNER / AMERICAN-STATESMAN: Officials investigate near a red vehicle believed to be that of Austin bombing suspect Mark Conditt on I-35 in Round Rock on Wednesday, March 21, 2018.

‘Playing catch up’

But awareness of the social bot threat has increased since the U.S. presidential election, said Jonathon Morgan, CEO of Austin-based New Knowledge, a technology company that built the Alliance for Securing Democracy’s monitoring dashboard for Russian bots.

Morgan said tech companies like his are working on software-related counterattacks, while researchers and think tanks are helping to understand the phenomenon.

Those include the University of Oxford’s Computational Propaganda Project, which studies how social media bots are used to influence public discourse, and Google’s Jigsaw incubator, which tries to promote fact-checking organizations and works on other projects to combat online information threats.

“We’re playing catch up,” Morgan said. “And now, it doesn’t take a sophisticated actor to manipulate the system. Instead of one or two groups that are easy to identify like Russia’s Internet Research Agency was, now people are working for nonprofits, domestically, etc., using the same tactics. Anybody with an ax to grind can operate in this space.”

One of the ways to combat this, experts say, is through journalism and fact-checking organizations and websites.

Fact-checking entities such as PolitiFact and Snopes have long helped debunk false narratives. As the Austin bombings unfolded, for example, journalists and others took to Twitter to point out that the bombings were indeed being written about and getting airtime on a number of media outlets. The American-Statesman published an article addressing the issue.

Recently, there has been an increase in crowdsourced fact-checking sites and AI-driven fact-checking software, said Matt Lease, a professor at the University of Texas School of Information. Lease is part of a team of information experts building a fact-checking website at UT, called Claim Checker, that attempts to build trust with users by letting them register their own biases about the organizations being used to fact-check a claim.
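The article doesn’t detail Claim Checker’s internals, but one way to read “letting users register their own biases” is as a trust-weighted aggregation of fact-checkers’ verdicts. The sketch below is purely illustrative; the names and scoring scales are assumptions, not the project’s actual design.

```python
def personalized_verdict(org_verdicts, user_trust):
    """Combine fact-checkers' verdicts on a claim, weighted by the
    user's own stated trust in each organization.

    org_verdicts: {org_name: score in [-1.0, 1.0]}, -1 false, +1 true
    user_trust:   {org_name: weight in [0.0, 1.0]} set by the user
    Both scales are illustrative assumptions, not Claim Checker's design.
    """
    total_weight = sum(user_trust.get(org, 0.0) for org in org_verdicts)
    if total_weight == 0:
        return 0.0  # the user trusts none of the available checkers
    return sum(user_trust.get(org, 0.0) * verdict
               for org, verdict in org_verdicts.items()) / total_weight
```

Under a scheme like this, a user who rates one organization at 0.9 and another at 0.2 would see a combined verdict dominated by the organization they trust more, making the result personalized but transparent.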

“There is an important challenge now for AI not only to be accurate, but to be further transparent and accountable… So that an AI system can establish trust with its users,” Lease said by email.

That ultimately starts with social media platforms being more accountable, according to Morgan.

Private benefit, public harm

While Twitter, Facebook and others have said they monitor their platforms and have suspended fake accounts, malicious activity has persisted.

Facebook is facing a backlash after a recent New York Times report that a third-party political consulting firm, Cambridge Analytica, harvested personal data from more than 50 million accounts.

In a full-page advertisement in several major newspapers, Facebook CEO Mark Zuckerberg wrote that his company is beginning to limit the data third-party apps obtain when users sign into Facebook. The company is also revamping its advertisement system and using surveys to test new methods of promoting trustworthy news.

“I promise to do better for you,” Zuckerberg wrote in his ad.

In Twitter’s case, the company’s user rules include provisions against violence, abuse, graphic content and impersonation, but its loose policies on account setup and parody accounts have allowed thousands of fake accounts to slip through.

Twitter has largely opted to leave combating misinformation to its users and to outside fact-checkers.

“For these companies that want to be the de facto public square, unless they take some steps to be responsible stewards, then I think the historical examples are that when there is private benefit but public harm, those industries get regulated,” Morgan said. “The appropriate safeguards to stop the threats are behind.”

TERESA KROEGER / GETTY IMAGES: Jack Dorsey, CEO of Twitter, at the Thurgood Marshall College Fund 28th Annual Awards Gala at the Washington Hilton on November 21, 2016, in Washington, D.C.
