The 2020 US election is a big test for social media companies like Facebook and Twitter. Four years ago, their platforms were exploited by a Russian misinformation campaign that aimed to mislead the electorate.
Russia’s Internet Research Agency (IRA) sent waves of fake news articles and posts into the US electoral stratosphere and watched as users spread the misinformation far and wide.
In his book The Hype Machine: How social media disrupts our elections, our economy, and our health—and how we must adapt, Sinan Aral, the David Austin Professor of Management, Marketing, IT, and Data Science at MIT, says that though we can’t prove definitively that Russian interference in 2016 swayed the election Trump’s way, false news likely reached between 110 and 130 million people.
Trump won the Electoral College four years ago by a margin of 107,000 votes across three states. Misinformation campaigns only need to sway a small fraction of the electorate to be effective.
So, this time around, what problems do companies like Facebook and Twitter face, and are they prepared to tackle election interference?
The problems facing social media companies in 2020
Sinan, who is also the director of the MIT Initiative on the Digital Economy and the head of MIT’s Social Analytics Lab, documents in his book just how large a problem the spread of false news is.
In a research project with MIT colleagues, carried out in direct collaboration with Twitter, he discovered that between 2007 and 2017, in all categories of information, false news spread “significantly farther, faster, deeper, and more broadly than the truth.” This was an issue in the last election and will be again in 2020.
The election of 2020 is taking place amid the coronavirus pandemic, and although 75 million Americans had already voted by October 28th, either early in person or by mail-in ballot, it could be weeks after November 3rd before the victor is officially confirmed, once all the votes are counted.
Trump has spread rumours that mail-in ballots can’t be trusted. Twitter recently labelled one of his tweets with a warning that claims like this are disputed and might be misleading.
Amid the chaos of a disputed election, foreign actors may find it easier to infiltrate and disseminate further false claims across both Twitter and Facebook.
Paul M. Barrett, the deputy director of the NYU Stern School of Business Center for Business and Human Rights, predicted in a report that this time around, Iran and China may join Russia in disseminating disinformation.
US national security officials have said that Iran is responsible for a slew of threatening emails sent to Democratic voters ahead of the election. They have also said that both Iran and Russia have obtained some voter registration information.
Compared to four years ago, what impact could this have in 2020?
“The problem with 2016 was that the platforms and their users—and the US government—weren’t at all prepared for Russian interference or domestically generated mis- and disinformation,” explains Paul.
“It's hard to gauge whether users are more on their guard for harmful content, but the platforms certainly are.”
Facebook and Twitter introduce new policies to tackle misinformation
In 2019, Twitter decided to ban all paid political ads on its platform. Facebook introduced a similar policy this year, banning new political ads in the week leading up to the election, and all political ads for an unspecified period after November 3rd.
Both platforms moved to restrict the spread of a New York Post story about Joe Biden’s son, Hunter Biden, which contained hacked materials and personal email addresses. Twitter said sharing the article violated its hacked materials policy, while Facebook limited its spread while it was fact-checked.
The platforms have also started to provide more information about a news article’s sources, something David Rand, a professor at the MIT Sloan School of Management and in MIT’s Department of Brain and Cognitive Sciences, believes is a positive step.
“This sort of tactic makes intuitive sense because well-established mainstream news sources, though far from perfect, have higher editing and reporting standards than, say, obscure websites that produce fabricated content with no author attribution,” he wrote in a New York Times op-ed.
Policies like these are clearly aimed at protecting the integrity of the US election in 2020. The fact that the companies are acting shows a willingness to curb the spread of misinformation that was rife in 2016. The election also comes at the end of a year in which the big tech platforms have been hounded for their lack of accountability and anti-competitive behavior.
Recent research of David’s, though, raises questions about the effectiveness of this approach.
David, along with Gordon Pennycook of the University of Regina’s Hill and Levene Schools of Business and Nicholas Dias of the Annenberg School for Communication, found that emphasizing sources had virtually no impact on whether people believed news headlines.
Attaching warning labels can also backfire. Though people were less likely to believe and share headlines labelled as false, only a small percentage of headlines are fact-checked, and bots can create and spread misinformation far faster than those stories can be verified.
“A system of sparsely supplied warnings could be less helpful than a system of no warnings, since the former can seem to imply that anything without a warning is true,” David wrote in the Times.
So, what’s the solution?
How to fix misinformation and social media
Paul of NYU Stern thinks that, in one sense, the social media companies can never do enough.
“The platforms host too much traffic for even the best combination of artificial intelligence and human moderation to stop all harmful content,” he says.
“In addition to continuing to improve technological and human screening, they should be revising their basic algorithms for search, ranking, and recommendation. Currently, those algorithms still reportedly favor promotion of sensationalistic and anger-inducing content—a tendency that purveyors of harmful content exploit.”
A more modest step would be to remove, rather than label or demote, content that has been determined to be demonstrably false. This should be coupled with an increase in the number of content moderators, hired in-house rather than outsourced, says Paul.
There’s also the issue of accountability. Under Section 230 of the Communications Decency Act in the US, social media companies aren’t responsible for what their users post—the law acts as an editorial shield.
Making social media companies responsible for the content posted on their platforms might be a step in the right direction. But how to do that is still up for debate, with big tech one of the key battlegrounds in the election. Trump signed an executive order earlier in 2020 aimed at stripping social media companies of the protection provided by Section 230. An executive order is not the solution.
In his research report, ‘Regulating Social Media: The Fight Over Section 230—and Beyond’, Paul says that Section 230 should be preserved and improved upon, pushing platforms to accept greater responsibility.
He also floats the idea of creating a Digital Regulatory Agency.
“There’s a crisis of trust in the major platforms’ ability and willingness to superintend their sites. Creation of a new independent digital oversight authority should be part of the response,” he explains in the report.
“While avoiding direct involvement in decisions about content, the agency would enforce the responsibilities required by a revised Section 230.”
By implementing new policies to tackle false information, social media companies are moving in the right direction. But research shows it’s not enough. Facebook and Twitter may be more ready than they were in 2016 to prevent the spread of misinformation, but whether that means they’re truly ready for the US election in 2020 is a different question.
The spread of false news is also a problem that goes beyond the US election. It’s a societal challenge on a global scale. More needs to be done.
As a start, on the platform side we need more accountability and responsibility. On the user side, we need a more scientifically rigorous approach, one that draws on evidence to teach the public how to discern fact from fiction and how to interpret information in the digital age.
“[The platforms] need to continue to promote authoritative information about voting, while simultaneously rooting out any and all attempts at voter suppression,” concludes NYU Stern’s Paul.
“The latter practice—discouraging eligible voters from even attempting to go to the polls—is one of the great continuing sins of American society and must be addressed directly and forcefully.”
BB Insights examines the latest news and trends from the business world, drawing on the expertise of leading faculty members at the world's best business schools.