Much about Russia's online efforts to roil the 2016 U.S. election remains murky. But as executives from America's top social-media companies - or more precisely, their lawyers - testified before Congress this week, two things became clear. First, the problem is much greater than previously admitted. And second, the companies in question have little incentive to solve it.
In two days of contentious hearings, lawyers for Facebook, Google and Twitter were pressed for answers about Russia's interference. Their responses, according to the legislators involved, ranged from "vague" to "very, very disappointing."
Why the evasiveness?
One reason is that the meddling in question was egregious. Facebook revealed that Russia-backed accounts had spread 80,000 posts that may have reached 126 million people. Twitter found nearly 3,000 such accounts, along with some 36,000 bots that sent out messages related to the election. Google found that Russian groups had uploaded 1,108 videos to YouTube, which were viewed more than 300,000 times. For months, the companies have tried to downplay these numbers. Yet independent research suggests that they may still grow.
A deeper concern is that the Russian operatives showed a sophisticated understanding of what makes each social-media service so effective. On Google, they surfaced fake news. On YouTube, they posted savvy propaganda videos. On Twitter, they used armies of automated bots to spread disinformation and sway the political narrative.
Perhaps most potently, the Russians cleverly exploited Facebook's tools for targeting advertisements. They selected users on the basis of race and religion, sought out groups such as gun owners and Confederate apologists, and served up ads that showed a slick fluency with the idiom of American culture wars - employing slogans such as "Woke Blacks," "Killary Clinton," and "Heritage, Not Hate."
These ads, in turn, induced users to visit the pages of fake accounts linked to Russia, which would then publish yet more incendiary posts, encourage users to share them, and thus amplify their insidious effect. This effort took advantage of something essential about social media and human nature: the more divisive the message, the more emotional the response. Clearly, the intruders had read their Dostoevsky.
All this places the tech companies in a bind. The very attributes that make them such effective advertising platforms are also what made them susceptible to a foreign influence campaign. Their networks, as Senator Mark Warner put it, "in many ways seem purpose-built for Russian disinformation techniques." Preventing such meddling, he might have added, could very well threaten their business models.
So how to resolve this quandary?
Congress has proposed requiring more transparency about who's paying for online political ads, an idea that more than three-quarters of Americans support. That's reasonable as far as it goes. Unfortunately, it doesn't go very far: The bulk of Russia's efforts involved not paid advertising but phony accounts that spread inflammatory content. Policing that kind of thing will be far more complicated, expensive and politically treacherous.
So be it. As a start, these companies must become more forthcoming, hire more monitoring staff, invest in artificial intelligence, and generally become more vigilant to the threats they face, in partnership with security officials. They should also open much more of their data to outside researchers, who could better understand how their services are being misused.
"I'm dead serious about this," Facebook's Mark Zuckerberg says. He'd better be: Companies like his have developed networks of vast reach and influence. They should bear the consequences when their products are abused.