Researchers who study disinformation told me that the platforms’ swift action demonstrated some progress. “In the past it was denial,” said Sinan Aral, a professor at the M.I.T. Sloan School of Management, describing how tech platforms once responded to misinformation. “Then it was slow reaction. Now things are moving in the right direction.” But, he added: “I wouldn’t say it’s where we want it to be. Ultimately it needs to be proactive.”
That’s not easy to achieve, for many reasons. A look at the Chinese content that Facebook and Twitter responded to shows that not all disinformation is created equal. Russia’s tactics, used to interfere with the 2016 and 2018 elections in the United States, were offensive, seizing on so-called wedge issues to widen divisions and make it harder for people “to come together to negotiate,” said Samantha Bradshaw, a researcher at the Oxford Internet Institute. China’s tactics have been defensive, “using the voice of authoritarian regimes” to suppress freedom of speech while “undermining and discrediting critical dissidents.”
I asked Professor Aral which kind of misinformation was more effective. “Let me be very clear,” he said. “We have very little understanding about its effectiveness.”
There’s no consensus on how to monitor misinformation or measure its impact. In large part, that’s because social media platforms have been reluctant to share details about how their algorithms work or how content is moderated. “Some of these really basic stats, researchers still don’t have access to,” Ms. Bradshaw said.
Only by better understanding how misinformation works will we be able to figure out how to overcome it. And unless we want tech platforms to unilaterally solve the problem, they will need to give up some information to make that happen.
Big Tech’s big challenge
If the conclusions of those two stories seem in conflict, that’s because they are. Social networks are under pressure to better protect user data. They’re also being asked to open up so we can understand how they’re tackling issues like misinformation and hate speech.
Professor Aral called this the “Transparency Paradox,” a term he coined in 2018. “The only way to solve it,” he said, “is to thread the needle, by becoming more transparent and more secure at the same time.”