Spying on Big Tech Might Do a Better Job Than Breaking It Up

For years, social media firms have piled up data on billions of people to boost their profits, while the flow of information never went the other way. Now the tables are turning. A promising bill in the US Congress to combat the undue influence of Big Tech would force these firms to share data on how people use their platforms. Sure, it doesn’t have the same ring as “Break Up Big Tech,” but it could be more effective at curbing hate speech and political division on social media. A spotlight on what people are seeing, and why, can guide regulators to solutions and harness the power of public pressure. Think of the impact of Facebook whistleblower Frances Haugen and multiply it.

Governments are engaged in a variety of efforts to rein in the tech giants; US antitrust regulators last week won approval to pursue their case against Meta, which could lead to its breakup. But that could take years, and it would not directly address the psychological harm caused by the company’s sites. Under the proposed law, by contrast, researchers would get anonymized user data to study in detail how hate speech and conspiracy theories spread on social media. Companies that failed to share such data would be penalized.

Haugen shared thousands of internal documents with news publications, showing the extent to which Facebook was aware of the toll Instagram takes on the mental health of teens. The revelations were a bombshell. But the world cannot rely on whistleblowers, or on Meta’s own internal research into its side effects, to surface such problems. Outsiders need to come in.

America’s Platform Accountability and Transparency Act (PATA) would let academics study user activity in detail on Meta’s Facebook and Instagram, Alphabet’s YouTube, Twitter, ByteDance’s TikTok and other platforms. According to Brandon Silverman, who has been helping senators with the nuts and bolts of the bill, it would also give some access to a wider range of parties, including non-profit organizations. Silverman founded CrowdTangle, a social analytics tool that Facebook bought in 2016, and he left the company last year.

The data would cover millions of people, broken down by factors such as age, approximate location, gender and race, and matched to the content those groups are viewing. The findings should help explain how and why something like hate speech attracts users. Such solid data can turn accepted wisdom on its head. For example, critics of Facebook often say that its most troubled users fall into conspiracy traps because algorithms recommend such content. But what if Facebook’s algorithms don’t always work that way? What if people go to YouTube deliberately to confirm absurd beliefs? “What’s happening is that people who see something on Twitter or Facebook are going over to YouTube and actively searching for it,” said Nate Persily, a law professor at Stanford Law School. And advocacy groups can’t pressure Facebook to change anything if they don’t know why so many people have seen QAnon or anti-vaccine posts on its sites, which is why gathering evidence is so important. “Right now we have only glimpses,” Persily said.

For all its revelations, Haugen’s document dump barely scratched the surface of the most disturbing activity on social media. It could not show, for instance, exactly how posts about a stolen election, shared by a minority of people in the US last year, went dangerously viral before the January 6 uprising. To trace activity around outliers like those accurately, researchers need large amounts of current and historical data, including information on how people hop between different social media platforms.

It would be an unprecedented glimpse into a world that social media companies never wanted people to see. PATA, and a similar proposal in Europe that stands a better chance of passing than a bill stuck in congressional gridlock, could unlock even more data than researchers obtained from Facebook before 2018, when the company barred researchers from studying user activity on the site in the wake of the Cambridge Analytica privacy scandal. Even with the limited insight researchers could glean before that shutdown, more than 130 studies of Facebook’s side effects and activities were published. YouTube and TikTok have never provided similar insights into user behavior.

Online platforms, and Meta in particular, would argue that they have bent over backwards to be transparent, even publishing regular transparency reports. But civil society groups and researchers have long complained that those reports lack useful detail. That is no surprise. Few businesses will willingly reveal their toll on human welfare. But being forced to look at the problem is the first step to solving it. © Bloomberg

Parmy Olson is a Bloomberg Opinion columnist covering technology
