US lawmakers have left Facebook in no doubt this week that revelations about the impact of its Instagram app on teen mental health have further damaged the company’s reputation.
The Democratic senator Richard Blumenthal said the social network was “indefensibly delinquent” in its behaviour and had “chosen growth over children’s mental health”, after the Wall Street Journal (WSJ) reported that Facebook’s internal research had flagged concerns that its photo-sharing app was damaging the wellbeing of young users.
The pressure on Facebook is likely to increase on Sunday when a whistleblower appears on US TV to claim that the company is lying to the public and investors about the effectiveness of its attempts to remove hate, violence and misinformation from its platforms.
The whistleblower, who has submitted thousands of internal documents to the US financial regulator, will then appear at a Senate hearing on Tuesday.
The WSJ report and the whistleblower’s appearance take place against a backdrop of active attempts to rein in the power of Facebook and other tech companies. Here are some of the proposals being considered for regulating Facebook.
Break up Facebook

The US competition watchdog, the Federal Trade Commission, has lodged a lawsuit demanding that Facebook sell off Instagram and its messaging app WhatsApp. “After failing to compete with new innovators it illegally bought or buried them when their popularity became an existential threat,” said Holly Vedova, an FTC director.
An earlier lawsuit was dismissed by a US judge, but even if this one goes ahead, it will be a years-long battle. And if Facebook is forced to sell off Instagram and WhatsApp, there is still the question of whether this would do anything to reduce misinformation, hate speech or damage to wellbeing on those platforms.
One idea floated in the book Social Warming, by the former Guardian journalist Charles Arthur, is to split Facebook into discrete geographical entities, which would allow the new Facebook companies to concentrate on moderating smaller networks.
Mark Zuckerberg, Facebook’s founder and chief executive, has argued that only companies as large as Facebook have the resources to fight misinformation, election meddling and harmful content.
Demand more transparency

The Center for Countering Digital Hate, a US- and UK-based campaign group, argues that requiring more transparency from Facebook on several fronts, for instance on lobbying, enforcement of its own guidelines and its advertising system, will make a positive difference. Imran Ahmed, the CCDH’s chief executive, argues that Facebook must also be more transparent about how its algorithms can spread misinformation and create discord.
“If users knew for sure what the algorithm was doing, that there is transparency, and that governments, regulators and watchdogs can independently confirm whether Facebook’s algorithms are pushing misinformation, social media firms would find it impossible to carry on doing business as they are,” Ahmed said.
Asked about transparency at Thursday’s hearing, Facebook’s global head of safety, Antigone Davis, said the establishment of bodies such as the Facebook oversight board underlined the company’s commitment to transparency.
Copy the online safety bill – globally
In the UK, the online safety bill is a landmark piece of draft legislation that would impose a duty of care on social media companies to protect users from harmful content. Under the draft bill, social media firms would also be required to submit to Ofcom, the communications watchdog, a risk assessment of content that causes harm to users.
According to the Conservative chair of a Westminster committee scrutinising the bill, Damian Collins, failing to declare the Instagram research in a risk assessment would expose Facebook to substantial fines under the draft terms of the bill. The legislation also gives Ofcom the power to scrutinise algorithms, which tailor the content that a user consumes and are the subject of much debate among politicians on both sides of the Atlantic. Facebook says it shares the UK government’s objective of “making the internet safer while maintaining the vast social and economic benefits it brings”.
Reform section 230
Section 230 of the US Communications Decency Act is seen as a founding text for social media networks because, in broad terms, it means internet companies cannot be sued for what users publish on their platforms – but neither can they be sued if they decide to take something down. The Democratic senator Amy Klobuchar is attempting to amend section 230 so that social media companies are accountable for the publishing of health misinformation. Along with fellow Democratic senators Mark Warner and Mazie Hirono, she is also backing wider proposals to amend the law (Donald Trump called for section 230 to be repealed altogether), and there are other proposals too. It is a vexed issue, even before you get to the first amendment.
Give users more power over their data
Facebook’s all-important advertising system relies on data from its users, and regulators are considering whether users should be given more control over that data. For instance, users could be given the power to withhold data if they do not think a service meets their standards, which could in turn make social media companies behave more responsibly.
Ensure the metaverse is properly regulated
Facebook’s next big strategic push is the metaverse, where people lead their personal and professional lives online whether through virtual-reality headsets or Pokémon Go-style augmented reality (think a highly developed version of Facebook’s recently launched glasses product). There are obvious privacy implications around living in a virtual world hosted by Facebook, Google or Apple – Facebook’s policy chief, Nick Clegg, talks about multiple metaverses meshed together – that regulators will need to scrutinise, although Facebook says a fully fledged metaverse is many years away. Last month, Facebook launched a $50m (£37m) fund to help find solutions to those concerns and said it would collaborate with policymakers and experts.