Why the UK’s combative approach to Facebook will backfire

Brhmie Balaram
Associate Director, Economy, Enterprise and Manufacturing (family leave)

Today’s parliamentary hearing assembled an ‘international grand committee’ to interrogate a representative of Facebook about its role in spreading disinformation. The optics may be in the committee’s favour, but its aggressive tactics are bound to backfire, discouraging closer cooperation and collaboration between social media platforms and countries like the UK.

The Digital, Culture, Media and Sport (DCMS) Select Committee, which is leading the inquiry into disinformation and ‘fake news’, may believe that Facebook has left it with no choice but to be forceful in gathering evidence. After all, Facebook’s Chief Executive, Mark Zuckerberg, has repeatedly rebuffed its requests for him to testify.

Although Zuckerberg testified before the US Congress earlier this year, the DCMS Select Committee has been intent on its own opportunity to quiz him on Facebook’s approach to advertising, privacy and data sharing in the wake of the Cambridge Analytica scandal. When the UK still failed to summon Zuckerberg after forming a global coalition with the likes of Canada and Singapore, perhaps members of the Committee were right to assume that their only option was to invoke ancient parliamentary powers to seize a cache of Facebook’s internal documents. But what were they hoping this would accomplish?

The documents in question concern Facebook’s data policies between 2013 and 2014, when the data of millions of Facebook users was exploited by Cambridge Analytica. Of course, it’s important to hold Facebook accountable for any breach of data protection law, but it’s not clear why the Select Committee believes that this is its responsibility, or that it has the necessary authority to pursue it in a meaningful way.

The Information Commissioner’s Office (ICO), for example, has already fined Facebook £500,000 for this very breach and could press for these documents, if relevant, as it contests Facebook’s appeal. Given that Facebook has since changed its policies, it’s puzzling that so much of the focus in today’s hearing was on how the platform conducted itself four years ago rather than on what Facebook is doing at present to protect users’ data, curb disinformation and challenge online manipulation.

It appears that the Select Committee – and the international grand committee by extension – is keen to show its strength by berating Facebook for its past failures and threatening regulation (though what that regulation would actually entail remains vague). But the spread of disinformation is not a problem unique to Facebook, nor one that Facebook alone can resolve. It is a problem that all social media networks must grapple with, and it’s bizarre to allege that platforms have been capable of managing it on their own but have simply chosen not to, out of either malice or negligence. Self-regulation was never going to cut it, but neither should we presume that state intervention will have greater success. So far, state attempts to police platforms haven’t sat well with all citizens and have been criticised for censoring free speech.

It may be that the international grand committee is considering a different tactic from those taken so far by the likes of Germany. Rather than getting involved directly in content moderation, members repeatedly voiced the idea that if policymakers weren’t satisfied with Facebook’s efforts to self-regulate, then they should be able to apply sanctions. This might explain why the Committee has taken a hard line with Facebook during the hearing, but the approach will backfire because, again, it ultimately entrusts platforms to self-regulate adequately rather than to collaborate.

Self-regulation is problematic because:

  1. It gives away too much power to platforms.
  2. It sets them up to fail (at democracy’s expense) because it expects too much of individual platforms.

If we can agree that the goal should be to protect democracy by controlling the spread of toxic content (disinformation, hate speech, etc.), then this will only be achieved through collective action – platforms, policymakers and civil society must collaborate to realise this aim.

Collaboration doesn’t take the form of sanctions; it entails cooperation and a commitment to working together for a shared purpose. For example, the recent deal struck between Facebook and regulators in France is a form of collaboration known as co-regulation: Facebook is inviting scrutiny of how it moderates hate speech and other problematic content so that regulators can help refine the process in the public’s best interests. Facebook also mentioned during the hearing that it wants to work with other expert stakeholders, such as academics, to prevent online manipulation. There are opportunities here to collaborate, should policymakers want to take them – but it begins with asking a different question: how can we support platforms in the war against toxic content, as opposed to how can we punish them?
