Could a duty of care for social media companies work?
Years ago I did a law degree (and to be clear, some of the below is dredged from my ever-fallible memory…). At the time I had a Saturday job at an all-day breakfast café (a life skill that’s actually proved more useful than my degree).
In the middle of the restaurant was a bloody great trapdoor down to the cellar where the freezers were. Obviously we kept the trapdoor closed when we weren’t using it. When we needed to open it, we had barriers to stop diners falling in.
One day the restaurant was quiet, so the chef opened the trapdoor but didn’t put up the barriers. Customers were warned about the gaping hole but, with grim inevitability, someone fell in. I can still remember her scream.
It’s a classic duty of care case, for which a claimant needs to prove three things:
- Were you, the victim, owed a duty of care? (As a restaurant customer, she certainly was)
- Was the duty of care breached? (Yes, because a safety system existed but wasn’t used)
- Did the breach result in damage? (Judging by her scream, I’d say that’s a yes)
It’s worth noting that the UK government is talking about a statutory duty of care for social media, ie. one created by an Act of Parliament, whereas the law above is common law, ie. developed ad hoc through the courts. So a statutory duty of care for social media companies needn’t necessarily copy the common law. But let’s imagine it does. How would the three tests above translate to social media?
First: who would be owed a duty of care? That could be fairly simple: all users of a social media service, eg. as soon as you log into Instagram, you’re owed a duty of care.
But what if you don’t log in? On Twitter, for example, you can see many people’s tweets without ever having to log in. Does that mean Twitter owes a duty of care to anyone who visits the site, whether logged in or not? If they have no control over who uses the service, is it fair to hit them with a duty of care to everyone?
And what about users who breach the rules? I’m thinking here about under-13s who fake their ages to sign up to 13+ services. Should they be owed a duty of care if they should never have been on the site in the first place? (What if, in the restaurant example above, the victim had broken into the restaurant after hours?)
Second: what would it take to breach the duty of care? This is the key bit for social media companies. In the restaurant all those years ago, we had barriers to seal off the danger. What does that look like online? Clearly it doesn’t mean screening every bit of content before it goes live (if that’s what you want, then be prepared for a three-week wait while your tweets are approved – after all, there are around 6,000 tweets every second).
I think this is where social media companies will lobby hardest: so long as they can prove they have some kind of system to flag potentially harmful content, they can argue they’re not breaching their duty of care.
Problem is, I suspect at the moment those systems are woefully lacking, and there’s a reason why:
At the moment the legal responsibility of platforms such as Instagram is covered by the EU’s e-Commerce Directive, which in essence says they’re not liable for anything illegal (whether criminal or civil) until they’re told about it. In short: “publish first, ask questions later”. This has given the online services massive legal cover, and arguably has done more than anything else to foster these companies’ meteoric rise to power.
A duty of care would change this but (usefully for the tech companies) would stop short of turning them from platforms into publishers, which would make them liable for everything on their sites (the standard my employers, such as BBC News, are held to).
Third: has the breach resulted in damage?
This is potentially hard to prove. You might think, for example, that in the tragic case of Molly Russell (who took her own life and had seen self-harm-related content on Instagram) it’d be pretty easy to argue. But perhaps not. Lawyers might try to argue that other factors contributed to her death, or that she was consuming harmful content from other services, and so on. In my restaurant case above, the breach led directly to the harm. In teenagers’ tumultuous lives, it might not be so clear-cut.
So here’s my prediction for a social media duty of care:
As the legislation is discussed and drafted, expect a well-funded, full-scale lobbying campaign by the tech firms to water it down or derail it altogether.
And should it ever make the statute books, expect the overnight creation of a rampant army of ambulance-chasing lawyers, carpet-bombing social media sites with adverts shouting “Have you been harmed by something online?! Get in touch! No win, no fee!”