It was the tragic suicide of 14-year-old Molly Russell back in 2017 that finally prompted the Government to announce legislation to prevent cyberbullying and content promoting self-harm on social media platforms. Three years later, it has announced that tech companies like Facebook and Twitter will face fines of up to 10% of their global revenue if they fail to protect people online by stopping illegal and harmful content from reaching them.
“Today Britain is setting the global standard for safety online with the most comprehensive approach yet to online regulation,” said British Digital Secretary Oliver Dowden. “We are entering a new age of accountability for tech to protect children and vulnerable users, to restore trust in this industry, and to enshrine in law safeguards for free speech.”
The new rules will apply across a range of digital services, although news websites - which often have comment sections that allow readers to discuss topics - will be exempt. This follows lobbying by publishing groups worried that they could face hefty penalties or punitive regulatory costs to police their online users. Some forms of online advertising, however, including social media ads, will be covered by the content rules.
To add further weight to the rules, Dowden threatened that technology company executives could face criminal action if their companies fail to uphold a so-called duty of care for their users, which will require firms to remove and limit the spread of illegal content such as terrorist or child abuse material. Failure to act may lead to fines of up to £18 million or 10% of a firm’s global revenue, whichever is higher. The largest platforms will have to go further and assess what “legal, but harmful” content, like COVID-19 disinformation, is allowed on their sites.
Under the proposals, social media companies, internet messaging apps and almost all digital services where people communicate with each other, including search engines, will fall under new rules which are expected to go through parliament next year. However, the rules stop short of directly addressing what content will actually be deemed ‘harmful’, which includes online material that, while legal, may still be problematic. The government said it is “progressing work with the Law Commission on whether the promotion of self-harm should be made illegal.” Tech companies had asked lawmakers to decide what should be included in that definition, but in today’s proposals the government said it would be up to the largest platforms to define it for themselves. This lack of clarity, and the potential for the tech companies to be allowed to continue to effectively self-regulate (or not, as the case may be), has caused both online campaigners and industry groups to voice their concerns:
“For meaningful change, we also need governments to introduce systemic reforms which protect the individual consumer, advertisers, and society as a whole,” said Harriet Kingaby, co-chair of the Conscious Ad Network, a trade group whose aim is to stop advertising being associated with hate speech, fraud and other online harms. She also highlighted that the platforms should not be regulating themselves when it comes to defining harmful content.
Technology groups also highlighted the potential consequences of forcing digital firms of all sizes to police what was shared on their services. Dom Hallas, executive director of the Coalition for a Digital Economy, a trade group of mostly smaller British startups, said it was unclear how the U.K. government’s new rules would make the internet safer, adding that the greater regulatory controls could actually work in favour of larger firms.
“Until the government starts to work collaboratively instead of consistently threatening start-up founders with jail time it’s not clear how we’re going to deliver proposals that work,” he added.
When making the announcement, Oliver Dowden boasted that Britain is the first country to set a new standard, gazumping the European Union, which is due to announce its own measures, known as the ‘Digital Services Act’, next week. The EU is set to impose fines of up to 6% of revenue on companies that do not meet new obligations, including removing illegal content like hate speech. With Brexit looming large, it seems the UK was keen to get in first as Whitehall and the EU strive to outdo each other in being seen to be getting tough with the world’s tech giants. The U.K.’s Office of Communications, the national regulator, will oversee the new regime, and will have powers both to enforce fines and to block digital services that fail to comply with London’s push to remove the most harmful material from the internet.