European Union lawmakers have backed plans to fine Facebook, Google, Twitter and other online platforms if they fail to remove extremist content within one hour.
The measures were brought into sharper focus in March, when a lone gunman killed 50 people at two New Zealand mosques and live-streamed the attack on one of Facebook’s platforms.
The EU’s draft law, including fines of up to 4 per cent of annual global turnover, was endorsed by member states last year. However, concerns that the measures would hurt smaller online platforms or encroach on civil rights had stalled Monday’s vote.
The EU assembly’s justice and home affairs committee voted 35 to 1, with 8 abstentions, in favour of the proposal, which now requires approval in a plenary vote next week and negotiations among the EU’s three lawmaking bodies.
The first hour is the most vital for stemming the viral spread of online content, EU officials say. They moved to regulate after judging that internet companies were not doing enough under voluntary measures.
Facebook said it removed 1.5 million videos containing footage of the New Zealand attack in the first 24 hours after the shootings.
Three UN special rapporteurs for human rights and the EU’s own rights watchdog have expressed worries that the new rules fall short and could be misused.
Companies rely on a mix of automated tools and human moderators to spot and delete extremist content. However, when illegal content is taken down from one platform, it often crops up on another, straining authorities’ ability to police the web.
In response to industry concerns that smaller platforms lack the resources to comply as speedily with tougher EU rules, lawmakers said authorities should take into account the size and revenue of the companies concerned.
Those facing their first removal order would also receive an additional 12 hours to comply.
Draft measures call on the bloc’s national governments to put in place tools to identify extremist content online, as well as an appeals procedure.
The one-hour rule would apply from the point of notification by national authorities, and companies would face penalties over a “systematic failure” to comply.
However, lawmakers opted to drop a draft requirement for companies to monitor content uploaded or shared on their sites for signs of illegal activity.
“We risk the over-removal of content as businesses would understandably take a safety-first approach,” said Daniel Dalton, a British lawmaker responsible for shepherding the bill through the house. “It also absolutely cannot lead to a general monitoring of content by the back door.”
Brussels has been at the forefront of a push by regulators worldwide to force tech companies to take greater responsibility for content on their sites.
Britain on Monday also proposed new rules that would penalise companies that fail to protect users from harmful content.