Social media giants face big fines and blocked sites under new rules on harmful content
Author: Rohit Khosla, Jindal Global Law School

Abstract

This article examines the new laws in the United Kingdom penalising big tech companies that fail to delete harmful or abusive content from their platforms. These firms should be prepared for large fines from the regulator and other bodies. The rules may prove a significant step if they are enforced seriously by the government.

The new rules may impose hefty fines on giants such as Facebook, Twitter and WhatsApp if they fail to protect their users and remove harmful content from their platforms. The EU outlined a long-awaited, sweeping overhaul of its digital rule book, while the British government released its own plans to intensify policing of harmful material online, signalling the next phase of technology regulation in Europe. Both sets of proposals include specific measures aimed at the biggest tech companies.

The EU wants to set new rules for “digital gatekeepers” to stop them from acting unfairly. It aims to prevent bad behaviour rather than merely punishing past conduct, as it has largely done so far. Big tech companies will not be allowed, for instance, to prevent users from uninstalling preinstalled software or apps, nor will they be able to use data from business users to compete against them. The rules, known as the Digital Markets Act, allow fines of up to 10% of annual global revenue and, controversially, set out three criteria for defining a gatekeeper: companies that, for the past three years, have had annual European turnover of at least 6.5 billion euros ($8 billion); or a market value of 65 billion euros and at least 45 million monthly users; or 10,000 yearly business users.

Another part of the EU plan, the Digital Services Act, updates the bloc’s 20-year-old rules on e-commerce by making platforms take more responsibility for the goods and services offered on them. That will involve identifying sellers so that rogue traders can be tracked down, being more transparent with users about how algorithms make recommendations, and swiftly taking down illegal content such as hate speech, though in a bid to balance free speech requirements, users will be given the chance to complain. Violations risk fines of up to 6% of annual turnover.[1]
The main point of the UK law is that if companies fail to remove harmful content, a fine of up to 10 per cent of the company’s total worldwide revenue may be imposed. With this, the UK has raised the standard: in the technology era, wrong information cannot simply be spread unchecked while the authorities look the other way.

The bill also adds a provision for criminal sanctions against senior managers of companies operating on UK soil. With this new legislation, however, the UK still has to make its intentions clear regarding the laws that will govern the big tech companies under its lead.

“We are entering a new age of accountability for tech to protect children and vulnerable users, to restore trust in this industry, and to enshrine in law safeguards for free speech. This proportionate new framework will ensure we do not put unnecessary burdens on small businesses but give large digital businesses robust rules of the road to follow, so we can seize the brilliance of modern technology to enhance our lives,” he said. Under the rules, tech platforms will be expected to do much more to protect children from being exposed to harmful content or activity such as grooming, bullying and pornography. The most popular social media sites, with the largest audiences and high-risk features, will need to go further by setting and enforcing clear terms and conditions which explicitly state how they will handle content that is legal but could cause significant physical or psychological harm to adults.

This includes dangerous disinformation and misinformation about coronavirus vaccines, and will help bridge the gap between what companies say they do and what happens in practice. Dame Melanie Dawes, Ofcom’s Chief Executive, said: “Being online brings huge benefits, but four in five people have concerns about it. That shows the need for sensible, balanced rules that protect users from serious harm, but also recognise the great things about being online, including free expression. We’re gearing up for the task by acquiring new technology and data skills, and we’ll work with Parliament as it finalises the plans.”

The government plans to bring the laws forward in an Online Safety Bill next year. The powers, drawn up in response to the Online Harms White Paper consultation, would be introduced by Parliament via secondary legislation. The government said it is also progressing work with the Law Commission on whether the promotion of self-harm should be made illegal. Companies will have different responsibilities for different categories of content and activity, under an approach focused on the sites, apps and platforms where the risk of harm is greatest. A small group of companies with the largest online presences and high-risk features, likely to include Facebook, TikTok, Instagram, and Twitter, will be in Category 1.

These companies will need to assess the risk of legal content or activity on their services that carries “a reasonably foreseeable risk of causing significant physical or psychological harm to adults”. They will then need to explain in their terms and conditions what kind of “legal but harmful” content is acceptable on their platforms, and enforce this transparently and consistently. All companies will need mechanisms so people can easily report harmful content or activity while also being able to appeal the takedown of content. Category 1 companies will be required to publish transparency reports about the steps they are taking to tackle online harms. Financial harms, including fraud and the sale of unsafe goods, will be excluded from this framework.[2]


References

[1] Kelvin Chan, AP News, 15 December 2020, https://apnews.com/article/business-media-bills-social-media-19038a8f0a68448ce6037308930a5efc

[2] Gadgets 360 (NDTV), https://gadgets.ndtv.com/social-networking/news/facebook-twitter-tiktok-uk-britain-fine-laws-10-percent-turnover-harmful-content-cyber-bullying-pornography-2338753