
Australia Tightens Law on Social Media Networks Over Violent Content

SOCIAL MEDIA NETWORKS – Australia has passed a new law that will penalize social media companies and jail their executives over violent content if such content is not removed “expeditiously”.

Under the new Australian law, social media and web hosting companies will have to pay a fine of up to 10 percent of their global turnover. Executives will also face the risk of imprisonment for up to three years.

Companies will be required to inform the Australian police about any videos or photographs depicting murder, torture, or rape within a “reasonable” period of time. These companies include Facebook Inc. and Google, which owns YouTube.

“It is important that we make a very clear statement to social media companies that we expect their behavior to change,” said Mitch Fifield, Australian Minister for Communications and the Arts.

For Australian Attorney-General Christian Porter, the law was a “world first in terms of legislating the conduct of social media and online platforms.”

Juries will decide whether companies have complied with the law, which increases the risk of high-profile convictions.

“Whenever there are juries involved, they can get it wrong, but when you add to the mix technology – which is complex – the risk is heightened,” said a professor at the University of Melbourne.

A Tragedy Streamed

The new law addresses the attack in which a lone gunman targeted two mosques in Christchurch on March 15, killing 50 people who were attending Friday prayers.

The gunman used Facebook to live-stream the attack, and users watched and shared the video for over an hour before it was finally removed from the platform. Australian Prime Minister Scott Morrison described this delay as unacceptable.

Facebook said last week that it was working on restricting who can use its live-streaming tools, taking into account factors such as a history of violating the platform’s community standards.

Meanwhile, a spokesperson for Google said that they “have zero tolerance for terrorist content on our platforms.”

“We are committed to leading the way in developing new technologies and standards for identifying and removing terrorist content,” the statement said.

Digital Industry Group Inc. (DIGI), a group composed of various tech companies including Facebook, Twitter, Google, Apple, and Amazon, said the law failed to account for the complexity of removing violent content.

“With the vast volumes of content uploaded to the internet every second, this is a highly complex problem,” said the managing director of DIGI.
