CANBERRA (Reuters) – Under a new law, Australia will fine social media and web hosting companies up to 10 percent of their annual global turnover, and imprison their executives for up to three years, if violent content is not removed “expeditiously”.
The law, passed by parliament on Thursday, is a response to the March 15 lone-gunman attack on two mosques in Christchurch, New Zealand, which killed 50 people as they attended Friday prayers.
The gunman broadcast his attack live on Facebook, and the footage was widely shared for over an hour before being removed, a timeframe Australian Prime Minister Scott Morrison described as unacceptable.
Australian Brenton Tarrant, 28, a suspected white supremacist, was charged with one count of murder following the attack and was remanded without a plea.
New Zealand police said on Thursday that Tarrant will face a total of 50 murder charges and 39 attempted murder charges when he appears in court on Friday.
Under the new law, it is an offence in Australia for companies such as Facebook Inc and Alphabet Inc’s Google, which owns YouTube, to fail to remove without delay any videos or photographs that show murder, torture or rape.
Companies must also notify Australian police of such material within a “reasonable” timeframe.
“It is important that we make a very clear statement to social media companies that we expect their behavior to change,” Australian Minister for Communications and the Arts Mitch Fifield told reporters in Canberra.
Australian Attorney-General Christian Porter described the laws as a “world first in terms of legislating the conduct of social media and online platforms”.
Juries will decide whether companies have complied with the timetable, heightening the risk of high-profile convictions.
“Whenever there are juries involved, they can get it wrong, but when you add into the mix technology – which is complex – the risk is heightened,” Jason Bosland, a professor of media law at the University of Melbourne, told Reuters.
Technology firms said they were already working on the issue.
“We have zero tolerance for terrorist content on our platforms,” said a spokesperson for Google in an emailed statement.
“We are committed to leading the way in developing new technologies and standards for identifying and removing terrorist content.”
A spokeswoman for Facebook was not immediately available for comment.
Facebook said last week it was exploring restrictions on who can access its live video-streaming service, depending on factors such as previous violations of the site’s community standards.
Digital Industry Group Inc (DIGI) – of which Facebook, Apple, Google, Amazon and Twitter are members – said the laws failed to account for the complexity of removing violent content.
“With the vast volumes of content uploaded to the internet every second, this is a highly complex problem,” said Sunita Bose, Managing Director of DIGI.
Australia’s opposition Labor party backed the legislation, but said it would consult with the technology industry over possible amendments if it wins power at an election due in May.
Reporting by Colin Packham; Editing by Michael Perry