Australia’s eSafety Commissioner is moving to crack down on deepfake child abuse material and pro-terror content through new standards that would require technology giants to do more to tackle seriously harmful online content. The standards follow the failure of self-regulatory codes developed by the tech industry and aim to address the proliferation of synthetic child sexual abuse material created with artificial intelligence.
The proposed standards would significantly affect companies such as Meta, Apple, and Google. They have been released for consultation and would still require parliamentary approval, and they are designed to cover a wide range of online services, including websites, photo storage services, and messaging apps.
The main focus of the proposed standards is preventing the dissemination of child sexual abuse material. eSafety Commissioner Julie Inman Grant says the industry must take meaningful action against this type of seriously harmful content, arguing that the previous reliance on self-regulation and industry-developed codes proved insufficient and lacked a strong commitment to identifying and removing known child sexual abuse material.
Australia’s attempts to hold tech giants accountable have faced challenges in the past. The passage of the Online Safety Act in 2021 was a groundbreaking move to make tech giants responsible for user-generated content on social media platforms, but enforcement of the new powers has at times been met with indifference. The eSafety Commissioner, for example, fined Elon Musk’s X for failing to remove child sexual abuse content from the platform; X ignored the payment deadline and has launched legal action to have the fine overturned.
If approved, the new standards would have industry-wide consequences. Meta, Apple, Google, and other technology giants would be compelled to take more extensive measures against deepfake child abuse material and pro-terror content, and non-compliance would expose them to penalties and legal action.
The proposal marks a proactive turn in regulating harmful online content. Its focus on deepfake child abuse material and pro-terror content reflects the urgent need to protect vulnerable individuals from exploitation and to ensure a safer online environment. If implemented, the standards would force technology giants to prioritize the removal and prevention of such content, significantly reshaping the digital landscape. Yet the difficulties of enforcement and of holding tech giants accountable remain, suggesting that a multifaceted approach will be needed to address the problem effectively and protect internet users.