Italy’s Data Protection Authority has announced that it is temporarily blocking the use of the artificial intelligence software ChatGPT in the country after a data breach. The move comes amid an investigation into a possible breach of EU data protection regulations.
The Italian watchdog has called for “provisional action” until the company behind the tool, OpenAI, can demonstrate that it respects users’ privacy, including a ban on processing Italian users’ data. While the company has not commented on the action, it did earlier announce that it had taken the platform offline on 20 March to fix a bug.
Italy’s data protection agency said that ChatGPT users were given insufficient information about how their personal data is collected and processed, and noted that a data breach involving user conversations and payment information had been reported on 20 March.
The agency also expressed concern over the lack of age verification for ChatGPT users, which could expose children to inappropriate content. OpenAI has been given 20 days to set out the measures it will take to comply with Italy’s data rules, or it could face a penalty of up to €20 million ($21.8 million) or 4% of its annual global turnover.
ChatGPT has become a global phenomenon, impressing users with its ability to craft convincing written content, but concerns have also emerged about its rapid spread and what large-scale uptake of such tools could mean for society. The European Union is finalizing rules on the use of AI in the bloc, even as generative AI tools available to the public continue to proliferate. Earlier this month, OpenAI released GPT-4, a new and more powerful version of the technology underpinning ChatGPT. And this week, some of the biggest names in tech, including Elon Musk, called for AI labs to pause training of the most powerful AI systems for at least six months, citing risks to society and humanity.