OpenAI’s ChatGPT chatbot blocked in Italy over privacy concerns
Italy’s data protection watchdog on Friday issued an immediate ban on access to OpenAI’s popular artificial intelligence chatbot, ChatGPT, citing alleged privacy violations.
In a statement, the Italian National Authority for Personal Data Protection said that ChatGPT had “suffered a data breach on March 20 concerning users’ conversations and payment information of subscribers to the paid service”.
The decision, which comes into “immediate effect,” will result in “the temporary limitation of the processing of Italian users’ data vis-à-vis [ChatGPT’s creator] OpenAI,” the watchdog said.
ChatGPT was launched in November 2022 and has since become hugely popular, impressing users with its ability to explain complex things clearly and succinctly, write in different styles and languages with a human-sounding tone, create poems and even pass exams.
ChatGPT can also be used to write computer code, even by users without technical knowledge.
The Italian data regulator, however, faulted OpenAI for failing to provide an information notice to users whose data it collects. It also took issue with “the lack of a legal basis justifying the collection and mass storage of personal data with the aim of ‘training’ the algorithms that run the platform”.
In addition, while the chatbot is intended for users aged over 13, “the Authority emphasises that the absence of any filter to verify the age of users exposes minors to responses absolutely not in accordance with their level of development”.
The watchdog is now asking OpenAI to “communicate within 20 days the measures undertaken” to remedy this situation – or risk a fine of up to 4 per cent of its annual worldwide turnover.
The announcement comes as the European police agency Europol warned on Monday that criminals were ready to take advantage of AI chatbots such as ChatGPT to commit fraud and other cybercrimes.
From phishing to misinformation and malware, the rapidly evolving capabilities of chatbots are likely to be quickly exploited by those with malicious intent, Europol said in a report.
Elon Musk and hundreds of global experts also warned this week that AI systems pose “profound risks to society and humanity” and called on companies to halt further development of the technology for at least six months.