Italian regulators have taken the first swipe at AI by blocking the hugely popular ChatGPT. Italy's data protection agency issued the temporary but immediate ban amid growing privacy concerns. But Italy isn't the only one getting twitchy about the tech. AI ethics advocacy groups and industry experts are raising concerns over ChatGPT's meteoric rise. Here's what we know so far.
On March 31, Garante, Italy's national data protection authority, ordered an immediate temporary limitation on the processing of Italian users' data by OpenAI, the US-based company that develops and manages the ChatGPT platform. At the same time, the regulator opened an investigation into the company for possible GDPR violations.
Italy has become the first Western nation to ban the increasingly popular artificial intelligence tool. ChatGPT is a large language model developed by OpenAI, designed to generate human-like responses to a wide range of text-based prompts.
Italy's ban on ChatGPT came in the wake of a data breach that exposed personal data belonging to users of the service.
ChatGPT data breach
On March 20, ChatGPT experienced a bug in an open-source library that leaked personal data including details of users' chat history. Chat titles and the first message of newly-created conversations could be visible to other users.
On further investigation, OpenAI discovered that the same bug may also have exposed the payment information of ChatGPT Plus subscribers.
Within a specific nine-hour window, it may have been possible for active users to see other active users' personal and payment details.
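The actual flaw was in an open-source library OpenAI relied on, and its details aren't reproduced here. As a simplified, hypothetical sketch of how this class of bug works, consider a response cache whose key omits the requesting user: under concurrent traffic, one user's cached data can be served to another. The function names and payloads below are illustrative, not OpenAI's code.

```python
# Hypothetical illustration of a cross-user data leak caused by a cache
# that is keyed only by request path, not by the requesting user.
cache = {}

def handle_request(user, path):
    # BUG: the cache key omits the user, so entries are shared across users.
    key = path
    if key in cache:
        return cache[key]  # may return data cached for a different user
    payload = f"chat titles for {user}"  # stand-in for a per-user lookup
    cache[key] = payload
    return payload

def handle_request_fixed(user, path):
    # Fix: scope each cache entry to the user who owns the data.
    key = (user, path)
    if key in cache:
        return cache[key]
    payload = f"chat titles for {user}"
    cache[key] = payload
    return payload
```

With the buggy version, if "alice" requests `/history` first, a later request from "bob" for the same path is answered from the shared cache entry and returns Alice's chat titles; the fixed version keeps each user's entries separate.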
OpenAI worked quickly to resolve the issue and reached out to notify affected users. They also issued a public apology on their blog giving a full technical breakdown of what happened and the actions the company took to address it.
Is ChatGPT guilty of GDPR violations?
In a press release, the Italian watchdog highlights three ways in which ChatGPT fails to meet European data protection requirements.
Firstly, ChatGPT provides no information on data collection to the users (data subjects); secondly, there is no legal basis for the collection and processing of such data; and lastly, the lack of an age verification method exposes children to generated responses that may not be age-appropriate.
While there is no age verification in place before accessing the platform, OpenAI's Terms of Use state that users must be at least 13 years old, and any user under 18 must have a parent or legal guardian's permission. In Garante's eyes, however, this notice alone isn't enough to satisfy the GDPR requirement.
OpenAI has disabled access for Italy-based users but has publicly disagreed with the Italian regulator's actions. In a written response to POLITICO, an OpenAI spokesperson stated: “We believe we comply with GDPR and other privacy laws.”
OpenAI is based in California but has a designated representative in the European Economic Area. The company must notify the Italian data protection authority within 20 days of the measures taken to comply or face a GDPR fine of up to €20 million or 4% of its total worldwide annual turnover (whichever is higher).
Growing concerns over AI
GDPR violations aren't the only trouble facing ChatGPT. The Center for Artificial Intelligence and Digital Policy has urged the Federal Trade Commission (FTC) to halt the commercial release of further ChatGPT models until more safeguards are implemented.
The tech ethics group filed a formal complaint declaring GPT-4, OpenAI's latest release to the consumer market, “biased, deceptive, and a risk to privacy and public safety”.
At the same time, an open letter issued by the Future of Life Institute and signed by AI experts and tech heavyweights called for a six-month pause in AI developments on systems more powerful than what is already available.
The Future of Life Institute is a non-profit that works to reduce existential risk from powerful technologies. The letter gained more than 5500 signatures, including the likes of Elon Musk, Steve Wozniak, Pinterest co-founder Evan Sharp, and Stability AI CEO Emad Mostaque.
The letter states that because of AI's potential impact on society and the profound risks it poses, advanced AI should be “planned for and managed with commensurate care and resources.”
But it argues that this level of care is not being taken.
Instead, AI labs have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Read the full open letter here.
What's next for AI and ChatGPT?
The open letter is not calling for a blanket ban on AI development but rather for AI labs to step back and catch up with the systems already deployed. The pause should be used by AI labs and independent experts to develop and implement shared safety protocols that ensure advanced AI is “safe beyond a reasonable doubt”.
Will OpenAI take heed? Will the FTC pay attention? Or will money-making win out? Will other countries follow in Italy's footsteps and ban ChatGPT? That all remains to be seen.
Read more: Privacy and Security Risks of ChatGPT
Ruby is a full-time writer covering everything from tech innovations to SaaS, Web 3, and blockchain technology. She is now turning her virtual pen to the world of data privacy and online anonymity.