This assessment was issued to clients of Dragonfly’s Security Intelligence & Analysis Service (SIAS) on 4 April 2023.
On 31 March 2023, Italy became the first country to temporarily ban the popular chatbot ChatGPT, citing concerns about data privacy. This comes amid rising pressure on its developer, OpenAI, to pause the development of its latest chatbot system. While bans or restrictions on the use of ChatGPT would likely cause some disruption within organisations already using the service, we do not anticipate that any restrictions would be permanent or long-lasting.
Tools like ChatGPT have been in development for years. But the recent exponential growth in the volume of data sets available to train LLMs means that the pace of innovation is accelerating and appears to be outpacing countries' ability to establish common standards, norms and behaviours over their use. This dynamic is highly likely to persist in the long term, in our view. And countries will probably increasingly find themselves in a predicament over balancing pro-innovation AI strategies with regulating such technology.
The ban on ChatGPT in Italy does not appear politically motivated. The Italian data protection authority (an independent authority), rather than the government, has said that it temporarily banned the service due to the ‘unlawful collection of personal data [and] absence of systems for verifying the age of minors’. It said ChatGPT processed data in breach of privacy laws, referring to a specific data breach on 20 March 2023. OpenAI said that the breach led some users to ‘see titles from another active user’s chat history’, among other information.
Regulatory bodies in other countries appear likely to pursue similar bans or to restrict the use of ChatGPT, mainly over data protection issues. No existing legislation in countries or blocs such as the EU appears to specifically govern or regulate AI. But national regulators do seem to be considering whether ChatGPT complies with the EU's General Data Protection Regulation (GDPR) or comparable data protection legislation.
As such, any bans or restrictions would most likely be imposed by individual national regulators in the EU on a case-by-case basis. Such bans would probably cause some disruption to organisations across a diverse range of sectors that have already integrated ChatGPT into their business functions or backends. And in our view, the ChatGPT data breach highlights the risks to the protection and confidentiality of highly sensitive corporate or customer information when inputting it into chatbot services like ChatGPT.
Any bans are unlikely to be long-lasting. Instead, their duration would probably depend on OpenAI's ability to address regulators' concerns, such as by adding age checks or updating privacy policies. The Italian regulator said on 31 March that OpenAI has 20 days to address the allegations. Beyond immediate concerns over privacy and regulatory compliance, European governments do not appear to hold strongly anti-AI sentiment at present, including over whether AI might replace jobs. For instance, Italy's deputy prime minister reportedly called the recent ban 'disproportionate'.
Still, legislative efforts to govern AI tools and chatbots specifically are likely to trail behind the accelerating pace of innovation of such technologies in the long term. A group of AI researchers and tech executives in late March reportedly argued for a six-month moratorium on the training of next-generation AI tools so the industry has time to set safety standards. The BEUC, a European consumer advocacy group, recently said EU efforts to roll out an ‘AI act’ to regulate such technologies could ‘take years’ to come into effect.
This gap is only likely to widen. Competition and investment in LLMs have intensified, notably by the likes of Microsoft and Google. A contact with deep knowledge of generative AI told us recently that while some AI tools are relatively mature, there is 'massive potential' for rapid advancement in the capabilities of LLMs in the next one to two years. Rising regulatory pressure on AI tools such as chatbots would, at most, probably only slow the development of new and more capable models.
Rapid innovation and advancements in such technologies are likely to present new challenges and risks for organisations over the coming years. OpenAI has been transparent and detailed about the risks of GPT-4, the latest model underpinning its chatbot, in a paper it released on 23 March. It said that despite improvements to its models and the 'fine-tuning' of GPT-4 since launch, the service retains several issues and might develop others in the future.
We anticipate that the impacts of accelerating AI technologies on society, stability and the global economy, in particular, will increasingly become key strategic risk issues for national governments over the coming years.
Image: Mobile phone showing ChatGPT suspension in Italy on 1 April 2023. Photo by Donato Fasano via Getty Images.