More countries appear likely to ban or restrict the use of popular artificial intelligence (AI) large language models (LLMs) in the near term.

This assessment was issued to clients of Dragonfly’s Security Intelligence & Analysis Service (SIAS) on 4 April 2023.

  • National regulatory bodies will probably temporarily ban or restrict the use of LLMs such as ChatGPT due to concerns about data privacy
  • Such restrictions would most probably be imposed by national regulators in EU member states, where data protection compliance is closely enforced
  • Government efforts to regulate AI tools are likely to trail behind the accelerating development of such technologies, resulting in negative impacts such as disinformation

In a first, Italy on 31 March 2023 temporarily banned the popular chatbot ChatGPT over concerns about data privacy. This comes amid rising pressure on its developer, OpenAI, to pause the development of its latest chatbot system. While bans or restrictions on the use of ChatGPT would likely cause some degree of disruption within organisations already using the service, we do not anticipate that any restrictions will be permanent or long-lasting.

Tools like ChatGPT have been in development for years. But the recent exponential growth in the volume of data sets available to train LLMs means that the pace of innovation is now accelerating, and appears to be outpacing the ability of countries to establish common standards, norms and behaviours over their use. This dynamic is highly likely to persist in the long term, in our view. And countries will probably increasingly find themselves in a predicament over balancing pro-innovation AI strategies with the regulation of such technology.

First regulatory move against ChatGPT

The ban on ChatGPT in Italy does not appear politically motivated. The Italian data protection authority (an independent body), rather than the government, has said that it temporarily banned the service due to the ‘unlawful collection of personal data [and] absence of systems for verifying the age of minors’. It said ChatGPT processed data in breach of privacy laws, referring to a specific data breach on 20 March 2023. OpenAI said that the breach led some users to ‘see titles from another active user’s chat history’, among other information.

Regulatory bodies in other countries appear likely to pursue similar bans or restrictions on the use of ChatGPT, mainly due to data protection issues. No existing legislation appears to specifically govern or regulate AI in countries or blocs like the EU. But national regulators do seem to be considering whether ChatGPT complies with the EU’s General Data Protection Regulation (GDPR) or comparable data protection legislation:

  • According to German business outlets on 3 April 2023, a spokesperson for the German Federal Commissioner for Data Protection said that a ‘similar procedure’ to the temporary ban on ChatGPT in Italy ‘is also possible in Germany’.
  • According to the BBC on 1 April 2023, the Irish data protection commission said that it was in discussions with its Italian counterparts to ‘understand the basis for their action’ and that it ‘will coordinate with all data protection authorities’ in relation to the ban.
  • The BBC also reported on 1 April that the UK Information Commissioner’s Office would ‘support’ developments in AI, but that it was ready to ‘challenge non-compliance’ with data protection laws.
  • That said, the UK government in a policy paper on 29 March 2023 said it has set out a ‘proportionate and pro-innovation regulatory framework’.

As such, any bans or restrictions would most likely be imposed by individual national regulators in the EU on a case-by-case basis. Such bans would probably cause some degree of disruption to organisations across a diverse range of sectors that have already integrated ChatGPT into their business functions or backends. And in our view, the ChatGPT data breach highlights the risks around the protection and confidentiality of highly sensitive corporate or customer information when such information is entered into chatbot services like ChatGPT.

Any bans are unlikely to be long-lasting. Instead, their duration would probably depend on the ability of OpenAI to meet regulators’ concerns, such as by adding age checks or updating its privacy policies. The Italian regulator said on 31 March that OpenAI had 20 days to address the allegations. Beyond seemingly immediate concerns over privacy and regulatory compliance, European governments do not appear to hold strong anti-AI sentiment at present, including over whether AI might replace jobs. For instance, Italy’s deputy prime minister reportedly called the recent ban ‘disproportionate’.

A losing race?

Still, legislative efforts to govern AI tools and chatbots specifically are likely to trail behind the accelerating pace of innovation of such technologies in the long term. A group of AI researchers and tech executives in late March reportedly argued for a six-month moratorium on the training of next-generation AI tools to give the industry time to set safety standards. The BEUC, a European consumer advocacy group, recently said EU efforts to roll out an ‘AI act’ to regulate such technologies could ‘take years’ to come into effect.

This gap is only likely to widen. Competition and investment in LLMs have intensified, notably from the likes of Microsoft and Google. A contact with deep knowledge of generative AI told us recently that while some AI tools are relatively mature, there is ‘massive potential’ for rapid advancement in the capabilities of LLMs in the next one to two years. Rising regulatory pressure on AI tools such as chatbots would, at most, probably only slow the development of new and more capable models.

Security risks and other challenges

Rapid innovation and advancements in such technologies are likely to present new challenges and risks for organisations over the coming years. OpenAI has been transparent and detailed about the risks of its latest model, GPT-4, in a paper it released on 23 March. It said that, despite improvements to its models and the ‘fine-tuning’ of GPT-4 since its launch, issues with the service include, or might include in future:

  • A lack of information reliability (such as ‘hallucination’ effects, and biased and unreliable content)
  • The production of harmful content (such as hate speech, instructions on planning attacks, and help finding websites selling illegal goods and services), especially before OpenAI put mitigations in place
  • The enabling of disinformation (primarily the production of plausible, persuasive and targeted content that the user intends to use to mislead)
  • Negative impacts on cybersecurity (such as GPT-4’s ability to draft social engineering content, including phishing emails, potentially ‘lowering the cost of certain steps of a cyber attack’ and in some cases ‘explaining’ vulnerabilities in source code)
  • Impacts on the economy (such as the automation of jobs and workforce displacement)
  • Impacts of accelerating AI technologies on societal risks and international stability
  • Overreliance (issues such as GPT-4’s ‘tendency to make up facts’ and its impact on users’ skill development)

We anticipate that the impacts of accelerating AI technologies on society, stability and the global economy, in particular, will increasingly become key strategic risk issues for national governments over the coming years.

Image: Mobile phone showing ChatGPT suspension in Italy on 1 April 2023. Photo by Donato Fasano via Getty Images.