News Hub

Beware of ‘prompt injection’ attacks on AI chatbots, warns UK cybersecurity agency

Written by Thu 31 Aug 2023

Chatbots could be manipulated by threat actors to perform harmful tasks and cybersecurity risks through a method called ‘prompt injection’, warned the UK’s cybersecurity agency.

The National Cyber Security Centre (NCSC) has warned that artificial intelligence (AI) chatbots scripts could be overridden through ‘prompt injection’ attacks.

“As LLMs are increasingly used to pass data to third-party applications and services, the risks from malicious prompt injection will grow,” said the NCSC in a blog post.

Prompt injection attacks occur when a user creates an input designed to make a large language model (LLM) behave in an unintended manner. This might cause the LLM to produce offensive content, disclose confidential information, or enable scams and data theft.

An AI chatbot used by a bank could be manipulated through a sophisticated prompt injection attack to make unauthorised transactions.

“[These attacks] can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind,” added the NCSC.

Mitigating prompt injection attacks

The NCSC said organisations should be aware of the risks associated with machine learning components in order to design systems in a way that prioritises security.

A rules-based system could be applied on top of a machine learning model to prevent prompt injection attacks. The NCSC also recommended basic cybersecurity principles, including supply chain security, user education, and applying appropriate access controls.

Due to the nascency of LLMs, the NCSC said organisations should treat these services in the same way they would use a product or code library in beta stage.

AI tool skepticism 

As AI-powered tools gain traction, concerns have been raised about the associated cybersecurity risks.

Oseloka Obiora, Chief Technology Officer at cybersecurity firm RiverSafe, told Reuters that senior executives should be wary of latest AI trends. Rushing into deploying AI could lead to ‘disastrous consequences’ if the necessary security checks have not been made.

Governments across the world are addressing these concerns through regulations and strategic supervision.

Hungry for more tech news?

Sign up for your weekly tech briefings!

Written by Thu 31 Aug 2023

Send us a correction Send us a news tip