AI will increase the severity of cyberattacks, says the NCSC
Written by Rebecca Uffindell Thu 25 Jan 2024
Artificial intelligence (AI) will ‘almost certainly’ amplify the frequency and severity of cyberattacks in the next two years, according to a new report by the UK’s National Cyber Security Centre (NCSC).
In the report ‘The near-term impact of AI on the cyber threat’, the NCSC said AI is poised to enhance reconnaissance and social engineering tactics. This will result in cyberattacks that are not only more effective but more efficient and challenging to detect.
“To 2025, Generative AI (GenAI) and large language models (LLMs) will make it difficult for everyone, regardless of their level of cybersecurity understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing, or social engineering attempts,” said the NCSC.
The NCSC added that GenAI can already create convincing interactions with victims, including lure documents free of the grammatical and spelling errors that typically expose phishing attempts.
Daniel Hofmann, CEO at cloud-based security company Hornetsecurity, expressed concern regarding the escalating threat of these attacks.
“GenAI models empower criminals with the ability to constantly optimise their approaches through machine learning, ultimately enhancing the success rates of phishing attempts,” added Hofmann.
Hornetsecurity’s Cyber Security Report 2024 revealed over a third (36.4%) of email traffic is unwanted, with phishing attacks increasing from 39.6% to 43.3%.
How Will Threat Actors Use AI in Cyberattacks?
The NCSC found AI will heighten the impact of cyberattacks in the UK by enabling faster and more efficient analysis of exfiltrated data.
AI’s rapid data summarisation capability may empower threat actors to identify high-value assets swiftly, enhancing the value and impact of cyberattacks in the next two years.
By utilising AI, individuals with limited expertise can execute more sophisticated cyberattacks, lowering the barrier to entry for novice cybercriminals.
Hitesh Bansal, Country Head (UK & Ireland) and Senior Partner for Cybersecurity & Risk Services at Wipro, said unskilled actors will likely leverage AI-powered tools for automated vulnerability scanning, exploit matching, and exploit deployment.
“The issue with AI is using LLMs: ‘spear phishing’ becomes more sophisticated yet easier with AI’s assistance. This increases attack success rates and lowers barriers to entry for cybercrime,” added Bansal.
AI is also expected to aid in developing malware and software exploiting system vulnerabilities, making existing techniques more efficient. It may automate vulnerability research, allowing threat actors to discover and exploit weaknesses faster. AI could also improve the efficiency of lateral movement techniques, facilitating cyber attackers in navigating compromised systems.
The potential of AI to generate undetectable malware poses a challenge for traditional security measures: malware that learns from detection patterns and adapts accordingly is harder to identify and block. As AI-enabled tools become more widely available, concerns have been raised about the increased complexity and effectiveness of the cyberattacks they enable.
The NCSC also highlighted a shrinking gap between security updates and threat actors exploiting unpatched software, intensifying the challenge for network managers to address vulnerabilities promptly. AI-driven reconnaissance is expected to further accelerate this challenge, pinpointing vulnerable devices more rapidly and accurately.
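On the defensive side, the shrinking window between a patch release and its exploitation makes routine inventory checks more urgent. A minimal sketch of that idea follows; the advisory data and package names are illustrative stand-ins, not a real vulnerability feed:

```python
# Minimal patch-gap check: flag installed packages whose versions are
# below the first fixed release named in an advisory. The advisory and
# inventory data here are hypothetical, for illustration only.

def parse_version(v):
    """Turn a dotted version string like '2.4.1' into (2, 4, 1)."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical advisories: package -> first fixed version
advisories = {
    "examplelib": "2.4.2",
    "demotool": "1.9.0",
}

# Hypothetical inventory of installed software
installed = {
    "examplelib": "2.4.1",
    "demotool": "1.9.3",
}

def vulnerable(installed, advisories):
    """Return packages still running a version below the first fix."""
    return [
        name
        for name, version in installed.items()
        if name in advisories
        and parse_version(version) < parse_version(advisories[name])
    ]

print(vulnerable(installed, advisories))  # ['examplelib']
```

In practice the advisory side would come from a curated feed rather than a hard-coded dictionary, but the core comparison — installed version against first-fixed version — is the same.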
The Guardian reported the Information Commissioner’s Office found 706 ransomware incidents in the UK in 2022, compared to 694 in 2021.
Advanced Uses of AI Limited by Resources and Expertise
The NCSC said proficiency, equipment, time, and financial resources are crucial for leveraging advanced uses of AI in cyber operations.
“Only those who invest in AI, have the resources and expertise, and have access to quality data will benefit from its use in sophisticated cyberattacks to 2025,” said the NCSC.
The NCSC also emphasised that AI’s impact on cyber threats will be partly offset by its role in strengthening cybersecurity resilience through improved detection and security-by-design enhancements.
The effectiveness of AI-generated malware is also contingent on the quality of the exploit data used to train the model: to create evasive malware, the AI needs to learn from high-quality data about exploiting vulnerabilities.
The NCSC noted highly capable state actors are in the best position among cyber threat actors to harness the potential of AI in advanced cyber operations.
“There is a realistic possibility that highly capable states have repositories of malware that are large enough to effectively train an AI model for this purpose,” said the NCSC.
The organisation stressed further investigation is needed to gauge the full extent to which AI advancements in cybersecurity will mitigate threat impacts.
Industry Figures Offer Businesses Advice
Looking to 2025 and beyond, the NCSC expects increasing commoditisation of AI-enabled capabilities in both criminal and commercial markets.
Senior Vice President and General Manager of Dell Technologies UK, Steve Young, said AI has the potential to deliver huge productivity gains for enterprises, of up to 30% for most organisations.
“Anybody can now be an AI innovator with the right data and processing power. But therein lies the rub; as GenAI’s capability expands, it creates equal opportunities for both enterprises and bad actors,” added Young.
Tom Gorup, Vice President of Security Services at content delivery network Edgio, said businesses should ramp up defences against ransomware attacks. He highlighted the importance of businesses safeguarding against attacks that not only encrypt data but also exfiltrate and demand ransom.
“They should also look to upskill employees in social engineering and spotting phishing attacks to reduce points of entry,” added Gorup.
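The red flags employees are trained to spot — unrecognised sender domains, urgency language, mismatched links — can also be automated as a crude first-pass filter. A sketch under that assumption follows; the keyword list, trusted domains, and scoring are illustrative, not a production rule set:

```python
import re

# Illustrative heuristics only -- not an exhaustive or production rule set.
URGENT_PHRASES = ("verify your account", "password will expire", "act now")
TRUSTED_DOMAINS = {"example.com"}

def phishing_score(sender, subject, body):
    """Count simple heuristic indicators; higher scores warrant review."""
    score = 0
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        score += 1  # mail from an unrecognised domain
    text = f"{subject} {body}".lower()
    score += sum(1 for phrase in URGENT_PHRASES if phrase in text)
    # A link pointing somewhere other than the sender's domain is a classic lure
    if re.search(r"https?://\S+", body) and domain not in body:
        score += 1
    return score

print(phishing_score("it@mail-example.net",
                     "Urgent: verify your account",
                     "Click https://login.mail-example.net now"))  # 2
```

A scorer like this would only triage mail for human review — exactly the judgement the quoted advice says staff should be trained to apply — rather than replace it.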
Suid Adeyanju, CEO of IT security services company Riversafe, also stressed the necessity for thorough training programmes to make staff aware of the dangers posed by cyber threats.
“Cybercriminals are persistently exploiting ransomware and, powered by AI advancements, these attacks pose a detrimental risk to businesses that are not sufficiently prepared. In response to these escalating attacks, organisations must increase their threat intelligence,” added Adeyanju.
Adeyanju suggested a comprehensive strategy that blends advanced technologies and heightened observability.
Dan Lattimer, Vice President at threat prevention company Semperis, said stopping destructive attacks requires a change in the behaviour of organisations trying to protect themselves.
“Collectively, as an industry we need to constantly uplevel the cyber discussion, bring it to the boardroom. Invest more in people and best practices. Security is a sport played, not watched,” said Lattimer.
The Role of Policymakers and Stakeholders
Pavel Goldman-Kalaydin, Head of AI and Machine Learning at identity verification platform Sumsub, emphasised the critical role of regulation in mitigating AI risks, highlighting the need for oversight from governments and stakeholders.
“Governments and policymakers must collaborate closely with private businesses that are truly on the frontline in combating AI-related illicit activities, to establish a robust regulatory framework,” added Goldman-Kalaydin.
Ivana Bartoletti, Chief Privacy and AI Governance Officer at IT consulting services company Wipro, expressed a similar sentiment.
“It is crucial that we build an alliance between technical experts and policymakers so we can develop the future of AI in threat hunting and beyond, and support organisations in the fight to protect their assets,” said Bartoletti.
Jake Moore, Global Cybersecurity Advisor at cybersecurity company ESET, said the ultimate goal is to educate people about these new attacks.
“The volume of such attacks will inevitably increase but until we find a robust and secure solution to this evolving problem, we need to act now to help teach people and businesses how to protect themselves with what is available,” added Moore.
The NCSC report coincided with the UK Government’s release of a draft Code of Practice on cybersecurity governance. These measures seek to elevate cybersecurity as a primary concern for businesses, placing it on par with other threats such as financial and legal risks.
In December, EU policymakers provisionally agreed on regulations for the AI Act, aiming to ensure the safe deployment of AI.
The regulation aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI. It also seeks to promote innovation and position Europe as a leader in the field.
However, there have been calls for a concerted effort between Government, industry, and academia to fully understand and implement AI in business, following the AI Safety Summit in November.
The Summit concluded with an agreement that governments and tech companies should share responsibility for safety testing frontier AI models.
In November, France, Germany, and Italy reached an agreement on the regulation of AI. The three governments favour mandatory self-regulation for both large and small AI providers within the European Union, enforced through adherence to predefined codes of conduct.