
60% of U.K. IT pros back preventing cyberattacks with AI: learn 3 ways to adopt the tech

Published on 29/10/2024 Written by David Jani.

Artificial intelligence (AI) is often cited as a disrupter of cyberdefence, but that doesn’t tell the whole story. IT professionals are recognising many opportunities for AI to boost security and are increasingly adopting the technology to fight threats.


Artificial intelligence often conjures negative associations due to its involvement in enhancing existing security threats and newer types of attacks, such as deepfakes. However, preventing cyberattacks with AI is increasingly showing promise, and many IT professionals are starting to regard the burgeoning technology as more of an opportunity than a menace to cybersecurity. These were the findings uncovered in GetApp’s 2024 Data Security survey, which canvassed 4,000 respondents in 11 countries, featuring answers from 350 U.K. participants.*

AI protection against cyberattacks can help businesses improve monitoring capacity to face off against common security risks and AI-driven enhancements to cyberattacks. Our data shows that IT professionals are seeing the value of these developments and that businesses in the U.K. are investing accordingly. However, deploying an AI cybersecurity system requires companies to carefully plan for their adoption, taking into account use cases, data training, and guardrails to ensure everything works as intended.

Key insights
  • 60% of U.K. IT and data security professionals see AI as a cyberdefence enhancement rather than simply a threat.
  • 80% in the U.K. expect spending on cybersecurity to increase in 2025.
  • 47% of U.K. respondents are prioritising spending on AI tools to assist with cloud security.
  • 49% of U.K. participants already use AI tools for real-time monitoring of their network.

IT professionals look to AI for cybersecurity support

AI in cybersecurity appears to be bucking a trend of negativity seen across many other industries where autonomous systems have been applied, where the technology has triggered anxiety about how it may influence processes like job selection and its potential to replace professions entirely. [1] This is also true for IT and cybersecurity, where fears of AI-enhanced cyberthreats are a major concern for professionals.

Despite the gloomy picture in other areas, our data shows IT workers tend to be more optimistic about AI’s applications in cybersecurity. In total, 60% of our U.K. participants say they see AI as a tool to help them improve cyberdefences rather than a means for more dangerous cyberattacks.

Graph of U.K. IT professional takes on AI support for cybersecurity

Whilst this is a strong endorsement of AI's capacity to help prevent cyberattacks, the U.K. is still slightly behind the global average of 62%, suggesting some lingering wariness about the technology compared to other countries.

Nevertheless, many facets of AI, such as machine learning, neural networks, deep learning, and natural language processing (NLP), can provide useful tools for cyberdefence, especially in the capacity to train systems to detect threats automatically with increased precision and effectiveness.  

There are already signs that businesses wish to embrace AI rather than avoid it in cybersecurity. However, a quirk of these findings is that it could be the fear of malicious uses of AI for cyberattacks that’s driving the interest in better systems for network monitoring or automating threat detection powered by AI in the first place.

97% in the U.K. are making AI security an investment priority

There appears to be a good level of optimism about using AI in cybersecurity amongst U.K. respondents. However, there are many elements to this technology, so it is important to know what your business needs and which facets are driving this trend.

To better understand the trend, our analysis examines the core areas where companies plan to invest in AI for cybersecurity. Our U.K. sample showed that AI in cloud security was the main priority. However, other elements, such as network security and threat detection and analysis, also proved vital for future deployment.

AI investment priorities for British IT professionals

Cloud security more generally seems to be a key concern for U.K. firms and this was reflected in our first report, where vulnerabilities of cloud systems were the top concern for IT professionals going into 2025. Therefore it is not a great surprise to see investment in cloud solutions factoring so highly. 

Nevertheless, the trend towards AI spending is generally high in the U.K.: 97% of respondents work in companies that are prioritising spending in this area. Additionally, the industry expects IT security spending to remain high year-on-year, with the rate of increase for next year expected to dip only slightly compared to the last 12 months. It is likely that AI spending will factor into any increase in overall spending.

Cybersecurity spending differences between 2023, 2024 and projected for 2025

Monitoring and detection are the most used AI features in cybersecurity

Artificial intelligence and security can complement each other well. Security monitoring supported by AI can help teams detect more threats in advance of them doing damage. This can also significantly free up security teams to focus on other essential tasks rather than fire-fighting directly.

This advantage appears to be appreciated by IT professionals using AI tools in security software. In total, 90% of our sample is already using AI-supported tools for cybersecurity tasks, and among those who do, real-time network monitoring, malware detection, and threat intelligence integrations stand out as the most common.
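To illustrate the kind of real-time monitoring these tools perform, here is a minimal sketch of baseline-and-deviation anomaly detection. It is a simplified statistical stand-in for the machine-learning models commercial tools use, and the metric (bytes sent per session) and threshold are hypothetical, not taken from any product mentioned in the survey.

```python
import statistics

def train_baseline(values):
    """Learn the mean and standard deviation of a metric
    (here, bytes sent per session) from normal traffic."""
    return statistics.mean(values), statistics.stdev(values)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Simulated "normal" traffic observed during a training window.
normal_bytes = [480, 510, 495, 530, 470, 505, 520, 490, 515, 500]
baseline = train_baseline(normal_bytes)

print(is_anomalous(50_000, baseline))  # True: exfiltration-like spike
print(is_anomalous(505, baseline))     # False: within normal range
```

Real AI-supported monitoring layers many such signals (and learned, non-linear models) across hosts and time windows, but the principle is the same: learn what normal looks like, then surface deviations for analysts before damage is done.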

Practical uses of cybersecurity AI functions according to U.K. respondents

Safer software by design is being enforced in Britain

The U.K.'s National Cyber Security Centre (NCSC) introduced a new code of practice for software in May 2024. This pushes for minimum standards among software vendors to improve cybersecurity resilience. It is intended to reduce the risk and severity of attacks. [2]

Among the minimum standards of the Secure by Design code are the following:

  • Practices to ensure strong levels of security when software is first accessed (such as not providing generic passwords on installation or onboarding).
  • Stronger build environments to reduce vulnerabilities to cyberattacks.
  • A commitment to regular security maintenance during the lifetime of the product.
  • Sufficient communication with customers to ensure effective risk management.

3 ways to effectively implement AI into cybersecurity

While there are solid reasons to introduce AI into a cybersecurity plan in the near future, the process shouldn’t be rushed, even when time is of the essence. Integration of AI into a business’s cybersecurity defences can be a long process, and it’s important to factor this into planning.

A recent article by Gartner identifies four key focus areas to get companies’ IT systems ready to leverage AI. These include defining the use of AI tools, assessing deployment necessities, making data ‘AI-ready,’ and adopting AI principles. [3] To help achieve these steps and ready your firm for AI cybersecurity implementation, we’ve highlighted three tips below.

Plan around AI’s cyberthreat prevention strengths

The first step to any AI deployment is to set goals for its usage. Having clear goals for this deployment can help organise preparations for implementation and plan the use of staff and resources more effectively. 

It's better to prioritise areas where AI can help drive better protection of systems that need constant surveillance. As our data shows, this applies primarily to cloud security, network security, and threat detection. 

Another important consideration is to check how this will affect the organisation’s current tech stack. Based on changing business needs and market trends, businesses have to decide whether to opt for entirely new software or adopt unutilised features of an existing system. In some cases, businesses may be able to add AI features to an existing security system suite.

Prioritise human-in-the-loop (HITL) approaches

The use of machine learning and deep learning automation in cybersecurity isn’t quite as contentious as other areas where AI can be used, such as the application of generative AI in marketing. However, while monitoring and automation of cybersecurity can help IT teams save time and enhance protection, human intervention is necessary to avoid errors a machine could miss due to faulty programming or limited capabilities.

A human-in-the-loop approach can help ensure smooth operations even with most AI-managed tasks, especially when considering AI deployment and applying ethical AI principles. Human decision-making should still be able to override AI and allow a person to act on threat intelligence manually when needed. Additionally, businesses should set clear guardrails to avoid improper data use and comply with regulations.
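The routing logic described above can be sketched as follows. This is a hypothetical illustration, not any vendor's implementation: confident AI verdicts are acted on automatically, uncertain ones go to a human review queue, and the analyst's decision always overrides the model's.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source_ip: str
    ai_verdict: str    # "benign" or "malicious"
    confidence: float  # 0.0 - 1.0

@dataclass
class HITLPipeline:
    """Route low-confidence AI verdicts to a human; humans can override."""
    auto_threshold: float = 0.95
    review_queue: list = field(default_factory=list)
    blocked: list = field(default_factory=list)

    def handle(self, alert: Alert):
        if alert.ai_verdict == "malicious" and alert.confidence >= self.auto_threshold:
            self.blocked.append(alert.source_ip)  # confident: act automatically
        elif alert.ai_verdict == "malicious":
            self.review_queue.append(alert)       # uncertain: ask a human

    def human_decision(self, alert: Alert, block: bool):
        """A person can always override the AI's call."""
        self.review_queue.remove(alert)
        if block:
            self.blocked.append(alert.source_ip)

pipeline = HITLPipeline()
pipeline.handle(Alert("10.0.0.8", "malicious", 0.99))  # auto-blocked
uncertain = Alert("10.0.0.9", "malicious", 0.60)
pipeline.handle(uncertain)                              # queued for review
pipeline.human_decision(uncertain, block=False)         # analyst overrides

print(pipeline.blocked)       # ['10.0.0.8']
print(pipeline.review_queue)  # []
```

The key design point is the confidence threshold: it is the guardrail that decides when the system may act alone and when a person must be in the loop.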

To prepare for the use of AI, firms will need to provide sufficient security training courses that empower staff to use AI tools effectively. These courses should focus on how and where human intervention is needed, how to remain data compliant when using data for AI training, and a technical understanding of identifying bugs when managing AI.

Get data AI-ready 

Using AI in any capacity requires a source of quality data to train the system. This information needs to be organised and readable. This helps the AI systems carry out their tasks more accurately and reduces performance errors. There are a few key factors to focus on to get data AI-ready.

Data management and data governance are highly important to AI adoption. The data that can be accessed and used by a system must be checked carefully and organised into an error-free, readable, and uniform format for an AI system to put it to effective use.
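In practice, getting data into that error-free, uniform shape often starts with a validation pass like the sketch below. The field names and label set are hypothetical examples, assuming a simple threat-labelling dataset.

```python
def clean_records(raw):
    """Drop malformed rows and normalise fields so training data is uniform."""
    cleaned = []
    for row in raw:
        ip = row.get("ip", "").strip()
        label = str(row.get("label", "")).strip().lower()
        if not ip or label not in {"benign", "malicious"}:
            continue  # discard rows an AI system couldn't learn from
        cleaned.append({"ip": ip, "label": label})
    return cleaned

raw = [
    {"ip": " 10.0.0.1 ", "label": "Benign"},
    {"ip": "", "label": "malicious"},        # missing field: dropped
    {"ip": "10.0.0.3", "label": "unknown"},  # invalid label: dropped
]
print(clean_records(raw))  # [{'ip': '10.0.0.1', 'label': 'benign'}]
```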

Once data is prepared for use by an AI process, an important decision must be made: whether to use a system fed primarily with public datasets, to use the company's own in-house datasets exclusively, or to rely partially or entirely on proprietary datasets belonging to the software maker providing the AI system. Managing the data process in-house can be more challenging and expensive, but it also provides a more bespoke service for the user.

Protecting any data you share with the system is also highly important. In theory, AI-assisted cybersecurity software should take care of much of that, but there are still ways in which data could be compromised. For example, data poisoning can make a secure system more vulnerable to attacks (a factor that 31% of respondents noted as a top concern).
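One basic mitigation against data poisoning is a provenance check: refuse to train on any dataset whose contents differ from the version that was vetted. The sketch below is a minimal, hypothetical example of that idea using a checksum; the file name and hash (here, the SHA-256 of the bytes b"test") are illustrative only.

```python
import hashlib

TRUSTED_SHA256 = {
    # Hash recorded when the dataset was reviewed and approved.
    "threat_feed_v1.csv": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_dataset(name: str, content: bytes) -> bool:
    """Reject training data whose hash doesn't match the vetted version,
    a simple guard against tampered (poisoned) inputs."""
    expected = TRUSTED_SHA256.get(name)
    return expected is not None and hashlib.sha256(content).hexdigest() == expected

print(verify_dataset("threat_feed_v1.csv", b"test"))      # True: matches
print(verify_dataset("threat_feed_v1.csv", b"tampered"))  # False: rejected
```

Checksums only catch tampering after vetting; poisoning that occurs before data is approved requires deeper validation of the data itself.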

Looking for cybersecurity software? Check out our catalogue.


Methodology

*GetApp’s 2024 Data Security Survey was conducted online in August 2024 among 4,000 respondents in Australia (n=350), Brazil (n=350), Canada (n=350), France (n=350), India (n=350), Italy (n=350), Japan (n=350), Mexico (n=350), Spain (n=350), the U.K. (n=350), and the U.S. (n=500) to learn more about data security practices at businesses around the world. Respondents were screened for full-time employment in an IT role with responsibility for, or full knowledge of, their company's data security measures.

Sources

  1. AI hiring tools may be filtering out the best job applicants, BBC 
  2. Raising the cyber resilience of software 'at scale', NCSC.GOV.UK
  3. Get AI Ready: Action Plan for IT Leaders, Gartner


This article may refer to products, programs or services that are not available in your country, or that may be restricted under the laws or regulations of your country. We suggest that you consult the software provider directly for information regarding product availability and compliance with local laws.

About the author

David is a Content Analyst for the UK, providing key insights into tech, software and business trends for SMEs. A Cardiff University graduate, he loves travelling, cooking and F1.
