

Most IT Pros See AI as a Cyberdefense Ally, but Only When Firms Follow 3 Key Steps

Oct 1, 2024

Artificial intelligence (AI) is often talked about as a cybersecurity threat. However, it can be an integral part of cyberdefense and IT professionals are taking note of the opportunities.

David Jani

What we'll cover

Artificial intelligence in cybersecurity can be both an emerging threat and an opportunity. Nevertheless, the overall sentiment among IT professionals on using AI in cybersecurity is positive rather than negative, with respondents highlighting benefits such as network traffic monitoring and threat detection capabilities. These findings featured amongst many found in GetApp’s 2024 Data Security Survey, which studied the responses of 4,000 respondents in 11 countries and featured answers from 500 U.S. participants.*

An increasing number of firms are now considering leveraging AI in cybersecurity. Various signs indicate their readiness to embrace these systems, including their intention to invest substantially in them over the coming months. Our research shows that professionals see clear value in using AI to secure their networks, cloud resources, and data. However, IT leaders should plan this adoption carefully, taking into account the intended use cases, preparing the datasets that will be used, and providing sufficient human guardrails.

Key insights

  • 64% of U.S. IT and data security professionals see AI as a greater ally than a threat.

  • 80% in the U.S. expect spending on cybersecurity to increase in 2025.

  • Network security and cloud security are tied as the biggest AI investment priorities, each cited by 45% of U.S. respondents.

  • 48% of U.S. participants use AI tools for real-time monitoring of their networks.

Most IT professionals see AI as a help rather than a hindrance

Flashy examples such as deepfake impersonation of staff, more potent phishing emails, and faster discovery of vulnerabilities give AI technology a bad reputation in the world of cybersecurity. However, this doesn’t take the full picture into account.

Various facets of AI, such as machine learning, neural networks, natural language processing (NLP), and deep learning, can present opportunities for cyberdefense. They can also prove highly useful for automating tasks and helping systems identify threats more precisely.
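To make this concrete, the kind of statistical threat detection these techniques enable can be sketched in a few lines. This is a minimal illustration, not a production method: the traffic figures, threshold, and function name below are invented for the example, and real systems use far richer features and learned models.

```python
from statistics import mean, stdev

def flag_anomalies(request_counts, threshold=2.0):
    """Flag time windows whose request volume deviates sharply from the baseline."""
    mu = mean(request_counts)
    sigma = stdev(request_counts)
    return [
        i for i, count in enumerate(request_counts)
        if sigma > 0 and abs(count - mu) / sigma > threshold
    ]

# Hypothetical requests-per-minute data; the spike at index 5 could be a scan or DDoS.
traffic = [120, 115, 130, 125, 118, 900, 122, 119]
print(flag_anomalies(traffic))  # → [5]
```

A real AI-assisted tool would replace the z-score with a trained model, but the principle is the same: learn a baseline of normal behavior, then surface deviations for investigation.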

This view of AI as an opportunity rather than a danger in security is clearly seen amongst the cybersecurity participants of our survey. In the U.S., for instance, 64% of participants see greater potential for artificial intelligence to boost business cybersecurity defenses than to solely create new vulnerabilities or enhance attacks.

[Chart: share of respondents who see AI as more of an aid than a threat to cyberdefense]

There was a similar pattern among global respondents, too, with 62% seeing AI as a valuable tool for cyberdefense against 38% considering it simply a threat. Notably, the U.S. sample feels slightly more favorably towards AI than its global peers.

These signs indicate businesses’ desire to embrace AI rather than seeing it as the cause of worse threats. That said, fear of AI’s malicious uses could itself be driving interest in the more robust network monitoring and automation capabilities that AI-powered security tools provide.

98% in the U.S. have at least one AI security investment priority

With the findings suggesting that there is optimism about the potential of AI in cybersecurity defense, it’s worth unpacking what inspires such confidence. As already mentioned, AI is multifaceted and can influence many areas of cyberdefense. This affects how companies choose systems and prioritize their security spending accordingly.

We looked at the areas companies aim to prioritize when implementing AI systems in cybersecurity. The top investment priorities identified by our U.S. sample center on monitoring across essential areas such as networks, cloud services, and email. IT professionals also indicate that their companies see broader AI-driven threat detection as a necessity.

[Chart: top AI cybersecurity investment priorities among U.S. respondents]

This behavior is widespread: 98% of our U.S. sample identified at least one AI investment priority, with only 2% choosing none of the available options. That figure sits just above the global average of 97%.

Another element that will affect AI investment is spending dedicated to cybersecurity more generally. Year-on-year, this spending is currently high and expected to remain high into 2025. However, as AI moves past the adoption stage, spending growth is expected to stabilize over the next 12 months, though it isn’t expected to drop off.

[Chart: expected changes in cybersecurity spending into 2025]

The ongoing commitment to security spending is a worthwhile step to get ahead of some of the considerable threats our sample described. The desire to raise and maintain spending is also likely driven by a need to stay compliant with security regulations, avoid falling behind competitors, and capitalize on the investment priorities noted above.

Security monitoring stands out as a top benefit of AI assistance

You can’t be everywhere and see everything all the time. However, AI security monitoring can reduce some of that workload. We’ve already seen that AI investment is primarily focused on tools to assist with security monitoring. This emphasis on monitoring and detection appears to be reflected amongst those who have already adopted AI-powered security systems.

In total, 90% of those surveyed in the U.S. use AI-assisted cybersecurity tools in some capacity, most of them for some form of threat detection. Real-time monitoring stood out as the top use amongst firms, with advanced malware detection also proving popular.

[Chart: most-used AI-assisted security features among U.S. companies]

One interesting variation is the popularity of task automation. This AI use case overlaps with monitoring to some degree; however, automation can also be set up to remove malicious activity on detection and spare security teams the busywork of analyzing and reporting on log data.

3 tips for implementing AI in cybersecurity

While there are solid reasons to introduce AI into a cybersecurity plan in the near future, the process shouldn’t be rushed, even when time is short.

Integration of AI into a business’s cybersecurity defenses can be a long process and it’s important to factor this into planning.

A recent article by Gartner identifies four key areas of focus for getting a firm’s IT ready to leverage AI: defining the use of AI tools, assessing deployment necessities, making data ‘AI-ready,’ and adopting AI principles. [1] To help achieve these steps, we’ve highlighted three tips below to ready your firm for AI cybersecurity implementation.

1. Plan around AI’s cyberthreat prevention strengths

The first step in any AI deployment is to set clear goals for its usage. These goals help organize preparations for implementation and allow staff and resources to be planned more effectively.

It’s best to prioritize areas where AI can drive better protection of systems that need constant surveillance. As seen in our data, this primarily applies to network security, cloud security, and threat detection.

Another important consideration is how AI will affect the organization’s current tech stack. Based on changing business needs and market trends, businesses must decide whether to adopt entirely new software or switch on unutilized features of an existing system. In many cases, AI capabilities can be added to an existing security suite through new features or upgraded tiers.

2. Prioritize human-in-the-loop (HITL) approaches

The use of machine learning and deep learning automation in cybersecurity isn’t as contentious as other applications of AI, such as generative AI in marketing. However, while monitoring and automation can help IT teams save time and enhance protection, human intervention remains necessary to catch errors a machine could miss due to faulty programming or limited capabilities.

A human-in-the-loop approach can help ensure smooth operations even when most tasks are AI-managed, especially when planning AI deployment and applying ethical AI principles. Human decision-making should always be able to override the AI, allowing a person to act on threat intelligence manually when needed. Additionally, businesses should set clear guardrails to avoid improper data use and stay compliant with regulations.
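One common way to structure such an override is to gate high-impact actions behind analyst approval while letting low-risk responses run automatically. The sketch below illustrates the idea; the risk threshold, alert fields, and function names are hypothetical, not drawn from any specific product.

```python
# Hypothetical risk threshold for illustration: actions scored below it run automatically.
AUTO_APPROVE_THRESHOLD = 0.5

def respond_to_threat(alert, approve_fn):
    """Route an AI-recommended response: automate low-risk actions,
    escalate high-risk ones to a human analyst via approve_fn."""
    action = alert["recommended_action"]
    if alert["risk_score"] < AUTO_APPROVE_THRESHOLD:
        return f"auto-executed: {action}"
    # Human in the loop: a person can confirm or override the AI's recommendation.
    if approve_fn(alert):
        return f"analyst-approved: {action}"
    return f"overridden by analyst: {action} blocked"

alert = {"recommended_action": "isolate host db-07", "risk_score": 0.9}
print(respond_to_threat(alert, approve_fn=lambda a: False))
# → overridden by analyst: isolate host db-07 blocked
```

The key design choice is that the disruptive action (isolating a host) never executes without a human decision, while routine low-risk responses still benefit from automation.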

To get ready for AI use, firms will need to provide sufficient security training that empowers staff to use AI tools effectively. Training should cover how and where human intervention is needed, how to remain compliant when using data for AI training, and the technical knowledge needed to identify bugs when managing AI systems.

3. Get data AI-ready 

Using AI for effective results requires users to input quality data into the system. This information needs to be organized and readable to help the AI system carry out its tasks more accurately and reduce performance errors. There are a few key factors to focus on to get data AI-ready.

Data management and data governance are highly important to AI adoption. Any data a system can access and use must be carefully checked and organized into an error-free, readable, and uniform format before it can be put to effective use.
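In practice, this kind of preparation often means validating each record against a required schema and normalizing it into a uniform shape, discarding entries that can't be repaired. The sketch below shows the idea on security log records; the field names and schema are invented for illustration.

```python
from datetime import datetime, timezone

# Hypothetical minimal schema a training record must satisfy.
REQUIRED_FIELDS = {"timestamp", "source_ip", "event"}

def normalize_record(raw):
    """Return a cleaned record in a uniform schema, or None if unusable."""
    if not REQUIRED_FIELDS <= raw.keys():
        return None  # missing fields make the record unusable for training
    try:
        ts = datetime.fromisoformat(raw["timestamp"]).astimezone(timezone.utc)
    except ValueError:
        return None  # unparseable timestamps are dropped rather than guessed
    return {
        "timestamp": ts.isoformat(),            # uniform UTC ISO-8601 timestamps
        "source_ip": raw["source_ip"].strip(),  # trim stray whitespace
        "event": raw["event"].strip().lower(),  # uniform casing for event labels
    }

raw_logs = [
    {"timestamp": "2024-08-01T12:00:00+00:00", "source_ip": " 10.0.0.5 ", "event": "LOGIN_FAIL"},
    {"timestamp": "not-a-date", "source_ip": "10.0.0.6", "event": "login_ok"},
]
clean = [r for r in (normalize_record(x) for x in raw_logs) if r]
print(len(clean))  # → 1; the malformed record is dropped
```

Auditing what gets dropped (and why) is part of governance too: silently discarding records can bias what the AI system learns.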

Once data is prepared for AI use, there is an important decision to make about where that data comes from. Companies can use a system fed primarily with public datasets, rely exclusively on their own in-house datasets, or partially or entirely use proprietary datasets belonging to the software maker providing the AI system. Managing the data process in-house can be more challenging and expensive, but it also delivers a more bespoke service.

Protecting any data you share with the system is also highly important. In theory, AI-assisted cybersecurity software should take care of much of that, but there are still ways data could be compromised. For example, data poisoning can make a secure system more vulnerable to attacks, a factor that 33% of respondents noted as a top concern.

Methodology

*GetApp’s 2024 Data Security Survey was conducted online in August 2024 among 4,000 respondents in Australia (n=350), Brazil (n=350), Canada (n=350), France (n=350), India (n=350), Italy (n=350), Japan (n=350), Mexico (n=350), Spain (n=350), the U.K. (n=350), and the U.S. (n=500) to learn more about data security practices at businesses around the world. Respondents were screened for full-time employment in an IT role with responsibility for, or full knowledge of, their company's data security measures.

Sources

  1. Get AI Ready: Action Plan for IT Leaders, Gartner

About the author

David Jani

David Jani is a content analyst at GetApp. With a background in tech journalism, public relations, and marketing, he uses his extensive experience to provide actionable insights for small and midsize businesses.

David’s research and analysis are informed by more than 150,000 authentic user reviews on GetApp and nearly 3,000 interactions between GetApp software advisors and software buyers.

His thought leadership work has been featured in TechRadar, Startups Magazine, and Raconteur.