Find the right AI cybersecurity tools for your security strategy

OpenAI’s ChatGPT, a generative artificial intelligence (AI) tool released in November 2022, has sparked discussion about the potential of these technologies in cyberspace – for both attackers and defenders.

As a result, cybersecurity vendors have expanded their product offerings to include AI-based capabilities. In March 2024, Technavio published a report estimating that the AI-based cybersecurity market grew by 19.5% between 2022 and 2023, and projecting that it will grow by $28.29bn by 2027.

A recent Infosecurity Europe survey of 200 cybersecurity professionals found that 54% of respondents planned to incorporate AI into their cybersecurity strategy within the next year.

Harman Singh, Director and Managing Consultant at consultancy Cyphere, told Infosecurity that “subsets of AI are changing very quickly as technology advances, with many new and exciting options. We may see stealth solutions later this year.”

The hype surrounding AI technology can create challenges for organizations to effectively use these tools in their cybersecurity operations.

Ian Hill, Director of Information and Cyber Security at UPP, warns that while AI is a game changer in cybersecurity, the current environment could lead organizations to waste significant money on products that are not always appropriate or effective for their business risks.

He explained: “Everyone is jumping onto the AI bandwagon because they can see the £/$ signs. So many vendors are putting an AI spin on their existing products or, as they will sell it, ‘updated’ products, which in some cases are nothing more than glorified automation tools.”

Cyber professionals and business leaders must have a thorough understanding of the AI-based cybersecurity products market and know how to select the right solutions for their organizations.

Importance of AI in Cybersecurity

AI has been used to enhance cybersecurity for some time, and large language models (LLMs), a type of AI, have added to this capability since around 2022. With AI being weaponized by cyber adversaries, it is important that defenders harness these technologies as well.

Hanah Darley, Director of Threat Analysis at AI cybersecurity company Darktrace, said: “SOC teams need a growing army of defensive AI in order to protect an organization effectively in the age of offensive AI.”

AI has a major impact on cybersecurity, especially in light of the growing cyber skills gap.

Singh said: “SOC work can be exhausting for SOC analysts because of the amount of work needed to detect, analyze, triage and remediate security concerns. AI automation allows SOC teams the ability to focus on more critical tasks and improves overall efficiency.”

Well-trained AI models can also be far more efficient than humans at analyzing data, improving security teams’ ability to detect ongoing attacks and predict future ones.
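To make this concrete, the kind of pattern analysis these tools automate can be illustrated with a deliberately simple statistical sketch. This is a toy example under assumed names and thresholds, not any vendor’s implementation: it flags days whose login counts deviate sharply from the baseline.

```python
# Toy anomaly detector: flag days whose login counts deviate sharply from
# the norm. Real AI-based tools do this at far greater scale and with far
# richer models; the function name and threshold here are assumptions.
from statistics import mean, stdev

def flag_anomalies(daily_logins: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices of days whose counts lie more than `threshold`
    standard deviations from the mean."""
    mu, sigma = mean(daily_logins), stdev(daily_logins)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, n in enumerate(daily_logins)
            if abs(n - mu) / sigma > threshold]

# Twenty ordinary days followed by one spike: only the spike is flagged.
print(flag_anomalies([100] * 20 + [1000]))
```

A production system would replace the z-score with learned models, but the principle is the same: the machine scans volumes of telemetry no human team could review by hand.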

Indy Dhami said that the analysis of patterns can help predict future threats with greater certainty.

He added: “Generative AI could also be used to simulate cyber-attacks, helping security firms to understand network components, data flows and vulnerabilities, and to highlight potential attack paths continuously to test and protect system and data resilience.”

AI’s ability to quickly identify and classify threats can also enhance the incident response capabilities of security teams. ReliaQuest’s March 2024 report found that AI and automation enabled organizations to respond to security incidents 99% faster than they did in 2022.

The use of generative AI in particular can improve the efficiency and capabilities of security teams. Microsoft’s Copilot for Security tool shows how LLM chatbots are able to assist.

Microsoft Copilot for Security was made generally available to all users worldwide on April 1, 2024, following the end of its early-access program.

The LLM in Copilot is designed to help security teams perform a wide range of tasks, such as classifying and responding to incidents, writing investigation reports, creating secure code and scripts, and analyzing internal and external attack surfaces.

Dhami believes that generative AI is also ideally suited for enhancing cyber governance. This includes tracking compliance with current security protocols and managing third-party risks.

By automating due diligence, for example, it is possible to monitor and assess the risks posed by each vendor in the supply chain.
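As a minimal sketch of what automated vendor due diligence might look like, the snippet below scores suppliers against a weighted checklist and surfaces the riskiest ones. The field names, weights and functions are illustrative assumptions, not the schema of any real GRC product.

```python
# Hypothetical third-party risk scoring: each failed control adds weighted
# risk. Field names and weights are assumptions for illustration only.
def vendor_risk_score(vendor: dict) -> int:
    """Higher score = higher risk."""
    checks = [
        ("has_soc2_report", 3),       # no independent security audit
        ("encrypts_data_at_rest", 2),  # weak data protection
        ("patched_known_cves", 4),     # exposed to known vulnerabilities
    ]
    return sum(weight for key, weight in checks if not vendor.get(key, False))

def riskiest_vendors(vendors: dict[str, dict], limit: int = 3) -> list[str]:
    """Return vendor names sorted from highest to lowest risk."""
    return sorted(vendors,
                  key=lambda name: vendor_risk_score(vendors[name]),
                  reverse=True)[:limit]
```

Run continuously against live vendor data, a scheme like this lets teams re-assess supply-chain risk whenever a supplier’s posture changes, rather than only at onboarding.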

Protecting against the risks of Generative AI

In the last year, a number of solutions have been developed to mitigate the data security risks posed by generative AI when it is used for operational purposes.

These risks include accidental data leakage, such as of a company’s source code. This concern led Samsung to ban its employees from using generative AI apps at work in 2023.

Singh said: “Sensitive information can be shared either intentionally, where staff choose not to sanitize the data first, or inadvertently, as with clipboard pasting and document summaries that include sensitive data.”
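The “sanitize the data first” step Singh describes can be sketched very simply: scrub obvious secrets from text before it is pasted into a generative AI tool. The patterns below are deliberately simplistic assumptions; real data loss prevention products use far richer detection.

```python
import re

# Illustrative redaction only: strip email addresses and API-key-like
# strings before text reaches a generative AI tool. These two regexes are
# assumptions for the sketch, not an exhaustive DLP rule set.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def sanitize(text: str) -> str:
    """Replace each detected secret with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(sanitize("contact alice@example.com, token sk-ABCDEFGH12345678XYZ"))
```

A wrapper like this could sit between the clipboard or chat input and the AI service, so that staff who forget to sanitize manually still get a safety net.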

A second concern is vulnerabilities in generative AI software itself. In March 2023, for example, OpenAI provided details of a data breach caused by a flaw in an open-source software library. The bug exposed payment-related information for some customers and allowed the chat titles of active users to be viewed.

Many cybersecurity vendors have developed products to protect organizations against such threats.

Culture AI, for example, offers a solution that monitors and flags employees’ use of generative AI in the workplace, allowing organizations to quickly identify sensitive data shared with such tools. It also offers real-time training on how to use these tools safely.

AI Security Labs also recently launched Mindguard, a solution designed to help engineers evaluate the cyber risks to AI systems such as ChatGPT.

AI in Cybersecurity is Not a Silver Bullet

AI is an important tool for cybersecurity professionals, but it should not be seen as a panacea for combating cyber-threats. To that end, organizations should be careful about which AI tools they choose to use.

Chris Stouff believes that organizations need to be cautious when using standalone AI solutions, as some AI tools can be unreliable when the data they are trained on is biased or tainted.

He warned: “AI is unable to contextualize. It lacks human-like abilities in situational awareness, judgement and prioritization. The software does not understand the subtleties of the environment in which it is being used, or the contexts of industry and market.”

Darktrace’s Darley acknowledged a current trend of “AI-washing,” in which many businesses apply AI to systems or solutions for which it is simply not suitable.

She noted: “Even if AI was used, it might not be the best AI for the problem, which could lead to gaps in the effectiveness of the solution.”

The term AI can encompass a wide range of technologies, from generative AI chatbots like ChatGPT to machine learning and automation capabilities.

Darley explained: “Generative AI is only one type of AI. The right AI technique needs to be applied to the right problem.”

She added that “business leaders must take the time to understand and invest in the correct types of AI for the right use cases.”

Hill expressed concern that AI-based products have in some cases served only to drive up prices.

He said: “I’ve implemented AI-based security tools, one being a well-publicized and famous early adopter. For me, it didn’t bring anything I couldn’t already achieve with existing software for much less money, and didn’t live up to its hype.”

How to Choose the Right AI Tool

On a strategic basis, AI-based cybersecurity tools are no different from other solutions, and the fundamental risks remain the same.

“While AI can enhance certain aspects of cybersecurity, it will not replace the need for an integrated risk strategy or expertise from internal IT security teams as well as external SOCs,” Stouff said.

As with other business solutions, AI tools serve a purpose. That purpose is to protect your business.

Hill noted: “Too few take a top-down, risk-based approach aligned with business goals and objectives, but instead a bottom-up, solutionized approach.”

AI-based tools cannot be used in isolation and must be effectively integrated into existing capabilities.

Darley said that organizations must ask vendors the right questions to ensure the solution they choose is safe, effective and fits the problem.

She stated: “To cut through the noise, security leaders should ask questions about the specific AI techniques used, the way the organization mitigates the risks of data contamination and model tampering, and how they perform quality control on their models to guarantee valid, unbiased outcomes.”

These questions must be part of a rigorous buying process to enable organizations to make informed choices.

Singh said: “Every organization is unique, just like every network. Leaders need to make data-driven decisions when purchasing products and services in order to separate the wheat from the chaff. It is important to ask for capabilities, not just buzzwords. You should also demand measurable results, consider integration and expertise, and focus on ROI.”


AI has enormous potential for cybersecurity, both now and in the future. As cybercriminals use these technologies to increase the volume and sophistication of attacks, it is all the more important that defenders utilize AI in response.

The availability of generative AI has accelerated the hype cycle surrounding AI and created much more noise about how it can best be used by security teams. Business and security leaders need to learn how to cut through that noise and create processes for making appropriate purchases of AI-based tools.
