Does AI scale to the size of the enterprise?
Posted: Thu Feb 13, 2025 5:47 am
What skillsets are required for the security team to maintain and operate AI?
CTOs also must address questions specifically for an AI solution:
Which of the claimed functions of a specific AI product align with your business objectives?
Can the same functionality be achieved using existing tools?
Does the solution actually detect threats?
That last question can be difficult to answer because malicious cybersecurity events occur on a minuscule scale compared with legitimate activity. In a limited proof-of-concept study using live data, an AI tool may detect nothing simply because nothing is there. Vendors often use synthetic data or Red Team attacks to demonstrate an AI's capability, but the question remains whether the tool is demonstrating true detection capability or simply validating the assumptions under which the indicators were generated.
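The scale mismatch also has a statistical consequence worth making concrete: when attacks are rare, even an accurate detector produces mostly false alarms. A minimal sketch of that base-rate arithmetic (the rates below are illustrative assumptions, not figures from any vendor):

```python
# Illustration: why rare attacks make detection hard to evaluate.
# Assumed numbers: a detector with a 99% true-positive rate, a 1%
# false-positive rate, and 1 malicious event per 10,000 observed.
tpr = 0.99          # P(alert | attack)
fpr = 0.01          # P(alert | benign)
base_rate = 1e-4    # P(attack) among all events

# Bayes' rule: P(attack | alert) = P(alert | attack) P(attack) / P(alert)
p_alert = tpr * base_rate + fpr * (1 - base_rate)
precision = tpr * base_rate / p_alert
print(f"{precision:.4f}")  # about 0.0098 -- ~99% of alerts are false alarms
```

Under these assumptions, fewer than 1 in 100 alerts corresponds to a real attack, which is why a short proof-of-concept on live data says so little about true detection capability.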
It’s difficult to determine why an AI flags something as an attack because AI algorithms are essentially black boxes, still unable to explain how they reached a given conclusion – a problem DARPA’s Explainable AI (XAI) program was created to address.
Mitigating the Risks of AI
An AI solution is only as good as the data it works with. To ensure ethical behavior, AI models should be trained on ethical data, not on the wholesale collection of garbage that is on the World Wide Web. And any data scientist knows that producing a well-balanced, unbiased, clean dataset to train a model is a difficult, tedious, and unglamorous task.
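A minimal sketch of the unglamorous cleaning work that paragraph describes, using a hypothetical toy format of `(text, label)` pairs (the helper names and thresholds here are illustrative assumptions, not a prescribed pipeline):

```python
from collections import Counter

def clean_training_data(rows):
    """Drop rows with missing fields and exact duplicate samples.

    rows: list of (text, label) tuples -- a hypothetical toy format.
    """
    seen = set()
    cleaned = []
    for text, label in rows:
        if not text or label is None or text in seen:
            continue  # skip incomplete or repeated samples
        seen.add(text)
        cleaned.append((text, label))
    return cleaned

def class_balance(rows):
    """Fraction of each label; heavy skew suggests re-sampling is needed."""
    counts = Counter(label for _, label in rows)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}
```

Even this toy version shows the shape of the task: deduplication, completeness checks, and balance audits all happen before a single model is trained, and in practice each step is far messier than it looks here.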