How AI is Permanently Changing How We Innovate in Cybersecurity
April 8, 2024
Total Article Read Time: 3 min
This blog takes an engineering perspective on the development considerations for AI-native managed detection and response (AI-MDR) in cybersecurity. In it, I'll highlight how an AI-led approach transforms the development process and its outcomes.
Introduction
The use of Managed Detection and Response (MDR) services has grown rapidly because organizations face a vast number of vulnerabilities and threats yet lack the skilled cybersecurity professionals to analyze and remediate them in a timely manner. Without continuous monitoring, incidents can go undetected for long periods, leaving the business exposed to significant impact. Small and medium-sized companies in particular struggle to implement 24/7 monitoring on their own, so they often adopt MDR services to close this gap.
Additionally, MDR services promise superior threat monitoring, investigation, and threat hunting: analyzing threats, remediating them, and recommending improvements to an organization's defenses. MDR services can also assist with reducing exposures (the attack surface) and hardening systems.
Even with an MDR service, one challenge that persists is the vast number of false positives that must be addressed. Most reported threats turn out to be false positives, because every environment is unique and requires significant ongoing tuning by security analysts to suppress benign alerts. Incorporating AI-Native products into MDR services directly addresses this challenge by automatically analyzing and resolving the vast majority of threats unique to each organization.
Traditional MDR vs. AI-Native MDR
Traditional MDRs require significant human resources to operate, and the quality of analysis depends heavily on the experience and knowledge of the analysts on staff. With little automation, traditional MDR analysts also suffer from alert fatigue, which can leave them unable, or under-resourced, to analyze real threats promptly and comprehensively, risking exposure for the organizations they protect. The situation is compounded by how quickly attackers can break out of the initial system they infect: a CrowdStrike report puts the breakout time at 79 minutes in 2023, five minutes faster than in 2022. Rapid analysis and remediation of reported threats is therefore more critical than ever to prevent a threat from having a large impact on the organization.
AI-Native MDR improves the quality of incident response by simplifying and accelerating it with playbooks built on AI-powered, natural-language virtual assistants. These assistants are pre-trained and do not require highly experienced analysts; junior analysts can use them to create playbooks in minutes that automate the analysis of the majority of threats. Since most alerts are false positives or irrelevant to the organization, automated playbooks can resolve them without manual cycles. Playbooks can also automatically prioritize alerts so that critical threats requiring immediate investigation are surfaced first.
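To make the idea concrete, here is a minimal Python sketch of what such an automated triage playbook might look like. The alert fields, the benign-indicator list, and the severity scale are illustrative assumptions for this sketch, not a description of any specific product's implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Alert:
    id: str
    source: str                                  # e.g. "endpoint", "identity"
    severity: int                                # 0 (informational) .. 10 (critical)
    indicators: List[str] = field(default_factory=list)

# Hypothetical allow-list of indicators this organization has already tuned out.
KNOWN_BENIGN = {"internal-vuln-scanner", "approved-admin-tool"}

def triage(alerts: List[Alert]) -> Tuple[List[Alert], List[Alert]]:
    """Auto-resolve likely false positives and rank what remains by severity."""
    open_alerts, auto_resolved = [], []
    for alert in alerts:
        if any(ind in KNOWN_BENIGN for ind in alert.indicators):
            auto_resolved.append(alert)          # close automatically, keep for audit
        else:
            open_alerts.append(alert)
    # Surface the most critical threats first for analyst review.
    open_alerts.sort(key=lambda a: a.severity, reverse=True)
    return open_alerts, auto_resolved

if __name__ == "__main__":
    queue = [
        Alert("a1", "endpoint", 3, ["internal-vuln-scanner"]),
        Alert("a2", "identity", 9, ["impossible-travel-login"]),
    ]
    to_investigate, auto_closed = triage(queue)
    print("investigate:", [a.id for a in to_investigate], "auto-closed:", [a.id for a in auto_closed])
```

In practice the benign-indicator logic would be learned and tuned per organization rather than hard-coded, but the flow, resolve what is known-benign and prioritize the rest, is the same.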
For threats that require thorough analysis, analysts can use the virtual assistant to ask questions and perform tasks much faster. In typical use, the virtual assistant makes analysts roughly 10X more productive at performing thorough investigations, focusing on remediation, and hardening systems to prevent further attacks.
Analysts can also use virtual assistants to automatically produce consistent, well-documented case records. This removes manual work and frees time to train analysts on better investigation and threat hunting.
Development Considerations for AI-Native MDR
Fundamentally, AI-Native MDRs should be built and trained on high-quality data unique to each organization. Security controls should ensure that one organization's data is never used to train AI models serving other organizations. In addition, users should be able to provide feedback that tunes the AI models to fit their needs.
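As a rough illustration of that isolation requirement, the sketch below keeps training data and analyst feedback partitioned per tenant. The class and method names are hypothetical and only show the shape of the control, not a particular product's design.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

class TenantScopedTrainingStore:
    """Keeps each organization's training data and feedback strictly separated."""

    def __init__(self) -> None:
        self._examples: Dict[str, List[Tuple[dict, str]]] = defaultdict(list)  # tenant_id -> labeled alerts
        self._feedback: Dict[str, List[Tuple[str, str]]] = defaultdict(list)   # tenant_id -> analyst corrections

    def add_example(self, tenant_id: str, alert: dict, label: str) -> None:
        self._examples[tenant_id].append((alert, label))

    def add_feedback(self, tenant_id: str, alert_id: str, correction: str) -> None:
        # Analyst feedback is used to tune only that tenant's model.
        self._feedback[tenant_id].append((alert_id, correction))

    def training_set(self, tenant_id: str) -> List[Tuple[dict, str]]:
        # A model for one tenant never sees another tenant's data.
        return list(self._examples[tenant_id]) + [
            ({"alert_id": alert_id}, correction)
            for alert_id, correction in self._feedback[tenant_id]
        ]
```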
AI-Native products must have metrics built in to verify that the AI models work as designed and meet the performance expectations required for analyzing a high volume of alerts. MDR providers must also continuously enhance the models themselves to keep pace with evolving cyber threats.
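One plausible way to build those metrics in is to track precision, recall, and triage latency for each model release and gate updates on them. The thresholds in this sketch are placeholders I chose for illustration, not required values.

```python
from dataclasses import dataclass

@dataclass
class TriageMetrics:
    true_positives: int
    false_positives: int
    false_negatives: int
    mean_latency_s: float   # average time from alert ingestion to verdict

    @property
    def precision(self) -> float:
        denom = self.true_positives + self.false_positives
        return self.true_positives / denom if denom else 0.0

    @property
    def recall(self) -> float:
        denom = self.true_positives + self.false_negatives
        return self.true_positives / denom if denom else 0.0

def meets_slo(m: TriageMetrics, min_precision: float = 0.95,
              min_recall: float = 0.90, max_latency_s: float = 60.0) -> bool:
    """Gate a model release on illustrative performance targets."""
    return (m.precision >= min_precision
            and m.recall >= min_recall
            and m.mean_latency_s <= max_latency_s)

if __name__ == "__main__":
    nightly = TriageMetrics(true_positives=940, false_positives=30,
                            false_negatives=80, mean_latency_s=12.5)
    print(f"precision={nightly.precision:.2f} recall={nightly.recall:.2f} ship={meets_slo(nightly)}")
```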
An AI-Native MDR is also expected to offer high-quality integrations with all major endpoint, cloud, network, and identity products, including the ability to service, assess, and remediate or harden customer environments to reduce the attack surface.
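That integration surface can be thought of as a common connector interface across telemetry sources. The `Connector` protocol and the example connector below are hypothetical abstractions used to show the idea; they are not a real vendor API.

```python
from typing import Iterable, Protocol

class Connector(Protocol):
    """Hypothetical common interface over endpoint, cloud, network, and identity products."""

    def fetch_alerts(self) -> Iterable[dict]:
        """Pull normalized alerts from the underlying product."""
        ...

    def isolate_host(self, host_id: str) -> bool:
        """Trigger a containment action, if the product supports it."""
        ...

class ExampleEndpointConnector:
    def fetch_alerts(self) -> Iterable[dict]:
        # In practice this would call the vendor's API; here we return a stub alert.
        return [{"host_id": "host-42", "severity": 8, "rule": "credential-dumping"}]

    def isolate_host(self, host_id: str) -> bool:
        print(f"isolating {host_id}")
        return True

def sweep(connectors: Iterable[Connector]) -> None:
    """Collect alerts from every integration and contain high-severity hosts."""
    for connector in connectors:
        for alert in connector.fetch_alerts():
            if alert["severity"] >= 8:
                connector.isolate_host(alert["host_id"])

if __name__ == "__main__":
    sweep([ExampleEndpointConnector()])
```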
Challenges and Ethical Implications
Bias in AI is a frequently discussed topic, with many documented real-world instances. AI-Native products should address it through tight quality control and through feedback and training infrastructure designed to correct it. With any MDR service, human analysts should always review and validate the results of automated threat analysis to ensure that AI bias is contained and addressed.
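A lightweight way to enforce that human check is a review gate on automated verdicts. The confidence threshold, action names, and audit sampling rate below are illustrative assumptions, not prescribed values.

```python
import random

def requires_human_review(verdict: dict, sample_rate: float = 0.1) -> bool:
    """Route a verdict to an analyst when confidence is low, the action is high-impact,
    or it falls into a random audit sample used to catch systematic bias."""
    if verdict["confidence"] < 0.8:
        return True
    if verdict["action"] in {"isolate_host", "disable_account"}:
        return True
    return random.random() < sample_rate

if __name__ == "__main__":
    verdict = {"alert_id": "a2", "confidence": 0.95, "action": "close_as_benign"}
    print(requires_human_review(verdict))
```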
Conclusion
AI is rapidly transforming everything we do because of its efficiency, quality, and cost savings. This remarkable innovation has been applied across many industries, and cybersecurity is no exception. Unfortunately, AI can be used for harm as well as good: cybercriminals are increasingly using it for phishing, exploit code generation, and reverse engineering. There is an urgent need for security vendors to leverage AI to equip security teams and give them the upper hand.
AI-Native MDR solutions work as a force multiplier for organizations of all sizes. AirMDR is building one of the first AI-native MDR services, and our team has deep experience building cybersecurity products and services. We aim to offer a viable option that affordably gives small and medium-sized organizations the technology and resources typically available only to more mature organizations.
Please contact our team to learn more about how a highly automated and AI-led MDR service can instantly help with your threat investigation and hunting.
Srikant Vissamsetti, CTO of AirMDR, was previously VP of Engineering at SentinelOne, SVP at Attivo Networks, and VP of Network Security at McAfee.