
Reducing False Positives in Vulnerability Detection: A Survey of LLM-Based and Agentic Approaches

Daniil Ogurtsov, Igor Rumiantsev

March 2026

Abstract

Security teams face overwhelming false positive rates: traditional SAST tools produce 68-95% false positives, SOC analysts routinely leave the majority of alerts uninvestigated, and analyst burnout is pervasive. This survey analyzes 100 sources (academic papers, vendor data, and production case studies) covering 10 distinct AI/LLM-based approaches to false positive reduction in security tooling. We find that no single approach dominates: hybrid SAST+LLM pipelines eliminate 94-98% of false positives in one industrial study; triage memories offer fast return on investment; and proof-of-concept validation achieves zero false positives by confirming exploitability before reporting. We categorize these approaches along three axes, compare their cost-effectiveness, and identify seven open problems.
