In November 2025, we filed papers to establish the AI Security Foundation (AISF) as a 501(c)(6) industry association. This wasn't a decision made lightly; the world doesn't need more organizations. But after years of working at the intersection of AI and security, the gap became impossible to ignore.
The Numbers Tell the Story
The data is stark. Kiteworks surveyed 461 cybersecurity professionals across multiple industries and found that 83% of organizations lack automated security controls for their AI systems [2]. That finding won't surprise anyone who has worked in this space, but seeing it quantified makes the gap undeniable.
What Exists Today
We're not starting from nothing. Several frameworks provide valuable guidance:
- NIST AI RMF — Comprehensive risk management framework released January 2023, providing a structured approach to AI governance and risk assessment.
- MITRE ATLAS — Adversarial Threat Landscape for AI Systems, documenting known attack techniques against ML systems in a format familiar to security practitioners.
- ISO/IEC 42001:2023 — International standard for AI management systems, providing requirements for establishing and maintaining AI governance.
- OWASP LLM Top 10 — Practical enumeration of the most critical security risks for Large Language Model applications.
These frameworks are valuable, and we reference and build upon them. But for the most part they provide guidance, not certification paths. An organization can adopt NIST AI RMF principles, yet no recognized certification attests that "we've implemented these controls and a qualified assessor has verified it." ISO/IEC 42001 is certifiable, but it certifies a management system rather than the technical security controls themselves.
What's Missing
The gap we're addressing isn't in research or guidance—it's in implementation-focused standards and trusted certification. Organizations need:
- Controls they can implement and audit against
- Certification that communicates security posture to customers and regulators
- Professional credentials for practitioners in this emerging field
- A common vocabulary for discussing AI security requirements
This is what AISF will provide. Not another framework of principles, but actionable standards with certification programs that organizations can pursue and professionals can earn.
Why a 501(c)(6)
We structured AISF as a 501(c)(6) industry association, following the industry-consortium model of standards bodies like PCI SSC and HITRUST, because independence matters. Standards developed by a single vendor or dominated by a few large players won't earn industry trust.
The 501(c)(6) structure allows us to:
- Operate as a non-profit focused on industry benefit
- Accept membership dues and develop certification programs
- Maintain independence from any single corporate interest
- Build governance that represents diverse stakeholders
What We're Building
Our initial priorities:
AI Security Controls Matrix (AISCM)
Implementation-focused security controls mapped to NIST AI RMF, MITRE ATLAS, ISO 42001, and OWASP LLM Top 10. Controls organizations can implement and certify against.
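To make the shape of the matrix concrete, here is a minimal sketch of what a single AISCM entry could look like, written as a Python data structure. The control ID, field names, tier value, and the specific framework mappings are all illustrative assumptions; the matrix itself hasn't been drafted yet.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """One hypothetical AISCM control entry with cross-framework mappings."""
    control_id: str   # illustrative ID scheme, not a published one
    title: str
    requirement: str  # the statement an assessor would verify
    tier: str         # Foundation | Hardened | Zero-Trust
    mappings: dict[str, list[str]] = field(default_factory=dict)

# Illustrative entry: screening untrusted input to an LLM application.
example = Control(
    control_id="AISCM-DP-01",
    title="Input screening for LLM applications",
    requirement=(
        "Untrusted input to an LLM is screened for injection patterns "
        "before reaching the model, and screening results are logged."
    ),
    tier="Foundation",
    mappings={
        "MITRE ATLAS": ["AML.T0051"],   # LLM Prompt Injection
        "OWASP LLM Top 10": ["LLM01"],  # Prompt Injection
        "NIST AI RMF": ["MANAGE 2.3"],  # assumed subcategory mapping
    },
)
```

The design idea is that each control carries its own mappings, so an assessor can trace a single auditable requirement back to the frameworks it satisfies.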
Organizational Certification
Three-tier certification (Foundation, Hardened, Zero-Trust) allowing organizations to demonstrate AI security maturity appropriate to their risk profile.
Professional Certification
Credentials for AI security practitioners and assessors, creating a recognized career path and trusted pool of qualified professionals.
The Road Ahead
We're in the founding phase now—establishing the board, awaiting IRS determination, and building the initial team. The work of drafting standards and launching certification programs comes later, guided by the expertise of the founding board we're now assembling.
If you've spent your career at the intersection of AI and security, if you've felt the frustration of the gap between AI adoption and AI security, we'd like to talk. Not everyone who's interested will be right for the founding board—we need specific expertise and the capacity to contribute meaningfully. But if that describes you, this is a chance to shape something foundational.
Join the Founding Board
We're seeking leaders who understand this gap and have the expertise to help close it.
Apply to Join the Founding Board

Sources
[1] Fullview, "200+ AI Statistics & Trends for 2025." fullview.io/blog/ai-statistics
[2] Kiteworks, "The 2025 AI Security Gap." kiteworks.com
[3] NIST, "AI Risk Management Framework." nist.gov/itl/ai-risk-management-framework
[4] MITRE, "ATLAS (Adversarial Threat Landscape for AI Systems)." atlas.mitre.org
[5] OWASP, "Top 10 for Large Language Model Applications." owasp.org