The standards we write today will shape how a generation secures AI. We're looking for the rare individuals qualified to write them.
78% of enterprises run AI in production, yet 83% lack security controls for those systems. The gap isn't due to lack of concern; it's the absence of authoritative guidance. Organizations want to secure their AI systems but have no industry-recognized framework to implement, no controls to audit against, and no common vocabulary for communicating their posture.
This isn't a problem that well-meaning generalists can solve. It requires people who've spent years at the intersection of AI systems and security architecture, people who understand both the novel attack surfaces and the practical constraints of deployment.
These frameworks will become reference points for the industry—cited in procurement requirements, referenced by regulators, taught in universities. They deserve to be built by the best minds available.
A comprehensive, implementation-focused framework of security controls for AI systems—mapped to NIST AI RMF, MITRE ATLAS, ISO 42001, and OWASP LLM Top 10. The industry's first dedicated AI security controls standard.
Framework for model provenance, AI Bill of Materials (AIBOM), and supply chain attestation. Addressing the 100+ malicious models discovered on public repositories in 2024 alone.
Tailored adaptations for high-stakes sectors: Legal (privilege protection), Government/IC (classification controls), Financial Services (model risk), Healthcare (patient safety), and Critical Infrastructure (OT/ICS integration).
The decisions made by the Founding Board will compound over decades. Every control family you help define, every requirement you help write becomes a foundation that others build upon.
Organizations worldwide will structure their AI security programs around the frameworks we create. Your architectural decisions become industry defaults.
As AI oversight accelerates globally, well-designed industry standards become the technical foundation that policies reference. Shape the conversation before legislation solidifies.
These standards will be taught in graduate programs and corporate training for decades. Your expertise becomes encoded in how practitioners think about the problem.
We're not looking for enthusiasm alone. We need people who've architected security for production AI systems, who've researched adversarial attacks, who understand both the theoretical foundations and practical realities.
If you've spent years building expertise at the intersection of AI and security, and you recognize the urgency of this moment—we'd like to talk.
Apply to Join the Founding Board