We're building a research organization that produces work practitioners actually use. That means researchers who understand both rigor and relevance.
Too much AI security research falls into one of two traps: academic work that's technically rigorous but disconnected from deployment realities, or vendor content that's practical but methodologically questionable. Practitioners deserve better.
AISF is establishing a 501(c)(3) research organization that bridges this gap by producing peer-reviewed work that addresses the problems practitioners actually face. That requires researchers who've operated in both worlds.
The Founding Board will refine the priorities below based on where rigorous investigation can have the most practical impact, and where the current literature falls short.
Adversarial machine learning: Systematic study of attacks against ML systems—evasion, poisoning, extraction, and inference—and the defenses that can withstand real-world adversaries.
Privacy-preserving computation: Practical applications of fully homomorphic encryption, secure multi-party computation, and differential privacy—bridging the gap between theoretical guarantees and production deployments.
AI supply chain security: Model provenance, pre-trained model integrity, and dependency analysis. Understanding the threat landscape revealed by the 100+ malicious models discovered in public repositories.
Agentic AI security: The emerging security challenges of autonomous agents—tool-use authorization, memory integrity, multi-agent coordination, and containment. An area where practitioners are ahead of the literature.
The Research Institute will be led by Founding Director Manbir Gulati, whose work spans privacy-preserving ML (fully homomorphic encryption implementations at scale), synthetic data generation (TabMT, NeurIPS 2023), and production AI security. We're seeking collaborators who bring complementary depth—researchers whose work has influenced how practitioners think about specific problem domains.
Independence. Rigor. Relevance. These aren't just aspirations—they're constraints that require difficult tradeoffs. We're looking for people who understand why they matter.
Independence: Research driven by questions, not sponsors. We'll seek diverse funding specifically to avoid capture by any single interest.
Rigor: Peer review. Reproducibility. Appropriate skepticism about our own findings. The standards that separate research from opinion.
Relevance: Research that addresses problems practitioners recognize. Findings they can act on. Theory that connects to deployment reality.
Academics who've worked with industry. Industry researchers who've published peer-reviewed work. People frustrated by the disconnect between what gets published and what gets deployed.
If you've spent your career trying to make AI security research actually useful, you'll understand why we're building this—and we'd value your perspective in shaping it.
Apply to Join the Founding Board