Our Members
The AI Evaluator Forum brings together independent research organizations. Membership is limited to organizations that conduct and publish rigorous, independent technical evaluations of general-purpose AI systems in the public interest.

Transluce
Building open, scalable technology for understanding AI behaviors and their effects on society.

METR
Researching, developing, and running evaluations of frontier AI systems' ability to complete complex tasks autonomously.

RAND
Developing advanced AI evaluation methods, testing systems for dangerous capabilities, and sharing findings to help inform evidence-based policy.

AI Verification and Evaluation Research Institute
Advancing effective third‑party auditing for frontier AI.

SecureBio
Securing the future against catastrophic pandemics, including by evaluating the misuse risks of biosecurity-relevant AI capabilities.

Princeton Holistic Agent Leaderboard
A standardized, cost-aware, third-party leaderboard for evaluating AI agents.

Collective Intelligence Project
Embedding democratic global guidance into the development and evaluation of frontier AI.

Meridian Labs
Open-source tools and technology for evaluating and understanding frontier AI.

More members coming soon...