About
Our mission is to mitigate catastrophic risks from transformative AI, which we believe is likely to be developed in the coming years or decades.
Our team and history
Our team works hard to ensure that your donations are used as effectively as possible. We have substantial experience managing large donations securely, professionally, and flexibly. We've collectively overseen tens of millions of dollars’ worth of donations to nonprofits based on careful analysis of their impact and ability to use new funding.
We're always happy to help — if there's anything that we can do to ensure you have a great experience donating through the ARM Fund, don’t hesitate to email us at info@airiskfund.com.
Lawrence is a researcher at ARC Evals, working on safety standards for AI companies. Before joining ARC Evals, he worked at Redwood Research and was a PhD student at the Center for Human-Compatible AI at UC Berkeley.
Oliver is the co-founder of Lightcone Infrastructure, whose main product is LessWrong. LessWrong has significantly influenced conversations around rationality and AGI risk, and the LW community is often credited with recognizing the importance of topics such as AGI (and AGI risk), COVID-19, existential risk, and crypto much earlier than comparable communities.
Lauro is a PhD student with David Krueger at the University of Cambridge. His work focuses broadly on AI safety, in particular on demonstrations of alignment failures, forecasting AI capabilities, and scalable AI oversight.
Thomas is the executive director of the Center for AI Policy, where he works on developing and advocating for AI regulation. He previously did alignment research at MIRI and has an undergraduate degree in Computer Science and Mathematics from the University of Michigan.
Caleb leads Effective Altruism Funds and is a fund manager on the Long-Term Future Fund, where he has evaluated over $34M of grant applications. The majority of his grants have funded AI safety field-building or technical safety research.
Sam is a Member of Technical Staff at Anthropic, where he directs an alignment research division. His research includes work on scalable oversight, model organisms of misalignment, and generalization. He is also a tenured associate professor of linguistics, data science, and computer science at NYU. Sam holds a PhD in linguistics from Stanford University.
Aviv (@metaviv) works at the intersection of AI, platforms, democracy, and deliberation. He is a research fellow at newDemocracy; an affiliate at Harvard's Berkman Klein Center and at the Centre for the Governance of AI; a founder of the AI & Democracy Foundation (soon to be launched); and a consultant for civil society organizations, technology companies, and funders.
Abigail is an affiliate of IAPS, a board member at Rethink Priorities, and a non-resident fellow at the Council on Strategic Risks. Abi has served as a U.S. diplomat for over seven years, focusing on emerging technology, geopolitical risks, immigration, macroeconomic stability, cybersecurity, AI, and national security, with an emphasis on China. Abi earned a BA from the University of Richmond and a master's degree in Global Affairs from Yale University.
Adam is the CEO and co-founder of FAR AI, an alignment research non-profit working to incubate and accelerate new alignment research agendas. He received his PhD from UC Berkeley under the supervision of Stuart Russell. Previously, he spent time at DeepMind working with Jan Leike and Geoffrey Irving, and at Cambridge working with Zoubin Ghahramani and Christian Steinruecken.
What you can expect if you donate
When we receive a donation designated for the ARM Fund, we grant 100% of its value (minus any fees charged by payment processors) to organizations and individuals recommended by our team of expert grantmakers. For example, if you donate $100 and the payment processor charges a $3 fee, we grant the remaining $97.
The ARM Fund does not take any fees from donations before distribution (we fundraise separately for our overhead costs, which include wages for our grantmakers). Technically, all donations are made to Effective Ventures Foundation USA, Inc., a 501(c)(3) public charity, or to Effective Ventures Foundation (UK) (EV UK), a charity in England and Wales, and are restricted to the ARM Fund.
Some applicants to the ARM Fund will likely also apply to the Long-Term Future Fund (LTFF). So while 100% of your donation will go directly to AI safety projects under the ARM Fund, it is also likely to (indirectly) free up some of the LTFF's other resources. Unlike the ARM Fund, the LTFF funds not only AI safety but also interventions in other existential-security focus areas, such as biosecurity.
Donors can join our quarterly mailing list, which includes updates on AI safety, a list of our grants, and more.