When the board of OpenAI staged a bum mutiny last November, throwing out the company’s leadership only to have the bosses return while board members were pressured to resign, something seemed rotten in the state of effective altruism. Nominally, OpenAI’s mission had been to ensure that AI “benefits all of humanity.” Fiduciarily, OpenAI’s mission is to benefit the subset of humanity with a stake in OpenAI.
And then, of course, there was Sam Bankman-Fried, the felonious altruist who argued in court last fall that his sordid crypto exchange was in fact a noble exercise in earning-to-give—making Midas money, sure, but only to funnel it to the global poor. The jury didn’t buy it, convicting him on seven counts of playing-god-to-defraud. This week he’s facing a recommended prison sentence of up to 50 years, which, his legal team has complained, paints him as a “depraved super-villain.”
But while OpenAI and SBF were busy besmirching the EA brand, the philosophy’s central inquiry has persisted. How best to help others? Should we feed the hungry? Model the potential catastrophes of AGI? Colonize Mars?
In WIRED this week, the philosopher Leif Wenar calls EA “the secular religion of elites.” Indeed, every street-corner preacher in Silicon Valley seems to want to sermonize on it. (Strangers Drowning, by Larissa MacFarquhar, remains the best recent book on altruism and its discontents.) But only in sitting down with Elie Hassenfeld, the CEO of GiveWell, the charity reviewer that has long been the stablemate of Silicon Valley philanthropists, did I realize that EA does not have to be a creed or a philosophy or a debate. At GiveWell, it’s a to-do list.
Hassenfeld’s answer to “How to be good?” is unnervingly specific. It’s also utilitarian, and boring. Give to Malaria Consortium, Against Malaria Foundation, Helen Keller International, and New Incentives. These are GiveWell’s top charities, chosen because they stave off malaria, prevent childhood blindness and death, and get kids vaccinated. How is GiveWell so sure? Consult, if you dare, its weedsy research, and be sure to look at the “Our Mistakes” tab if you want to see what a credible performance of candor in data-gathering looks like.
Hassenfeld cofounded GiveWell with fellow ex-hedge-funder Holden Karnofsky in 2007, before effective altruism was even a seed-stage social movement. From the start, the organization was the leading light of the so-called evidence-based charity community. Over time, its mission was subsumed under the EA rubric along with the missions of disparate other projects, including Giving What We Can and the Machine Intelligence Research Institute.
GiveWell also became a starter employer for young bourgeois EAs looking to do demonstrable good. Among the many EAs Hassenfeld and Karnofsky hired was Helen Toner, a former OpenAI board member and the one who clashed most resoundingly with the CEO, Sam Altman, over ethical issues. Toner in particular seems to have found a mentor in Karnofsky, who now works to mitigate the AI threat at Open Philanthropy, an EA grantmaking organization he helped spin out of GiveWell.