An anonymous reader quotes a report from Ars Technica:
The US AI Safety Institute -- part of the National Institute of Standards and Technology (NIST) -- has finally announced its leadership team after much speculation. Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but who is also known for predicting that "there's a 50 percent chance AI development could end in 'doom.'" While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST risks encouraging non-scientific thinking that many critics view as sheer speculation.
There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano's so-called "AI doomer" views, NIST staffers were "revolting." Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing "that Christiano's association" with effective altruism and "longtermism could compromise the institute's objectivity and integrity." NIST's mission is rooted in advancing science by working to "promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life." Effective altruists believe in "using evidence and reason to figure out how to benefit others as much as possible," while longtermists hold that "we should be doing much more to protect future generations" -- both stances more subjective and opinion-based than NIST's measurement-driven mission.

On the Bankless podcast last year, Christiano said that "there's something like a 10-20 percent chance of AI takeover" that results in humans dying, and that "overall, maybe you're getting more up to a 50-50 chance of doom shortly after you have AI systems that are human level." "The most likely way we die involves -- not AI comes out of the blue and kills everyone -- but involves we have deployed a lot of AI everywhere... [And] if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us," Christiano said.
As head of AI safety, Christiano will seemingly have to monitor for current and potential risks. He will "design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern," steer evaluation processes, and implement "risk mitigations to enhance frontier model safety and security," the Department of Commerce's press release said. Christiano has experience mitigating AI risks. He left OpenAI to found the Alignment Research Center (ARC), which the Commerce Department described as "a nonprofit research organization that seeks to align future machine learning systems with human interests by furthering theoretical research." Part of ARC's mission is to test whether AI systems are evolving to manipulate or deceive humans, ARC's website said. ARC also conducts research to help AI systems scale "gracefully." "In addition to Christiano, the safety institute's leadership team will include Mara Quintero Campbell, a Commerce Department official who led projects on COVID response and CHIPS Act implementation, as acting chief operating officer and chief of staff," reports Ars. "Adam Russell, an expert focused on human-AI teaming, forecasting, and collective intelligence, will serve as chief vision officer. Rob Reich, a human-centered AI expert on leave from Stanford University, will be a senior advisor. And Mark Latonero, a former White House global AI policy expert who helped draft Biden's AI executive order, will be head of international engagement."
Gina Raimondo, US Secretary of Commerce, said in the press release: "To safeguard our global leadership on responsible AI and ensure we're equipped to fulfill our mission to mitigate the risks of AI and harness its benefits, we need the top talent our nation has to offer. That is precisely why we've selected these individuals, who are the best in their fields, to join the US AI Safety Institute executive leadership team."