Jan Leike, a key safety researcher at OpenAI who recently left the organization, has announced that he is now part of the rival AI startup, Anthropic. This move comes shortly after his resignation from OpenAI and the disbanding of the superalignment group that he co-led. Despite the challenges of leaving his previous role, Leike expressed excitement about continuing the superalignment mission with his new team at Anthropic.
AI safety has become an increasingly prominent concern across the tech industry since OpenAI's launch of ChatGPT, which set off a surge in generative AI products and investment. Critics have warned that powerful AI systems are being deployed rapidly without sufficient consideration of potential societal consequences. In response to such concerns, OpenAI has established a new safety and security committee led by senior executives, including CEO Sam Altman, to oversee decision-making in this area.
Anthropic, founded by siblings Dario Amodei and Daniela Amodei along with other former OpenAI executives, released Claude 3, the latest version of its ChatGPT rival, in March. The company has garnered support from Google, Salesforce, Zoom, and, most notably, Amazon, which has committed up to $4 billion for a minority stake. Leike has said his new team at Anthropic will work on scalable oversight, weak-to-strong generalization, and automated alignment research, positioning the company to make significant contributions to AI ethics and safety.
In a post following his departure from OpenAI, Jan Leike acknowledged the difficulty of leaving a role that he was deeply committed to, emphasizing the urgency of ensuring that AI systems are developed and controlled responsibly. His decision to join Anthropic reflects his ongoing dedication to advancing the field of AI safety and aligning AI systems with human values. As the tech industry continues to grapple with the ethical implications of AI advancement, researchers like Leike play a crucial role in shaping the future of artificial intelligence.