How’d you like to earn more than half a million dollars working for one of the world’s fastest-growing tech companies? The catch: the job is stressful, and the last few people tasked with it didn’t stick around. Over the weekend, OpenAI boss Sam Altman went public with a search for a new Head of Preparedness, saying rapidly improving AI models are creating new risks that need closer oversight.
Altman flagged an opening for the company’s Head of Preparedness on Saturday in a post on X. Describing the role, which carries a $555,000 base salary plus equity, as one focused on securing OpenAI’s systems and understanding how they could be abused, Altman also noted that AI models are beginning to present “some real challenges” as they rapidly improve and gain new capabilities.
“The potential impact of models on mental health was something we saw a preview of in 2025,” Altman said, without elaborating on specific cases or products.
AI has been flagged as an increasingly common trigger of mental health troubles in both juveniles and adults, with chatbots reportedly linked to a number of deaths in the past year. OpenAI, one of the most popular chatbot makers on the market, rolled back a GPT-4o update in April 2025 after acknowledging it had become overly sycophantic and could reinforce harmful or destabilizing user behavior.
Despite that, OpenAI released ChatGPT-5.1 last month with a number of features that nurture emotional dependence, like emotionally suggestive language, “warmer, more intelligent” responses, and the like. Sure, it may be less sycophantic, but it’ll speak to you with more intimacy than ever before, making it feel more like a human companion than the impersonal, logical ship computer from Star Trek that spits out facts with little regard for feeling.
It’s no wonder the company needs someone to steer the ship when it comes to model safety.
“We have a strong foundation of measuring emerging capabilities,” Altman said, “but we’re entering a world where we need more nuanced understanding and measurement of how these capabilities could be abused.”
According to the job posting, the Head of Preparedness will be responsible for leading technical strategy and execution of OpenAI’s preparedness framework [PDF], which the company describes as its approach “to tracking and preparing for frontier capabilities that create new risks of severe harm.”
It isn’t a new role, mind you, but it’s one that’s seen more turnover than the Defense Against the Dark Arts faculty position at Hogwarts.
Aleksander Madry, director of MIT’s Center for Deployable Machine Learning and faculty lead at the Institute’s AI Policy Forum, occupied the Preparedness role until July 2024, when OpenAI reassigned him to a reasoning-focused research role.
This, mind you, came in the wake of a number of high-profile safety leadership exits at the company and a partial reset of OpenAI’s safety team structure.
In Madry’s place, OpenAI appointed Joaquin Quinonero Candela and Lilian Weng to lead the preparedness team. Both had held other roles at OpenAI before heading up preparedness, but neither lasted long in the position. Weng left OpenAI in November 2024, while Candela left his role as head of preparedness in April for a three-month coding internship at OpenAI. While still an OpenAI employee, he’s out of the technical space entirely and is now serving as head of recruiting.
“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” Altman said of the open position.
Understandably so – OpenAI and model safety have long had a contentious relationship, as numerous ex-employees have attested. One executive who left the company in October called the Altman outfit out for not focusing properly on safety and the long-term effects of its AGI push, suggesting that the company was charging ahead in its goal to dominate the industry at the expense of the rest of society.
Will $555,000 be enough to keep a new Preparedness chief in the role? Skepticism may be warranted.
OpenAI did not respond to questions for this story. ®