As artificial intelligence grows more powerful, even its creators are beginning to voice concern. OpenAI CEO Sam Altman has openly acknowledged that advanced AI systems are starting to pose real challenges, especially as they become more autonomous and capable of acting without human oversight.
In a rare and candid admission, Altman said that AI agents are becoming a problem, noting that some models can now identify weaknesses in software systems, including cybersecurity gaps. While these abilities show how capable AI has become, they also raise fears of misuse if the technology falls into the wrong hands or behaves in unexpected ways.
To address these risks, OpenAI has announced a high-pressure new role focused entirely on AI safety. The company is hiring a Head of Preparedness, offering a salary of around Rs 5 crore a year (USD 555,000) along with company equity. Altman described the position as "stressful," warning that whoever takes it on will have to confront serious and complex risks from day one.
The new safety lead will be responsible for identifying potential dangers early, stress-testing how AI behaves under extreme conditions, and putting safeguards in place to prevent harm. This includes preparing for scenarios in which AI could be misused for cyberattacks or other high-risk activities.
Altman admitted that while OpenAI has become good at measuring what AI can do, predicting how people might misuse it is much harder. He stressed that the company needs stronger safety planning as AI systems grow smarter and more autonomous.
The move reflects a broader shift in the tech industry. Companies that once prioritised speed and innovation above all else are now recognising that responsibility and safety must come first. With regulators around the world still catching up, internal safeguards matter more than ever.
By offering one of the highest salaries in the AI safety space, OpenAI is sending a clear message: the future of AI depends not just on intelligence, but on control, caution, and care.