OpenAI employees report a high-risk, punishing culture

In a public letter, a group of current and former OpenAI employees has expressed concern that the company and its competitors are developing artificial intelligence with undue risk, without sufficient oversight, and while muzzling staff who might witness irresponsible behavior.

“These risks range from the further entrenchment of existing inequalities to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” reads the letter published at righttowarn.ai.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable.”

In the letter, the employees call on OpenAI and other AI companies not to retaliate against workers who raise risk-related concerns. It also asks companies to set up “verifiable” channels through which employees can anonymously report concerns about their work.

“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the letter reads.

“Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry.”


Last month, OpenAI faced backlash following a Vox report revealing that the company had threatened to claw back employees’ equity if they refused to sign non-disparagement agreements, which barred them from criticizing the company or even acknowledging the agreement’s existence.

Sam Altman, the CEO of OpenAI, recently said on X (formerly Twitter) that he was not aware of any such agreements and that the company has never clawed back anyone’s equity. Altman added that OpenAI would remove the clause, freeing employees to speak their minds.

OpenAI has also recently changed how it approaches safety.

Last month, an OpenAI research team responsible for assessing and mitigating the long-term risks posed by the company’s more powerful AI models was effectively dissolved after several prominent members departed and the remaining members were absorbed into other groups.

A few weeks later, Altman and other board members announced that the company had formed a Safety and Security Committee.

OpenAI’s board fired Altman last November, alleging that he had withheld information and deliberately misled them. After a highly public standoff, Altman returned to the company, and most of the board was replaced.

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” said OpenAI spokesperson Liz Bourgeois in a statement.

“Given the importance of this technology, we agree that careful discussion is essential, and we will continue to engage with governments, civil society, and other communities around the world.”

Signatories to the letter include current OpenAI employees who signed anonymously, former OpenAI staff who worked on safety and governance, and researchers now employed at rival AI companies.

Several well-known AI experts also endorsed it, including AI safety specialist Stuart Russell, as well as Geoffrey Hinton and Yoshua Bengio, who shared the Turing Award for their groundbreaking work in AI.

Former OpenAI workers William Saunders, Carroll Wainwright, and Daniel Ziegler—all of whom worked on AI safety—signed the letter.

“The public at large is currently underestimating the pace at which this technology is developing,” says Jacob Hilton, a researcher who previously worked on reinforcement learning at OpenAI and who left the company more than a year ago to pursue a new research opportunity.

According to Hilton, even though organizations such as OpenAI pledge to develop AI safely, there is insufficient oversight to guarantee this.

“The protections that we’re asking for, they’re intended to apply to all frontier AI companies, not just OpenAI,” he says.

“I left because I lost confidence that OpenAI would behave responsibly,” says Daniel Kokotajlo, a researcher who previously worked on AI governance at OpenAI.

“There are things that happened that I think should have been disclosed to the public,” he adds, declining to provide specifics.

According to Kokotajlo, the letter’s recommendations would increase transparency, and he believes that, given the backlash over news of the non-disparagement agreements, OpenAI and other companies are likely to change their policies.

Additionally, he claims that AI is developing at an alarming rate.

“The stakes are going to get much, much, much higher in the next few years,” he says, “at least so I believe.”
