OpenAI must be held accountable.

OpenAI and its leadership cannot be trusted to responsibly develop advanced artificial intelligence.

Since pivoting from their nonprofit origins to begin rapidly scaling AI models in 2020, OpenAI and its senior executives have shown repeated, critical failures of oversight and accountability.


While the “move fast and break things” attitude may be celebrated at some tech companies, it is not remotely appropriate for a technology as consequential and dangerous as this one. We must hold the people and corporations building AI to an unusually high standard. “Good enough” won’t cut it for something so important.


This website contains evidence supporting our conclusion. The signatories of this letter demand that OpenAI be held accountable for its past and future actions to ensure that it does not cause further avoidable harm.


Relevant accountability mechanisms may include, but are not limited to:


  • Appointing a nonprofit board predominantly composed of leaders in AI safety and civil society, as opposed to its current overwhelming bias toward industry leaders.

  • Ensuring the nonprofit is fully insulated from the financial incentives of the for-profit subsidiary.

  • Committing to provide early model access to third-party auditors, including nonprofits, regulators, and academics.

  • Expanding internal teams focused on safety and ethics, and pre-assigning them meaningful “veto power” over future development decisions and product releases.

  • Publishing a more detailed, and more binding, preparedness framework.

  • Publicly announcing the release of all former employees from non-disparagement obligations, assuming there is nothing to hide.

  • Accepting increased government scrutiny to ensure that OpenAI follows all applicable laws and regulations, and refraining from lobbying to water down such regulation.

  • Accepting clear legal liability for the current and future harms to people and society caused by OpenAI’s products.


Our future is at stake. OpenAI and its leaders must act cautiously and with real accountability as they enter uncharted territory developing advanced AI. Local and federal government agencies, lawmakers, the media, and the global public must work proactively to hold OpenAI accountable.

Signatories:

Lucie Philippon
French Center for AI Safety
Charbel-Raphael Segerie
Executive Director, Centre pour la Sécurité de l'IA (CeSIA)
Geoffrey F. Miller
University of New Mexico
Michelle Nie
CeSIA
Amaury Lorin
EffiSciences
Maxime Fournes
Pause AI
Melissa de Britto Pereira
Student, USP
Roman Yampolskiy
Author of AI: Unexplainable, Unpredictable, Uncontrollable
Thomas Burden
Alberto Reis
Digital Artist
Arturo Villacañas
University of Cambridge
Alvin Ånestrand
Co-founder, AI Safety Gothenburg
Kieran Scott
Doctoral candidate, ML author
Harry Lee-Jones
Process Engineer, BHP
Janusz Kaiser
Translator
Oisin Tummon Swensen
Thomas Emond
Construction Manager
Carlo Cosmatos
Anubhav Awasthy
Granicus Technologies
Noah Topper
Klaus Lönhoff
Digital Solutions Project Manager
Bruce McLennan
Nickster Shovel
Nicole Richards
Diego Dorn
EPFL
Paolo Massimo
Software Engineer
Simon Steshin
DL Researcher
Lewis McGregor
Filmmaker
Michael Huang
University of Melbourne
Søren Elverlin
Founder, AISafety.com
Simon Karlsson
Manuel Roman
CFO at an EU tech company
Quinton Thorp
Machinist
Raphael Royo-Reece
Solicitor
John Slape
IT System Analyst
Jaime Raldua
Software developer working in LLM evaluations
Thomas Moore
Senior Software Engineer
Fredi Bach
Developer
Kai Brügge
Emerson Spartz
Nonlinear
Lawrence Jasud
Retired Educator
Michelle Runyan
Stanford University
Mateusz Bagiński
Ron Karroll
Michaël Trazzi
Host, The Inside View
Aaron Stuckert
Felix De Simone
Organizing Director, PauseAI
Neerav Sahay
Alistair Stewart
Campaign Coordinator, Plant-Based Universities
Matthew Loewen
Student, WWU
D'Arcy Mayo
Copywriter
Carlos Parada
Cambridge MLG
Emily Dardaman
Independent researcher, Ex-BCG
Dawid Wojda
Software and machine learning engineer
Manuel Bimich
Co-founder, Centre pour la sécurité de l'IA
Eric Paice
Market Researcher
Eric Ciccone
DevOps Engineer
Peter S. Park
MIT
Siebe Rozendal
Brenna Nelson
MSP-employed IT professional
Zachary Magliaro
ASU Sustainability student
Del Jacobo
Anthony Bailey
Kylie Turner
Ben Cady
Wellington Financial
Marcus Faulstone
Sean McBride
Rasoul Mohammadi
AI engineer
Alex Kaplan
Pieter Louw
Audit trainee, Moore Pretoria, South Africa
Alex McKenzie
Data Scientist, on sabbatical
William Justin Wilson
Fabian Scholl
Writer
Tara Steele
Writer
Christopher Smith
Game Developer
Mac Burnie Rodolphe
Graphist
Max Winga
UIUC Physics Graduate, AI Safety Researcher
Anne Beccaris
Member of PauseAI
Océane Beccaris
Student, member of PauseAI
Stephen Casper
PhD Student, MIT
Terry Faber
Economist, IBISWorld
Dion Bridger
Wendy A.
Severin Field
PhD Candidate
Alexandra Santos
Startup ecosystem operator
Liron Shapira
PauseAI activist
Piotr Zaborszczyk
Conrad Barski
Medical Doctor, retired
Léo Dana
Student
Florent Berthet
EffiSciences
Johan Hanson
Sjoerd Meijer
Student
Paolo Massimo Veneziani
Web Developer
Tess Hegarty
PhD student, Stanford University
Joseph Miller
PauseAI
Ori Nagel
Nathan Metzger
Co-founder, AI-Plans.com
Patricio Vercesi
Coleman Snell
Global Risk Researcher and President of Cornell Effective Altruism
Joshua Eby
Stockholm University
Jeffrey C Choate
Holly Elmore
Executive Director, PauseAI US
Tyler Johnston
Executive Director, The Midas Project

Add your name to the letter

Submissions will be manually approved and listed on this page soon.

Since its founding, OpenAI has drifted far from its nonprofit origin.

OpenAI was founded in 2015 as a nonprofit organization. This structure was chosen to ensure that the development of advanced AI technology served the public interest, rather than being driven by the private financial interests of any individual.


By 2019, OpenAI realized they didn't have enough money to continue scaling their AI models. In a last-ditch effort to raise more money, they transitioned to operating a for-profit company, OpenAI Global LLC. While this new company is controlled by the nonprofit, its ownership is held largely by employees, VC funds, and mega-corporations like Microsoft.


Now that OpenAI's valuation is approaching hundreds of billions of dollars, it's obvious that financial incentives have entered the picture. OpenAI, Inc. (the nonprofit, which still exists today) has failed to insulate its activities from those incentives.

Image Source: OpenAI

The nonprofit has lost control of OpenAI in practice (even if not on paper)

In 2023, the board of the OpenAI nonprofit decided to replace Sam Altman as CEO of the for-profit company. They made this decision due to concerns that Altman had been lying to the board, hindering its ability to exercise oversight of OpenAI. The decision to remove Altman was well-intentioned and within the board's discretion, as later affirmed by an independent review from the law firm WilmerHale.


But soon after the decision was announced, interest groups with financial stake in OpenAI Global, LLC (the for-profit) began to push back. Microsoft, as well as a number of employees within OpenAI, made a clear demand to the nonprofit board: reinstate Altman as CEO, or they would leave OpenAI and join Microsoft to continue their work there.


In the end, the board had to acquiesce. It's clear that their decision was constrained by the financial interests of the company. The nonprofit was supposed to retain the ability to fire the CEO, at any time and for any reason, so long as it was pursuant to the mission of the organization. Sam Altman himself bragged about this fact to gain the trust of reporters and the public.


But the events of fall 2023 made it clear: in practice, the nonprofit board has lost control of OpenAI.

Sam Altman has faced many accusations of manipulation and concerning behavior.

Image Source: TechCrunch

The board's concerns in the fall of 2023 weren't the first time that Altman faced professional consequences for his actions, and his reputation in the Bay Area reflects his allegedly aggressive and self-interested approach to business.


At his first startup, Loopt, senior employees got together twice to urge the board to fire him for "deceptive and chaotic behavior," according to the Wall Street Journal. This echoes the accusations of manipulation and of fostering toxic work environments raised by fellow OpenAI execs Ilya Sutskever and Mira Murati, as well as former employee Geoffrey Irving.


The Washington Post also reports concerns that he was privately investing in startups he discovered through his former job at Y Combinator, an accusation of double-dipping for personal enrichment that echoes his alleged current practice of personally investing in energy and computing infrastructure critical to the development of AI.

OpenAI's safety staff have a history of jumping ship, or getting the boot.

OpenAI claims to place a heavy emphasis on safety, but the employees it has hired to work on AI safety show a pattern of dramatic exits from the company.


In 2020, a handful of senior OpenAI employees left the company to found a competing public benefit corporation named Anthropic, a departure spurred by disagreements with OpenAI over safety.


More recently, the Superalignment team, announced in 2023 and led by Ilya Sutskever and Jan Leike, became the center of safety efforts at OpenAI. Now, less than a year since the team launched, both leaders have resigned from the company, along with a handful of important safety staff who also departed for various reasons, including "losing confidence that [OpenAI] would behave responsibly" as AI models continue to advance.


Sam Altman even tried to oust a former board member due to her academic work discussing the safety practices of an OpenAI competitor, according to the Wall Street Journal.

OpenAI's current models don't meet the company's own safety standards.

Perhaps all of the above could be excused if OpenAI were actually delivering on its safety commitments. If its products lived up to the OpenAI Charter and to recent commitments like the Preparedness Framework and Model Spec, that would be evidence that OpenAI does take safety seriously.


Unfortunately, this isn't the case. Every text-based model that OpenAI has publicly released can either be misused by default or jailbroken with relatively trivial effort to make it say and do unethical things. Even the training of these models may have crossed ethical lines, including cutting corners when it comes to data collection.


OpenAI's most recent model, GPT-4o, is the first major public release since they adopted their Preparedness Framework. However, they've already broken the spirit of the safety commitments they made only months earlier by neglecting to publicly release a safety evaluation scorecard alongside the model.

OpenAI has repeatedly claimed that governments need to regulate AI companies like it. However, last summer, Time reported that OpenAI had also been quietly lobbying to water down the E.U. AI Act, seeking to exempt models like the ones it was creating from a "high-risk" category that would have imposed regulatory burdens on the company.

And things just keep getting worse…

(Added May 21, 2024)

It would appear that OpenAI knows it has something to hide. Recent reporting from Vox revealed that OpenAI employees are met with an unwelcome surprise when they choose to leave the company: an implicit ultimatum asking them to choose between signing an extremely restrictive lifetime non-disparagement agreement or potentially losing all of their vested equity in the company.

Even acknowledging that this agreement exists, according to Vox, counts as a violation of the agreement. In other words, departing OpenAI employees must choose between retaining the equity that served as a central component of their compensation, or never speaking badly about the company again.

So why is OpenAI so interested in buying the silence of former employees? Maybe it's further failures of the company to live up to its own promises. According to Fortune, OpenAI failed to deliver on its promise to provide 20% of its computing resources (as of 2023) to a team focused on safety, one of the key factors that led to that team's dramatic exit from the company.

In March 2024, OpenAI also said that their usage policies regarding synthetic voices "prohibit the impersonation of another individual or organization without consent or legal right." However, they soon took down one of their main voices, known as "Sky," due to allegations from Scarlett Johansson that the voice was intended to impersonate her. This theory isn't baseless, either: OpenAI had previously attempted to legally license her voice. Even after she declined, a glib tweet from Sam Altman suggests that the products featuring her voice were intended to mimic the titular character from Her, an AI-focused film she starred in.

Source: Sam Altman

Cover image credit: World Economic Forum

© Coalition for AI Accountability 2024