- An OpenAI cofounder admitted the startup “made a mistake” in response to Musk’s criticism of ChatGPT.
- Musk has repeatedly criticized the chatbot for its “woke” responses.
- As more users test out AI chatbots, companies have begun to roll out various constraints on them.
In an interview with The Information, OpenAI cofounder and president Greg Brockman admitted the startup “made a mistake” in response to Elon Musk’s criticism that ChatGPT is too “woke.”
Musk, an OpenAI cofounder who has since severed ties with the company, has routinely criticized OpenAI for implementing safeguards that prevent the chatbot from producing responses that could be deemed offensive.
“We made a mistake: The system we implemented did not reflect the values we intended to be in there,” Brockman told The Information. “And I think we were not fast enough to address that. And so I think that’s a legitimate criticism of us.”
ChatGPT has been criticized by users who claim it generates answers with political biases.
Last month, screenshots of a ChatGPT conversation circulated on Twitter showing the chatbot declining to generate a positive poem about Donald Trump, stating it wasn’t programmed to create “partisan, biased or political” content. But when fed the same prompt with Joe Biden’s name in place of Trump’s, the chatbot wrote a glowing poem. Musk called the chatbot’s refusal to generate a poem about Trump “a serious concern.”
Musk had previously criticized the technology, saying that “the danger of training AI to be woke – in other words, lie – is deadly.”
As more users flock to AI chatbots like ChatGPT and Bing’s recently launched chatbot, which is powered by OpenAI technology, their limits and flaws have been revealed. In response, companies have added guardrails to the technology.
In the month since Microsoft released its AI-powered Bing chatbot, the tech giant has set conversation limits, capping users at 50 questions per day and five per session. It has since loosened those limits.
ChatGPT is also a work in progress, and given Brockman’s comments, the platform seems likely to continue evolving.
“Our goal is not to have an AI that is biased in any particular direction,” Brockman told The Information. “We want the default personality of OpenAI to be one that treats all sides equally. Exactly what that means is hard to operationalize, and I think we’re not quite there.”