Mira Murati, the chief technology officer at ChatGPT’s creator OpenAI, has stated that the AI tool should be regulated because it could be used by “bad actors.”
She told Time magazine, “[AI] can be misused, or it can be used by bad actors. So, then there are questions about how you govern the use of this technology globally. How do you govern the use of AI in a way that’s aligned with human values?” She added that the company didn’t expect its “child” to be met with such enthusiasm when it was released.
Murati also believes it is not too late for various stakeholders to get involved, and that some regulation may be required. She emphasized, however, that the company will need all the help it can get, including from regulators, governments, and everyone else. “It’s not too early,” she said, to regulate the technology.
Murati added that the bot “may make up facts” as it writes sentences, calling that a “core challenge.” According to her, ChatGPT generates responses by predicting the logical next word in a sentence — but what is logical to the bot may not always be accurate. Users can address this by continuing to interact with the bot and challenging responses they believe are incorrect, she said.
When asked whether companies such as OpenAI or governments should be in charge of regulating the tool, Murati responded, “I don’t know,” adding, “It’s important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible.”
The source for this piece is an article in Time.