Anthropic, an Amazon-backed AI startup, is seeking public input as part of its efforts to create guidelines for governing its AI models.
Anthropic commissioned a poll of 1,000 Americans, asking what values and guardrails they wanted powerful AI models to reflect. The results were then compared with an existing set of principles that Anthropic staff had developed and already applied to its Claude chatbot.
The survey findings were curated into 75 guiding principles and compared with the 58 principles that Anthropic had previously developed and applied to Claude. Although the two sets overlapped by only about 50 per cent, Anthropic found the public “constitution” to be “less biased” across nine social categories, including age, gender, nationality, and religion. The public also placed greater emphasis on impartiality and on providing objective information that reflects all sides of a situation, and stressed that AI responses should be easy to understand.
This suggests that public input can be a valuable way to align AI models more closely with the values of the people who will use them.
The sources for this piece include an article in Axios.