Wikipedia is considering using artificial intelligence (AI) to broaden its information offerings. Some volunteers, however, are concerned about AI's impact on the site's content and its potential to introduce biases.
A recent community call revealed that the use of large language models, such as OpenAI's ChatGPT, to produce and summarize content has caused a schism in the Wikipedia community. Although AI generators can produce believable, human-like text, concerns have been raised about the accuracy of the content they generate.
Mariana Fossatti, a coordinator for Whose Knowledge?, a global movement focused on online access to knowledge, worries that large language models and Wikipedia have created a feedback loop that reinforces biases. As Wikipedia investigates the use of AI, a draft AI policy includes a point explicitly requiring in-text attribution for AI-generated content.
While some volunteers are wary of expanding AI's role on the site, the Wikimedia Foundation is exploring how AI can help close knowledge gaps and increase access and participation. According to the organization, human interaction remains critical to the site's ecosystem, and AI works best as a supplement to human editors.
The sources for this piece include an article in Vice.