Wikipedia weighs AI to expand knowledge offerings

Wikipedia is considering using artificial intelligence (AI) to broaden its information offerings. Some volunteers, however, are concerned about AI’s impact on the accuracy of the site’s content and about the potential for bias.

A recent community call revealed that the use of large language models, such as OpenAI’s ChatGPT, to produce and summarize content has divided the Wikipedia community. Although AI generators can produce believable, human-like text, concerns have been raised about the accuracy of the content they generate.

Mariana Fossatti, a coordinator for Whose Knowledge?, a global movement focused on online access to knowledge, worries that large language models and Wikipedia have created a feedback loop that reinforces biases. As Wikipedia investigates the use of AI, a draft AI policy includes a point explicitly requiring in-text attribution for AI-generated content.

While some volunteers are wary of expanding AI’s role on the site, the Wikimedia Foundation is exploring how AI can help close knowledge gaps and increase access and participation. Human involvement, according to the organization, remains critical to the site’s ecosystem, and AI works best as a supplement to human editors.

The sources for this piece include an article in Vice.

IT World Canada Staff