Stanford study reveals trust issues with generative AI search engines

Stanford University’s Human-Centered AI research group has questioned the veracity of information delivered by generative AI search engines. The team, led by Nelson Liu, examined the fluency, perceived utility, citation recall, and citation precision of popular engines across a wide range of disciplines.

While the responses were typically fluent and appeared informative, the research found that they frequently contained unsupported claims and inaccurate citations, with only about half of the generated sentences fully supported by citations. The researchers also identified an inverse correlation between citation recall and precision on the one hand and fluency and perceived utility on the other, raising concerns that users could be misled by the most convincing-sounding responses.

The lack of citations in search engine results is concerning because it makes it difficult for users to distinguish accurate information from false information. The research team expressed concern about the low reliability of generative search engines and hopes its findings will drive the development of more trustworthy engines and raise awareness, among both researchers and consumers, of the limits of existing commercial systems.

Chatbots such as ChatGPT and Bing Chat have also been found to present false information as fact. ExtremeTech backs up this claim, noting that it is impossible to distinguish fact from fiction without citations, which most chatbot results lack, particularly in the few seconds users spend on a search engine’s results page.

The sources for this piece include an article in TechXplore.