
New software helps users understand where large language models get their information and whether their sources are trustworthy


University of Waterloo researchers have developed a new tool that reveals where large language models (LLMs) like ChatGPT get their information and evaluates whether those sources are trustworthy. Nicknamed “RAGE,” the tool uses a strategy called “retrieval-augmented generation” to understand and assess the context behind an LLM’s answers to a given prompt.

UWaterloo computer science PhD student and lead author Joel Rorseth explained that this strategy illuminates how providing LLMs with different sources can lead to different answers.

“We’re in a place right now where innovation has outpaced regulation,” said Rorseth. “People are using these technologies without understanding their potential risks, so we need to make sure these products are safe, trustworthy, and reliable.”
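In broad strokes, retrieval-augmented generation supplies an LLM with retrieved source documents alongside the user’s prompt, so the answer depends on which sources are retrieved. The following is a minimal sketch of that idea; the documents, the word-overlap retriever, and the prompt format are illustrative assumptions, not RAGE’s actual implementation:

```python
# Illustrative sketch of retrieval-augmented generation (RAG).
# The documents, scoring, and prompt format are stand-ins for
# demonstration only -- not the RAGE tool's implementation.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, sources):
    """Prepend the retrieved sources as context for the LLM."""
    context = "\n".join(f"Source: {s}" for s in sources)
    return f"{context}\n\nQuestion: {query}"

documents = [
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
    "Paris is the capital of France.",
]
sources = retrieve("Where is the Eiffel Tower?", documents)
prompt = build_prompt("Where is the Eiffel Tower?", sources)
```

Because the final prompt embeds whichever sources the retriever selects, swapping in a different document collection changes the context the model sees, and often its answer. This is the behavior RAGE is designed to surface and evaluate.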
