
Daniel E. Ho, a law professor at Stanford and co-author of the paper, said that legal tech vendors have made bold claims about the effectiveness of Retrieval-Augmented Generation (RAG) systems in preventing hallucinations, with some asserting that their products are "hallucination-free." However, he noted that the study showed legal RAG has not solved the hallucination problem, contradicting these marketing claims.


The study's findings highlight the need for increased transparency and rigorous benchmarking of legal AI technologies. The authors found that even the most advanced legal AI tools still hallucinate at an alarmingly high rate, and they argue that the closed nature of these systems, with little disclosure about their design or performance, makes it difficult for lawyers to assess when it is safe to rely on them. The study calls for public benchmarking and rigorous evaluation of AI tools in law, emphasizing that transparency and access to evaluations are essential for the responsible adoption of these technologies in legal practice.