ChatGPT proved highly effective at producing undetected exam answers. In a real-world blind test at the University of Reading, UK, researchers submitted AI-generated answers for undergraduate psychology modules; the submissions went undetected in 94% of cases and, on average, earned higher grades than genuine student work. The findings highlight how difficult AI-generated content is to detect and raise concerns about academic integrity across universities and other higher education institutions.
Meanwhile, a UNESCO survey of more than 450 schools and universities found that fewer than 10% had developed institutional policies or formal guidance on the use of generative AI applications [1].