Markers found AI-generated submissions difficult to distinguish from those of human students: in a study conducted by the University of Reading, 94% of AI-generated submissions went undetected. The AI submissions also consistently earned higher grades than real students' work, except in third-year exams requiring more abstract reasoning.
Tim Mousel suggests several alternatives to traditional essay assessments, including project-based assessments, problem-solving scenarios, oral presentations or debates, collaborative group projects, reflective journals or portfolios, experiential learning assignments, peer teaching or tutoring, creation of original content, interactive simulations or role-playing, and open-ended research projects. These methods aim to foster critical thinking, creativity, collaboration, and communication, skills that AI replicates far less easily than essay writing.
Dr. Jennifer Chang Wathall advocates a complete overhaul of assessment practices, emphasizing the process of learning and growth over time. In her view, assessment should celebrate individual strengths and talents and operate as a continuous process measured against qualitative criteria, shifting the focus from single high-stakes exams to a more holistic evaluation of each student's learning journey.