University exams are failing to detect answers generated by OpenAI's ChatGPT. In a study at the University of Reading, 94% of AI-generated submissions went undetected, and the AI answers scored higher on average than those written by real students, raising concerns that academia is vulnerable to AI-enabled cheating.
Key Points
94% of AI-generated exam submissions went undetected in a study at the University of Reading.
AI-generated answers scored higher on average than those written by real students.
The findings raise concerns that academic assessment is vulnerable to undetected AI cheating.
Pros
Demonstrates that current AI can produce exam answers scoring above the average student submission.
Cons
Exams that fail to detect AI-generated work leave the door open to widespread academic dishonesty.