Spot the Error: Using AI for Critical Reading & Reasoning

Safieh Moghaddam teaches undergraduate linguistics courses, including large-enrolment first- and second-year courses as well as medium-to-small-sized C- and D-level courses. Her teaching is grounded in active learning, inclusive pedagogy, and scaffolded skill-building. She addresses GenAI differently across contexts: in large-enrolment courses, she introduces it cautiously, with clear boundaries and structured support to protect learning goals and academic integrity; in third- and fourth-year courses, she integrates GenAI more intentionally as a tool for advanced research processes, revision, and critical evaluation, while keeping students responsible for original ideas, arguments, and evidence. 

Objectives

As part of the third-year course, LINC10: Linguistic Analysis and Argumentation (Fall 2025), this activity aimed to strengthen students’ critical reading and reasoning skills by engaging them with AI-generated text. Students were tasked with identifying inaccuracies, vague claims, and logical gaps in an AI-generated answer, explaining why these issues were problematic, and proposing improvements using concepts and evidence from the course. The activity encouraged students to return to core definitions and arguments, practice prioritizing significant errors, and develop information literacy by recognizing that AI outputs are not inherently authoritative. Ultimately, the goal was to help students question, verify, and improve texts rather than accept them at face value, fostering critical AI literacy alongside disciplinary knowledge. 

Learn more in Professor Moghaddam’s detailed assignment guide.

Process

The activity was designed as a structured, interactive exercise that guides students through analyzing and improving AI-generated text while applying course concepts. The steps included: 

Step 1: Prepare AI-Generated Answer

  • Instructor provides a course-specific prompt to an AI tool (e.g., ChatGPT) and selects an imperfect answer containing a mix of correct and incorrect material (e.g., misused terminology, vague claims, or logical gaps).
  • The AI response is left unedited to preserve errors for analysis. 

Step 2: Distribute Materials

  • Students receive the original prompt and the AI-generated answer in class or via Quercus.
  • The instructor reminds students that AI is a tool, not an authoritative source, and frames the task as “argumentation detective” work. 

Step 3: Group Activity: Spot, Discuss, and Prioritize Errors (15–20 minutes)

  • Individual scan (3–5 minutes): Students highlight errors or weaknesses and jot notes on why they are problematic. 
  • Small-group discussion (10–15 minutes): Students compare flagged issues, classify error types (e.g., factual error, logical gap, overconfident claim), and prioritize the top 3–5 most significant problems. 

Step 4: Whole-Class Debrief (10–15 minutes)

  • Groups share one major error and their improved version.
  • Class discusses conceptual vs. stylistic issues, the risk of being misled by fluent but incorrect text, and strategies for critical evaluation.

Step 5: Post-Activity Reflection (200–300 words)

  • Students reflect on one interesting error and how their trust in AI changed, connect the activity to a course concept, and identify a future strategy for reading AI-generated text.

Future-Focused Skill Development

This activity supports future-ready learning by aligning with principles from the University of Calgary’s STRIVE model. For instance, it supports Student-Centred Learning by positioning students as active “argumentation detectives,” giving them autonomy to question, verify, and improve AI-generated text while applying course concepts. It also emphasizes Transparency, as students are guided to clearly understand the role of AI in the activity and learn to critically evaluate its outputs rather than accept them as authoritative. Finally, it promotes Responsibility by requiring students to identify inaccuracies and justify corrections using evidence and disciplinary knowledge, fostering ethical engagement with technology. Together, these principles help students develop critical AI literacy, reasoning skills, and collaborative problem-solving abilities that prepare them for academic integrity and informed decision-making in future academic and professional contexts.

Student Feedback

Professor Moghaddam shares: “Students responded positively to this activity. In post-activity reflections, some students noted that the task boosted their confidence in evaluating AI-generated content and spotting weak reasoning. Others emphasized that the activity reinforced course concepts (definitions, evidence standards, and argument structure) while showing why verification is important when using GenAI; one student described it as ‘eye-opening and useful for building both argumentation skills and AI literacy.’”
