Teaching Critical Thinking About AI: ELIZA as a Lens for Students
Use ELIZA chats to teach students to spot AI hype, evaluate claims, and build AI literacy after CES 2026's flood of 'AI' products.
Hook: Stop the Hype — Teach Students to See Past the Shiny CES 2026 Label
Teachers and students are drowning in headlines: “AI-powered toothbrush,” “AI fridge,” “miracle therapy bot.” After CES 2026 showcased wave after wave of AI-labeled products, classrooms need curriculum that builds skepticism and practical understanding, not fear. If your students can’t distinguish between genuine algorithmic innovation and marketing spin, they’ll be unprepared for homework, careers, and civic life.
Module Snapshot: Using ELIZA to Teach Critical Thinking About AI
This classroom module uses ELIZA — the 1960s pattern-matching chatbot — as a lens to uncover how conversational AIs work, where they break, and how to evaluate AI claims. ELIZA's simple design makes hidden mechanics visible: when students chat with it and compare its responses to a modern neural chatbot's, they practice AI literacy and sharpen critical thinking.
Why ELIZA in 2026?
- ELIZA exposes the limits of surface-level conversational behavior: no real understanding, only pattern rules.
- After CES 2026’s flood of “AI-washed” products, ELIZA provides a compact experiment to distinguish marketing from engineering.
- It supports standards-aligned learning: computational thinking, media literacy, and civic reasoning.
Learning Objectives
- Students will explain the basic mechanism behind ELIZA and contrast it with modern neural chatbots.
- Students will identify at least three limitations of conversational AI (e.g., lack of grounding, hallucinations, privacy risks).
- Students will apply an evaluation checklist to assess AI claims in marketing or news (source, method, data, evidence, limitations).
- Students will produce a short portfolio: chat logs, analysis, and a public service announcement (PSA) about AI claims. Use the accompanying budget vlogging kit guide if you plan to record student PSAs.
Module Duration & Structure
This module can stretch from a single 90-minute class to a three-week unit. Below is a flexible 4-lesson plan suitable for middle and high school.
Lesson 1 — Discovery (45–60 minutes)
- Hook: Show 3 CES 2026 product blurbs that attach "AI" to everyday items. Ask: what do we actually know?
- Introduce ELIZA: short history (created by Joseph Weizenbaum at MIT in the mid-1960s) and demo via an online ELIZA web app (or teacher-run emulator).
- Quick paired activity: students ask ELIZA five questions each and save the transcripts.
- Exit ticket: one sentence—how did ELIZA respond like a human, and how did it not?
Lesson 2 — Mechanisms & Comparison (60 minutes)
- Mini-lecture (10 min): ELIZA's pattern-matching and template substitution vs. modern language models' statistical prediction over tokens.
- Group activity: analyze transcripts. Mark moments that show rule-following, repetition, or evasiveness.
- Compare with a modern chatbot transcript (teacher-provided). Identify differences: fluency, factual errors, hallucinations, specificity.
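To make the mini-lecture concrete, here is a minimal sketch of ELIZA-style pattern matching, pronoun reflection, and template substitution. The rules below are illustrative inventions for classroom use, not the original DOCTOR script, which was much richer:

```python
import re

# A few illustrative ELIZA-style rules: regex pattern -> response template.
# "{0}" is filled with the captured fragment after pronoun reflection.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi'?m (.*)", re.I), "Why are you {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bi need (.*)", re.I), "What would it mean to you if you got {0}?"),
]

# First-person words are swapped so the fragment reads back naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    """Swap pronouns: 'my grades' -> 'your grades'."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_reply(text: str) -> str:
    """Return the first matching rule's filled template, else a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please tell me more."

print(eliza_reply("I'm worried about my grades."))
# -> Why are you worried about your grades?
```

Students can trace exactly why the bot's reply sounds empathetic: it is the input sentence, pronoun-swapped and wrapped in a question template. That transparency is the contrast point with a neural model, whose reply cannot be traced to a single rule.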
Lesson 3 — Evaluation Framework & Application (60 minutes)
- Introduce the Evaluation Checklist (below).
- Students pick a CES 2026 product or an AI news headline and apply the checklist to evaluate the claim.
- Class discussion: Which claims were overblown? Which were reasonable? What evidence is missing?
Lesson 4 — Create & Communicate (45–90 minutes)
- Students create a 60–90 second PSA, infographic, or short essay summarizing how to spot AI hype and what questions to ask.
- Optional: Host a gallery walk or publish to a class blog. If you run this locally or offline, use a local emulator to keep transcripts on-device.
Evaluation Checklist: What To Ask When an AI Claim Appears
Teach students a compact checklist they can apply in minutes. Use the acronym MODEL: Method, Outcomes, Data, Evidence, Limitations.
- Method: How does the system work? Is it template-based, rule-based, or model-based? Who built it?
- Outcomes: What measurable result is claimed? Is it user-facing (saves time) or behind-the-scenes (better caching)?
- Data: What data does it use? Is the training data described? Is there risk of bias?
- Evidence: Are there independent tests, peer reviews, or benchmarks shared? Or only testimonials and marketing copy? (See guidance on evaluating AI evidence.)
- Limitations: What failures are disclosed? How is privacy handled? Does it require post-editing (the "clean-up" problem)?
“After CES 2026, the dominant impression was not technical depth but marketing spin — AI became a sticker for products that sometimes didn’t need it.”
Use paraphrases like the line above to prompt debate. Students should connect ELIZA’s transparent mechanics to contemporary marketing tactics.
Practical Classroom Resources
- ELIZA web apps: free online ELIZA emulators let students chat without installs. (Search "ELIZA web chat".)
- Python option: simple ELIZA implementations and pattern files are great for computational-thinking classes.
- Modern chatbot for comparison: use a classroom-safe API or closed-off demo to produce a short transcript; emphasize teacher oversight, and prefer local or offline setups where available.
- Transcript annotation template: provide printed sheets or a Google Doc for marking response types (reflection, deflection, repetition, hallucination).
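If students annotate transcripts digitally instead of on paper, a few lines of Python can tally the response types from a tagged log. The "TAG: utterance" line format here is an assumption, but the tag vocabulary mirrors the annotation template (reflection, deflection, repetition, hallucination):

```python
from collections import Counter

def tally_annotations(lines):
    """Count annotation tags from lines shaped like 'TAG: utterance'."""
    counts = Counter()
    for line in lines:
        tag, _, _ = line.partition(":")
        counts[tag.strip().lower()] += 1
    return counts

log = [
    "reflection: Why are you worried about your grades?",
    "deflection: What would it mean to you if you did?",
    "reflection: Why do you say you are stressed?",
]
print(tally_annotations(log))
# -> Counter({'reflection': 2, 'deflection': 1})
```

A quick tally like this lets groups compare, say, how often ELIZA deflects versus how often a modern chatbot hallucinates, turning anecdotes into small datasets.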
Sample Student Activity: Detecting the Trick
Give students this short task: chat with ELIZA for ten minutes, then identify three places where ELIZA used tricks to appear to understand. Below is a teacher-provided example.
Sample Transcript Snippet (teacher-provided)
Student: "I'm worried about my grades."
ELIZA: "Why are you worried about your grades?"
Student: "I feel like I'm not getting the help I need."
ELIZA: "What would it mean to you if you did get the help you need?"
Analysis
- This is pattern reflection: ELIZA transforms first-person statements into a question, creating the illusion of empathy.
- There is no grounding or concrete advice; ELIZA avoids specific facts and offers prompts instead.
- Students should mark these as "surface understanding"—it sounds human, but there is no model of the student’s situation behind it.
Assessment Rubric (Quick)
Use a 12-point rubric across three criteria: Understanding (4 pts), Application (4 pts), Communication (4 pts).
- Understanding: Can the student explain ELIZA’s mechanism and name at least two modern AI differences? (0–4)
- Application: Can the student apply the MODEL checklist to a real product or headline and justify their evaluation? (0–4)
- Communication: Does the PSA/essay clearly persuade peers and include actionable tips? (0–4)
Common Student Misconceptions & How to Address Them
- Misconception: Fluent responses equal understanding. Response: Compare ELIZA’s reflection with a modern model’s confident but false claim; both can be fluent without truth.
- Misconception: All AI is the same. Response: Teach simple taxonomy — rules/templates, statistical language models, retrieval-augmented systems, and multi-modal systems.
- Misconception: AI claims are always backed by data. Response: Use the MODEL checklist to look for evidence and independent evaluations.
Tie to 2026 Trends: Why This Module Matters Now
In late 2025 and early 2026 we saw two parallel trends: a surge of consumer-facing AI features (highlighted at CES 2026) and growing conversations about cleanup costs and reliability in enterprise deployments. Teachers should leverage both trends.
- CES 2026 illustrated AI washing: companies labeling mundane features as AI to capture attention. Students benefit from practicing skepticism.
- Industry conversations in 2025–2026 about "clean-up after AI" show that productivity gains often require verification and human oversight—an important class discussion point.
- Regulatory momentum and news coverage through 2025 increased public scrutiny; classrooms can convert headlines into investigative assignments.
Differentiation and Accessibility
Make this module inclusive:
- Lower grades: focus on chat and simple observation tasks; produce posters with four questions to ask about any "AI" product.
- Upper grades: include brief coding labs using a Python ELIZA script or a unit on prompt design versus model capabilities.
- ELL & special education: provide sentence starters, visual annotation tools, and small-group support for transcript analysis.
Extensions & Cross-Curricular Connections
- English/ELA: analyze rhetorical strategies in AI product copy; write rebuttal ads.
- Social Studies/Civics: investigate how AI claims affect public policy, privacy, and consumer protection debates in 2026 (see privacy-focused guidance).
- Computer Science: rebuild a tiny ELIZA in Scratch or Python to learn pattern-matching and stateful interaction.
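For the Computer Science extension, the "stateful interaction" idea can be sketched in a tiny class: when no rule matches, the bot falls back on a remembered topic from earlier in the conversation, a simplified version of ELIZA's memory mechanism. The rules and phrasings here are hypothetical classroom examples:

```python
import re

class TinyEliza:
    """A toy ELIZA: a couple of pattern rules plus a memory of past topics."""

    RULES = [
        (re.compile(r"\bi need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
    ]

    def __init__(self):
        self.memory = []  # topics captured earlier, reused when nothing matches

    def reply(self, text: str) -> str:
        for pattern, template in self.RULES:
            match = pattern.search(text)
            if match:
                topic = match.group(1).rstrip(".!?")
                if topic:
                    self.memory.append(topic)  # remember for later
                return template.format(topic)
        if self.memory:
            # No rule fired: reach back to an earlier topic (the "state").
            return "Earlier you mentioned {0}. Shall we go back to that?".format(
                self.memory.pop(0))
        return "Please go on."

bot = TinyEliza()
print(bot.reply("I need more study time."))  # rule fires, topic remembered
print(bot.reply("The cafeteria was loud."))  # no rule: memory kicks in
```

Rebuilding this in Scratch works the same way: a list of trigger phrases, a list variable for remembered topics, and a fallback branch that pulls from the list.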
Practical Challenges & Teacher Tips
- Safety & Privacy: If you use live commercial chatbots, require teacher oversight, and do not record sensitive student data. Prefer on-device or local storage approaches when possible.
- Time: For limited schedules, condense to two lessons: discovery + evaluation and a one-hour project for PSAs.
- Technical setup: No-code ELIZA web tools are easiest. If your school blocks external sites, prepare offline transcripts and a local emulator in advance.
Actionable Takeaways for Teachers (Quick Checklist)
- Bring ELIZA into class—show students how pattern rules create the illusion of understanding.
- Compare ELIZA with at least one modern chatbot transcript to highlight different failure modes.
- Teach and use the MODEL checklist on real CES 2026 headlines or classroom-chosen product blurbs.
- Assess with the 12-point rubric and have students produce a PSA to demonstrate their learning.
Case Study: Middle School Pilot (What Worked)
In a January 2026 pilot, a middle school class used ELIZA and then examined three CES 2026 product pages claiming AI. Students quickly learned that the most compelling marketing claims lacked concrete evidence. One group produced a short PSA titled "Ask Before You Buy: 5 Questions for AI Products," which the school posted on its website and used during a parent night. Teachers reported improved student ability to spot overclaiming in ads and more nuanced questions in science class.
Final Reflections: Building Durable AI Literacy
ELIZA is more than a historical curiosity. In 2026, it’s a pedagogical tool that makes invisible mechanisms visible and helps students practice skepticism. After CES 2026’s tidal wave of AI labels, students who learn to ask MODEL-style questions will be better prepared to sift marketing from method, noise from evidence, and hype from help.
Call to Action
Ready to try this in your classroom? Download the free teacher packet (lesson plans, transcripts, rubric, and PSA templates) and run a pilot this term. Invite students to bring one example of an "AI" product from the news for class evaluation. If you try the module, share student PSAs and anonymized outcomes with our community so we can refine the module together.