We’ve all heard them—the predictable responses that make ethical discussions feel like fill-in-the-blank exercises.

“Victor should have been more responsible.”

“He should have thought things through.”

“He was too ambitious.”

But real ethical dilemmas don’t come with tidy answers. And in a world where AI is evolving faster than we can regulate it, our students are facing decisions just as complicated as Victor’s. 

The question is: Are we helping them think critically about those choices?

The Problem with Traditional Ethics Discussions

We’ve all been there. You pose a thoughtful question about a text, and students respond with what they think you want to hear.

These surface-level responses aren’t wrong, but they miss something crucial: the complex reality of making ethical choices in the moment. 

When we tacitly encourage this kind of surface-level thinking, we shy away from our core purpose of helping our students become strong critical thinkers. Life is messy, and “the answers” are rarely clear-cut.

Our students are making these messy choices every day with AI, often without realizing the issues are not cut and dried.

Why “Don’t Use AI” Is the Wrong Conversation

When people talk about AI in education, I often hear one reaction: Just don’t use it.

But let’s be real—our students are using it. And they’re making ethical decisions about it every day, whether they realize it or not. 

I get it. AI tools can short-circuit deep thinking, make cheating easier, and leave us questioning what learning even is anymore. But taking a hard line against AI is a losing battle. The real question isn’t “Should they use AI?” but rather, “Do they understand the responsibility that comes with using it?”

That’s exactly the shift I made in my classroom when we discussed Frankenstein—and it changed everything.

Creating Real Interdependence

The key to deeper discussions is creating genuine interdependence – where students actually need each other’s perspectives to build understanding. Here’s what worked in my classroom:

1. Start with Their Reality

Instead of beginning with Victor’s choices, we started with their own experiences using AI. What lines had they drawn? What choices were they wrestling with? Suddenly, they had skin in the game.

2. Build Pattern Recognition

We created a “red flag tracking system” – identifying warning signs in Victor’s journey that might parallel modern technology use. Students became invested in spotting patterns that could help them navigate their own choices.

3. Create Knowledge Gaps

Rather than having everyone analyze the same elements, we divided the class into focus groups, with each group looking at a different aspect of the novel.

Each group developed expertise in their area, creating natural points where they needed insights from other groups to complete their understanding.

Making It Work in Your Classroom

Here’s how to create these deeper discussions:

1. Connect to Student Experience

Before diving into Victor’s choices, have students map their own technology ethics decisions. What boundaries have they set? What temptations do they face?

2. Create Real Stakes

Help students see how the ethical questions in Frankenstein directly connect to choices they’re making now. The discussion becomes more urgent when it’s personal.

3. Build Genuine Interdependence

Structure discussions so students actually need each other’s insights. This might mean:

  • Assigning different research focuses
  • Creating expert groups
  • Requiring synthesis across perspectives

4. Allow for Complexity

Move beyond simple right/wrong dichotomies. Help students explore the nuanced reality of ethical decision-making in both Victor’s time and their own.

The Real Transformation

The most powerful moment came when a usually quiet student raised her hand and said, “I always thought Victor was just arrogant and stupid. But now I get it. Sometimes you don’t realize you’ve crossed a line until you’re already past it.”

That’s when I knew the discussion had transcended mere literary analysis. Students weren’t just talking about a book anymore – they were exploring their own relationship with power, responsibility, and technological ethics.

Because ultimately, that’s what makes Frankenstein so relevant today. It’s not just a story about a university student who created a monster. Not really. It’s about the choices we all face when given access to powerful (potentially dangerous) tools, and how we navigate the thin line between innovation and responsibility.

These conversations don’t have easy answers, and that’s exactly the point. Ethics—whether in Frankenstein or in our own use of AI—is rarely a simple matter of good versus bad. The real challenge is learning to navigate the gray areas. 

