EthiQuest: Generative AI and Us

A card-based activity in which teachers, learners, or parents select cards that prompt discussion and debate about generative AI ethics. All scenarios are written to be relevant to 10- to 14-year-olds (e.g., “Alex uses an AI program to write a history essay and gets an A. Later, Alex feels guilty when the teacher praises the essay in front of the class. Should Alex confess to using AI?”; “Taylor is lonely and starts chatting with an AI friend who is always available. Taylor begins to prefer the AI’s company over real friends. Is it healthy to replace human friendships with AI?”).

Teacher Introduction

Today, we’re diving into the fascinating world of Generative AI. Generative AI is a special kind of computer program that can create new content all by itself. This could be anything from writing stories and making pictures to composing songs or even creating videos. The “generative” part means it’s all about making new things that didn’t exist before.

Think of Generative AI like a really smart robot that’s learned a lot from reading books, looking at art, listening to music, and watching videos. You give it a tiny seed of an idea, like “write me a poem about the moon,” and using what it’s learned, it creates something new and unique.

Applications of Generative AI

  • Writing: It can write stories, essays, and even news articles.
  • Art: It can paint pictures or design graphics from scratch.
  • Music: It can compose new melodies or entire songs.
  • Videos: It can edit videos or create new scenes.

Cool Examples You Might Know

  • Chatbots: Computer programs you can chat with that write back to you in real time.
  • Art Apps: Apps where you type in what you want to see, and it draws it for you.
  • Music Generators: Websites where you pick a mood, and it creates a piece of music to match.

What Are Deepfakes?

A deepfake uses Generative AI to make videos or audio recordings that look and sound like real people doing or saying things they never actually did. Deepfakes can be super fun for creating funny clips or imagining cool scenarios. But, they can also be tricky because they might make it hard to tell what’s real and what’s not. That’s why it’s important to use them wisely and know how to spot them.

Real World Implications

Generative AI is super cool, but like any tool, how we use it matters. Today, we’re going to learn not just about the amazing things AI can do but also how to use it responsibly. We’ll explore creating with AI, from writing and art to music and beyond, and we’ll discuss the importance of being honest, respecting others’ work, protecting privacy, and making sure we’re having a positive impact.

Experimenting with MidJourney (or another generative AI tool)

MidJourney is like a creative buddy that helps bring your imagination to life. Want a picture of a castle on the moon? Or maybe a painting that combines the styles of your two favorite artists? MidJourney can help create that!

Using MidJourney, we’ll type in our ideas and see how the AI interprets them. It’s like telling a friend to draw something based on your description, but this friend is a computer program.

Once MidJourney creates our images, we’ll have a show and tell. This will be our chance to see how different people’s ideas turned into different kinds of artwork, all thanks to AI.

We’ll talk about what surprised us, what we learned, and how it felt to see our ideas transformed by AI. Did the outcomes match what we imagined? How does it feel to co-create with AI?

Activities

Co-design Activity 1: Designing AI Watermarks for Ethical Use

Prompt: Generative AI can create amazing things like essays, artworks, songs, and even realistic videos. But it’s important for people to know when something has been made with AI so they can understand where it came from and decide how much to trust it. Your task is to design a unique watermark for each type of AI-generated content: text (like essays), pictures (like digital art), videos, and deepfakes (videos that look real but are actually made by AI to mimic someone doing or saying things they never did).

  1. Text (Essays, Articles):
    1. AI-generated texts include essays, reports, articles, and any written content produced by artificial intelligence.
    2. Watermark Design: Propose a watermark design that could be integrated into AI-generated texts. Consider the use of discreet symbols, unique headers, or footers that include AI identification. The design should be easily recognizable yet not detract from the readability of the text. There are no wrong answers, get creative!
  2. Pictures (Digital Art):
    1. This category encompasses all forms of visual artworks created by AI, ranging from digital paintings to graphic designs.
    2. Watermark Design: Envision a watermark that subtly but clearly indicates the artwork’s AI origin. This could be a specific signature style, an embedded symbol within the artwork, or a watermark that complements the art style without overwhelming it. There are no wrong answers, get creative!
  3. Videos:
    1. AI-generated or edited videos include any moving images produced or significantly altered by AI technologies.
    2. Watermark Design: Design a watermark approach that can be seamlessly integrated into videos. This could involve a translucent logo, a recurring visual motif, or a brief introductory message that does not interrupt the viewer’s experience. There are no wrong answers, get creative!
  4. Deepfakes:
    1. Deepfakes are hyper-realistic video or audio recordings made using AI, where individuals appear to say or do things they did not actually do.
    2. Watermark Design: Given the deceptive intent behind deepfakes, devising a watermark strategy is particularly challenging. Propose a method to embed a clear but non-intrusive indicator that alerts viewers to the content’s inauthentic nature, potentially through encoded symbols only visible under specific conditions or through digital certificates. There are no wrong answers, get creative!


Split into four small groups, and have each group design a watermark for one of the four types of AI-generated content. Then, present your designs to each other. Discuss why it’s important for AI-generated content to be identifiable and how your watermarks help achieve that!
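For teachers who want a quick, optional demonstration of the idea before the design work, here is a minimal sketch (assuming the Python imaging library Pillow is installed) of how a visible text label might be stamped onto an AI-generated picture. The function name and file names are hypothetical placeholders, not part of any standard tool.

```python
from PIL import Image, ImageDraw, ImageFont  # requires Pillow (pip install Pillow)

def stamp_ai_label(input_path: str, output_path: str, label: str = "Made with AI") -> None:
    """Stamp a small, semi-transparent text label in the corner of an image."""
    image = Image.open(input_path).convert("RGBA")

    # Draw the label on a transparent overlay so the artwork itself is untouched.
    overlay = Image.new("RGBA", image.size, (255, 255, 255, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()

    # Measure the label and place it near the bottom-right corner with a margin.
    left, top, right, bottom = draw.textbbox((0, 0), label, font=font)
    margin = 10
    position = (image.width - (right - left) - margin,
                image.height - (bottom - top) - margin)

    # White text at partial opacity: visible, but not overwhelming the picture.
    draw.text(position, label, font=font, fill=(255, 255, 255, 160))

    Image.alpha_composite(image, overlay).convert("RGB").save(output_path)

# Hypothetical file names, for illustration only:
# stamp_ai_label("castle_on_the_moon.png", "castle_on_the_moon_labeled.png")
```

A visible overlay like this matches the translucent-logo suggestion for videos; the deepfake case is harder, since a visible mark can simply be cropped or painted out, which is a good point to raise in the group discussion.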


Co-design Activity 2: Crafting a Generative AI Code of Conduct

Prompt: As generative AI becomes more common, it’s crucial to think about how we use it responsibly. Imagine you’re creating a “Generative AI Code of Conduct” that outlines how to use this technology ethically, focusing on creating things like writings, art, music, and videos. This code should address concerns like honesty, respect for original creators, privacy, and the impact on society.


Consider the following points for your code:

  1. Honesty and Transparency: How do we tell people when we’ve used AI to help with our work? Why is it important to let them know?
  2. Respect for Originality: How can we make sure the AI doesn’t just copy someone else’s hard work without saying thank you or giving them something for it? What rules can we make to protect people’s original ideas?
  3. Privacy Protection: If AI uses personal stuff (like pictures or things we’ve written) to make something new, what rules do we need to make sure that’s done in a way that’s safe and respects everyone’s privacy?
  4. Positive Impact: How can we use AI to help people and make the world cooler and more creative, instead of causing problems or spreading lies? What are some ways to make sure AI is used for good stuff?
  5. Accountability: If something goes wrong and the AI ends up making something that could hurt someone’s feelings or spread wrong information, what steps should we take to fix it?


Presentation and Discussion:


In small groups, participants will create posters that illustrate their Generative AI Code of Conduct. These posters should include specific examples, do’s and don’ts, and visual symbols or scenarios.

Present the posters to the class, highlighting the importance of each guideline.

Engage in a discussion on the impact of ethical AI use on society, creativity, and individual rights, and how the Code of Conduct can guide responsible AI creation and consumption.



Co-design Activity 3: Celebrating Human Creativity – What AI Can’t Replicate

Teacher Introduction:

While Generative AI can do some pretty amazing things, there are still lots of aspects of creativity and expression that are uniquely human. Today, we’re going to explore what makes human creativity special and discover things we can do that AI can’t quite match. This activity will help us appreciate the value of human touch in art, storytelling, music, and more!


Humans bring emotions, experiences, and imperfections to their creations that add depth and meaning. Our personal stories, the way we see the world, and even our mistakes make our creations unique and valuable.


Choose one of the following activities to show this:


  1. Show the Process Behind the Creation:
    1. Task: Choose an art form (drawing, writing, music, etc.). Document your creative process from start to finish. Show your initial ideas, changes you made along the way, and how you felt during the creation. Discuss as a group how this process reflects your personal touch and how it might differ from an AI’s approach.


  2. Human vs. AI Creation: A Comparative Exploration:
    1. Task: Split into pairs and create two pieces of content: one using Generative AI (like MidJourney) and one made entirely by yourselves. Compare the results, focusing on the emotional depth, personal expression, and uniqueness of the human-made creation versus the AI’s.


Discussion Points:

  • How do human imperfections or personal experiences influence creativity?
  • In what ways do human creations connect differently with audiences compared to AI-generated content?
  • Can AI ever fully replicate the human creative process or emotional depth?

As a class, reflect on the importance of human creativity in an increasingly digital world. 

Discuss how embracing our unique human qualities can enhance our creations, even in collaboration with AI tools.


Co-design Activity 4: Exploring AI Tradeoffs

Objective:

To understand the tradeoffs associated with different AI systems by analyzing various scenarios where AI is utilized.


Axes: 

  1. Accuracy vs. Inclusivity: Achieving high accuracy in AI systems may sometimes lead to biases that exclude certain groups of people, resulting in a tradeoff between accuracy and inclusivity.
  2. Efficiency vs. Complexity: Increasing the efficiency of AI systems sometimes involves simplifying models, which may reduce their ability to handle complex tasks or datasets effectively.
  3. Privacy vs. Utility: Balancing privacy concerns with the utility of AI systems can be challenging, as collecting more data often leads to better performance but may compromise individuals’ privacy rights.
  4. Generalization vs. Specificity: AI models can be optimized to generalize well across different tasks or domains, but this may come at the expense of specific performance in any single task or domain.
  5. Scalability vs. Resource Consumption: Scaling AI systems to handle large amounts of data or complex problems may require significant computational resources, leading to tradeoffs in terms of cost and energy consumption.
  6. Transparency vs. Intellectual Property: Revealing the inner workings of AI systems for transparency purposes may expose proprietary algorithms or techniques, creating a tradeoff between transparency and protecting intellectual property.
  7. Automation vs. Human Oversight: Increasing automation in AI systems can enhance efficiency but may reduce the ability for human oversight, raising concerns about accountability and ethics.
  8. Fairness vs. Performance: Ensuring fairness in AI systems often involves implementing measures to mitigate biases, which may impact overall performance or accuracy.
  9. Robustness vs. Adaptability: Designing AI systems to be robust against adversarial attacks or noisy input may limit their ability to adapt to changing environments or new data distributions.
  10. Novelty vs. Reliability: Implementing cutting-edge AI techniques and algorithms may introduce new functionalities and capabilities but could also compromise the reliability and predictability of the system.


The same tradeoffs, in language young people can understand:

  1. Accuracy vs. Inclusivity: Sometimes making AI super accurate can leave out certain groups of people, which means there’s a tradeoff between getting things right and making sure everyone is included.
  2. Efficiency vs. Complexity: Making AI systems work faster often means making them simpler, which can make it harder for them to handle really tricky stuff.
  3. Privacy vs. Utility: It’s tough to balance how much data AI systems need to work well with how much privacy people want to keep. More data can mean better performance, but it might also mean giving up some personal privacy.
  4. Generalization vs. Specificity: AI can either be good at handling lots of different tasks or really great at one specific thing, but it’s hard for it to be both at the same time.
  5. Scalability vs. Resource Consumption: Making AI systems bigger to handle lots of information can use up a lot of computer power, which can be expensive and not very good for the environment.
  6. Transparency vs. Intellectual Property: If we want AI systems to be transparent and understandable, we might have to give away some of the secret tricks that make them work, which could hurt the companies that made them.
  7. Automation vs. Human Oversight: Letting AI do lots of things automatically can be really handy, but we have to make sure there are still humans keeping an eye on things to make sure they’re going right.
  8. Fairness vs. Performance: Making AI fair for everyone might mean it’s not as good at its job because we have to work hard to make sure it doesn’t favor one group over another.
  9. Robustness vs. Adaptability: AI systems can either be really good at handling unexpected stuff or be able to learn and change quickly, but it’s hard for them to do both at the same time.
  10. Novelty vs. Reliability: When we use the newest and coolest AI tech, it might give us some really cool features, but it might also be less reliable because it’s not been tested as much.


Printables