Teaching Generative AI and Its Impacts
In the fall of 2025, I am teaching an undergraduate course, "Designing with Emerging Technologies: Generative AI," at Olin College of Engineering. In the spring of 2025, I co-taught a similar graduate-level version of the Olin course, which was open to both RISD and Brown University students.
Cut through the hype and excitement surrounding generative AI by understanding for yourself what these tools can and cannot do. Through this course, students will learn to understand, design, and build with generative AI and, as they do, will grapple with the inevitable ethical concerns that these tools present. Depending on student interest and the changing nature of these tools, our activities may include partnering with generative AI to write short stories, creating speculative and practical designs or 3D renderings, generating diagrams, infographics and illustrations for reports, and introducing generative AI into students’ existing workflows. As we create, students will examine the quality of the outputs and concerns that arise about bias, hallucinations, efficiency, and more. Readings and discussions will address the ethics of AI, responsible use of generative AI, and design implications for engineers, designers, and other makers. A typical class will involve a brief instructor-led overview of the topic of the day, active learning with AI tools, and full-group discussions about the primary topic, emergent topics, and active learning experiences. No previous experience with either the theory or use of AI is required for this class; students will learn to use different tools in and outside of class. Students will use their experiences in class, through coursework, and beyond to develop conceptions about how they, personally, want to use (and not use) AI as responsibly as possible.
In the fall of 2024, I taught a graduate-level block seminar, "Critical Thinking in the Time of Generative AI," in the Munich School of Politics and Public Policy at the Technical University of Munich. As I've been thinking about this topic for years now, it was incredibly rewarding to consider the critical elements I hope students learned from the class. It covers the psychological fundamentals of how people learn, critical thinking itself, current and historical technologies designed to think with, and what we might want to design so that people can continue to develop critical thinking going forward. The course description is as follows:
Hidden beneath the hype of generative AI and Large Language Models lie fundamental questions of how these tools will impact how we think. When we write, draw, and create other media, the activity helps us to organize our thinking and to express what we know. What happens to how we think, what we know, and how we are understood by others when we shortcut this process with LLMs? In this course, we will examine how the process of creation develops our ability to think and how LLMs might impact learning and understanding, and we will discuss what this means for how LLMs should be used in different contexts. In this module, students will come to understand the foundational social science behind how extended cognition shapes how we think and work, and how Large Language Model AI systems can best be designed to support both human intelligence and experience. Readings will include foundations of LLMs, relevant cognitive models such as situated and extended cognition, recent research on how people are working with current AIs, and the design of tools to think with. Guest speakers will be invited to speak on topics including AI's impact on creative activities such as writing, the design of technological tools for thinking and learning, and AI bias. The seminar will include lectures and panel discussions, group discussions, and a final project. The final project will be the summative measure of student work, and class participation and responses to readings will also be graded.