Brian Redekopp leads AI values and learning project
Can you tell us a bit about your release-time project and what inspired you to take on this initiative?
BR: The main focus of the project is to create self-guided modules for students to help improve their AI literacy. The modules will be organized around three themes: understanding the basics of how generative AI tools like ChatGPT work, learning about the ethical issues involved in this technology, and distinguishing the sorts of uses that are healthy and empowering from the sorts that are not.
In terms of what inspired me to take on the project, I suppose there are a couple of reasons. First and foremost, I’m very concerned about how an uncritical use of generative AI can seriously compromise students’ learning and overall well-being. As with social media and many other technologies, there’s a deep misalignment between the interests of the companies developing the technology and the interests of the users. I believe that a strong level of AI literacy is crucial if AI is going to serve human ends, rather than the other way around.
On a bit more of a positive note, I’ve been experimenting with AI in my own work quite a bit over the past couple of years, which has made me excited about the positive ways it can enhance learning. AI literacy is crucial here as well. Finally, I have a keen interest in the philosophy of technology, and designing AI literacy materials is a rewarding way to reflect on ideas in the field and to translate them into positive action.
How do you define “AI literacy,” and why do you think it’s important for students today?
BR: In pedagogy and course design one often encounters a distinction between knowledge, skills, and attitudes when it comes to the goals of a course or learning activity. It’s a helpful distinction for defining AI literacy as well.
So in terms of knowledge, AI literacy means understanding the basic principles behind what AI is and how it works. It also means being aware of the systems or forces at work in the development of AI and in its use, such as the economic incentives of developers and our psychological tendencies as users. (When we think about technology we tend to focus on the tool, but in order to get a handle on its benefits and harms we have to consider it in its human context.)
In terms of skills, AI literacy means not only knowing how to use AI tools effectively, but also being able to discern when AI is helpful and when it is not. This requires certain core skills in critical thinking, like identifying and questioning hidden assumptions and clarifying ideas.
Finally, in terms of attitudes, AI literacy means approaching AI with a critical mindset and an awareness of one’s own goals and priorities. “Critical” does not necessarily mean negative; it just means one is aware of the technology’s strengths, weaknesses, and ethical dimensions so that one is able to judge its outputs and regulate its use.
So, putting that all together, AI literacy could be defined as a critical, empowering approach to AI based on a solid understanding of AI technology and its human context.
This is very important if students (and all of us) are going to benefit from this technology and subordinate it to our own values, both individually and as a society.
What are some of the key goals you hope to achieve, and when will the project wrap up?
BR: The project should be completed shortly after the end of the Fall 2026 semester. The main goal is to make available a series of short, engaging modules for students organized into the three themes. Though they’ll be self-guided, they’ll also be designed such that teachers can build on them in their own classes. So the larger goal is, of course, for the materials to actually be used by students and teachers.
What do you know about the use of AI at Dawson going into this project?
BR: Like most teachers, my knowledge of how students use AI is pretty piecemeal and anecdotal. In my own experience I’ve found that when students misuse it and submit work that is not really theirs, more often than not it stems from a lack of understanding of the technology, combined with feeling stressed and overwhelmed. I’ve also discovered in my own experiments with it in the classroom that it’s actually quite difficult for students to converse with a chatbot in a way that is conducive to learning. So my impression is that the use of “chat” is widespread amongst students, and that we need to bring this use out into the open in order to provide it with a positive framework.
What are you hoping to learn from the survey data?
BR: I’m really looking forward to the results of this survey—I’m hoping to get a much better picture of how students actually use AI, what their attitudes toward it are, and what would be helpful to them in terms of what we offer at the college. The results will directly inform the content of the modules. So I’d really encourage all students to participate—this is a great opportunity for students and teachers to work together on this massive challenge, and for students to take an active role in their own education!
