
Minds & Machines

By Robert Stephens
Cohort 2019-2020

A.I.-RELATED COURSE CONTENT PORTFOLIO

This page collects resources for developing A.I.-related course modules for Social Science and/or Humanities courses.  Feel free to use any of these slides or activities in your own teaching.

The materials here are the ones I use for a college-level Philosophy course entitled “Minds & Machines”.  That course is mostly about A.I., but it also covers some traditional Philosophy of Mind beyond A.I. (e.g., the question of animal minds, the question of consciousness, a bit of Linguistics).

SLIDES/ASSIGNMENTS/ETC.

THE MOST HUMAN HUMAN

Brian Christian

An excellent starting point for including A.I.-related content in any course is Brian Christian’s 2011 book The Most Human Human, which he also developed into a shorter essay, “Mind vs. Machine”, in The Atlantic.  (The article is great if you want to spend just a few classes on the question of machine intelligence without devoting time to an entire book.)

Christian discusses the “Turing Test” and explains why it is such a challenge for machines to pass.  Along the way, the book gives a nice lay introduction to the history of computer science and A.I. research, along with great background on the literature, linguistics, and philosophy surrounding the question of artificial intelligence.  I use this book as the central text for my course, and we jump in and out of it to supplemental readings and classes on various aspects that make contact with the themes and ideas Christian presents.  These submodules are below!

My slides on Brian Christian.

THE TURING TEST

In depth, including critiques

The Brian Christian readings invite more research into the “Turing Test” for A.I., so here are some materials that I use to go deeper on this subject:

Students can read Turing’s 1950 paper “Computing Machinery and Intelligence”, in which the Turing test is defined.  Additionally, I introduce students to some famous philosophical objections to the Turing Test as a meaningful measure of “intelligence”: specifically, John Searle’s 1980 “Chinese Room” thought experiment.
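
If you want to make the set-up of the test concrete in class, here is a minimal sketch in Python (just a toy illustration, not something from Turing’s paper or from my slides) of a text-only imitation game: a judge types questions, receives answers labelled only “A” and “B” from a hidden human and a hidden, deliberately shallow program, and then has to guess which respondent was the machine.

```python
import random

def machine_reply(question: str) -> str:
    """A deliberately shallow chatbot: canned deflections only."""
    canned = [
        "That's an interesting question.",
        "Why do you ask?",
        "I'd rather talk about something else.",
    ]
    return random.choice(canned)

def human_reply(question: str) -> str:
    """A human confederate types an answer at the keyboard."""
    return input(f"(Human, please answer) {question}\n> ")

def imitation_game(num_questions: int = 3) -> None:
    # Randomly assign the hidden labels so the judge cannot rely on order.
    respondents = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        respondents = {"A": human_reply, "B": machine_reply}
    for _ in range(num_questions):
        question = input("Judge, ask a question:\n> ")
        for label in ("A", "B"):
            print(f"{label}: {respondents[label](question)}")
    guess = input("Which respondent was the machine, A or B?\n> ").strip().upper()
    actual = "A" if respondents["A"] is machine_reply else "B"
    print("Correct!" if guess == actual else f"No, the machine was {actual}.")

if __name__ == "__main__":
    imitation_game()
```

Even a few exchanges are usually enough to expose the canned responder, which is a nice way of motivating just how much a program would need to do to keep up its end of a real conversation.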

Here are my slides for:

Turing

Searle

THE PROSPECTS FOR GENERAL A.I.

Context, communication and the “frame problem”

The Turing test highlights the limitations and challenges of machine communication, so you can take students on a more in-depth tour of this aspect: why machines struggle to understand context and to exercise “common sense”.  This is a helpful place to lay out the distinction between what is known as General A.I. (a human-like, general-purpose intelligence, which does not yet exist) and the more limited, domain-specific A.I. that currently exists.

I give students some background in the Linguistics field of Pragmatics (i.e., how we understand non-literal language: irony, metaphor, implicature, etc.) by reading some of Paul Grice’s “Logic and Conversation” and doing some fun exercises on decoding non-literal communication, including trying to figure out HOW one might program a machine to decode such remarks (see the activity below).
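
For instructors who want to show, rather than just tell, why this is hard to program, here is a minimal sketch in Python (just a toy illustration, separate from the in-class activity linked below).  Asked “Are you coming to the party tonight?”, a reply of “I have to work” contains no explicit yes or no; a literal keyword matcher gets stuck, and each “pragmatic” rule you bolt on only covers one kind of indirect reply at a time.

```python
# Exchange under consideration:
#   A: "Are you coming to the party tonight?"
#   B: "I have to work."
# A cooperative hearer, assuming B's reply is relevant (Grice's maxim of
# Relation), infers "no".  A literal keyword matcher infers nothing.

def literal_answer(reply: str) -> str:
    """Read the reply literally: look only for an explicit yes or no."""
    words = reply.lower().replace(".", "").split()
    if "yes" in words:
        return "yes"
    if "no" in words:
        return "no"
    return "no answer found"   # the literal-minded machine is stuck here

def relevance_rule(reply: str) -> str:
    """One hand-written 'pragmatic' rule: mentioning a competing obligation
    counts as a polite refusal.  Each new kind of indirect reply would need
    yet another rule like this one -- which is exactly the problem."""
    excuses = ("have to work", "i'm busy", "exam tomorrow")
    if any(phrase in reply.lower() for phrase in excuses):
        return "no (implicated)"
    return literal_answer(reply)

reply = "I have to work."
print(literal_answer(reply))   # -> no answer found
print(relevance_rule(reply))   # -> no (implicated)
```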

We also read a short piece by Daniel Dennett entitled “Cognitive Wheels: The Frame Problem of AI”, which introduces the “frame problem” of how to program machines to understand context in a dynamic environment.  And we look a bit at Jerry Fodor, who has written extensively on that topic (and is a pessimist about A.I.) in books like The Language of Thought (1975), The Modularity of Mind (1983) and The Mind Doesn’t Work That Way (2000).
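
To give students a programmer’s-eye view of the frame problem, here is a minimal sketch in Python (a toy of my own, loosely inspired by Dennett’s robot-and-bomb story rather than taken from his paper).  After one simple action, the naive program must decide, fact by fact, what has and has not changed, and it has no general rule for ignoring the facts that obviously don’t matter.

```python
# The robot's world model: a flat list of everything it believes.
world = {
    "wagon_in_room": True,
    "battery_on_wagon": True,
    "bomb_on_wagon": True,          # the detail Dennett's robot famously misses
    "walls_are_grey": True,         # irrelevant to the action...
    "price_of_tea_in_china": 3.50,  # ...but the robot cannot just assume that
}

def pull_wagon_out_of_room(state: dict) -> dict:
    """Naive update rule: after the action, reconsider EVERY fact.

    The robot has no general principle telling it which facts an action
    can touch, so each one must be handled (or explicitly ignored) by
    hand -- and the list of facts only grows."""
    new_state = {}
    for fact, value in state.items():
        if fact == "wagon_in_room":
            new_state[fact] = False      # the action changes this
        elif fact in ("battery_on_wagon", "bomb_on_wagon"):
            new_state[fact] = value      # things riding on the wagon come along too
        else:
            new_state[fact] = value      # checked and found irrelevant, at a cost
    return new_state

print(pull_wagon_out_of_room(world))
```

The point is the shape of the problem rather than the code: every new fact the robot learns adds another case for the programmer to handle, while people never even consider the colour of the walls.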

Here are my slides for:

Grice

The Frame Problem

And an in-class assignment based on looking at Grice.

A.I. ETHICS

Contemporary and future challenges posed by A.I.

An entire course could be devoted to this topic, or it can be a sub-module inserted into a broader ethics course.  As a quick way into the question, I would recommend starting with two texts:

Nick Bostrom and Eliezer Yudkowsky’s “The Ethics of Artificial Intelligence” (2011)

The A.I. Now Institute annual report

Bostrom’s work focuses on the future threats of A.I., including the possibility that a human-like superintelligence will be developed.  The A.I. Now Institute is a collective primarily run by Kate Crawford and Meredith Whittaker; their website has a ton of useful links, as well as numerous TED talks.

Here are my slides on A.I. Ethics.

Here are some in-class discussion scenarios involving A.I. Ethics.

DEEPER INTO THE PHILOSOPHICAL ISSUES

Some more Philo-heavy detours: functionalist theories of mind, the question of consciousness, etc.

If you want to engage with more Philosophy-oriented A.I. content, you can do a few things:

First, you can give some background on machine functionalism, a philosophical theory that underlies computer science and the prospects for A.I. generally.  Ian Ravenscroft has good introductory chapters on Functionalism and Computationalism that introduce the main philosophers associated with the view (Hilary Putnam, Ned Block, David Lewis) and connect their work directly to Turing, Searle, and the others already mentioned above.

Here are the chapters from Ravenscroft:

Computationalism

Functionalism

And my slides.

Another area of philosophical interest that ties into A.I. is the question of conscious experience: whether machines could ever have such experiences (or whether we could wed ours, in part or in full, to machinery in the future).  Susan Blackmore’s What is it Like to Be? is a good introduction to the problem of consciousness, and her book Conversations on Consciousness (2007) is also a great resource, with multiple interviews with philosophers, psychologists and computer scientists.

Finally, David Chalmers and Andy Clark have a fun/weird argument for what they call the Extended Mind Thesis, which suggests there is nothing special about the boundary of the human brain and that we already incorporate technology into our thinking and cognition.  Clark has a full book on the subject: Supersizing the Mind (2008).  Brie Gertler and Jerry Fodor both have critical reviews of this book, if you are interested in having students adjudicate that debate.  (It makes a great essay topic!)

Here are some more slides on:

Consciousness

Extended Mind

MORE TECHNICAL STUFF

Algorithms, machine learning, robotics, etc.

If you want to introduce some more technical context (though still designed for non-STEM students), there are a number of texts I would recommend, any of which could be studied in depth or simply introduced as supplementary material.

Here is a quick list of three good, recent, lay audience books on the current state of A.I. research and the sorts of machine learning algorithms that are already embedded in our lives:

Algorithms to Live By (2016) – Brian Christian & Tom Griffiths

How Smart Machines Think (2018) – Sean Gerrish

Possible Minds: 25 Ways of Looking at A.I. (2019) – ed. John Brockman