I. Course description
The course introduces students to the philosophical foundations of cognitive science. This academic year, it focuses on two subjects: the nature of consciousness and the status of Large Language Models (LLMs) as models of cognition. The course consists of two parts. During the first half of the semester, the instructor will give a series of lectures (with in-class discussion encouraged) to familiarize students with basic philosophical concepts and ideas about consciousness and AI/LLMs. The second half of the semester will be devoted to student presentations.
II. Attendance
Students may miss no more than two meetings without consequences. A student who misses more than two meetings must complete an additional assignment (e.g. write a summary of a selected article or chapter). An attendance list will be collected each week.
III. Student presentations
During the second part of the course, the students will break into ten study groups (each consisting of 2-3 students) to deliver a series of in-class presentations, one per study group. The plan is to have two presentations per week during the second half of the semester, with each presentation lasting approximately 35 minutes and followed by a short Q&A session. Each presentation should offer a synthetic and critical discussion of a given subject based on a thorough reading of multiple sources. Each study group should engage seriously with the relevant material (i.e. read through it and develop a good understanding of it), discuss it together, and try to identify the most important or recurring ideas in order to discern the “big picture” that emerges from the material (while also noting any conflicting evidence or points of contention between different authors). For presentation subjects and suggested reading lists, consult the course schedule below.
Tips on preparing presentations:
Stage 1: Get to know the material. Start by carefully reading through the papers listed as reading material for your presentation subject. Do not worry if you do not understand every detail (and remember that I can help you understand the material during my office hours). Instead, focus on the general picture that emerges from the readings. What is the core issue that these papers focus on? Why is this topic relevant for this particular class? What are the most important concepts, ideas, or arguments? Are there any points of contention between different authors? Please note that the papers listed below are available online: they are either published as open access or available in repositories other than academic journals (e.g. ResearchGate, PhilPapers, Academia.edu, Semantic Scholar, or authors’ personal web pages). If you have trouble finding or accessing any material, please let me know. Finally, be aware that acquainting yourself with the papers may take time and effort, so start reading in advance (at the very least a month before your presentation).
Stage 2: Discuss. Share your thoughts about the readings during a group meeting (or over the course of multiple meetings – it is up to you how you organize your work). Check whether you all interpret the key ideas similarly. If some of you had trouble understanding parts of the material, perhaps other members of the study group can help make sense of them. Then discuss the material critically. Do you all agree with the positions defended in the relevant works? Do you all agree on how to interpret those ideas? Which arguments do you find persuasive, and why? Which parts of the readings did you find most interesting and relevant?
Stage 3: Prepare the presentation. When preparing the presentation, focus on the “big picture” rather than minute details. That is, concentrate on clearly stating the relevant problem or theory and summarizing the most important arguments or empirical results. The aim of the presentation is to introduce the class to a given subject in an easy-to-understand and engaging way. You do not have to base your presentation on all the readings listed under your presentation subject – you may find some papers less interesting or relevant than others, and you may omit those if you choose. You may also draw on readings other than those listed.
IV. Grading
The grades will be determined by (1) student presentations and (2) in-class participation.
V. Course schedule
1. Introduction
2. Lecture 1: Consciousness, part I: Conceptual distinctions and the Hard Problem
3. Lecture 2: Consciousness, part II: Zombies and Mary the Neuroscientist
4. Lecture 3: Consciousness, part III: Illusionism vs panpsychism
5. Lecture 4: AI and the computational mind, part I: Introduction
6. Lecture 5: AI and the computational mind, part II: Philosophy of Large Language Models
7. Study break: no class this week; additional office hours will be held so that students can consult the instructor about their presentations if needed
8. Student presentations
Presentation 1: The Attention Schema theory of consciousness
• Graziano, M. S. (2017). The Attention Schema Theory: A foundation for engineering artificial consciousness. Frontiers in Robotics and AI.
• Graziano, M. S. (2017). The Attention Schema Theory of Consciousness. In R. Gennaro (Ed.), Routledge Handbook of Consciousness (pp. 174-187). Routledge.
• Graziano, M. S. A., Guterstam, A., Bio, B. J., & Wilterson, A. I. (2020). Toward a standard model of consciousness: Reconciling the attention schema, global workspace, higher-order thought, and illusionist theories. Cognitive Neuropsychology, 37, 155-172.
• Webb, T., & Graziano, M. S. (2015). The attention schema theory: a mechanistic account of subjective awareness. Frontiers in Psychology, 6.
Presentation 2: Exploring new frontiers in consciousness science: Contentless experiences in meditation and white dreams
• Fazekas, P., Nemeth, G., Overgaard, M. (2019). White dreams are made of colours: What studying contentless dreams can teach about the neural basis of dreaming and conscious experiences. Sleep Medicine Reviews, 43, 84–91.
• Windt, J. (2020). Consciousness in sleep: How findings from sleep and dream research challenge our understanding of sleep, waking, and consciousness. Philosophy Compass, 15(4), e12661.
• Woods, T.J., Windt, J.M., Carter, O. (2022). Evidence synthesis indicates contentless experiences in meditation are neither truly contentless nor identical. Phenomenology and the Cognitive Sciences.
• Woods, T.J., Windt, J.M., Carter, O. (2022). The path to contentless experience in meditation: An evidence synthesis based on expert texts. Phenomenology and the Cognitive Sciences.
• Woods, T.J., Windt, J.M., Brown, L., Carter, O., Van Dam, N.T. (2023). Subjective experiences of committed meditators across practices aiming for contentless states. Mindfulness, 14, 1457–1478.
9. Student presentations
Presentation 3: Exploring new frontiers in consciousness science: Fetus consciousness
• Bayne, T., Frohlich, J., Cusack, R., Moser, J., Naci, L. (2023). Consciousness in the cradle: on the emergence of infant experience. Trends in Cognitive Sciences, 27, 1135–1149.
• Ciaunica, A., Safron, A., Delafield-Butt, J. (2021). Back to square one: the bodily roots of conscious experiences in early life. Neuroscience of Consciousness, 2021, niab037.
• Frohlich, J., Bayne, T., Crone, J.S., DallaVecchia, A., Kirkeby-Hinrup, A., Mediano, P.A.M., Moser, J., Talar, K., Gharabaghi, A., Preissl, H. (2023). Not with a “zap” but with a “beep”: Measuring the origins of perinatal experience. NeuroImage, 273, 120057.
• Moser, J., Schleger, F., Weiss, M., Sippel, K., Semeia, L., Preissl, H. Magnetoencephalographic signatures of conscious processing before birth. Developmental Cognitive Neuroscience, 49, 100964.
Presentation 4: The feasibility of artificial consciousness
• Aru, J., Larkum, M.E., Shine, J.M. (2023). The feasibility of artificial consciousness through the lens of neuroscience. Trends in Neurosciences, 46, 1008-1017.
• Butlin, P., Long, R., Elmoznino, E. et al. (2023). Consciousness in Artificial Intelligence: Insights from the science of consciousness. https://arxiv.org/abs/2308.08708
• Chalmers, D. (2023). Could a large language model be conscious? https://arxiv.org/abs/2303.07103
10. Student presentations
Presentation 5: How could we test for consciousness in animals and artificial systems?
• Allen, C., Trestman, M. (2016). Animal consciousness. In: E. Zalta (ed.). The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/consciousness-animal (focus on section 6 – Evolution and distribution of consciousness).
• Bayne, T., Seth, A.K., Massimini, M. et al. (2024). Tests for consciousness in humans and beyond. Trends in Cognitive Sciences, 28(5), 454–466.
• Dung, L. (2023). Tests of animal consciousness are tests of machine consciousness. Erkenntnis.
• Schwitzgebel, E. (2020). Is there something it is like to be a garden snail? Philosophical Topics, 48(1), 39–63.
• Shevlin, H. (2021). Non-human consciousness and the specificity problem: A modest theoretical proposal. Mind & Language, 36(2), 297–314.
Presentation 6: The ethics of artificial consciousness
• Metzinger, T. (2021). Artificial suffering: An argument for a global moratorium on synthetic phenomenology. Journal of Artificial Intelligence and Consciousness, 8(1), 1–24.
• Lee, A. (forthcoming). Consciousness makes things matter. Philosophers’ Imprint.
• Hildt, E. (2023). The prospects of artificial consciousness: Ethical dimensions and concerns. American Journal of Bioethics - Neuroscience, 14(2), 58-71.
• Blackshaw, B.P. (2023). Artificial consciousness is morally irrelevant. American Journal of Bioethics - Neuroscience, 14(2), 72-74.
11. Student presentations
Presentation 7: Do (can) LLMs possess world models?
• Chiang, T. (2023). ChatGPT is a blurry JPEG of the web. New Yorker: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
• Mahowald, K., Ivanova, A., Blank, I.A., Kanwisher, N., Tenenbaum, J.B., Fedorenko, E. (2023). Dissociating language and thought in large language models. https://arxiv.org/abs/2301.06627
• Trott, S., Jones, C., Chang, T., Michaelov, J., Bergen, B. (2023). Do large language models know what humans know? Cognitive Science, 47, e13309.
• Yildirim, I., Paul, L.A. (2024). From task structures to world models: What do LLMs know? Trends in Cognitive Sciences, 28(5), 404–415.
Presentation 8: Meaning and understanding in LLMs?
• Chalmers, D. (2023). Does thought require sensory grounding? From pure thinkers to large language models. Proceedings and Addresses of the American Philosophical Association, 97, 22–45.
• Jones, C., Bergen, B. (2023). Does GPT-4 pass the Turing test? https://arxiv.org/abs/2310.20216
• Mitchell, M., Krakauer, D.C. (2023). The debate over understanding in AI’s large language models. PNAS, 120, e2215907120.
• Coelho Mollo, D., Millière, R. (2023). The vector grounding problem. https://arxiv.org/abs/2304.01481
• Pezzulo, G., Parr, T., Cisek, P., Clark, A., Friston, K. (2024). Generating meaning: active inference and the scope and limits of passive AI. Trends in Cognitive Sciences, 28, 97–112.
12. Student presentations
Presentation 9: Aligning AI with human values: the case of algorithmic bias.
• Buolamwini, J., Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77–91.
• Chen, R.J., Wang, J.J., Williamson, D.F.K. et al. (2023). Algorithmic fairness in artificial intelligence for medicine and healthcare. Nature Biomedical Engineering, 7, 719–742.
• Fazelpour, S., Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8), e12760.
• Hao, K. (2020). The two-year fight to stop Amazon from selling face recognition to the police. MIT Technology Review, https://www.technologyreview.com/2020/06/12/1003482/amazon-stopped-selling-police-face-recognition-fight/
• Nicoletti, L., Bass, D. (2023). Humans are biased. Generative AI is even worse. Bloomberg: https://www.bloomberg.com/graphics/2023-generative-ai-bias/
• Samuel, S. (2022). Why it’s so damn hard to make AI fair and unbiased. Vox, https://www.vox.com/future-perfect/22916602/ai-bias-fairness-tradeoffs-artificial-intelligence
Presentation 10: The threat (?) of technological singularity and superintelligence
• Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
• Carlsmith, J. (2022). Is power-seeking AI an existential risk? https://arxiv.org/abs/2206.13353
• Chalmers, D. (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies, 17, 7–65.
• Dung, L. (2024). Is superintelligence necessarily moral? Analysis, anae033.
13. Summary