Research

Cameron Pattison | Vanderbilt University

cameron.pattison@vanderbilt.edu

Research Areas

  • AI Ethics and Philosophy of Mind
  • Justice and Fairness in AI Systems
  • Human-AI Interaction

Research Journey

Classical Foundations

My academic journey began with premodern conceptions of rationality in the classical and Islamic philosophical traditions. The conceptual frameworks I developed there now inform my approach to contemporary AI ethics and governance challenges.

Bridge to Contemporary Questions

This foundation evolved into examining how language models challenge our assumptions about human cognition and what this means for moral status and ethical frameworks. I became particularly interested in using these models to stress-test philosophical assumptions about intelligence and understanding.

Current Focus

I'm now taking advantage of the pre-dissertation period to explore a few different directions: (1) developing formal methods for quantifying distances between belief systems using Bregman divergences and graph edit distances, (2) characterizing current AI alignment practices through the lens of non-ideal theory, and (3) leveraging insights from sequence-modeling architectures to challenge fundamental assumptions about reasoning and motivation.
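The Bregman-divergence approach in (1) can be illustrated with a minimal sketch. Here belief systems are toy probability distributions, and the generating function is negative entropy, whose Bregman divergence recovers the familiar KL divergence; the distributions and function names are illustrative, not taken from the project itself.

```python
import numpy as np

def bregman_divergence(F, grad_F, p, q):
    """D_F(p, q) = F(p) - F(q) - <grad F(q), p - q> for a convex generator F."""
    return F(p) - F(q) - np.dot(grad_F(q), p - q)

# Negative entropy as the generating function; over probability vectors,
# its Bregman divergence equals the KL divergence.
neg_entropy = lambda x: np.sum(x * np.log(x))
grad_neg_entropy = lambda x: np.log(x) + 1.0

# Two hypothetical "belief" distributions over three propositions.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])

d = bregman_divergence(neg_entropy, grad_neg_entropy, p, q)
kl = np.sum(p * np.log(p / q))  # agrees with d for normalized p, q
```

Different choices of generator yield different geometries (squared Euclidean distance, Itakura-Saito, etc.), which is what makes the family attractive for comparing belief systems under varying notions of distance.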

As Co-Director of "AI and the Human" at Vanderbilt and research affiliate with ANU's MINT Lab, I also work hard to bring technical and philosophical research communities together. I love working with folks on both sides of this divide, and really enjoy developing open-source tools that integrate philosophical frameworks with technical implementations!

Current Projects

Ibn Arabi Translation Project

Digital Humanities Web Project

A digital parallel text edition of Ibn Arabi's "The Meccan Revelations: Chapter 178" with AI-assisted translation and alignment tools.

This project combines traditional textual scholarship with modern AI translation techniques to provide an accessible version of a previously untranslated classical Islamic philosophical text. The interface allows readers to compare the original Arabic with a new English translation, with interactive alignment features.

Normative Dimensions of Summarization

AI Ethics MINT Lab

Examining how AI systems summarize public comments and input in normatively weighty contexts, with a focus on identifying whether these systems capture or miss morally significant perspectives.

This research investigates three critical questions in AI-assisted democratic deliberation: (1) How are automated summarization systems currently used in policy contexts? (2) How are these models optimized, and what values do they prioritize? (3) What normative dimensions are overlooked, particularly high-signal, low-frequency perspectives that may hold moral significance?

Classical Source Analysis with LLMs

Digital Humanities GitHub

Developing innovative approaches to source analysis in classical Arabic, Syriac, and Greek texts using large language models, aiming to transform traditional philological methods.

This project bridges computational linguistics with traditional philological methods to provide evidence-based insights into the transmission of philosophical ideas across linguistic and cultural boundaries. It helps scholars discover overlooked textual connections between Greek, Arabic, and other classical traditions.

Co-Director of AI and the Human

AI Ethics Vanderbilt

Leading initiatives on AI applications in humanities research and teaching, with a focus on faculty development and interdisciplinary collaboration.

Coordinating a seminar series bringing together philosophers, ethicists, and AI researchers including David Chalmers, John Tasioulas, and Sina Fazelpour. The seminar examines how artificial intelligence reshapes our concepts of justice, power, and humanity through structured interdisciplinary conversations.

Nothing Infers: AI, Human Cognition, and the Taking Condition

Philosophy of Mind Academic Paper

Exploring how both AI systems (symbolic AI and large language models) and human cognition may fail to satisfy the philosophical "Taking Condition" for reasoning.

Drawing on evidence from neuroscience and cognitive psychology, this paper argues that human cognition likely operates on principles similar to AI's, particularly in its reliance on pattern recognition and statistical learning. This yields the problematic conclusion that, under the current definition, nothing 'reasons', which suggests we need to reconsider the definition of reasoning itself.

Zotero PDF Chat

AI Tools GitHub

A tool that enables semantic search and AI-powered conversations with academic papers stored in your Zotero library.

This project connects to local Zotero libraries, processes PDFs to create AI-readable embeddings, and allows users to ask questions about their documents. The system provides answers based on document content with citations from the source material, making it easier to engage with large collections of academic literature.
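The retrieval step of such a pipeline can be sketched in miniature. This toy version stands in for the real system: bag-of-words vectors replace the learned embeddings, and the chunk data, `embed`, and `search` names are illustrative assumptions rather than the project's actual code.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(chunks, query, top_k=2):
    # Rank stored document chunks by similarity to the query,
    # keeping the source file so answers can cite it.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c["text"]), q), reverse=True)
    return ranked[:top_k]

chunks = [
    {"source": "paper_a.pdf", "text": "Bregman divergences generalize squared Euclidean distance"},
    {"source": "paper_b.pdf", "text": "large language models and statistical learning"},
]
hits = search(chunks, "language models", top_k=1)
```

In the full system, the retrieved chunks and their source citations would be passed to a language model to ground its answer in the user's own library.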

Publications

Book Review: Inventing the Imagination

Book Review 2025

Author: Cameron Pattison

Publication: New School Graduate Faculty Philosophy Journal

Abstract: A critical review of Justin Humphreys' work on the philosophical concept of imagination.

Revelation in al-Fārābī's Virtuous City

Book Chapter 2024

Author: Cameron Pattison

Publication: Springer Studies in the History of Philosophy, Mind, Soul and the Cosmos in the High Middle Ages

Abstract: This chapter offers a comprehensive analysis of al-Fārābī's treatment of revelation in his Mabādi' Ārā' Ahl al-Madīnat al-Fāḍilah (The Principles of the Opinions of the People of the Virtuous City). It explores the structure of his theory, its philosophical origins, and its integration with other aspects of his thought.

Evaluating persona prompting for question answering tasks

Conference Paper 2024

Authors: Carlos Olea, Holly Tucker, Jessica Phelan, Cameron Pattison, Shen Zhang, Maxwell Lieb, Doug Schmidt, Jules White

Publication: Proceedings of the 10th International Conference on Artificial Intelligence and Soft Computing, Sydney, Australia

Abstract: Using large language models (LLMs) effectively by applying prompt engineering is a timely research topic due to the advent of highly performant LLMs, such as ChatGPT-4. Our results indicate that single-agent expert personas perform better on high-openness tasks and that effective prompt engineering becomes more important for complex multi-agent methods.
