• Against willing servitude: Autonomy in the ethics of advanced artificial intelligence

    Item Type Journal Article
    Author Adam Bales
    Abstract Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.
    Date 2025-03-31
    Language en
    Short Title Against willing servitude
    Library Catalog DOI.org (Crossref)
    URL https://academic.oup.com/pq/advance-article/doi/10.1093/pq/pqaf031/8100849
    Accessed 6/11/2025, 1:12:59 PM
    Rights https://creativecommons.org/licenses/by/4.0/
    Pages pqaf031
    Publication The Philosophical Quarterly
    DOI 10.1093/pq/pqaf031
    ISSN 0031-8094, 1467-9213
    Date Added 6/11/2025, 1:12:59 PM
    Modified 6/11/2025, 1:12:59 PM

    Attachments

    • PDF
  • Smartphones: Parts of Our Minds? Or Parasites?

    Item Type Journal Article
    Author Rachael L Brown
    Author Robert C Brooks
    Date 2025-05-26
    Language en
    Short Title Smartphones
    Library Catalog DOI.org (Crossref)
    URL https://www.tandfonline.com/doi/full/10.1080/00048402.2025.2504070
    Accessed 6/10/2025, 12:58:18 PM
    Pages 1-16
    Publication Australasian Journal of Philosophy
    DOI 10.1080/00048402.2025.2504070
    Journal Abbr Australasian Journal of Philosophy
    ISSN 0004-8402, 1471-6828
    Date Added 6/10/2025, 12:58:18 PM
    Modified 6/10/2025, 12:58:18 PM

    Attachments

    • Full Text PDF
  • AI rule and a fundamental objection to epistocracy

    Item Type Journal Article
    Author Sean Donahue
    Abstract Epistocracy is rule by whoever is more likely to make correct decisions. AI epistocracy is rule by an artificial intelligence that is more likely to make correct decisions than any humans, individually or collectively. I argue that although various objections have been raised against epistocracy, the most popular do not apply to epistocracy organized around AI rule. I use this result to show that epistocracy is fundamentally flawed because none of its forms provide adequate opportunity for peoples (as opposed to individuals) to develop a record of meaningful moral achievement. This Collective Moral Achievement Objection provides a novel reason to value democracy. It also provides guidance on how we ought to incorporate digital technologies into politics, regardless of how proficient these technologies may become at identifying correct decisions.
    Date 2025-06-01
    Language en
    Library Catalog Springer Link
    URL https://doi.org/10.1007/s00146-024-02175-9
    Accessed 6/13/2025, 9:07:27 AM
    Volume 40
    Pages 4105-4117
    Publication AI & SOCIETY
    DOI 10.1007/s00146-024-02175-9
    Issue 5
    Journal Abbr AI & Soc
    ISSN 1435-5655
    Date Added 6/13/2025, 9:07:27 AM
    Modified 6/13/2025, 9:07:30 AM

    Tags:

    • Artificial Intelligence
    • Democracy
    • Epistemology
    • Philosophy of Artificial Intelligence
    • Collective self-determination
    • Critical Thinking
    • Epistocracy
    • Humanity and Technology
    • Moral achievement

    Attachments

    • Full Text PDF
  • AI assisted ethics

    Item Type Journal Article
    Author Amitai Etzioni
    Author Oren Etzioni
    Abstract The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided to them by programmers. Hence, the question the makers and users of smart instruments (e.g., driverless cars) face is how to ensure that these instruments will not engage in unethical conduct (not to be conflated with illegal conduct). The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable.
    Date 2016-06-01
    Language en
    Library Catalog Springer Link
    URL https://doi.org/10.1007/s10676-016-9400-6
    Accessed 6/13/2025, 10:43:46 AM
    Volume 18
    Pages 149-156
    Publication Ethics and Information Technology
    DOI 10.1007/s10676-016-9400-6
    Issue 2
    Journal Abbr Ethics Inf Technol
    ISSN 1572-8439
    Date Added 6/13/2025, 10:43:46 AM
    Modified 6/13/2025, 10:43:48 AM

    Tags:

    • Artificial Intelligence
    • Computer Ethics
    • Logic in AI
    • Engineering Ethics
    • Ethics of Technology
    • Philosophy of Artificial Intelligence
    • Communitarianism
    • Driverless cars
    • Ethics bot
    • Second-layer AI

    Attachments

    • Full Text PDF
  • The Philosophic Turn for AI Agents: Replacing centralized digital rhetoric with decentralized truth-seeking

    Item Type Preprint
    Author Philipp Koralus
    Abstract In the face of rapidly advancing AI technology, individuals will increasingly rely on AI agents to navigate life's growing complexities, raising critical concerns about maintaining both human agency and autonomy. This paper addresses a fundamental dilemma posed by AI decision-support systems: the risk of either becoming overwhelmed by complex decisions, thus losing agency, or having autonomy compromised by externally controlled choice architectures reminiscent of "nudging" practices. While the "nudge" framework, based on the use of choice-framing to guide individuals toward presumed beneficial outcomes, initially appeared to preserve liberty, at AI-driven scale, it threatens to erode autonomy. To counteract this risk, the paper proposes a philosophic turn in AI design. AI should be constructed to facilitate decentralized truth-seeking and open-ended inquiry, mirroring the Socratic method of philosophical dialogue. By promoting individual and collective adaptive learning, such AI systems would empower users to maintain control over their judgments, augmenting their agency without undermining autonomy. The paper concludes by outlining essential features for autonomy-preserving AI systems, sketching a path toward AI systems that enhance human judgment rather than undermine it.
    Date 2025-04-24
    Short Title The Philosophic Turn for AI Agents
    Library Catalog arXiv.org
    URL http://arxiv.org/abs/2504.18601
    Accessed 6/10/2025, 1:00:32 PM
    Extra arXiv:2504.18601 [cs]
    DOI 10.48550/arXiv.2504.18601
    Repository arXiv
    Archive ID arXiv:2504.18601
    Date Added 6/10/2025, 1:00:32 PM
    Modified 6/10/2025, 1:00:33 PM

    Tags:

    • Computer Science - Artificial Intelligence
    • Computer Science - Computers and Society

    Attachments

    • Full Text PDF
    • Snapshot
  • Artificial Intelligence and the Value Alignment Problem: A Philosophical Introduction

    Item Type Book
    Author Travis LaCroix
    Date 2025
    Language eng
    Short Title Artificial Intelligence and the Value Alignment Problem
    Library Catalog K10plus ISBN
    Place Peterborough
    Publisher Broadview Press Ltd
    ISBN 978-1-55481-629-3
    # of Pages 340
    Date Added 6/11/2025, 1:13:58 PM
    Modified 6/11/2025, 1:13:58 PM
  • Metaethical perspectives on ‘benchmarking’ AI ethics

    Item Type Journal Article
    Author Travis LaCroix
    Author Alexandra Sasha Luccioni
    Abstract Benchmarks are seen as the cornerstone for measuring technical progress in artificial intelligence (AI) research and have been developed for a variety of tasks ranging from question answering to emotion recognition. An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the ‘ethicality’ of an AI system. In this paper, drawing upon research in moral philosophy and metaethics, we argue that it is impossible to develop such a benchmark. As such, alternative mechanisms are necessary for evaluating whether an AI system is ‘ethical’. This is especially pressing in light of the prevalence of applied, industrial AI research. We argue that it makes more sense to talk about ‘values’ (and ‘value alignment’) rather than ‘ethics’ when considering the possible actions of present and future AI systems. We further highlight that, because values are unambiguously relative, focusing on values forces us to consider explicitly what the values are and whose values they are. Shifting the emphasis from ethics to values therefore gives rise to several new ways of understanding how researchers might advance research programmes for robustly safe or beneficial AI.
    Date 2025-03-19
    Language en
    Library Catalog Springer Link
    URL https://doi.org/10.1007/s43681-025-00703-x
    Accessed 6/11/2025, 1:14:49 PM
    Publication AI and Ethics
    DOI 10.1007/s43681-025-00703-x
    Journal Abbr AI Ethics
    ISSN 2730-5961
    Date Added 6/11/2025, 1:14:49 PM
    Modified 6/11/2025, 1:14:49 PM

    Tags:

    • Value alignment
    • AI ethics
    • Computer Ethics
    • Meta-Ethics
    • Normative Ethics
    • Engineering Ethics
    • Philosophy of Artificial Intelligence
    • Benchmarking
    • Metaethics
    • Moral dilemmas
    • Research Ethics
    • Unit testing

    Attachments

    • Full Text PDF
  • Honest AI

    Item Type Book Section
    Author N. G. Laskowski
    Editor Henry Shevlin
    Library Catalog PhilPapers
    Publisher Oxford University Press
    Book Title AI in Society: Relationships (Oxford Intersections)
    Date Added 6/11/2025, 9:27:40 PM
    Modified 6/11/2025, 9:27:40 PM

    Attachments

    • Snapshot
  • Is there a tension between AI safety and AI welfare?

    Item Type Journal Article
    Author Robert Long
    Author Jeff Sebo
    Author Toni Sims
    Abstract The field of AI safety considers whether and how AI development can be safe and beneficial for humans and other animals, and the field of AI welfare considers whether and how AI development can be safe and beneficial for AI systems. There is a prima facie tension between these projects, since some measures in AI safety, if deployed against humans and other animals, would raise questions about the ethics of constraint, deception, surveillance, alteration, suffering, death, disenfranchisement, and more. Is there in fact a tension between these projects? We argue that, considering all relevant factors, there is indeed a moderately strong tension—and it deserves more examination. In particular, we should devise interventions that can promote both safety and welfare where possible, and prepare frameworks for navigating any remaining tensions thoughtfully.
    Date 2025-05-23
    Language en
    Library Catalog Springer Link
    URL https://doi.org/10.1007/s11098-025-02302-2
    Accessed 6/11/2025, 1:11:30 PM
    Publication Philosophical Studies
    DOI 10.1007/s11098-025-02302-2
    Journal Abbr Philos Stud
    ISSN 1573-0883
    Date Added 6/11/2025, 1:11:30 PM
    Modified 6/11/2025, 1:11:30 PM

    Tags:

    • Artificial Intelligence
    • Machine ethics
    • AI safety
    • Computer Ethics
    • AI consciousness
    • AI welfare
    • Animal Ethics
    • Catastrophic risk
    • Engineering Ethics
    • Ethics of Technology
    • Philosophy of Artificial Intelligence

    Attachments

    • Full Text PDF
  • Normative conflicts and shallow AI alignment

    Item Type Journal Article
    Author Raphaël Millière
    Abstract The progress of AI systems such as large language models (LLMs) raises increasingly pressing concerns about their safe deployment. This paper examines the value alignment problem for LLMs, arguing that current alignment strategies are fundamentally inadequate to prevent misuse. Despite ongoing efforts to instill norms such as helpfulness, honesty, and harmlessness in LLMs through fine-tuning based on human preferences, they remain vulnerable to adversarial attacks that exploit conflicts between these norms. I argue that this vulnerability reflects a fundamental limitation of existing alignment methods: they reinforce shallow behavioral dispositions rather than endowing LLMs with a genuine capacity for normative deliberation. Drawing on research in moral psychology, I show how humans’ ability to engage in deliberative reasoning enhances their resilience against similar adversarial tactics. LLMs, by contrast, lack a robust capacity to detect and rationally resolve normative conflicts, leaving them susceptible to manipulation; even recent advances in reasoning-focused LLMs have not addressed this vulnerability. This “shallow alignment” problem carries significant implications for AI safety and regulation, suggesting that current approaches are insufficient for mitigating potential harms posed by increasingly capable AI systems.
    Date 2025-05-27
    Language en
    Library Catalog Springer Link
    URL https://doi.org/10.1007/s11098-025-02347-3
    Accessed 6/11/2025, 1:11:25 PM
    Publication Philosophical Studies
    DOI 10.1007/s11098-025-02347-3
    Journal Abbr Philos Stud
    ISSN 1573-0883
    Date Added 6/11/2025, 1:11:28 PM
    Modified 6/11/2025, 1:11:28 PM

    Tags:

    • Artificial Intelligence
    • AI safety
    • Large language models
    • Adversarial attacks
    • Alignment problem
    • Complexity
    • Computer Ethics
    • Logic in AI
    • Meta-Ethics
    • Normative Ethics
    • Normative reasoning

    Attachments

    • Full Text PDF