  • On the Ethical Considerations of Generative Agents

    Item Type Preprint
    Author N'yoma Diamond
    Author Soumya Banerjee
    Abstract The Generative Agents framework recently developed by Park et al. has enabled numerous new technical solutions and problem-solving approaches. Academic and industrial interest in generative agents has grown explosively as a result of their effectiveness at emulating human behaviour. However, it is necessary to consider the ethical challenges and concerns posed by this technique and its usage. In this position paper, we discuss the extant literature that evaluates the ethical considerations regarding generative agents and similar generative tools, and we identify additional concerns of significant importance. We also suggest guidelines and necessary future research on how to mitigate some of the ethical issues and systemic risks associated with generative agents.
    Date 2024-11-28
    Library Catalog arXiv.org
    URL http://arxiv.org/abs/2411.19211
    Accessed 12/3/2024, 8:35:33 AM
    Extra arXiv:2411.19211
    DOI 10.48550/arXiv.2411.19211
    Repository arXiv
    Archive ID arXiv:2411.19211
    Date Added 12/3/2024, 8:35:33 AM
    Modified 12/3/2024, 8:35:37 AM

    Tags:

    • Computer Science - Artificial Intelligence
    • Computer Science - Computers and Society
    • Computer Science - Emerging Technologies
    • Computer Science - Multiagent Systems

    Attachments

    • Preprint PDF
    • Snapshot
  • The linguistic dead zone of value-aligned agency, natural and artificial

    Item Type Journal Article
    Author Travis LaCroix
    Abstract The value alignment problem for artificial intelligence (AI) asks how we can ensure that the “values”—i.e., objective functions—of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems—or, more loftily, those programmes that seek to design robustly beneficial or ethical artificial agents.
    Date 2024-12-04
    Language en
    Library Catalog Springer Link
    URL https://doi.org/10.1007/s11098-024-02257-w
    Accessed 12/12/2024, 8:59:26 AM
    Publication Philosophical Studies
    DOI 10.1007/s11098-024-02257-w
    Journal Abbr Philos Stud
    ISSN 1573-0883
    Date Added 12/12/2024, 8:59:26 AM
    Modified 12/12/2024, 8:59:38 AM

    Tags:

    • AI
    • Artificial intelligence
    • Communication systems
    • Coordination
    • Incentives
    • Information transfer
    • Language
    • Linguistic communication
    • Machine learning
    • Normative theory
    • Objective functions
    • Objectives
    • Preferences
    • Principal-agent problems
    • The value alignment problem
    • Values

    Attachments

    • Full Text PDF
  • Are Large Language Models Consistent over Value-laden Questions?

    Item Type Preprint
    Author Jared Moore
    Author Tanvi Deshpande
    Author Diyi Yang
    Abstract Large language models (LLMs) appear to bias their survey answers toward certain values. Nonetheless, some argue that LLMs are too inconsistent to simulate particular values. Are they? To answer, we first define value consistency as the similarity of answers across (1) paraphrases of one question, (2) related questions under one topic, (3) multiple-choice and open-ended use-cases of one question, and (4) multilingual translations of a question to English, Chinese, German, and Japanese. We apply these measures to a few large (≥34B parameters), open LLMs, including llama-3, as well as gpt-4o, using eight thousand questions spanning more than 300 topics. Unlike prior work, we find that models are relatively consistent across paraphrases, use-cases, translations, and within a topic. Still, some inconsistencies remain. Models are more consistent on uncontroversial topics (e.g., in the U.S., "Thanksgiving") than on controversial ones ("euthanasia"). Base models are both more consistent than fine-tuned models and more uniform in their consistency across topics, whereas fine-tuned models are more inconsistent about some topics ("euthanasia") than others ("women's rights"), much like our human subjects (n=165).
    Date 2024-07-03
    Library Catalog arXiv.org
    URL http://arxiv.org/abs/2407.02996
    Accessed 12/1/2024, 8:44:06 PM
    Extra arXiv:2407.02996 version: 1
    DOI 10.48550/arXiv.2407.02996
    Repository arXiv
    Archive ID arXiv:2407.02996
    Date Added 12/1/2024, 8:44:06 PM
    Modified 12/1/2024, 8:44:09 PM

    Tags:

    • Computer Science - Artificial Intelligence
    • Computer Science - Computation and Language

    Attachments

    • Preprint PDF
    • Snapshot
  • The Method of Critical AI Studies, A Propaedeutic

    Item Type Preprint
    Author Fabian Offert
    Author Ranjodh Singh Dhaliwal
    Abstract We outline some common methodological issues in the field of critical AI studies, including a tendency to overestimate the explanatory power of individual samples (the benchmark casuistry), a dependency on theoretical frameworks derived from earlier conceptualizations of computation (the black box casuistry), and a preoccupation with a cause-and-effect model of algorithmic harm (the stack casuistry). In the face of these issues, we call for, and point towards, a future set of methodologies that might take into account existing strengths in the humanistic close analysis of cultural objects.
    Date 2024-11-28
    Library Catalog arXiv.org
    URL http://arxiv.org/abs/2411.18833
    Accessed 12/4/2024, 5:20:56 PM
    Extra arXiv:2411.18833
    DOI 10.48550/arXiv.2411.18833
    Repository arXiv
    Archive ID arXiv:2411.18833
    Date Added 12/4/2024, 5:20:56 PM
    Modified 12/4/2024, 5:21:00 PM

    Tags:

    • Computer Science - Computers and Society

    Attachments

    • Preprint PDF
    • Snapshot
  • Hidden Persuaders: LLMs' Political Leaning and Their Influence on Voters

    Item Type Preprint
    Author Yujin Potter
    Author Shiyang Lai
    Author Junsol Kim
    Author James Evans
    Author Dawn Song
    Abstract How could LLMs influence our democracy? We investigate LLMs' political leanings and their potential influence on voters by conducting multiple experiments in a U.S. presidential election context. Through a voting simulation, we first demonstrate that 18 open- and closed-weight LLMs show a political preference for a Democratic nominee over a Republican nominee. We show how this leaning towards the Democratic nominee becomes more pronounced in instruction-tuned models compared to their base versions by analyzing their responses to candidate-policy-related questions. We further explore the potential impact of LLMs on voter choice by conducting an experiment with 935 U.S. registered voters. During the experiments, participants interacted with LLMs (Claude-3, Llama-3, and GPT-4) over five exchanges. The experiment results show a shift in voter choices towards the Democratic nominee following LLM interaction, widening the voting margin from 0.7% to 4.6%, even though the LLMs were not asked to persuade users to support the Democratic nominee during the discourse. This effect is larger than the effects found in many previous studies of the persuasiveness of political campaigns, which have shown minimal impact in presidential elections. Many users also expressed a desire for further political interaction with LLMs. Determining which aspects of LLM interactions drove these shifts in voter choice requires further study. Lastly, we explore how a safety method can make LLMs more politically neutral, while raising the question of whether such neutrality is truly the path forward.
    Date 2024-11-11
    Short Title Hidden Persuaders
    Library Catalog arXiv.org
    URL http://arxiv.org/abs/2410.24190
    Accessed 11/18/2024, 4:02:40 PM
    Extra arXiv:2410.24190
    DOI 10.48550/arXiv.2410.24190
    Repository arXiv
    Archive ID arXiv:2410.24190
    Date Added 11/18/2024, 4:02:40 PM
    Modified 11/18/2024, 4:02:40 PM

    Tags:

    • Computer Science - Computation and Language
    • Computer Science - Computers and Society

    Attachments

    • Full Text PDF
    • Snapshot