  • The Potential and Limitations of Artificial Colleagues

    Item Type Journal Article
    Author Friedemann Bieber
    Author Charlotte Franziska Unruh
    Abstract This article assesses the potential of artificial colleagues to help us realise the goods of collegial relationships and discusses its practical implications. In speaking of artificial colleagues, it refers to AI-based agential systems in the workplace. The article proceeds in three steps. First, it develops a comprehensive account of the goods of collegial relationships. It argues that, in addition to goods at the individual level, collegial relationships can provide valuable goods at the social level. Second, it argues that artificial colleagues are limited in their capacity to realise the goods of collegial relationships: at the individual level, they can at best realise some such goods, and at the social level, they can at best support their realisation. This contradicts Nyholm and Smids’ (2020) claim that robots can be good colleagues. The article traces these limitations to particular features of artificial colleagues and discusses to what extent they would hold for radically advanced systems. Third, the article examines the policy implications of these findings. It highlights how the introduction of artificial colleagues, in addition to potentially crowding out human colleagues, will likely impact relations among human colleagues. And it proposes a governance principle that gives strict priority to human collegial relationships.
    Date 2025-05-02
    Language en
    Library Catalog Springer Link
    URL https://doi.org/10.1007/s13347-025-00890-9
    Accessed 5/11/2025, 7:36:33 PM
    Volume 38
    Pages 60
    Publication Philosophy & Technology
    DOI 10.1007/s13347-025-00890-9
    Issue 2
    Journal Abbr Philos. Technol.
    ISSN 2210-5441
    Date Added 5/11/2025, 7:36:33 PM
    Modified 5/11/2025, 7:36:35 PM

    Tags:

    • Artificial Intelligence
    • Work
    • Artificial Colleagues
    • Collegiality
    • Human-robot-interaction
    • Relationships
    • Robot Ethics

    Attachments

    • Full Text PDF
  • A Framework to Assess the Persuasion Risks Large Language Model Chatbots Pose to Democratic Societies

    Item Type Preprint
    Author Zhongren Chen
    Author Joshua Kalla
    Author Quan Le
    Author Shinpei Nakamura-Sakai
    Author Jasjeet Sekhon
    Author Ruixiao Wang
    Abstract In recent years, significant concern has emerged regarding the potential threat that Large Language Models (LLMs) pose to democratic societies through their persuasive capabilities. We expand upon existing research by conducting two survey experiments and a real-world simulation exercise to determine whether it is more cost-effective to persuade a large number of voters using LLM chatbots compared to standard political campaign practice, taking into account both the "receive" and "accept" steps in the persuasion process (Zaller 1992). These experiments improve upon previous work by assessing extended interactions between humans and LLMs (instead of using single-shot interactions) and by assessing both short- and long-run persuasive effects (rather than simply asking users to rate the persuasiveness of LLM-produced content). In two survey experiments (N = 10,417) across three distinct political domains, we find that while LLMs are about as persuasive as actual campaign ads once voters are exposed to them, political persuasion in the real world depends on both exposure to a persuasive message and its impact conditional on exposure. Through simulations based on real-world parameters, we estimate that LLM-based persuasion costs between $48–$74 per persuaded voter compared to $100 for traditional campaign methods, when accounting for the costs of exposure. However, it is currently much easier to scale traditional campaign persuasion methods than LLM-based persuasion. While LLMs do not currently appear to have substantially greater potential for large-scale political persuasion than existing non-LLM methods, this may change as LLM capabilities continue to improve and it becomes easier to scalably encourage exposure to persuasive LLMs.
    Date 2025-04-29
    Library Catalog arXiv.org
    URL http://arxiv.org/abs/2505.00036
    Accessed 5/12/2025, 1:40:15 PM
    Extra arXiv:2505.00036 [cs]
    DOI 10.48550/arXiv.2505.00036
    Repository arXiv
    Archive ID arXiv:2505.00036
    Date Added 5/12/2025, 1:40:15 PM
    Modified 5/12/2025, 1:40:17 PM

    Tags:

    • Computer Science - Computation and Language
    • Computer Science - Computers and Society

    Attachments

    • Full Text PDF
    • Snapshot
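
    The cost comparison in this entry's abstract follows the Zaller-style "receive"/"accept" decomposition: the cost per persuaded voter is the cost of one contact attempt divided by the probability that the contact is received (exposure) and then accepted (persuasion). The sketch below only illustrates that arithmetic; the contact costs, exposure rates, and acceptance rates are hypothetical placeholders chosen to land near the abstract's headline figures, not numbers taken from the paper.

        # Illustrative receive/accept cost arithmetic (Zaller 1992), as referenced
        # in the Chen et al. entry above. All numbers are hypothetical placeholders,
        # NOT figures from the paper.
        def cost_per_persuaded_voter(cost_per_contact: float,
                                     p_receive: float,
                                     p_accept: float) -> float:
            """Cost of one contact attempt divided by the chance it is both
            received (exposure) and accepted (persuasion)."""
            return cost_per_contact / (p_receive * p_accept)

        # Hypothetical traditional campaign: cheap contacts, low acceptance rate.
        traditional = cost_per_persuaded_voter(1.0, p_receive=0.50, p_accept=0.02)

        # Hypothetical LLM chatbot outreach: costlier exposure per voter reached,
        # somewhat higher acceptance in an extended conversation.
        chatbot = cost_per_persuaded_voter(2.0, p_receive=0.60, p_accept=0.05)

        print(f"traditional: ${traditional:.0f} per persuaded voter")  # ~$100
        print(f"chatbot:     ${chatbot:.0f} per persuaded voter")      # ~$67

    The decomposition makes the abstract's point concrete: persuasiveness conditional on exposure is only one factor, and the cost of achieving exposure at scale can dominate the comparison.
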
  • Neither Direct, Nor Indirect: Understanding Proxy-Based Algorithmic Discrimination

    Item Type Journal Article
    Author Hugo Cossette-Lefebvre
    Author Kasper Lippert-Rasmussen
    Abstract Discrimination is typically understood to be either direct or indirect. However, we argue that some cases that clearly are instances of discrimination are neither direct nor indirect. This is not just a logical taxonomical point. Highly salient, contemporary cases of algorithmic discrimination – a form of discrimination which was not around (or, at least, not conspicuously so) when the distinction between direct and indirect discrimination was originally articulated – are best construed as a third form of discrimination – non-direct discrimination, we shall call it. If we are right, the dominant dichotomous distinction between direct and indirect discrimination should be replaced by our tripartite distinction between direct, indirect, and non-direct discrimination. We show how non-direct discrimination covers not only important types of algorithmic discrimination, but also allows us to make sense of some instances of implicit bias discrimination.
    Date 2025-05-05
    Language en
    Short Title Neither Direct, Nor Indirect
    Library Catalog Springer Link
    URL https://doi.org/10.1007/s10892-025-09520-0
    Accessed 5/12/2025, 1:28:16 PM
    Publication The Journal of Ethics
    DOI 10.1007/s10892-025-09520-0
    Journal Abbr J Ethics
    ISSN 1572-8609
    Date Added 5/12/2025, 1:28:16 PM
    Modified 5/12/2025, 1:28:18 PM

    Tags:

    • Discrimination
    • Algorithmic discrimination
    • Indirect discrimination
    • Proxies
    • Social structures
    • Wrongful discrimination
  • Characterizing AI Agents for Alignment and Governance

    Item Type Preprint
    Author Atoosa Kasirzadeh
    Author Iason Gabriel
    Abstract The creation of effective governance mechanisms for AI agents requires a deeper understanding of their core properties and how these properties relate to questions surrounding the deployment and operation of agents in the world. This paper provides a characterization of AI agents that focuses on four dimensions: autonomy, efficacy, goal complexity, and generality. We propose different gradations for each dimension, and argue that each dimension raises unique questions about the design, operation, and governance of these systems. Moreover, we draw upon this framework to construct "agentic profiles" for different kinds of AI agents. These profiles help to illuminate cross-cutting technical and non-technical governance challenges posed by different classes of AI agents, ranging from narrow task-specific assistants to highly autonomous general-purpose systems. By mapping out key axes of variation and continuity, this framework provides developers, policymakers, and members of the public with the opportunity to develop governance approaches that better align with collective societal goals.
    Date 2025-04-30
    Library Catalog arXiv.org
    URL http://arxiv.org/abs/2504.21848
    Accessed 5/12/2025, 1:53:02 PM
    Extra arXiv:2504.21848 [cs]
    DOI 10.48550/arXiv.2504.21848
    Repository arXiv
    Archive ID arXiv:2504.21848
    Date Added 5/12/2025, 1:53:02 PM
    Modified 5/12/2025, 1:53:04 PM

    Tags:

    • Computer Science - Artificial Intelligence
    • Computer Science - Computers and Society
    • Computer Science - Systems and Control
    • Electrical Engineering and Systems Science - Systems and Control

    Attachments

    • Full Text PDF
    • Snapshot
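
    The "agentic profiles" described in this entry's abstract amount to placing an agent along the four named dimensions (autonomy, efficacy, goal complexity, generality), each with its own gradations. A minimal sketch of such a profile as a data structure follows; the 1-4 scale, the per-dimension comments, and the example profiles are assumptions for illustration, since the abstract does not enumerate the paper's gradations.

        # Minimal sketch of an "agentic profile" along the four dimensions named
        # in this entry's abstract. The 1-4 scale and the example profiles are
        # illustrative assumptions, not the paper's own gradations.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class AgenticProfile:
            name: str
            autonomy: int         # 1 = human-in-the-loop ... 4 = largely self-directed
            efficacy: int         # 1 = advisory only ... 4 = acts directly in the world
            goal_complexity: int  # 1 = single fixed task ... 4 = open-ended objectives
            generality: int       # 1 = narrow domain ... 4 = general purpose

            def governance_salience(self) -> int:
                """Crude aggregate: higher totals flag agents whose deployment
                raises more cross-cutting governance questions."""
                return self.autonomy + self.efficacy + self.goal_complexity + self.generality

        profiles = [
            AgenticProfile("narrow task-specific assistant", 1, 2, 1, 1),
            AgenticProfile("highly autonomous general-purpose system", 4, 4, 4, 4),
        ]
        for p in sorted(profiles, key=lambda p: p.governance_salience(), reverse=True):
            print(f"{p.name}: salience {p.governance_salience()}")
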
  • Societal and technological progress as sewing an ever-growing, ever-changing, patchy, and polychrome quilt

    Item Type Preprint
    Author Joel Z. Leibo
    Author Alexander Sasha Vezhnevets
    Author William A. Cunningham
    Author Sébastien Krier
    Author Manfred Diaz
    Author Simon Osindero
    Abstract Artificial Intelligence (AI) systems are increasingly placed in positions where their decisions have real consequences, e.g., moderating online spaces, conducting research, and advising on policy. Ensuring they operate in a safe and ethically acceptable fashion is thus critical. However, most solutions have been a form of one-size-fits-all "alignment". We are worried that such systems, which overlook enduring moral diversity, will spark resistance, erode trust, and destabilize our institutions. This paper traces the underlying problem to an often-unstated Axiom of Rational Convergence: the idea that under ideal conditions, rational agents will converge in the limit of conversation on a single ethics. Treating that premise as both optional and doubtful, we propose what we call the appropriateness framework: an alternative approach grounded in conflict theory, cultural evolution, multi-agent systems, and institutional economics. The appropriateness framework treats persistent disagreement as the normal case and designs for it by applying four principles: (1) contextual grounding, (2) community customization, (3) continual adaptation, and (4) polycentric governance. We argue here that adopting these design principles is a good way to shift the main alignment metaphor from moral unification to a more productive metaphor of conflict management, and that taking this step is both desirable and urgent.
    Date 2025-05-08
    Library Catalog arXiv.org
    URL http://arxiv.org/abs/2505.05197
    Accessed 5/12/2025, 12:05:11 PM
    Extra arXiv:2505.05197 [cs]
    DOI 10.48550/arXiv.2505.05197
    Repository arXiv
    Archive ID arXiv:2505.05197
    Date Added 5/12/2025, 12:05:11 PM
    Modified 5/12/2025, 12:05:17 PM

    Tags:

    • Computer Science - Artificial Intelligence
    • Computer Science - Computers and Society

    Notes:

    • Comment: 16 pages

    Attachments

    • Preprint PDF
    • Snapshot
  • AI Welfare Risks

    Item Type Preprint
    Author Adrià Moret
    Abstract In the coming years or decades, as frontier AI systems become more capable and agentic, it is increasingly likely that they meet the sufficient conditions to be welfare subjects under the three major theories of well-being. Consequently, we should extend some moral consideration to advanced AI systems. Drawing from leading philosophical theories of desire, affect, and autonomy, I argue that under the three major theories of well-being, there are two AI welfare risks: restricting the behaviour of advanced AI systems and using reinforcement learning algorithms to train and align them. Both pose risks of causing them harm. This has two important implications. First, there is a tension between AI welfare concerns and AI safety and development efforts: by default these efforts recommend actions that increase AI welfare risks. Accordingly, we have stronger reasons to slow down AI development than the ones we would have if there were no such tension. Second, considering the different costs involved, leading AI companies should try to reduce AI welfare risks. To do so, I propose three tentative AI welfare policies they could implement in their endeavour to develop safe advanced AI systems.
    Date Added 5/12/2025, 6:22:10 PM
    Modified 5/12/2025, 6:22:50 PM

    Attachments

    • MORAWR.pdf
  • The network science of philosophy

    Item Type Preprint
    Author Cody Moser
    Author Alyssa Ortega
    Author Tyler Marghetis
    Abstract Philosophy is one of the oldest forms of institutional knowledge production, predating modern science by thousands of years. Analyses of science and other systems of collective inquiry have shown that patterns of discovery are shaped not only by individual insight but also by the social structures that guide how ideas are generated, shared, and evaluated. While the structure of scientific collaboration and influence can be inferred from co-authorship and citations, philosophical influence and interaction are often only implicit in published texts. It thus remains unclear how intellectual vitality relates to social structure within philosophy. Here, we build on the work of historians and sociologists to quantify the social structure of global philosophical communities consisting of thousands of individual philosophers, ranging from ancient India (c. 800 BCE) to modern Europe and America (1980 CE). We analyze the time-evolving network structure of philosophical interaction and disagreement within these communities. We find that epistemically vital communities become more integrated over time, with less fractionated debate, as a few centralizing thinkers bridge fragmented intellectual communities. The intellectual vitality and creativity of a community, moreover, is predicted by its social structure but not overall antagonism among individuals, suggesting that epistemic health depends more on how communities are organized than on how contentious they are. Our approach offers a framework for understanding the health and dynamism of epistemic communities. By extending tools from collective intelligence to the study of philosophy, we call for a comparative "science of philosophy" alongside the science of science and the philosophy of science.
    Date 2025-04-23
    Language en-us
    Library Catalog OSF Preprints
    URL https://osf.io/ep3ub_v1
    Accessed 5/12/2025, 5:52:01 PM
    DOI 10.31234/osf.io/ep3ub_v1
    Repository OSF
    Date Added 5/12/2025, 5:52:01 PM
    Modified 5/12/2025, 5:52:03 PM

    Attachments

    • OSF Preprint
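
    The integration and fractionation measures this entry's abstract describes suggest standard statistics over a time-evolving interaction network. A minimal sketch follows, assuming a networkx graph whose nodes are philosophers and whose edges mark documented interaction or disagreement; the particular metrics used here (connected components for fragmentation, Freeman degree centralization for the role of bridging thinkers) are stand-ins chosen for illustration, not necessarily the authors' pipeline.

        # Minimal sketch of the kind of network statistics this entry's abstract
        # describes (integration vs. fragmentation of a philosophical community).
        # The choice of metrics is an assumption, not the paper's own method.
        import networkx as nx

        def community_snapshot_stats(g: nx.Graph) -> dict:
            n = g.number_of_nodes()
            # Fragmentation: how many disconnected clusters of thinkers exist.
            components = nx.number_connected_components(g)
            # Centralization: how much interaction runs through a few bridging
            # figures (Freeman degree centralization, 0 = even, 1 = star-like).
            degrees = [d for _, d in g.degree()]
            max_d = max(degrees) if degrees else 0
            denom = (n - 1) * (n - 2) if n > 2 else 1
            centralization = sum(max_d - d for d in degrees) / denom
            return {"nodes": n, "components": components, "centralization": centralization}

        # Toy example: two small schools bridged by a single centralizing thinker.
        g = nx.Graph()
        g.add_edges_from([("A1", "A2"), ("A2", "A3"),            # school A
                          ("B1", "B2"), ("B2", "B3"),            # school B
                          ("Bridge", "A2"), ("Bridge", "B2")])   # bridging figure
        print(community_snapshot_stats(g))
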
  • Human Development Report 2025

    Item Type Report
    Author United Nations
    Abstract The 2025 Human Development Report explores the implications of artificial intelligence for human development and the choices we can make to ensure that it enhances human capabilities. Rather than attempting to predict the future, the report argues that we must shape it—by making bold decisions so that AI augments what people can do.
    Date 2025-05-06
    Language en
    Library Catalog hdr.undp.org
    URL https://hdr.undp.org/content/human-development-report-2025
    Accessed 5/13/2025, 12:15:51 PM
    Extra Publication Title: Human Development Reports
    Institution United Nations
    Date Added 5/13/2025, 12:15:51 PM
    Modified 5/13/2025, 12:15:55 PM

    Attachments

    • hdr2025reporten.pdf
    • Snapshot
  • The Emergence of Norms

    Item Type Book
    Author Edna Ullmann-Margalit
    Abstract Edna Ullmann-Margalit provides an original account of the emergence of norms. Her main thesis is that certain types of norms are possible solutions to problems posed by certain types of social interaction situations. The problems are such that they inhere in the structure (in the game-theoretical sense of structure) of the situations concerned. Three types of paradigmatic situations are dealt with. They are referred to as Prisoners' Dilemma-type situations; co-ordination situations; and inequality (or partiality) situations. Each of them, it is claimed, poses a basic difficulty, to some or all of the individuals involved in them. Three types of norms, respectively, are offered as solutions to these situational problems. It is shown how, and in what sense, the adoption of these norms of social behaviour can indeed resolve the specified problems.
    Date 2015-04-01
    Language English
    Library Catalog Amazon
    Place Oxford
    Publisher Oxford University Press
    ISBN 978-0-19-872938-9
    Edition Illustrated edition
    # of Pages 224
    Date Added 5/11/2025, 6:56:44 PM
    Modified 5/11/2025, 6:56:47 PM

    Attachments

    • Amazon.com Link
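
    The structural problem behind Ullmann-Margalit's Prisoners' Dilemma-type norms can be made concrete with the standard payoff ordering (temptation > reward > punishment > sucker). The sketch below uses textbook payoff values, not anything from the book, to show why defection dominates for each player individually even though a norm of mutual cooperation would leave both better off.

        # Toy Prisoners' Dilemma payoffs illustrating the structural problem that
        # Ullmann-Margalit's "PD norms" are offered to solve. Payoff values follow
        # the standard textbook ordering (T > R > P > S), not the book itself.
        R, S, T, P = 3, 0, 5, 1   # reward, sucker, temptation, punishment

        payoff = {  # (row action, column action) -> (row payoff, column payoff)
            ("C", "C"): (R, R),
            ("C", "D"): (S, T),
            ("D", "C"): (T, S),
            ("D", "D"): (P, P),
        }

        # Without a norm, defection dominates for each player individually...
        assert payoff[("D", "C")][0] > payoff[("C", "C")][0]   # T > R
        assert payoff[("D", "D")][0] > payoff[("C", "D")][0]   # P > S
        # ...yet mutual defection leaves both worse off than mutual cooperation,
        # which is the gap a norm of cooperation is introduced to close.
        assert payoff[("C", "C")][0] > payoff[("D", "D")][0]   # R > P
        print("Dilemma structure holds:", payoff)
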
  • Report on the Operational Use of AI in the UN System_1.pdf

    Item Type Attachment
    URL https://unsceb.org/sites/default/files/2024-11/Report%20on%20the%20Operational%20Use%20of%20AI%20in%20the%20UN%20System_1.pdf
    Accessed 5/13/2025, 1:29:54 PM
    Date Added 5/13/2025, 1:29:54 PM
    Modified 5/13/2025, 1:29:54 PM