Item Type | Web Page |
---|---|
Abstract | This report, prepared by the KGI Expert Working Group on Recommender Systems, offers comprehensive insights and policy guidance aimed at optimizing recommender systems for long-term user value and high-quality experiences. Drawing on a multidisciplinary research base and industry expertise, the report highlights key challenges in the current design and regulation of recommender systems and proposes actionable solutions for policymakers and product designers. A key concern is that some platforms optimize their recommender systems to maximize certain forms of predicted engagement, which can prioritize clicks and likes over stronger signals of long-term user value. Maximizing the chances that users will click, like, share, and view content this week, this month, and this quarter aligns well with the business interests of tech platforms monetized through advertising. Product teams are rewarded for showing short-term gains in platform usage, and financial markets and investors reward companies that can deliver large audiences to advertisers. |
Date | 2025-03-04 |
Language | en-US |
Short Title | Better Feeds |
URL | https://kgi.georgetown.edu/research-and-commentary/better-feeds/ |
Accessed | 3/13/2025, 8:35:10 AM |
Website Title | Knight-Georgetown Institute |
Date Added | 3/13/2025, 8:35:10 AM |
Modified | 3/13/2025, 8:35:52 AM |
Item Type | Preprint |
---|---|
Author | Francis Rhys Ward |
Abstract | I am a person and so are you. Philosophically we sometimes grant personhood to non-human animals, and entities such as sovereign states or corporations can legally be considered persons. But when, if ever, should we ascribe personhood to AI systems? In this paper, we outline necessary conditions for AI personhood, focusing on agency, theory-of-mind, and self-awareness. We discuss evidence from the machine learning literature regarding the extent to which contemporary AI systems, such as language models, satisfy these conditions, finding the evidence surprisingly inconclusive. If AI systems can be considered persons, then typical framings of AI alignment may be incomplete. Whereas agency has been discussed at length in the literature, other aspects of personhood have been relatively neglected. AI agents are often assumed to pursue fixed goals, but AI persons may be self-aware enough to reflect on their aims, values, and positions in the world and thereby induce their goals to change. We highlight open research directions to advance the understanding of AI personhood and its relevance to alignment. Finally, we reflect on the ethical considerations surrounding the treatment of AI systems. If AI systems are persons, then seeking control and alignment may be ethically untenable. |
Date | 2025-01-23 |
Library Catalog | arXiv.org |
URL | http://arxiv.org/abs/2501.13533 |
Accessed | 3/13/2025, 8:07:31 AM |
Extra | arXiv:2501.13533 [cs] |
DOI | 10.48550/arXiv.2501.13533 |
Repository | arXiv |
Archive ID | arXiv:2501.13533 |
Date Added | 3/13/2025, 8:07:31 AM |
Modified | 3/13/2025, 8:07:33 AM |
Comment | AAAI-25 AI Alignment Track |
Item Type | Journal Article |
---|---|
Author | René van Woudenberg |
Author | Chris Ranalli |
Author | Daniel Bracker |
Abstract | Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and cannot take responsibility for the texts it produces. Three perspectives are compared: liberalism (which ascribes authorship to ChatGPT), conservatism (which denies ChatGPT's authorship for normative and metaphysical reasons), and moderatism (which treats ChatGPT as if it possesses authorship without committing to the existence of mental states like knowledge, belief, or intention). We conclude that conservatism provides a more nuanced understanding of authorship in AI than liberalism and moderatism, without denying the significant potential, influence, or utility of AI technologies such as ChatGPT. |
Date | 2024-02-26 |
Language | en |
Short Title | Authorship and ChatGPT |
Library Catalog | Springer Link |
URL | https://doi.org/10.1007/s13347-024-00715-1 |
Accessed | 3/12/2025, 8:59:33 AM |
Volume | 37 |
Pages | 34 |
Publication | Philosophy & Technology |
DOI | 10.1007/s13347-024-00715-1 |
Issue | 1 |
Journal Abbr | Philos. Technol. |
ISSN | 2210-5441 |
Date Added | 3/12/2025, 8:59:33 AM |
Modified | 3/12/2025, 8:59:42 AM |
Item Type | Journal Article |
---|---|
Author | Eva Schmidt |
Author | Paul Martin Putora |
Author | Rianne Fijten |
Abstract | Artificial intelligence (AI) systems used in medicine are often very reliable and accurate, but at the price of their being increasingly opaque. This raises the question whether a system’s opacity undermines the ability of medical doctors to acquire knowledge on the basis of its outputs. We investigate this question by focusing on a case in which a patient’s risk of recurring breast cancer is predicted by an opaque AI system. We argue that, given the system’s opacity, as well as the possibility of malfunctioning AI systems, practitioners’ inability to check the correctness of their outputs, and the high stakes of such cases, the knowledge of medical practitioners is indeed undermined. They are lucky to form true beliefs based on the AI systems’ outputs, and knowledge is incompatible with luck. We supplement this claim with a specific version of the safety condition on knowledge, Safety*. We argue that, relative to the perspective of the medical doctor in our example case, his relevant beliefs could easily be false, and this despite his evidence that the AI system functions reliably. Assuming that Safety* is necessary for knowledge, the practitioner therefore doesn’t know. We address three objections to our proposal before turning to practical suggestions for improving the epistemic situation of medical doctors. |
Date | 2025-01-13 |
Language | en |
Short Title | The Epistemic Cost of Opacity |
Library Catalog | Springer Link |
URL | https://doi.org/10.1007/s13347-024-00834-9 |
Accessed | 3/15/2025, 8:50:27 AM |
Volume | 38 |
Pages | 5 |
Publication | Philosophy & Technology |
DOI | 10.1007/s13347-024-00834-9 |
Issue | 1 |
Journal Abbr | Philos. Technol. |
ISSN | 2210-5441 |
Date Added | 3/15/2025, 8:50:27 AM |
Modified | 3/15/2025, 8:50:29 AM |
Item Type | Journal Article |
---|---|
Author | Justin D’Ambrosio |
Abstract | In conversation, speakers often felicitously underspecify the content of their speech acts, leaving audiences uncertain about what they mean. This paper discusses how such underspecification and the resulting uncertainty can be used deliberately, and manipulatively, to achieve a range of noncommunicative conversational goals—including minimizing conversational conflict, manufacturing acceptance or perceived agreement, and gaining or bolstering status. I argue that speakers who manipulatively underspecify their speech acts in this way are engaged in a mock speech act that I call pied piping. In pied piping, a speaker leaves open a range of interpretations for a speech act while preserving both plausible deniability and plausible assertability; depending on how the audience responds, the speaker can retroactively commit to any of the interpretations left open, and so try to retroactively update the common ground. I go on to develop a model of how pied-piping functions that incorporates game-theoretic elements into the more traditional common-ground framework in order to capture the uncertainty of updating. |
Language | en |
Library Catalog | Zotero |
Date Added | 3/12/2025, 10:43:53 AM |
Modified | 3/12/2025, 10:43:57 AM |
Item Type | Preprint |
---|---|
Author | Justin B. Bullock |
Author | Samuel Hammond |
Author | Seb Krier |
Abstract | This paper examines how artificial general intelligence (AGI) could fundamentally reshape the delicate balance between state capacity and individual liberty that sustains free societies. Building on Acemoglu and Robinson's 'narrow corridor' framework, we argue that AGI poses distinct risks of pushing societies toward either a 'despotic Leviathan' through enhanced state surveillance and control, or an 'absent Leviathan' through the erosion of state legitimacy relative to AGI-empowered non-state actors. Drawing on public administration theory and recent advances in AI capabilities, we analyze how these dynamics could unfold through three key channels: the automation of discretionary decision-making within agencies, the evolution of bureaucratic structures toward system-level architectures, and the transformation of democratic feedback mechanisms. Our analysis reveals specific failure modes that could destabilize liberal institutions. Enhanced state capacity through AGI could enable unprecedented surveillance and control, potentially entrenching authoritarian practices. Conversely, rapid diffusion of AGI capabilities to non-state actors could undermine state legitimacy and governability. We examine how these risks manifest differently at the micro level of individual bureaucratic decisions, the meso level of organizational structure, and the macro level of democratic processes. To preserve the narrow corridor of liberty, we propose a governance framework emphasizing robust technical safeguards, hybrid institutional designs that maintain meaningful human oversight, and adaptive regulatory mechanisms. |
Date | 2025-02-14 |
Library Catalog | arXiv.org |
URL | http://arxiv.org/abs/2503.05710 |
Accessed | 3/12/2025, 10:40:59 AM |
Extra | arXiv:2503.05710 [cs] |
DOI | 10.48550/arXiv.2503.05710 |
Repository | arXiv |
Archive ID | arXiv:2503.05710 |
Date Added | 3/12/2025, 10:40:59 AM |
Modified | 3/12/2025, 10:41:12 AM |
Comment | 40 pages |
Item Type | Preprint |
---|---|
Author | Michael Timothy Bennett |
Abstract | Are biological self-organising systems more 'intelligent' than artificial intelligence? If so, why? We frame intelligence as adaptability, and explore this question using a mathematical formalism of causal learning. We compare systems by how they delegate control, illustrating how this applies with examples of computational, biological, human organisational and economic systems. We formally show the scale-free, dynamic, bottom-up architecture of biological self-organisation allows for more efficient adaptation than the static top-down architecture typical of computers, because adaptation can take place at lower levels of abstraction. Artificial intelligence rests on a static, human-engineered 'stack'. It only adapts at high levels of abstraction. To put it provocatively, a static computational stack is like an inflexible bureaucracy. Biology is more 'intelligent' because it delegates adaptation down the stack. We call this multilayer-causal-learning. It inherits a flaw of biological systems. Cells become cancerous when isolated from the collective informational structure, reverting to primitive transcriptional behaviour. We show states analogous to cancer occur when collectives are too tightly constrained. To adapt to adverse conditions, control should be delegated to the greatest extent, like the doctrine of mission-command. Our result shows how to design more robust systems and lays a mathematical foundation for future empirical research. |
Date | 2025-02-03 |
Language | en-us |
Library Catalog | OSF Preprints |
URL | https://osf.io/e6fky_v2 |
Accessed | 3/15/2025, 8:30:54 AM |
DOI | 10.31219/osf.io/e6fky_v2 |
Repository | OSF |
Date Added | 3/15/2025, 8:30:54 AM |
Modified | 3/15/2025, 8:30:57 AM |
Item Type | Preprint |
---|---|
Author | Anthony Aguirre |
Abstract | Dramatic advances in artificial intelligence over the past decade (for narrow-purpose AI) and the last several years (for general-purpose AI) have transformed AI from a niche academic field to the core business strategy of many of the world's largest companies, with hundreds of billions of dollars in annual investment in the techniques and technologies for advancing AI's capabilities. We now come to a critical juncture. As the capabilities of new AI systems begin to match and exceed those of humans across many cognitive domains, humanity must decide: how far do we go, and in what direction? This essay argues that we should keep the future human by closing the "gates" to smarter-than-human, autonomous, general-purpose AI -- sometimes called "AGI" -- and especially to the highly-superhuman version sometimes called "superintelligence." Instead, we should focus on powerful, trustworthy AI tools that can empower individuals and transformatively improve human societies' abilities to do what they do best. |
Date | 2025-03-07 |
Short Title | Keep the Future Human |
Library Catalog | arXiv.org |
URL | http://arxiv.org/abs/2311.09452 |
Accessed | 3/13/2025, 8:18:13 AM |
Extra | arXiv:2311.09452 [cs] |
DOI | 10.48550/arXiv.2311.09452 |
Repository | arXiv |
Archive ID | arXiv:2311.09452 |
Date Added | 3/13/2025, 8:18:13 AM |
Modified | 3/13/2025, 8:18:16 AM |
Comment | 62 pages, 2 figures, 3 appendices. This is a total rewrite and major expansion of a previous version entitled "Close the Gates." |