- Robots, institutional roles and joint action: some key ethical issues
Abstract
In this article, firstly, cooperative interaction between robots and humans is discussed; specifically, the possibility of human/robot joint action and (relatedly) the possibility of robots occupying institutional roles alongside humans. The discussion makes use of concepts developed in social ontology. Secondly, certain key moral (or ethical—these terms are used interchangeably here) issues arising from this cooperative action are discussed, specifically issues that arise from robots performing (including qua role occupants) morally significant actions jointly with humans. Such morally significant human/robot joint actions, supposing they exist, could potentially range from humans and robots jointly caring for the infirm through to jointly killing enemy combatants.
- Possibilities and challenges in the moral growth of large language models: a philosophical perspective
Abstract
With the rapid expansion of parameters in large language models (LLMs) and the application of Reinforcement Learning with Human Feedback (RLHF), there has been a noticeable growth in the moral competence of LLMs. However, several questions warrant further exploration: Is it really possible for LLMs to fully align with human values through RLHF? How can the current moral growth be philosophically contextualized? We identify similarities between LLMs’ moral growth and Deweyan ethics in terms of the discourse of human moral development. We then attempt to use Dewey’s theory on an experimental basis to examine and further explain the extent to which the current alignment pathway enables the development of LLMs. A beating experiment serves as the foundational case for analyzing LLMs’ moral competence across various parameters and stages, including basic moral cognition, moral dilemma judgment, and moral behavior. The results demonstrate that the moral competence of the GPT series has seen a significant improvement, and Dewey’s Impulse-Habit-Character theory of moral development can be used to explain this: the moral competence of LLMs has been enhanced through experience-based learning, supported by human feedback. Nevertheless, LLMs’ moral development through RLHF remains constrained and does not reach the character stage described by Dewey, possibly due to their lack of self-consciousness. This fundamental difference between humans and LLMs underscores both the limitations of LLMs’ moral growth and the challenges of applying RLHF for AI alignment. It also emphasizes the need for external societal governance and legal regulation.
- Leading good digital lives
Abstract
The paper develops a conception of the good life within a digitalized society. Martha Nussbaum’s capability theory offers an adequate normative framework for that purpose as it systematically integrates the analysis of flourishing human lives with a normative theory of justice. The paper argues that a theory of good digital lives should focus on everyday life, on the impact digitalization has on ordinary actions, routines and corresponding practical knowledge. Based on Nussbaum’s work, the paper develops a concept of digital capabilities. Digital capabilities are combined capabilities: to possess a digital capability, an individual must acquire certain skills and abilities (internal capabilities) and needs access to devices and external infrastructures like internet connections. If societies as a whole and everyday environments are digitalized to a certain degree, the possession of specific digital capabilities is a crucial precondition for a flourishing life. The paper likewise analyzes challenges connected to digital capabilities. Digital structures are constantly changing. In consequence, digital capabilities are never acquired once and for all, but are always precarious and in danger of being lost—with serious consequences for individual everyday lives in digitalized environments. As digital capabilities are crucial for leading a good life, people are entitled to develop and maintain them; these entitlements constitute demands of justice. Using the examples of filling in an online form and digital education, the paper finally illustrates the scale of the institutional changes necessary to meet these demands.
- LLMs beyond the lab: the ethics and epistemics of real-world AI research
Abstract
Research under real-world conditions is crucial to the development and deployment of robust AI systems. Exposing large language models to complex use settings yields knowledge about their performance and impact that cannot be obtained under controlled laboratory conditions or through anticipatory methods. This epistemic need for real-world research is exacerbated by large language models’ opaque internal operations and potential for emergent behavior. However, despite its epistemic value and widespread application, the ethics of real-world AI research has received little scholarly attention. To address this gap, this paper provides an analysis of real-world research with LLMs and generative AI, assessing both its epistemic value and ethical concerns such as the potential for interpersonal and societal research harms, the increased privatization of AI learning, and the unjust distribution of benefits and risks. This paper discusses these concerns alongside four moral principles influencing research ethics standards: non-maleficence, beneficence, respect for autonomy, and distributive justice. I argue that real-world AI research faces challenges in meeting these principles, and that these challenges are exacerbated by the absence or imperfection of current ethical governance. Finally, I chart two distinct but compatible ways forward: through ethical compliance and regulation, and through moral education and cultivation.
- AI responsibility gap: not new, inevitable, unproblematic
Abstract
Who is responsible for a harm caused by AI, or by a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for it. Two central questions in the literature are whether responsibility gaps exist and, if so, whether they’re morally problematic in a way that counts against developing or using AI. While some authors argue that responsibility gaps exist and are morally problematic, others argue that they don’t exist. In this paper, I defend a novel position. First, I argue that current AI doesn’t generate any new kind of concern about responsibility that older technologies don’t already generate. Then, I argue that responsibility gaps exist but are unproblematic.
- Nullius in Explanans: an ethical risk assessment for explainable AI
Abstract
Explanations are conceived to ensure the trustworthiness of AI systems. Yet relying solely on algorithmic solutions, as provided by explainable artificial intelligence (XAI), may fall short of accounting for sociotechnical risks jeopardizing their factuality and informativeness. To mitigate these risks, we delve into the complex landscape of ethical risks surrounding XAI systems and their generated explanations. By employing a literature review combined with rigorous thematic analysis, we uncover a diverse array of technical risks tied to the robustness, fairness, and evaluation of XAI systems. Furthermore, we address a broader range of contextual risks jeopardizing their security, accountability, and reception, alongside other cognitive, social, and ethical concerns about explanations. We advance a multi-layered risk assessment framework in which each layer advances strategies for practical intervention, management, and documentation of XAI systems within organizations. Recognizing the theoretical nature of the framework advanced, we discuss it through a conceptual case study. For the XAI community, our multifaceted investigation represents a path to practically addressing XAI risks while enriching our understanding of the ethical ramifications of incorporating XAI in decision-making processes.
- Urban Digital Twins and metaverses towards city multiplicities: uniting or dividing urban experiences?
Abstract
Urban Digital Twins (UDTs) have become the new buzzword among researchers, planners, policymakers, and industry experts when it comes to designing, planning, and managing sustainable and efficient cities. They encapsulate the latest iteration of the technocratic, ultra-efficient, post-modernist vision of smart cities. However, while more applications branded as UDTs appear around the world, their conceptualization remains ambiguous. Rather than being technically prescriptive about what UDTs are, this article focuses on how they are operationalized and how people in cities interact with them, and on how, enhanced by metaverse ideas, they can deepen societal divides by offering divergent urban experiences based on different stakeholder preferences. The article therefore first repositions the term UDTs by comparing existing concrete, located applications with a focus on interaction and participation, including some that may be closer to the concept of a UDT than is commonly assumed. Based on the components found separately in the different studied cases, it is possible to hypothesize about future, more advanced realizations of UDTs. This enables us to contrast their positive and negative societal impacts. While the development of new immersive, interactive digital worlds can improve planning by drawing on collective knowledge for more inclusive and diverse cities, they pose significant risks: not only the familiar ones regarding privacy, transparency, or fairness, but also social fragmentation based on urban digital multiplicities. The potential benefits and challenges of integrating this multiplicity of UDTs into participatory urban governance emphasize the need for human-centric approaches that promote socio-technical frameworks able to mitigate risks such as social division.
- Mind the gap: bridging the divide between computer scientists and ethicists in shaping moral machines
Abstract
This paper examines the ongoing challenges of interdisciplinary collaboration in Machine Ethics (ME), particularly the integration of ethical decision-making capacities into AI systems. Despite increasing demands for ethical AI, ethicists often remain on the sidelines, contributing primarily to metaethical discussions without directly influencing the development of moral machines. This paper revisits concerns highlighted by Tolmeijer et al. (2020), who identified the pitfall that computer scientists may misinterpret ethical theories without philosophical input. Using the MACHIAVELLI moral benchmark and the Delphi artificial moral agent as case studies, we analyze how these challenges persist. Our analysis indicates that the creators of MACHIAVELLI and Delphi “copy” ethical concepts and embed them in LLMs without sufficiently questioning or challenging those concepts themselves. If an ethical concept causes friction with the computer code, they merely reduce and simplify it while trying to stay as close as possible to the original. We propose that ME should expand its focus to include both interdisciplinary efforts that embed existing ethical work into AI and transdisciplinary research that fosters new interpretations of ethical concepts. Interdisciplinary and transdisciplinary approaches are crucial for creating AI systems that are not only effective but also socially responsible. To enhance collaboration between ethicists and computer scientists, we recommend the use of Socratic Dialogue as a methodological tool, promoting deeper understanding of key terms and more effective integration of ethics in AI development.
- Procedural fairness in algorithmic decision-making: the role of public engagement
Abstract
Despite the widespread use of automated decision-making (ADM) systems, they are often developed without involving the public or those directly affected, leading to concerns about systematic biases that may perpetuate structural injustices. Existing formal fairness approaches primarily focus on statistical outcomes across demographic groups or individual fairness, yet these methods reveal ambiguities and limitations in addressing fairness comprehensively. This paper argues for a holistic approach to algorithmic fairness that integrates procedural fairness, considering both decision-making processes and their outcomes. Procedural fairness emphasizes the importance of fair decision-making procedures, which aligns with theories of relational justice that stress the quality of social relations and power dynamics. We highlight the need for substantive procedural fairness to ensure better outcomes and address forward-looking responsibilities. Additionally, we propose leveraging Public Engagement, a core dimension within the well-established Responsible Research and Innovation framework, to enhance procedural fairness in ADM systems. Our contribution underscores the value of Public Engagement in fostering fairer ADM processes, thereby expanding the current focus beyond technical outcome-based approaches to encompass broader procedural considerations.
- Value-laden challenges for technical standards supporting regulation in the field of AI
Abstract
This perspective paper critically examines value-laden challenges that emerge when using standards to support regulation in the field of artificial intelligence, particularly within the context of the AI Act. It presents a dilemma arising from the inherent vagueness and contestable nature of the AI Act’s requirements. The effective implementation of these requirements necessitates addressing hard normative questions that involve complex value judgments. These questions, such as determining the acceptability of risks or the appropriateness of accuracy levels, need to be addressed in order to achieve compliance with the AI Act. However, this creates a dilemma: either the hard normative questions left open by the AI Act are addressed by the standards or they are addressed by the actors involved in the conformity assessment. This paper argues that the latter approach is more likely. Consequently, regulatory intermediaries such as notified bodies will be responsible for making critical value judgments while evaluating compliance with the AI Act’s value-laden requirements. This shift raises a series of concerns and implications that warrant further exploration.
- Digital sovereignty and artificial intelligence: a normative approach
Abstract
Digital sovereignty is a term increasingly used by academics and policymakers to describe efforts by states, private companies, and citizen groups to assert control over digital technologies. This descriptive conception of digital sovereignty is normatively deficient as it centres discussion on how power is being asserted rather than evaluating whether actions are legitimate. In this article, I argue that digital sovereignty should be understood as a normative concept that centres on authority (i.e., legitimate control). A normative approach to digital sovereignty is beneficial as it supports critical discourse about the desirability of actors’ assertions of control. It is also more closely aligned with traditional definitions of sovereignty that are grounded in ideas of sovereign authority. To operationalise this normative approach to digital sovereignty and demonstrate the deficiencies of a descriptive approach, the role that “Big Tech” companies are playing in controlling artificial intelligence is considered from both perspectives. Through this case study, it is highlighted that Big Tech companies assert a high degree of control (i.e., descriptive digital sovereignty), but that they lack strong input legitimacy and have a questionable amount of output legitimacy. For this reason, it is argued that Big Tech companies should only be considered quasi-sovereigns over AI.
- The repugnant resolution: has Coghlan & Cox resolved the Gamer’s Dilemma?
Abstract
Coghlan and Cox (Between death and suffering: Resolving the gamer’s dilemma. Ethics and Information Technology) offer a new resolution to the Gamer’s Dilemma (Luck, The Gamer’s Dilemma. Ethics and Information Technology). They argue that, while it is fitting for a person committing virtual child molestation to feel self-repugnance, it is not fitting for a person committing virtual murder to feel the same, and the fittingness of this feeling indicates each act’s moral permissibility. The aim of this paper is to determine whether this resolution – the repugnant resolution – successfully resolves the Gamer’s Dilemma. We argue that it does not.
- Large language models and their big bullshit potential
Abstract
Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by treating it as entirely irrelevant). So, do large language models really bullshit? I argue that they can, in the sense of issuing propositional content in response to fact-seeking prompts, without having first assessed that content for truth or falsity. However, I further argue that they need not bullshit, given appropriate guardrails. So, just as with human speakers, the propensity for a large language model to bullshit depends on its own particular make-up.
- A data-centric approach for ethical and trustworthy AI in journalism
Abstract
AI-driven journalism refers to various methods and tools for gathering, verifying, producing, and distributing news information. Their potential is to extend human capabilities and create new forms of augmented journalism. Although scholars agree on the necessity of embedding journalistic values in these systems to make them accountable, less attention has been paid to data quality, even though the accuracy and efficiency of results depend on high-quality data in any machine learning task. Assessing data quality in the context of AI-driven journalism requires a broader, interdisciplinary approach that draws on both the challenges of data quality in machine learning and the ethical challenges of using machine learning in journalism. To better identify these challenges, we propose a data quality assessment framework to support the collection and pre-processing stages in machine learning. It relies on three of the core principles of ethical journalism (accuracy, fairness, and transparency) and contributes to the shift from model-centric to data-centric AI by focusing on data quality in order to reduce reliance on large datasets with errors, make data labelling consistent, and better integrate journalistic knowledge.
- Socially Disruptive Technologies and Conceptual Engineering
Abstract
In this special issue, we focus on the connection between conceptual engineering and the philosophy of technology. Conceptual engineering is the enterprise of introducing, eliminating, or revising words and concepts. The philosophy of technology examines the nature and significance of technology. We investigate how technologies such as AI and genetic engineering (so-called “socially disruptive technologies”) disrupt our practices and concepts, and how conceptual engineering can address these disruptions. We also consider how conceptual engineering can enhance the practice of ethical design. The issue features seven articles that discuss a range of topics, including trust in blockchain applications and the evolving concept of nature. These articles highlight that as technology changes the world and our concepts, conceptual engineering provides invaluable tools and frameworks to reflect on these changes and adapt accordingly.
- AI content detection in the emerging information ecosystem: new obligations for media and tech companies
Abstract
The world is about to be swamped by an unprecedented wave of AI-generated content. We need reliable ways of identifying such content, to supplement the many existing social institutions that enable trust between people and organisations and ensure social resilience. In this paper, we begin by highlighting an important new development: providers of AI content generators have new obligations to support the creation of reliable detectors for the content they generate. These new obligations arise mainly from the EU’s newly finalised AI Act, but they are enhanced by the US President’s recent Executive Order on AI, and by several considerations of self-interest. These new steps towards reliable detection mechanisms are by no means a panacea—but we argue they will usher in a new adversarial landscape, in which reliable methods for identifying AI-generated content are commonly available. In this landscape, many new questions arise for policymakers. Firstly, if reliable AI-content detection mechanisms are available, who should be required to use them? And how should they be used? We argue that new duties arise for media companies and for Web search companies in the deployment of AI-content detectors. Secondly, what broader regulation of the tech ecosystem will maximise the likelihood of reliable AI-content detectors? We argue for a range of new duties relating to provenance-authentication protocols, open-source AI generators, and support for research and enforcement. Along the way, we consider how the production of AI-generated content relates to ‘free expression’, and discuss the important case of content that is generated jointly by humans and AIs.