Our colleague Joan Llorca, together with Belén Liedo and María Victoria Martínez-López (FiloLab collaborator), has published a new article in the journal AI & Society: Journal of Knowledge, Culture and Communication, titled "Trusting the (un)trustworthy? A new conceptual approach to the ethics of social care robots".
We reproduce the article's abstract below; the full text can be read through this link.
Social care robots (SCR) have come to the forefront of the ethical debate. While the possibility of robots helping us tackle the global care crisis is promising for some, others have raised concerns about the adequacy of AI-driven technologies for the ethically complex world of care. Such robots do not seem able to provide the comprehensive care many people demand and deserve; at least, they do not seem able to engage in humane, emotion-laden and significant care relationships. In this article, we propose to focus the debate on a particularly relevant aspect of care: trust. We will argue that, to answer the question of whether SCR are ethically acceptable, we must first address another question, namely, whether they are trustworthy. To this end, we propose a three-level model of trust analysis: rational, motivational, and personal or intimate. We will argue that some relevant forms of caregiving (especially care for highly dependent persons) require a very personal or intimate type of care that distinguishes them from other contexts. Nevertheless, this is not the only type of trust present in care spaces. We will contend that, while we cannot have intimate or highly personal relationships with robots, they are trustworthy at the rational and thin motivational levels. The fact that robots cannot engage in some (personal) aspects of care does not mean that they cannot be useful in care contexts. We will defend that critical approaches to trusting SCR have been sustained by two misconceptions, and we propose a new model for analyzing their moral acceptability: sociotechnical trust in teams of humans and robots.