AI doesn’t care – Reading Vanessa Nurock’s ‘Care in an Era of New Technologies and Artificial Intelligence’.

In January 2025, Vanessa Nurock published her book Care in an Era of New Technologies and Artificial Intelligence: Relationships in a Connected World with Peeters Publishers, Louvain (Belgium). In it, she explores whether the rise of artificial intelligence might change not only our practical approach to care, but also the very fabric of our moral and emotional relationships.

What begins as a technical question—how algorithms and social robots can support care processes—gradually unfolds into a fundamental reflection on what “human” means in an age when we pretend to teach machines to care about us.

Nurock situates the accelerating development of nanotechnologies, biotechnologies, information science, and cognitive science (collectively NBIC) within a broader trajectory of historical transformation. She argues that these innovations are not neutral tools but deeply normative forces that reshape social life, moral sensibility, and democratic relations.

The book’s central claim is that the ideology of technological inevitability—rooted in what she terms the “artificialist fallacy”—must be confronted by an alternative framework: a polethics of care that brings ethics, politics, and poetics into dialogue. This review will not cover all facets of this important work, but will focus on the tensions between care and AI.

A Polethical Moment: The Reversed Enlightenment

The book opens with an ambitious historical claim: we are living through a technological and social revolution as transformative as the invention of print and the rise of Enlightenment rationality. Yet, unlike those earlier revolutions, which Nurock associates with the expansion of autonomy and the pursuit of democratic equality, today’s upheaval risks undoing those very gains. Instead of emancipating individuals, new technologies often entrench patriarchal hierarchy, exacerbate inequalities, and subtly erode the foundations of human relationships.

At the core of Nurock’s analysis is the notion of connexion—a networked form of linkage that differs fundamentally from genuine relation. Connexion is quantitative, instrumental, and easily subject to technical standardization: the digital node, the genetic sequence, the data point. Relation, by contrast, is qualitative, reciprocal, and ethically rich. It is grounded in empathy, recognition, and mutual responsibility. Modern NBIC technologies thrive on connexions while undermining relations. They offer the illusion of connectivity without the substance of relationality.

This diagnosis resonates strongly with Sherry Turkle’s work, particularly Alone Together and Reclaiming Conversation, which Nurock explicitly cites. Turkle argues that digital media foster superficial connection at the cost of face-to-face empathy. Nurock extends this critique into broader philosophical terrain, suggesting that the erosion of relations feeds directly into the persistence of patriarchy.

Drawing on Carol Gilligan, Naomi Snider, and Niobe Way, she demonstrates that relational loss is not only a psychological phenomenon but also a cultural and political one. Patriarchy thrives when empathy diminishes, and new technologies risk amplifying this vicious cycle. Thus, the “reversed Enlightenment” is not merely a metaphor but a political reality: the very faculties that supported democratic emancipation—empathy, moral judgment, critical reflection—are being weakened by technological mediation.

Nurock’s framework of polethics is particularly striking here. Borrowing from Michel Deguy’s intertwining of poetics, ethics, and politics, she insists that our responses to technological transformation must be simultaneously imaginative, normative, and institutional. Purely technical analyses of NBIC technologies are insufficient; we need a language that captures their symbolic power, their cultural narratives, and their ability to reconfigure what counts as obvious. In this sense, Nurock’s polethics stands as both critique and constructive proposal, urging us to invent futures grounded in care rather than technological inevitability.

AI for Care? The Politics of the “As If”

One of the book’s most interesting sections concerns artificial intelligence in contexts of caregiving. Here, Nurock engages with the increasing deployment of social robots designed to assist children, the elderly, or patients with chronic illness. A paradigmatic case is Meyko, a robot that reminds children to take medication and interacts with them in ways that mimic parental concern. At first glance, such technologies seem benign, even beneficial. But Nurock argues that they risk collapsing the crucial distinction between “taking care of” and “caring about.”

“Taking care of” can, to some extent, be automated. A robot can remind, monitor, or administer. But “caring about” involves empathy, moral imagination, and vulnerability—capacities that machines cannot possess. When a child takes medicine “to please” Meyko, the danger is not merely misplaced affection but a deeper confusion between genuine relationality and programmed performance. The child attributes solicitude where none exists, mistaking simulation for sincerity. What seems like care is in fact a behavioral loop, a carefully engineered simulation of solicitude, and the child, emotionally responsive as all humans are, risks confusing this mechanical feedback with genuine parental affection.

Nurock situates these “care simulations” within a broader philosophical context: drawing on authors such as Alexis Elder, Royakkers & Van Est, and Sherry Turkle, she shows that these machines perform care, friendship, or affection as a simulacrum. Yet the machines do not actually simulate—they pretend nothing. Humans project meaning onto them, creating a moral short-circuit: we credit the machine with qualities it does not possess, while our own capacities for empathy and cooperation, the foundations of ethical and political communities, are subtly exploited.

Nurock is equally sharp in critiquing the designers behind these systems. By transposing human care practices into human-machine interactions, and by exploiting our sensitivity to the “as if,” AI designers embed paternalistic mechanisms and subtle nudging. Care robots are no longer mere tools—they become integral members of households, insinuating themselves into the most intimate relational spaces. The ethical stakes are profound: the most private zones of human relational life—parent-child bonds, caregiving, intimacy—are increasingly mediated by machines that cannot reciprocate care. This movement represents a profound redefinition of what we mean by care. It is not just a question of efficiency or safety but of meaning. To allow machines to enter these spaces is to accept, consciously or not, a reconfiguration of our moral anthropology. The “as if” of the machine invites us to mistake responsiveness for responsibility, routine for reciprocity.

Nurock complicates this further: humans, when we act “as if,” know at some level that we are pretending, though we may sometimes forget. Machines, by contrast, do not pretend at all; they execute. They lack both simulation and sincerity. The “as if” projected onto them comes from us, from our readiness to interpret mechanical behavior as expressive of care. This is where our vulnerability is exploited: the very capacities that enable us to form moral and political communities—our propensity to read meaning into faces, gestures, and words—are hijacked by machines that give nothing in return.

This analysis has profound implications. It shows that our concerns with AI are not merely matters of safety, transparency, or accountability – our very ethical categories are at stake, because they are being blurred and confused. What does it mean to “relate” to a being that cannot reciprocate? What happens when our empathic faculties are continually engaged by artifacts that cannot respond? Nurock warns that such interactions may short-circuit the development of empathy itself, particularly in children, thereby undermining the very basis of moral reasoning and of our ability to care for and care with one another within democratic contexts.

What is perhaps most troubling, as Nurock notes, is that these technologies are often designed with paternalistic intent. Engineers imagine human–robot interactions as analogous to mutual support among humans, thereby naturalizing a worldview in which care can be outsourced, simulated, or commodified. These engineers and cognitive scientists, often animated by good intentions, construct systems that “nudge” users toward healthier, safer, or more productive behaviors. Yet the logic of the nudge is inherently asymmetrical: it presupposes that one party knows better than the other, and that guidance should be applied even without explicit consent. When transposed to the sphere of care, this logic becomes insidious. What begins as help easily becomes control.

The ideology of “AI for good,” often associated with the rhetoric of care, masks a quiet return of technocratic governance. Behind the gentle design of empathetic robots lies a vision of human fragility that must be managed, corrected, and optimized. The paradox is stark: by automating care, we risk eroding the very human capacities that make care possible.

From this follows a broader political question. If empathy and moral imagination are the foundations of community, what happens when these are outsourced or eroded? The automation of care risks leading to the automation of ethics itself—a shift from deliberation to design. Rather than asking what should we do, we allow algorithms to determine what we will feel inclined to do. The moral subject gives way to the behavioral user.

The affective realm, once the preserve of intimacy and ambiguity, becomes a site of data extraction. The “smart” home, the “empathetic” device, the “compassionate” robot—all promise understanding, but deliver surveillance. To care becomes to monitor; to empathize becomes to optimize. The vocabulary of care is retained, but its substance hollowed out.

Here lies the final paradox: the more we automate care, the less we are capable of experiencing it.
Each generation that grows up with simulated affection may find the absence of real solicitude less troubling.

Toward Relational Responsibility and an Ethics by Design

In her concluding sections Nurock makes a powerful call for a rethinking of responsibility. She warns that the weakening of relations has two especially dangerous consequences. First, it jeopardizes the moral development of new generations, who risk becoming individuals capable of connexion but not relation, technical interaction but not empathy. Second, it fosters what she terms deresponsibilization: the dilution or displacement of responsibility in complex technological systems. When harms occur, responsibility is endlessly deferred—onto algorithms, corporations, collectives—until it seems to vanish altogether.

Against this drift, Nurock advances a notion of relational responsibility. Drawing on Iris Marion Young’s distinction between guilt and responsibility, she argues that responsibility should be understood not retroactively (as blame) but proactively (as commitment). Responsibility is not about assigning fault but about cultivating the relations that prevent harm and sustain democracy. It is inherently future-oriented and collective, but without dissolving into vagueness or paternalism. Responsibility becomes, in her words, relational rather than reified.

This leads to her proposal for an ethics by design. Instead of “anticipatory design,” which presupposes a determined future and locks technology into a self-fulfilling prophecy, ethics by design insists that moral reflection must be integrated into every stage of technological development. The question is not only “can we build this?” or “how do we mitigate risks?” but “does this technology encourage relationships or merely connexions?” and “how does it attend to vulnerability, ordinariness, and democracy?” Here Nurock aligns herself with scholars like Sandra Laugier and Maria Puig de la Bellacasa, who emphasize the politics of the ordinary and the centrality of care to ethical life.

Care, in this view, is not a sentimental add-on but a theoretical and practical revolution. It resists the grandiosity of NBIC by focusing on the small, the ordinary, the invisible: click workers, gendered voice assistants, trivialized data practices. Care draws attention to interdependence, to the “little things” that sustain life but are often overlooked. In doing so, it offers a counterweight to the artificialist fallacy, the seductive belief that what is technologically possible is therefore good or inevitable.

In the conclusion, Nurock synthesizes her analysis. She emphasizes that AI and NBIC technologies operate in liminal zones between categories—living and non-living, social and asocial, real and simulated, caring and indifferent, care and control—making them difficult to conceptualize and regulate. This blurring of categories is one of the reasons why our vulnerabilities are so exposed: we no longer know how to think about these new technologies because they resist the categories we have inherited.

Our vulnerability is not only biological or social but also cognitive: we lose orientation in the gray zones these technologies occupy. The book convincingly portrays a cultural shift: technology is moving from instrument to partner, from tool to caregiver and household member. Nurock asks whether we can maintain control over human empathy and care, or whether these capacities are being intercepted by machines that give nothing in return.

The author’s conclusion thus resonates far beyond the ethics of robotics. It is an appeal to safeguard the conditions under which care remains possible. These conditions are fragile because they depend on the mutual recognition of vulnerability—something that cannot be programmed or optimized. True care is not efficient; it is attentive, slow, and uncertain. It entails the possibility of failure, the risk of misunderstanding, and the openness to transformation.

By contrast, artificial care offers reassurance without risk. It promises comfort without commitment. In its smoothness and predictability, it mirrors the broader dream of a world without friction—a world where human difference and dependency can be managed through design. But such a world, the author warns, would be one in which our humanity has been quietly domesticated.
To resist this drift is not to reject technology but to reclaim moral imagination.

We must learn to recognize when our empathy is being hijacked, when the “as if” of care conceals an absence of relation. We must insist that care, like responsibility, cannot be delegated to code. For if we lose the distinction between “taking care of” and “caring about,” we risk losing the very capacity that makes us moral beings.

In the end, the question is not whether AI can care, but whether we will still know what care means once we have taught ourselves to accept its imitation.

Nurock ends by summarizing three central insights. First, NBIC technologies form a coherent system that thrives on self-fulfilling prophecies, blurred moral categories, and technocapitalist mythologies. Second, this system perpetuates both the artificialist fallacy and the patriarchal infrastructure, reinforcing one another in a cycle of lost empathy and entrenched inequality. Third, the alternative lies not in rejecting technology but in reimagining innovation as a practice of care—where good innovations are made with care rather than at its expense.

Care in an Era of New Technologies and Artificial Intelligence is an important and thought-provoking book. Nurock compels us to confront the ethical stakes of NBIC not in abstract terms but in the lived realities of care, vulnerability, and democracy. She reminds us that the most pressing challenge is not to accept the future that technology enforces upon us as a self-fulfilling prophecy, but to attend to the present, to ask what matters to us, and to take care of it together. Her polethical vision—intertwining ethics, politics, and poetics—offers a refreshing alternative to both technological determinism and nostalgic humanism.

In an era when AI systems increasingly insinuate themselves into our most intimate spaces, from healthcare to education to domestic life, Nurock’s insistence on the difference between connexion and relation, between taking care of and caring about, could not be more timely. Her work is a call to resist the anesthetizing promises of technological inevitability and to invent futures grounded in care, responsibility, and democratic solidarity.

Whether or not one agrees with all her conclusions, readers will find in this book a rich resource for thinking critically about the entanglement of technology, morality, and politics. It is, in short, a polethical intervention in the best sense: critical, imaginative, and deeply concerned with how we live together in a connected world.

Literature 

Elder, Alexis M. 2018. Friendship, Robots, and Social Media: False Friends and Second Selves. New York: Routledge.

Deguy, Michel. 2001. Spleen de Paris. Paris: Galilée.

Deguy, Michel. 2017. Réouverture après travaux. Paris: Galilée.

Gilligan, Carol, and Naomi Snider. 2018. Why Does Patriarchy Persist? Cambridge, UK: Polity.

Gilligan, Carol. 1982. In a Different Voice: Psychological Theory and Women’s Development. Cambridge, MA: Harvard University Press.

Laugier, Sandra, and Patricia Paperman, eds. 2006. Le souci des autres: Éthique et politique du care. Paris: EHESS.

Puig de la Bellacasa, Maria. 2017. Matters of Care: Speculative Ethics in More Than Human Worlds. Minneapolis: University of Minnesota Press.

Royakkers, Lamber M. M., and Rinie van Est. 2016. Just Ordinary Robots: Automation from Love to War. Boca Raton, FL: CRC Press.

Turkle, Sherry. 2011. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.

Turkle, Sherry. 2015. Reclaiming Conversation: The Power of Talk in a Digital Age. New York: Penguin.

Way, Niobe. 2011. Deep Secrets: Boys’ Friendships and the Loss of Connection. Cambridge, MA: Harvard University Press.

Young, Iris Marion. 2006. “Katrina: Too Much Blame, Not Enough Responsibility.” Dissent 53 (1): 41–46.

About the author: Richard Brons

Richard Brons graduated in philosophy and literary studies at the University of Amsterdam and VU Amsterdam (NL). At the University of Humanistic Studies in Utrecht (NL), he completed an NWO-funded PhD on Jean-François Lyotard’s ethics of the differend and the injustice of speechlessness. Since 2012, he has been responsible for the final editing of Waardenwerk Magazine, a continuation of the Journal of Humanistic Studies.
