Can tech truly work for care?
A third way between luddites and techno-solutionists hinges on a key value: humility
Caregiving is often held up as the last bastion of life that should remain untouched by technology. Even traditionally tech-friendly news outlets routinely voice concern about the impact of AI and automation on our medical and interpersonal care systems:
The Economist: How AI is rewiring childhood
The Economist: Millions are turning to AI for therapy
The FT: Why your AI companion is not your friend
BBC: Is this the year domestic robots come into our homes?
Many see the application of technology to care as cold, sterile and stripped of the essential humanity that care requires. But is this intrinsically true? Are technology and care truly incompatible, or is it our applications that need figuring out?
My master’s degree is in computer science, and my thesis was in the field of human-computer interaction (HCI). HCI is a multidisciplinary field that studies how people interact with digital technologies, with the aim of making those interactions useful, adequate, meaningful, and ethical. I focused on building play technologies – think interactive toys – to encourage social play in mixed groups of neurodivergent and neurotypical children. Play is essential for early development, but autistic children are often excluded from social play groups due to differences in social and communication preferences. Building the right kinds of toys can facilitate social play within groups, leading to more positive play interactions for all children.
An uneasy pairing
Technology is pervasive throughout caregiving: from toys for children and baby monitors to telephones and personal alarms (which detect falls or shaking motions and help access emergency care), as well as medical technology (X-rays, MRIs, ultrasounds) and accessibility tech (wheelchairs, hearing aids, prosthetics and, in the near future, self-driving cars). We use technology to extend human capacity and reduce unnecessary suffering, but also to facilitate connection where physical constraints preclude it. A loved one can be on the other side of the world, hard of hearing, or still inside the womb, and technology can help to connect us.
Despite this long history of compatibility, the addition of modern technologies to social wellbeing and care solutions often feels deeply uncomfortable. Even as someone who has spent time researching and building in this field, I feel a persistent unease when reading headlines about AI in schools, robots in houses, or surveillance technologies introduced into care homes for older people.
This discomfort reflects more than a fear of novelty. Instead, it points to an underlying awareness that modern societies are already struggling to provide care in ways that feel sufficiently humane and relational. We intuitively perceive that technology is being introduced not to deepen care, but to compensate for its absence: care systems are overstretched, so technological solutions act as substitutes for genuine time and presence.
Maternal care is one such system. Modern obstetric technologies such as continuous foetal monitoring have undoubtedly saved countless lives and remain essential in high-risk situations. Yet their routine and sometimes unnecessary application has also led to significant harm¹. Over-medicalising a typically healthy physiological process tends to reduce the autonomy of those giving birth² and to marginalise midwifery and community-based birth practices. It is not surprising, then, that many people working at the frontiers of obstetrics respond by calling for a return to older or “lost” practices: forms of care grounded in community or in embodied knowledge built up through generations of human experience. Yet here too there are risks. While the majority of women who have doula-assisted or home births feel empowered by the experience, a fringe tendency to reject modern medicine altogether can have fatal consequences.
Similarly, the recent use of AI to address pervasive loneliness in society feels disturbingly miscalculated. The concern many of us experience when confronted with AI companions, chatbots designed for emotional support, or “friendship” technologies that provide artificial intimacy stems from a recognition that these are being positioned as remedies for structural failures rather than as complements to a healthy social life. They fail to distinguish between supporting care and overriding it. If our patterns of work, urban design, digital use, and economic incentives have produced widespread isolation, fracturing communities and leaving people with little time or energy for relationships, introducing artificial companions merely risks compounding this harm.
Money is of course a big factor here. It is much cheaper to offer someone a chatbot than to hire a care professional; and unlike relational care therapies, one can get very wealthy selling tech solutions. Technological tools facilitate the privatisation of care, allowing the state to outsource part of its responsibilities. This can be a good thing, enabling national institutions like the UK’s NHS to access ground-breaking treatment technologies they would not have had the capacity to develop themselves. For example, Flow headsets, used to treat treatment-resistant depression through stimulation of specific brain regions, have shown promising results and are now used by the NHS in a number of areas. However, private companies’ growth imperative means they must continually push these tools towards new users, even those who would be better served by relational therapies.
Like other societal failures of care, mental illness and loneliness at scale are the outcome of decades of policy choices and cultural shifts. Addressing them meaningfully will require slow, contested, and deeply political changes, along with significant investment whose returns will not be visible within one electoral cycle. Technological solutions, by contrast, can be developed and scaled rapidly by a small number of people. They offer the appearance of action without demanding structural change, and in doing so risk recasting loneliness as an individual condition rather than a collective failure. By investing in a technology, a company or government can say, “We are actively working to tackle loneliness and improve mental health” without having to make any difficult decisions or long-term changes. This is easy to see in the workplace, where companies buy subscriptions to apps like Headspace in the name of protecting employee wellbeing, without addressing the structural issues contributing to stress and burnout. We are adding plant food and hoping to see flowers bloom, while the roots continue to rot beneath the surface. Without tending to the underlying conditions that sustain human connection, technological interventions risk producing superficial growth while accelerating the decay below.
Given this, it might seem like the inclusion of tech in the future of care is exclusively problematic. Yet the benefits touched on above remain real, and there are constructive ways to harness them. To understand what a more balanced approach could look like, I spoke to Elaine Czech and Rachel Keys, two researchers from the BIG (Bristol Interaction Group) lab, which works to improve people’s lives through the design and study of technology.
To trust or not to trust your smartwatch
Rachel’s motivation to work in HCI began with a deeply personal experience. Her husband became unwell with a heart condition that doctors would not take seriously. Through a wearable device (think smartwatch), he was able to gather data and use it as evidence that further investigation was needed. He was eventually rushed to hospital for treatment.
Following this experience, Rachel chose to focus her PhD research on the use of wearables in the diagnosis and support of people with heart conditions. She knows from lived experience that technology can provide invaluable data that allows people to access the appropriate care. This has been particularly important for women and minority groups, whose self-reported symptoms are still not taken as seriously by medical professionals. Women’s health has historically been shockingly under-researched³, and the widespread use of digital trackers is helping to address this imbalance faster than the medical research community could do alone⁴.
These tools allow users to conduct long-term research on themselves, tracking symptoms over months and years. This, Rachel pointed out, is especially useful for women, whose hormonal changes mean symptoms can emerge over the course of many months and present atypically. Wearables allow people to see what their “normal” is and to track detailed information over extended periods of time. For women with postural orthostatic tachycardia syndrome (POTS) or long Covid, wearable-generated data can offer especially meaningful insights.
Here, technology helps people advocate for themselves and receive support tailored to their needs, without medical practitioners making potentially biased assumptions about which categories they fall into. We shouldn’t be in a situation where many people still have to fight for their symptoms to be taken seriously – but seeing as we are, we should make use of the tools we now have at our disposal as part of the push for substantial change.
The major downside at the moment, Rachel explained, is that “healthcare professionals don’t know whether to trust the data”. Wearables often cannot market themselves as medical devices, which would subject them to different product legislation, so they call themselves “wellness” devices instead. They often function as black boxes, meaning we don’t know exactly how the data is being collected and processed, and therefore struggle to assess the quality of the measurements. “It puts doctors in difficult situations, because the data is so medicalised that they feel they have to act on it. Research coming out of the US is showing that wearables lead to excess use of medical resources and over-intervention⁵.” We are all naturally concerned with our health – but having access to a constant feed of information risks encouraging us to needlessly investigate perfectly normal variations.
Building with those we are building for
Elaine Czech’s research currently focuses on building tools for people with Parkinson’s disease. She has been designing a study that uses a wearable device, triangulated with home cameras, to monitor the progression of the disease over time. We spoke about the importance of people working in HCI acting as “translators” between engineers and end users.
“End users can’t necessarily articulate what they want really clearly or in computer science terms,” she explained, “and so the engineers are stuck”. Translators are therefore vital in making sure that the needs of caregivers and care receivers are heard, while managing expectations of what a technology can truly deliver. Elaine emphasised the importance of engineers and researchers beginning to look at “care ecosystems”. Care is, at its core, relational: it is never received or delivered in isolation. Yet most research is still focused on solutions for individuals. Elaine is lobbying for more research on dyads – mostly couples. Her research mainly centres on older people living with long-term health conditions, most of whom live with a partner, so any technological support needs to work for both people. Like physical spaces, care technology must be designed to centre interpersonal exchange.
Giving people agency over the kinds of technologies available to them is an essential step towards this goal. Part of my own research focused on the importance of including the people you are building for as an integral part of the design process. In developing toys for social play, we designed activities that encourage children to come up with games that get everyone involved, including those with specific sensory or social preferences. These game designs then formed the foundation of the technology design phase. Co-design values lived experience alongside technical and design expertise, and involves actively listening to care recipients and practitioners to understand the heart of their concerns. As such, it stands a much better chance of addressing the root causes of failures of care.
To achieve co-design at scale, various groups need to be willing to participate in research and work with developers. This highlights another barrier researchers face: participation. Researchers fight an uphill battle in recruiting participants for studies. In the UK, one of the best ways to recruit from a diverse pool of people is to get NHS ethics approval, which allows you to advertise your study through hospitals and GP practices, and lends reassurance to wary volunteers. However, many researchers won’t even apply for NHS ethics approval because the process takes too long – up to 18 months, which on a three-year PhD programme just isn’t viable. If we are going to build technologies that really work for the people who need them, we need to change the systems that prevent this research from happening.
People also need to be willing to take part. It is a big ask to open your life up to potentially invasive research with no promise that it will be successful. Willingness is further hampered by a history of unethical research programmes: Elaine pointed to the Tuskegee syphilis study, a shocking example of medical malpractice in which known treatments were deliberately withheld from around 400 African-American men with syphilis in order to observe the progression of the disease⁶. More than a hundred of them died as a result.
Stay in your lane
In many ways, our discomfort with the application of technology to care is less about the tools themselves than about what they reveal. The use of tech functions as a microcosm of our systems at large, making visible the absences and inequities already present in them. At its worst, tech can be used to dehumanise, isolate, over-intervene, and mask serious flaws in our delivery of care. At its best, it allows people to regain their autonomy, and provides life-saving information or intervention.
The main issue seems to be a lack of what we could call ‘technological humility’. Tech enthusiasts are famously very good at claiming that tech will overtake our lives and solve all of our problems, and very bad at admitting that it makes some of those problems hopelessly worse. A welcome example of humility came from Arthur Mensch, the co-founder of Mistral, France’s AI giant. At a conference given last month to students of the country’s top engineering school, he said:
“We don’t believe in the techno-solutionism that seems to prevail on the US West Coast, which consists in thinking that all the world’s problems will be solved by artificial general intelligence. (…) It will not solve climate change – it objectively tends to make it worse – and it won’t solve our issues with care, with contact, with our ageing populations. There is an entire class of care and relational jobs where AI only has a minor role to play.”
Such pragmatism is refreshing. For tech to work in care, it has to accept being a supportive background player, instead of trying to take over the entire field. Public care systems need leadership that is extremely clear-eyed about the limitations of technological tools, and extremely ambitious about investing in people. That means higher salaries, better working conditions, more relational training, improved communication between designers and end users, and reduced barriers to essential research. It could also mean radical measures: France’s national agency for the social economy recommends the exclusive delegation of all care activities to the state and to mission-oriented private companies by 2050⁷. Paradoxically, by bringing our failures to light, the technological revolution might be the very thing that forces us to confront what good care truly requires.
1. Angolile CM, Max BL, Mushemba J, Mashauri HL. Global increased cesarean section rates and public health implications: A call to action. Health Sci Rep. 2023;6(5):e1274.
2. Mascarenhas Silva CH, Laranjeira CLS, Pinheiro WF, de Melo CSB, Campos e Silva VO, Brandão AHF, Rego F, Nunes R. Pregnant women autonomy when choosing their method of childbirth: Scoping review. PLoS One. 2024;19(7).
3. See also Pure Unity Health, ‘Addressing the Women’s Health Gap in the NHS’.
4. Regensteiner JG, McNeil M, Faubion SS, Bairey-Merz CN, Gulati M, Joffe H, Redberg RF, Rosen SE, Reusch JE-B, Klein W. Barriers and solutions in women’s health research and clinical care: A call to action. Lancet Reg Health Am. 2025;44:101037. https://doi.org/10.1016/j.lana.2025.101037
5. Cheung CC, Saad M. Wearable devices and psychological wellbeing – Are we overthinking it? J Am Heart Assoc. 2024;13(15):e035962. https://doi.org/10.1161/JAHA.124.035962
6. Brandt AM. Racism and research: The case of the Tuskegee Syphilis Study. Hastings Cent Rep. 1978;8(6):21–29. https://doi.org/10.2307/3561468
7. ESS France, Stratégie nationale de développement de l’ESS, 22 September 2025.