Over the American Thanksgiving holiday, I went to Hong Kong to see my mom. Due to geographical distance, I only get to see her in person about twice a year, even though I call her every day to check in. My dad and younger brother, who lived with her and handled almost all financial matters in the household, both passed away in recent years. My mom, who was already hyper-anxious, has become even more so since my brother’s death. She sleeps poorly and is increasingly forgetful, which exacerbates her anxiety and memory issues, yet she rarely accesses mental health services because of her fear of stigma and the long wait times for specialists. Because she has a long history of osteoporosis and increasingly frequent near misses and falls that have resulted in injuries, I have had many conversations with her about safety precautions and have reminded her about medical appointments and medications. While she has a domestic helper to assist with her daily living, my mom has also, at times, expressed unsubstantiated suspicions towards her helper. When I voice concerns about the cognitive and functional decline I have observed, my mom downplays her deterioration, even when she forgets what she has forgotten.
A friend who is familiar with my mom’s situation and my work in AI for elder care joked that I may want to give AI health monitoring devices a try to fill some of the service and memory gaps. She opined that my mom might feel more assured if there were an AI tool that could alert her when she is at risk of falling, remind her to take her medications, and monitor her domestic helper’s activities to reassure her afterward. Even if AI may not cure my mom’s anxiety, my friend mused, remote monitoring may at least ease my anxiety and the need to call her every day. Since my mother often dismisses my proclamations and reminders, my friend suggested that an AI model may be a better messenger than I am.
AI Health Monitoring
The use of remote technologies to support health monitoring outside of the formal clinical setting is not a new phenomenon. As I have explained elsewhere, CCTV cameras and ambient sensors have provided support and can alert observers to adverse events or physiological decline. Nonetheless, they rely on human observers to monitor video feeds or other signals continuously in real time and respond accordingly. Even if my mom gets a medical alert system, these devices are not preventive: they generally help only after injuries or adverse events have occurred, and users must be conscious to call for help.
With AI-powered technologies becoming increasingly available to healthcare organizations, clinicians, and consumers, there is great optimism among technologists and health systems that these tools can provide ongoing monitoring, accurate detection and identification of different behaviors, and effective predictive analysis of the monitored person’s disease progression, allowing serious health risks to be not only detected but also prevented. This may be particularly beneficial for people like my mother, who may have undiagnosed physiological and cognitive changes partly due to infrequent clinical assessments and inadequate medical management. Machine learning (ML) algorithms that can continuously identify, record, measure, analyze, and predict the user’s health status may take a cognitive load off my mom, since she would not have to constantly try to recall her activities and symptoms, which has become more difficult in recent months. If the data collected by these devices can be shared with her care providers, that may facilitate more accurate diagnosis and care planning. Comprehensive real-world data integrated into my mom’s electronic health record (EHR) may provide a more dynamic picture of her health status and associated environmental factors, supporting personalized assessment and care management that is timely, meaningful, and realistic to her context, and ultimately improving health outcomes.
Longitudinal health data may also help clinicians prioritize scheduling patients who require more comprehensive or urgent consultations and provide remote support to less urgent patients. For example, our research team in Vancouver, Canada, recently conducted a study with people living with Parkinson’s disease (PD), with a focus on the ethical considerations of developing and using computer vision models to monitor and predict PD progression. Some participants were hopeful that AI models capable of identifying patients with more serious deterioration might expedite their visits with their neurologist rather than leaving them to wait for their annual appointment, and that patients might be able to use the information and predictions for self-management.
Who Is Afraid of AI Home Health Monitoring?
AI-powered health monitoring is increasingly promoted through the language of individual freedom, choice, and/or empowerment. There is an unequivocal optimism among health technologists that long-term health monitoring outside of formal clinical settings would be readily accepted or even welcomed by targeted individuals. In addition to the expanding hospital-at-home programs that allow acute-care patients to be treated in their own homes, made possible by technological advances and regulatory changes by the Centers for Medicare and Medicaid Services (CMS) during the COVID-19 pandemic, the ever-growing market of direct-to-consumer (DTC) health monitoring technologies reflects and reinforces such optimism. These technologies are touted as tools that can democratize health information by allowing users to collect and access their data 24/7, empower them to engage in self-management, facilitate more informed clinical visits, and promote the ability of older adults and people with chronic conditions to live safely, privately, and independently in the community. In cases where individuals resist recommended continuous monitoring, the justification shifts to how personal health surveillance is part of our caring practice, grounded in paternalistic benevolence to reduce health risks and associated harm for those who are deemed by others to be incapable of protecting themselves.
Certainly, the idea of being able to access information and predictions about my mom’s health-related activities to help inform care management is appealing for some of the reasons outlined above. Nonetheless, my friend’s suggestion brings up intersecting ethical concerns about how AI health monitoring may change our definitions and expectations of consent, professional and societal expectations of self-management regardless of individual readiness, and who should have access to AI model outputs based on our data and for what purpose. As industrialized countries increasingly embrace remote health monitoring for aging populations and people living with chronic conditions, how should we balance users’ physical safety and their autonomy? For example, to what extent should my mom be free to refuse continuous health monitoring that may help to minimize injury risks and promote health benefits? While my friend noted in jest that AI health monitoring may reduce my need to call my mom every day, being continuously surveilled remotely may ironically exacerbate my mom’s anxiety if she feels more controlled or scrutinized and yet more socially isolated. In the aforementioned study with people living with PD, some participants expressed concerns about privacy at home and also wondered whether patients identified by the model as having slower disease progression might have less frequent access to professional consultation or visits from family members.
My friend recognized that the biggest hurdle to implementing her recommendation is my mom’s likely reluctance to be monitored. I added that, even if my mom initially agrees, the validity of that initial consent will become increasingly questionable as her memory continues to decline. But more importantly, as monitoring devices become increasingly interconnected in the era of the Internet of Things (IoT) and big data, where different sources and types of data are merged for ML and algorithmic analysis, my mom’s decisional autonomy regarding AI home health monitoring must be understood within the wider socio-technological and health care milieu. As I have explored elsewhere using the lens of relational autonomy, people’s capacity to exercise their agency in the context of the expanding availability and promotion of AI health monitoring is shaped by broader issues of power asymmetry. Norms around techno-utopianism and AI solutionism, whereby AI health monitoring is presumed to be a superior practice, intersect with how health care and elder care are financed, organized, and delivered. They pre-determine what health monitoring and follow-up care options are available and recommended to people like my mom. Hierarchically constructed professional and social relations, including not only therapeutic relationships but also intergenerational relationships that evolve with changes in circumstances, intersect with and shape the cultural meanings of personal responsibility, healthy living/aging, trust, and caregiving. These norms, in turn, structure the ethical space within which remote AI health monitoring options and corresponding care management alternatives are presented to and considered by stakeholders in different social locations. Besides the CMS waivers that drove many health systems in recent years to launch or expand programs providing acute hospital care and monitoring at home, hospitals and clinicians mindful of metrics around hospital length of stay and readmission may have financial and reputational incentives, in addition to quality commitments, to promote subacute monitoring outside the clinic and hospital walls. Worried and exhausted loved ones, especially women who often juggle professional and household responsibilities, may also need support in providing care. Among potentially monitored individuals, those who lack access to health care and perceive themselves to have a stigmatizing condition are more likely to use AI-powered self-diagnosing platforms.
Recognizing the divergent stakeholder interests and the corresponding power dynamics is important, especially because individuals targeted for continuous monitoring may weigh risk prevention and privacy differently from those who recommend ongoing surveillance, yet may have less power to define for themselves the ideal balance of these considerations or to refuse being monitored as their health needs increase. In one study with Meals on Wheels clients and their adult children regarding their perceptions of in-home health monitoring technologies, researchers found that the children favored these technologies more than their elderly parents did. My mom does not like me asking her helper about her daily activities; it is doubtful that she would welcome continuous monitoring and data sharing with me based on my worries about her risks rather than her own self-assessment. While some older adults may find intelligent sensor monitoring potentially helpful, there is evidence that they desire control over when they should be monitored and what information AI systems may share with family and caregivers.
As expanding varieties of monitoring technologies are proposed for people’s living quarters, boundaries on not only professional reach but also intrusion by others in social relations and by commercial parties are key to preventing abuse of power and violation of personal privacy. Medicalized AI-powered technologies that indefinitely collect heterogeneous data convert personal and intimate home experiences into medical information for analysis and recommendations. My mom’s progressing anxiety may have other relational and circumstantial origins beyond natural biological deterioration, given her compounded grief and exhaustion from having to cope with and manage various financial and legal matters after my dad’s and brother’s deaths. With IoT and the expanding capabilities of monitoring technologies to record and analyze different types of data, the collection and use of multiple data points to promote personalized care is increasingly intrusive. For example, using not only sensor and computer vision technologies to track and measure her physiological status but also ambient listening to gauge the content of my mom’s thoughts and grief may provide a fuller picture of her experience, but it opens more pathways for sensitive data to be accessed by (unknown) others. On the flip side, AI models that focus on narrow sets of identifiers may miss potentially relevant factors around her anxiety in their assessments, raising questions of how accurate and helpful they may truly be for diverse populations. Since ongoing monitoring allows observation even when there is no imminent health risk, these evolving socio-technological practices can change expectations of what information healthcare professionals and others may legitimately collect, access, and share with third parties in the name of risk management and health protection. There are also questions of what privacy and data protection should be afforded to others who are inadvertently monitored, especially when some may have less social power to resist (e.g., my mother’s domestic helper).
AI Health Monitoring and Epistemic Injustice
If I were to seriously consider my friend’s suggestion, it would be not only because of the potential benefits discussed earlier. It would also be partly because I doubt that my mom is remembering or telling me the whole story, and I would like other independent ways to validate, supplement, or dispute her claims. But uncritically turning to AI monitoring as a presumed “objective” source of truth to check the veracity of the targeted person’s claims can have relational implications. Recommending such monitoring, or receiving data from these AI-powered technologies, may reinforce the power dynamics between the carer and the cared-for. On the one hand, if my mom can show longitudinal data and predictive outputs as complementary information to validate her testimonial credibility, she may feel more confident in seeking care and advocating for herself. On the other hand, if my mom is expected or even required to allow longitudinal surveillance and to produce data from these applications to support her claims, the implication is that her own testimony of her experiences is deemed less credible whenever her report cannot be supported by corresponding “objective” data. She would then be put into the position of being forced to accept technological intrusion to fend off human intrusion and reclaim her epistemic authority over her own experiences and risk assessments. From a relational perspective, a presumption of and overreliance on “objective” quantified data while dismissing the primacy of people’s own experience may exacerbate epistemic injustice, as it reinforces illegitimate social power and wrongs the user in their capacity as a knower. As health systems boast the convenience of hospital-at-home programs and technologists extend the ease of access to health monitoring by expanding DTC offerings, the heavy promotion of AI algorithmic predictions in the clinical and consumer markets, even when the information does not always have clinical value, may inflate the credibility of these technologies while further shifting epistemic power away from targeted individuals. Users themselves may not feel confident about their own embodied experience if it is not “validated” by machine judgment. Such concerns are heightened if we have no way to resolve conflicts between my mom’s reports or concerns and the outputs of black-box algorithms, or if she feels disempowered to challenge the AI predictive analytics due to internalized AI solutionism. In fact, why even ask my mom for her own reports when we can simply look at the AI outputs?
Coming Full Circle
For many people of Chinese descent, Winter Solstice is a day with many cultural traditions, including paying respect to ancestors and loved ones who have passed away. This year, as in previous years and on similar cultural occasions, my mom asked to schedule a video call so that I could pay respect virtually. I had plans to meet friends, so I determined an appropriate time after our social gathering for the call. But on this shortest day of the year, I somehow mistakenly scheduled our call based on Daylight Saving Time and was thus an hour off. When I missed my mom’s text shortly after the agreed-upon time, because I had turned off text alerts during my gathering, she started to panic that her usually reliable daughter had somehow vanished. When she finally called me after 20 minutes of no response, she expressed her fear that something bad had happened to me and that she didn’t know how to find me. While still out of breath from her anxiety, she asked, “How can I track you in the future to make sure you are okay?”
Anita Ho
Anita Ho (PhD, MPH) is a bioethicist and health services researcher with a unique combination of academic training and experience in philosophy, clinical/organizational ethics, public health, and business. Anita is currently an Associate Professor at the UCSF Bioethics Program and a Clinical Professor at the Centre for Applied Ethics at the University of British Columbia. She is also the Vice President of Ethics for CommonSpirit Health (California Region). Her book, Live Like Nobody Is Watching: Relational Autonomy in the Age of Artificial Intelligence Health Monitoring, was published by Oxford University Press in 2023.