Ethical Implications Raised for the Cognitively and Physically Impaired: Redesigning the Human Experience through Brain-Computer Interfaces
Human-computer interaction-centric technologies, such as Brain-Computer Interfaces (BCIs), demonstrate the potential for synergy, rather than partition, between user and device. BCIs quite literally sync with a user's brain through their brainwaves, meaning society now has access to one of the most internalized reservoirs of data possible, human thoughts, but also faces the risk of compromising the most sensitive information yet. Brain-Computer Interfaces, with the help of artificial intelligence, can redesign the human experience by giving people with disabilities the chance to regain experiences most take for granted, such as walking or speaking. Because artificial intelligence-based solutions act as a method of rehabilitation and make a direct impact on some of the most marginalized groups of people, those who face impediments in daily activity, brain-computer interfaces with artificial intelligence enhancements present greater potential for growth than for harm.
Foundations of Brain-Computer Interfaces
For some general understanding, brain-computer interfaces (BCIs) allow a human brain and a computer-based system to exchange information. A BCI captures a user's neural and central nervous system (CNS) activity and gives the user a way to interact with a machine directly. For instance, if a person has hardware such as electrodes from a BCI connected to their brain, their brain waves provide the computer-based system a signal to carry out a desired command. If a paralyzed individual forms the thought of moving their arm in a certain direction, the resulting neural and nervous system signals, an uptick of activity in specific regions, can deliver to the computer the command to assist that movement. The user regains control over a body they were previously powerless over, because BCIs enable a desired command to be executed without any physical constraints from the body: the computer's signal processing and the responsive robotic device do the work. But we must ask ourselves, why is this an initiative worth our support and attention? Brain-computer interfaces have the ability to fully or partially replace lost functions, such as mobility or communication. In the process of improving functional independence, brain-computer interfaces can also help restore control over bodily functions, for example through functional electrical stimulation (FES) of nerves or paralyzed muscles to move a patient's hand.
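To make that signal-to-command loop concrete, here is a minimal sketch, in Python, of how such a pipeline might turn one window of recorded brain activity into a device command. Everything in it is illustrative: the sampling rate, the band-power feature, the decision threshold, and the `send_command` stand-in are assumptions made for the example, not details of any real system.

```python
import numpy as np

# Hypothetical sampling setup for a single EEG channel (assumed values).
SAMPLE_RATE_HZ = 250          # samples per second
WINDOW_SECONDS = 1.0          # decode one command per one-second window


def band_power(window: np.ndarray, low_hz: float, high_hz: float) -> float:
    """Estimate signal power in a frequency band via a simple FFT."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / SAMPLE_RATE_HZ)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return float(spectrum[mask].mean())


def decode_intent(window: np.ndarray) -> str:
    """Toy decoder: compare mu-band (8-12 Hz) power against a fixed threshold.

    Real systems learn this mapping from calibration data; the threshold here
    is an arbitrary placeholder.
    """
    mu_power = band_power(window, 8.0, 12.0)
    return "MOVE_ARM" if mu_power < 50.0 else "REST"


def send_command(command: str) -> None:
    """Stand-in for driving a robotic limb or an FES stimulator."""
    print(f"device command: {command}")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated one-second window of EEG-like noise, standing in for real hardware.
    window = rng.normal(size=int(SAMPLE_RATE_HZ * WINDOW_SECONDS))
    send_command(decode_intent(window))
```

In an actual device the acquisition, decoding, and actuation stages run continuously and are calibrated per user, but the overall shape of the loop is the same.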
By helping a patient increase their physical and cognitive mobility through brain-computer interfaces, patients can come one step closer to restoring the quality of life they had prior to their impairments, or to reaching new horizons in physical movement and self-expression that they had not been able to explore for a lifetime. In the article "Brain–computer interfaces in neurological rehabilitation" by Janis J. Daly and Jonathan R. Wolpaw, we can examine the positive effects of brain-computer interfaces in artificial intelligence-assisted rehabilitation. Daly and Wolpaw (2008) argue that by re-establishing some independence, BCI technologies can substantially improve the lives of people with otherwise devastating neurological disorders. BCI solutions might also restore more effective motor control to people after stroke or other traumatic brain disorders by helping to guide activity-dependent brain plasticity, using EEG brain signals to indicate to the patient the current state of brain activity and to enable the user to subsequently lower abnormal activity (p. 1). Current advancements in BCI technology will enable individuals who have suffered traumatic brain disorders to complete tasks that would otherwise be impossible for them. These advancements in brainwave-driven motor control could even allow some individuals, in the future, to be a vital part of a workplace where they are no longer limited by their paralysis.
Introduction to Popular Brain-Computer Interface Technologies: Neuralink and Possible Concerns
We now introduce a newer and more complicated brain-computer interface technology that claims to implement more touch points in the brain than the average BCI: the initiative owned by Elon Musk and widely known as Neuralink. In its mission statement, Neuralink (2022) claims that it is creating the future of brain interfaces: building devices now that will help people with paralysis and inventing new solutions that will expand our abilities, our community, and our world (p. 1). While Neuralink is still conducting trials for its products, Musk plans to achieve symbiosis between "artificial intelligence and the human brain." He believes that this collaboration could lead to telepathic communication within the next few years. But combining any artificial intelligence with human intelligence raises many ethical concerns. We must ask what determines whether something is ethical, and how we can make sure that safety is a priority for anyone who has access to this brain-computer interface technology.
Given that Brain-Computer Interface devices go beyond the abilities of motion and infrared sensor technology by synchronizing with internal neural and nervous system subtleties, something intimate and variable from individual to individual, ethics in this realm means honoring the individuality of each user. At no point should BCI inventions become so automated that they dehumanize the nuance of experience from person to person, and the privacy of each user's information, from their biodata to their reasons for using the products, should be kept as sacred as the individuality of the process. In the Insider article "Elon Musk's Neuralink wants to embed microchips in people's skulls and get robots to perform brain surgery" by Isobel Asher Hamilton, we can examine Elon Musk's futuristic aspirations and the basic concerns about Neuralink's project timeline and overall ethics. Hamilton (2022) reports that Musk has made many fanciful claims about the enhanced abilities Neuralink could confer; in 2020, Musk said people would "save and replay memories" like in "Black Mirror," or telepathically summon their car (p. 3). In this discussion of telepathy, I find that Neuralink prioritizes entertainment and embellishing the human experience above providing rehabilitation and quality-of-life improvements for those who arguably need BCI most: the physically impaired. Patients with physical disabilities have more immediate needs for BCI than able-bodied consumers who would turn to Neuralink's technology for recreational, not fundamentally life-improving, reasons. The discussion of telepathy strays from the mission statement's claim of helping those suffering from physical impairments.
Professor Jackson (2022) tells Insider that this is not to say it won't happen, but that he thinks the underlying neuroscience is much shakier. We understand much less about how those processes work in the brain, and just because you can predict the position of a pig's leg when it is walking on a treadmill, that does not automatically mean you will be able to read thoughts (p. 3). While these technological initiatives seem promising, we must take a step back and consider, in the present day, what is actually feasible through research. How can we establish ethical practices around the implementation of a brain chip? The most important ethical concern with Neuralink's brain chip is how rapidly the initiative is being pushed out, as it usually takes years of trials before such a device is approved as safe for human subjects.
Positive Implications of Brain-Computer Interfaces and Artificial Intelligence: California Institute of Technology Clinical Trials + Additional Framework
On the other hand, we can also look at successful brain-computer interfaces that have advanced physical and neurological rehabilitation. Artificial intelligence can be used to aid or enhance other brain-computer interface inventions by helping to identify patients' activity patterns; better accuracy with cognitively or physically impaired patients in rehabilitation points to better success.
In the article "The brain-reading devices helping paralysed people to move, talk and touch," Liam Drew interviews scientists about how machine learning and artificial intelligence ultimately enhance brain-computer interfaces. Drew (2022) reports that today's BCI users have much finer control and access to a wider range of skills, partly because researchers began to implant multiple BCIs in different brain areas and devised new ways to identify useful signals; but, as Hochberg says, the biggest boost has come from machine learning, which has improved the ability to decode neural activity. Rather than trying to understand what activity patterns mean, machine learning simply identifies and links patterns to a user's intention: "We have neural information; we know what that person who is generating the neural data is attempting to do; and we're asking the algorithms to create a map between the two. That turns out to be a remarkably powerful technique" (p. 5). These developments in artificial intelligence and machine learning make it possible to decode neural signals much faster and in real time. Machine learning (ML) can support brain-computer interfaces with self-decision-making strategies through these improved algorithms. Overall, researchers have become more motivated to pair machine learning techniques with brain-computer interfaces for better accuracy in results, and combining BCI and ML will only lead to more promising and efficient answers.
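Hochberg's description of "creating a map between the two" is, at its core, a supervised learning problem. The sketch below illustrates that idea on purely synthetic data: the electrode count, feature values, and labels are invented for the example, and `LogisticRegression` is just one simple stand-in for the decoders that real labs use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic "neural features": firing rates from 96 hypothetical electrodes,
# recorded while a participant attempts one of two actions. In a real study
# these would come from implanted electrode arrays, not random numbers.
n_trials, n_electrodes = 200, 96
X = rng.normal(size=(n_trials, n_electrodes))
y = rng.integers(0, 2, size=n_trials)      # 0 = "rest", 1 = "reach"

# Make the toy problem learnable: shift a subset of electrodes on "reach" trials.
X[y == 1, :10] += 1.5

# The decoder does not try to explain what the activity "means"; it simply
# learns a map from activity patterns to the labeled intention.
decoder = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
accuracy = decoder.score(X[150:], y[150:])
print(f"held-out decoding accuracy: {accuracy:.2f}")

# A new pattern of activity can then be translated into an intended action.
intended = decoder.predict(X[-1:])
print("decoded intention:", "reach" if intended[0] == 1 else "rest")
```

The same pattern, collect labeled examples of neural activity and let an algorithm learn the mapping, underlies the far more sophisticated decoders described in Drew's reporting.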
Drew's article also introduces patients who joined a clinical trial for neurosurgery at the California Institute of Technology and had two "grids of electrodes" implanted in their cortex. Drew (2022) informs us of the positive implications of this neurosurgery: Johnson is one of an estimated 35 people who have had a BCI implanted long-term in their brain. Only around a dozen laboratories conduct such research, but that number is growing, and in the past five years the range of skills these devices can restore has expanded enormously. Last year alone, scientists described a study participant using a robotic arm that could send sensory feedback directly to his brain; a prosthetic speech device for someone left unable to speak by a stroke; and a person able to communicate at record speeds by imagining himself handwriting (p. 1). If brain-computer interface initiatives really are booming as the article says, it is natural to wonder how they will benefit someone else seeking the same help as Johnson. From this article alone, we can see what some would describe as a miracle. Once someone loses a vital function like writing a letter or hugging someone they love, regaining the ability to do so is part of what makes the human experience so important. Normalcy is often taken for granted, and it is important to continuously fund these clinical trials so that more research can provide affordable and smarter opportunities for patients across all demographics.
After some more research, I was able to explore what I personally see as a framework of innovation for brain-computer interfaces. Arrow successfully modified a car to help a quadriplegic former IndyCar driver regain the freedom to drive; the groundbreaking part is that he is able to move the vehicle with only the motion of his head. The Arrow (2022) website states that for acceleration, the driver would tilt his head back and tap the headrest, signaling the car to accelerate in 10 mph increments, with the car responding directly via a rotary actuator attached to the gas pedal; for braking, the driver bit down on a sensor between his teeth, an instruction translated to a rotary actuator attached to the brake pedal (p. 2). While this is not an example of a pure brain-computer interface initiative, it is real proof that something like navigating a car through simple head tilts can eventually be combined with BCI and done successfully. Driving a car may seem like a mundane task to most, but for a patient who has been stripped of an old, enjoyable pastime, having the ability to drive again is freeing. New initiatives with brain-computer interfaces will only push for more inventions for those living with similar impairments, and the examples mentioned here showcase how to do so successfully.
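To see how such an input scheme might be wired up in software, here is a minimal sketch of the control logic the Arrow description suggests. Only the 10 mph increment comes from the source; the event names, the speed cap, and treating the bite sensor as a full stop are assumptions made purely for illustration.

```python
from dataclasses import dataclass

# The only figure taken from the Arrow description is the 10 mph increment;
# the rest of this controller is an illustrative guess.
ACCELERATION_STEP_MPH = 10
MAX_SPEED_MPH = 80        # assumed safety cap, not from the source


@dataclass
class VehicleController:
    target_speed_mph: int = 0

    def on_head_tilt_back(self) -> None:
        """Head tilt plus headrest tap: request one more 10 mph increment."""
        self.target_speed_mph = min(self.target_speed_mph + ACCELERATION_STEP_MPH,
                                    MAX_SPEED_MPH)

    def on_bite_sensor(self) -> None:
        """Bite sensor engaged: command the brake actuator (modeled as a full stop)."""
        self.target_speed_mph = 0


if __name__ == "__main__":
    car = VehicleController()
    car.on_head_tilt_back()   # 10 mph
    car.on_head_tilt_back()   # 20 mph
    print("target speed:", car.target_speed_mph, "mph")
    car.on_bite_sensor()      # brake
    print("target speed:", car.target_speed_mph, "mph")
```

The point of the sketch is that the interface layer is deliberately simple: a small set of unambiguous driver signals mapped to actuator commands, which is exactly the kind of mapping a future BCI could supply instead of head motion.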
Artificial Intelligence in Healthcare: General Concerns for Patients
When it comes to general healthcare concerns with artificial intelligence, I wanted to explore how artificial intelligence will be integrated into our healthcare and into research in the medical field. Brain-computer interfaces fall under the broader umbrella of healthcare; so while BCI may not seem directly related to artificial intelligence in healthcare, AI can always be pushed to become a vital part of brain-computer interfaces. All artificial intelligence systems, including AI systems in healthcare, use large datasets and rely on machine learning to produce insights from the data they are given.
In the summary of the AI Now public symposium "The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term," hosted by The White House and New York University's Information Law Institute, the participants discuss the rapid deployment of artificial intelligence and the social and economic questions it raises. What social systems do we need to change in order to create an ethical and equitable future? NYU's Information Law Institute (2016) states that the integration of AI systems into medical research presents exciting prospects for understanding the foundations of illness, developing new treatments, making more finely grained diagnoses, and even tailoring medicine to specific individuals (p. 13). In this day and age, it is common for healthcare facilities to rely on technological products to hold patient data or any general health data. We rely on artificial intelligence implementations to improve our healthcare, from patient-to-doctor communication and online diagnosis to organization, drug discovery, and even feasibility.
When it comes to medical decisions, an artificial intelligence-backed healthcare system seems exciting, since it promises more individualized approaches for the patient. While this sounds very promising, we tend to forget that there is a large gap to close before we can deem AI-reliant healthcare unbiased and "ethical." There may be situations where a patient has a complicated health condition and needs more individualized care from a human, rather than a generalized diagnosis from an algorithm.
In 2020, I decided that I wanted to explore the healthcare and telecommunications realm further, and I worked at an artificial intelligence startup that focused on improving patient communication. Over my five months as an intern, I realized that there were many social and racial limitations that impacted the delivery of accurate healthcare. There have been many cases of favoritism or ignorance toward minority populations in medicine, and even inaccurate datasets used in research. I also learned about the importance of simple user experiences, and how reliance on computers and automation harmed the older population, who lack the familiarity with new technologies that younger generations might have. I strongly recall doing a case study on an 83-year-old man who was unable to get help for his wife because of a hospital's automated messaging systems, and as a result she passed away. To help the general public, those creating these artificial intelligence algorithms need to receive a strong education on these social topics for the sake of patient safety; with the correct training and research, there is a way to diversify their datasets and reduce bias in AI deployment. Racial bias and social bias go hand in hand, and as long as our healthcare system is educated on the topic, it will be easier for society to progress toward new advancements that do not immediately harm anybody. More education on these topics will lead to more ethical and empathetic practices across a wide range of industries, starting with healthcare.
Ethical Questions of Brain-Computer Interfaces: How to be Human?
Now that we have discussed the basics of brain-computer interface initiatives and the healthcare system, it is important to discuss the baseline ethics of technology for the physically and mentally impaired. Some may be concerned with how this affects the human experience, in quality and authenticity, but what does it truly mean to be human? In this section, I will discuss the ethical questions that someone may encounter with brain-computer interfaces in order to examine these responsibilities proactively. There are four main topics that I would like to cover: a patient's loss of sense of self, patient trial experiences, protection against artificial intelligence bias, and privacy concerns with brain-computer interface technology.
Each human has the freedom to make their own choices about how they want to live and present themselves. With these new technological products, we challenge the concept of identity and the assumption that those with disabilities should change their lifestyles to live by the norms. Patients can lose their sense of self when their familiar identities become augmented. For example, someone who has been deaf their entire life usually adopts sign language and sees that language as a major part of their identity. There should not be any negative connotation attached to any disability, and as we progress with new inventions, we also need to make sure each one is seen as a helpful initiative. A useful metric for gauging the ethics of a BCI initiative is its philosophy: its reason for creating the avowedly life-changing products in the first place. There is a thin line between designing rehabilitative technology for the impaired and designing it from a place of ableism; the claim behind certain initiatives, such as Neuralink, appears to be that a disabled user is inherently living a life at odds with that of the able-bodied population, and that this must be changed. A technology that touches some people's chronic conditions, and therefore the course of their entire lives, as delicately as BCI does must be presented as a choice and a compromise between the user's past and future. The solution should not be all for the patient or all against them; compromise, balance, and a lack of volatility in this field are key. Each human experience has its own meaningful value, and to be a more inclusive society, we need to make sure our technologies can work collaboratively with patients to make them feel welcome.
Ethical Questions of Brain-Computer Interfaces: Biodata Privacy and Consent Concerns
With brain-computer interfaces, we must also question how to preserve our neural and physical biodata. Privacy is a major concern with all types of technologies, but when it comes to protecting patient information, what course of action should be taken?
In the article "The ethics of brain–computer interfaces" by Liam Drew, the ethics surrounding new technological services for the disabled are discussed. Marcello Ienca (2019) states that brain information is probably the most intimate and private form of data in existence. Digitally stored neural data could be stolen by hackers or used inappropriately by companies to whom users grant access. Neuroethicists' concerns have forced developers to attend to the security of their devices, to protect consumer data more diligently, and to stop demanding access to social-media profiles and other sources of personal information as a condition of a device's use. Nevertheless, as consumer neurotechnology gains steam, ensuring that privacy standards are acceptable remains a challenge (p. 3). Of course, protecting patient data is not that straightforward, but privacy must be rigorously upheld in the development of AI-backed BCIs. Compromising user data, in this context, means risking someone's identity in its most internal, intimate form. We know the term "identity theft" to mean someone's Social Security information, address, and demographic information being stolen and used by someone with malicious intent. But with the advent of these new products, we risk "identity theft" being taken to the next level: thieves could completely embody the compromised individual's identity, down to their thoughts. Misuse can lead to manipulation, threats, and even discrimination.
My father suffers from a disability due to a hemorrhagic stroke, and it pains me to see the amount of ignorance, even among medical professionals, toward those with similar cognitive and physical impairments. Over the years, I have witnessed mistreatment from healthcare practitioners who were supposed to protect their patients. My father was unable to stand up for himself, because he could not speak or move at all. While this is a very specific example of healthcare mistreatment, let me explain why it should concern the general population. With increasingly rapid technological advancement, the gap between new healthcare innovations and the healthcare workers themselves will only widen, unless current protocols change and education improves, not only for the consumers of rehabilitative biomedical devices, the patients, but also for those who care for them. As someone who has been given the privilege of fully functioning health, I believe it is essential to fight for those who cannot speak up. During my father's recovery, I know that he strived for normalcy and had a very difficult time with rehabilitation; he lost all function in his body and was unable to talk for about nine months. As new devices emerge with brain-computer interfaces, there needs to be complete transparency and a user disclosure protocol so consumers understand how their personal health data is being used. A patient with disabilities is more susceptible to being taken advantage of amid new brain-computer interface trials. If a patient is unable to verbally communicate or physically consent to a medical decision, can we really call such a clinical trial ethical for that patient? In general, patients with disabilities take on the risk of being used as a "lab rat" for these new products. There is also another risk to participants when implanted devices are involved: brain surgery itself is very dangerous, and there will always be unknown risks of failure in these clinical trials. Devices may not be supported forever, and as solutions continue to advance, the companies that manufacture these implanted devices may stop producing them. When it comes to clinical trials, the patient must also be given full disclosure of their treatment plan to ensure no rights are violated at any time; it is important that patients feel supported during and after their treatment. Basic human empathy should be a fundamental right and a vital practice for any new technology.
Ethical Questions of Brain-Computer Interfaces: Disability Bias in Algorithms
In light of the obvious disparities in healthcare provision, we can shift our attention to other questions, such as who the target audience of life-enhancing products should be, and how we will educate the general public to make these initiatives more successful. We must ask ourselves what would be most beneficial when it comes to brain-computer interface solutions. Whether to prioritize enhancing those who are deemed physically healthy over those who may need more support is another major issue in setting the priorities of brain-computer interfaces. From a more inclusive standpoint, how can we focus our attention on helping those with physical and cognitive disabilities so that they, too, can experience normality?
In the article "Disability, Bias, AI" by the New York University AI Now Institute, researchers question how we can ensure protection for the disabled, addressing artificial intelligence disability bias and how the term "disability" is understood. Sarah Rose (2017) claims that integrating disability into the AI bias conversation helps illuminate the tension between AI systems' reliance on data as the primary means of representing the world and the fluidity of identity and lived experience, especially given that the boundaries of disability (not unlike those of race and gender) have continually shifted in relation to unstable and culturally specific notions of "ability," something that has been constructed and reconstructed in relation to the needs of industrial capitalism and the shifting nature of work (p. 2). By discussing disability activism, we can more easily pinpoint common problems in our communities and in the artificial intelligence systems we construct. We can ensure better protection for anyone who may need it and inform those who lack a strong understanding of what it means to live with a "disability." Generally, a lot of misinformation spreads where social bias is concerned; educating ourselves and our communities about all aspects of identity, including race, gender, disability, sexual orientation, and socioeconomic status, will only lead to better and more inclusive products. Accessible healthcare and inventions in general will succeed more often when the algorithms being developed are fed more diverse data, because more diverse data usually leads to better results for the general population rather than for one specific race or community (a simple way to check this is sketched below). We must also ask who gets chosen for these initiatives: who develops these new services, and what is their background? While someone may be technically qualified, they may not be as educated in empathetic or diverse approaches. The question of "what is empathy" is vital to solving these issues successfully, and while the answer differs by person, empathy should be built around imagining what someone else may be feeling. By understanding how a person thinks, we can come up with solutions that are more immediately beneficial because they are directly related to a patient's initial concerns.
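As a hedged illustration of what that check might look like in practice, the sketch below trains a toy model on synthetic data and reports its accuracy separately for two hypothetical patient groups. The dataset, the group flag, and the model choice are all invented for the example; the point is only that reporting performance per subgroup, rather than a single aggregate number, is how the disparities described above become visible.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic clinical-style dataset: 1,000 patients, 5 features, plus a
# hypothetical "group" flag (e.g., with/without a given disability).
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)                   # 0 = majority, 1 = under-represented
y = (X[:, 0] + 0.5 * group * X[:, 1] > 0).astype(int)   # outcome depends partly on group

model = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])

# Report accuracy separately for each subgroup rather than one overall number;
# a large gap signals that the training data or features under-serve a group.
X_test, y_test, g_test = X[800:], y[800:], group[800:]
for g in (0, 1):
    mask = g_test == g
    acc = model.score(X_test[mask], y_test[mask])
    print(f"group {g}: accuracy = {acc:.2f} (n = {mask.sum()})")
```

Real audits use far richer fairness metrics and real cohorts, but even this simple per-group breakdown makes visible the kind of disparity that a single headline accuracy figure hides.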
Conclusion and Simplified Course of Action!
In our broader community, it is our duty to support those who may not have the ability to voice their opinions, especially those who are physically unable to do so. Brain-computer interfaces with artificial intelligence are still in the infancy of their development, and the initiative will of course need years more of careful work. But we should not disregard their potential to change a patient's course of life by offering freedoms that a disability has taken away. With more research and trials, there is great potential to help those in need rebuild their lives by giving them their functionality back. By educating our communities on the different types and stages of disability, we can become a more inclusive society and gain the ability to assess the different effects of brain-computer interfaces without explicit, or even implicit, social bias. The full human experience should be a fundamental right. Allowing someone who is physically or cognitively disabled to experience true human connection and interaction with their loved ones is a vital component of "how to be human." By redesigning ethical healthcare practices for those with disabilities, and maintaining transparency around the use of personal biodata in the corresponding technologies, we can continue to make advancements that empower all communities without leaving anyone feeling left behind. As humans, we are also responsible for progress in our futures, whether through inclusivity within smaller immediate communities or across society as a whole. It is fundamental for these new engineering practices to consider the perspectives of the cognitively or physically impaired while designing a product that claims to change someone's course of life for the better.
** all opinions are my own!
References
Drew, Liam. “The Brain-Reading Devices Helping Paralysed People to Move, Talk and Touch.” Nature, vol. 604, no. 7906, 2022, pp. 416–419., https://doi.org/10.1038/d41586-022-01047-w.
Drew, Liam. “The Ethics of Brain–Computer Interfaces.” Nature, vol. 571, no. 7766, 24 July 2019, https://doi.org/10.1038/d41586-019-02214-2.
Sebastián-Romagosa, Marc, et al. “Brain Computer Interface Treatment for Motor Rehabilitation of Upper Extremity of Stroke Patients — a Feasibility Study.” Frontiers in Neuroscience, vol. 14, 2020, https://doi.org/10.3389/fnins.2020.591435.
“Interfacing with the Brain.” Neuralink, https://neuralink.com/approach/.
Wolpaw, Jonathan R., and Janis J Daly. “Brain–Computer Interfaces.” Neurological Rehabilitation, vol. 7, no. 11, 3 Oct. 2013, pp. 67–74., https://doi.org/10.1016/b978-0-444-52901-5.00006-x.
“Disability, Bias, and AI.” AI Now Institute at NYU, https://ainowinstitute.org/disabilitybiasai-2019.pdf.
Hamilton, Isobel Asher. “Elon Musk’s Neuralink Wants to Embed Microchips in People’s Skulls and Get Robots to Perform Brain Surgery.” Business Insider, 16 Feb. 2022, https://www.businessinsider.com/neuralink-elon-musk-microchips-brains-ai-2021-2.
“SAM Car.” Arrow Electronics, https://www.arrow.com/en/fiveyearsout/stories/sam-car.