
The Final Frontier: Brain Data

Telepathy is approaching: brain-machine interfaces will vastly change the ways we communicate with reality



Andy Mandrell

3 years ago | 6 min read

The art of communication between humans and machines, aided by an increasingly technological and data-driven society, has reached a pinnacle of optimization. While we once communicated through physical interaction and spoken language, the future of communication may depend on a more telepathic form: brain-to-machine communication.

A brain-machine interface is a device that converts neuronal information into a form that an external computer or machine can then use to execute our actions and decisions.
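To make the definition concrete, below is a minimal, hypothetical sketch of the decoding step such a device performs: a classifier trained on calibration recordings maps each new window of neural features to a discrete command the machine can execute. The features, command labels, and model choice are illustrative assumptions, not a description of any particular interface.

```python
# Hypothetical sketch of a brain-machine interface "decoder":
# neural feature vectors in, discrete machine commands out.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assume each training example is a window of brain activity reduced to a
# small feature vector (e.g., band power per electrode), paired with the
# command the user intended during a calibration session.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))     # 200 calibration windows, 16 features each
y_train = rng.integers(0, 3, size=200)   # intended commands: 0=rest, 1=left, 2=right

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def decode(window_features: np.ndarray) -> int:
    """Map one window of neural features to a command the machine executes."""
    return int(decoder.predict(window_features.reshape(1, -1))[0])

new_window = rng.normal(size=16)         # a fresh window of brain activity
print(decode(new_window))                # e.g., 2 -> the machine moves "right"
```

Everything downstream of `decode` acts on the user's behalf, which is why the quality and handling of the underlying data matter so much.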

Although this device may allow humans to communicate directly with machines, translating thought to action, the technology requires a significant amount of data to provide consistent, accurate, and desirable results. This core requirement reveals many ethical implications of a technology that works closely with the final frontier of our human data: brain data.

Although this technology holds true and great potential, there are also daunting ethical consequences that must be addressed in order to minimize risk and harm.

Privacy


With new ways to communicate with the brain, and given the significantly intimate nature of the data the brain-machine interface collects, there is a possibility for new violations of human privacy.

While many brain-machine interface applications may be created with a benevolent intent of enhancing the quality of human lives, allowing access to human brain data can significantly harm the user’s privacy.

Private information such as brain data, once collected by third-party entities, could be used to target the brain-machine interface user in malicious ways. The entities that possess this private data could manipulate users in many harmful ways, including:

  • Sending unwanted stimuli to cause physical damage to the body
  • Sending visual or auditory stimuli to the user and analyzing the unconscious brain’s response with the intent to collect or steal private information (bank information, passwords, location, secrets)
  • Drawing population inferences from large-scale, raw brain databases with the intent of producing target user profiles of one's memory, emotional inclinations, preferences, and other attributes

Given the highly sensitive nature of this data, it undoubtedly holds very high potential value. One could reason that those in control of the data would not hold back from extracting or selling it to third-party vendors: companies would be motivated to acquire and process such private data because it allows them to operate more efficiently, conduct internal research, and commercialize.

For example, the ECG feature of the Apple Watch has influenced insurance companies to modify prices based on the sensitive data (our body's electrical impulses) that the watch collects; basic marketing and strategy tells us that knowing the profile of users and consumers before the sale of products is an important competitive advantage.

While the Apple Watch ECG collects only electrical impulses, it is hard to imagine what modifications to our daily lives would result from the collection and use of brain data.

Ultimately, the combination of these motives poses a direct threat to a brain-machine interface user's privacy, and we must strive to draft regulations that mitigate the power and monetary incentives this technology inspires. Examples include modifying the GDPR to ensure international applicability or strictly controlling the sale and transfer of brain data.

Autonomy


With a technology that optimizes to our preferences and decisions, brain-machine interfaces threaten to impair our human sense of autonomy; our ability to self-determine and make independent decisions, uninfluenced by external bodies, is at risk with the rise of this technology.

For ethicists, autonomy refers to an individual’s capacity to self-determine. According to most brain-machine interface researchers, brain-machine interfaces are intended to increase physical autonomy and quality of life in severely disabled individuals.

The concept of autonomy in this technology differs between ethicists and brain-machine interface researchers; brain-machine interface researchers limit the discussion of autonomy to disabled patients, who will likely be a minority of users in the future of brain-machine interface development.

Brain-machine interface algorithms continually adapt to and learn from user thoughts and tendencies and may suggest or anticipate future actions that the user wants to take. As a consequence, users may build a dependency on brain-machine interface suggestions and lose the ability to ensure that intentional decisions are controlled and realized.

Now able to rely on brain-machine interfaces to execute their thoughts, users may lose a sense of independent thought and personal development.

A classic example is limiting your set of actions and thoughts to those you know the device will accurately and reliably execute (less exploration, more exploitation). If current brain-machine interface research defines autonomy as narrowly as noted above, then a user's sense of autonomy may be threatened.
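To see how this narrowing can play out, consider the toy sketch below. It frames each choice of action (by the user, or by an adaptive interface making suggestions) as an epsilon-greedy policy: with no exploration, every choice collapses onto the single action the device executes most reliably. The action names and reliability scores are invented for illustration.

```python
# Toy illustration of the exploration/exploitation trade-off described above:
# if only the most reliably executed action is ever chosen, behaviour
# collapses onto a small set of "safe" choices.
import random

reliability = {"type_text": 0.95, "open_app": 0.90, "sketch_idea": 0.40, "compose_music": 0.30}

def choose_action(epsilon: float) -> str:
    """Epsilon-greedy choice: explore a random action with probability epsilon,
    otherwise exploit the action the device executes most reliably."""
    if random.random() < epsilon:
        return random.choice(list(reliability))       # exploration
    return max(reliability, key=reliability.get)      # exploitation

pure_exploitation = [choose_action(epsilon=0.0) for _ in range(1000)]
some_exploration = [choose_action(epsilon=0.3) for _ in range(1000)]

print(len(set(pure_exploitation)))  # 1 -> every choice is the same "safe" action
print(len(set(some_exploration)))   # typically 4 -> less reliable actions still get tried
```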

In order to avoid this, brain-machine interface researchers must bring in experts in multiple domains to effectively define the ethical implications of human autonomy and modify their technology and practices accordingly.

Security


Security is a core responsibility for engineers building computer technology. The collection, storage, and transfer of the most sensitive human data will call for new protective and preventive measures, combined with detection and incident response, in order to prevent and contain safety incidents.
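As one small illustration of the kind of protective measure this implies, the sketch below encrypts a recorded block of brain data before it is stored or transferred, using the widely available `cryptography` package. The record's fields and the idea of a standalone "brain-data record" are assumptions made for the example, not a description of any real device's data format.

```python
# Illustrative sketch: authenticated encryption of a brain-data record
# before it is stored or transmitted, using the `cryptography` package.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, held by a key-management service
cipher = Fernet(key)

record = {"user_id": "anon-1234", "signal": [0.12, -0.03, 0.45], "t": 1700000000}
token = cipher.encrypt(json.dumps(record).encode())   # ciphertext safe to store or transfer

# Only a holder of the key can recover (and authenticate) the original record.
restored = json.loads(cipher.decrypt(token).decode())
assert restored == record
```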

As with all new technologies that utilize human data, there is a common threat from hackers and malicious actors who wish to inflict harm. If a hacker could gain access to your brain data or brain-machine interface, they may be able to inflict harm in unthinkable ways, as described in the privacy section above.

Security should not be an ad-hoc criterion: users of brain-machine interfaces should not have to worry about security at all during usage. The slightest breach in security could have an enormous impact on a user, possibly resulting in permanent brain damage or death.

Responsibility


The potential widespread use of brain-machine interfaces brings up an important question pertaining to moral and legal responsibility: does the use of a brain-machine interface make the user responsible for all of the machine’s decisions?

A troublesome consequence of this technology is exemplified by the concept of the Moral Crumple Zone, which describes the tendency to attribute the fault of defective algorithms and datafied systems to the human subject. Many humans have experienced thinking something but abstaining from communicating or executing the thought.

There has been extended discussion over whom to ascribe fault to if a brain-machine interface reads a user's thought and accidentally executes a harmful action. Although the user might never have acted that way on their own, they are subject to the consequences of the brain-machine interface's decision and, ultimately, of their own thoughts.

Current research and understandings of moral and legal responsibility are insufficient to govern the use of a brain-machine interface. As a result, we must carefully specify the scenarios in which we ascribe fault to a brain-machine interface and critically review our understanding of responsibility within brain-machine interfaces, supplemented by methods currently used to address this ethical concern in other technologies.

Why is this important and what can we do?

There is no doubt that computational technologies have become increasingly valuable in the everyday lives of our society. With the rising and ubiquitous use of big data to empower both humans and machines, we find ourselves amidst a time in which we must carefully contemplate the design, impact, and ethics of technology.

Although a brain-machine interface may allow us to significantly improve our everyday lives, we must prioritize the safety and well-being of humanity. We must first pull in experts from many fields — ethicists, legal experts, neuroscientists, and engineers — who support open discussions directed towards addressing these concerns.

For the general population, we can challenge traditional regulatory frameworks, criticize current research practices, and examine the human contexts and ethics of the technology.

Ultimately, we must encourage discourse and empower ourselves by challenging technologies that infringe upon our basic human rights.
