Three vital ethical issues highlighted by the vast potential of neural interfaces

Last week I participated in a TV panel discussion on the implications and ethics of neural interfaces, together with the fascinating cyborg artist Neil Harbisson and Oxford University neuroethicist Stephen Rainey. A video of the full program is below.

I was delighted to participate in the conversation, as I am increasingly turning my attention to the evolution and implications of brain-computer interfaces (BCIs), which I see as one of the most important developments for the future of humanity.

An excellent report released last week by the Royal Society examined the state of the underlying neural interface technologies, their likely future evolution, and their many implications, including individual and societal opportunities and risks.

Some of the most exciting possibilities of neural interfaces include giving disabled people greater control over their lives, supporting treatment of a variety of medical conditions, and potentially allowing anyone to augment their cognitive capabilities.

As we explored on the panel, of the plethora of ethical issues raised by neural interfaces, I think three are particularly important: access, choice, and security.

Access

The extraordinary potential benefits of neural interfaces for disabled people will not be available to everyone, not least because of the cost of these complex systems and their installation. We need to decide the basis on which they will be allocated.

On a broader scale, neural interfaces have the potential to enhance anyone's capabilities in performing their work and achieving their ambitions. While in the very long run interfaces may be available to all, for the foreseeable future their cost means that only more affluent people will get them. This could aggravate the ongoing polarization of wealth, income, and opportunity, making access a fundamentally important issue.

Choice

While it is not a major decision to use non-invasive neural interfaces such as headbands, it is a substantially bigger step to have devices implanted in your brain. In particular, early adopters will be taking personal risks in laying the groundwork for others to follow.

A divide will inevitably grow between those who choose to use neural interfaces to augment their capabilities and those who do not, whether for reasons of religion, risk aversion, or a simple dislike of modifying their bodies and minds.

Security

The potential risks of brain interfaces are certainly frightening: governments or marketers reading our minds or directly shaping our thinking, or even malware being loaded into our brains.

The response should not be to simply ignore the potential of neural interfaces. These risks do mean, however, that security must be front and center from the very outset of designing these systems.

The rapid rise of the Internet of Things has created major challenges because security has so often been an afterthought. We cannot allow the same thing to happen with neural interfaces.

If you are interested in delving deeper into the manifold issues raised by the development of neural interfaces, watch the TV discussion or read the Royal Society report. I will be sharing more soon on this very important topic.