If you’re a nerd, you’ve probably heard of Elon Musk’s latest company, Neuralink, a company “developing ultra high bandwidth brain-machine interfaces to connect humans and computers.” If you’re really into this sort of thing, you might have run into the excellent (but extremely long!) 36,000+ word article on what Neuralink is trying to achieve on Wait But Why, based on interviews with Musk and his team. If you don’t wish to read through the whole thing, I’ll sum up:

  • Throughout human history, progress has been highly linked to the ability to share information.
  • The human brain exchanges data with itself orders of magnitude faster than it exchanges data with others due to the bottlenecks of speech and hearing.
  • Brain-machine interfaces not only offer the possibility of letting the brain offload tasks that machines do better (computation, lossless storage of bulk information, etc.), but of removing the limiting barriers in communication between individuals.
  • The effects of BMIs on society would be staggering (many examples are given).

A brain-machine interface is not simply a sensor. To any human looking at the data coming off of a BMI, it would appear to be chaotic noise. To parse it, you have to use another neural net — this time, an Artificial Neural Net (ANN). 

While ANNs have existed for quite some time, they’ve undergone staggering levels of improvement in recent years, to the point that they’re making tasks long considered “AI-hard” (that is, “humans can do them, AIs can’t”) available to the public. For example, pull up Google Photos and search for some random thing you might have pictures of on your phone — say, “beer” or “concert” or “rock” or “forest”. It’s not perfect, but it does surprisingly well at an incredibly difficult task. Speech recognition used to produce almost laughably bad results; today, with modern neural nets, the accuracy is surprisingly good, and it’s found in many products. Facial recognition systems have gotten good enough that they’re raising disturbing ethical questions. Task after task, ANNs are getting better than humans — lip reading being a recent example. These advances have not come simply from Moore’s Law providing more processing capability; the biggest gains have come from improved learning algorithms.

At a previous job, we used ANNs for brain research: literally using artificial brains to help study biological ones. We trained the ANNs to dissect MRI images of the brain into individual sections (a time-consuming task). When I first started, humans were far better at it than the ANNs were. By the end, the ANNs were beating humans.

ANNs do not work by the same means as human brains. But nor do they need to. They just need to work.

Neural nets, whether artificial or biological, are “trained” — that is, “rewarded” when they return good results and “punished” when they return bad ones. With an ANN reading out activity from the nervous system, trying to interpret it, and being appropriately rewarded for doing what the user wants, it can steadily learn to “interpret” the data it’s receiving. At the same time, the biological nervous system does the same thing, strengthening and weakening connections to the neurons being read, and being rewarded when it achieves a desired result. The net result? Things like this:
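As a loose illustration of that mutual training loop, here is a toy reward-modulated update rule (entirely made up for illustration, not how real BMI decoders are trained): connections active during rewarded outcomes are strengthened, and those active during punished ones are weakened.

```python
def decode(weights, spikes):
    """The decoder's output: a weighted 'vote' over recorded channels."""
    return sum(w * s for w, s in zip(weights, spikes))


def reward_update(weights, spikes, reward, lr=0.1):
    """Strengthen connections active during a rewarded outcome,
    weaken those active during a punished one."""
    return [w + lr * reward * s for w, s in zip(weights, spikes)]


weights = [0.0, 0.0, 0.0]

# made-up trials: (which channels fired, did the user get what they wanted?)
trials = [
    ([1, 1, 0], +1),  # channels 0 and 1 fired; outcome rewarded
    ([0, 0, 1], -1),  # channel 2 fired; outcome punished
    ([1, 1, 0], +1),
    ([0, 1, 1], -1),
]
for spikes, reward in trials:
    weights = reward_update(weights, spikes, reward)

# channels that reliably predict success end up with the largest weights
```

The biological side of the loop does the analogous thing in reverse, adjusting its own synapses toward whatever makes the decoder produce the desired result.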

Current BMI systems are, unfortunately, highly limited: they only monitor a small number of neurons (or collective groups of neurons) at a time. The data “pipe” is very small (particularly for non-invasive BMIs), and communication through it is poorer than using our brain’s innate mechanisms. Neuralink wishes to change this, advancing BMIs to the point that they vastly exceed our brain’s I/O abilities. This would enable not just the linking of humans with devices, but the direct linking of humans with each other.

This is where things start to get interesting.

Information in a brain is not stored in a single specific location. You don’t have a neuron whose firing means “Florida”, another which means “Georgia”, and so forth; data is distributed, and thought proceeds in an almost “voting”-like manner via the synaptic contributions of numerous neurons. When a given neuron’s activation threshold is reached, it fires off, contributing to further activation elsewhere.
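That “voting” can be sketched as a toy threshold unit (weights entirely made up; real neurons integrate inputs over time and are vastly messier):

```python
def neuron_fires(inputs, weights, threshold):
    """Tally the 'votes': sum the weighted synaptic contributions and
    fire only if the activation threshold is reached."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold


# "Florida" isn't one neuron; it's a pattern of activity across many.
upstream_activity = [1, 1, 0, 1]          # which upstream neurons fired
synapse_strengths = [0.4, 0.3, 0.9, 0.2]  # made-up connection weights

fired = neuron_fires(upstream_activity, synapse_strengths, threshold=0.8)
```

No single input decides the outcome; it is the combined contribution that pushes the unit past its threshold, which is why no single neuron “means” Florida.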

With an increasingly high bandwidth, bidirectional BMI, the BMI’s ANN not only interprets and is rewarded based on what the biological brain wants, but it likewise contributes back, taking part in the same “voting” process by activating synaptic pathways. The greater the bandwidth, the greater the BMI’s contribution. In short, it slowly ceases to be just an interface, and progressively becomes more of your mind itself.

As BMIs enable linkage between individuals, a high bandwidth / low latency BMI between two people who communicate extensively begins creating effective links between their brains, as if they were physically connected. The thought processes of the two minds begin to merge at the interconnects, to the degree that the brain and the BMI’s ANN are “rewarded” for doing so.

Which brings us to the topic of death.


I’ve long thought about my ideal scenario — one which I don’t expect to happen in my lifetime: a scenario in which BMIs have advanced to the point that each neuron’s behavior, down to every last dendritic connection, can be bidirectionally interfaced and fully simulated, including its evolution over time. When a person is nearing the end of their life, they are “transitioned” into the simulation: one neuron at a time, that neuron is simulated, the biological neuron is induced to undergo apoptosis (programmed cell death), and the simulated results are passed along to its neighbors as though they were coming from the biological neuron. One after the next, each neuron is steadily “migrated” into simulation, until there is no biological brain left, only the simulation of it. There is never a point at which the two are separate, just a steady transition from the physical world to the digital.
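Granting the enormous assumption that a single neuron could ever be simulated this faithfully, the migration itself is just a loop: replace one unit at a time while its neighbors keep receiving the same signals. A schematic sketch, with entirely hypothetical classes:

```python
class BiologicalNeuron:
    """Stand-in for a living neuron; output() is whatever signal it emits."""
    def __init__(self, neuron_id):
        self.neuron_id = neuron_id

    def output(self):
        return ("signal", self.neuron_id)


class SimulatedNeuron:
    """A simulated replacement that reproduces the original's output,
    so downstream neighbors can't tell the difference."""
    def __init__(self, source):
        self.neuron_id = source.neuron_id
        self._signal = source.output()  # behavior capture, hand-waved here

    def output(self):
        return self._signal


def migrate(brain):
    """One neuron at a time: simulate it, then retire the biological
    original (the apoptosis step in the scenario above)."""
    for i, neuron in enumerate(brain):
        if isinstance(neuron, BiologicalNeuron):
            brain[i] = SimulatedNeuron(neuron)
    return brain


brain = [BiologicalNeuron(i) for i in range(5)]
outputs_before = [n.output() for n in brain]
migrate(brain)
outputs_after = [n.output() for n in brain]
assert outputs_before == outputs_after  # neighbors never notice the swap
```

The entire philosophical weight of the scenario sits in the hand-waved “behavior capture” line; everything else really is just bookkeeping.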

This is, however, vastly different from the state of the art today.

Our ability to simulate individual neurons is lacking, and the best BMIs today monitor only around 500 neurons or groups of neurons, and only via their synaptic patterns. There are roughly a hundred billion neurons in the brain, and a full, bidirectional simulation involves a much more in-depth connection.

Neuralink aims to improve this situation. The techniques being investigated are ones I had considered earlier, including neural dust (tiny silicon sensors spread throughout the brain) and optogenetics (using a virus to make neurons flash and respond to optical stimulation). The hope is to provide the interface via procedures as simple as LASIK, or potentially even via injection through the bloodstream. Musk’s goal is to have the first consumer products within 10 years, although that timeline has been widely considered unrealistic, if only from the standpoint of FDA approval.

The key point is, Neuralink might ultimately succeed in dramatically increasing the data pipe, but they’re unlikely to have such a detailed, full-brain model. Yet they would still end up with something that is a piece of you. Literally a part of your mind that is effectively immortal.

But just a part.

And when you die, it becomes all that is left. A ghost of you.

Encoded within the ANN are the pathways that your brain rewarded while it was alive — specific feedbacks for specific responses. If your thoughts of Florida were based on a fishing trip to the Florida Keys when you were 13 years old, and your brain ever communicated with the outside world about it in any form, your brain strengthened connections to the BMI’s ANN concerning those things that represent “Florida” to you. If you have some key arguments in mind that you make whenever debating with people about climate change, those connections were strengthened. All of the sorts of things that make you “you” were strengthened in the ANN.

With the loss of the biological brain, how much of “you” can be reconstituted from this fragmentary, throughput-limited portion of your mind? It’s hard to say. We have real-world examples of limited-bandwidth connections in brains being severed — for example, the cutting of the corpus callosum (the interconnect between the two halves of the cerebrum), which is sometimes used as a treatment for severe, debilitating epilepsy. The most remarkable aspect of the procedure is how unremarkable the results are. There are some unusual complications — for example, a “split brain” individual shown an image only in their left visual field often cannot vocally name what they’ve seen, as the image is sent only to the right side of the brain, while in most people speech is handled on the left side. But the person remains basically the same person; each side of the brain does its best to make sense of what the other side is doing and justify it. A classic example is the case where an image of a chicken is shown to the right visual field and a snowy field to the left, and the person is asked to pick, with each hand, objects corresponding to what they saw. The left hand may choose a snow shovel, while the right hand may choose a chicken foot. But when asked why they chose the shovel, the person tends to respond along the lines of “the shovel is for cleaning out the chicken coop”, with no doubt in their mind as to why that was the answer.

Regardless of whether the technology of the time can evolve an ANN to retain some sort of consciousness on its own, that ANN continues to exist so long as it is preserved for future generations. Should the technology ever exist, whatever is stored in that “brain” could be brought back, to whatever degree is then possible.

Another possibility, regardless of consciousness within the ANN itself, is that of communication with the dead. If the deceased’s ANN ceases to be modified by backpropagation (the “reward”-driven process that changes the strength of its connections), but remains as a fixed interface, then there exists the possibility of another “brain” (biological or artificial) learning to interface with it. They think of Florida and receive back your fragmentary memories of Key West. They think of global warming and get back your arguments. Once an interface to the deceased’s ANN has been learned, backpropagation could be resumed in that ANN, allowing it to continue to learn and function as part of the mind of the living person.
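That freeze-then-resume idea can be sketched with a hypothetical toy class (real frameworks freeze a network by disabling its weight updates, e.g. marking parameters as not trainable; everything below is illustrative):

```python
class GhostInterface:
    """A tiny 'ANN' reduced to one weight vector, with a switch that
    freezes learning while still allowing the interface to be queried."""
    def __init__(self, weights):
        self.weights = list(weights)
        self.frozen = False

    def respond(self, query):
        # reading out the stored associations works whether frozen or not
        return sum(w * q for w, q in zip(self.weights, query))

    def learn(self, query, reward, lr=0.1):
        if self.frozen:
            return  # the deceased's ANN: fixed, but still queryable
        self.weights = [w + lr * reward * q
                        for w, q in zip(self.weights, query)]


ghost = GhostInterface([0.5, -0.2, 0.8])
ghost.frozen = True
before = list(ghost.weights)
ghost.learn([1, 1, 1], reward=1)   # no effect while frozen
assert ghost.weights == before

ghost.frozen = False               # a living mind re-adopts the ghost
ghost.learn([1, 0, 0], reward=1)   # learning resumes where it left off
```

The point of the sketch is the asymmetry: `respond` keeps working for anyone who learns to query the frozen interface, while `learn` only resumes once someone chooses to let the ghost grow again.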

To sum up, and by way of analogy:

In Japan, it is common for people to have a butsudan (仏壇, lit. “Buddhist altar”):

Tablets with the names of the deceased in the family, and/or their photographs, are often present, depending on the sect. These tablets (ihai) are often treated as if they were the spirit of the dead ancestor, and candles or incense may be lit for them. They function as a way to communicate with those who have passed away, with those who passed more recently being treated as individuals, and those who passed long before merging into a sort of collective family spirit.

If Elon Musk succeeds with Neuralink, we may well end up with such a connection to the dead, in a form that is quite a bit more… direct.

  • April 23, 2017