
How to Keep a Murderer from Hacking Your Cybernetic Implants

Worldbuilding Asked on September 3, 2021

Because my original question regarding the legality of DNL tech
(link here: The legality of Direct Neural Link Technology) was a tad too broad, I decided to try again. This time I will break it down into more manageable, bite-sized nightmare scenarios that I think we should all be concerned about.

Now, the basics of Direct Neural Link tech as previously described: it is all about connecting your mind to the world around you, whether that means downloading information directly into your brain, giving cybernetic limbs all the feeling and vitality of good old-fashioned flesh and blood, or even stimulating nerves or bypassing them altogether to make the lame walk and the blind see.

But if you put a door in your head to let your mind out into the world, that door could also let something darker from the outside world into your skull. In this case, to commit the oldest crime known to man: cold-blooded murder.

And the worst part is that if we assume DNL tech will be half as ubiquitous in the future as depicted in most sci-fi dealing with it, there could be a whole variety of such murders, from a jealous lover hacking the spouse of an ex to do the deed, all the way up to hacking a presidential aide for a targeted assassination.

Now, I think we can all agree that no one wants this to happen: not the populace putting this tech in their heads, not the police who would have to investigate these killings, not the muggles who, like me, don't want to be in the passenger's seat when one of Google's self-driving cars hops the curb, runs over three people, and drives off a bridge, and certainly not the corporations who would be sunk by the many justified class-action lawsuits.

So the question is: How do we keep a psychopath with a laptop from hacking people’s nervous systems in order to murder someone?

5 Answers

Security overdrive

One way to keep your brain link secure is to hire and maintain a group of expert hackers who are constantly probing it and making fixes for possible intrusions. This is good for high-profile people, since it means there is a team standing by for their security, and good for normal people, since they get great security as well. The company that makes the link would also want this: the more people who use it, the more likely an attack will land on a less important user than the people paying for the security service. You can also offer cash rewards for potential intrusion methods, so most people will turn over an intrusion method when they discover it rather than trying to use it illegally.

Don't trust the user

In our current culture, we install apps and browse sites constantly, downloading information and exposing ourselves to risk. Because of this, users can easily get hacked by downloading something that compromises them. The solution: take away the user's freedom. You want the brand-new in-brain video editor? Nope, you get the old, secure, tested version that was developed in house and never updates, since updates imply security risks. Every function has one application and one application only. To browse the internet, the content is scraped and scrubbed so you don't get infected, and if that process can't complete, the site is simply blocked. Secure messaging exists, but no emojis or extra characters that could enable exploits. This isn't great, but it works.
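For flavor, here is a minimal sketch of what that kind of scrub-or-block gatekeeper could look like in ordinary software terms; the allowed character set, the length cap, and the block-anything-suspicious rule are illustrative assumptions, not a real DNL API.

```python
# Illustrative sketch of a "don't trust anything" content gate for a locked-down
# brain link: every message is reduced to a small, known-safe character set, and
# anything that cannot be delivered untouched is rejected outright.

ALLOWED_CHARS = set(
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789 .,!?'-"
)

MAX_MESSAGE_LENGTH = 500  # arbitrary cap; oversized payloads are a classic exploit vector


def scrub_message(raw: str) -> str:
    """Return the message with every non-allowlisted character removed."""
    return "".join(ch for ch in raw if ch in ALLOWED_CHARS)


def deliver_or_block(raw: str) -> str | None:
    """Deliver a message only if scrubbing it loses nothing; otherwise block it."""
    if len(raw) > MAX_MESSAGE_LENGTH:
        return None                      # too big: block, don't truncate
    scrubbed = scrub_message(raw)
    if scrubbed != raw:
        return None                      # anything suspicious at all: block the whole message
    return scrubbed


if __name__ == "__main__":
    print(deliver_or_block("Meet me at noon."))              # delivered
    print(deliver_or_block("Meet me at noon. \U0001F600"))   # blocked: emoji not on the allowlist
```

The design choice that matters is the last one: the gate never tries to repair a suspicious message, it just refuses it, exactly as the answer suggests for sites that can't be fully scrubbed.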

Neuroscience is hard

In order to hack someone, you first need to understand their brain and deal with the idiosyncrasies of its wiring. For simple stuff like putting information on the optic nerve or tapping muscle fibers, that might be manageable, but getting someone to alter their behavior is hard. Also, since every brain is different, every hack is different. A hack that gets the president to pass a new law will make his vice president feel hungry and leave a few memories that look like static, and will cause a random citizen to remember static and suffer uncontrollable nausea. Therefore, to hack a brain you need a lot of effort and at least a master's degree in neuroscience. In that case, anyone who would hack a brain would instead use a cheaper solution, like a bullet or something.
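A toy sketch of that "every hack is different" point, assuming purely for illustration that each implant routes commands through a calibration table unique to its user; the channel count and the hash-based derivation are invented for the demo.

```python
# Toy illustration of why a "brain exploit" would not transfer between victims:
# if each implant maps abstract commands onto neural channels through a
# per-user calibration table, a payload crafted against one person's mapping
# decodes into meaningless noise on anyone else's.
import hashlib
import random

CHANNELS = 64  # illustrative channel count, not a real spec


def calibration_table(user_id: str) -> list[int]:
    """Per-user channel permutation, derived here from a hash purely for the demo."""
    seed = int.from_bytes(hashlib.sha256(user_id.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    table = list(range(CHANNELS))
    rng.shuffle(table)
    return table


def encode_for(user_id: str, command_channels: list[int]) -> list[int]:
    """Translate an attacker's intended channels into the victim-specific wiring."""
    table = calibration_table(user_id)
    return [table[c] for c in command_channels]


def decode_on(user_id: str, payload: list[int]) -> list[int]:
    """What the payload actually stimulates on a given user's brain."""
    table = calibration_table(user_id)
    inverse = {wired: logical for logical, wired in enumerate(table)}
    return [inverse[p] for p in payload]


if __name__ == "__main__":
    intended = [3, 17, 42]                        # channels the attacker wants to fire
    payload = encode_for("president", intended)   # crafted against the president's calibration
    print(decode_on("president", payload))        # [3, 17, 42]: works as intended
    print(decode_on("vice_president", payload))   # unrelated channels: hunger, static, nausea
```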

Limited brain functions

The reason the brain is not hackable is that the technology simply doesn't have the ability to hack the brain. The link can put sensory data on nerves, interpret simple impulses to muscles, and help you remember images and sounds. But changing someone's thoughts is impossible because the link doesn't have that capability. The machine has read and write access to the brain, but your brain sanitizes inputs, so you can't brainwash someone. The best you can do is send masses of junk data or pain, but any person with a wrench or access to Quaaludes has that ability too. You would still see jamming attacks, and doing this to someone in traffic could be a death sentence, but it would stop most attacks.
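One way to picture that "limited capability" argument is an interface that simply has no method for writing to cognition, so there is nothing for an attacker to call; the class and channel names below are invented to illustrate the idea, not drawn from the answer.

```python
# Sketch of a deliberately narrow link: sensory overlays in, coarse motor intent
# out, and no general write path into thought or memory at all.

class NeuralLink:
    """A hardware-limited interface with no 'belief' or 'memory' channel."""

    WRITABLE_CHANNELS = {"vision_overlay", "audio_overlay"}

    def write_sensory(self, channel: str, frame: bytes) -> None:
        # The hardware exposes only these channels; anything else does not exist.
        if channel not in self.WRITABLE_CHANNELS:
            raise PermissionError(f"channel {channel!r} does not exist in this hardware")
        # ... hand the frame to the sensory stimulator ...

    def read_motor_intent(self) -> dict:
        # ... return decoded, coarse muscle commands only ...
        return {"left_hand": "grip"}


link = NeuralLink()
link.write_sensory("vision_overlay", b"\x00" * 1024)   # worst case: junk pixels (a jamming attack)
try:
    link.write_sensory("belief", b"obey me")            # the capability simply is not there
except PermissionError as err:
    print(err)
```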

Laptops are not great hacking tools

Assuming that you are being literal with your laptop example, most laptops have about 16 GB of RAM and a passable GPU. It is possible that, to execute an attack, you need to compute information about the whole brain and manipulate every piece to get the expected result. That will probably need more than 16 GB and a merely okay GPU. You might be able to pull off hacks with desktops that border on supercomputers, but you will keep hacking out of the hands of the true script kiddies.
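A rough back-of-envelope check on that claim; the 86 billion neuron count is the commonly cited estimate, while the synapse count and bytes-per-synapse figures are loose assumptions chosen only to show the order of magnitude.

```python
# Back-of-envelope check on the "your laptop can't hold a brain model" point.
# 86 billion neurons is the usual estimate; the per-neuron synapse count and
# bytes-per-synapse are rough assumptions, purely for an order-of-magnitude feel.

NEURONS = 86e9
SYNAPSES_PER_NEURON = 7_000       # commonly cited rough average
BYTES_PER_SYNAPSE = 4             # assume a single 32-bit weight, wildly optimistic

model_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
laptop_ram = 16 * 2**30           # 16 GB

print(f"naive whole-brain model: {model_bytes / 2**40:,.0f} TiB")
print(f"factor over a 16 GB laptop: {model_bytes / laptop_ram:,.0f}x")
```

Even with those generous simplifications, the model lands in the petabyte range, which is the answer's point: the hardware bar alone keeps casual attackers out.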

Correct answer by Charlie Hershberger on September 3, 2021

I am currently writing a story centered on this.
I prefer to call this technology a Brain-Computer Interface (BCI) or Brain-Machine Interface (BMI) rather than a Direct Neural Link, precisely to point out that this technology is NOT direct. You do not connect directly to the network.
First of all, current real-world studies have pointed out the difficulty of signal processing.

Signal Processing
One of the issues we will find when dealing with brain data is that the data tends to contain a lot of noise. When using EEG, for example, things like grinding of the teeth will show in the data, as well as eye movements. This noise needs to be filtered out.
The data can then be used for detecting actual signals. When the subject is actively generating signals, we are usually aware of the kind of signals we want to detect. One example is the P300 wave, which is a so-called event-related potential that will show up when an infrequent, task-relevant stimulus is presented. This wave will show up as a large peak in your data, and you might try different techniques from machine learning to detect such peaks.
A Beginner’s Guide to Brain-Computer Interface and Convolutional Neural Networks
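As a concrete illustration of the pipeline the quoted guide describes (band-pass filtering out artifacts, then looking for a P300-like peak after each stimulus), here is a minimal sketch using NumPy and SciPy; the sampling rate, band edges, epoch window, and synthetic data are illustrative choices, not canonical values.

```python
# Minimal sketch of EEG preprocessing and P300 detection: band-pass the raw
# signal to strip slow drift and muscle/eye artifacts outside the band of
# interest, cut epochs locked to each stimulus, average them, and look for a
# positive peak roughly 300 ms after stimulus onset.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # Hz, assumed sampling rate


def bandpass(eeg: np.ndarray, low: float = 0.5, high: float = 15.0) -> np.ndarray:
    b, a = butter(4, [low, high], btype="bandpass", fs=FS)
    return filtfilt(b, a, eeg)


def p300_amplitude(eeg: np.ndarray, stimulus_samples: list[int]) -> float:
    """Average stimulus-locked epochs and return the peak in the 250-450 ms window."""
    filtered = bandpass(eeg)
    window = int(0.8 * FS)  # 800 ms epoch after each stimulus
    epochs = [filtered[s:s + window] for s in stimulus_samples if s + window <= len(filtered)]
    erp = np.mean(epochs, axis=0)
    lo, hi = int(0.25 * FS), int(0.45 * FS)
    return float(erp[lo:hi].max())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.normal(0, 1, FS * 60)                   # one minute of fake noisy EEG
    stimuli = list(range(FS * 2, FS * 58, FS * 2))
    bump = 3 * np.hanning(int(0.2 * FS))
    for s in stimuli:                                  # bury a synthetic P300 after each stimulus
        start = s + int(0.25 * FS)
        eeg[start:start + len(bump)] += bump
    print(f"P300-like peak: {p300_amplitude(eeg, stimuli):.2f} (a.u.)")
```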

This problem becomes even more severe as you increase the resolution of the device to perform complex tasks. Imagine the leap in complexity going from a signal like "go forward" to transmitting the memory of a scene you experienced. Furthermore, different individuals will show different signals, and the higher the resolution, the more divergent those signals become.

You need an AI
What is needed is a middleware resident AI (this is why the link is NOT direct) that will learn the user's patterns and grow with him or her. A number of "games" will be played during the learning process to align the AI. It will then be in charge of both acquiring signals from the user and acting on them, and of receiving input from the network and activating the appropriate signals in the brain.
Obviously it would also check and filter incoming data packets.

Let's imagine a scenario to lay out how all of this would work.
Our BCI user is sitting at a pizzeria on Piazza Navona. Being an English speaker, she forms the thought "how to order a pepperoni pizza in Italian". The signals of that thought are read by the resident AI, which has been trained for years to read her thoughts correctly. The AI then prepares the inquiry for the network and attaches a crypto code to it. The inquiry is posted to the network servers through secure protocols. These servers are registered and certified, so they can return the results of the inquiry with a corresponding crypto code. The AI receives the packets of the search results, verifies that the provided crypto codes match, and scans the packets for malicious instructions. Then it examines the results and, if needed, performs further research to refine them. All of this happens in a matter of milliseconds, without disturbing the user.
Once the results satisfy the AI, it presents the answer to the user, such as "posso avere una pizza ai peperoni per favore?", along with the notion that no such thing as pepperoni pizza exists in Italy.
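Here is a minimal sketch of how the crypto-code handshake in that scenario might be modeled, assuming a shared-key HMAC scheme and a naive blocklist scan as stand-ins for whatever real protocol and content filter the setting uses; the server name, keys, and blocked patterns are invented.

```python
# Sketch of the resident AI's gatekeeping: it signs outgoing inquiries, accepts
# only responses signed by a registered server, and scans the verified payload
# before anything reaches the user.
import hmac
import hashlib
import json

IMPLANT_KEY = b"implant-secret"                              # provisioned at installation (assumed)
REGISTERED_SERVERS = {"search.example": b"server-secret"}    # certified servers and their keys
BLOCKED_PATTERNS = [b"WRITE_MOTOR", b"OVERRIDE_SAFETY"]      # toy malicious-instruction scan


def sign(key: bytes, payload: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def send_inquiry(text: str) -> dict:
    payload = json.dumps({"q": text}).encode()
    return {"payload": payload, "crypto_code": sign(IMPLANT_KEY, payload)}


def accept_response(server: str, payload: bytes, crypto_code: str) -> bytes | None:
    key = REGISTERED_SERVERS.get(server)
    if key is None:
        return None                                          # unknown server: drop
    if not hmac.compare_digest(sign(key, payload), crypto_code):
        return None                                          # crypto code mismatch: drop
    if any(p in payload for p in BLOCKED_PATTERNS):
        return None                                          # malicious instruction: drop
    return payload                                           # safe to present to the user


if __name__ == "__main__":
    inquiry = send_inquiry("how to order a pepperoni pizza in Italian")
    answer = json.dumps({"a": "posso avere una pizza ai peperoni per favore?"}).encode()
    code = sign(REGISTERED_SERVERS["search.example"], answer)
    print(accept_response("search.example", answer, code))       # accepted
    print(accept_response("search.example", answer, "forged"))   # dropped
```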

So a malicious attacker would have to find a way through all of the security in place and fool the AI into forming signals with malicious consequences. No system is 100% safe, but the chances of a successful attack would shrink as the information passed on grows more complex. It would be impossible to bypass the security and control someone as a remote tool, but it may be possible to plant ideas in his or her mind through careful feeding of information. But doesn't that happen to us all the time anyway?
Would the user be able to detect an external thought inserted into her mind? Would it glaringly show itself? "Until yesterday I was happy with Norman, but now I have this feeling that he is up to something."
Well, it's your story, up to you to decide.

Answered by Duncan Drake on September 3, 2021

1) Use application-specific chipsets (I think that is the correct name)

Your cybernetic system would contain a series of "chips" specifically designed to perform one task (or a specific set of tasks) fixed at the time of fabrication. Critically, the programming in these chipsets cannot be changed post-production (except perhaps where the person wanting to install new software has direct physical access to the chip, in which case they could swap the old chip for a new one loaded with the update). So unless your "hacker" gets your character under the knife, they won't be able to override existing programs.

The downside, of course, is that, like your washing machine, your character won't be able to upload or install new and improved software without surgery, and whatever software he does have will be relatively inflexible. (See the sketch after this list for a rough illustration of the fixed-function idea.)

2) Limit the possibility of signal/sensor jamming or hacking by using detachable external receivers, if not transmitters. Links to systems inside your character's body can exist (e.g. implanted just under the skin), but the "box" containing the receiver, at least, should be external and detachable. Otherwise you risk your character being attacked with hostile data inputs at critical moments, or even continuously (imagine looped Celine Dion video tracks playing inside your head 24/7 with no off switch).
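As a rough illustration of point 1), here is a sketch of a command table frozen at "fabrication"; MappingProxyType stands in for a mask-programmed ROM, and the operation names are invented.

```python
# The implant ships with a frozen table of operations and no code path for
# adding to it, so "installing" new software without physical access is simply
# not possible.
from types import MappingProxyType

FIRMWARE_V1 = MappingProxyType({
    "read_glucose": lambda: 5.4,
    "pace_heart":   lambda: "pacing at 60 bpm",
})


def execute(op: str):
    handler = FIRMWARE_V1.get(op)
    if handler is None:
        raise ValueError(f"unknown operation {op!r}: this chip was never built to do that")
    return handler()


print(execute("pace_heart"))                             # works: baked in at fabrication
try:
    FIRMWARE_V1["exfiltrate_memories"] = lambda: "..."   # a remote "update" attempt
except TypeError:
    print("firmware table is read-only; come back with a scalpel")
```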

Answered by Mon on September 3, 2021

How do we prevent people from killing each other with other lethal methods we walk around with every day, like cars, power tools, or drain cleaner? Investigation and forensics. You make it likely enough to get caught that most people don't want to take the chance.

Yes, it's possible to use J. Random Computer to hack J. Random Citizen's cybernetic implants in a way that kills them. However, the implants keep a hardware-level, indelible record of the input they receive: commands, the type of equipment that transmitted them, usernames and verification codes, and the like. You could physically remove the implants, but that's just stabbing someone to death with extra steps; you'd have to be extremely careful to avoid leaving forensic evidence left and right. Otherwise, it's a matter of investigation: finding people with motive who could have sent the fatal commands, then verifying whether they have an alibi.
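One way to picture that indelible record is a hash-chained audit log, where rewriting any past entry breaks the chain and the tampering itself becomes forensic evidence; the field names below are illustrative, and a real implant would presumably keep this in write-once hardware.

```python
# Sketch of a tamper-evident audit log: each entry is chained to the hash of
# the previous one, so altering or removing history after the fact is detectable.
import hashlib
import json
import time


class AuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, command: str, source_device: str, credential: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "time": time.time(),
            "command": command,
            "source_device": source_device,
            "credential": credential,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """True only if no entry has been altered or removed since it was written."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True


log = AuditLog()
log.record("set_pacing_rate=240", "unregistered laptop", "stolen service credential")
print(log.verify())                           # True: the murder weapon logged itself
log.entries[0]["command"] = "routine check"   # the killer tries to clean up afterwards
print(log.verify())                           # False: the tampering is now evidence too
```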

Answered by Cadence on September 3, 2021

Take the brain interface out of the question and it stops being a worldbuilding question and becomes one better suited to security.se.

With or without the brain interface, the question boils down to:

How do we keep a psychopath with a laptop from hacking

Because the motive and the target are irrelevant. And the only correct answer is that you can't. Security has always been a cat-and-mouse game, and there is no indication that it will ever stop being so.

So just like love and war, this is a game in which the only way to prevent a loss is by not playing it.

You could live without cybernetics. It would be like living without electricity in the 21st century, which some people do of their own volition (e.g., Amish communities). In a world that depends ever more on technology, this could really cut you off from the grid. Many good sci-fi stories have a plot like that, where the hero is unable or unwilling to use the latest technologies for one reason or another (e.g., the book Forever Peace and the movie Surrogates).

Otherwise, once you've got that chip in your head, you're fair game.

Answered by The Square-Cube Law on September 3, 2021
