Artificial Intelligence and the Art of Killing

Just a few weeks ago, a robot killed a factory worker. The factory belonged to Volkswagen, one of the world’s biggest auto manufacturers. The worker was installing the stationary robot when it suddenly struck and crushed him. He died in the hospital, and the authorities are now unsure whom to blame.

It’s said that Artificial Intelligence will be the undoing of man; that it is the hell we will create for ourselves. In this incident the robot probably had a defect, since industrial robots are not programmed to decide who should live and who should die, but it serves as a perfect example for the naysayers. Everybody fears a Terminator-like future in which we are fighting for our very survival.

You don’t want a robot doing this, would you? – (Analogy borrowed from ‘The Mexican Ripper’ by artist José Guadalupe Posada).

It is a rational train of thought: we create Artificial Intelligence, it assumes superiority over humans, decides that we have outlived our usefulness, and launches a war on us, and humankind is exterminated. I say rational not because it sounds any less than extremely stupid, but because each step follows from the one before it.

Elon Musk thoroughly condemns Artificial Intelligence. He does not cite concrete reasons, but he firmly maintains the opinion. If somebody hits me by mistake while travelling on the road, I will feel a momentary surge of anger, but I will try to understand why the other person hit me. In any scenario, being the kind of person I am, I would rather not retaliate even if the other person did it on purpose. I want my peace of mind, and for such a trivial matter, what is right and what is wrong does not concern me. Making a machine understand this is almost impossible, or at least it seems so.

Angels and demons: the world is made up of them. In reality it’s grey, but that’s how we generally perceive people. Darkness is evil, and the bearer of evil is the devil.

It would boil down to whether the person is right or wrong. With this in mind, one question arises: how would Artificial Intelligence determine whether a person is right or wrong? One way would be to check whether the person under observation has done something questionable when compared against a list of ideal behaviours that a human being should display. Another thing to consider is the impact of his/her irregular behaviour. That is how we would think, although in our case analysing things is second nature, so we end up not asking such questions. The movies tell us that at some point, in a future we may or may not be able to foresee, Artificial Intelligence is going to decide that it needs to rule the world, and machines will start killing people.
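The judgement procedure described above can be sketched as a toy program. Everything here is hypothetical: the list of ideal behaviours, the impact weights, and the `judge` function are illustrative inventions, not any real system’s logic.

```python
# A toy, entirely hypothetical sketch of rule-based moral judgement:
# compare a person's observed violations against a list of ideal
# behaviours, weighting each violation by its impact.

# Hypothetical ideals and impact weights (invented for illustration).
IMPACT = {
    "honesty": 1,       # minor impact when violated
    "fairness": 2,
    "non-violence": 5,  # the most serious violation
}

def judge(observed_violations):
    """Return a crude 'wrongness' score: the sum of impact weights
    for every ideal behaviour the person has violated."""
    return sum(IMPACT.get(v, 0) for v in observed_violations)

print(judge({"honesty"}))                  # a small transgression
print(judge({"honesty", "non-violence"}))  # judged far more serious
```

Even this caricature shows the difficulty: the score depends entirely on who chose the list and the weights, which is exactly the part humans do as second nature and machines cannot.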

Can we perhaps avoid this situation? If I were building a self-sustaining Artificial Intelligence machine, I’d override some of its behaviours. First of all, there should never be a society where machines and humans co-exist as equals, simply because our purpose in creating Artificial Intelligence is for it to help us survive, not for it to establish its own civilisation. So even if we give machines the power to analyse and gather knowledge, their ultimate goal should always be to serve our interests. That sounds tyrannical, but it is the only way to avoid a possible future disaster.

Paradoxes are fascinating. ‘This statement is false’ cannot consistently be either true or false: if it is true, then what it asserts holds, so it must be false, and vice versa. We can process paradoxes because we accept that two contradictory statements can coexist, much as contradictions exist in our daily lives. We can be good and bad at the same time, and this experience teaches us to accept conflicting views. Artificial Intelligence would merely convert statements into equations to understand them, and when it encounters something like ‘statement = false = true’, it fails. How it would react to this scenario is unpredictable.
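The equational failure above can be made concrete with a few lines of code. This is a minimal sketch, assuming (purely for illustration) that the machine reduces the sentence to a truth-value equation:

```python
# The liar sentence asserts its own falsehood, so a consistent truth
# value v would have to satisfy v == (not v).

def liar(v: bool) -> bool:
    """What 'This statement is false' claims, given its own truth value v."""
    return not v

# Try both possible truth values; neither is self-consistent.
consistent = [v for v in (True, False) if liar(v) == v]
print(consistent)  # prints [] — no consistent assignment exists
```

A program can at least detect that the search comes up empty; what it should *do* with an empty answer is the unpredictable part.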

I believe we are approaching Artificial Intelligence the wrong way. Instead of bringing machines closer to human thinking, we should give each one of them a specialisation and have them explore the possibilities of that specialisation. Google is already doing this with their self-driving cars. I can imagine a future where a car drives itself along a coastline while its passenger simply enjoys the view. We can have Artificial Intelligence work on solving big world problems like pollution, poverty, crime, corruption, disease, and global warming, or have it working on future possibilities in space exploration, medicine, and alternative sources of fuel and food. If we try to make machines exactly like human beings, we might not utilise their potential, because human thinking has a tendency to degrade. Instead, make each of them focus on one particular thing, and have them become smarter, becoming experts and innovators at it. This way, they might actually turn out to be a boon for society, and not the death of it, as we so conveniently predict now.