
If Your Robot Commits Murder, Should You Go To Prison?

Roll back roughly twenty years. What do you see? Mankind had just tapped into the power of the transistor, wiring together combinations of silicon transistors to perform basic binary operations.

We did not stop there, however. Mankind's best minds kept innovating: transistors grew smaller while delivering more computing power, and transistor counts doubled roughly every 18 months, validating Moore's law time and again. Beyond exponentially faster computing systems, the current pace of technological progress also teases us with fully autonomous machines, or robots. These machines will not tire, will not rest, and will be highly efficient. As they come to 'life', many robots are expected to replace humans at simple tasks. There is even speculation that they could take on much more complex roles – like replacing our police force.
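To put that doubling in perspective, here is a quick back-of-the-envelope sketch in Python (the 18-month period is the figure cited above; the rest is plain arithmetic):

```python
# Back-of-the-envelope Moore's law arithmetic: transistor counts doubling
# roughly every 18 months, compounded over a twenty-year span.
years = 20
doubling_period_years = 1.5  # ~18 months

doublings = years / doubling_period_years  # ~13.3 doublings
growth_factor = 2 ** doublings             # roughly 10,000x

print(f"{doublings:.1f} doublings -> about {growth_factor:,.0f}x more transistors")
```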

But what if autonomous robots end up on the other end of this spectrum? What if, instead of being law enforcement agents, robots end up committing crimes? What consequences follow? Should we hold a human individual responsible for a robot’s actions? How reliable should a robot be before we can trust it? 


This issue will have many legal implications. But there is also a more pressing, personal one: how will we accept robots in critical places such as hospitals? We simply cannot react the same way when a robot is performing critical surgery. Yes, robots are highly efficient, but efficiency alone isn't adequate in critical situations. Autonomous machines may make certain choices in the middle of surgery based on probabilities of success – but if a robot fails simply because the odds didn't play out, do you hold it responsible for negligence?
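To make that negligence question concrete, here is a minimal, hypothetical sketch (the procedure names and probabilities are invented, not taken from any real system) of the kind of decision rule such a machine might follow: it picks the statistically best option, yet a bad outcome can still occur.

```python
import random

# Hypothetical mid-surgery options with estimated probabilities of success.
# Both the option names and the numbers are invented for illustration.
options = {
    "procedure_A": 0.92,
    "procedure_B": 0.85,
}

# The machine picks the option with the best estimated odds of success.
choice = max(options, key=options.get)

# Even the statistically correct choice fails some of the time; an 8%
# failure rate still means occasional bad outcomes despite a sound decision.
outcome = "success" if random.random() < options[choice] else "failure"
print(f"Chose {choice} ({options[choice]:.0%} estimated success): {outcome}")
```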


In my opinion, we're pretty far off from achieving true consciousness in these mechanical beings. A robot is a machine at the end of the day – a machine that works on pre-fed instructions and rules. We can create robots that obey specific codes or laws, such as Asimov's Three Laws of Robotics, so that these autonomous machines never breach civil conduct. You might argue that we do not need jurisdiction over robots because they will never break this code of conduct. But as machines become increasingly complex and autonomous, they will gain the ability to interpret laws and rules as they like. An autonomous machine can think and evolve – and that is where most research on artificial intelligence is focused right now.
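As a loose illustration of what 'pre-fed instructions and rules' might look like, here is a hedged Python sketch (the rule names and the example action are hypothetical) of a hard-coded conduct filter; the weak point the paragraph describes is that an increasingly autonomous machine would be doing the interpreting itself.

```python
# A toy, Asimov-inspired conduct filter. The rule names and the proposed
# action are hypothetical; this only illustrates hard-coded rule checking.
RULES = [
    ("do_not_harm_humans", lambda action: not action.get("harms_human", False)),
    ("obey_owner", lambda action: not action.get("disobeys_owner", False)),
]

def is_permitted(action: dict) -> bool:
    """Allow an action only if every hard-coded rule approves it."""
    return all(check(action) for _, check in RULES)

proposed = {"name": "restrain_patient", "harms_human": True}
print(is_permitted(proposed))  # False: the harm flag trips the first rule

# The catch: someone has to decide whether 'harms_human' is True. A truly
# autonomous machine makes that interpretation itself, which is exactly
# where fixed rules stop being a guarantee of lawful behaviour.
```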


Preparing for the worst then – how is society supposed to react to robotic crimes?

Who's Responsible for Robotic Crimes?

In many cases, legal theory only suggests possible approaches to problems that will require further work to evaluate, but it does let us frame certain classes of ethical questions as well-defined legal problems. Once robots are a sizeable portion of human society, there will be an inevitable need to regulate their actions and the resulting consequences. For that, we're going to need a revamp of our existing laws.

Formulating laws specific to robots will need some deliberation and discussion, primarily because of the lack of precedent. We haven't had advanced autonomous devices interacting with society so far, which will make this law-making process an unfamiliar affair. Some may argue that this isn't entirely true: autonomous drones do exist, even if they're used by nations for military warfare.

In the case of autonomous drones – like those the USA uses to bomb areas in Afghanistan and Pakistan – it is the USA as an entity that is held responsible for the drone's actions. The same analogy can be extended to robots, wherein the manufacturer is ultimately held responsible for whatever action the device executes. This kind of setup is likely to be well accepted by the masses, because it would lead to safer robots from manufacturers who want to avoid legal trouble. We could apply civil law to all robots rather than criminal law. This is not difficult – we already have civil laws for faulty designs. Under this system, the legal owner or manufacturer is held responsible for any mishap if they fail to take proper care of the technology.

We cannot end it there, though: a faulty design can itself cause problems. If a problem is found with the design, the company must be held responsible.

Robots won't be the first non-human entities to commit crimes, if they ever do. Corporations as a whole have done it too, multiple times. A slight modification to this concept involves charging the robot itself with the crime, much as you charge an organization when it commits one. In a Big Think video, Jerry Kaplan of The Stanford Center for Legal Informatics explains that when organizations are charged with a crime, they face consequences that could put them out of business. Robots could be tried the same way. If we're going to be dealing with intelligent, autonomous robots, these machines will understand that breaking a code of conduct, or committing a crime, will render them unable to accomplish whatever goal they were designed for.

Robots as Quasi-persons?

We could instead treat robots as quasi-persons. Globally, our current legal systems make sure that the entity ultimately held responsible is always a human or a corporation. We are a long way from calling a robot a human, so quasi-personhood could be a possible middle ground. It is a simple concept, and minor children are a prime example of quasi-persons.

Minors do not enjoy full legal privileges. They cannot sign contracts and cannot enter into various legal arrangements; they can do so only through the actions of their parents or lawful guardians. The same reasoning could be applied to robots, treating them as quasi-agents. In that case, the individual who grants a robot permission to act on their behalf is legally responsible for all of its actions. If robots are commercially adopted by humans on a global scale, it would mean owners are the entities responsible for potential robot crimes.

This kind of legal setup may not win mass acceptance, however, primarily because it protects manufacturers and organizations while placing the burden of robotic conduct on owners. That could consequently lead to lower adoption rates for robots among the masses.


As for crime and punishment, it doesn't make sense to physically punish a robot. Even if a robot has a body, torture and punishment are pointless because robots have no emotions. Sure, punishment would render the robot unable to achieve its goal or task – but somehow, this method seems incomplete.

We cannot solve practical, ethical and meta-ethical problems by legal theory alone. There is still a long way to go, of course, and our laws will adapt as artificial intelligence itself evolves.
