There has been considerable discussion in recent years surrounding robotics, artificial intelligence, and their impact on both jobs and daily life. However, very little is actually known about the legal implications of these innovations. These implications will become increasingly important as robotics and AI take over more tasks traditionally performed by humans. For example, it is crucial to determine, within a legal framework, who is responsible for an accident caused by a robot. Scientific progress moves swiftly, putting pressure on lawmakers to quickly develop rules for these new frontiers.
A woman in Arizona was recently struck and killed while crossing the street by a self-driving car during a test drive of an Uber autonomous fleet. The accident justifiably caused considerable uproar among the public. The issue was exacerbated by the fact that a human test operator was in the car at the time and was still unable to stop the vehicle and avoid the accident. This incident is not unique, unfortunately, and others will likely happen in other contexts. This fact requires lawmakers to prepare for new and potentially very complex legal battles. “Within the current legal framework, responsibility for an accident can only be attributed to humans,” explained Marta Fasan, a PhD student in European comparative juridical studies at the University of Trento who was awarded a scholarship funded by the Italian Institute of Technology (IIT).
Fasan’s research focuses on the ethical and legal implications of robotics and AI, as well as how they should be regulated in the future. “We are unable to make a machine liable for an accident. It’s pointless – how do you punish a robot? Responsibility must therefore be given to the person who was ‘closest’ at the time – the one responsible for the machine at, or immediately before, the moment the accident occurred.” The real problem, however, is when robots act on their own. “The question is whether it is appropriate, or even useful, to develop new legal categories that assign responsibility to the machine. Real, very serious problems might arise when these robots become completely autonomous, able to make decisions independent of human beings. At the moment, robots of this kind exist but are only prototypes.”
The development of new laws regarding robotics and AI is only in its initial phase. As of now, there is no general regulation at the national or European level. However, that is quickly changing. “The English Parliament has issued a document addressing the various problems that the introduction of AI into day-to-day life may create. The same has been done in France, and the German Parliament has already adopted rules for the use of vehicles with automatic or semi-automatic driving capabilities. In these countries, the law states that responsibility lies with the person in the car, even if the ‘automatic pilot’ function is in use. In a 2017 resolution, the EU Parliament declared a set of ethical and legal principles that will guide the European Commission in adopting legislation to regulate the introduction of these new technologies in the market.” The challenge currently facing lawmakers is whether the concept of product liability can be applied in these cases. Lawmakers must determine the extent of responsibility that may be attributed to those who produced or designed the machine, as well as those who could have helped avoid accidents resulting from its use.
Then there is the issue of ethics in the world of AI. These machines are becoming increasingly intelligent as they are programmed to make decisions in a variety of situations. Fasan discusses an opinion expressed last year by the National Committee of Bioethics (CNB) that explored the potential ethical problems arising from the use of these new technologies. A simple example is that of a car with advanced autonomous functions that must decide what to do when pedestrians unexpectedly cross the road. Who must the car choose to save – the pedestrians or the passengers? Is it right to establish an ethical code for what to do in these situations? If so, what is the correct choice?
The stance taken so far by leading institutions is that humans must adopt an active role in this discussion, rather than serve as passive actors in the development of new technologies. “The assistance of a robotic instrument is convenient for the surgeon. For example, in the diagnostic phase, some intelligent systems are extremely efficient in diagnosing certain cases of cancer, as they are able to analyze large quantities of data and screen for diseases far faster and with greater accuracy than any human. But it is a complementary device to the human mind and intuition, and we should be wary of a world in which the doctor determines the final diagnosis, yet the machine takes the responsibility.”