Science continues to march forward, pushing the boundary of what is possible and raising significant ethical questions along the way. Many, including the Church, are trying to balance the rights of man against the marvels of science.

The realm of artificial intelligence forces scientists, sociologists, politicians and the religious to confront difficult questions: How far can the pursuit of scientific discovery advance before these intelligent machines “overtake” man? What does it mean for a machine to be intelligent, and how does it become that way? These are the questions legal scholars and experts are attempting to answer as they study artificial intelligence in all its complexity.

Last year, the European Commission drew up a draft code of ethics on the use of artificial intelligence. It consists of a series of guidelines for the creation of reliable artificial intelligence systems that respect the fundamental role of human beings. Brussels convened fifty-two international experts from private companies, universities and public institutions to draft a text that was published in December 2018. (The AI HLEG is a European Commission-backed working group made up of representatives from industry, academia and NGOs, formed as part of the Commission’s ongoing work to develop EU policy responses to the development, challenges and new opportunities posed by AI technologies.)

The code is extremely detailed, outlining how fundamental principles of European law should apply to the development and use of intelligent systems. The document calls for the “robustness and security of systems” while maintaining human primacy in the relationship with AI. It prioritizes human dignity and freedom: the autonomy of man should take precedence over that of the artificial. Humans should always retain the ability to supervise and control machines, safely limiting robots’ capacity to make autonomous decisions.

For many, this draft is concerning: several analysts believe it is limited by the fact that it is a set of open rules to be followed voluntarily. Governments, researchers and companies can follow these rules if they so desire, but they are not bound to do so.

“It is not a flaw; it is a declaration of intent by the European Union to develop a coordinated plan for the development of AI. It would be premature to have something binding,” explains Marta Fasan, a PhD student at the University of Trento who studies robotics, AI regulation, and their ethical and legal implications. “The potential of these new technologies and how to assimilate them into society is still uncertain. We first need to understand both how to use them and how the EU can then appropriately intervene as a regulator. That said, the will of the Member States at the national level must always be respected. The passage of a non-binding document is necessary. I am not against the adoption of a single binding act, but we must understand what kind of act the EU intends to adopt: the type of EU law issued matters, because the authority granted to the European institutions varies according to the treaties. For now, however, a simple declaration of intent is entirely appropriate. It would be too complex to intervene and regulate the use of these technologies at such an early phase.”

The document also explores privacy, a matter very dear to the EU. Many fear that the technology will be used to implement mass surveillance systems. For the EU, the difference between such surveillance and the identification of an individual is “crucial to the development of trustworthy AI.” For this reason, under the General Data Protection Regulation (GDPR), data processing is lawful only when valid legal requirements are met.

“We give up our privacy when we consent to the use of cookies while we surf the web, for example. It is rarely informed consent: we seldom read all the provisions on the use of the data. But the aspect of privacy will always be taken into consideration: the European approach, unlike the American one, is very sensitive to fundamental rights. The document reiterates the need to invest in the development of AI, above all for productive purposes, but only on the condition that the ethical and legal principles underlying the European system are respected.” Human beings remain central to the design; the role of robotics and AI should be purely complementary.

In terms of the legal component of Trustworthy AI, the guidelines do state that AI systems must respect all applicable laws and regulations, but they acknowledge that law can often lag behind the pace of technological change.

“There is still an important gap between AI and human intelligence. Robots with applied AI, although advanced, fail at certain activities, and that gap has not yet been filled. For this reason, it is still difficult to entrust machines with significant responsibility.” The rules that regulate the use of new technologies will have to evolve as quickly and consistently as the technologies themselves. “This is the problem between law and scientific progress: law has a duty to guarantee a minimum level of certainty, but science tends to race ahead, especially in recent years. We must not arrive at legislation so restrictive that it discourages research.” Is there a risk of enacting retroactive laws, contravening a cornerstone of the law? “It is not easy, but cooperation between lawyers, AI scholars and society, each understanding the others’ fears, might be a solution. The legal community has taken up the challenge of tackling issues as they arise, in an attempt to stay ahead of the curve.”