Can machines replace judges?
A philosophical approach by Philippos Kourakis*
There are various ways in which technology could change the work of those involved in legislation and law enforcement. In this text we will focus on the question of machines taking over the judiciary, and on whether that could be reconciled with Ronald Dworkin’s right answer thesis.
Using a specific algorithm
Lawyers Casey and Niblett [1] describe a hypothetical future in which the information and predictions we can derive from technology will be so precise that the judge’s role could be assigned to machines. The process, as they describe it, is the following: in some US states, judges already use an algorithm that predicts the likelihood that an accused person will fail to appear before the court. Although this algorithm has not replaced judges, it is reasonable to assume that the more effective it becomes, the more judges will rely on it, until they ultimately depend on it entirely.
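To make the scenario more concrete, the following is a minimal sketch of what a risk-scoring tool of this kind could look like. Everything in it – the feature names, the weights, the threshold – is invented for illustration; real pretrial tools are proprietary and considerably more complex.

```python
# Hypothetical sketch of a pretrial "failure to appear" risk score.
# Feature names, weights and the threshold are invented for illustration.
import math

WEIGHTS = {
    "prior_missed_hearings": 0.9,
    "pending_charges": 0.4,
    "years_at_current_address": -0.2,
    "employed": -0.5,
}
BIAS = -1.0
THRESHOLD = 0.5  # above this, the tool flags the defendant as high risk


def risk_of_non_appearance(defendant: dict) -> float:
    """Return a probability-like score via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * defendant.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))


defendant = {"prior_missed_hearings": 2, "pending_charges": 1,
             "years_at_current_address": 3, "employed": 1}
score = risk_of_non_appearance(defendant)
print(f"risk score: {score:.2f}", "-> flag" if score > THRESHOLD else "-> no flag")
```

The point of the sketch is simply that such a tool reduces the decision to a weighted sum of recorded features, which is precisely what makes reliance on it philosophically interesting.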
This (hypothetical) scenario raises the question of whether such a move would be in harmony with the very nature of law. To answer it, we turn to Dworkin’s work, and in particular to his right answer thesis.
The right answer thesis and its possible misinterpretation
Early in his career, Dworkin shook the dominant philosophical currents of his time by arguing that there is always a right answer, even in the most controversial and difficult cases [2]. At first sight, this position seems to support those who advocate replacing judges with machines, if the right answer can plausibly emerge from a mechanical process of the highest precision. This reading, however, is a misinterpretation of Dworkin’s position.
Dworkin himself had anticipated such a misinterpretation. In Law’s Empire (1986), he wrote [3]:
“I have not devised an algorithm for the courtroom. No computer wizard could derive from my arguments a program which, after gathering all the facts of the case and all the texts of previous statutes and judgments, would deliver a verdict that everyone would accept.”
Dworkin’s statement stems from his belief that adjudication is a fundamentally interpretative and evaluative exercise and, as such, one based on principles. The judge can find the right answer in each case, but only by finding the best possible interpretation.
The best interpretation is the one that can legitimately justify the coercion which the law imposes on the members of the community. In this process, Dworkin argues, the judge tries to preserve the integrity of the law by interpreting it in its best light, bearing in mind that law is the creation of a community whose unifying element is the attempt to justify state coercion.
Dworkin believed that each case has a right answer, yet every case is difficult, and finding that answer is a demanding exercise of political morality. So, despite the formalist texture of his belief in a correct answer to every case, he recognized that the legal system, as an organic whole, is constantly changing, while its individual elements remain as consistent with one another as possible.
Will technology replace judges?
The question that arises from the above is whether the pace and direction of technological development will lead to machines effectively replacing judges, finding the right answer even in difficult cases. Machine Learning can indeed adjust a set of rules so that a more general goal is served, which may well be ethically welcome. From this perspective, Machine Learning is dynamic and structured with continuity. If it were used to decide real cases, therefore, it would do so with a kind of integrity, but one mechanical in nature.
Nonetheless, the goals being served would remain fixed. In Dworkin’s view, this static conception of political morality, as the basis of the legal system, would detract from legality. For the philosopher, integrity means that every part of the legal system is open to revision, since argumentative disagreement reaches the very foundations of legality, touching basic questions such as whether and how citizens should be taxed, or whether there should be policies of positive discrimination [4]. Following this reasoning, legislative policies rest on principles that emerge through the interpretation of difficult cases, a process which aims to consolidate past decisions in a way that justifies state coercion in the eyes of the interpretive community.
The conclusion
To sum up, the prospect held out by Machine Learning could hardly be in harmony with legality as Dworkin understood it. Machine Learning does not work on principles; it operates on statistical relationships that do not reflect moral principles. Its operation would therefore be undermined to the extent that a system such as the legal one would require it to act on fundamentally moral grounds.
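To illustrate what “statistical relationships” means here, consider a minimal, purely hypothetical sketch: a classifier trained on the outcomes of past cases learns only which recorded features correlated with past verdicts. The toy data and the use of scikit-learn’s logistic regression are assumptions made for the sake of the example, not a description of any real system.

```python
# Minimal, hypothetical illustration: a model trained on past case outcomes
# learns correlations in the data, not legal or moral principles.
from sklearn.linear_model import LogisticRegression

# Invented toy data: [claim_amount_in_thousands, defendant_is_repeat_offender]
X = [[10, 0], [50, 1], [5, 0], [80, 1], [20, 1], [15, 0]]
y = [0, 1, 0, 1, 1, 0]  # 1 = past verdict went against the defendant

model = LogisticRegression().fit(X, y)

# The model predicts whatever the historical pattern suggests, including
# any bias baked into the past decisions it was trained on.
print(model.predict([[60, 1]]))  # -> likely [1], purely by correlation
```

Whatever principles (or prejudices) shaped the past verdicts are simply reproduced as patterns; the model has no way to interpret them, let alone revise them.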
*Philippos Kourakis is a lawyer specializing in Philosophy of Law and Criminology. He holds a Bachelor’s degree from the Athens Law School, a Master’s in Criminology from the University of Oxford, and a Master’s in Philosophy of Law from the National and Kapodistrian University of Athens.
[1] Anthony J. Casey and Anthony Niblett, “Self-Driving Laws” (June 5, 2016). Available at SSRN: https://ssrn.com/abstract=2804674
[2] Ronald Dworkin, Taking Rights Seriously (London: Duckworth, 1978), chapter 4.
[3] Ronald Dworkin, Law’s Empire (Cambridge, MA: Harvard University Press, 1986), p. 412.
[4] Ibid., p. 73.
Homo Digitalis receives two scholarships for free participation in the world’s leading conference on privacy and personal data protection
Our organization has the pleasure and honour of having received two scholarships from the programme “EPIC Public Voice Scholarships for NGOs” to participate in the 40th “International Conference of Data Protection and Privacy Commissioners” in Brussels (22-26 October).
Only 20 organizations worldwide were awarded these scholarships, which are provided by EPIC, a well-respected research centre headquartered in Washington, D.C., USA, whose work focuses on the protection of privacy, freedom of expression and democratic values in the information society.
The conference is organized by the European Data Protection Supervisor (EDPS) and is a highly respected forum on privacy and personal data protection issues.
By taking part, we will be able to attend talks and discussions on a range of relevant issues and exchange ideas with digital rights organizations from all over the world, academics, representatives of EU and Council of Europe bodies, government representatives from various countries, members of supervisory authorities, and company representatives.
The schedule of the event can be found here.
Stay tuned!
Researcher’s Night concluded successfully
On Friday 5 October, Homo Digitalis had the pleasure of participating in Researcher’s Night at the premises of the National Centre for Scientific Research “Demokritos”.
Alongside nine of the most prominent Greek research institutes and many other participants, Homo Digitalis presented its work.
Hundreds of people got to know their digital self and got acquainted with their digital rights.
We warmly thank all the visitors of the Researcher’s Night for their interest and their passion.
We particularly thank “Athena” Research and Innovation Centre for hosting us.
We will be back soon with more action!
The team of Homo Digitalis at Researcher’s Night
Enrich your knowledge and learn about your digital rights through educational quizzes
Questions and knowledge go hand in hand, since knowledge grows out of asking questions. As Homo Digitalis has already informed you, we are in the first week of the European Cyber Security Month. This article will guide you through some very interesting material that will enrich your knowledge through questions, educational questionnaires and quizzes.
Every time you use the Internet, a new digital world unfolds before you. Through a simple Internet connection, you have access to an ocean of information that you can use to stay informed, to learn, to communicate and to have fun.
However, this world is not just a world of opportunities, but also one full of challenges and risks. Awareness-raising is the key to taking full advantage of the opportunities on offer.
Our journey to awareness begins with the first educational questionnaire on our list, which comes from the Hellenic Authority for Communication Security and Privacy (ADAE). ADAE is one of the independent authorities provided for by the Greek Constitution. Its purpose is to protect the confidentiality of correspondence and of communication by any means. ADAE has recently posted a training questionnaire on its website, with questions and answers on a number of important issues.
If you have questions about the dangers of malicious software being installed on your computer or mobile device, if you suspect that your telephone or Internet communications are being monitored, or if you want to know the right way to respond when you receive threatening or abusive calls, the ADAE questionnaire will give you the right guidance.
The next stop on our October journey to awareness is the variety of knowledge and awareness quizzes created by the Hellenic Safer Internet Centre team within the SaferInternet4Kids campaign, which are specially designed for children and teachers.
The Hellenic Safer Internet Centre operates under the Foundation for Research and Technology – Hellas (FORTH) and, through the SaferInternet4kids campaign website, raises awareness among children, parents and teachers regarding the safe use of the Internet and social networking applications. If you want to learn about your personal data, cyberbullying, excessive Internet use and more, you should definitely visit this site.
The third and last stop of our current trip is the Network and Information Security quiz prepared by the European Union Agency for Network and Information Security (ENISA). This quiz is available in all languages of the EU Member States and is divided into two themes: privacy and general security. Start the quiz here and get detailed answers to questions such as: Is incognito mode private? How can botnets affect you? What concerns do cookies raise, and what is a VPN?
Fortunately, over the last few months the Homo Digitalis team has prepared a Safe Navigation Guide for you, containing basic information about your device settings and your online behaviour in general. If you have not yet taken the time to read this guide, do not waste any more time. Getting properly informed is only a few clicks away.
The European Cyber Security Month is here
Alongside the hack of 50 million Facebook accounts
You might have already read about the recent cyber-attack on Facebook and the fact that the intruders gained access to more than 50 million accounts.
A tremendous amount of personal data, such as conversations, photos, and important information about the lives of these users and their relationships with others, is now in the hands of the hackers. This attack clearly shows how vulnerable we all are. Even Internet giants such as Facebook are not always able to protect their users.
This attack reminds us that expensive cyber-security systems are not always enough. Regardless of the security measures in place, there will always be a team of talented hackers who may be able to exploit a human error or a weakness in the installed systems and, after persistent effort, break in.
The protection of digital rights, such as privacy, the protection of personal data and the freedom of online expression and information, is intrinsically linked to the security of computer systems and the adoption of the technical and organizational measures that guarantee the requisite protection.
For this reason, the European Union Agency for Network and Information Security (ENISA), together with the European Commission (DG CONNECT) and other partners, devotes October to cyber security every year. For the sixth time, the “European Cyber Security Month” campaign is here to draw the attention of individuals and organizations to the importance of information security in cyberspace.
Through events taking place in various EU Member States, as well as “digital” meetings which you can follow from your computer, this campaign aspires to promote safer use of the Internet and to raise public interest in cyber security.
If you want to find out about the various events taking place, you can have a look at the map of the events here.
Homo Digitalis at the Researcher’s Night
The Researcher’s Night is celebrated in over 300 European cities with the active involvement of the research and academic community. It is a European Commission initiative which aims to highlight the role of researchers and their scientific and social work, with an emphasis on their human side.
Homo Digitalis is pleased to participate in the Researcher’s Night alongside nine of the largest research institutes in Greece. Through interactive demonstrations, Homo Digitalis will introduce you to your digital self and inform you about the digital rights which we all enjoy when we use the Internet.
The Researcher’s Night will be held on Friday, October 5, at the National Centre for Scientific Research “Demokritos”.
Address: Patriarchou Grigoriou IV and Neapolis 27, 15 341, Agia Paraskevi, Athens.
Entrance is free of charge.
Come and join us to become part of the greatest celebration of Research and Science in Greece!
Digital heritage: Digital data as a component of the heritage of a deceased person
By Angelina Vlachou*
In 2018, we all use the Internet quite extensively, and through this use a large quantity of digital data concerning us is gathered. These data stem from profiles created on social media, from our e-mail accounts, from websites a person may use for his/her business, and so on.
When the person these data concern dies, the data fall under the notion of “digital heritage”. The discussion around digital heritage is a large one.
At this initial stage, it must be noted that both national and EU legal provisions on the protection of personal data – including the recent General Data Protection Regulation (GDPR), see Recital 27 – concern the personal data of living natural persons.
In particular, the GDPR leaves it to the discretion of Member States to decide whether or not they will extend protection to deceased persons. Some countries, such as Denmark, have already provided for such protection.
Thus arises the legal (and practical) question of whether such protection can be offered, as well as the question of how to handle, in legal terms, the large volume of digital data which constitutes the digital heritage of a deceased person.
The best-known social media platforms have already had to confront this issue. More specifically, they have faced the question of what will happen to the accounts of users who are no longer alive.
So far there are two possibilities, which differ slightly depending on the terms and conditions of each company. On the one hand, there is the possibility of deleting the account entirely; this option is usually available only to close relatives of the deceased, and it often requires submitting an electronic form and presenting some documentary evidence.
On the other hand, platforms like Facebook, Twitter and Instagram also offer the possibility of converting an account into a memorialized account. In this option, the deceased user’s profile is preserved, while some sensitive personal data, such as address and contact information, are removed. Additionally, Facebook in particular deactivates the login credentials for these accounts. As a result, anyone who wishes to log in to such an account to access the deceased person’s data will face the message: “We cannot share the login information for a memorialized account. Logging into another person’s account violates our policy in any case.”
In light of all this, one may easily wonder what happens to the large volume of digital data of a deceased person, gathered over many years, particularly where those closest to him/her wish to gain access to it, as they do with his/her material and intellectual property. The German Federal Court of Justice, seated in Karlsruhe, had to answer this question on 12 July 2018. Its decision has no direct effect on the Greek legal order, but it constitutes one of the first judicial approaches to the issue.
The dispute between the parents of a deceased 15-year-old girl and Facebook was brought before the Court, which had to decide in the final instance. The dispute arose because, after the death of their daughter, the parents – who knew her password, since they were the ones who had created the account on the condition that they would also have access to it – wished to read her Messenger conversations in order to find out whether or not she had committed suicide. The account had been converted into a memorialized account – by whom is unknown – and so they were unable to do so. The dispute therefore went to court.
At first instance, the family was vindicated. The court of first instance accepted that the legal heirs of a deceased person inherit his/her digital data along with the rest of his/her property. This ruling was overturned by the court of second instance, which held that lifting the prohibition on access to the deceased girl’s digital data would constitute an excessive interference with the right to confidentiality of communications of the living users who had been communicating with her.
Finally, the Federal Court sided with the court of first instance, ruling that there is no justification for treating the digital data of a deceased person differently from the rest of his/her property, which passes to his/her legal heirs.
The Court decided that the underage girl had concluded a contract of use with Facebook which, upon her death, passed to her parents as her heirs. Her digital data were therefore made available to the parents, and the digital heritage was assimilated to the “analogue” one. In other words, it was decided that the girl’s social media account is one of the assets which passed to her relatives.
Nonetheless, this decision is only the beginning of the discussion and the legal “landscape” remains vague.
On the one hand, the assimilation of digital heritage to the analogue appears convincing, particularly if we consider that after death there is no prohibition on access to a person’s written correspondence. On the other hand, the particularity of digital data cannot be dismissed: in an age when technology progresses at a great pace, passing a large volume of digital files to persons other than the original recipient puts many rights of a vast number of people at excessive risk (for example, the right to confidentiality of communications, the right to the protection of personal data, and the right to privacy). The number of persons threatened is far greater than the number involved in any “traditional” communication.
These persons are, of course, the living “followers-friends” of the deceased. In the absence of clear legal provisions (at national and European level), it is problematic to ask the judge to decide in each case whether or not to permit the transfer of digital data. It is not an exaggeration to fear that decisions with fundamentally different outcomes will be issued, since these platforms have users all around the globe, in States with entirely different legislation and legal systems.
The only way forward seems to be the uniform – as far as possible – regulation of the situation, as is always suggested for the problems which arise in the digital age. The only thing that is certain is that the conversation has just begun.
*Angelina Vlachou is a lawyer. She holds a Master’s in Public Law and Political Science from the Aristotle University of Thessaloniki, where she is also a PhD candidate at the Law School.
Homo Digitalis made an oral statement to the UN
Konstantinos Kakavoulis of Homo Digitalis represented the organization at the 39th session of the United Nations Human Rights Council (Geneva, 10-28 September 2018).
In a joint statement with the International Organization for the Elimination of All Forms of Racial Discrimination (EAFORD) and the Geneva International Centre for Justice (GICJ), Homo Digitalis spoke under the General Debate on Item 2 on the right to privacy in the digital age.
Full text of the statement:
Mr. President,
We would like to thank the High Commissioner for her oral update and wish her luck and success in her endeavour for the protection of human rights in a constantly changing world.
EAFORD, Geneva International Centre for Justice and Homo Digitalis would like to focus particularly on the High Commissioner’s Report on the Right to Privacy in the Digital Age.
The Internet reforms our society as a whole, but also the human existence in itself, by creating a new, digital representation of ourselves; a digital personality, which is not necessarily identical to our real personality, but enjoys the same freedoms and rights.
To this end, the High Commissioner’s Report is more acute than ever.
We wish to underline that ensuring the protection of individuals against unlawful or arbitrary interference from surveillance measures requires that effective national legal frameworks are in place.
However in many jurisdictions, national legislation is non-existent, ambiguous or outdated.
Even under the EU’s GDPR, a milestone in the protection of the right to privacy in the digital age, governments still have ample scope to claim that national security justifies attacks on privacy.
We urge all States, civil society and stakeholders to work towards giving individuals knowledge and tools necessary to look after themselves.
We should always remember that the only non-legal instrument that is powerful enough to provoke change is human conscience.
Thank you.
Internet "records": The benefits and risks of profile creation
By Ioannis Ntokos*
The physical and mental characteristics of which we all consist describe not only our appearance, but also our character, our temperament, our behaviour, and the way we react to people and to certain situations. Thus, a person can be described as insightful, athletic, or consistent in meeting his/her obligations, depending on what others observe about him/her.
Our digital self
With the expansion of the Internet and contemporary technological achievements, these characterizations are no longer confined to the traditional social environment (such as a person’s school or family environment).
They also extend to the Internet, which, through devices such as smartphones, facilitates the collection of a person’s data and assigns him/her a corresponding characterization. Profile creation thus goes beyond the real world and enters the electronic environments we all use, influencing our lives in ways we may not understand.
The most common examples
An example of this “invisible” influence is the purchase of plane tickets online. E-shops selling such tickets create a profile of the user which includes, among other things, the brand of computer used for the purchase. The use of an expensive computer (such as an Apple machine) is automatically recorded, and the user may be charged a higher price for the ticket, on the reasoning that someone who owns an expensive computer can afford a more expensive ticket.
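As a purely illustrative sketch, device-based price adjustment could be as simple as inspecting the browser’s User-Agent string. The markup values and the exact logic below are invented for illustration; real ticketing sites do not publish how they price.

```python
# Hypothetical sketch of device-based price adjustment.
# The markup and the detection logic are assumptions for illustration only.

BASE_PRICE = 120.0  # euros


def quoted_price(user_agent: str) -> float:
    """Quote a higher fare when the User-Agent suggests an expensive device."""
    premium_markers = ("Macintosh", "iPhone", "iPad")
    markup = 1.15 if any(m in user_agent for m in premium_markers) else 1.0
    return round(BASE_PRICE * markup, 2)


print(quoted_price("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)"))  # 138.0
print(quoted_price("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))        # 120.0
```

The user never sees this logic; he/she simply receives a different price, which is exactly what makes the influence “invisible”.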
Profile creation in the digital world usually occurs in the following way: an electronic device, through appropriate software and an Internet connection, collects personal data about its user (with, but sometimes without, his/her consent), which are stored on various media and archived according to their content.
Youtube recommendations are a classic example. These recommendations are based on the videos the user watches and his/her behaviour on the Internet more generally. Youtube automatically records every video the user watches and builds the corresponding profile. Suppose, for instance, that Ioannis has watched 70% videos about dogs and 30% videos about football. Based on this profile, the service promotes content that matches it: it would show Ioannis more dog videos and fewer football videos.
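A minimal sketch of how such profile-based weighting could work is shown below. The 70%/30% split mirrors the example above; the selection logic itself is invented for illustration and is far simpler than a real recommendation system.

```python
# Hypothetical sketch of profile-based recommendation weighting.
# The watch history mirrors the 70% dogs / 30% football example above.
import random
from collections import Counter

watch_history = ["dogs"] * 7 + ["football"] * 3  # what the user has watched


def build_profile(history: list[str]) -> dict[str, float]:
    """Turn a watch history into per-topic proportions."""
    counts = Counter(history)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}


def recommend(profile: dict[str, float], n: int = 10) -> list[str]:
    """Sample recommendation topics in proportion to the user's profile."""
    topics = list(profile)
    weights = [profile[t] for t in topics]
    return random.choices(topics, weights=weights, k=n)


profile = build_profile(watch_history)  # {'dogs': 0.7, 'football': 0.3}
print(recommend(profile))               # mostly 'dogs', some 'football'
```

The profile is nothing more than a set of proportions derived from recorded behaviour, and the recommendations simply feed those proportions back to the user.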
The one side of the moon
The creation of such profiles can have positive as well as negative consequences when it is used to classify someone according to his/her characteristics (such as discretion, reliability, vitality or musical preferences). One benefit of this technique is that the resulting profile can be used to offer the person products and services that genuinely match him/her. In addition, the profile makes it easier to use the platforms and services which employ it: the user has an electronic identity which helps him/her browse the Internet more easily and be served on the basis of it. This is what happens with e-shops such as Amazon, where the user receives notifications from the system about products deemed to match his/her profile. In other words, the user is identified on the basis of the information in his/her profile.
The other side of the moon
However, the creation of a person’s profile may also entail risks. A profile is only as accurate as the data taken into account in its creation and the algorithm used to process them. Inaccurate data and inefficient algorithms will produce a profile which does not match reality.
The consequences
The consequences of all the above can be very significant for a person. An imprecise profile can lead to the provision and promotion of unwanted services, products or content; it can also play an important role in decisions made about the person. Consider the consequences of a bank refusing a loan a person has applied for, where the application was processed electronically on the basis of imprecise data making up the applicant’s profile.
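The following minimal sketch shows how an error in profile data can flow straight into an automated decision. The fields, scores and threshold are entirely hypothetical and are not taken from any real bank; the point is only that the same person is approved or rejected depending on the accuracy of the profile.

```python
# Hypothetical sketch: an automated loan decision driven by profile data.
# Fields and thresholds are invented; an error in the profile (e.g. wrongly
# recorded missed payments) is enough to flip the outcome.


def automated_loan_decision(profile: dict) -> str:
    score = 0
    score += 2 if profile.get("stable_income") else 0
    score -= 3 * profile.get("missed_payments", 0)
    score += 1 if profile.get("years_with_bank", 0) >= 5 else 0
    return "approve" if score >= 2 else "reject"


accurate = {"stable_income": True, "missed_payments": 0, "years_with_bank": 6}
inaccurate = {"stable_income": True, "missed_payments": 2, "years_with_bank": 6}

print(automated_loan_decision(accurate))    # approve
print(automated_loan_decision(inaccurate))  # reject -- same person, wrong data
```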
Of course, the creation of a profile and any subsequent automated decision-making do not automatically amount to discriminatory treatment by the profile’s creator. Whether discrimination occurs depends on the precision of the profile created through the collection and processing of a person’s data. That precision is not guaranteed; as we will see in a future article, the correct and accurate processing of personal data, and the creation of a pertinent profile by computer systems, are only as strong as the impartiality and integrity of the algorithms used.
To sum up, the main threats arising from such practices are two: first, the possibility of automated decision-making about the person involved, and second, the potential inaccuracy of a profile resulting from (a) the collection of irrelevant data or (b) the use of biased algorithms and calculation methods. In any case, awareness of the existence and nature of such practices constitutes a first line of defence against their negative consequences.
If the topic of this article interests you, you can learn more by reading an academic paper here.
* Ioannis Ntokos holds an LL.M. in Law & Technology. He is based in the Netherlands, works in personal data protection, and is an advocate of digital rights in general.