Written by Theodora Firingou *
While using various online services, one cannot help but notice that we are constantly bombarded with suggestions, not only for content we may like but also for the products and services of multiple advertisers. YouTube and Netflix, for instance, offer recommended videos and movies; Spotify even provides the Spotify Radar to help you discover new music; Facebook and others advertise products based on your previous searches. The common denominator in these providers’ use of algorithmic recommendation systems is the desire to “please” the user by enhancing his or her personalised online experience.
However, is this attempt to personalise your interaction with the service really that innocent? How does an algorithm decide what you would like to watch or listen to?
Most of us have probably experienced meeting someone for the first time and then, all of a sudden, receiving a suggestion to befriend that person on Facebook. Or have you ever noticed that a product you were discussing on the Messenger app is afterwards advertised to you in your Facebook newsfeed? Doesn’t that feel creepy? And what happens if things get serious? A user could, for instance, be discriminated against when offered a recommended job, or could end up stranded in a filter bubble that narrows his or her worldview and choices.
Baffled by this series of unanswered questions, I was motivated to carry out research on the right to explanation. Five months later, after conducting empirical research by exercising my right to explanation against major online service providers, I received an award for my thesis on algorithmic accountability and the right to explanation. Here I would like to share some of my findings.
Problem statement
Due to the complex and opaque nature of algorithmic systems, their extensive use in automated decision-making and profiling has raised questions of transparency and accountability. Algorithms are perceived as ‘black boxes’ and thus hinder any attempt to assess the decision-making process and its results.
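To make the ‘black box’ problem concrete, consider the minimal sketch below. All names, signals and weights are hypothetical; real recommenders are trained on vast behavioural datasets, but the structural point is the same: the user only ever sees the outcome, never the signals and weights that produced it.

```python
# Minimal sketch (hypothetical names and weights) of why even a tiny
# recommender behaves like a 'black box': the output is a single score,
# and nothing in it tells the user which signals drove the decision.

user_profile = {"watched_drama": 12, "watched_comedy": 3, "clicked_ads": 7}

# Learned weights -- in a real system these come from training on millions
# of users and are never published, which is the core of the opacity problem.
weights = {"watched_drama": 0.8, "watched_comedy": 0.1, "clicked_ads": 0.5}

def score(item_tags: dict) -> float:
    """Opaque relevance score: a weighted sum the user never sees."""
    return sum(weights[k] * user_profile[k] * item_tags.get(k, 0.0)
               for k in weights)

candidate = {"watched_drama": 1.0, "clicked_ads": 0.3}
print(f"recommendation score: {score(candidate):.2f}")  # only the result surfaces
```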
At the same time, European data protection legislation, and predominantly the General Data Protection Regulation (GDPR), demands transparency and accountability on the part of the data controller and lays down safeguards which controllers must respect. One of these safeguards is the right to explanation, whose existence and scope have, however, sparked an extensive academic debate.
In this context, whether the implementation of the right to explanation in practice reflects its underlying scope became the main research question of my master’s thesis. To that end, the following methodology was applied: firstly, in order to identify the right’s scope, I focused on mapping and analysing the European legislative framework and the relevant legal literature. Secondly, emphasis was placed on empirical research into the implementation of the right to explanation. In particular, the right was exercised against five different online service providers, which were asked how their recommendation systems for personalised content and targeted advertisements work.
The legal framework
Despite the lack of a provision explicitly labelled ‘right to explanation’ in either the Data Protection Directive or the GDPR, the right derives from Article 22 and Recital 71 GDPR on the safeguards against automated decision-making, Articles 13(2)(f) and 14(2)(g) GDPR regarding controllers’ notification duties and, lastly, Article 15(1)(h) GDPR as well as Article 12 of Directive 95/46 on the right of access. In particular, according to Article 22 GDPR, data controllers ‘shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.’
Moreover, as laid down in Articles 13 to 15 GDPR, the data subject shall have access to his or her personal data and to information about ‘the existence of automated decision-making, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.’ Additionally, according to Article 12 of the Data Protection Directive, controllers must provide data subjects with ‘knowledge of the logic involved’ in any automated decision-making.
Scope and applicability of the right to explanation
Through the analysis of the relevant legal provisions, the academic debate around them and the rebuttal of the arguments against the right to explanation, it was possible to identify the right’s scope and applicability. It was concluded that a systemic and teleological reading of the provisions, especially in light of the GDPR’s spirit of empowering individuals’ data protection, confirms the existence of the right to explanation.
In particular, the analysis showed that the right to explanation entails the provision of meaningful information about the logic involved to the data subject, where the meaningfulness of the information provided must be interpreted flexibly. Thus, the information may refer to either the system’s functionality or a specific decision, and it can constitute either an ex ante or an ex post explanation in relation to the time at which the decision was reached. Moreover, in order to assess whether an explanation is meaningful, the information provided must be examined in light of its functional value (especially with regard to enabling the exercise of the data subject’s rights and freedoms). Additionally, the explanation should lead to individualised transparency, in the sense of a personalised understanding of information of meaningful depth. The information should also be intelligible and provided in clear and plain language, so that a regular data subject (i.e. usually a user with no expertise in technology-related matters) is able to fully comprehend it.
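What might such an individualised, ex post explanation look like in practice? The sketch below, which is hypothetical and builds on the toy recommender above, decomposes the score into per-signal contributions and phrases them in plain language. It is one possible reading of ‘meaningful information about the logic involved’, not a legally mandated format.

```python
# Hypothetical ex post explanation for the toy recommender above: break the
# opaque score into per-signal contributions and phrase each one in plain
# language, as the 'meaningful information' standard would suggest.

contributions = {
    "you watched 12 drama titles recently": 9.60,   # weight * profile * tag
    "you clicked 7 ads for similar items": 1.05,
}

total = sum(contributions.values())
print(f"This title was recommended (score {total:.2f}) because:")
for reason, share in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  - {reason} (about {100 * share / total:.0f}% of the score)")
```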
Regarding the right’s applicability criteria, there must be automated decision-making (including profiling) that results in a decision reached solely by automated means, without any meaningful human intervention. Furthermore, the automated decision must have legal or similarly significant effects, which should, however, be interpreted broadly, covering cases where the data subject’s rights and freedoms are endangered and even targeted advertising based on profiling.
Lastly, the right to explanation must be respected regardless of trade secrets and IP rights. This means that they cannot serve as a justification for refusing to provide information, and that data protection rights outweigh trade secrecy and IP rights.
Compliance issues revealed
Taking into consideration the scope and applicability of the right to explanation, my empirical research focused on examining whether the right fulfils this scope when exercised in practice. In particular, I filed a number of explanation requests regarding recommended content and targeted advertisements with five online service providers, namely Facebook, YouTube, LinkedIn, Spotify and Netflix.
Unsurprisingly, the analysis of the empirical research’s results revealed a great number of compliance issues and a gap between theory and practice.
Filing explanation requests and obtaining meaningful information about the logic involved in the algorithmic systems responsible for automated decision-making proved extremely challenging; it required legal literacy on the matter, organisation, persistence and patience. In other words, it is doubtful whether a regular data subject would ever manage to effectively exercise his or her rights in the face of such hurdles.
Although privacy policies were easily found, they were often problematic in terms of completeness and clarity. Identifying the right communication channels to contact the controllers was even more troublesome.
However, the most worrying findings resulted from the correspondence with the controllers. Various obstacles, such as organisational and administrative avoidance strategies, lack of awareness, ignorance and refusal to address the requests, complicated the procedure. Moreover, the explanations provided were not satisfactory: the information given was generic, fragmentary and misleading, and thus could not possibly fulfil the scope and rationale of the right to explanation, since it could not be regarded as meaningful. Some controllers refused to provide a full explanation, justifying their position either on trade secrecy grounds or by arguing that Article 22 GDPR, and consequently Article 15(1)(h), do not apply because the automated processing does not produce legal or similarly significant effects. However, none of these arguments constitutes a valid ground on which data controllers may rely in order to avoid providing an explanation to the data subject.
To sum up, the findings of the empirical research on a limited number of widely used online service providers indicated that, when exercised against data controllers in practice, the right to explanation does not fulfil its scope under European data protection legislation. Most worryingly, it was confirmed that data subjects’ rights are being significantly disrespected in the online environment. After all, perhaps we should think twice before celebrating this generously ‘enhanced personalised experience’, since the legally provided safeguards meant to protect us against malicious processing of our personal data, especially during automated decision-making and profiling, do not seem to be implemented by major controllers. It is thus doubtful that we can rely on transparent and accountable processing of our data.
* Theodora Firingou is a lawyer holding an LL.M. in Penal Law (University of Hamburg) and an LL.M. in IP/ICT Law (KUL). She focuses on data protection and privacy law, mainly on the issues arising from the use of new technologies such as Artificial Intelligence (‘AI’).