Open letter on the implementation of the provisions of the new Directive on Copyright

Today, on 20 May 2019, EDRi and 41 other organisations, Homo Digitalis included, sent an open letter to the European Commission requesting their active inclusion, through the upcoming stakeholder dialogue, in the implementation process of the newly adopted Copyright Directive.

Specifically, as provided for in Article 17 para. 10 of the new Copyright Directive, as of 6 June 2019 the Commission, in cooperation with the Member States, shall organise stakeholder dialogues to discuss best practices for cooperation between online content-sharing service providers and rightholders.

With today’s open letter, the signatory organisations express their wish to be actively involved in this process, with the aim of establishing a working group consisting of representatives of organisations that work to protect and promote human rights in the current digital era. Given the provisions of Article 17, and the challenges it raises for the protection of privacy and of freedom of expression and information on the Internet, the participation of these organisations in the Commission’s dialogues is considered necessary.

You can learn more about the letter in the relevant EDRi article and read the full text here.


Homo Digitalis at the Greek-French School of Piraeus "Saint Paul"

Today, Homo Digitalis was hosted by the Greek-French School of Piraeus “Saint Paul” and spoke to the students of the Secondary and High school about digital rights.

Specifically, Homo Digitalis gave the students of the Secondary school a presentation on cyberbullying and its consequences, and suggested ways to protect themselves. In addition, Homo Digitalis talked to the students of the 1st and 2nd grade of the High school about digital footprints and their rights regarding their personal data.

Mr. Panagiotis Gialis, member of Homo Digitalis, explains the consequences of cyberbullying to the students of the Secondary school

The following members of Homo Digitalis worked on the presentation: Mrs. Mary Mouzaki, Mrs. Anastasia Karagianni, Mr. Panagiotis Gialis, Mr. Kimonas Georgakis, Mrs. Maria-Alexandra Papoutsi and Mr. Konstantinos Kakavoulis.

Mr. Konstantinos Kakavoulis, a Saint Paul graduate and founding member of Homo Digitalis, discusses ways to protect against cyberbullying with High school students

In fact, the members of Homo Digitalis Mr. Kimonas Georgakis and Mr. Konstantinos Kakavoulis had the pleasure of visiting the school they graduated from, this time in a completely different role.

Mr. Kimonas Georgakis, a Saint Paul graduate, talks to High School students about digital footprint

Mrs. Maria-Alexandra Papoutsi, member of Homo Digitalis, explains the right of access to the students of the 1st and 2nd grade of the High school

We would like to thank the Administration of the High School for this invitation! Special thanks go to Mr. Antonis Voutsinos, Deputy General Manager of the school, Mr. Koutsa, Head of the High school, Mrs. Stamataki, High School Director, Mr. Rousso, Deputy Director of the High School, and Mrs. Lymberi for their impeccable hospitality.

Mr. Koutsa, Head of the High School, and Mr. Stamatakis, with the two graduates of Saint Paul and the President of Homo Digitalis, Mrs. Elpida Vamvaka

Stay tuned! There will be more presentations at schools!


Open Letter on the dangers of using deep packet inspection

Today, 15 May 2019, European Digital Rights (EDRi), along with 45 other civil society organisations, academics and private actors from 15 different countries, including Homo Digitalis, sent an open letter to European legislators informing them of the dangers of the extensive use of deep packet inspection (DPI) technology.

This technology has significant potential for intrusion into user privacy, yet mobile operators continue to use it to examine the content of our communications and to collect information such as the applications we use and the material we view on the internet. With zero-rating now spread to almost all EU Member States (all but two), companies use this technology to offer plans that give access only to specific services and providers (e.g. data packages covering Internet use exclusively for certain social networking platforms).
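
To make the difference concrete: shallow inspection reads only packet headers (addresses and ports), while DPI reads the payload itself to identify the application or service in use. Below is a minimal, illustrative sketch in Python using the scapy library; the capture file name is a hypothetical example, and real operator-grade DPI inspects far more protocols than plain HTTP.

```python
# Illustrative sketch of payload-based classification (the essence of DPI).
# "traffic.pcap" is a hypothetical capture file; real carrier DPI is far
# more sophisticated (e.g. it also reads TLS SNI fields).
from scapy.all import rdpcap, TCP, Raw

def classify(pkt):
    # Header-only inspection would stop at IP addresses and ports;
    # DPI looks inside the payload to name the service being used.
    if pkt.haslayer(Raw):
        payload = bytes(pkt[Raw])
        if b"Host: " in payload:  # unencrypted HTTP request
            return payload.split(b"Host: ")[1].split(b"\r\n")[0].decode(errors="replace")
    return "unknown"

for pkt in rdpcap("traffic.pcap"):
    if pkt.haslayer(TCP):
        print(classify(pkt))
```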

You can learn more about the open letter in the relevant EDRi article and see the full text here.


How "free" are our choices in the Big Data era? The example of Netflix

Written by Evangelos Farmakidis *

We have just finished the last episode of the new season of our favorite Netflix series and have decided to go for a walk. Before we turn off the TV and get ready, another series catches our interest. It appeared among the trending titles and in the “choices for you”, and it happens to be the kind of series we like. After a quick look, we decide to watch just the trailer, to make sure it is to our liking.

Indeed it is! The walk is canceled and a new episode marathon (binge-watching) begins.

How free are our choices in the era of Big Data?

Did we really want to stay at home and watch a new series or did we want to go out for a walk?

Or has our will, in some artful manner, been significantly influenced and our decisions guided?

We will try to give a short, simple and understandable answer to the above questions.

The Netflix example is certainly not accidental. The appearance of the newly proposed series on our television screen is no coincidence either.

Netflix today counts 137 million subscribers in 190 countries and owes much of its success to Big Data. Data analysis is a practice the company has employed since its early years, when streaming was not yet available and Netflix’s sole service was mailing DVDs to its customers’ homes. By studying its customers’ preferences, it proposed films that might interest them.

In doing so, it wanted to increase its income while also tackling a problem that arose whenever a film won an Oscar or a famous critic wrote an enthusiastic review: demand for that film boomed and Netflix could not keep up with it, while older movies went unchosen and the company lost revenue.

So it had to find a way to steer its customers towards less famous or older movies. For this reason, it developed a prediction algorithm called Cinematch, which proposed new movies to its users based on other users’ preferences.
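
The core idea behind such a prediction algorithm is collaborative filtering: estimate a user’s rating for an unseen film from the ratings of users with similar taste. Here is a minimal sketch (not Netflix’s actual Cinematch, whose details are proprietary; the ratings data are made up):

```python
# User-based collaborative filtering in miniature: predict a user's rating
# for a film as a similarity-weighted average of other users' ratings.
import numpy as np

# rows = users, columns = films; 0 means "not rated"
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    mask = (u > 0) & (v > 0)  # compare only films both users rated
    if not mask.any():
        return 0.0
    return float(u[mask] @ v[mask] /
                 (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask])))

def predict(user, film):
    """Weighted average of other users' ratings for this film."""
    num = den = 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, film] == 0:
            continue
        w = cosine(ratings[user], ratings[other])
        num += w * ratings[other, film]
        den += abs(w)
    return num / den if den else 0.0

print(predict(user=0, film=2))  # estimate user 0's rating for film 2
```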

Later, in October 2006, wanting to improve the algorithm’s performance, it launched an open competition, the Netflix Prize. The team that managed to improve the algorithm’s results to a satisfactory degree would win a $1 million cash prize. The competition drew worldwide attention, bringing together more than 40,000 teams of experts (in mathematics, statistics, information technology, etc.) from 183 different countries. For that purpose, researchers were given access to the ratings and reviews of 500,000 Netflix users.

It took three years to achieve the desired result, and the prize was finally awarded on 21 September 2009. The winning algorithm was the work of a consortium of 4 teams called BellKor’s Pragmatic Chaos and improved the results of the existing algorithm by 10.06%. Today, the recommendations we receive are said to exceed an 85% success rate.
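
The Prize’s yardstick was the root-mean-square error (RMSE) between predicted and actual ratings on a held-out test set; the 10.06% figure refers to the reduction in RMSE relative to Cinematch. A toy computation with made-up numbers:

```python
# RMSE, the Netflix Prize's evaluation metric: the winning entry had to
# beat Cinematch's RMSE on a held-out rating set by at least 10%.
import numpy as np

actual    = np.array([4.0, 3.0, 5.0, 2.0])
predicted = np.array([3.8, 3.4, 4.6, 2.5])

rmse = np.sqrt(np.mean((predicted - actual) ** 2))
print(f"RMSE: {rmse:.3f}")  # lower is better
```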

By keeping us busy with a constant stream of suggestions, Netflix gets us to renew our subscription every month. Without such suggestions, it is likely that after finishing the last season of our favorite series we would cancel our subscription, at least until the next season is released.

A typical example of Netflix’s personalized marketing is the following: to promote House of Cards, perhaps its most famous series and the one that established the company, it shot several different trailers for the very same series, each aimed at a different audience group depending on its preferences. Thus, drama lovers saw a more dramatic version of the trailer, while adventure lovers saw a more adventurous one, and so on.

Netflix today processes various data about its users, such as age, gender, geographic location, information about the computer or other devices used to access the service, the programs they have watched since registering, the days and times associated with their viewing, their search history, and even the way they scrolled while browsing. It also records every time they pause, rewind to re-watch a scene, or skip a boring one.

In science today, there is no universally accepted definition of Big Data. However, we can say that “Big Data” describes data, regardless of type, with the following key characteristics: excessive volume, high variety, and high collection speed – even in real time – from multiple sources.

Data mining is the process by which one obtains useful information through the proper processing of “raw”, unclassified, complex, and large volumes of data previously collected and stored in vast databases.

The information extracted from data is a powerful “weapon” in the hands of marketers, who use it to promote products or even to design new ones.

In the era of the 4th Industrial Revolution, data has the same value that oil had for the 2nd Industrial Revolution and steam for the 1st (“The world’s most valuable resource is no longer oil, but data”).

As such, Netflix not only knows which of its programs to recommend to us; it also bases the production of new programs on the preferences and habits of its users. Knowing exactly what its users prefer, it produces programs that are all but guaranteed successes before they are even shot.

The use of these marketing methods is by no means reprehensible, nor is it, of course, the writer’s intention to condemn Big Data, which proves very useful in many areas of our lives beyond commercial activity, such as medical science.

On the other hand, the benefits for the informed consumer are numerous, as they are given the opportunity to make choices that suit their tastes and needs while saving time and money.

The European Union recognized the value of personal data as early as the mid-1990s and has set up a specific legislative framework to facilitate its free flow and to protect the residents of its Member States. Its latest major legislative initiative is the General Data Protection Regulation, more commonly known as the GDPR. It should be noted that the forthcoming ePrivacy Regulation is expected to address, among other things, the processing of personal data in electronic communications.

The importance of big data for the modern economy and the science of marketing is unquestionable. After all, as Dan Zarrella has rightly pointed out, “Marketing without data is like driving a car with your eyes closed”.

However, consumers need to be aware, so that these practices serve their interests rather than influencing their will, manipulating their decisions, and dictating their lifestyles.

Bibliography

    • Jenner M. (2018), Netflix and the Re-invention of Television. Palgrave Macmillan.
    • Pant V., Yu E. (2018), Conceptual Modeling to Support the “Larger Goal” Pivot – An Example from Netflix. In: Buchmann R., Karagiannis D., Kirikova M. (eds) The Practice of Enterprise Modeling. PoEM 2018. Lecture Notes in Business Information Processing, vol 335. Springer, Cham.
    • Voigt K.-I., Buliga O., Michl K. (2017), Entertainment on Demand: The Case of Netflix. In: Business Model Pioneers. Springer International Publishing.
    • Jenkins J. (2017), Netflix. In: Schintler L., McNeely C. (eds) Encyclopedia of Big Data. Springer, Cham.
    • Roberts R. (2017), Live TV, Netflix, Amazon, the Universe! In: Mastering Media with the Raspberry Pi. Apress, Berkeley, CA.
    • McDonald K., Smith-Rowsey D. (2016), The Netflix Effect: Technology and Entertainment in the 21st Century. London: Bloomsbury Academic.
    • Amatriain X., Basilico J. (2015), Recommender Systems in Industry: A Netflix Case Study. In: Ricci F., Rokach L., Shapira B. (eds) Recommender Systems Handbook. Springer, Boston, MA.
    • Cronin M. J. (2014), Netflix Switches Channels. In: Top Down Innovation. Springer International Publishing.
    • Keating G. (2012), Netflixed: The Epic Battle for America’s Eyeballs. Portfolio/Penguin.
    • Bell R. M., Koren Y., Volinsky C. (2010), All Together Now: A Perspective on the Netflix Prize. CHANCE, 23:1, 24-29.
    • Finlay S. (2014), Predictive Analytics, Data Mining and Big Data. Palgrave Macmillan UK.
    • Chen M., Mao S., Zhang Y., Leung V. C. M. (2014), Big Data: Related Technologies, Challenges and Future Prospects. Springer International Publishing.
    • Mohanty H., Bhuyan P., Chenthati D. (2015), Big Data: A Primer. Springer India.

* Evangelos Farmakidis is a member of Homo Digitalis and a trainee lawyer. He holds an MSc in “Law and Informatics” from the Department of Applied Informatics of the University of Macedonia and the Law School of the Democritus University of Thrace, is a postgraduate student of Criminal Law and Forensic Sciences at the Law School of the Democritus University of Thrace, holds a Diploma in Social Economy and Social Entrepreneurship, and is an Accredited Mediator of the Ministry of Justice, Transparency and Human Rights.


Concept Note to the United Nations Committee on the Rights of the Child

Today, Homo Digitalis, responding to the invitation of the UN Committee on the Rights of the Child (‘CRC’), submitted a Concept Note on children’s rights in relation to the digital environment.

The note will be used by the CRC in the adoption of the General Comment on children’s rights in relation to the digital environment.

You can read the note in English here.


How to create and use powerful passwords

Written by Vyron Kavalinis *

On the web, it is common for a website to require the user’s registration before displaying its content, providing its service, or even allowing comments on an article. Registration, and consequently account creation, requires a username and a password.

The username must be unique, that is, not already taken on that site, in order to create the account, while the username and password combination proves the user’s identity, and entering it correctly gives them access to their account. Even for our email, signing in requires a username (usually our email address) and a password.

The password is usually a combination of letters, symbols and numbers. Using strong passwords is necessary to protect the user’s security and identity. An easy password is more likely to be guessed by someone else, who would then gain access to our personal data.

To begin with, a weak password is short. The longer the password, the harder it is to guess, because the number of possible combinations grows rapidly. Research into millions of leaked passwords has revealed that users prefer very easy choices such as “123456”, “password”, “football” and other simple words we all use in everyday life, which a third party can therefore easily guess.
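
To see how quickly the number of combinations grows with length, here is a quick back-of-the-envelope calculation (the alphabet sizes are illustrative):

```python
# Possible combinations grow exponentially with password length.
lowercase = 26
full_set = 26 + 26 + 10 + 32   # lower + upper + digits + common symbols

for length in (6, 8, 10, 12):
    print(f"{length} chars, lowercase only: {lowercase ** length:.2e}")
    print(f"{length} chars, full set:       {full_set ** length:.2e}")
```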

It is also worth mentioning that a large number of users reuse the same password on every site they log into. So if someone knows our email or username, a single password gives them access to every site where we have an account, whether that is our bank, a shop we buy from, or our own Facebook profile.

The best way to increase security is to create more complex passwords. It is recommended that a password be long, usually over 12 characters, and consist of a sentence the user can easily remember.

A good approach is to use online tools, which combine random words into sentences for use as passwords, or which generate passwords according to options set by the user. Below we mention some examples of such tools that you can use.
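
For illustration, here is a minimal sketch of the random-word approach such tools use, based on Python’s cryptographically secure secrets module; the word list is a tiny made-up sample, whereas a real generator draws from thousands of words:

```python
# Random-word passphrase generator (illustrative sketch).
import secrets

WORDS = ["correct", "horse", "battery", "staple", "violet", "anchor",
         "maple", "comet", "harbour", "lantern"]

def passphrase(n_words=4, sep="-"):
    # secrets.choice uses a CSPRNG, unlike random.choice
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "comet-staple-violet-anchor"
```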

It is worth mentioning that a password of at least 12 characters can take an attacker centuries to break. With current computing power, and with many machines working in parallel, this estimate may shrink, but the time needed remains enormous. For example, according to published research, a supercomputer (with the efficiency of 100 ordinary computers working simultaneously) could break a 10-character password in about 3 years.
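
A rough sketch of where such estimates come from; the attacker speed used here is an assumption for illustration, and real figures vary enormously with hardware and the password hashing in use:

```python
# Back-of-the-envelope brute-force time estimate.
ALPHABET = 26 + 26 + 10            # letters and digits
GUESSES_PER_SECOND = 1e10          # assumed rate for a fast offline attacker
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for length in (8, 10, 12):
    years = ALPHABET ** length / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{length} characters: ~{years:,.2f} years to exhaust")
```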

It is not recommended to use the same password on every website and application, nor to write passwords down in plain text files or notebooks.

Moreover, the use of symbols and numbers really helps, as the password becomes more complex and therefore more difficult for a third party to find.

Password generators are a very good solution, since most let the user set the parameters of the password and create one ready for use. They are particularly helpful because, if a website requires a mix of upper- and lower-case characters, they will create a more complex password than a human typically would. For example, a human might pick “Letmein!123”, while a password generator would produce something like “lwIXgHeaWiq”. The second is harder to find even though it includes no special characters or symbols.
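
A minimal sketch of what such a generator does under the hood, again using Python’s secrets module:

```python
# Character-level password generator (illustrative sketch).
import secrets
import string

def generate(length=12, symbols=True):
    alphabet = string.ascii_letters + string.digits
    if symbols:
        alphabet += "!@#$%^&*"
    # each character is drawn independently from a CSPRNG
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate())               # e.g. "w9X!qLr2@mKe"
print(generate(symbols=False))  # letters and digits only
```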

Using a password generator requires no specialised knowledge, and there are many online tools that can be used to create our passwords. Here are some online password generators you can use:

Strong Password Generator (https://www.strongpasswordgenerator.com/). It lets you define the length of the password along with some configuration options, such as the use of “voice words”: the generator also displays the letter and number combination as words, so that the password is easier to remember.

Norton Password Generator (https://my.norton.com/extspa/idsafe?path=pwd-gen|). Norton, well known in the security field, has set up an online tool for creating passwords. It offers many options, such as the length of the password and the use of capitals, symbols and numbers.

XKpasswd (https://xkpasswd.net/s/). XKpasswd is probably one of the few tools that offer so many options for creating a password. A feature that sets it apart from most password generators is the ability to select a preset whose rules the generated password will follow; examples include presets matching the requirements of an Apple ID, a WiFi key, etc.

Finally, we would recommend using a password manager to store and manage your passwords. Password managers are essentially programs that manage your passwords and store them encrypted, so they cannot be read by anyone else. With such a program you only need to remember one password: your master password for the password manager itself.
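
For illustration, here is a minimal sketch of the core mechanism: derive one encryption key from the master password and store every entry encrypted. It uses the third-party Python cryptography package; the fixed salt is a simplification for the example, as real managers store a random per-vault salt.

```python
# Core idea of a password manager: one master password unlocks an
# encrypted vault of all the others (illustrative sketch).
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_master(master: str, salt: bytes) -> bytes:
    # slow key derivation makes brute-forcing the master password costly
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt,
                     iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master.encode()))

salt = b"fixed-demo-salt!"            # real managers use a random salt
vault = Fernet(key_from_master("my master password", salt))

token = vault.encrypt(b"bank-site: lwIXgHeaWiq")  # what gets written to disk
print(vault.decrypt(token))                       # readable only with the key
```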

With a password manager you don’t have to remember your passwords by heart: they provide add-ons for every major browser which recognise a website’s login form as soon as you visit and offer to fill it in automatically.

Some password managers can also fill in randomly generated passwords when you register on a site, and store them automatically.

Since someone who intercepts the master password would gain access to all the others, many password managers also provide extra safeguards in the event of unusual activity.

Two of the best-known password managers are LastPass and 1Password. Both can be used free of charge, while a paid subscription unlocks more options and functions. Both have add-ons for Chrome, Mozilla Firefox and Opera, and run on Windows, Linux and macOS. It is also worth noting that if you realise your master password has been intercepted, you can request that your account be deleted; 1Password additionally recognises the device you connect from, and if you want to connect from a new one you must enter the secret key that was generated automatically when you registered with the application.

We should mention that various security flaws have been reported in password managers. Even so, each company takes immediate steps to close these gaps and improve the security of its service. Even after such reports, using a password manager is considered safer than storing passwords in a plain file with no encryption at all.

Homo Digitalis has no commercial interest in suggesting the above tools. We recommend them as safer alternatives within the wide variety of such tools available. It should be noted that some tools of this kind may aim to intercept your data, so we recommend being very careful when using them.

* Vyron is a graduate of the Department of Informatics Engineering, TEI of Crete. He works for a company operating in the field of web hosting and domain names, where he deals with website development and security. In the past he has also worked with SSL certificates.


Homo Digitalis visited Evangeliki Model High School of Smyrna

Today, Homo Digitalis was hosted by Evangeliki Model High School of Smyrna and spoke to the students of the 1st and 2nd grade of the High school about digital footprints and their rights regarding their personal data.

The presentation was conducted in order to prepare the students for their participation in the Youth Parliament (in Greek: “Βουλή των Εφήβων”, Vouli ton Efivon).

The following members of Homo Digitalis worked on and participated in the presentation and the project: Mrs. Mary Mouzaki, Mrs. Anastasia Karagianni, Mr. Panagiotis Gialis, Mr. Kimonas Georgakis, Mrs. Maria-Alexandra Papoutsi and Mr. Konstantinos Kakavoulis.

We would like to thank the Administration of the High School for this invitation!

Stay tuned! There will be more visits to schools!


Open letter on net neutrality addressed to the European Commission and BEREC

Two years after the entry into force of the new rules on net neutrality, the European Commission published today, 3 April 2019, its report on the implementation of these provisions by the Member States.

Unfortunately, the Commission’s report on the open internet does not provide the deep analysis one would have expected. Despite its size, it reaches general and superficial conclusions without addressing the omissions of the Member States concerned, including our country, in implementing these provisions.

These shortcomings have been underlined in recent studies carried out by recognised civil society organisations, such as epicenter.works.

In reaction to this situation, and in order to voice openly the risks to net neutrality arising in the European Union, 29 civil society organisations, Homo Digitalis included, today sent an open letter to the European Commission and the Body of European Regulators for Electronic Communications (BEREC).

The letter stresses the need to ensure in practice the protection of internet users (natural and legal persons), so that internet traffic complies with the principle of non-discrimination. Its purpose is to reopen the debate with the European institutions ahead of the upcoming revision of BEREC’s guidelines, which is an opportunity to improve the problematic situation that has arisen.

What we hope for is a real digital single market that protects and promotes open, neutral and non-discriminatory access to the Internet.

The full text of the letter is available here.


Algorithmic transparency and accountability of online service providers: Exercising the right to explanation


Written by Theodora Firingou *

While making use of various online services, one cannot help but notice that we are constantly bombarded with suggestions, not only for content we may like but also for the products and services of multiple advertisers. YouTube and Netflix, for instance, offer recommended videos and movies; Spotify even provides you with the Spotify Radar to help you discover new music; Facebook and others advertise products based on your previous searches. The common denominator in online service providers’ use of algorithmic recommendation systems is their wish to “please” the user by enhancing his/her personalised online experience.

However, is this attempt to personalise your interaction with the service really that innocent? How does an algorithm decide what I would like to watch or listen to?

Most of us have probably experienced having just met someone and then, all of a sudden, receiving a suggestion to befriend this person on Facebook. Or have you ever noticed that a product you were discussing on the Messenger app is afterwards advertised to you in your Facebook newsfeed? Doesn’t that feel creepy? And what happens if things get serious? For instance, a user could be discriminated against when offered a recommended job, or could even end up stranded in a filter bubble with a limited worldview or limited choices.

Baffled by this series of unanswered questions, I was motivated to carry out research on the right to explanation. Five months later and after having conducted empirical research by exercising my right to explanation against major online service providers, I received an award for my thesis on algorithmic accountability and the right to explanation. Herewith I would like to share some of my findings.

Problem statement

Due to the complex and opaque nature of algorithmic systems, their extensive use in automated decision-making and profiling has raised questions of transparency and accountability. Algorithms are perceived as a ‘black box’ and thus hinder any attempt to assess the decision-making process and its results.

At the same time, European data protection legislation, and predominantly the General Data Protection Regulation (GDPR), demands transparency and accountability on the part of the data controller and lays down relevant safeguards which controllers must respect. One of these safeguards is the right to explanation, whose existence and scope have, however, sparked an extensive academic debate.

In this context, whether the implementation of the right to explanation in practice reflects its underlying scope became the main research question of my master thesis. To that end, the following methodology was applied: firstly, in order to identify the right’s scope, I focused on mapping and analysing the European legislative framework and the relevant legal literature. Secondly, emphasis was placed on empirical research into the implementation of the right to explanation. In particular, the right to explanation was exercised against five different online service providers, who were questioned about how their algorithmic recommendation systems for personalised content and targeted advertisements work.

The legal framework

Despite the lack of a neat, explicit ‘right to explanation’ labelled provision in either the Data Protection Directive or the GDPR, the right derives from Article 22 and Recital 71 GDPR on the safeguards against automated decision-making, Articles 13 (2)(f) and 14 (2)(g) GDPR regarding controllers’ notification duties and, lastly, Article 15 (1)(h) GDPR as well as Article 12 of Directive 95/46 on the right of access. In particular, according to Article 22 of the GDPR, ‘data controllers shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his/her point of view and to contest the decision.’

Moreover, as laid down in Articles 13 to 15 of the GDPR, the data subject shall have access to the personal data and the information about ‘the existence of automated decision-making, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.’ Additionally, according to Article 12 of the Data Protection Directive controllers must provide data subjects with ‘knowledge of the logic involved’ in any automated decision-making.

Scope and applicability of the right to explanation

Through the analysis of the relevant legal provisions, the academic debate around them and the rebuttal of the arguments against the right to explanation, it was possible to identify the right’s scope and applicability. It was concluded that a systemic and teleological reading of the provisions (especially in the light of the GDPR’s spirit of empowering individuals’ data protection) confirms the existence of the right to explanation.

In particular, the analysis showed that the right to explanation entails providing the data subject with meaningful information about the logic involved, in the sense that the meaningfulness of the information provided must be interpreted flexibly. Thus, the information may refer either to the system’s functionality or to a specific decision, and it can constitute either an ex ante or an ex post explanation in relation to the time at which the decision was reached. Moreover, in order to assess whether an explanation is meaningful, the information provided must be examined in the light of its functional value (especially with regard to enabling the exercise of the data subject’s rights and freedoms). Additionally, the explanation should lead to individualised transparency, in the sense of a personalised understanding of information of meaningful depth. The information should also be intelligible and provided in clear and plain language, so that a regular data subject (i.e. usually a user with no expertise in technology-related matters) is able to fully comprehend it.

Regarding the applicability criteria of the right, automated decision-making (including profiling) resulting in a solely automated decision, reached without any meaningful human intervention, must be taking place. Furthermore, the automated decision must have legal or similarly significant effects, which, however, should be interpreted in a broad sense, including cases where the data subject’s freedoms and rights are endangered and even the case of targeted advertising based on profiling.

Lastly, the right to explanation must be respected regardless of trade secrets and IP rights. This means that they cannot serve as a justification for refusing to provide information, and that data protection rights outweigh trade secrecy and IP rights.

Compliance issues revealed

Taking into consideration the scope and applicability of the right to explanation, my empirical research focused on examining whether the right to explanation fulfils this scope when exercised in practice. In particular, I filed a number of explanation requests regarding recommended content and targeted advertisements before five online service providers, namely Facebook, YouTube, LinkedIn, Spotify and Netflix.

Unsurprisingly, the analysis of the empirical research’s results revealed a great number of compliance issues and a gap between theory and practice.

Filing explanation requests and obtaining meaningful information about the logic involved in algorithmic systems responsible for automated decision-making was actually an extremely challenging procedure; it required legal literacy on the matter, organisation, persistence and patience. In other words, it is doubtful whether a regular data subject would ever manage to efficiently exercise his/her rights after facing such hurdles.

Although privacy policies were easily found, they were often problematic in terms of completeness and clarity. Identifying the right communication means to contact the controllers was even more troublesome.

However, the most worrying findings resulted from the correspondence with the controllers. Various malfunctions, such as organisational and administrative avoidance strategies, lack of awareness, ignorance and refusal to address the requests, complicated the procedure. Moreover, the explanations provided were not satisfactory: the information given was generic, fragmentary and misleading, and could thus not possibly fulfil the scope and rationale of the right to explanation, since it could not be conceived as meaningful information. Some controllers refused to provide a full explanation and justified their position either on trade secrecy grounds or by arguing that Article 22 GDPR, and consequently Article 15 (1)(h), do not apply since the automated processing does not produce legal or similarly significant effects. However, none of these arguments constitutes a valid ground on which data controllers could rely in order to avoid providing an explanation to the data subject.

To sum up, the findings of the empirical research on a limited number of widely used online service providers indicated that the right to explanation does not fulfil its scope under European data protection legislation when exercised in practice against data controllers. Most worryingly, it was confirmed that data subjects’ rights are being significantly disrespected in the online environment. After all, maybe we should think twice before celebrating this generously ‘enhanced personalised experience’, since the legally provided safeguards meant to protect us against malicious processing of our personal data, especially during automated decision-making and profiling, do not seem to be implemented by major controllers. It is thus doubtful that we can rely on a transparent and accountable processing of our data.

* Theodora Firingou is a lawyer holding an LL.M. in Penal Law (University of Hamburg) and an LL.M. in IP/ICT Law (KU Leuven). She focuses on data protection and privacy law, and mainly on the issues arising from the use of new technologies such as Artificial Intelligence (‘AI’).