Guidelines by Homo Digitalis in the context of the European Data Protection Day

January 28 has been established as the European Data Protection Day by the Council of Europe. The information society and the increasing use of the Internet lead to the steady growth of our digital footprint, and personal data have become a constant bone of contention for companies that base their business model on them.

What are the challenges, and what can you do to protect your personal data? For the European Data Protection Day, Homo Digitalis created a short video with guidelines to help you prevent potential violations, as well as ways to react should you feel that your rights have been violated.

Watch the video and get informed through our website!


Homo Digitalis for the European Data Protection Day

January 28 has been established as the European Data Protection Day by the Council of Europe. The members of Homo Digitalis created a video explaining what personal data mean to them.


The right to be forgotten

Written by Apollonia Ioannidou*

“What happens to the memory we do not recall? Can we preserve the past itself?” With these words, Proust, decades ago, described his anxiety about the things that are forgotten. It is undisputed that human memory is weak and cannot retain everything. Everyone suspects that there are things we cannot recall: even our own self hides within its experiences and cannot remain solid and intact. Emotional memory runs deeper than the person; it surpasses him. People, by their very nature, tend to forget; remembering is the exception rather than the rule.

In the short story “Funes the Memorious”, the great Argentine writer Jorge Luis Borges describes the tragic life of a person who never forgets anything, thus highlighting the decisive importance of the processes of oblivion for a healthy and balanced human life. It is precisely this gap that technological development has come to fill, by creating a web where information is kept intact and the process of oblivion is erased. The right to be forgotten was adopted in order, on the one hand, to defend the protection of individuals' personal data and, on the other hand, to give individuals control over them. Although there are cases where this right has been applied, its exact content has not yet been clarified. Moreover, it is a fact that this right conflicts with other well-known and established rights, creating an even greater need for its analysis and clarification.

What is the right to be forgotten?

The right to oblivion is defined as “the right not to have past events that are no longer relevant brought up again”. One might think that this right applies mainly to the mass media and is understood as the right of a person not to be subject to journalistic interest and commentary about past situations in his or her life. This, of course, seems reasonable, because the opposite would make it harder for an individual to reintegrate into society if, for example, the press kept recalling criminal offences he or she had committed in the past. This is not an absolute right; a fair balance must be struck where there is a legitimate public interest in information. Of course, it is not always clear when such a legitimate interest exists.

The example of HIV positive women

The case of the HIV-positive women in April 2012 is telling. During mass checks conducted by the police, women were subjected to forced HIV tests and were accused of prostitution and of knowingly seeking to transmit HIV to their alleged clients. At the same time, their photographs and identity details were published, together with the fact that they were HIV-positive and the criminal charges against them. This publication is alleged to have had a serious impact on the data subjects, permanently stigmatising them and possibly contributing to the suicide of some of them.

This data was made public because, it was claimed, there was a legitimate public interest in information. It is striking, however, that the publication, the continuous reproduction of the story and the over-exposure of the women's photographs had the opposite result: the men who had had sex with these women avoided being medically examined, fearing that they too would become targets of the media. It is therefore unclear whether and when there is indeed a public-interest requirement that justifies publishing photographs and data in order to inform the public.

When it comes to the right to be forgotten, it is worth mentioning that the international literature uses a variety of terms for it: the right to forget, the right to be forgotten, the right to oblivion, or the right to delete.

Digital oblivion

By digital oblivion, we mean the right of individuals to stop the processing of their personal data and to have them deleted when they are no longer needed for legitimate purposes. The European Commission has recently called for clarification of the concept and has made an initial effort to give its own (broad and vague) definition, cited above. Undoubtedly, this right implies that a person's personal information must be irrevocably erased. In 2008, Jonathan Zittrain proposed a similar concept (which he called “reputation bankruptcy”), allowing people a “fresh start” on the Internet. It may seem obvious that when a person withdraws consent or expresses the wish to stop the processing of his or her personal data, the data must be irrevocably deleted and removed from the data processor's servers. However, this expectation does not fit neatly with legal, economic and technical reality.

The GDPR contains detailed and complex provisions that can cover a wide range of situations. The level of protection afforded to data subjects is also to be praised, especially as regards the data subject's rights, such as the right to oblivion, since it contributes to the further protection of data so sensitive that their disclosure could adversely affect the data subject's life. These rights, some of which are novel, will in the long run contribute not only to improving the level of data protection for data subjects, but also, to a large extent, to the free flow of information, by fostering data subjects' trust in the security of their data and hence making it easier to do business across the EU.

*Apollonia Ioannidou holds a Bachelor's degree from the Law School of the Aristotle University of Thessaloniki and a Master's degree from Panteion University, Faculty of Public Administration, in the Law, Technology and Economy track. She is currently pursuing a second Master's degree in Business for Lawyers at Alba Business School.


An imaginary football story

By Konstantinos Kakavoulis

It is May and we are in the middle of spring in Barcelona.

It is 5 in the afternoon and the first locals have already appeared at Plaça del Sol to enjoy a cool beer after a day at work. As it is Monday, the stores are already closed. At Cafe del Sol, the only ones who do not seem to be tourists are three men who look pretty tanned by the sun – a sign that they started their excursions to Costa Brava early this year.

They are almost 45 years old, but they look younger. Manu, the wittiest of the group, immediately orders three cold beers and a portion of patatas bravas. The conversation soon turns to last night's local football derby between the world-famous Barcelona and Espanyol, a well-run team that lacks the glamour of their city rivals but, after yesterday's victory, has become a genuine title contender.

The three friends had been at the stadium to watch the match. It was the first time in roughly a decade that away fans had been allowed into the stadium alongside the home fans. After crowd disorder had caused some very serious incidents, the Spanish federation had decided to ban travelling fans, and so for several years only the home team's supporters could watch matches in the stadium. Manu, a Barcelona fan since childhood, opens the discussion. “Your first goal came from a clear foul right in front of the referee. It should never have counted.”

“Yes, but the penalty he gave you after that was a clear dive. The defender never touched your player,” Felipe, a Sevillian and ardent supporter of any team playing against Barcelona, shoots back.

“It seemed the referee never meant to blow the final whistle. He kept waiting for you to equalise until the very last moment,” adds Jordi, whose father was a legendary Espanyol goalkeeper.

“In any case, it is a pity we could not get a picture of the three of us. It had been so many years since we last hung out together at the stadium,” Felipe recalls.

“It is really unacceptable that this new law forbids a photo of three friends who managed to go to the stadium and spend a beautiful afternoon supporting their teams. I honestly cannot figure out why this is forbidden.”

“Come on, do not grumble, let’s take one now. Estelle asked me to send her a photo of us”, Manu suggests.

“Forget it, we cannot take a photo even here. They have put that Miró sculpture in the square. If it's visible in the picture, we cannot send it anywhere.”

“Well, are they insane?” Manu says, red in the face – maybe he was not that worked up, maybe he just turned red from the last spicy potato.

“At least I will write a very nice article about our experience at the stadium,” says Felipe, who works for a well-known online magazine.

“Just be careful, do not put a picture of the match again. They will not let you upload it, like last time.”

“But who wants to read an article about a football match with no picture in it?”

“I would not read it, even if I knew you had written it.”

“But the first goal was a foul,” Manu says.

“Well, you can say whatever you want. This year’s championship is ours. “

“Will we get another round?”

*The above story is fictional. Nothing in it is real except the location, the patatas bravas and the endless debates about the city's two football teams in Barcelona's squares.

A scenario banning away fans in Spain, as in Greece, seems unlikely; it is almost as improbable as Espanyol contending for the title alongside FC Barcelona and beating them in one of the last matches of La Liga.

The restrictions the three friends discuss about the photos and the match report, however, are not so fictional. It is very likely that they will soon become part of our daily routine. Read more here.


Fake News on the Internet - Nature, Dangers and Ways of Dealing with It

Written by Ioannis Ntokos

What is fake news?

Fake news is not a new phenomenon. According to Wikipedia, fake news is “a form of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via the traditional press, broadcast media or online social media.” We notice, therefore, that fake news can spread through a variety of communication channels. It is worth recalling, for example, the spread of rumours within closed, or even wider, social circles over the last century (so-called gossip, which was often based on hearsay rather than on reality).

What has mainly changed about fake news in the 21st century is the way it spreads. Beyond the “traditional” press, which is still used to create propaganda, modern media that rely chiefly on the Internet have been added to the list of channels through which untrue news reaches the average user. Newspapers, magazines and news agencies (especially the largest ones) have acquired their own websites, online channels and electronic editions in order to exploit the Internet's explosive growth. Access to these media online has become very easy, and the Internet has become a source of information for millions of people around the world. According to a 2016 survey by the Reuters Institute, most residents of the 26 countries surveyed now rely more on social media than on the press to stay informed.

Worth mentioning at this point is the impressive but also worrying (as we shall see below) Chinese avant-garde in news broadcasting: the Chinese news agency Xinhua has designed and created the first news anchors based entirely on artificial intelligence. This is undoubtedly an impressive achievement, but the risks of misinformation and fake news remain.

What are the implications and risks of spreading fake news over the Internet?

The first of the most damaging effects of fake news is legal in nature, and it concerns possible violations of the rights to information and to expression. These rights are enshrined in the European Convention on Human Rights, ECHR (Article 10), and are constitutionally guaranteed in Greece under Articles 5A and 14 respectively. On the basis of these provisions, every Greek citizen must be able to be informed and to express himself/herself without restrictions (save for the permitted exceptions). The legal impact of fake news therefore seems, at first glance, very significant: such news misinforms citizens, and their fundamental rights are violated as a result. At first sight, then, the dissemination of such news is constitutionally prohibited in Greece, and this prohibition applies to news broadcast by any means, including the Internet. On a more practical level, fake news found online can be particularly harmful, given how easily it becomes known and accessible to the general public. The medium through which this news is transmitted (i.e. the Internet) facilitates: a) the ease of writing and authoring it; b) the ease of its transmission, in which the recipient can now play an active role by distributing it to more people through social networking platforms; and c) the difficulty of identifying its source, given the vast amount of information available online.

What does all this mean? Quite simply, the ease with which such news can reach an Internet user, combined with the overwhelming volume of information (both already available and newly produced), makes its dissemination extremely easy. This ease grows sharply with the help of social media, which offer an extremely effective channel for delivering such news to its ultimate recipient. Because it is so easy to create, it can also be made extremely persuasive and plausible. At the same time, such news is difficult to cross-check and verify, both because of the sheer amount of information already available on the Internet and because of how hard it is for the average user of online media to filter it.

Undoubtedly, the most damaging effect of fake news is the spread of its (untrue) content. The person targeted by such publications faces the dangers arising from the motives of those who disseminate them, and these motives can be of every kind: political, economic, social, humanitarian or terrorist, to name a few. Influencing the recipient may lead to manipulation, fear, prejudice and marginalisation. Similarly, especially when false news refers to specific individuals or organisations, it can cause them non-material damage in the form of slander, prejudice, hatred, or even unwarranted positive impressions, none of which are based on true facts.

Methods of dealing with the phenomenon

In conclusion, misleading news on the Internet shares many features with the misleading news of more “offline” environments, but is even more damaging. How, then, can we protect ourselves against news of dubious validity? The most effective tool is undoubtedly critical thinking: the better you filter and analyse the content you encounter on the Internet, the easier it is to identify inconsistencies and misconceptions.

To that end, you can ask the following questions (a small illustrative sketch in code follows the list):

– What is the source of the news?

– Who is the author / writer?

– Are the above credible?

– When was the news published? Is it recent / still relevant?

– Has it been published in other media / by different sources?

– Is the content objective or subjective / biased?
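
To make the checklist concrete, here is a minimal, hypothetical sketch that encodes the questions above as a simple scoring function. The data class, field names and threshold are invented for illustration; real fact-checking requires human judgement and cannot be reduced to a script.

```python
# A minimal, hypothetical sketch: the checklist above encoded as a scoring function.
# The fields and the threshold are illustrative only.

from dataclasses import dataclass

@dataclass
class NewsItem:
    source_known: bool         # What is the source of the news?
    author_named: bool         # Who is the author / writer?
    source_credible: bool      # Are the above credible?
    recent_and_relevant: bool  # When was it published? Is it still relevant?
    corroborated: bool         # Published in other media / by different sources?
    objective_tone: bool       # Is the content objective, or subjective / biased?

def credibility_score(item: NewsItem) -> int:
    """Count how many of the six checklist questions the item passes."""
    return sum([
        item.source_known,
        item.author_named,
        item.source_credible,
        item.recent_and_relevant,
        item.corroborated,
        item.objective_tone,
    ])

article = NewsItem(source_known=True, author_named=False, source_credible=False,
                   recent_and_relevant=True, corroborated=False, objective_tone=False)
score = credibility_score(article)
print(f"Checklist score: {score}/6 -> {'treat with caution' if score < 4 else 'plausible'}")
```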

Awareness of the phenomenon, its features and the ways in which it spreads is the first precautionary step against news fabricated for misleading and malicious purposes.


Artificial intelligence in the courts: Myths and reality

By Eleftherios Chelioudakis

Many people think that the term “artificial intelligence” is synonymous with technological progress, and it is frequently presented as the hope for solving serious problems that plague our societies.

Through its articles, our team has tried to explain to our readers the term “artificial intelligence”, as well as the reason why we should be cautious regarding the developments in this sector.

In this article we will look at the frenzy surrounding artificial intelligence, examine whether the idea of using it in the area of justice really constitutes a new and innovative approach, and finally note the issues that merit particular attention. This is not the first time Homo Digitalis has focused on the use of artificial intelligence in the area of justice: we have already hosted an article on the philosophical issues raised by the replacement of judges by machines and by means of Machine Learning.

In the information society we live in, the increased use of computers and the Internet results in the rampant growth of the modern person's digital footprint. Smart devices, such as smartphones and wearables that record our sporting activity and health status, and even simple household appliances like coffee machines, fridges and toothbrushes, collect and process a flood of information about their users and reveal aspects of their daily life and personality.

The volume of information produced is so enormous that it gives the gathered data a new kind of value. Large firms base their business model on exploiting this information: their goods and services are provided “for free” and users “pay” with their personal data, which are analysed and shared with third parties in order to create targeted advertisements that generate profit.

As societies, we are coming to believe that collecting information will bring us closer to acquiring knowledge. As human beings, we do not have the intellectual capacity to process the vast amounts of information produced, so we place our hopes in the computing power of machines. Data analysis, the identification of correlations within the data, and the production of forecasts and comparisons are presented as the key that opens the door to controlling diseases, combating crime, administering our cities better and improving our personal well-being. At least, this is the idea we are called upon to embrace.

Thus, fields of artificial intelligence that two decades ago were considered outdated and ineffective, such as machine learning, suddenly attracted renewed attention. The modern smartphone, the spread of the Internet and of computers, the improvement of processors, and the increased capacity of data storage media gave machine learning algorithms the fuel they required: large amounts of data.

Serious problems in sectors such as health, local government, self-improvement, transport and policing can supposedly be solved as if by magic through the analysis of the information gathered. Naturally, the judicial system could not be left out.

At this point, we should distinguish between the use of artificial intelligence mechanisms and tools to support court administration (such as Natural Language Processing tools aimed at automating bureaucratic procedures and the rapid drafting, registration and analysis of judgments and other documents), and the use of such mechanisms and tools in the administration of justice itself, influencing the decision-making of judicial authorities. This article does not address the first category, as the challenges and limitations arising there are the same as in other application areas of Natural Language Processing. It focuses instead on the second category and on the idea that artificial intelligence mechanisms could assist judicial authorities in the decision-making process.

The truth is that the idea of using technology in judicial decision-making is neither pioneering nor innovative. On the contrary, it is an old idea that has appeared and been used extensively in foreign judicial systems, such as those of Canada, Australia and the USA, since the end of the last century, and it is well known in the related fields of forensic psychology and psychiatry. Systems such as the Level of Service Inventory-Revised (LSI-R) and the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) have been used as risk assessment tools to assist the judge at various stages of criminal proceedings, such as the suspect's detention, conviction, sentencing, and the decision to release a convicted person early on grounds of good behaviour. Frequently, technologies that merely implement mathematical and statistical methods of risk assessment are baptised “artificial intelligence” by their creators and the media. These systems have been analysed repeatedly over the last few decades, and critical assessments of their reliability and effectiveness differ depending on which body funds the relevant research.
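
To give a sense of what such risk assessment tools compute, here is a minimal, hypothetical sketch of a logistic-regression-style risk score, the kind of statistical method this class of tools implements. The feature names, weights and example values are invented for illustration and do not reflect LSI-R, COMPAS or any real system.

```python
# Hypothetical illustration only: a minimal logistic-regression-style risk score.
# Feature names, weights and inputs are invented and reflect no real system.

import math

# Invented weights, as if learned from historical case data.
WEIGHTS = {
    "prior_offences": 0.45,
    "age_at_first_offence": -0.03,
    "prior_failures_to_appear": 0.80,
}
BIAS = -1.2

def recidivism_risk(features: dict) -> float:
    """Return a probability-like score in [0, 1]; higher means higher predicted risk."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) function

# Example defendant: two prior offences, first offence at age 19, one failure to appear.
score = recidivism_risk({
    "prior_offences": 2,
    "age_at_first_offence": 19,
    "prior_failures_to_appear": 1,
})
print(f"Predicted risk score: {score:.2f}")
```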

While uncertainty about the use of such technologies in the area of justice has been strongly expressed, both the Council of Europe (1. Parliamentary Assembly (PACE), Recommendation 2101 (2017) 1, Technological convergence, artificial intelligence and human rights; 2. Parliamentary Assembly, Motion for a recommendation on Justice by algorithm – the role of artificial intelligence in policing and criminal justice systems; and 3. European Commission for the Efficiency of Justice (CEPEJ), European Ethical Charter on the use of artificial intelligence in judicial systems) and the European Union (1. European Commission, Communication on Artificial Intelligence in Europe; and 2. European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG), Draft Ethics guidelines for trustworthy AI) have, through their actions over the past two years, been exploring the possibility of their Member States using artificial intelligence mechanisms in the area of justice.

Although we are against the introduction of artificial intelligence mechanisms into judicial decision-making and believe that this approach is not a solution to any of the problems besetting the judiciary, the frenzy of the last few years makes it important to mention briefly the main issues that would arise from the prospective use of artificial intelligence in the field of justice, especially in criminal proceedings. The following list is indicative:

    1. Decision-making based solely on automated processing: As provided by Article 11 of Directive 2016/680, taking a decision based solely on automated processing which produces adverse legal effects concerning the data subject, or significantly affects him or her, is prohibited, except where authorised by law and subject to appropriate safeguards for the data subject's rights and freedoms, including at least the right to human intervention. It is therefore recognised that the human factor is an indispensable component of the decision-making procedure.
    2. Risk of discrimination and quality of the data used: Artificial intelligence mechanisms trained on processed data are dependent on the quality of that data. In simple terms, the predictions about my future behaviour will be based on other people's data, on which the algorithm has been trained. If the quality of the training data is low, or if any prejudice underlies it, the predictions are bound to be unreliable (see the sketch after this list). They may also be unlawful if they are based on personal data that are by their very nature particularly sensitive, as defined in Articles 10 and 11 of Directive 2016/680.
    3. Technical training of judges and lawyers: Before any artificial intelligence mechanism is used in hearings and in decision-making, it is a reasonable prerequisite that the professionals who would use it daily are familiar with the technology. Unfortunately, most judges and lawyers have poor technical knowledge and lack programming skills or even a basic understanding of the capabilities and functioning of the different artificial intelligence mechanisms. Basic training of legal practitioners is therefore considered necessary, starting already at undergraduate level, together with retraining of judges during their judicial education.
    4. Clear rules on the ownership of the data used by artificial intelligence mechanisms: Under no circumstances should the companies that created an artificial intelligence mechanism have access to the personal data of defendants and convicted persons, nor use such data commercially or for research purposes. Justice cannot be a profit-making sector.
    5. Comprehensible and explainable operation of the artificial intelligence mechanism: The area of justice is interwoven with the principles of transparency and impartiality. Therefore, if a judge bases a decision, even partly, on the prediction of an artificial intelligence mechanism, it must be possible to explain why the mechanism reached that prediction. If such an explanation is not possible, the decision based on it does not comply with the principle of transparency and cannot be considered impartial. It should be underlined that popular yet complex mechanisms, such as neural networks, make it particularly difficult to meet this requirement.
    6. Review of the effectiveness of artificial intelligence mechanisms by independent authorities: A scheduled, regular assessment of the validity of the predictions made by artificial intelligence mechanisms should be conducted. An independent supervisory authority with sufficient financial resources and highly knowledgeable, experienced personnel is the ideal body for this task. Its assessments should be based both on quantitative data, such as statistics on the accuracy of the mechanisms' predictions, and on qualitative factors, such as the conclusions drawn from case-by-case examination.
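
As a hypothetical illustration of point 2, the sketch below simulates how a risk model trained on historical records that over-represent one group (for example, because that group was policed more heavily) reproduces that bias in its "learned" risk rates, even when the underlying behaviour of the two groups is identical. All groups, rates and records are invented.

```python
# Hypothetical illustration: biased training data produces biased predictions.
# All groups, rates and records below are invented for the example.

from collections import Counter
import random

random.seed(0)

def make_training_data(n=10_000):
    """Simulate historical records in which group 'A' was policed roughly twice as
    heavily, so identical underlying behaviour yields unequal recorded labels."""
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        reoffended = random.random() < 0.20   # same true rate for both groups
        detected = reoffended and (random.random() < (0.90 if group == "A" else 0.45))
        data.append((group, detected))        # the only label a model ever sees
    return data

records = make_training_data()

# A "model" that simply learns the recorded re-offence rate per group,
# which is what any statistical learner would converge to on this data.
counts, positives = Counter(), Counter()
for group, detected in records:
    counts[group] += 1
    positives[group] += detected

for group in ("A", "B"):
    print(f"Group {group}: learned risk = {positives[group] / counts[group]:.2%}")
# Group A appears roughly twice as 'risky', although true behaviour is identical.
```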

Undoubtedly, the implementation of artificial intelligence mechanisms in any sector of modern life is a complex issue requiring a multidisciplinary approach. In any event, it is not merely a legal issue; it has strong ethical and social aspects that demand serious reflection. Rapid technological development is of the utmost importance for improving our quality of life, and we should certainly integrate it into our society. However, its integration must be preceded by thorough preparation and planning. Only then will we enjoy the benefits of new technologies while severely limiting the challenges and dangers they pose to the protection of human rights.


The right of access to written exam answers

Aikaterini Psihogiou

A preliminary ruling of the Court of Justice of the European Union (CJEU) answered the question whether a candidate's written answers in an examination, and the examiner's corrections on them, constitute personal data, and whether the candidate therefore has rights of access and rectification over the script after the examination has been completed.

The facts of the case

The request for a preliminary ruling was submitted in the context of legal proceedings between Peter Nowak and the Data Protection Commissioner of Ireland, concerning the Commissioner's refusal to grant P. Nowak access to his corrected script from an examination he had sat, on the ground that the information contained in it was not personal data. Having doubts as to whether an examination script constitutes personal data, the Supreme Court of Ireland referred to the CJEU a request for a preliminary ruling on the interpretation of Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data.

Court’s response

To begin with, Directive 95/46/EC defines as personal data “any information related to an identified or identifiable natural person”. An identifiable person is one who “can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity.”

According to the Court, the use of the expression “any information” reveals the legislature's intention to give this term a broad meaning, covering any information, whether objective or subjective, in the form of an opinion or assessment, provided that it relates to the person concerned; in other words, that the information is connected to a specific person by reason of its content, its purpose or its effect.

According to the Court's reasoning in this case, the content of a candidate's written answers in an examination reflects the level of the candidate's knowledge and skills in a given field, as well as his or her way of thinking, reasoning and critical judgement.

Furthermore, in the event of handwritten exams, the answers provide information relating to candidate’s handwriting.

Moreover, the purpose of collecting these answers is to estimate the candidate’s professional skills and his/her ability to exercise a specific profession.

Lastly, the use of this information is liable to have an impact on the candidate's rights and interests, since it can determine or affect, for example, his or her chances of entering the desired profession or obtaining the desired post.

As regards the examiner's corrections relating to the candidate's answers, the Court found that they too constitute information concerning the candidate, since their content reflects the examiner's assessment of the examinee's capabilities, and they are likewise liable to have consequences for him or her.

The Court therefore held that, in circumstances such as those of the case at hand, a candidate's written answers in an examination and any related corrections by the examiner constitute the candidate's personal data. Accordingly, the candidate has, in principle, rights of access and rectification (Article 12 of the Directive and Articles 15 and 16 of Regulation 2016/679) over both the written answers and the examiner's corrections.

The Court clarified, however, that the right to rectification does not allow the candidate to “correct”, after the fact, answers that were “wrong”, since any mistakes do not constitute an inaccuracy that needs to be rectified but are rather evidence of the candidate's level of knowledge. Finally, the rights of access and rectification do not extend to the examination questions themselves, which do not constitute the candidate's personal data.

In practice, how can a candidate in Greece exercise the right of access to his or her answers and the examiner's corrections?

In practice, a candidate can ask the examining authority, in writing or orally, for access to his or her answers and the examiner's corrections. The right of access is, in principle, exercised free of charge. The candidate may be asked to pay a reasonable fee only if the request is manifestly unfounded or excessive (e.g. when it is repeated) or if the number of copies the examining authority is asked to provide is large. The examining authority has no more than one month from the submission of the request to satisfy it; exceptionally, this time limit may be extended by up to a further two months.

*Aikaterini Psihogiou, LL.M., CIPP/E, is a lawyer in Athens, a graduate of the Law School of Athens and holder of a Master's degree (cum laude) in Law and Technology from Tilburg University. She works as a consultant on personal data protection.

Source:http://curia.europa.eu/juris/document/document.jsf?text=&docid=198059&pageIndex=0&doclang=el&mode=lst&dir=&occ=first&part=1&cid=455309


The interview of Nikos Theodorakis with Homo Digitalis

Homo Digitalis has the honor of hosting an interview with Nikos Theodorakis, a Greek who excels at both the academic and the professional level in Europe and America. Mr Theodorakis is an associate professor at the University of Oxford and an associate at Stanford University, while also practising law at an American law firm; he has also served as a consultant to international organisations.

As one would expect, he is ideally placed to talk about the importance of personal data for business activity, commerce and everyday life, and we thank him warmly for the interview he gave us.

– You’ve started your academic career in Trade Law, but for some years now you’ve turned to Personal Data and Privacy Protection Law. Is there any relation between trade and personal data?

Undoubtedly! Personal data, frequently called the “oil” of the 21st century, are an integral part of every commercial activity. Whether at the very centre of online services or supporting the contractual supply of goods, personal data are the driving force behind commerce today. In the past, trade was a relatively decentralised process; nowadays, however, data is used in every commercial transaction. To me, therefore, the transition, or rather the combination, from trade law to personal data protection law was a rational and probably necessary choice, given the ever-increasing importance of data.

– You are working both as a professor and researcher in some of the most important universities internationally and as a lawyer in a large law firm. How is it possible to combine an academic career with practising law?

I have to admit that it is a very demanding combination, among other things because it entails frequent travel between Oxford, Brussels, Athens and New York for academic and professional commitments. However, this “balancing act” really satisfies me, because each field offers something different: academia is a forum for the exchange of ideas, where you constantly learn, horizontally from your colleagues and vertically from your students, while dealing with legal issues that still need to be examined and resolved by the scientific community. Practising law is more intense, active and pressing, as you are asked to solve your client's problem as quickly and practically as possible, and the legal strategy you devise is dynamic. The combination of these two contrasting occupations keeps me evolving, so for now the fatigue is certainly worth it!

– You are cooperating with very important universities in the US. What is their stance towards GDPR? Should we feel lucky that it exists in Europe or does it simply cause more problems?

The truth is that the Regulation has been widely discussed in the academic and legal communities in the US, owing to the large number of American firms that operate in Europe, through a physical or online presence, and to the Regulation's extraterritorial application under certain conditions. I can say that the dialogue on the other side of the Atlantic in recent years has been really productive; in fact, in recent discussions with my colleagues at the law schools of Stanford and Columbia, I have observed growing interest in and knowledge of the subject. The Regulation has even triggered an intense debate about similar initiatives of a federal nature in the USA; the first signs have already appeared in California's new consumer protection legislation and in the CLOUD Act.

– What do you think is the level of compliance with the GDPR for Greek companies and organisations? Where do we stand comparatively with other countries?

This is a complicated question, because we have to distinguish between companies that have fully complied and those that have taken only the basic measures required, probably superficially, resulting in a “compliance theatre”. One of the Regulation's negative side-effects is that the market has been flooded with professionals who were not experts yet promised they could help a company comply fully with the Regulation at a very low cost. In reality, compliance is a process that takes time, a complete structural adjustment in how data is used, and the creation of a genuine data protection mindset. I would say that a minority of companies has substantially complied, a large majority has complied superficially, which leaves room for risk, and a sizeable percentage has not complied at all yet, which is very dangerous.

– What is the role of citizens in achieving companies' and organisations' compliance with the GDPR?

The role of civil society is to be aware of and take an interest in its rights, such as the right to be forgotten and the right to data portability, and to exercise them in good faith whenever there is any doubt or question about how companies process personal data. Citizens are the best guardians of this new legislation, and they must use their strength to push for improvement and transparency in the use of data. They can also organise themselves in a coordinated manner, through an organisation like Homo Digitalis, and report possible infringements to the competent body of our country, the Greek Data Protection Authority.

– Can the rights conferred by the GDPR help Greek citizens in practice?

Certainly, since citizens can exercise a series of rights that give them greater and more substantial control over their data. Increased user control over data was one of the main reasons that led to the Regulation, given that companies collect and process a wealth of data about us from various sources; accordingly, the user must be able to control who is processing his or her data, why, and where that data is being transferred. Overall, the rights that the Regulation provides result in greater transparency and accountability in the use of Greek, and European, citizens' data.

– Both as a professor and researcher and as a lawyer, you come up against new challenges. Which of them do you think we will face in Greece, and what action would you like an organisation like Homo Digitalis to take in response?

I believe that in the foreseeable future we will face challenges such as data leaks and breaches of network confidentiality, massive hacking combined with ransom demands in cryptocurrency, companies' inability to deal efficiently with users' requests to exercise their rights, spot checks by prosecuting authorities, and the complexity of how blockchain and artificial intelligence interact, or conflict, with the Regulation. An organisation like Homo Digitalis can produce working papers and organise workshops to examine these challenges.

– How do you expect the relationship between technology and humans to develop in the future?

It is a fact that the relationship between technology and humans will continue to grow ever more complex through the evolution of artificial intelligence and the Internet of Things. Developments that a few decades ago were the stuff of science fiction are now much closer than we may think.


Homo Digitalis met with the Greek Data Protection Authority

Introductory meeting between Homo Digitalis and the Greek Data Protection Authority

Homo Digitalis had the pleasure of meeting representatives of the Greek Data Protection Authority. Present at the meeting were the Chairperson of the Authority, Mr K. Menoudakos, as well as Mr V. Zorkadis (Director of the Secretariat) and Mr E. Athanasiadis (Communications Manager). Homo Digitalis was represented by E. Vamvaka, Mr K. Kakavoulis and Mr E. Mandrakis.

We discussed the broader context of personal data protection and privacy in our country, and in particular the results of the first period of application of the GDPR and the compliance of Greek stakeholders with it.

We also turned our attention to the feedback that the Authority receives from citizens regarding the protection of their personal data. We acknowledged the importance of the Authority's mission and the challenges it faces, and we consider it necessary to reinforce the Authority with the requisite human and financial resources.

The Authority welcomed the presence of an NGO dedicated to the protection of digital rights, which can strengthen the dialogue within civil society and highlight the importance of these rights in Greece. Both sides explored possibilities for cooperation aimed at the promotion and protection of these rights.

Homo Digitalis supports the important work carried out by the Data Protection Authority. We are optimistic that, through our action and continued cooperation, we will contribute to its mission.