Eleftherios Chelioudakis of Homo Digitalis as a Trainer in the 3rd OSCE ODIHR Training on Personal Data & Border Management
From December 4 to 6, Eleftherios Chelioudakis, our Co-founder and Executive Director, participated as a trainer in the third training session organized in Warsaw by the OSCE Office for Democratic Institutions and Human Rights (ODIHR). The training explored the use of new technologies at international borders, as well as the risks they pose and the opportunities they offer for Human Rights.
In our five presentations, we focused on the technologies used at borders, the Human Rights impacted by these uses, the provisions of the GDPR and Directive 2016/680 LED, the significant decision by the Hellenic Data Protection Authority (HDPA) regarding the “KENTAYROS” and “YPERION” systems, as well as technical tools that human rights defenders can use in their work!
We extend our heartfelt thanks to the participants for their dynamic presence, to the team of outstanding trainers Nikola Kovačević, Djordje Alempijevic, and Arancha Garcia del Soto for their knowledge and expertise, and to the organizing team, Lola Girard and Veronica Grazzi, for their impeccable organization and contributions. It is a great honor for us to participate!
You can learn more here.
We Publish Our Third Study on the AI Act, Focusing on Article 5 & Prohibited Practices
Today, Homo Digitalis publishes its third study on Regulation 2024/1689, the now widely known AI Act, titled “Artificial Intelligence Act: Analysis of Provisions on Prohibited Practices in Article 5 of Regulation 2024/1689.”
The authors of this study are Sophia Antonopoulou, Lamprini Gyftokosta, Tania Skrapaliori, Eleftherios Chelioudakis, and Stavroula Chousou.
The aim of this Homo Digitalis analysis is to systematically approach each provision of Article 5 of the AI Regulation, covering manipulative or deceptive techniques, exploitation of vulnerabilities, assessment of social behavior, the use of facial recognition databases, prediction of criminal offenses, emotion detection, biometric categorization systems, and remote biometric identification.
With our study, we provide targeted questions highlighting the critical aspects of individual provisions, identifying the so-called “gray areas”—points that present ambiguities, overlaps, or potential interpretative challenges. We substantiate our concerns with specific examples and pose precise questions to be addressed by the upcoming guidelines of the European Commission’s AI Office and the national legislator.
As with our first two studies (published in October and November 2024, respectively), our third study also aims to support the Ministry of Digital Governance in its mission to transpose the AI Act into Greek legislation. Additionally, through our detailed analyses and arguments, we aim to contribute to the maturation of public discourse and empower more Civil Society organizations to actively participate in it.
You can read our study, “Artificial Intelligence Act: Analysis of Provisions on Prohibited Practices in Article 5 of Regulation 2024/1689,” (available in EL) here.
Homo Digitalis participated in the AI Office’s consultation on Prohibited Practices under the AI Act
In November 2024, the European Commission’s Artificial Intelligence (AI) Office launched a consultation on AI Act prohibitions and AI system definition.
The guidelines under development will help national competent authorities as well as providers and deployers in complying with the AI Act’s rules on such AI practices ahead of the application of the relevant provisions on 2 February 2025.
Homo Digitalis participated in this public consultation process by submitting our input, in an attempt to highlight challenges and provide further clarity on practical aspects and use cases.
Our input to the public consultation was drafted by our Director on Human Rights & Artificial Intelligence, Lamprini Gyftokosta, and our members Sophia Antonopoulou and Stavrina Chousou.
You can read our input here.
Stay tuned: our dedicated report on the AI Act and its provisions on Prohibited Practices will be published soon!
We publish our Second AI Act Study on market surveillance authorities and the AI governance ecosystem
Today Homo Digitalis publishes its second study on Regulation 2024/1689, the now well-known AI Act, entitled “AI Act: Analysis of Provisions for AI Governance and Competent Market Surveillance Authorities”.
The writing team for the study consists of Homo Digitalis’ Director of Human Rights and AI, Lamprini Gyftokosta, and our member Niki Georgakopoulou.
The purpose of this Homo Digitalis analysis is to highlight some of the critical issues raised by the implementation of the AI governance system provisions, taking into account national structures as well as the civil society perspective.
More specifically, in this analysis we answer the following questions:
- What governance structure does the Regulation propose for AI?
- What does the concept of ‘market surveillance authority’ mean for the AI Regulation?
- What does Regulation 2019/1020 provide, and why should its provisions be considered together with the AI Act?
- Which Greek authorities meet the requirements set out in the two Regulations and why?
- What governance models have been adopted or are under discussion in other jurisdictions at this time?
- What are our main concerns?
- What are our main suggestions for improvement?
We recall that on 12 November, the Ministry of Digital Governance took the first official step in implementing the AI Act by publishing the list of national authorities responsible for the protection of fundamental rights. These authorities include: the Data Protection Authority, the Ombudsman, the Communications Privacy Authority, and the National Human Rights Commission.
In this regard, as early as 25 October, with our first Study, “Analysis and proposals for the incorporation of the provisions on fundamental rights impact assessment in Greece”, we had already presented detailed proposals on this issue. If you did not have time to read our Study, we invite you to see the one-page summary we prepared specifically for the National Fundamental Rights Authorities.
The Ministry’s publication was only the first step. The next critical obligation is the institutional design of the market surveillance authorities, which must be completed by August 2, 2025, in accordance with Article 113 of the Regulation.
The second Study that we publish today is intended precisely to assist the Ministry of Digital Governance, which faces the difficult task of putting this governance ecosystem together in Greece, and, through our detailed analyses and arguments, to help mature the public debate and enable more civil society organisations to participate actively in it.
We publish our first detailed study of the AI Act and its provisions on fundamental rights impact assessments (FRIAs)
Homo Digitalis publishes today its first Study on the European Regulation on Artificial Intelligence entitled “Artificial Intelligence Act: Analysis and proposals for the incorporation of the provisions on fundamental rights impact assessment in Greece”.
The authors of this first Study are our member Sophia Antonopoulou and Homo Digitalis’ Director of Human Rights & Artificial Intelligence, Lamprini Gyftokosta.
The Study is the first in a series of analyses that we will publish in the near future on important provisions of the AI Regulation, aiming both to inform decision makers in Greece and assist in the Act’s successful implementation, and to frame the public debate on AI in Greece with specific arguments and proposals.
The focus of the first Homo Digitalis Study is to highlight, from the perspective of civil society, some critical issues raised by the implementation of the provisions on fundamental rights impact assessments (FRIAs). It also aims to contribute constructively to the public debate by proposing concrete solutions for an effective impact assessment process for high-risk AI systems.
In summary, the main conclusions of the Study include the following concerns:
- The exclusion of AI systems used exclusively by private entities from the obligation to carry out a FRIA.
- The complete lack of sanctions for those who violate the FRIA provisions.
- The ambiguities and interpretative gaps regarding how a FRIA is carried out, the updating of data and the repetition of a FRIA, the risk assessment and proposed measures, and the notification of the market surveillance authority and the exemptions from that notification; and
- The lack of transparency in the use of AI systems and the preparation of FRIAs in the areas of law enforcement, migration and asylum management, and border control.
It also summarises the proposals for improvement in five key points that are crucial for the effective protection of fundamental rights against violations stemming from AI systems:
- It is proposed to exercise the discretion provided under Article 99(2) and to introduce sanctions for non-compliance with the FRIA provisions. It is further proposed that the relevant sanctions should be on the same scale as those for non-compliance with the prohibition of AI practices under Article 99(3) of the Regulation.
- It is proposed to establish detailed governance arrangements with clear procedures for handling complaints and appeals and to ensure stakeholder participation in the Greek law that will incorporate the Regulation.
- It is proposed to amend Law 4780/2021, which governs the functioning of the National Human Rights Commission, so that it can assume the role under Article 77 of the Regulation under certain conditions.
- In addition to the template for conducting a FRIA, it is necessary to develop guidelines, including an extensive analysis of Recital 96 and Articles 6(2), 27, 43, 46, 49 and 77 of the AI Regulation.
You can read the Homo Digitalis Study in detail here.
Homo Digitalis participates in the European Commission's Open Consultation on General-Purpose AI
Yesterday, 18/9, Homo Digitalis submitted its responses to the European Commission’s Open Consultation under the title “FUTURE-PROOF AI ACT: TRUSTWORTHY GENERAL-PURPOSE AI”. The consultation covered issues concerning the future implementation of the AI Act and how to make the use of General-Purpose AI models trustworthy.
Homo Digitalis’ position paper on the Consultation was prepared by our organisation’s AI & Human Rights Director, Lamprini Gyftokosta, and our member Tania Skrapaliori.
You can read our statement here.
Open letter: The dangers of age verification proposals to fundamental rights online
Today, Homo Digitalis joined EDRi and 62 other organisations and experts in urging the European Commission to halt proposals for using age verification tools when implementing the Digital Services Act and eIDAS.
Evidence and lived experiences show these tools are dangerous, discriminatory and unsafe:
- Exclusionary: Document-based verification excludes those without IDs, worsening the digital divide.
- Invasive: Their ‘accuracy’ relies on processing vast amounts of personal data, threatening our right to online anonymity.
- Pose privacy risks: Age estimation methods often use sensitive data like biometrics, which are prone to errors and bias.
- Discriminatory: Biometric-based approaches can be biased on the basis of gender, race, or disability.
Age Verification tools aren’t a silver bullet for addressing children’s needs online. Read more in the open letter here.
Letter from Homo Digitalis before the NCRTV on the obligations arising from Regulation 2024/1083
On September 3, Homo Digitalis submitted an open letter (no. 4844-3-9-24) to the National Council for Radio and Television (NCRTV), in which it raised two (2) questions regarding the application of the provisions of Article 25 of Regulation 2024/1083 on the establishment of a common framework for media services in the internal market.
In particular, as stipulated in Article 25, par. 2-3, the competent independent authorities in the Member States shall monitor and submit an annual report on the allocation of public advertising expenditure to media service providers and online platform providers. Those annual reports shall be made publicly available in an easily accessible format and shall be based on the information made public on an annual basis by electronic and user-friendly means by public authorities or entities that have incurred public advertising expenditure.
This information shall include at least:
(a) the official names of the media service providers or online platform providers from which services have been purchased,
(b) where applicable, the official names of the business groups to which any media service providers or online platform providers belong; and
(c) the total annual amount spent and the annual amounts spent per media service provider or online platform provider.
In view of the above, Homo Digitalis submitted the following two (2) questions before the NCRTV:
1) Has the NCRTV proceeded with the publication of the relevant annual report for the year 2023 regarding the allocation of state advertising expenditure to media service providers and online platform providers? If the NCRTV does not have the relevant competence, before which body should we appeal?
2) Does the NCRTV know whether public authorities or entities which have incurred expenditure on state advertising have complied with their obligation to publish annually by electronic and user-friendly means the information detailed in Article 25(2) of Regulation 2024/1083? In what ways could we as civil society monitor the respective activities of public authorities/entities in order to assist you in your work and inform you of any shortcomings in their compliance?
We hope to receive an answer to our questions soon.
Civil Society Common Statement: United Against Spyware Abuse in the EU and Beyond
Spyware isn’t just a privacy issue — it’s a threat to the very foundations of our democratic values. By undermining independent decision-making, restricting public debate, and silencing journalists and activists, spyware erodes the pillars of a healthy civic space.
As new European Union institutions prepare to take office following the EU elections, the growing threat of spyware has become a pressing global concern that demands immediate attention.
On Tuesday 3/9, Homo Digitalis joined the Center for Democracy and Technology Europe (CDT Europe) and 30 other civil society and journalists’ organisations in publishing a joint statement urging the incoming EU institutions to prioritise action against the misuse of spyware in the new legislative term.
Some of our coalition’s key recommendations include:
- A ban on the production, sale, and use of spyware that disproportionately harms fundamental rights.
- Stronger export controls to prevent the misuse of these technologies beyond the EU.
- Transparency and accountability in government contracts involving spyware.
As Silvia Lorenzo Perez, Director of CDT Europe’s Security, Surveillance & Human Rights Programme, puts it: "The incoming EU institutions have the opportunity to correct the failures of the last legislature by taking concrete and decisive action against the abuse of spyware surveillance."
The new EU institutions must seize this moment to restore public trust, protect our fundamental rights, and uphold the values that define the Union.
You can read the EN version of the letter here, and the EL version of the letter here.
Homo Digitalis has also addressed related concerns before the Council of Europe’s Commissioner for Human Rights in a recent Open Letter submitted in August. You can read more about this here.