Apply for the 2nd funding call of NGI TALER!

Make sure to apply to the second funding call of NGI TALER by June 1st, 2024, at 12:00 CEST (noon)!

Part of the budget of NGI TALER is reserved for open calls to fund free software and privacy-preserving efforts that are aligned with the topics and approach of NGI TALER! We invite your contributions to help reshape the state of play of digital payment systems, and to help create an open, trustworthy and reliable internet for all!

We are seeking project proposals between 5.000 and 50.000 euro. The call is open to SMEs, academics, public sector, nonprofits, communities and individuals. You can contribute exciting new capabilities to GNU Taler itself, build auxiliary tools, work on user experience, develop integrations into FOSS applications and open standards (for example, enabling P2P micropayments in an instant messenger, open social media platform or video conferencing tool), or develop improvements to infrastructure components (like merchant backends)!

For more information about this and other NGI funding calls, visit the website of NGI – The Next Generation Internet – here.

Check the NLnet Foundation’s website to read the detailed Open Call, the Guide for Applicants, the Eligibility Requirements and our FAQs, and submit your application form here.

Join our TALER Integration Community Hub (TALER ICH) to discuss and ask your questions here.


We submitted important questions to the Minister of Interior, Ms Kerameos, on the project "Development and operation of a tool for the strategic planning of public sector staffing in terms of artificial intelligence" and its pilot application in 9 institutions

On April 15, Homo Digitalis submitted an electronic letter to the Minister of Interior, Ms Kerameos, regarding the Ministry’s project entitled “Development and operation of a tool for strategic planning of public sector staffing in terms of artificial intelligence”.

Our letter was communicated to the President of the Personal Data Protection Authority, Mr. Menoudakos, and to the Data Protection Officer of the Ministry of Interior, Mr. Theocharis.

More specifically, this project relates to the development and operation of a tool for the strategic planning of human resources in the public sector in terms of artificial intelligence and concerns the following axes:

– Creation of an integrated framework for strategic staffing planning (optimal allocation of existing and new staff) in the public sector (including technical specifications for the implementation and revision of existing frameworks)

– Pilot implementation in 9 public sector entities, specifically MOD SA, AADE, OAED, the Athens General Hospital “G.”, the Municipality of Thessaloniki, the Region of Attica, the Ministry of Education and Religious Affairs, the Ministry of Environment and Energy, and the Ministry of Culture and Sports,

– Design of training programmes for (a) users and (b) the upgrading of civil servants’ skills, and

– Development of the knowledge repository of civil servants.

According to relevant information posted on the website of the Ministry of Interior and articles in various media, the Ministry of Interior is the project manager and has already contracted with Deloitte for its preparation. In fact, according to the timetable, the work has made significant progress.

In its letter, Homo Digitalis requests information from the Minister on a number of questions regarding both the legal framework for the protection of personal data (Law 4624/2019 – GDPR), and the legal framework for the use of artificial intelligence systems by the public sector (Law 4961/2022), as the pilot implementation of the project is expected to take place immediately in the 9 institutions mentioned above.

Specifically, we put the following questions to the Minister in our letter:

– Has the Ministry of Interior carried out a Data Protection Impact Assessment before the project was announced, in accordance with the principles of data protection “by design” and “by default”?
– Has a relevant Data Protection Impact Assessment been carried out specifically in relation to the pilot implementation of the platform in the 9 public bodies?
– If the relevant Assessments have been prepared, has the Ministry considered it necessary to consult the Data Protection Authority in this respect?
– Does the Ministry consider the 9 public bodies to be joint controllers and, if so, has it fulfilled the relevant obligations set out in Article 26 GDPR?
– Can the Ministry inform us of the relevant categories of personal data, the purposes of the processing for which such data are intended, and the legal basis for the processing it intends to use?
– Can the Ministry point us to the exact website where the Ministry of Interior’s contract with Deloitte is posted, so that we can study the relevant provisions contained therein, especially with regard to the processing of personal data?
– Finally, has the Ministry of Interior complied with the obligations arising from the provisions of Law 4961/2022, and in particular: has an algorithmic impact assessment been carried out (Article 5), have the necessary transparency measures been taken (Article 6), has the project contractor fulfilled their obligations in this respect (Article 7), and has the Ministry kept a register (Article 8) in view of the forthcoming pilot use of the system?


The Looming Disinformation Crisis: How AI is Weaponizing Misinformation in the Age of Elections

By Anastasios Arampatzis

Misinformation is as old as politics itself. From forged pamphlets to biased newspapers, those seeking power have always manipulated information. Today, a technological revolution threatens to take disinformation to unprecedented levels. Generative AI tools, capable of producing deceptive text, images, and videos, give those who seek to mislead an unprecedented arsenal. In 2024, as a record number of nations hold elections, including the EU Parliamentary elections in June, the very foundations of our democracies tremble as deepfakes and tailored propaganda threaten to drown out truth.

Misinformation in the Digital Age

In the era of endless scrolling and instant updates, misinformation spreads like wildfire on social media. It’s not just about intentionally fabricated lies; it’s the half-truths, rumors, and misleading content that gain momentum, shaping our perceptions and sometimes leading to real-world consequences.

Think of misinformation as a distorted funhouse mirror: it is false or misleading information presented as fact, regardless of whether there is an intent to deceive. It can be a catchy meme with a dubious source, a misquoted scientific finding, or a cleverly edited video that feeds a specific narrative. Unlike disinformation, which is a deliberate spread of falsehoods, misinformation can creep into our news feeds even when shared with good intentions.

How Algorithms Amplify the Problem

Social media platforms are driven by algorithms designed to keep us engaged. They prioritize content that triggers strong emotions – outrage, fear, or clickbait-worthy sensationalism. Unfortunately, the truth is often less exciting than emotionally charged misinformation. These algorithms don’t discriminate based on accuracy; they fuel virality. With every thoughtless share or angry comment, we further amplify misleading content.

The Psychology of Persuasion

It’s easy to blame technology, but the truth is we humans are wired in ways that make us susceptible to misinformation. Here’s why:

  • Confirmation Bias: We tend to favor information that confirms what we already believe, even if it’s flimsy. If something aligns with our worldview, we’re less likely to question its validity.
  • Lack of Critical Thinking: In a fast-paced digital world, many of us lack the time or skills to fact-check every claim we encounter. Pausing to assess the credibility of a source or the logic of an argument is not always our default setting.

How Generative AI Changes the Game

Generative AI models learn from massive datasets, enabling them to produce content indistinguishable from human-created work. Here’s how this technology complicates the misinformation landscape:

  • Deepfakes: AI-generated videos can convincingly place people in situations they never were or make them say things they never did. This makes it easier to manufacture compromising or inflammatory “evidence” to manipulate public opinion.
  • Synthetic Text: AI tools can churn out large amounts of misleading text, like fake news articles or social media posts designed to sound authentic. This can overwhelm fact-checking efforts.
  • Cheap and Easy Misinformation: The barrier to creating convincing misinformation keeps getting lower. Bad actors don’t need sophisticated technical skills; simple AI tools can amplify their efforts.

The Dangers of Misinformation

The impact of misinformation goes well beyond hurt feelings. It can:

  • Pollute Public Discourse: Misinformation hinders informed debate. It leads to misunderstandings about important issues and makes finding consensus difficult.
  • Erode Trust: When we can’t agree on basic facts, trust in institutions, science, and even the democratic process breaks down.
  • Targeted Manipulation: AI tools can allow for highly personalized misinformation campaigns that prey on specific vulnerabilities or biases of individuals and groups.
  • Influence Decisions: Misinformation can influence personal decisions, including voting for less qualified candidates or promoting radical agendas.

What Can Be Done?

There is no single, easy answer for combating the spread of misinformation. Disinformation thrives in a complicated web of human psychology, technological loopholes, and political agendas. However, recognizing these challenges is the first step toward building effective solutions.  Here are some crucial areas to focus on:

  • Boosting Tech Literacy: In a digital world, the ability to distinguish reliable sources from questionable ones is paramount. Educational campaigns, workshops, and accessible online resources should aim to teach the public how to spot red flags for fake news: sensational headlines, unverified sources, poorly constructed websites, or emotionally charged language.
  • Investing in Fact-Checking: Supporting independent fact-checking organizations is key. These act as vital watchdogs, scrutinizing news, politicians’ claims, and viral content.  Media outlets should consider prominently labeling content that has been verified or clearly marking potentially misleading information.
  • Balancing Responsibility & Freedom: Social media companies and search engines bear significant responsibility for curbing the flow of misinformation. The EU’s Digital Services Act (DSA) underscores this responsibility, placing requirements on platforms to tackle harmful content. However, this is a delicate area, as heavy-handed censorship can undermine free speech. Strategies such as demoting unreliable sources, partnering with fact-checkers, and providing context about suspicious content can help, but finding the right balance is an ongoing struggle, even in the context of evolving regulations like the DSA.
  • The Importance of Personal Accountability: Even with institutional changes, individuals play a vital role. It’s essential to be skeptical, ask questions about where information originates, and be mindful of the emotional reactions a piece of content stirs up. Before sharing anything, verify it with a reliable source. Pausing and thinking critically can break the cycle of disinformation.

The fight against misinformation is a marathon, not a sprint. As technology evolves, so too must our strategies. We must remain vigilant to protect free speech while safeguarding the truth.


We published our Report for the five-year period 2018-2023

In 2018, we started with 6 founding members, 25 volunteers and 1,000 euros in the organization’s account. Nobody knew us, but we knew what we wanted to achieve, what gap we were trying to fill, where we wanted to go. Today, we celebrate our 6 years of operation and publish our 5-year report about everything we have achieved in the period 2018-2023. The detailed Report contains the beginning of our story, information about the mission, vision and values of Homo Digitalis, and a thorough review of all our major successes by pillar of action, namely a) Awareness, b) Advocacy, and c) Legal Actions and Interventions. Finally, in order to enhance transparency about our financial accounts, we have also included all of our Financial Reports for the entire five-year period!

You can read Homo Digitalis’ “Five Year Report 2018-2023” in Greek here or in English here. The report was curated by our Director on Human Rights and AI, Lambrini Gyftokosta.

Looking back we are happy, proud and excited, because by the summer of 2023 we:

  • gained over 130 volunteers,
  • steadily increased our revenue by 353% every year,
  • filed over 20 complaints with Greek and European authorities,
  • saw Clearview AI fined €20 million following our complaint (the largest fine in Greece),
  • visited more than 30 schools and raised awareness through our actions among more than 3500 students and citizens,
  • gave more than 40 media interviews in Greece and Europe,
  • supported more than 50 joint actions with other Greek and European organisations in the field of digital rights,
  • acquired more than 10.000 followers on social media (LinkedIn, Facebook, Instagram, X),
  • published more than 150 articles of scientific, technical and legal interest on our website with the contribution of our volunteers,
  • became the first and only organisation from Greece to be a member of EDRi, the European Digital Rights network; and
  • although we started as a purely voluntary organisation, managed to hire our first employee!

On this journey we were not alone. One of our greatest successes is our collaboration with a large network of universities, organisations, institutions, research centres and all our member volunteers who helped us take our actions one step further!

Looking ahead we are optimistic. We are moving forward dynamically, conquering small and big goals that will bring us even closer to the world we dream of and want to build together!


The Hellenic Data Protection Authority fines the Ministry of Migration and Asylum for the "Centaurus" and "Hyperion" systems with the largest penalty ever imposed on a Greek public body

Two years ago, in February 2022, Homo Digitalis had filed a complaint against the Ministry of Migration and Asylum for the “Centaurus” and “Hyperion” systems deployed in the reception and accommodation facilities for asylum seekers, in cooperation with the civil society organizations Hellenic League for Human Rights and HIAS Greece, as well as the academic Niovi Vavoula.

Today, the Hellenic Data Protection Authority identified significant GDPR violations by the Ministry of Migration and Asylum in this case and decided to impose a fine of €175.000 – the highest ever imposed on a public body in the country.

The decision’s detailed GDPR analysis highlights the significant shortcomings of the Ministry of Migration and Asylum in preparing a comprehensive and coherent Data Protection Impact Assessment, and demonstrates the significant GDPR violations that have been identified, which concern a large number of data subjects who face real difficulty in exercising their rights.

Despite remaining understaffed, operating on a reduced budget and even facing the risk of eviction from its premises, the DPA manages to fulfil its mission and maintain citizens’ trust in the independent authorities. It remains to be seen how long the DPA can last if the state does not stand by its side.

Of course, nothing ends here. A high fine does not in itself mean anything. The Ministry of Migration and Asylum must comply with its obligations within 3 months. However, the decision gives us the strength to continue our actions in the field of border protection in order to protect the rights of vulnerable social groups who are targeted by highly intrusive technologies.

You can read our press release here.

You can read Decision 13/2024 on the Authority’s website here.


We participated in DFF’s Annual Strategy Meeting (ASM24)

Two weeks ago, Homo Digitalis’ President, Elpida Vamvaka, was in Berlin at Digital Freedom Fund’s Annual Strategy Meeting (ASM24). We are grateful for the chance to engage in enriching dialogue with such inspiring fellow digital rights defenders working to propel human rights forward!

The meeting’s goals were to share meaningful exchanges and updates on digital rights topics, to explore new opportunities to organise and collaborate at the intersection of racial, social, economic and environmental justice, to centre care, to safeguard well-being, and to build resilience.

The meeting featured peer-driven highlights from DFF’s network, discussions mapping the 2024 landscape and beyond on digital rights issues, knowledge and skill sharing sessions, and a powerful panel on war crimes & digital rights. Stay tuned for the video coming soon!

Topics ranged from queer & trans*, labour, disability, environmental, welfare, prisoners’, children’s and migrants’ rights, to spyware, surveillance, digital policing, platform accountability, movement lawyering, organising for digital justice, and many more.

We would like to extend a heartfelt thank you to the organizers for inviting us, as well as to all individual participants and represented organisations for making this year’s Annual Strategy Meeting a success.


We participated in the Alan Turing Institute’s workshop on the responsible governance of the use of AI in recruitment and employment

On the 14th of March, our Director on AI and Human Rights, Lamprini Gyftokosta, participated in an online meeting organised by the Alan Turing Institute, “Towards responsible governance of the use of AI in recruitment and employment”. Stakeholders from civil society, government, academia, and industry shared their views on best practices for the use of artificial intelligence (AI) in recruitment and employment, including the development of standards in this field.

Findings from this workshop will help refine the direction and scope of an AI Standards Hub research project led by researchers from The Alan Turing Institute, which will aim to investigate the role of consensus-based standards in governing the use of AI in recruitment and employment across jurisdictional borders.

In Greece, the pilot “AI-based strategic workforce planning tool for the public sector”, as announced by the Minister of Digital Governance, is an initiative that will apply to more than 700.000 people when completed. As Homo Digitalis underlined during the workshop, harmonised standards in areas like recruitment and employment, even if voluntary, are necessary to create a culture of compliance with the new AI rules. The role of the Greek supervisory authority in enforcing the standards and the law will be paramount, especially since, according to the Greek law implementing the GDPR, employees cannot authorise Homo Digitalis to submit a complaint on their behalf without disclosing their names, putting them in an impossible position.


From Clean Monday to Cyber Cleanliness: Bridging Traditions with Modern Cyber Hygiene Practices

By Anastasios Arampatzis and Ioannis Vassilakis

In the heart of Greek tradition lies Clean Monday, which marks the beginning of Lent leading to Easter and symbolizes a fresh start, encouraging cleanliness, renewal, and preparation for the season ahead. This day, celebrated with kite flying, outdoor activities, and cleansing the soul, carries profound significance in purifying one’s life in all aspects.

Just as Clean Monday invites us to declutter our homes and minds, there exists a parallel in the digital realm that often goes overlooked: cyber hygiene. Maintaining a clean and secure online presence is imperative in an era where our lives are intertwined with the digital world more than ever.

Understanding Cyber Hygiene

Cyber hygiene refers to the practices and steps that individuals take to maintain system health and improve online security. These practices are akin to personal hygiene routines; just as regular handwashing can prevent the spread of illness, everyday cyber hygiene practices can protect against cyber threats such as malware, phishing, and identity theft.

The importance of cyber hygiene cannot be overstated. In today’s interconnected world, a single vulnerability can lead to a cascade of negative consequences, affecting not just the individual but also organizations and even national security. The consequences of neglecting cyber hygiene can be severe:

  • Data breaches.
  • Identity theft.
  • Loss of privacy.

As we celebrate Clean Monday and its cleansing rituals, we should also adopt cyber hygiene practices to prepare for a secure and private digital future free from cyber threats.

Clean Desk and Desktop Policies – The Foundation of Cyber Cleanliness

Just as Clean Monday encourages us to purge our homes of unnecessary clutter, a clean desk and desktop policy is the cornerstone of maintaining a secure and efficient workspace, both physically and digitally. These policies are not just about keeping a tidy desk; they’re about safeguarding sensitive information from prying eyes and ensuring that critical data isn’t lost amidst digital clutter.

  • Clean Desk Policy ensures that sensitive documents, notes, and removable storage devices are secured when not in use or when an employee leaves their desk. It’s about minimizing the risk of sensitive information falling into the wrong hands, intentionally or accidentally.
  • Clean Desktop Policy focuses on the digital landscape, advocating for a well-organized computer desktop. This means regularly archiving or deleting unused files, managing icons, and ensuring that sensitive information is not exposed through screen savers or unattended open documents.

The benefits of these policies are profound:

  • Reduced risk of information theft.
  • Increased efficiency and enhanced productivity.
  • Enhanced professional image and competence.

The following simple tips can help you maintain cleanliness:

  1. Implement a Routine: Just as the rituals of Clean Monday are ingrained in our culture, incorporate regular clean-up routines for physical and digital workspaces.
  2. Secure Sensitive Information: Use locked cabinets for physical documents and password-protected folders for digital files (a short example of encrypting a sensitive file follows this list).
  3. Adopt Minimalism: Keep only what you need on your desk and desktop. Archive or delete old files and dispose of unnecessary paperwork.
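
As a small illustration of tip 2, here is a minimal sketch of one way to protect a sensitive digital file: encrypting it before it is archived. It assumes the third-party Python “cryptography” package is installed, and the file names and key handling are hypothetical placeholders rather than a recommended setup.

    # A minimal sketch, assuming the third-party "cryptography" package is installed
    # (pip install cryptography). File names are hypothetical examples.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # store this key safely, e.g. in a password manager
    cipher = Fernet(key)

    # Encrypt the document before filing it away...
    with open("staff_review.pdf", "rb") as f:
        ciphertext = cipher.encrypt(f.read())
    with open("staff_review.pdf.enc", "wb") as f:
        f.write(ciphertext)

    # ...and decrypt it again only when it is actually needed.
    with open("staff_review.pdf.enc", "rb") as f:
        plaintext = cipher.decrypt(f.read())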

Navigating the Digital Landscape: Ad Blockers and Cookie Banners

Using ad blockers and understanding cookie banners are essential for maintaining a clean and secure online browsing experience. As we carefully select what to keep in our homes, we must also choose what to allow into our digital spaces.

  • Ad Blockers prevent advertisements from being displayed on websites. While ads can be a source of information and revenue for site owners, they can also be intrusive, slow down web browsing, and sometimes serve as a vector for malware.
  • Cookie Banners inform users about a website’s use of cookies. Understanding and managing these consents can significantly enhance your online privacy and security.

To achieve a cleaner browsing experience:

  • Choose reputable ad-blocking software that balances effectiveness with respect for websites’ revenue models. Some ad blockers allow non-intrusive ads to support websites while blocking harmful content.
  • Take the time to read and understand what you consent to when you agree to a website’s cookie policy. Opt for settings that minimize tracking and personal data collection where possible.
  • Regularly review and clean up your browser’s permissions and stored cookies to ensure your online environment remains clutter-free and secure (a short example follows this list).
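
To make the last point more concrete, the following sketch lists which sites have left the most cookies in a Firefox profile, so you can decide what to clear. It reads the cookies.sqlite database with Python’s standard sqlite3 module; the profile path is a hypothetical placeholder, and the browser should be closed (or the file copied) before running it.

    import sqlite3
    from pathlib import Path

    # Hypothetical profile path - replace with your own Firefox profile directory.
    db = Path.home() / ".mozilla/firefox/xxxxxxxx.default-release/cookies.sqlite"
    if not db.exists():
        raise SystemExit(f"No cookie database found at {db}")

    # Show the 20 hosts with the most stored cookies.
    con = sqlite3.connect(db)
    for host, count in con.execute(
        "SELECT host, COUNT(*) FROM moz_cookies GROUP BY host ORDER BY COUNT(*) DESC LIMIT 20"
    ):
        print(f"{count:4d}  {host}")
    con.close()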

Cultivating Caution in Digital Interactions

In the same way that Clean Monday prompts us to approach our physical and spiritual activities with mindfulness and care, we must also navigate our digital interactions with caution and deliberateness. While brimming with information and connectivity, the digital world also harbors risks such as phishing scams, malware, and data breaches.

  • Verify Before You Click: Ensure the authenticity of websites before entering sensitive information, and be skeptical of emails or messages from unknown sources.
  • Use BCC in Emails When Appropriate: Sending emails, especially to multiple recipients, should be handled carefully to protect everyone’s privacy. Using Blind Carbon Copy (BCC) ensures that recipients’ email addresses are not exposed to everyone on the list (see the short example after this list).
  • Recognize and Avoid Phishing Attempts: Phishing emails are the digital equivalent of wolves in sheep’s clothing, often masquerading as legitimate requests. Learning to recognize these attempts can protect you from giving away sensitive information to the wrong hands.
  • Embrace skepticism in your online interactions: Ask yourself whether information shared is necessary, whether links are safe to click, and whether personal data needs to be disclosed.
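
To make the BCC point above concrete, here is a minimal sketch of sending a message whose recipient list never appears in the visible headers. It uses Python’s standard smtplib and email modules; the addresses, mail server and credentials are hypothetical placeholders.

    import smtplib
    from email.message import EmailMessage

    # Hypothetical recipients, sender, server and credentials - placeholders only.
    bcc_recipients = ["alice@example.org", "bob@example.org", "carol@example.org"]

    msg = EmailMessage()
    msg["From"] = "news@example.org"
    msg["To"] = "undisclosed-recipients:;"   # this is all that other recipients will see
    msg["Subject"] = "Monthly update"
    msg.set_content("Hello,\n\nHere is our monthly update.")

    # The real addresses are passed only as envelope recipients, so the message
    # is delivered to everyone without exposing the list in a To: or Cc: header.
    with smtplib.SMTP("mail.example.org", 587) as server:
        server.starttls()
        server.login("news@example.org", "app-specific-password")
        server.send_message(msg, to_addrs=bcc_recipients)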

Implementing a Personal Cyber Cleanliness Routine

Drawing inspiration from the rituals of Clean Monday, establishing a personal routine for cyber cleanliness is both beneficial and essential for maintaining digital well-being. The following steps can help you move toward a cleaner digital life.

  • Enable Multi-Factor Authentication (MFA) wherever possible to keep unauthorized users out of personal accounts.
  • Periodically review privacy settings on social media and other online platforms to ensure you only share what you intend to.
  • Unsubscribe from unused services, delete old emails and remove unnecessary files to reduce the cognitive load and make it easier to focus on what’s important.
  • Just as Clean Monday marks a time for physical and spiritual cleansing, set specific times throughout the year for digital clean-ups.
  • Keep abreast of the latest in cybersecurity to ensure your practices are up-to-date. Knowledge is power, particularly when it comes to protecting yourself online.
  • Share your knowledge and habits with friends, family, and colleagues. Just as traditions like Clean Monday are passed down, so too can habits of cyber cleanliness.

Embracing a Future of Digital Cleanliness and Renewal

The principles of Clean Monday can also be applied to our digital lives. Maintaining a healthy, secure digital environment is a continuous commitment and requires regular maintenance. We take proactive steps toward securing our personal and professional data by implementing clean desk and desktop policies, navigating the digital landscape with caution, and cultivating a routine of personal cyber cleanliness. Let us embrace this opportunity for a digital clean-up and create a safer digital world for all.


Spyware: A New Threat to Privacy in Communication

*By Sofia Despoina Feizidou

The Athens Polytechnic uprising in November 1973 was the most massive anti-dictatorial protest and a precursor to the collapse of the military dictatorship imposed on the Greek people on April 21, 1967. Among other things, this regime had abolished fundamental rights.

One of the most critical fundamental rights is the right to the protection of correspondence, especially the confidentiality of communication. The right of an individual to share and exchange thoughts, ideas, feelings, news, and opinions within an intimate and confidential framework, with chosen individuals, without fear of private communication being monitored or any expression being revealed to third parties or used against them, is essential to democracy. Therefore, it is a fundamental individual right enshrined in international and European legislation, as well as in national Constitutions. The provision of Article 19 of the Greek Constitution dates back to 1975 (which may not be a coincidence).

However, the revelations one year ago of the surveillance of politicians and their relatives, actors, journalists, businessmen, and others show that the protection of communication privacy remains vulnerable, especially in the modern digital age.

Spyware: A New Asset in the Arsenal of Intelligence Services and Companies

Spyware is a type of malware designed to secretly monitor a person's activities on their electronic devices, such as computers or mobile phones, without the end user's knowledge or consent. Spyware is typically installed on a device when the user opens an email or a file attachment. Once installed, it is difficult to detect, and even if detected, proving responsibility for the intrusion is challenging. Spyware provides full and retroactive access to the user’s device, monitoring internet activity and gathering sensitive information and personal data, including files, messages, passwords, or credit card numbers. Additionally, it can capture screenshots or monitor audio and video by activating the device's microphone or camera.

Some of the most well-known spyware designed to invade and monitor mobile devices remotely include:

  1. Predator: The user receives a message containing a link that appears legitimate, with a catchy description designed to mislead them into clicking it. Once the link is clicked, the spyware is automatically installed, granting full access to the device, its messages and files, as well as its camera and microphone.
  2. Pegasus: Similar to Predator, Pegasus aims to convince the user to click on a link, which then installs the spyware on the device. However, Pegasus can also be installed without any action from the user, for example through a missed call on WhatsApp. Immediately after installation, it executes its operator's commands and gathers a significant amount of personal data, including files, passwords, text messages, call records, or the user’s location, leaving no trace of its existence on the device.

In June 2023, the Chairman of the European Parliament’s Committee of Inquiry investigating the use of Pegasus and similar surveillance spyware stated: "Spyware can be an effective tool in fighting crime, but when used wrongly by governments, it poses a significant risk to the rule of law and fundamental rights." Indeed, the technological capabilities of spyware provide unauthorized access to personal data and the monitoring of people's activities, leading to violations of the right to communication confidentiality, the right to the protection of personal data, and the right to privacy in general.

According to the Committee's findings, the abuse of surveillance spyware is widespread in the European Union. In addition to Greece, the use of such software has been found in Poland, Hungary, Spain, and Cyprus, which is deeply concerning. The need to establish a regulatory framework to prevent such abuse is now in the spotlight, not only at the national level but primarily at the EU level.

What Do We Need?

  1. Clear Rules to Prevent Abuse: European rules should clearly define how law enforcement authorities can use spyware. The use of spyware by law enforcement should only be authorized in exceptional cases, for a predefined purpose, and for a limited period of time. A common legal definition of the concept of 'national security reasons' should be established. The obligation to notify targeted individuals and non-targeted individuals whose data were accessed during someone else’s surveillance, as well as procedures for supervision and independent control following any incident of illegal use of such software, should also be enshrined.
  2. Compliance of National Legislation with European Court of Human Rights Case Law: The Court grants national authorities wide discretion in weighing the right to privacy against national security interests. However, it has developed and interpreted the criteria introduced by the European Convention on Human Rights, which must be met for a restriction on the right to confidential, free communication to be considered legitimate. This has been established in numerous judgments since 1978.
  3. Establishment of the "European Union Technology Laboratory": This independent research institute would be responsible for investigating surveillance methods and providing technological support, such as device screening and forensic research.
  4. Foreign Policy Dimension: Members of the European Parliament (MEPs) have called for a thorough review of spyware export licenses and more effective enforcement of the EU’s export control rules. The European Union should also cooperate with the United States in developing a common strategy on spyware, as well as with non-EU countries to ensure that aid provided is not used for the purchase and use of spyware.

Conclusion

In conclusion, as we reflect upon the lessons of history and the enduring struggle for democracy and fundamental rights, Benjamin Franklin's timeless wisdom resonates with profound significance: "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." The recent revelations of spyware abuse have starkly illustrated the delicate balance between security and individual freedoms. While spyware may be wielded as a tool in the fight against crime, its potential for misuse poses a grave threat to the rule of law and the very principles upon which our democratic societies are built.

*Sofia-Despina Feizidou is a lawyer and a graduate of the Athens Law School, holding a Master's degree with a specialization in "Law & Information and Communication Technologies" from the Department of Digital Systems of the University of Piraeus. Her thesis was a comparative review of the case law of the European Courts (ECtHR and CJEU) on mass surveillance.