We submitted important questions to the Minister of Interior, Ms Kerameos, on the project "Development and operation of a tool for the strategic planning of public sector staffing in terms of artificial intelligence" and its pilot application in 9 institutions

On April 15, Homo Digitalis submitted an electronic letter to the Minister of Interior, Ms Kerameos, regarding the Ministry’s project entitled “Development and operation of a tool for strategic planning of public sector staffing in terms of artificial intelligence”.

Our letter was communicated to the President of the Personal Data Protection Authority, Mr. Menoudakos, and to the Data Protection Officer of the Ministry of Interior, Mr. Theocharis.

More specifically, the project concerns the development and operation of a tool for the strategic planning of human resources in the public sector using artificial intelligence, and covers the following axes:

– Creation of an integrated framework for strategic staffing planning (optimal allocation of existing and new staff) in the public sector (including technical specifications for the implementation and revision of existing frameworks)

– Pilot implementation in 9 Public Sector Entities, and more specifically in MOD SA, AADE, OAED, the Athens General Hospital “G.”, the Municipality of Thessaloniki, the Region of Attica, the Ministry of Education and Religious Affairs, the Ministry of Environment and Energy and the Ministry of Culture and Sports,

– Design of training programmes for (a) users and (b) upgrading the skills of civil servants, and

– Development of the knowledge repository of civil servants.

According to relevant information posted on the website of the Ministry of Interior and articles in various media, the Ministry of Interior is the project manager and has already contracted with Deloitte for its preparation. In fact, according to the timetable, the work has made significant progress.

In its letter, Homo Digitalis requests information from the Minister on a number of questions regarding both the legal framework for the protection of personal data (Law 4624/2019 – GDPR) and the legal framework for the use of artificial intelligence systems by the public sector (Law 4961/2022), as the pilot implementation of the project is expected to begin imminently in the 9 institutions mentioned above.

Specifically, we put the following questions to the Minister in our letter:

- Has the Ministry of Interior carried out a Data Protection Impact Assessment before the project was announced, in accordance with the principles of data protection by design and by default?
- Has a relevant Data Protection Impact Assessment been carried out specifically in relation to the pilot implementation of the platform in the 9 public bodies?
- If the relevant Assessments have been prepared, has the Ministry considered it necessary to consult the Data Protection Authority in this respect?

- Does the Ministry consider the 9 public bodies to be joint controllers and, if so, has it fulfilled the relevant obligations set out in Article 26 GDPR?
- Can the Ministry inform us of the relevant categories of personal data, the purposes of the processing for which such data are intended, and the legal basis for the processing it intends to rely on?
- Can the Ministry point us to the exact website where the Ministry of Interior’s contract with Deloitte is posted, so that we can study the relevant provisions contained therein, especially with regard to the processing of personal data?
- Finally, has the Ministry of Interior complied with the obligations arising from the provisions of Law 4961/2022? In particular, has an algorithmic impact assessment been carried out (Article 5), has it taken the necessary transparency measures (Article 6), has the project contractor fulfilled its obligations in this respect (Article 7), and has the Ministry kept a register (Article 8) in view of the forthcoming pilot use of the system?


The Looming Disinformation Crisis: How AI is Weaponizing Misinformation in the Age of Elections

By Anastasios Arampatzis

Misinformation is as old as politics itself. From forged pamphlets to biased newspapers, those seeking power have always manipulated information. Today, a technological revolution threatens to take disinformation to unprecedented levels. Generative AI tools, capable of producing deceptive text, images, and videos, give those who seek to mislead a formidable arsenal. In 2024, as a record number of nations hold elections, including the EU Parliamentary elections in June, the very foundations of our democracies tremble as deepfakes and tailored propaganda threaten to drown out truth.

Misinformation in the Digital Age

In the era of endless scrolling and instant updates, misinformation spreads like wildfire on social media. It’s not just about intentionally fabricated lies; it’s the half-truths, rumors, and misleading content that gain momentum, shaping our perceptions and sometimes leading to real-world consequences.

Think of misinformation as a distorted funhouse mirror: false or misleading information presented as fact, regardless of whether there is an intent to deceive. It can be a catchy meme with a dubious source, a misquoted scientific finding, or a cleverly edited video that feeds a specific narrative. Unlike disinformation, which is the deliberate spread of falsehoods, misinformation can creep into our news feeds even when shared with good intentions.

How the Algorithms Push the Problem

Social media platforms are driven by algorithms designed to keep us engaged. They prioritize content that triggers strong emotions – outrage, fear, or clickbait-worthy sensationalism. Unfortunately, the truth is often less exciting than emotionally charged misinformation. These algorithms don’t discriminate based on accuracy; they fuel virality. With every thoughtless share or angry comment, we further amplify misleading content.

The Psychology of Persuasion

It’s easy to blame technology, but the truth is we humans are wired in ways that make us susceptible to misinformation. Here’s why:

  • Confirmation Bias: We tend to favor information that confirms what we already believe, even if it’s flimsy. If something aligns with our worldview, we’re less likely to question its validity.
  • Lack of Critical Thinking: In a fast-paced digital world, many of us lack the time or skills to fact-check every claim we encounter. Pausing to assess the credibility of a source or the logic of an argument is not always our default setting.

How Generative AI Changes the Game

Generative AI models learn from massive datasets, enabling them to produce content indistinguishable from human-created work. Here’s how this technology complicates the misinformation landscape:

  • Deepfakes: AI-generated videos can convincingly place people in situations they were never in, or make them say things they never did. This makes it easier to manufacture compromising or inflammatory “evidence” to manipulate public opinion.
  • Synthetic Text: AI tools can churn out large amounts of misleading text, like fake news articles or social media posts designed to sound authentic. This can overwhelm fact-checking efforts.
  • Cheap and Easy Misinformation: The barrier to creating convincing misinformation keeps getting lower. Bad actors don’t need sophisticated technical skills; simple AI tools can amplify their efforts.

The Dangers of Misinformation

The impact of misinformation goes well beyond hurt feelings. It can:

  • Pollute Public Discourse: Misinformation hinders informed debate. It leads to misunderstandings about important issues and makes finding consensus difficult.
  • Erode Trust: When we can’t agree on basic facts, trust in institutions, science, and even the democratic process breaks down.
  • Targeted Manipulation: AI tools can allow for highly personalized misinformation campaigns that prey on specific vulnerabilities or biases of individuals and groups.
  • Influence Decisions: Misinformation can influence personal decisions, including voting for less qualified candidates or promoting radical agendas.

What Can Be Done?

There is no single, easy answer for combating the spread of misinformation. Disinformation thrives in a complicated web of human psychology, technological loopholes, and political agendas. However, recognizing these challenges is the first step toward building effective solutions.  Here are some crucial areas to focus on:

  • Boosting Tech Literacy: In a digital world, the ability to distinguish reliable sources from questionable ones is paramount. Educational campaigns, workshops, and accessible online resources should aim to teach the public how to spot red flags for fake news: sensational headlines, unverified sources, poorly constructed websites, or emotionally charged language.
  • Investing in Fact-Checking: Supporting independent fact-checking organizations is key. These act as vital watchdogs, scrutinizing news, politicians’ claims, and viral content.  Media outlets should consider prominently labeling content that has been verified or clearly marking potentially misleading information.
  • Balancing Responsibility & Freedom: Social media companies and search engines bear significant responsibility for curbing the flow of misinformation. The EU’s Digital Services Act (DSA) underscores this responsibility, placing requirements on platforms to tackle harmful content. However, this is a delicate area, as heavy-handed censorship can undermine free speech. Strategies such as demoting unreliable sources, partnering with fact-checkers, and providing context about suspicious content can help, but finding the right balance is an ongoing struggle, even in the context of evolving regulations like the DSA.
  • The Importance of Personal Accountability: Even with institutional changes, individuals play a vital role. It’s essential to be skeptical, ask questions about where information originates, and be mindful of the emotional reactions a piece of content stirs up. Before sharing anything, verify it with a reliable source. Pausing and thinking critically can break the cycle of disinformation.

The fight against misinformation is a marathon, not a sprint. As technology evolves, so too must our strategies. We must remain vigilant to protect free speech while safeguarding the truth.


The Hellenic Data Protection Authority fines the Ministry of Migration and Asylum for the "Centaurus" and "Hyperion" systems with the largest penalty ever imposed on a Greek public body

Two years ago, in February 2022, Homo Digitalis, in cooperation with the civil society organizations Hellenic League for Human Rights and HIAS Greece, as well as the academic Niovi Vavoula, filed a complaint against the Ministry of Migration and Asylum concerning the “Centaurus” and “Hyperion” systems deployed in the reception and accommodation facilities for asylum seekers.

Today, the Hellenic Data Protection Authority identified significant GDPR violations by the Ministry of Migration and Asylum in this case and decided to impose a fine of €175,000 – the highest ever imposed on a public body in the country.

The Authority’s detailed analysis highlights the significant shortcomings of the Ministry of Migration and Asylum in preparing a comprehensive and coherent Data Protection Impact Assessment, and demonstrates serious GDPR violations affecting a large number of data subjects who face real difficulty in exercising their rights.

Despite being understaffed, operating on a reduced budget, and even facing the risk of eviction from its premises, the DPA manages to fulfil its mission and maintain citizens’ trust in the Independent Authorities. It remains to be seen how long the DPA can keep this up if the state does not stand by its side.

Of course, nothing ends here. A high fine means little on its own. The Ministry of Migration and Asylum must comply with its obligations within 3 months. However, the decision gives us the strength to continue our actions in the field of border protection, in order to protect the rights of vulnerable social groups who are targeted by highly intrusive technologies.

You can read our press release here.

You can read Decision 13/2024 on the Authority’s website here.


From Clean Monday to Cyber Cleanliness: Bridging Traditions with Modern Cyber Hygiene Practices

By Anastasios Arampatzis and Ioannis Vassilakis

In the heart of Greek tradition lies Clean Monday, which marks the beginning of Lent leading to Easter and symbolizes a fresh start, encouraging cleanliness, renewal, and preparation for the season ahead. This day, celebrated with kite flying, outdoor activities, and cleansing the soul, carries profound significance in purifying one’s life in all aspects.

Just as Clean Monday invites us to declutter our homes and minds, there exists a parallel in the digital realm that often goes overlooked: cyber hygiene. Maintaining a clean and secure online presence is imperative in an era where our lives are intertwined with the digital world more than ever.

Understanding Cyber Hygiene

Cyber hygiene refers to the practices and steps that individuals take to maintain system health and improve online security. These practices are akin to personal hygiene routines; just as regular handwashing can prevent the spread of illness, everyday cyber hygiene practices can protect against cyber threats such as malware, phishing, and identity theft.

The importance of cyber hygiene cannot be overstated. In today’s interconnected world, a single vulnerability can lead to a cascade of negative consequences, affecting not just the individual but also organizations and even national security. The consequences of neglecting cyber hygiene can be severe:

  • Data breaches.
  • Identity theft.
  • Loss of privacy.

As we celebrate Clean Monday and its cleansing rituals, we should also adopt cyber hygiene practices to prepare for a secure and private digital future free from cyber threats.

Clean Desk and Desktop Policies – The Foundation of Cyber Cleanliness

Just as Clean Monday encourages us to purge our homes of unnecessary clutter, a clean desk and desktop policy is the cornerstone of maintaining a secure and efficient workspace, both physically and digitally. These policies are not just about keeping a tidy desk; they’re about safeguarding sensitive information from prying eyes and ensuring that critical data isn’t lost amidst digital clutter.

  • Clean Desk Policy ensures that sensitive documents, notes, and removable storage devices are secured when not in use or when an employee leaves their desk. It’s about minimizing the risk of sensitive information falling into the wrong hands, intentionally or accidentally.
  • Clean Desktop Policy focuses on the digital landscape, advocating for a well-organized computer desktop. This means regularly archiving or deleting unused files, managing icons, and ensuring that sensitive information is not exposed through screen savers or unattended open documents.

The benefits of these policies are profound:

  • Reduced risk of information theft.
  • Increased efficiency and enhanced productivity.
  • Enhanced professional image and competence.

The following simple tips can help you maintain cleanliness:

  1. Implement a Routine: Just as the rituals of Clean Monday are ingrained in our culture, incorporate regular clean-up routines for physical and digital workspaces.
  2. Secure Sensitive Information: Use locked cabinets for physical documents and password-protected folders for digital files.
  3. Adopt Minimalism: Keep only what you need on your desk and desktop. Archive or delete old files and dispose of unnecessary paperwork.

Navigating the Digital Landscape: Ad Blockers and Cookie Banners

Using ad blockers and understanding cookie banners are essential for maintaining a clean and secure online browsing experience. As we carefully select what to keep in our homes, we must also choose what to allow into our digital spaces.

  • Ad Blockers prevent advertisements from being displayed on websites. While ads can be a source of information and revenue for site owners, they can also be intrusive, slow down web browsing, and sometimes serve as a vector for malware.
  • Cookie Banners inform users about a website’s use of cookies. Understanding and managing these consents can significantly enhance your online privacy and security.

To achieve a cleaner browsing experience:

  • Choose reputable ad-blocking software that balances effectiveness with respect for websites’ revenue models. Some ad blockers allow non-intrusive ads to support websites while blocking harmful content.
  • Take the time to read and understand what you consent to when you agree to a website’s cookie policy. Opt for settings that minimize tracking and personal data collection where possible.
  • Regularly review and clean up your browser’s permissions and stored cookies to ensure your online environment remains clutter-free and secure.

Cultivating Caution in Digital Interactions

In the same way that Clean Monday prompts us to approach our physical and spiritual activities with mindfulness and care, we must also navigate our digital interactions with caution and deliberateness. While brimming with information and connectivity, the digital world also harbors risks such as phishing scams, malware, and data breaches.

  • Verify Before You Click: Ensure the authenticity of websites before entering sensitive information, and be skeptical of emails or messages from unknown sources.
  • Use BCC in Emails When Appropriate: Sending emails, especially to multiple recipients, should be handled carefully to protect everyone’s privacy. Using Blind Carbon Copy (BCC) ensures that recipients’ email addresses are not exposed to everyone on the list.
  • Recognize and Avoid Phishing Attempts: Phishing emails are the digital equivalent of wolves in sheep’s clothing, often masquerading as legitimate requests. Learning to recognize these attempts can keep your sensitive information from falling into the wrong hands.
  • Embrace skepticism in your online interactions: Ask yourself whether information shared is necessary, whether links are safe to click, and whether personal data needs to be disclosed.

Implementing a Personal Cyber Cleanliness Routine

Drawing inspiration from the rituals of Clean Monday, establishing a personal routine for cyber cleanliness is both beneficial and essential for maintaining digital well-being. The following steps can help you build a cleaner digital life.

  • Enable Multi-Factor Authentication (MFA) wherever possible to keep unauthorized users out of your personal accounts.
  • Periodically review privacy settings on social media and other online platforms to ensure you only share what you intend to.
  • Unsubscribe from unused services, delete old emails, and remove unnecessary files to reduce cognitive load and make it easier to focus on what’s important.
  • Just as Clean Monday marks a time for physical and spiritual cleansing, set specific times throughout the year for digital clean-ups.
  • Keep abreast of the latest in cybersecurity to ensure your practices are up-to-date. Knowledge is power, particularly when it comes to protecting yourself online.
  • Share your knowledge and habits with friends, family, and colleagues. Just as traditions like Clean Monday are passed down, so too can habits of cyber cleanliness.

Embracing a Future of Digital Cleanliness and Renewal

The principles of Clean Monday can also be applied to our digital lives. Maintaining a healthy, secure digital environment is a continuous commitment and requires regular maintenance. We take proactive steps toward securing our personal and professional data by implementing clean desk and desktop policies, navigating the digital landscape with caution, and cultivating a routine of personal cyber cleanliness. Let us embrace this opportunity for a digital clean-up and create a safer digital world for all.