Digital Services Act: Striking a Balance Between Online Safety and Free Expression

By Anastasios Arampatzis

The European Union’s Digital Services Act (DSA) stands as a landmark effort to bring greater order to the often chaotic realm of the internet. This sweeping legislation aims to establish clear rules and responsibilities for online platforms, addressing a range of concerns from consumer protection to combating harmful content. Yet, within the DSA’s well-intentioned provisions lies a fundamental tension that continues to challenge democracies worldwide: how do we ensure a safer, more civil online environment without infringing on the essential liberties of free expression?

This blog delves into the complexities surrounding the DSA’s provisions on chat and content control. We’ll explore how the fight against online harms, including the spread of misinformation and deepfakes, must be carefully weighed against the dangers of censorship and the chilling of legitimate speech. It’s a balancing act with far-reaching consequences for the future of our digital society.

Online Harms and the DSA’s Response

The digital realm, for all its promise of connection and knowledge, has become a breeding ground for a wide range of online harms.  Misinformation and disinformation campaigns erode trust and sow division, while hate speech fuels discrimination and violence against marginalized groups. Cyberbullying shatters lives, particularly those of vulnerable young people. The DSA acknowledges these dangers and seeks to address them head-on.

The DSA places new obligations on online platforms, particularly Very Large Online Platforms (VLOPs) with a significant reach.  These requirements include:

  • Increased transparency: Platforms must explain how their algorithms work and the criteria they use for recommending and moderating content.
  • Accountability: Companies will face potential fines and sanctions for failing to properly tackle illegal and harmful content.
  • Content moderation: Platforms must outline clear policies for content removal and implement effective, user-friendly systems for reporting problematic content.

The goal of these DSA provisions is to create a more responsible digital ecosystem where harmful content is less likely to flourish and where users have greater tools to protect themselves.

The Censorship Concern

While the DSA’s intentions are admirable, its measures to combat online harms raise legitimate concerns about censorship and the potential suppression of free speech. History is riddled with instances where the fight against harmful content has served as a pretext to silence dissenting voices, critique those in power, or suppress marginalized groups.

Civil society organizations have stressed the need for the DSA to include clear safeguards to prevent its well-meaning provisions from becoming tools of censorship. It’s essential to define “illegal” or “harmful” content precisely – limiting these categories to content that directly incites violence or breaks existing laws. Overly broad definitions risk encompassing satire, political dissent, and artistic expression, which are all protected forms of speech.

Suppressing these forms of speech under the guise of safety can have a chilling effect, discouraging creativity, innovation, and the open exchange of ideas vital to a healthy democracy. It’s important to remember that what offends one person might be deeply important to another. The DSA must tread carefully to avoid empowering governments or platforms to unilaterally decide what constitutes acceptable discourse.

Deepfakes and the Fight Against Misinformation

Deepfakes, synthetic media manipulated to misrepresent reality, pose a particularly insidious threat to the integrity of information. Their ability to make it appear as if someone said or did something they never did has the potential to ruin reputations, undermine trust in institutions, and even destabilize political processes.

The DSA rightfully recognizes the danger of deepfakes and places an obligation on platforms to make efforts to combat their harmful use. However, this is a complex area where the line between harmful manipulation and legitimate uses can become blurred. Deepfake technology can also be harnessed for satire, parody, or artistic purposes.

The challenge for the DSA lies in identifying deepfakes created with malicious intent while protecting those generated for legitimate forms of expression. Platforms will likely need to develop a combination of technological detection tools and human review mechanisms to make these distinctions effectively.
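Such a hybrid approach can be pictured as a triage function: an automated detector produces a confidence score, and only the ambiguous middle band is escalated to human reviewers, who can judge context such as satire or parody. The sketch below is a hypothetical illustration, not any platform’s actual system; the detector, the score bands, and the thresholds are all assumptions.

```python
# Hypothetical moderation triage: route media by a deepfake-detector score.
# The thresholds and labels are assumptions for illustration only.

def triage(detector_score: float,
           auto_threshold: float = 0.95,
           review_threshold: float = 0.6) -> str:
    """Route a piece of media based on a deepfake-detector confidence score."""
    if detector_score >= auto_threshold:
        return "flag for removal"        # near-certain manipulation
    if detector_score >= review_threshold:
        return "queue for human review"  # ambiguous: context (satire?) matters
    return "allow"                       # likely benign

print(triage(0.98))   # flag for removal
print(triage(0.75))   # queue for human review
print(triage(0.10))   # allow
```

The design point is the middle band: automated tools handle the clear-cut extremes at scale, while the genuinely contested cases – exactly where free-expression questions arise – are reserved for human judgment.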

The Responsibility of Tech Giants

When it comes to the spread of harmful content and the potential for online censorship, a large portion of the responsibility falls squarely on the shoulders of major online platforms. These tech giants play a central role in shaping the content we see and how we interact online.

The DSA directly addresses this immense power by imposing stricter requirements on the largest platforms, those deemed Very Large Online Platforms. These requirements are designed to promote greater accountability and push these platforms to take a more active role in curbing harmful content.

A key element of the DSA is the push for transparency. Platforms will be required to provide detailed explanations of their content moderation practices, including the algorithms used to filter and recommend content. This increased visibility aims to prevent arbitrary or biased decision-making and offers users greater insight into the mechanisms governing their online experiences.

Protecting Free Speech – Where Do We Draw the Line?

The protection of free speech is a bedrock principle of any democratic society. It allows for the robust exchange of ideas, challenges to authority, and provides a voice for those on the margins. Yet, as the digital world has evolved, the boundaries of free speech have become increasingly contested.

The DSA represents an honest attempt to navigate this complex terrain, but it’s vital to recognize that there are no easy answers. The line between harmful content and protected forms of expression is often difficult to discern. The DSA’s implementation must include strong safeguards informed by fundamental human rights principles to ensure space for diverse opinions and critique.

In this effort, we should prioritize empowering users. Investing in media literacy education and promoting tools for critical thinking are essential in helping individuals become more discerning consumers of online information.


The Digital Services Act signals an important turning point in regulating the online world. The struggle to balance online safety and freedom of expression is far from over. The DSA provides a strong foundation but needs to be seen as a step in an ongoing process, not a final solution. To ensure a truly open, democratic, and safe internet, we need continuing vigilance, robust debate, and the active participation of both individuals and civil society.

The Looming Disinformation Crisis: How AI is Weaponizing Misinformation in the Age of Elections

By Anastasios Arampatzis

Misinformation is as old as politics itself. From forged pamphlets to biased newspapers, those seeking power have always manipulated information. Today, a technological revolution threatens to take disinformation to unprecedented levels. Generative AI tools, capable of producing deceptive text, images, and videos, give those who seek to mislead an unprecedented arsenal. In 2024, as a record number of nations hold elections, including the EU Parliamentary elections in June, the very foundations of our democracies tremble as deepfakes and tailored propaganda threaten to drown out truth.

Misinformation in the Digital Age

In the era of endless scrolling and instant updates, misinformation spreads like wildfire on social media. It’s not just about intentionally fabricated lies; it’s the half-truths, rumors, and misleading content that gain momentum, shaping our perceptions and sometimes leading to real-world consequences.

Think of misinformation as a distorted funhouse mirror: false or misleading information presented as fact, regardless of whether there’s an intent to deceive. It can be a catchy meme with a dubious source, a misquoted scientific finding, or a cleverly edited video that feeds a specific narrative. Unlike disinformation, which is the deliberate spread of falsehoods, misinformation can creep into our news feeds even when shared with good intentions.

How the Algorithms Push the Problem

Social media platforms are driven by algorithms designed to keep us engaged. They prioritize content that triggers strong emotions – outrage, fear, or clickbait-worthy sensationalism. Unfortunately, the truth is often less exciting than emotionally charged misinformation. These algorithms don’t discriminate based on accuracy; they fuel virality. With every thoughtless share or angry comment, we further amplify misleading content.

The Psychology of Persuasion

It’s easy to blame technology, but the truth is we humans are wired in ways that make us susceptible to misinformation. Here’s why:

  • Confirmation Bias: We tend to favor information that confirms what we already believe, even if it’s flimsy. If something aligns with our worldview, we’re less likely to question its validity.
  • Lack of Critical Thinking: In a fast-paced digital world, many of us lack the time or skills to fact-check every claim we encounter. Pausing to assess the credibility of a source or the logic of an argument is not always our default setting.

How Generative AI Changes the Game

Generative AI models learn from massive datasets, enabling them to produce content indistinguishable from human-created work. Here’s how this technology complicates the misinformation landscape:

  • Deepfakes: AI-generated videos can convincingly place people in situations they never were or make them say things they never did. This makes it easier to manufacture compromising or inflammatory “evidence” to manipulate public opinion.
  • Synthetic Text: AI tools can churn out large amounts of misleading text, like fake news articles or social media posts designed to sound authentic. This can overwhelm fact-checking efforts.
  • Cheap and Easy Misinformation: The barrier to creating convincing misinformation keeps getting lower. Bad actors don’t need sophisticated technical skills; simple AI tools can amplify their efforts.

The Dangers of Misinformation

The impact of misinformation goes well beyond hurt feelings. It can:

  • Pollute Public Discourse: Misinformation hinders informed debate. It leads to misunderstandings about important issues and makes finding consensus difficult.
  • Erode Trust: When we can’t agree on basic facts, trust in institutions, science, and even the democratic process breaks down.
  • Targeted Manipulation: AI tools can allow for highly personalized misinformation campaigns that prey on specific vulnerabilities or biases of individuals and groups.
  • Influence Decisions: Misinformation can influence personal decisions, including voting for less qualified candidates or promoting radical agendas.

What Can Be Done?

There is no single, easy answer for combating the spread of misinformation. Disinformation thrives in a complicated web of human psychology, technological loopholes, and political agendas. However, recognizing these challenges is the first step toward building effective solutions.  Here are some crucial areas to focus on:

  • Boosting Tech Literacy: In a digital world, the ability to distinguish reliable sources from questionable ones is paramount. Educational campaigns, workshops, and accessible online resources should aim to teach the public how to spot red flags for fake news: sensational headlines, unverified sources, poorly constructed websites, or emotionally charged language.
  • Investing in Fact-Checking: Supporting independent fact-checking organizations is key. These act as vital watchdogs, scrutinizing news, politicians’ claims, and viral content.  Media outlets should consider prominently labeling content that has been verified or clearly marking potentially misleading information.
  • Balancing Responsibility & Freedom: Social media companies and search engines bear significant responsibility for curbing the flow of misinformation. The EU’s Digital Services Act (DSA) underscores this responsibility, placing requirements on platforms to tackle harmful content. However, this is a delicate area, as heavy-handed censorship can undermine free speech. Strategies such as demoting unreliable sources, partnering with fact-checkers, and providing context about suspicious content can help, but finding the right balance is an ongoing struggle, even in the context of evolving regulations like the DSA.
  • The Importance of Personal Accountability: Even with institutional changes, individuals play a vital role. It’s essential to be skeptical, ask questions about where information originates, and be mindful of the emotional reactions a piece of content stirs up. Before sharing anything, verify it with a reliable source. Pausing and thinking critically can break the cycle of disinformation.
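As a toy illustration of the “red flags” idea above – sensational headlines and emotionally charged language – a few lines of code can score a headline heuristically. The word lists and weights here are invented for demonstration and are no substitute for real media literacy or professional fact-checking.

```python
# Toy red-flag scorer for headlines. The vocabulary sets and weights are
# illustrative assumptions, not a vetted misinformation classifier.

SENSATIONAL = {"shocking", "unbelievable", "secret", "miracle", "exposed"}
EMOTIONAL = {"outrage", "fury", "terrifying", "disaster", "scandal"}

def red_flag_score(headline: str) -> int:
    """Return a rough score; higher means more red flags."""
    words = {w.strip(".,!?").lower() for w in headline.split()}
    score = 0
    score += 2 * len(words & SENSATIONAL)  # sensational vocabulary
    score += 2 * len(words & EMOTIONAL)    # emotionally charged vocabulary
    if headline.isupper():                 # ALL-CAPS headlines
        score += 3
    score += headline.count("!")           # exclamation marks
    return score

print(red_flag_score("SHOCKING secret EXPOSED!!"))           # 8
print(red_flag_score("Parliament passes budget amendment"))  # 0
```

A high score is a prompt to pause and verify, not proof of falsehood – which mirrors the point above: tools can flag, but the critical-thinking step remains human.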

The fight against misinformation is a marathon, not a sprint. As technology evolves, so too must our strategies. We must remain vigilant to protect free speech while safeguarding the truth.

From Clean Monday to Cyber Cleanliness: Bridging Traditions with Modern Cyber Hygiene Practices

By Anastasios Arampatzis and Ioannis Vassilakis

In the heart of Greek tradition lies Clean Monday, which marks the beginning of Lent leading to Easter and symbolizes a fresh start, encouraging cleanliness, renewal, and preparation for the season ahead. This day, celebrated with kite flying, outdoor activities, and cleansing the soul, carries profound significance in purifying one’s life in all aspects.

Just as Clean Monday invites us to declutter our homes and minds, there exists a parallel in the digital realm that often goes overlooked: cyber hygiene. Maintaining a clean and secure online presence is imperative in an era where our lives are intertwined with the digital world more than ever.

Understanding Cyber Hygiene

Cyber hygiene refers to the practices and steps that individuals take to maintain system health and improve online security. These practices are akin to personal hygiene routines; just as regular handwashing can prevent the spread of illness, everyday cyber hygiene practices can protect against cyber threats such as malware, phishing, and identity theft.

The importance of cyber hygiene cannot be overstated. In today’s interconnected world, a single vulnerability can lead to a cascade of negative consequences, affecting not just the individual but also organizations and even national security. The consequences of neglecting cyber hygiene can be severe:

  • Data breaches.
  • Identity theft.
  • Loss of privacy.

As we celebrate Clean Monday and its cleansing rituals, we should also adopt cyber hygiene practices to prepare for a secure and private digital future free from cyber threats.

Clean Desk and Desktop Policies – The Foundation of Cyber Cleanliness

Just as Clean Monday encourages us to purge our homes of unnecessary clutter, a clean desk and desktop policy is the cornerstone of maintaining a secure and efficient workspace, both physically and digitally. These policies are not just about keeping a tidy desk; they’re about safeguarding sensitive information from prying eyes and ensuring that critical data isn’t lost amidst digital clutter.

  • Clean Desk Policy ensures that sensitive documents, notes, and removable storage devices are secured when not in use or when an employee leaves their desk. It’s about minimizing the risk of sensitive information falling into the wrong hands, intentionally or accidentally.
  • Clean Desktop Policy focuses on the digital landscape, advocating for a well-organized computer desktop. This means regularly archiving or deleting unused files, managing icons, and ensuring that sensitive information is not exposed through screen savers or unattended open documents.

The benefits of these policies are profound:

  • Reduced risk of information theft.
  • Increased efficiency and enhanced productivity.
  • Enhanced professional image and competence.

The following simple tips can help you maintain cleanliness:

  1. Implement a Routine: Just as the rituals of Clean Monday are ingrained in our culture, incorporate regular clean-up routines for physical and digital workspaces.
  2. Secure Sensitive Information: Use locked cabinets for physical documents and password-protected folders for digital files.
  3. Adopt Minimalism: Keep only what you need on your desk and desktop. Archive or delete old files and dispose of unnecessary paperwork.
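The “archive or delete old files” habit can even be scripted. The sketch below moves files untouched for a given number of days into an archive folder; the folder names and the 180-day threshold are assumptions – adapt them to your own setup and test on a copy first.

```python
# Minimal desktop clean-up sketch: move stale files into an archive folder.
# The 180-day threshold is an assumption; adjust to your own routine.

import shutil
import time
from pathlib import Path

def archive_old_files(source: Path, archive: Path,
                      max_age_days: int = 180) -> list[Path]:
    """Move files in `source` older than `max_age_days` into `archive`."""
    archive.mkdir(parents=True, exist_ok=True)
    cutoff = time.time() - max_age_days * 86_400
    moved = []
    for item in source.iterdir():
        if item.is_file() and item.stat().st_mtime < cutoff:
            shutil.move(str(item), str(archive / item.name))
            moved.append(item)
    return moved

# Example (hypothetical paths):
# archive_old_files(Path.home() / "Desktop", Path.home() / "Archive")
```

Run on a schedule (say, quarterly, in the spirit of Clean Monday), this keeps the desktop to only what you actually need.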

Navigating the Digital Landscape: Ad Blockers and Cookie Banners

Using ad blockers and understanding cookie banners are essential for maintaining a clean and secure online browsing experience. As we carefully select what to keep in our homes, we must also choose what to allow into our digital spaces.

  • Ad Blockers prevent advertisements from being displayed on websites. While ads can be a source of information and revenue for site owners, they can also be intrusive, slow down web browsing, and sometimes serve as a vector for malware.
  • Cookie Banners inform users about a website’s use of cookies. Understanding and managing these consents can significantly enhance your online privacy and security.

To achieve a cleaner browsing experience:

  • Choose reputable ad-blocking software that balances effectiveness with respect for websites’ revenue models. Some ad blockers allow non-intrusive ads to support websites while blocking harmful content.
  • Take the time to read and understand what you consent to when you agree to a website’s cookie policy. Opt for settings that minimize tracking and personal data collection where possible.
  • Regularly review and clean up your browser’s permissions and stored cookies to ensure your online environment remains clutter-free and secure.


Cultivating Caution in Digital Interactions

In the same way that Clean Monday prompts us to approach our physical and spiritual activities with mindfulness and care, we must also navigate our digital interactions with caution and deliberateness. While brimming with information and connectivity, the digital world also harbors risks such as phishing scams, malware, and data breaches.

  • Verify Before You Click: Ensure the authenticity of websites before entering sensitive information, and be skeptical of emails or messages from unknown sources.
  • Use BCC in Emails When Appropriate: Sending emails, especially to multiple recipients, should be handled carefully to protect everyone’s privacy. Using Blind Carbon Copy (BCC) ensures that recipients’ email addresses are not exposed to everyone on the list.
  • Recognize and Avoid Phishing Attempts: Phishing emails are the digital equivalent of wolves in sheep’s clothing, often masquerading as legitimate requests. Learning to recognize these attempts can protect you from giving away sensitive information to the wrong hands.
  • Embrace skepticism in your online interactions: Ask yourself whether information shared is necessary, whether links are safe to click, and whether personal data needs to be disclosed.
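Part of the “verify before you click” habit can be automated. The sketch below flags a few crude phishing signals in a URL; the heuristics are illustrative assumptions and catch only obvious cases – a clean result does not mean a link is safe.

```python
# Illustrative URL checker for a few crude phishing signals.
# These heuristics are demonstration-only; a clean result is NOT a guarantee.

import ipaddress
from urllib.parse import urlparse

def suspicious_signals(url: str) -> list[str]:
    """Return human-readable warnings for the given URL."""
    warnings = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        warnings.append("not served over HTTPS")
    if host.startswith("xn--") or ".xn--" in host:
        warnings.append("punycode host (possible lookalike domain)")
    try:
        ipaddress.ip_address(host)           # raw IP instead of a domain
        warnings.append("IP address instead of a domain name")
        return warnings
    except ValueError:
        pass
    if host.count(".") >= 3:                 # e.g. brand.com.evil.example.net
        warnings.append("deeply nested subdomains (brand may be buried)")
    return warnings

print(suspicious_signals("http://192.168.0.10/login"))
```

Any warning is a cue to reach the site through a trusted channel – a bookmark or a typed address – rather than the link in front of you.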

Implementing a Personal Cyber Cleanliness Routine

Drawing inspiration from the rituals of Clean Monday, establishing a personal routine for cyber cleanliness is both beneficial and essential for maintaining digital well-being. The following steps can help you build a cleaner digital life.

  • Enable Multi-Factor Authentication (MFA) wherever possible to keep unauthorized users out of your personal accounts.
  • Periodically review privacy settings on social media and other online platforms to ensure you only share what you intend to.
  • Unsubscribe from unused services, delete old emails, and remove unnecessary files to reduce cognitive load and make it easier to focus on what’s important.
  • Just as Clean Monday marks a time for physical and spiritual cleansing, set specific times throughout the year for digital clean-ups.
  • Keep abreast of the latest in cybersecurity to ensure your practices are up-to-date. Knowledge is power, particularly when it comes to protecting yourself online.
  • Share your knowledge and habits with friends, family, and colleagues. Just as traditions like Clean Monday are passed down, so too can habits of cyber cleanliness.

Embracing a Future of Digital Cleanliness and Renewal

The principles of Clean Monday can also be applied to our digital lives. Maintaining a healthy, secure digital environment is a continuous commitment and requires regular maintenance. We take proactive steps toward securing our personal and professional data by implementing clean desk and desktop policies, navigating the digital landscape with caution, and cultivating a routine of personal cyber cleanliness. Let us embrace this opportunity for a digital clean-up and create a safer digital world for all.

Spyware: A New Threat to Privacy in Communication

*By Sofia Despoina Feizidou

The Athens Polytechnic uprising in November 1973 was the largest anti-dictatorship protest and a precursor to the collapse of the military regime imposed on the Greek people since April 21, 1967. Among other things, this regime had abolished fundamental rights.

One of the most critical fundamental rights is the right to the protection of correspondence, especially the confidentiality of communication. The right of an individual to share and exchange thoughts, ideas, feelings, news, and opinions within an intimate and confidential framework, with chosen individuals, without fear of private communication being monitored or any expression being revealed to third parties or used against them, is essential to democracy. Therefore, it is a fundamental individual right enshrined in international and European legislation, as well as in national Constitutions. The provision of Article 19 of the Greek Constitution dates back to 1975 (which may not be a coincidence).

However, the revelations one year ago that politicians and their relatives, actors, journalists, businessmen, and others had been under surveillance show that the protection of communication privacy remains vulnerable, especially in the modern digital age.

Spyware: A New Asset in the Arsenal of Intelligence Services and Companies

Spyware is a type of malware designed to secretly monitor a person's activities on their electronic devices, such as computers or mobile phones, without the end user's knowledge or consent. Spyware is typically installed on devices by opening an email or a file attachment. Once installed, it is difficult to detect, and even if detected, proving responsibility for the invasion is challenging. Spyware provides full and retroactive access to the user’s device, monitoring internet activity and gathering sensitive information and personal data, including files, messages, passwords, or credit card numbers. Additionally, it can capture screenshots or monitor audio and video by activating the device's microphone or camera.

Some of the most well-known spyware designed to invade and monitor mobile devices remotely include:

  1. Predator: This spyware is installed on the device when a user receives a message containing a link that appears normal and includes a catchy description to mislead the user into clicking on the link. Once clicked, the spyware is automatically installed, granting full access to the device, messages, files, as well as its camera and microphone.
  2. Pegasus: Similar to Predator, Pegasus aims to convince the user to click on a link, which then installs the spyware on the device. However, Pegasus can also be installed on a device without requiring any action from the user, such as a missed call on WhatsApp. Immediately after installation, it executes its operator's commands and gathers a significant amount of personal data, including files, passwords, text messages, call records, or the user’s location, leaving no trace of its existence on the device.

In June 2023, the Chairman of the European Parliament’s Committee of Inquiry investigating the use of Pegasus and similar surveillance spyware stated: "Spyware can be an effective tool in fighting crime, but when used wrongly by governments, it poses a significant risk to the rule of law and fundamental rights." Indeed, the technological capabilities of spyware provide unauthorized access to personal data and the monitoring of people's activities, leading to violations of the right to communication confidentiality, the right to the protection of personal data, and the right to privacy in general.

According to the Committee's findings, the abuse of surveillance spyware is widespread in the European Union. In addition to Greece, the use of such software has been found in Poland, Hungary, Spain, and Cyprus, which is deeply concerning. The need to establish a regulatory framework to prevent such abuse is now in the spotlight, not only at the national level but primarily at the EU level.

What Do We Need?

  1. Clear Rules to Prevent Abuse: European rules should clearly define how law enforcement authorities can use spyware. The use of spyware by law enforcement should only be authorized in exceptional cases, for a predefined purpose, and for a limited period of time. A common legal definition of the concept of 'national security reasons' should be established. The obligation to notify targeted individuals and non-targeted individuals whose data were accessed during someone else’s surveillance, as well as procedures for supervision and independent control following any incident of illegal use of such software, should also be enshrined.
  2. Compliance of National Legislation with European Court of Human Rights Case Law: The Court grants national authorities wide discretion in weighing the right to privacy against national security interests. However, it has developed and interpreted the criteria introduced by the European Convention on Human Rights, which must be met for a restriction on the right to confidential, free communication to be considered legitimate. This has been established in numerous judgments since 1978.
  3. Establishment of the "European Union Technology Laboratory": This independent research institute would be responsible for investigating surveillance methods and providing technological support, such as device screening and forensic research.
  4. Foreign Policy Dimension: Members of the European Parliament (MEPs) have called for a thorough review of spyware export licenses and more effective enforcement of the EU’s export control rules. The European Union should also cooperate with the United States in developing a common strategy on spyware, as well as with non-EU countries to ensure that aid provided is not used for the purchase and use of spyware.


In conclusion, as we reflect upon the lessons of history and the enduring struggle for democracy and fundamental rights, Benjamin Franklin's timeless wisdom resonates with profound significance: "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." The recent revelations of spyware abuse have starkly illustrated the delicate balance between security and individual freedoms. While spyware may be wielded as a tool in the fight against crime, its potential for misuse poses a grave threat to the rule of law and the very principles upon which our democratic societies are built.

*Sofia-Despina Feizidou is a lawyer and graduate of the Athens Law School, holding a Master’s degree with specialization in “Law & Information and Communication Technologies” from the Department of Digital Systems of the University of Piraeus. Her thesis was a comparative review of the case law of the European Courts (ECtHR and CJEU) on mass surveillance.

The Challenge of Complying with New EU Legislative Security Requirements

By Anastasios Arampatzis and Lefteris Chelioudakis

In recent years, the number of digital policy initiatives at the EU level has grown considerably. Many legislative proposals covering information and communication technologies (ICT) and affecting the rights and freedoms of people in the EU have already been adopted, while others remain under negotiation. Most of these legislative acts are of key importance and address a wide range of complex topics, such as artificial intelligence, data governance, privacy and freedom of expression online, law enforcement access to digital data, eHealth, and cybersecurity.

Civil society organizations often need help keeping track of these EU policy initiatives, while businesses face serious challenges in understanding the complex legal language of the legislative requirements. This article aims to raise awareness of two recently adopted pieces of EU cybersecurity legislation, namely the Digital Operational Resilience Act (DORA) and the revised version of the Network and Information Security Directive (NIS2).


NIS2 is the necessary response to the expanded threat landscape endangering critical European infrastructure.

The original version of the Directive introduced various obligations for the national supervision of operators of essential services (OES) and digital service providers (DSPs). For example, EU member states must supervise the cybersecurity level of operators in critical sectors such as energy, transport, water, healthcare, digital infrastructure, banking, and financial market infrastructures. In addition, member states must supervise providers of critical digital services, including online marketplaces, cloud computing services, and search engines.

To this end, EU member states had to establish competent national authorities charged with these supervisory tasks. Furthermore, NIS introduced channels for cross-border cooperation and information exchange between EU member states.

However, the digitization of services and the rising level of cyberattacks across the EU led the European Commission in 2020 to propose a revised version of NIS, namely NIS2. The new directive entered into force on January 16, 2023, and member states now have 21 months, until October 17, 2024, to transpose its measures into national law.

The new provisions expand the scope of NIS with the aim of strengthening the security requirements imposed on EU member states, streamlining security incident reporting obligations, and introducing stronger supervisory measures and stricter enforcement requirements, such as a harmonized sanctions regime across all EU member states.

NIS2 introduces the following elements:

  • Expanded applicability: NIS2 increases the number of sectors its provisions cover, including postal services, car manufacturers, social media platforms, waste management, chemical production, and agri-food products. The new rules classify entities into “essential entities” and “important entities” and apply to subcontractors and service providers operating in the covered sectors.
  • Increased preparedness for global cyber threats: NIS2 seeks to strengthen collective situational awareness among essential entities so that relevant threats are identified and communicated before they spread across member states. For example, the EU-CyCLONe network will help coordinate and manage large-scale incidents, while a voluntary peer-learning mechanism will be created to raise awareness.
  • Streamlined resilience standards with stricter penalties: Unlike NIS, NIS2 provides for steep penalties and strong security measures. For example, breaches of the law by essential entities will be subject to administrative fines of a maximum of at least €10 million or 2% of their total worldwide annual turnover, while important entities will face fines of a maximum of at least €7 million or 1.4% of their total worldwide annual turnover.
  • Streamlined reporting procedures: NIS2 streamlines reporting obligations to avoid over-reporting and placing an excessive burden on covered entities.
  • Expanded territorial scope: Under the new rules, certain categories of entities that are not established in the European Union but offer services within it will be required to designate a representative in the EU.
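The fine ceilings in the penalties bullet above reduce to a simple "whichever is higher" rule: a fixed amount or a percentage of worldwide annual turnover. A minimal sketch of that calculation (illustrative only, not legal advice; the turnover figures are made up):

```python
def nis2_max_fine(annual_turnover_eur: float, essential: bool) -> float:
    """Illustrative NIS2 administrative fine ceiling.

    Essential entities: at least EUR 10M or 2% of total worldwide
    annual turnover, whichever is higher. Important entities: at
    least EUR 7M or 1.4%, whichever is higher.
    """
    if essential:
        return max(10_000_000, 0.02 * annual_turnover_eur)
    return max(7_000_000, 0.014 * annual_turnover_eur)

# Example: an essential entity with EUR 2 billion worldwide turnover
print(nis2_max_fine(2_000_000_000, essential=True))  # 40000000.0
```

For smaller entities the fixed floor dominates: an important entity with €100 million turnover faces the €7 million ceiling, since 1.4% of its turnover is only €1.4 million.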


The Digital Operational Resilience Act (DORA) addresses a fundamental problem in the EU's financial ecosystem: how the sector can remain resilient during severe operational disruptions. Before DORA, financial institutions relied on capital allocation to manage the main categories of operational risk. However, they must better address the challenges of strengthening their cybersecurity and embed practices that bolster their resilience against an evolving threat landscape across their broader operational context.

The European Council's press release provides a concise statement of the purpose of the Digital Operational Resilience Act:

“DORA sets uniform requirements for the security of network and information systems of companies and organisations operating in the financial sector, as well as critical third parties which provide ICT (information and communication technologies) services to them, such as cloud platforms or data analytics services.”

In other words, DORA creates a homogeneous regulatory framework for digital operational resilience, ensuring that all financial entities can prevent and mitigate cyber threats.

Under Article 2 of the regulation, DORA applies to financial entities, including banks, insurance undertakings, investment firms, and crypto-asset service providers. The regulation also covers critical third parties that provide ICT and cybersecurity services to financial firms.

Because DORA is a regulation rather than a directive, it is enforceable and directly applicable in all EU member states from its date of application. DORA complements the NIS2 Directive and addresses potential overlaps as lex specialis.

Compliance with DORA breaks down into five pillars covering various aspects of IT and cybersecurity, giving financial firms a thorough foundation for digital resilience.

  • ICT risk management: Internal governance and control processes ensure the effective and prudent management of ICT risk.
  • ICT-related incident management, classification and reporting: Detecting, managing and notifying ICT-related incidents by defining, establishing and implementing a process for responding to and handling cybersecurity incidents.
  • Digital operational resilience testing: Assessing readiness to handle cybersecurity incidents, identifying weaknesses, deficiencies and gaps in digital operational resilience, and promptly implementing corrective measures.
  • Managing ICT third-party risk: An integral component of cybersecurity risk within the ICT risk management framework.
  • Information sharing: Exchanging information on cyber threats, including indicators of compromise, tactics, techniques and procedures (TTPs), and cybersecurity alerts, to strengthen the resilience of financial entities.

Under Article 64, the regulation entered into force on 17 January 2023 and applies from 17 January 2025. It is also worth noting that Article 58 provides that, by 17 January 2026, the European Commission will review “the appropriateness of strengthened requirements for statutory auditors and audit firms as regards digital operational resilience.”

Four Steps Toward Compliance Today

Although the deadlines are still some way off, affected organizations should not sit back and wait. Time (and money) is precious when preparing to comply with the requirements of NIS2 and DORA. Organizations should assess and identify the actions they can take now to prepare for the new rules.

The following recommendations are a good starting point:

  • Governance and risk management: Understand the new requirements and evaluate current governance and risk management processes. In addition, consider increasing funding for programs that help detect threats and cyberattack incidents, and strengthen enterprise-wide cybersecurity awareness training initiatives.
  • Incident reporting: Assess the maturity of your incident management and reporting to understand current capabilities, and gauge awareness of the various cybersecurity incident reporting standards relevant to your industry. You should also test your ability to recognize near-miss incidents.
  • Resilience testing: Identify the talent needed to design and conduct resilience tests, including training sessions for board members on the techniques used and their implications.
  • Third-party risk management: To help build a risk mitigation plan, focus on strengthening contract mapping and assessing third-party vulnerabilities. Identify the services that are essential for hosting fundamental business processes. Check whether a fault-tolerant architecture has been implemented to reduce the impact of an outage at critical providers.

This article was prepared as part of the project “Increasing Civic Engagement in the Digital Agenda — ICEDA” with the support of the European Union and the South East Europe (SEE) Digital Rights Network. The content of this article should not be regarded as reflecting the official position of the European Union or the SEE Digital Rights Network.

Photo by FLY:D on Unsplash

Raising Awareness Is Critical for Privacy and Data Protection

By Anastasios Arampatzis

Many believe cybersecurity and privacy are about emerging technologies, processes, hackers, and laws. This is partially true. Technology is pervasive and has drastically changed how we live, work and communicate. High-profile data breaches make the news headlines ever more frequently, and businesses are fined enormous penalties for breaking security and privacy laws.

However, we must remember the most important pillar of data protection and privacy: the human element. Humans create and use technology, and it is humans who develop the regulations that govern its respectful and ethical use. What is more, it is mostly humans who feel the impact of data breaches. The human element is also responsible for the majority of data breaches: the Verizon Data Breach Investigations Report highlights that humans are responsible for 82% of successful data breaches.

If this percentage seems high, consider that many security professionals argue it is closer to 100%. Flawed applications, for example, are the work of humans. People manufacture insecure Internet of Things (IoT) devices. And it is humans who choose weak passwords or reuse passwords across multiple applications and platforms.

This is not to imply that we should accuse people of being “the weakest link” in cybersecurity and privacy. On the contrary, these thoughts underline the importance of individuals in preserving a solid security and privacy posture. This demonstrates how essential it is to create a security and privacy culture. Raising awareness about threats and best practices becomes the foundation of a safer digital future.

Data Threats Awareness

Our data is collected daily — your computer, smartphone, and almost every internet-connected device gather data. When you download a new app, create a new account, or join a new social media platform, you will often be asked to provide access to your personal information before you can even use it! This data might include your geographic location, contacts, and photos.

For these businesses, this personal information about you is of tremendous value. Companies use this data to understand their prospects better and launch targeted marketing campaigns. When used properly, the data helps companies better understand the needs of their customers. It serves as the basis for personalization, improving customer service, and creating customer value. It helps them understand what works and what doesn't, and it forms the basis for automated and repeatable marketing processes that help companies evolve their operations.

In an article from May 2017, The Economist described data as the new oil. According to LSE Business Review, advertisements accounted for 92% of Facebook’s revenue and above 90% of Google’s revenue, amounting to approximately $60 billion.

This is where things derail. Businesses store personal data indefinitely. They use data to make inferences about your socioeconomic status, demographic information, and preferences. The Cambridge Analytica scandal was a stark demonstration of how companies can manipulate our beliefs based on psychographic profiles created by harvesting vast amounts of “innocent” personal data. Companies do not always use your data in your interest or according to your consent. Google, Apple, Facebook, Amazon, and Microsoft generate value by exploiting it, selling it (for example, via a data broker), or exchanging it for other data.

Besides the threats originating from the misuse of our data by legitimate businesses, there is always the danger coming from malicious actors who actively seek to spot gaps in data protection measures. The same Verizon report indicates that personal data are the target in 76% of data breach incidents. The truth is that data is valuable to criminals as well.

According to Keeper Security, criminals sell your stolen data on dark web markets, doing a profitable business. A Spotify account costs $2.75, a Netflix account up to $3.00, a driver’s license $20.00, a credit card up to $22.00, and a complete medical record $1,000! Now multiply these per-unit prices by the millions of records compromised yearly, and you get a sense of the booming cybercrime economy.
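The multiplication above is worth making concrete. Using the per-record prices quoted from Keeper Security, a rough back-of-the-envelope calculation shows how quickly a single breach adds up (the breach size here is a made-up illustration):

```python
# Per-record dark-web prices quoted above (USD, per Keeper Security)
prices = {
    "spotify_account": 2.75,
    "netflix_account": 3.00,
    "drivers_license": 20.00,
    "credit_card": 22.00,
    "medical_record": 1000.00,
}

# Hypothetical haul: one million stolen credit card records
records = 1_000_000
print(f"${prices['credit_card'] * records:,.0f}")  # $22,000,000
```

A breach of just a few thousand medical records is worth millions at these prices, which is why healthcare data is so heavily targeted.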

Privacy Best Practices Awareness

If this reality is sending chills down your spine, don’t fret! You can take steps to control how your data is shared. You can’t lock down all your data — even if you stop using the internet, credit card companies and banks record your purchases. But you can take simple steps to manage it and take more control of whom you share it with.

First, it is best to understand the tradeoff between privacy and convenience. Consider what you get in return for handing over your data, even if the service is free. You can make informed decisions about sharing your data with businesses or services. Here are a few considerations:

-Is the service, app, or game worth your personal data?

-Can you control your privacy and still use the service?

-Is the data requested relevant to the app or service?

-If you last used an app several months ago, is it worth keeping it, knowing that it might be collecting and sharing your data?

You can adjust the privacy settings to your comfort level based on these considerations. Check the privacy and security settings for every app, account, or device. These should be easy to find in the Settings section and usually take only a few minutes to change. Set them to your comfort level for sharing personal information; generally, it's wise to lean toward sharing less data, not more. You don't have to adjust the privacy settings for every account at once; start with a few apps and build the habit over time.

Another helpful habit is clearing your cookies. We've all clicked “accept cookies” without really knowing what it means. Regularly clearing cookies from your browser removes information placed on your device, often for advertising purposes. Cookies can also pose a security risk, as hackers can hijack these files.

Finally, you can try privacy-protecting browsers. Looking after your online privacy can feel complicated, but certain internet browsers make the task easier. Many browsers now deprecate third-party cookies and ship with strong privacy settings by default. Changing browsers is simple but can be very effective for protecting your privacy.

Data Protection Best Practices Awareness

Data privacy and data protection are closely related. Besides managing your data privacy settings, follow some simple cybersecurity tips to keep it safe. The following four steps are fundamental for creating a solid data protection posture.

-Create long (at least 12 characters) unique passwords for each account and device. Use a password manager to store all your passwords. Maintaining dozens of passwords securely is easier than ever, and you only need to remember one password.

-Turn on multifactor authentication (MFA) wherever permitted, even on apps that are about football or music. MFA can help prevent a data breach even if your password is compromised.

-Do not deactivate the automatic updates that come as a default with many software and apps. If you choose to do it manually, make sure you install these updates as soon as they are available.

-Do not click on links or attachments included in phishing messages. You can learn how to spot these emails or SMS by looking closely at the content and the sender’s address. If they promote urgency and fear or seem too good to be true, they are probably trying to trick you. Better safe than sorry.
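The first tip in the list above, long unique passwords, is easy to automate. A minimal sketch using Python's standard `secrets` module (the 16-character length and the alphabet are illustrative choices, not a recommendation from the original article):

```python
import secrets
import string


def generate_password(length: int = 16) -> str:
    """Generate a cryptographically random password from
    letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


password = generate_password()
print(len(password))  # 16
```

In practice a password manager does this for you, but the point stands: randomness and length, not cleverness, are what make a password strong.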

This article was prepared as part of the project “Increasing Civic Engagement in the Digital Agenda — ICEDA” with the support of the European Union and South East Europe (SEE) Digital Rights Network. The content of this article in no way reflects the views of the European Union or the SEE Digital Rights Network.

Submission of Comments to the Draft Law on the procedure for lifting confidentiality of communications

Today, Homo Digitalis’ Legal & Policy Team submitted its comments in the context of the Ministry of Justice’s open consultation on the draft law entitled “Procedure for the lifting of the confidentiality of communications, cybersecurity and protection of citizens’ personal data”.

Homo Digitalis welcomes the submission of this draft law, which regulates the procedure for lifting the confidentiality of communications, including the restructuring of the National Intelligence Service (NIS) and the criminalization of the trade, possession and use of prohibited surveillance software in Chapters B to E. The issues surrounding the lifting of the confidentiality of communications need clarification to ensure the validity of the procedure and to guarantee the constitutionally protected right to confidentiality of communications (see Article 19 of the Constitution). Although the draft law contains a limited number of positive elements, it is rife with problematic provisions, which we highlight in our comments, and the institutional omission of the Data Protection Authority (DPA) from the legislative process poses significant challenges. Homo Digitalis urges the Ministry of Justice to take seriously the DPA's comments on the provisions of this draft law, as posted on its website and in this public consultation, and endorses those comments in their entirety.

Homo Digitalis also welcomes the amendment, in Chapter F of this draft law, of the national provisions transposing Directive 2016/680 into Law 4624/2019, which follows a complaint Homo Digitalis submitted to the European Commission in October 2019 and the consultations the Commission subsequently initiated with the Greek State. Although this revision comes after the expiry of the compliance deadline the European Commission set for the Greek State (June 2022), it ensures compliance with European law and safeguards the constitutionally guaranteed right to the protection of personal data (see Article 9A of the Constitution).

You can read all our comments in detail on the Public Consultation website or here.

Interview with Prof. Chris Brummer on cryptocurrencies and international regulatory cooperation

In recent years, cryptocurrencies have significantly grown in popularity, demand and accessibility. New “stablecoins” have emerged as a relatively low-risk asset category for institutional investors, whereas new “altcoins” have attracted retail investors due to their availability on popular fintech platforms.

These developments have amplified some of the risks inherent in the decentralized and immutable nature of the blockchain technology on which cryptocurrencies rely. Indicatively, sunk costs associated with volatility, loss of private keys or theft have become higher, whereas the financing of illicit activities has increasingly been channeled through cryptocurrencies.

We asked Chris Brummer, Professor of International Economic Law at Georgetown University*, to reflect on the importance of international regulation, standardization or regulatory cooperation in mitigating the above risks, and the challenges entailed in seeking to align or harmonize domestic regulatory approaches.

Prof. Brummer began by noting that the major risk of cryptocurrencies from an investment standpoint lies in their relative complexity — “what they are, how they operate, and the value proposition, if any, that any particular cryptoasset provides. Because of this complexity, they are difficult to price, and unscrupulous actors can exploit the relative ignorance of many investors”, Prof. Brummer explained.

As cryptoassets are inherently cross-border financial products, operating on digital platforms, the mitigation of the risks entailed in their increasing circulation and use requires international coordination. 

“This could take place through informal guidelines and practices which, though not ‘harmonizing’ approaches, should at least ensure that reforms are broadly moving in the same direction”, Prof. Brummer notes.

Countries have very different risk-reward appetites—which are defined largely by their own experiences

Achieving even a minimal degree of international consensus may nonetheless prove challenging given the existence of significant regulatory constraints at the domestic level. As Prof. Brummer explains, “for one, although the tech may be new, regulators operate within legacy legal systems.  And national jurisdictions don’t identify crypto assets in the same way, in part because they identify just what is a “security” differently, and also define key intermediaries differently, from “exchanges” to “banks.”  And these definitions can be difficult to modify — in the U.S. it’s in part the result of case law by the Supreme Court, whereas in other jurisdictions it may be defined by national (or in the case of the EU, regional) legislation.  Coordination can thus be tricky, and face varying levels of practical difficulty.”

Apart from key differences in existing domestic regulatory structures, there may also be a mismatch in incentives across different jurisdictions. “Countries have very different risk-reward appetites—which are defined largely by their own experiences”, Prof. Brummer explains. “Take for example how cybersecurity concerns have evolved. Japan was one of the most crypto-friendly jurisdictions in the world. Its light-touch regulatory posture began to change when its biggest exchange, Coincheck, was hacked, resulting in the theft of NEM tokens worth around $530 million. Following the hack, Japanese regulators required all exchanges in the country to obtain an operating license. In contrast, other G20 countries like France have been more accommodating, and have even seen the potential for modernizing their financial systems and gaining a competitive advantage in a fast-growing industry, especially as firms reconsider London as a financial center in the wake of Brexit. Although not dismissive of the risks of crypto, France has introduced optional and mandatory licensing, reserving the latter for firms that seek to buy or sell digital assets in exchange for legal tender, or provide digital asset custody services to third parties.”

Users in different jurisdictions may face different regulatory constraints or enjoy varying degrees of regulatory protection

Another limiting factor of international coordination is the proliferation of domestic and international regulatory authorities. Prof. Brummer observes that “international standard-setters don’t always agree on crypto approaches, and neither do agencies within countries. International standard-setters and influencers themselves have until recently espoused very different opinions, with the Basel Committee less impressed and the IMF, committed to payments and financial stability, more intrigued. And even within jurisdictions, regulatory bodies can take varying views. In the US, for example, the SEC has at least been seen to be far more wary of crypto than the CFTC; similarly, from an outsider’s perspective, the ECB’s stance has appeared to be more cautious than, say, ESMA’s.”

For the time being, international regulatory cooperation appears to be evolving slowly, in light of the limitations listed above by Prof. Brummer. Users in different jurisdictions, even within the EU, may therefore face different regulatory constraints or enjoy varying degrees of regulatory protection. The Bank of Greece, for one, has issued announcements adopting the views of European supervisory authorities warning consumers of the risks of cryptocurrencies, but has yet to adopt precise guidelines. Yet, as stablecoin projects popularized by established commercial actors gain popularity, we may soon begin to see a shift in international regulatory pace, possibly toward a greater degree of convergence.

* Chris Brummer is the host of the popular Fintech Beat podcast, and serves as both a Professor of Law at Georgetown and the Faculty Director of Georgetown’s Institute of International Economic Law.  He lectures widely on fintech, finance and global governance, as well as on public and private international law, market microstructure and international trade. In this capacity, he routinely provides analysis for multilateral institutions and participates in global regulatory forums, and he periodically testifies before US and EU legislative bodies. He recently concluded a three-year term as a member of the National Adjudicatory Council of FINRA, an organization empowered by the US Congress to regulate the securities industry.

** Photo Credits: Jinitzail Hernández / CQ Roll Call

Digital Cartels: The Risks, the Role of Big Data, the Possible Measures & the Role of the EU

By Konstantinos Kaouras*

The risks of tacit collusion have increased in the 21st century with the use of algorithms and machine learning technologies.

In the literature, the term “collusion” commonly refers to any form of co-ordination or agreement among competing firms with the objective of raising profits to a higher level than the non-cooperative equilibrium, resulting in a deadweight loss.

Collusion can be achieved either through explicit agreements, whether they are written or oral, or without the need for an explicit agreement, but with the recognition of the competitors’ mutual interdependence. In this article, we will deal with the second form of collusion which is referred to as “tacit collusion”.

The phenomenon of “tacit collusion” may particularly arise in oligopolistic markets where competitors, due to their small number, are able to coordinate on prices. However, the development of algorithms and machine learning technologies has made it possible for firms to collude even in non-oligopolistic markets, as we will see below.

Tacit Collusion & Pricing Algorithms

Most of us have come across pricing algorithms when looking to book airline tickets or hotel rooms through price comparison websites. Pricing algorithms are commonly understood as the computational codes run by sellers to automatically set prices to maximise profits.

But what if pricing algorithms were able to set prices by coordinating with each other and without the need for any human intervention? As much as this sounds like a science fiction scenario, it is a real phenomenon observed in digital markets, which has been studied by economists and has been termed “algorithmic (tacit) collusion”.

Algorithmic tacit collusion can be achieved in various ways as follows:

  1. Algorithms have the capability “to identify any market threats very fast, for instance through a phenomenon known as now-casting, allowing incumbents to pre-emptively acquire any potential competitors or to react aggressively to market entry”.
  2. They increase market transparency and the frequency of interaction, making the industries more prone to collusion.
  3. Algorithms can act as facilitators of collusion by monitoring competitors’ actions in order to enforce a collusive agreement, enabling a quick identification of ‘cartel price’ deviations and retaliation strategies.
  4. Co-ordination can be achieved in a sort of “hub and spoke” scenario where competitors may use the same IT companies and programmers for developing their pricing algorithms and end up relying on the same algorithms to develop their pricing strategies. Similarly, a collusive outcome could be achieved if most companies were using pricing algorithms to follow in real-time a market leader (tit-for-tat strategy), who in turn would be responsible for programming the dynamic pricing algorithm that fixes prices above competitive level.
  5. “Signaling algorithms” may enable companies to automatically set very fast iterative actions, such as snapshot price changes during the middle of the night, that cannot be exploited by consumers, but which can still be read by rivals possessing good analytical algorithms.
  6. “Self-learning algorithms” may eliminate the need for human intermediation, as using deep machine learning technologies, the algorithms may assist firms in actually reaching a collusive outcome without them being aware of it.
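The “tit-for-tat” follower strategy described in point 4 can be sketched in a few lines: each period, a follower simply matches the price the leader posted in the previous period, so any supra-competitive price the leader sets propagates through the market without any explicit agreement. A toy illustration (all prices and the one-period lag are hypothetical simplifications):

```python
def follower_prices(leader_prices, initial_price):
    """Simulate a follower that matches the leader's previous-period
    price (a one-period-lag tit-for-tat pricing rule)."""
    prices = [initial_price]
    # Each period after the first, copy the leader's last observed price
    for p in leader_prices[:-1]:
        prices.append(p)
    return prices


# The leader nudges the price above the competitive level (10.0)
leader = [10.0, 12.0, 14.0, 14.0]
print(follower_prices(leader, initial_price=10.0))  # [10.0, 10.0, 12.0, 14.0]
```

Even in this trivial model, the follower ends up at the leader's elevated price without any communication between the two firms, which is precisely why such conduct is hard to capture under a communications-based notion of “agreement”.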

Algorithms & Big Data: Could they Decrease the Risks of Collusion?

Big Data is defined as “the information asset characterized by such a high volume, velocity and variety to require specific technology and analytical methods for its transformation into value”.

It can be argued that algorithms which constitute a “well defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as outputs” can provide the necessary technology and analytical methods to transform raw data into Big Data.

In data-driven ecosystems, consumers can outsource purchasing decisions to algorithms which act as their “digital half” and/or they can aggregate in buying platforms, thus, strengthening their buyer power.

Buyers with strong buying power can disrupt any attempt to reach terms of coordination, thus making tacit collusion an unlikely outcome. In addition, algorithms could recognise forms of co-ordination between suppliers (i.e. potentially identifying instances of collusive pricing) and diversify purchasing proportions to strengthen incentives for entry (i.e. help sponsoring new entrants).

Besides pure demand-side efficiencies, “algorithmic consumers” also have an effect on suppliers’ incentives to compete as, with the help of pricing algorithms, consumers are able to compare a larger number of offers and switch suppliers.

Furthermore, the increasing availability of online data resulting from the use of algorithms may provide useful market information to potential entrants and improve certainty, which could reduce entry costs.

If barriers to entry are reduced, then collusion can hardly be sustained over time. In addition, algorithms can naturally be an important source of innovation, allowing companies to develop non-traditional business models and extract more information from data, and, thus, lead to the reduction of the present value of collusive agreements.

Measures against Digital Cartels and the Role of the EU

Acknowledging that any measures against algorithmic collusion may have possible effects on competition, competition authorities may adopt milder or more radical measures depending on the severity and/or likelihood of the risk for collusion.

To begin with, they may adopt a wait-and-see approach conducting market studies and collecting evidence about the real occurrence of algorithmic pricing and the risks for collusion.

Where the risk for collusion is medium, they could possibly amend their merger control regime lowering their threshold of intervention and investigating the risk of coordinated effects in 4 to 3 or even 5 to 4 mergers.

In addition, they could regulate pricing algorithms ex ante with some form of notification requirement and prior analysis, eventually using the procedure of regulatory sandbox.

Such prior analysis could be entrusted to a “Digital Clearing House”, a voluntary network of contact points in regulatory authorities at national and EU level who are responsible for regulation of the digital sector, and should be able to analyze the impact of pricing algorithms on the digital rights of users.

At the same time, competition authorities could adopt more radical legislative measures by abandoning the classic communications-based approach for a more “market-based” approach.

In this context, they could redefine the notion of “agreement” in order to incorporate other “meetings of minds” that are reached with the assistance of algorithms. Similarly, they could attribute antitrust liability to individuals who benefit from the algorithms’ autonomous decisions.

Finally, where the risk for collusion is serious, competition authorities could either prohibit algorithmic pricing or introduce regulations to prevent it, by setting maximum prices, making market conditions more unstable and/or creating rules on how algorithms are designed.

However, given the possible effects on competition, these measures should be carefully considered.

Given that most online companies using pricing algorithms operate beyond national borders, and that the EU has the power to externalize its laws beyond its borders (a phenomenon known as “the Brussels effect”), we would suggest that any measures be taken at the EU level, with the cooperation of the regulatory authorities responsible for the digital sector.

Harmonised rules at EU Regulation level such as the recently adopted General Data Protection Regulation are important to protect the legitimate interests of consumers and facilitate growth and rapid scaling up of innovative platforms using pricing algorithms.

It is worth noting that, following a proposal of the European Parliament, the European Commission is currently carrying out an in-depth analysis of the challenges and opportunities in algorithmic decision-making, while in April 2019, the High-Level Expert Group on Artificial Intelligence (AI), set up by the European Commission, presented Ethics Guidelines for Trustworthy AI, in which it was stressed that AI should foster individual users’ fundamental rights and operate in accordance with the principles of transparency and accountability.

In conclusion, given the potential benefits of algorithms, but also the risks posed by the creation of “digital cartels”, it is clear that a fine balance must be struck between adopting a laissez-faire approach, which can be detrimental for consumers, and an extremely interventionist approach, which can be harmful for competition.

* Konstantinos Kaouras is a Greek qualified lawyer who works as a Data Protection Lawyer at the UK’s largest healthcare charity, Nuffield Health. He is currently pursuing an LLM in Competition and Intellectual Property Law at UCL. He has carried out research on the interplay between Competition and Data Protection Law, while he has a special interest in the role of algorithms and Big Data.


Lianos I, Korah V, with Siciliani P, Competition Law Analysis, Cases, & Materials (OUP 2019)

OECD, ‘Algorithms and Collusion: Competition Policy in the Digital Age’ (2017)

OECD, ‘Big Data: Bringing Competition Policy to the Digital Era – Background Note by the Secretariat’ (2016)