Risk Management in the Digital World - A Chore or A Necessity?

Written by Ioannis Ntokos*

“Nothing in life is certain but death and taxes,” Benjamin Franklin (or someone before him) once said. But the phrase could well include a third component: risk. “Death, taxes, and risk.” In the digital world, risks are a constant that we must take into account, whether as citizens, as product or service providers, or as experts in the field of risk management. Let’s see how proper risk management can bring a measure of certainty and security to the digital space.

What does risk mean, and why does it deserve attention?

The digital world changes rapidly, every day. The concept of risk, however, is relatively static: every system, every program, every person using technology creates an “opening,” a vulnerability. These openings are not dangerous in themselves, but threats can exploit them. Consider, for example, a flaw in a computer system at a nuclear power plant, a weak process for accessing sensitive data, or a misconfigured network switch. The alarm bells are ringing.

Risk is there before you do anything: it is “inherent.” It exists by default, before any protective measures are taken. When you ride a bike, the very act of riding is a risk. In the digital world, inherent risk arises from things like human carelessness, the complexity of systems, or the value of the data shared with others. Risk itself is a certainty, a constant of life. That does not mean we ignore it.

So far, so good. The mere act of stepping across my doorstep every morning is, in theory, a risky situation. What is the point of acting if the risk is there anyway? The next stage is to identify which risks require attention and action. This requires cold observation and logical thinking. Some risks are more significant than others, so they must pass through the risk management “filter.”

Calculating Risk

In its simplest form, risk (expressed numerically or not) is a function of probability and impact. A given risk has (negative) consequences (impact) that occur with some frequency (probability). Being able to estimate probability and impact either quantitatively (using precise figures, usually monetary values for impact and annual occurrence rates for probability) or qualitatively (using coarser, more arbitrary scales, typically from 1 to 5) brings us closer to calculating risk.
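
To make the two approaches concrete, here is a minimal Python sketch (an illustration, not a method prescribed in this article): the qualitative path multiplies two 1-to-5 ratings, while the quantitative path multiplies an estimated monetary loss per incident by the expected number of incidents per year. All figures are invented.

```python
# Illustrative sketch only: two common ways to express the same risk.
# The scales, amounts and frequencies below are invented for the example.

def qualitative_score(likelihood: int, impact: int) -> int:
    """Qualitative view: multiply two 1-5 ratings into a 1-25 risk score."""
    return likelihood * impact

def annualised_loss(cost_per_incident_eur: float, incidents_per_year: float) -> float:
    """Quantitative view: expected yearly loss = cost per incident x frequency."""
    return cost_per_incident_eur * incidents_per_year

# A hypothetical phishing risk, rated 4/5 likely with 3/5 impact,
# or estimated at 40,000 EUR per incident happening twice a year:
print(qualitative_score(4, 3))          # 12 out of 25
print(annualised_loss(40_000, 2.0))     # 80000.0 EUR expected per year
```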


In the example of cycling, a risk is my sudden encounter with a brown bear (and the unpleasant consequences that might follow). The likelihood of this happening varies depending on the situation: if I am riding my bike in an area of Korydallos, the chances of encountering a bear are close to zero. If I have gone cycling in the Pindos Mountains, the situation changes dramatically. The impact of the encounter with the bear also changes. If I carry bear spray or have watched many videos on how to deal with a brown bear (the author of this article has watched quite a few such videos), I may escape with bruises and scratches (or a broken bicycle). If I lack knowledge and tools, things become more difficult.

Here the importance of protective measures and the transition from inherent to residual risk also become apparent. Through the protective measures at my disposal (the spray), I can lower the impact of the encounter from certain death to a hospital admission for stitches. Residual risk is the risk that remains after we take protective measures against it! Protective measures are an integral part of risk management.
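
One common way to picture the shift from inherent to residual risk is to re-score the same risk after the controls are applied. The sketch below is purely illustrative and the bear-spray figures are invented, but it shows the mechanics: the control lowers one of the two ratings, and the score that remains is the residual risk.

```python
# Illustrative sketch: re-scoring a risk after protective measures.
# The bear-spray "effectiveness" figures are assumptions for the example.

def risk_score(likelihood: int, impact: int) -> int:
    """Simple 1-25 score from two 1-5 ratings."""
    return likelihood * impact

def residual(likelihood: int, impact: int,
             likelihood_reduction: int = 0, impact_reduction: int = 0) -> int:
    """Apply controls by lowering one or both ratings, never below 1."""
    return risk_score(max(1, likelihood - likelihood_reduction),
                      max(1, impact - impact_reduction))

inherent = risk_score(2, 5)                        # bear encounter, no measures: 10
with_spray = residual(2, 5, impact_reduction=3)    # spray softens the impact: 4
print(inherent, with_spray)
```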


Risk Management Methods

There are four appropriate ways to deal with a risk once it has been identified (and assessed, whether quantitatively or qualitatively). These options are: acceptance, transfer, reduction, or elimination.

Acceptance means that you understand the risk and hold on to it, not passively or ignorantly, but rationally. Some risks are so small that it costs more to deal with them than to accept them. If I ride my bike downtown, I accept the infinitesimal chance (0.00001%) that a bear will attack me, and I enjoy my ride.

Transfer is the assignment of risk to someone else (usually through insurance). The risk does not disappear; it simply changes hands. The responsibility remains with the person subject to the risk, but there is coverage if the damage occurs. In the bear scenario, I hope my insurance covers such attacks, or at least that my family receives a lump sum (through my life insurance) in case the spray doesn’t help.

Speaking of spray, this is a risk reduction method! Reduction means that you limit the likelihood or impact, and it is the most common method of dealing with risks. This includes any form of preventive protection. Every protective measure I take aims to reduce the risk. If I’m out cycling in the Pindos Mountains with 10 other friends, the chances of the bear attacking me instead of one of them are drastically reduced!

Elimination is the most absolute option: you walk away from the risk and its source altogether. It is the recognition that something is beyond “patching.” Are there many hungry bears on the mountain I’m planning to visit? I choose the sea instead of the mountain and I have peace of mind!

While the above ways of dealing with risk are all tried and tested, there is one response that is not legitimate: ignoring the risk. Knowing a risk exists and consciously choosing to ignore it will inevitably lead to negative results!
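
As a rough illustration of how the four options can sit together in practice, here is a small decision sketch. The thresholds and flags are invented assumptions, not a prescribed standard, and real decisions also weigh cost, context, and risk appetite; notice, though, that ignoring the risk is not on the menu.

```python
# Illustrative decision sketch for the four responses described above.
# Thresholds and flags are assumptions; "ignore" is deliberately not an option.

def choose_treatment(score: int, transferable: bool, avoidable: bool) -> str:
    """Map a 1-25 risk score to accept, eliminate, transfer or reduce."""
    if score <= 4:
        return "accept"      # cheaper to live with than to treat
    if avoidable and score >= 20:
        return "eliminate"   # walk away from the activity and its risk source
    if transferable and score >= 15:
        return "transfer"    # e.g. insurance: the risk changes hands
    return "reduce"          # apply controls to lower likelihood or impact

print(choose_treatment(score=12, transferable=False, avoidable=False))  # reduce
print(choose_treatment(score=3,  transferable=True,  avoidable=True))   # accept
```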

Dealing with risk in the digital world

Risks similar to a random bear encounter exist in the digital and online space, only instead of hungry four-legged friends we encounter hackers, abused platforms, uses of artificial intelligence that violate human rights, and defective hardware. And with the same logic as our trip to the forest, these risks require special treatment, taking into account the following basic principles:

  1. Risk management is not a single event, but a cycle: you identify, assess, act, and regularly review (see the sketch after this list). The digital world is constantly changing, which means that the risk landscape is also changing. What was secure in the morning may be vulnerable by evening. Technology waits for no one, and the associated risks must be continuously recorded and addressed.
  2. A holistic approach to risk is crucial. One gap is enough to cause radical damage to citizens, users, and businesses. Partial protection creates a false sense of security. In the digital space, the weak point is often not the most obvious. It can be the forgotten file, the inadequate password, the external partner using a fragile application. Therefore, a holistic view is required.
  3. It is also necessary to understand that risk is not only technical, but also organizational, human, or procedural. In practice, most damage results from mistakes, omissions, or misunderstandings. Technology simply exacerbates the consequences. Therefore, it is necessary to address it from many different angles.
  4. Awareness and education on information and data protection issues are key to reducing risks. No matter how organized you are, there will always be someone who will write their passwords in plain sight, open the wrong file, or accidentally press “delete.” The human element cannot be eliminated.
  5. Prevention is always cheaper than recovery. For the average user, risk management may seem like a chore, but the reality is that the world of technology has grown so much that ignorance of risk is costly. Just as no one waits to install an alarm system after a break-in, risk management works best before bad things happen.
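
For principle 1, a tiny risk-register sketch may help show what “identify, assess, act, review” looks like as a loop rather than a one-off exercise. The entries, ratings, and treatments below are invented examples, not recommendations.

```python
# Illustrative sketch of principle 1: risk management as a repeating cycle
# over a simple risk register. Entries and ratings are invented examples.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    name: str
    likelihood: int   # 1-5
    impact: int       # 1-5
    treatment: str    # accept / transfer / reduce / eliminate

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskEntry("Phishing against staff mailboxes", 4, 3, "reduce"),
    RiskEntry("Misconfigured network switch", 2, 4, "reduce"),
    RiskEntry("Laptop theft while travelling", 3, 2, "transfer"),
]

def review(entries: list[RiskEntry]) -> None:
    """One pass of the cycle: reassess every entry, highest scores first."""
    for entry in sorted(entries, key=lambda e: e.score, reverse=True):
        print(f"{entry.name}: score {entry.score}, treatment '{entry.treatment}'")

review(register)   # repeat on a schedule: the risk landscape keeps changing
```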

The essence of risk management is targeted clarity: although absolute security is not possible, we strive for stability while trying to avoid major mistakes. When you understand this, risk management ceases to be a burden. It becomes an organized and coordinated effort, and then a habit. A kind of mental exercise where you ask: “What could go wrong? How much do I care? What do I do about it?” Not as an exercise in fear, but as an exercise in pure reasoning and protection. Risk will always be there. Managing it is a conscious choice, and awareness is a tool.

*Ioannis Ntokos is an IT risk management, information security and third-party risk management specialist, with expertise in data protection. He specializes in ISO 27001, NIST, NIS2 and the General Data Protection Regulation (GDPR). In his spare time, he offers career advice on IT governance, risk and compliance through his YouTube channel.


Another important victory! The Hellenic Data Protection Authority rules the operation of the Hellenic Police’s Smart Policing system unlawful

In 2019, the Hellenic Police signed a contract with Intracom Telecom for the implementation of the Smart Policing programme, with a total value of €4 million. The project concerned the procurement of 1,000 “smart” portable devices, intended to enable facial recognition, fingerprint recognition, as well as the scanning of documents and vehicle licence plates.

Homo Digitalis was the first organisation to publicly bring this case to light, through a joint investigative publication with AlgorithmWatch in December 2019. In the same month, we submitted an access-to-documents request to the Ministry of Citizen Protection in order to clarify critical issues of legality and data protection. The response we received failed to provide substantive answers to our questions.

As a result, in March 2020 we filed a complaint with the Hellenic Data Protection Authority (HDPA), requesting that the case be investigated. The Authority accepted our complaint and launched an official investigation in August 2020. In the meantime, the Greek State paid the full amount of €4 million (75% of which was financed through EU funds), while the company duly delivered the devices to the Hellenic Police.

Ultimately, on 31 December 2025, the HDPA issued Decision 45/2025, warning the Hellenic Police not to activate the Smart Policing system, since, under the applicable legal framework, any productive operation of the system would constitute unlawful processing of personal data. The Authority found that there was no legal basis for the intended processing through the system and that the required data protection impact assessment had not been carried out in a timely manner during the pilot phase of the project.

This development gives rise to a strong sense of vindication, as it confirms—six years later—that the serious concerns we raised from the very beginning were fully justified. At the same time, it starkly highlights the waste of public resources on the development and procurement of technologies that could never lawfully operate. Four million euros of taxpayers’ money were spent on a system that, under the existing legal framework, was deemed unlawful before it was ever put into productive use.

This case demonstrates the urgent need for meaningful legality checks, transparency, and accountability before adopting high-risk technological solutions, especially when they affect fundamental rights and are financed with public funds.

You can read Decision 45/2025 of the HDPA here (EL).


Our GAIN event with the supervisory authorities of Article 77 of the AI Act was successfully concluded

Yesterday’s event, which we co-organized with the civil society network Greek AI Network – GAIN at the offices of network member WHEN Hub, was successfully completed.

The event opened with a welcoming address by our Co-founder and Treasurer of the Board, Konstantinos Kakavoulis. This was followed by educational presentations from representatives of two fundamental rights authorities under Article 77 of the AI Act, namely Dr. Efrosyni Siougle from the Hellenic Data Protection Authority and Dr. Christos Tsevas from the Greek National Commission for Human Rights.

Finally, during the Members in the Spotlight Session, our member and DPO Executive / GDPR Expert, Dimos Kostoulas, delivered an educational presentation on the processing of personal data in the healthcare sector and the use of Artificial Intelligence systems in this field.

We warmly thank the speakers, the members of the GAIN network, and the members of Homo Digitalis who joined us both online and in person, as well as the other organizations that honored us with their presence.

The event was held within the framework of the GAIN program, with the support of the European AI & Society Fund.


Only a few spots left for GAIN’s new event! Meet the supervisory authorities of Article 77 of the AI Act

Are you a Civil Society Organization (CSO) interested in the protection of human rights in the age of artificial intelligence?

Only a few free in-person participation spots remain for the event we are co-organizing tomorrow with the CSO network Greek AI Network – GAIN at WHEN Hub!

At the event, representatives from two fundamental rights authorities under Article 77 of the AI Act—namely Efrosini Siougle from the Hellenic Data Protection Authority and Christos Tsevas from the Greek National Commission for Human Rights—will deliver two informative presentations on Artificial Intelligence and will be available to answer questions about the mission and role of their respective bodies.

In addition, during the Members in the Spotlight Session, we will have the honor of hosting our member and DPO Executive / GDPR Expert, Dimos Kostoulas, who will give an educational presentation on the processing of personal data in the healthcare sector and the use of AI systems in this field.

The registration link for the limited number of free participation spots for civil society organizations can be found here.

The event is held within the framework of the GAIN program, with the support of the European AI & Society Fund.


We presented our Study on the Digital Omnibus package at the Privacy & Data Protection Conference

Last Friday, Homo Digitalis was invited to the Privacy & Data Protection Conference, organized by BOUSSIAS.

There, our Executive Director, Eleftherios Chelioudakis, presented our Study on the Digital Omnibus reform packages, highlighting the challenges that the proposed changes pose to our rights in the contemporary digital era.

You can read our Study here.

We would like to warmly thank the conference organizers, and especially Alexandra Varla, for the very honorable invitation. Congratulations as well to all the speakers for their insightful contributions.



Homo Digitalis speaks on ERTnews

On Thursday, December 11, Homo Digitalis had the great pleasure of being a guest on the ERTnews program “LIVE NOW”, hosted by Giorgos Kakousis and Nikoleta Kritikou, to discuss the upcoming revisions to the U.S. visa waiver program, ESTA.

The proposed changes foresee the collection of more—and particularly sensitive—personal data, including biometric data, email accounts from the past 10 years, and social media accounts from the past 5 years, among other information.

The plan of the U.S. government is to complete the agreement procedures with the various countries participating in the ESTA visa waiver program by December 2026, so that the new rules can begin to apply from the following year.

The European Union has already, since the summer, initiated the required institutional procedures in order to negotiate at EU level with the United States the framework for the exchange and processing of the relevant data. This framework is expected to form the basis of a comprehensive agreement between the EU and the U.S., upon which Member States, such as Greece, will subsequently be called to conclude the necessary bilateral agreements for the implementation of the new ESTA regime.

In the field of personal data protection, the European Data Protection Supervisor already issued a relevant opinion on the matter in September. This opinion focuses primarily on the exchange of biometric data, as well as on the necessary safeguards of security, oversight, and accountability that must accompany their processing.

Homo Digitalis was represented on the program by our Executive Director, Lefteris Chelioudakis. We warmly thank ERTnews for the invitation! You can watch the relevant segment (from 47:00 onwards) here.


Homo Digitalis publishes its Study on the proposed Digital Omnibus Regulations

On 19 November, the European Commission published two (2) proposed Digital Omnibus Regulations.

The first (Digital Omnibus Regulation Proposal) proposes significant amendments to provisions of well-known legislative instruments that are already in force, such as the General Data Protection Regulation (GDPR – Regulation 2016/679), the Data Protection Regulation for the EU institutions, bodies, offices and agencies (EUDPR – Regulation 2018/1725), the ePrivacy Directive (Directive 2002/58/EC), the Data Act (Regulation 2023/2854), the Single Digital Gateway Regulation (Regulation 2018/1724), the NIS2 Directive (Directive 2022/2555) and the Critical Entities Resilience Directive (Directive 2022/2557).


It also proposes the repeal of other legislative instruments that are currently in force, namely the Regulation on the free flow of non-personal data (Regulation 2018/1807), the Data Governance Act (Regulation 2022/868), the Regulation on platform-to-business relations (P2B – Regulation 2019/1150) and the Open Data Directive (Directive 2019/1024).

The second (Digital Omnibus on AI Regulation Proposal) proposes significant amendments to legislation that is currently partially in force, in particular the Artificial Intelligence Act (AI Act – Regulation 2024/1689), while also introducing additions to legislation that is fully in force, specifically the Regulation establishing common rules in the field of civil aviation (Regulation 2018/1139).

Homo Digitalis, together with the European network of which it is a member, European Digital Rights (EDRi), as well as other civil society organisations, academics and experts, sought throughout 2025 to maintain a substantive and well-documented presence in the relevant legislative preparatory process. In this context, we submitted a series of open letters to the competent bodies of the European Commission in May, September and November, while at the same time setting out our positions and arguments in detail through active participation in the public consultation process in October 2025.

We had already warned that the Digital Omnibus package forms part of a broader wave of deregulation that threatens to weaken critical European rules, portraying fundamental rights as an alleged obstacle to innovation and, in practice, serving the interests of large technology companies. From the outset, we had no confidence that the European Commission would be able to genuinely absorb and process our positions within the framework of the public consultation, because the consultation closed in mid-October and the publication of the final text with the proposed provisions was expected just one month later. It would have been essentially impossible to analyse and incorporate arguments and proposals into the text within such a short timeframe.

By November 2025, it had become clear that the European Commission was never interested in substantive dialogue and exchange of views regarding this legislative initiative. At that point, a leak of certain drafts of the proposed provisions revealed that the Commission had already shaped a draft regulation containing highly problematic provisions, representing the greatest rollback in the protection of human rights in the digital sphere. We then highlighted, through a joint press release, the significant challenges arising from the proposed changes to the GDPR and the ePrivacy Directive. A few days later, the official text was published, bringing new surprises and even greater challenges for our rights and freedoms in the digital environment.

Today, we publish our in-depth study on the package of proposed Digital Omnibus reforms, explaining the challenges that arise for our rights and freedoms.

The text of our study is available here (EL).

The Digital Omnibus reform package is not a technical exercise in codification, but a decisive shift of the European digital acquis towards a regime of reduced safeguards and diminished accountability. If adopted as is, the Union risks losing the strongest tool it possessed internationally: the example that technological progress can coexist with high standards of fundamental rights protection.

This weakening will not be felt through dramatic ruptures, but through a slow and persistent erosion, whereby rights we considered non-negotiable will be transformed into exceptions, and oversight will become a shadow of its former self. The choice facing EU legislators does not concern the “simplification” of legislation, but the very future of the European model of human protection: whether it will remain a pillar of the rule of law or allow itself to be replaced by a logic in which innovation is imposed without counterbalances and privacy becomes negotiable.



Our NGI TALER workshop at Journals n’ Spirits 2025 was successfully completed

On November 13, we represented NGI TALER at Journals n’ Spirits 2025, which was organized by omniatv together with the initiatives Vlavi (magazine), Copwatch GR, FactReview, Femicide.gr, Homo Digitalis, INFOWAR, inside story., Jacobin Greece, KRAX Radio, The Manifold, Reporters United, Solomon, The Untold, Vouliwatch, and YUSRA (magazine/publications), at the Kypseli Municipal Market.

More specifically, Eleftherios Chelioudakis from our team presented the GNU TALER digital payments tool, which functions like digital cash and radically reshapes the ecosystem of electronic micropayments. It is based on the principles of free/libre software and strict respect for privacy, offering a new way to conduct online financial transactions while ensuring full accountability.

We also spoke about the funding opportunities provided by NGI TALER for everyone who can contribute to our important mission. Do you have relevant ideas? Submit your application by February 1, 2026, here.



Homo Digitalis & EDRi speak to inside story on the proposed Digital Omnibus regulations

Is Europe moving away from the protection of our digital rights?

inside story. and journalist Eliza Triantafyllou published an in-depth article on Monday, December 1, examining the European Commission’s Digital Omnibus proposals. European Digital Rights (EDRi) and Homo Digitalis had the honor of contributing comments and arguments, represented by their members Blue Duangdjai Tiyavorabun, Ella Jakubowska (she/her), Itxaso Domínguez de Olazábal, PhD, and Eleftherios Chelioudakis.

Is the EU giving in to pressure from Trump and major technology companies to deregulate rules protecting Europeans’ personal data and privacy, rebranding it as “simplification”? What exactly do the two recent proposals include? Read the article here.

We warmly thank the journalist for her interest in our arguments.