Why simply redefining personal data narrowly does not solve the problem of the “law of everything”
Originally published as
Nadezhda Purtova, ‘Why Simply Redefining Personal Data Narrowly Does Not Solve the Problem of the “Law of Everything”’ (Brussels Privacy Hub, 15 December 2025) https://brusselsprivacyhub.com/why-simply-redefining-personal-data-narrowly-does-not-solve-the-problem-of-the-law-of-everything/
A few weeks after its publication, the Digital Omnibus proposal continues to send shockwaves through the data protection community. The proposal to drastically narrow the meaning of personal data, far from a neutral “codification” of the CJEU case law, is one of the most contentious changes that the Digital Omnibus intends to bring to the current landscape of digital law.[1] Some discussions online cite my earlier “law of everything” critique of the GDPR to support this far-reaching change. In this brief intervention, I explain why simply redefining identifiability and the concept of personal data narrowly does not fix the problem of data protection law as “the law of everything.”[2] Rather, it realizes the very risk created by relying on data protection law to solve all digital problems: destabilizing the system of legal protection against digital wrongs when its anchor concept collapses.
Context: the GDPR as “the law of everything”
There is more to framing the GDPR as “the law of everything” than simply underscoring its (too) broad scope that needs to be narrowed down. For a long time, the EU has relied heavily on the system of data protection law to ensure legal protection in the digital world. Whether or not this was explicitly the intention of the drafters of the 1995 Data Protection Directive and the GDPR, European data protection law has transformed from a field closely linked to, even synonymous with, informational privacy into a domain with an identity of its own, embodied in a distinct fundamental right to data protection in Article 8 of the EU Charter of Fundamental Rights: a protector of all fundamental rights in a digital context (Article 1(2) GDPR), a protector of people, not data, and “a bundle of rights that make up citizenship in the new millennium.”[3] This was possible due to a broad and flexible concept of personal data. The general principles of data protection, including fairness, lawfulness, transparency, and proportionality (in the form of data minimization and storage limitation), are also general enough to be useful across a broad range of situations and to offer remedies against many digital problems, from discrimination to platform power to algorithmic firing to manipulation and unfair, non-transparent decision-making. Data protection law was also one of the first domains of law born in response to the rise of computing, designed to regulate the digital context. Considering all this, it comes as no surprise that data protection law has become the legal equivalent of a “defence against the dark digital arts”, a magic toolbox where one could find solutions and remedies when no other law provided them. Many digital problems have long been framed as data protection problems,[4] which might have pre-empted other legal domains from adjusting to the changes that digitisation brought.
These data protection “superpowers” came at a cost. Data protection law became overinclusive, and, unsurprisingly, grew beyond the enforcement capacity of the data protection authorities: it regulates very diverse digital problems with the same set of rules, disregarding the diversity of causal processes leading to those problems and focusing only on one common denominator, i.e. the processing of personal data. This suggests that the regulation of digital problems would be more effective if it acknowledged those differences and tailored regulatory interventions to the distinct contexts, for instance, through the regulation of the design and use of computer code and by updating other legal domains, such as labour, administrative and consumer law, to the digital world. Paradoxically, data protection also became underinclusive. While the broad notion of personal data gave the law its superpowers on the books, it also became its kryptonite. Some actors lack (access to) the necessary expertise to navigate the many nuances of the complex CJEU case law on the concept of personal data[5] and, out of ignorance, do not apply the GDPR where they should; others, with access to high-end legal expertise, are able to navigate this complexity to their advantage and use the complex legal tests as signposts to avoid the application of the law, while the data processing they engage in might still impact people. Together, the over- and under-inclusiveness of the GDPR result in regulatory imprecision. On the one hand, the scope of application and content of the law do not reflect the different causes of the problems in the digital world it aims to tackle.
On the other hand, the difference between the situations where the GDPR protections do and do not apply might lie in the different degrees of identifiability of the affected people from the perspectives of different actors in the processing chain, which is not always materially relevant to the nature of the problem (think of algorithmic manipulation or unfair differentiation of people who are identifiable vs anonymous) and is hence an arbitrary distinction.
Against this background, we have argued that the concept of personal data does not serve as a good anchor for effective protection of people against digital wrongs and should be abandoned altogether or substantially revised (I discuss the architecture of the legal reform we proposed in the next section). Yet, none of these considerations was cited by the Commission as a rationale for narrowing down the material scope of the GDPR. The only specifically cited reasons were unnamed concerns of smaller entities engaging in low-intensity, low-risk data processing about the application of some unnamed GDPR obligations.[6] If anything, these concerns call for tailored exemptions for small enterprises rather than blanket limitations on the material scope of application of the GDPR for all.
We need a genuine reform, not a hasty reduction of legal protection
The critique of the GDPR as “the law of everything” does not problematize the broad scope of legal protection. Rather, it questions whether it is wise to designate a single legal domain hinging on one legal concept as the “law to rule them all”. Therefore, we called for a fundamental redesign of the system of legal protection against digital wrongs, in which personal data would no longer be the key anchor and single point of failure. Instead of putting all the proverbial eggs of legal protection in the one basket of data protection law, Europe should move towards a more distributed system by updating existing, and where necessary creating new, sectoral legislation governing the specific social contexts affected by digitisation, based on the values of those respective domains. Think of labour law governing the algorithmic management of workers, or administrative and procedural law governing the digitisation of decision-making by public authorities. Underlying this reformed system, there should be principle-based regulation of the design and use of computer code.
One could argue that, with the many digital laws adopted in recent years (the DSA, DMA, Platform Work Directive and the AI Act, to name a few), Europe has done exactly that, i.e. shifted towards sector-specific legislation, and has no need for the GDPR as the catch-all law or for the broad concept of personal data. Yet, this is far from true. Even after the avalanche of new digital legislation, the GDPR remains the center of gravity in the EU universe of digital laws. Many of the key new digital laws still refer in major ways to the GDPR and to GDPR concepts such as personal data, special categories of personal data or profiling. To illustrate, the protection against recommender systems is mostly based on transparency tools, which have questionable effectiveness in algorithmic contexts. The only restriction on recommender systems is the Article 38 DSA obligation to offer users a profiling-free alternative, where the definition of profiling is taken from Article 4(4) GDPR. Thus, this protection hinges on personal data being processed in the course of profiling. The same analysis holds for the ban on profiling-based ads targeting children (Article 28 DSA). To correct that, the digital law would need its own, GDPR-neutral definition of profiling. A significant number of the Platform Work Directive provisions would also only apply when personal data is processed. The AI Act, although it regulates the design and use of a certain type of software, is not a principle-based regulation of all computer code, and does not offer any remedy against the negative impacts of code-driven amplification and optimization.[7] The GDPR principle of minimization, mandating that personal data should not be processed beyond what is necessary for the purpose, and not at all if other, less intrusive ways to achieve that purpose are possible, remains the only remedy that is useful in this space.
In other words, the “new digital sheriffs” do not provide the same breadth of legal protection against digital wrongs and, for the protections they do provide, still draw significantly on the power of the GDPR. The cookie rule of Article 5(3) of the ePrivacy Directive is one of the few digital rules that regulates a digital practice (online tracking) without being tied to the processing of personal data, which is exactly the type of regulation that Europe should use more of. Interestingly, the proposed Omnibus “simplifies” this rule by moving it to the GDPR and hooking the regulation of surveillance on the concept of personal data. Logically, a restriction of the meaning of identifiability, and hence of the concept of personal data, will have a domino effect across all these domains, leading to an overall reduction of legal protection. The effect might be worsened by the fact that, with the definition of identifiability severely limited, we do not have a conclusive binding interpretation of what it means to be identified. Theoretically, the definitional options range from the most conservative, identification by name and other elements of civil identity in the offline world, to the broader notion of identification as individuation, i.e. recognizing a person as a distinct individual.[8] Without such a binding and broader definition, the powerful technology companies engaging in the most problematic digital practices, such as online tracking, behavioral advertising and algorithmic recommendations, will be able to structure their processing in such a way that they are not reasonably likely to establish their targets’ offline identity, while still subjecting them to surveillance and algorithmic treatment.
No vision of the future of legal protection, just deregulation
The way legal protection against digital wrongs is currently structured in Europe, with the major weight pulled by the GDPR, needs major rethinking and a vision. A foundation of this vision is the articulation of the identity of data protection law. If the privacy roots of data protection are to be preserved, rooting data protection in the concepts of identification and identity is important. Otherwise, identification is of little conceptual relevance: if the objective is to deliver a law that broadly ensures “decent treatment of people in a digital society”,[9] legal protection might not need the concept of personal data at all. However, the Commission’s proposal makes changes of principle, presented as simplification and clarification, without any vision of the future of legal protection that the reform is meant to facilitate, or of the role of data protection in that vision. To illustrate, the reform proposal is not accompanied by a fundamental rights impact assessment, even though it seeks to limit the concept of personal data at the heart of the fundamental right to data protection. The only vision that emerges clearly from the proposal is that of deregulation and of a fear of missing out on the supposed advantages AI would bring, which we are yet to see. The “law of everything” critique of the GDPR, when used with intellectual honesty, cannot be used to back this up.
[1] I believe that the language of the amendment to the definition of personal data does not codify the consequences of the CJEU judgment in C-413/23 P EDPS v SRB. It is a complex judgment whose effects and interaction with other case law, such as C‑319/22 Scania, are yet to be understood. In this contribution, however, I will not pursue this point further.
[2] In this reflection, I build on the ideas first developed in Nadezhda Purtova, ‘The Law of Everything. Broad Concept of Personal Data and Future of EU Data Protection Law’ (2018) 10 Law, Innovation and Technology 40, and in Nadezhda Purtova and Bryce Clayton Newell, ‘Against Data Fixation: Why “Data” Fails as a Regulatory Target for Data Protection Law and What to Do About It’ [2025] Oxford Journal of Legal Studies gqaf038.
[3] Stefano Rodotà, ‘Data Protection as a Fundamental Right’ in Serge Gutwirth and others (eds), Reinventing Data Protection? (Springer Netherlands 2009) <http://link.springer.com/10.1007/978-1-4020-9498-9_3> accessed 1 November 2023.
[4] Bert-Jaap Koops, ‘The Trouble with European Data Protection Law’ (2014) 4 International Data Privacy Law 250.
[5] The standard of identifiability after C‑319/22 Scania and C-413/23 P EDPS v SRB is yet another example of such complexity.
[6] Explanatory memorandum to the Digital Omnibus, p. 6.
[7] On why code needs general principle-based regulation – Nadya Purtova and Diletta Huyskes, ‘General Principles of Code’ (2024) 2024 Technology and Regulation 132.
[8] Nadezhda Purtova, ‘From Knowing by Name to Targeting: The Meaning of Identification under the GDPR’ (2022) 12 International Data Privacy Law 163.
[9] Koops (n 4).