The Decentralized Mind

The Case for Data Ownership: A Moral Imperative

Josh


1. Introduction: Why Does Data Ownership Matter?

We live in a world where data is one of the most valuable resources, yet individuals have little to no control over the data they generate. Every online interaction, creative work, and personal preference is collected, analyzed, and monetized by corporations, most often without explicit or informed consent. For much of the early digital age, data was treated as an intangible byproduct of online activity: an ephemeral trace rather than a tangible asset. This perception benefited those in power, allowing corporations to quietly consolidate control over personal data while legal and ethical frameworks lagged behind.

If data were truly insignificant, why would corporations and governments invest so heavily in harvesting, processing, storing, and profiting from it? If data is valuable enough to sustain billion-dollar industries and influence global economies, then why do individuals, the very source of that data, have no claim to it? The implications of this contradiction are profound, leading us to a more pressing question: Should individuals have an inherent right to own their data?

If we accept that personal autonomy is a cornerstone of ethical society, then it follows that control over one’s data should be recognized as an extension of that autonomy, a claim that demands thoughtful and careful justification. Likewise, if individuals are denied ownership of their data, they risk losing control over a crucial aspect of their personal and intellectual identity. The extent to which this loss constitutes a moral failure will be explored in the sections that follow.

This article argues that data ownership is not merely a regulatory concern but a necessary recognition of individual rights. By drawing on moral philosophy, political theory, and legal precedent, we will make the case that true autonomy in the digital age requires full ownership of personal data. Without this, individuals are reduced to passive subjects within a system that extracts value from them while offering little in return, treating them as mere means.

The stakes could not be higher. The current model of data commodification benefits a handful of powerful corporations and billionaires at the expense of nearly all global citizens. This structure echoes historical forms of economic and social exploitation, where control over labor, land, or intellectual property was concentrated in the hands of a few. Just as past generations fought for the right to own their labor and property, we must now demand the right to own our data. To do so, we must establish personal digital data (the behavioral, creative, and informational byproducts of human activity) as a form of personal property.

To build this argument, we will explore:

  • The philosophical grounding for data ownership, drawing from Kantian autonomy, Proudhonian critiques of property, and Lockean theories of ownership.

  • The moral implications of data extraction and commodification.

  • The legal and political precedents that support a framework for data ownership.

By the end of this exploration, it will become clear that data ownership is not just an issue of personal rights but a necessity for ensuring digital autonomy and ethical fairness in the modern world.


2. Kantian Autonomy and Data as an Extension of the Self

In moral philosophy, autonomy is often regarded as the foundation of moral personhood.

Immanuel Kant’s moral framework defines autonomy as the ability to act according to rational principles, free from coercion or external control. A person’s autonomy is what grants them moral worth; it is what makes them an end in themselves, rather than a mere means to be used by others.

At the core of Kantian ethics is the “Formula of Humanity,” which asserts that rational agents possess intrinsic worth by virtue of their rationality. It is important to clarify that Kant does not argue that people can never be used as means; after all, we rely on others for labor, services, and mutual exchanges. What Kant warns against is treating someone’s “humanity,” their rational agency, as a mere means to achieve our own ends, disregarding their autonomy and moral worth. That, for Kant, is a moral transgression.

Understanding Kant’s Concept of Humanity

Kantian autonomy is founded on the idea that individuals possess rational agency: the capacity for self-determination and moral decision-making. To use someone as a mere means is to disregard their ability to make autonomous decisions, instrumentalizing them for one’s own purposes without proper regard for their rational will.

For Kant, humanity is what sets us apart from all other entities: animals, inanimate objects, and non-sentient beings. This rational agency, the capacity to act morally and make ethical judgments, is what makes us uniquely human, and therefore, must be respected. Any system that subverts this rational agency for external gain violates a fundamental moral duty.

When corporations collect, analyze, and monetize personal data without consent, they are not simply using individuals in the neutral sense that, say, a store owner uses a supplier. Instead, they exploit rational agency itself, extracting economic value from individuals while actively subverting their capacity for informed decision-making. Users are rarely given meaningful knowledge of how their data is collected, monitored, and used. Terms of service agreements, while legally accessible, are intentionally designed to be vague, excessively complex, or buried beneath opaque legal jargon. The result is a system in which individuals appear to consent but do so without true understanding, making their participation effectively involuntary.

Platforms employ dark patterns: deceptive design strategies that manipulate users into giving away more data than they realize. Algorithms track behavioral patterns without clear disclosure, and AI systems profile individuals in ways they cannot meaningfully contest. Kant’s moral philosophy explicitly condemns deception as a violation of autonomy. In this sense, modern dark patterns and coercive design strategies are not merely inconveniences; they are moral failings that structurally manipulate users into surrendering control over their digital selves.

Consider the iPhone as an example. Most users assume they have a reasonable understanding of how their device operates, yet they remain largely unaware of the multi-layered data tracking infrastructure embedded in their digital experience. The iPhone itself collects telemetry data: logs of interactions, system diagnostics, and behavioral analytics, often sent back to Apple. Every app installed adds another layer of potential data extraction, each with its own privacy policies and permissions that may change with updates, often without the user’s knowledge. Further still, users move through the physical world while being tracked by wireless networks, GPS location services, and Bluetooth beacons, many of which can relay data to third parties without clear disclosure or even awareness of the tracking event. The result is an environment where individuals have no meaningful ability to oversee, audit, or control the full extent of their digital footprint.

By withholding transparency and coercing participation, corporations deny individuals the ability to exercise rational, autonomous judgment over their own digital existence. This is precisely the kind of ethical violation that Kant’s Formula of Humanity condemns: such a system does not merely use individuals; by subverting their rational autonomy, it reduces them to passive subjects in an economic system that profits from their ignorance.

The Immorality of Data Exploitation

If individuals are to be treated as ends, they must have both knowledge and control over how their digital extensions, i.e., their data, are used. Without meaningful awareness of how their data is collected, tracked, and monetized, individuals cannot make rational choices about their participation in digital systems. Data is not mere digital exhaust, the passive artifact we are often led to believe it is; it is an expression of rational agency in the digital world, a manifestation of choices, actions, and personal identity. When corporations extract, analyze, and monetize personal data without transparency or consent, they do not merely use individuals in a transactional sense; they strip them of both knowledge and agency over their own digital identity, reducing them to unwitting subjects in an opaque system of economic exploitation.

This is not a neutral economic transaction; it is a profound moral violation, as it denies individuals their right to self-determination in the digital realm: a realm from which we seem increasingly unable to detach ourselves. A person cannot meaningfully act as a rational agent if they are unaware of the extent to which their digital existence is being surveilled, stored, monetized, and manipulated.

Just as Kant condemned deceit and coercion in human interactions as immoral, so too must we recognize the systemic deception involved in modern data extraction. Users are not merely exploited; they are misled by design. Most have no genuine choice but to agree to vague, exploitative terms of service that obscure the full extent of data commodification. Even if a person wanted to make an informed decision about how their data is collected and used, they are structurally prevented from doing so in most cases. This lack of informed consent does not merely disadvantage individuals; it fundamentally subverts their rational autonomy, making them a “mere means.”

By denying individuals knowledge, control, and the ability to meaningfully consent, corporations systematically violate the Kantian principle of acting with full rational awareness. This is not just a failure of ethics; it is a deliberate, structural suppression of individual autonomy for corporate economic gain.

Counterarguments and Rebuttals

1. The Non-Rivalrous Argument: Why Data Usage Still Violates Autonomy

Critics may argue that data, unlike one’s physical self or labor, is non-rivalrous, meaning its use by corporations does not deprive the individual of its possession. In other words, if a person still has access to their data, why should they be concerned that a corporation also possesses it?

However, this argument ignores the moral dimension of data usage. Even if an individual retains access to their data, the fact that it is extracted, monetized, and used for external purposes without their knowledge or meaningful consent fundamentally undermines their autonomy. This is not merely an issue of access; it is an issue of control and self-representation.

To illustrate, consider the difference between owning your medical records versus a third party secretly collecting your health data and making decisions about your insurance rates or job eligibility without your input. Even if you still "have" your medical records, the fact that they are being used to shape decisions about your life without your knowledge is an unacceptable breach of autonomy.

Likewise, if personal data is an extension of one’s identity, then the unauthorized use of that data is not just economic theft; it is a violation of self-governance. Much like deception in contractual agreements, the issue is not physical deprivation but rather the subversion of rational agency and the inability to control how one is represented in the digital world.

While data is non-rivalrous in a physical sense, its exploitation creates a different kind of deprivation: the loss of exclusive control. If a person’s data can be endlessly replicated and monetized without their consent, their autonomy over their own digital self is functionally erased. Unlike natural resources, which deplete through overuse, data can be exploited without limit; what its over-exploitation erodes is not physical scarcity but personal agency.

2. The Illusion of Consent: Why "Agreeing" is Not True Autonomy

Some may claim that because users “freely” provide data by using online services, they have implicitly agreed to its commodification. Under this reasoning, simply by signing up for social media, using search engines, or installing an app, users voluntarily participate in a system that collects their data.

However, this argument misrepresents what true consent entails. Consent requires both knowledge and choice, neither of which are genuinely available under the current corporate data regime.

  • Lack of Knowledge: Most users do not fully understand what data is being collected, how it is being used, or how long it is being stored. Further, they rarely know when one company or organization purchases data from another. Terms of Service agreements are deliberately vague, filled with legal and technical jargon that obscures the reality of data commodification from laypersons.

  • Lack of Choice: Even if a user objects to these terms, they have no realistic alternative. Rejecting these conditions often means exclusion from essential digital spaces: social media, banking, work platforms (have you ever held a job that required you to download an app?), government services, and communication tools. Just as one cannot consent to an unfair contract under coercive conditions, one cannot meaningfully consent to data extraction when the alternative is total exclusion from digital society, especially as willful separation from that society becomes ever harder to achieve.

To put it another way, if the only way to “opt out” of exploitation is to forgo participation in the modern world entirely, then consent is an illusion.

Conclusion: Data Ownership as a Moral Obligation

From a Kantian perspective, individuals must be recognized as full moral agents in the digital landscape. This means they must have explicit ownership rights over their data, ensuring they control how it is used, shared, and monetized. Without such rights, data extraction is not merely an economic practice; it is a violation of moral autonomy. The failure to grant individuals agency over their digital existence does not just deprive them of economic benefits; it dehumanizes them, reducing them to mere resources in the digital economy.

This exploitation is not new; it is simply the latest evolution of a historical pattern. Throughout history, those in power have monopolized critical resources: land, labor, and now data, while those who produce value remain dispossessed. In the next section, we will extend this argument by examining Proudhon’s critique of property, which reveals how the monopolization of data by corporations mirrors past systems of economic exploitation and enclosure.


3. Proudhon’s Critique of Property and Data Monopolization

The French anarchist philosopher Pierre-Joseph Proudhon famously declared, “Property is theft.” Though often misunderstood, this statement was not an outright rejection of property itself but rather a critique of how property is monopolized to the detriment of the many. Proudhon distinguished between personal possessions, which he saw as legitimate, and property as a system of concentrated control, which allows a small elite to extract wealth from those who produce value.

This critique is especially relevant to the modern debate over data ownership. Tech corporations claim ownership over vast amounts of data generated by individuals, yet the individuals who produce this data have no meaningful rights over it. Just as landlords in Proudhon’s time profited from the labor of tenants while denying them ownership of the land they worked, modern corporations extract value from user-generated data while excluding individuals from control, compensation, or decision-making over its use.

The Monopolization of Data as Digital Enclosure

In the 19th century, land enclosures allowed the ruling class to seize vast amounts of common land, displacing local populations and consolidating economic power. What was once shared became privatized, creating a system in which those who had previously lived and worked on the land were forced into dependency on landowners.

In the digital age, we are witnessing a parallel form of enclosure, where a small number of corporations monopolize access to user-generated data.

These corporations do not create the data themselves; they extract it from individuals, aggregate it into massive databases, and claim exclusive rights over its use. Much like feudal lords extracted wealth from peasants who worked the land, modern data monopolists extract value from users without granting them any meaningful control or compensation. This asymmetrical relationship is not just economic exploitation; it is the direct digital counterpart of the enclosures Proudhon condemned, in which resources produced by the many are controlled by the few.

While some argue that historical enclosures facilitated industrial development, they did so at the cost of widespread dispossession. Similarly, data monopolization may fuel AI development and platform efficiencies, offering users certain conveniences and services. However, these benefits do not negate the fundamental injustice of expropriating value from individuals without their consent or fair compensation, continuing the same pattern of economic exploitation seen throughout history.

Why Data Enclosure Is a Form of Economic Exploitation

  • Users produce data, but corporations own it. Individuals generate vast amounts of data through their interactions, preferences, and behaviors, fueling AI systems, targeted advertising, and corporate decision-making. Yet, despite being the primary producers of this valuable resource, they have no claim to its profits or control over its use.

  • Corporations reinforce dependency. Just as land enclosures forced peasants off common lands and into wage labor under exploitative conditions, data monopolization forces individuals into digital ecosystems where they have no bargaining power. Social media, search engines, and essential digital services operate as “walled gardens,” giving users no real alternative but to accept exploitative terms.

  • A power imbalance emerges. The consolidation of data ownership enables a small number of corporations to dictate market conditions, reinforcing surveillance capitalism and eroding individual autonomy. By controlling both the infrastructure and the data that fuels it, these corporations set the rules, deciding who has access, what is prioritized, and who benefits.

The enclosure of data mirrors historical patterns of economic exploitation: a resource produced collectively is privatized and controlled by the few, leaving individuals disempowered, dependent, and excluded from its benefits.

However, if individuals are the rightful producers of data, then they should have a legitimate claim to ownership. This is precisely where Lockean theories of property become relevant. In the next section, we will explore how Locke’s labor theory of ownership provides a foundational argument for why individuals, not corporations, should have rightful control over their data.


4. Locke’s Labor Theory and Its Application to Data

According to Locke, property rights originate when a person exerts labor upon something unowned, thereby making it an extension of themselves. Just as a farmer cultivates land or an artisan creates a piece of work, individuals produce digital content and behavioral data through their interactions online. This data is not passive residue; it is the byproduct of human activity, expression, engagement, and free will.

Yet, under the current model of data commodification, users do the work of generating valuable digital footprints, while corporations appropriate the resulting data for profit. This arrangement is akin to a landlord asserting ownership over crops cultivated by a tenant farmer: an unjust appropriation of another’s labor.

However, unlike physical labor, digital labor is often invisible and uncompensated, making its exploitation more insidious. Users do not merely consume digital platforms; they fuel them, generating data that powers AI training models, targeted advertising, and algorithmic decision-making, yet they retain no ownership or rights over the value they create.

The Exploitative Nature of Corporate Data Appropriation

Locke argued that property ownership is only justified if it does not harm others or deprive them of necessary resources. This is known as the "enough and as good" principle, the idea that individuals can appropriate resources only if their use does not diminish access or opportunity for others.

The current data economy violates this principle in two fundamental ways:

  • Extraction without compensation: Corporations profit from data while individuals receive no direct benefit. Users generate value through their digital activities, fueling AI development, targeted advertising, and algorithmic decision-making, yet they are not compensated for their contributions. This mirrors historical labor injustices, where workers produced wealth but were denied ownership over the fruits of their labor. Just as industrial capitalists profited from exploited labor, tech corporations profit from uncompensated digital labor, treating personal data as a resource to be extracted rather than an extension of the self.

  • Lack of control: Once data is collected, individuals have no meaningful say over how it is used. Under Locke’s framework, labor should yield ownership and agency over the product of one’s efforts. Yet in the current system, users are stripped of their rights the moment their data is collected, making it impossible to govern, sell, or even delete the very resource they produce. This is functionally indistinguishable from involuntary servitude, in which labor is taken but rights over the product are denied.

By appropriating user-generated data without consent, compensation, or control, corporations violate the very principles of property rights that Locke used to justify ownership. If personal data is the product of digital labor, then under Lockean ethics, it should belong to the individual, not to corporations that merely extract and enclose it.

Some may argue that data does not require “labor” in the same way physical work does. However, digital interactions, whether through content creation, behavioral engagement, or platform participation, are integral to the revenue generation of tech corporations. If platforms and data-based services cannot function without user-generated data, then the claim that this data is merely an incidental byproduct, rather than an economic contribution, fails to hold.

Reclaiming Data as a Natural Right

Applying Locke’s framework to the digital age leads to a clear conclusion: individuals should have full ownership rights over their data because it is the product of their labor, identity, and decision-making. To be denied control over one’s data is to be denied control over one’s own digital existence.

A just system of data ownership would ensure that individuals have the right to:

  • Decide how their data is used (or opt out of its use entirely).

  • Profit from its monetization (both sale and use) if corporations seek to commercially exploit it.

  • Transfer or delete it at will, ensuring autonomy over one’s digital footprint.

This Lockean perspective reinforces the idea that data rights are not merely a regulatory concern but a fundamental moral imperative. If property rights are to retain any meaning in the digital age, then data, the most valuable resource of the modern economy, must be recognized as belonging to the individuals who create it.

However, recognizing data as personal property requires legal and political frameworks to support and enforce these rights. In the next section, we will examine how existing legal precedents lay the groundwork for treating personal data as property, and what gaps must be addressed to ensure full data sovereignty.


5. Legal and Political Precedents for Data Ownership

The philosophical case for data ownership must be supported by legal recognition. Over the past two decades, governments have responded to growing concerns over data privacy by enacting laws aimed at protecting user rights. However, while these laws offer individuals some level of control over their personal data, they fall short of recognizing data as property that individuals can fully own, control, or monetize.

This section examines existing legal frameworks and their limitations, highlighting why privacy protections alone are insufficient to establish true data ownership.

Existing Legal Frameworks and Their Limitations

Several major legal frameworks acknowledge that individuals have rights over their data, but they stop short of treating data as personal property:

1. General Data Protection Regulation (GDPR) – European Union

  • What it provides: The GDPR grants individuals the right to access, correct, and delete personal data collected by companies.

  • Where it falls short: It does not establish ownership; while users can control certain aspects of their data, companies still retain control over how it is monetized and used beyond its immediate collection.

2. California Consumer Privacy Act (CCPA) – United States

  • What it provides: The CCPA strengthens consumer rights by requiring companies to disclose what data they collect and allowing users to opt out of data sales.

  • Where it falls short: Like the GDPR, it does not establish personal data as property; individuals can limit data sharing, but they do not have full control over or the ability to monetize their data.

3. Intellectual Property and Right to Publicity Laws

  • What they provide:

    • Copyright law protects original creative works (e.g., writing, artwork, and music).

    • Patent law protects inventions and technological innovations.

    • The Right to Publicity allows individuals to control how their name, image, or likeness is used for commercial purposes.

  • Where they fall short: None of these laws apply broadly to personal data. While copyright protects specific creations, it does not cover behavioral data, digital footprints, or general online activity. The Right to Publicity controls likeness and identity, but it does not extend to how corporations track and use digital behaviors.

Yet, the logic behind the Right to Publicity seems to directly align with the case for data ownership.

The Right to Publicity acknowledges that an individual’s image and identity hold economic value and should not be used without consent. Yet, in the digital age, behavioral data, ranging from search histories to biometric patterns, has become just as valuable as a static likeness, if not more so. If one’s face or name deserves legal protection, then surely the patterns of one’s digital existence, shaped by one’s choices, actions, and personal identity, must also be safeguarded against unauthorized commercial use.

Courts have recognized that individuals have control over their likeness even in digital spaces, as seen in cases involving deepfakes and virtual representations. Extending this reasoning to personal data aligns with existing legal principles that recognize an individual’s right to govern their own identity in commercial contexts.

Historical Precedents for Recognizing Data as Property

The recognition of property rights has historically expanded in response to technological and economic changes. Whether through land enclosures, labor protections, or financial rights, societies have continuously adapted to ensure that individuals maintain control over the value they create.

Today, the same logic must be applied to data: a resource produced by individuals but controlled by corporations.

A Legal Grey Area: Does Facial Recognition Violate the Right to Publicity?

One pressing issue in the digital age is the proliferation of facial recognition technologies and mass surveillance systems, which capture and process individuals' likenesses without explicit consent. If the Right to Publicity ensures control over one's name, image, and likeness in commercial contexts, then it raises a crucial question: should facial recognition databases and surveillance tools be considered a violation of this right?

Many corporations and government entities collect facial data through surveillance cameras, biometric scanners, and digital tracking systems. These images and data points are often stored in proprietary databases and, in some cases, sold to third parties without the knowledge or consent of the individuals being recorded. If a celebrity cannot have their image used in an advertisement without permission, why should an individual’s biometric data be captured, stored, and monetized without their consent?

Additionally, when companies license or sell access to facial recognition databases, this constitutes a commercial use of biometric data: one that profits from individuals’ likenesses without their consent. This raises direct parallels to established Right to Publicity protections, which prohibit the unauthorized commercial exploitation of one’s identity.

Legal frameworks such as the Biometric Information Privacy Act (BIPA) in Illinois and the General Data Protection Regulation (GDPR) in the EU have already recognized biometric data as sensitive and requiring explicit consent. However, while biometric privacy laws such as BIPA and GDPR regulate the collection of facial data, they do not always address its commercial use. The Right to Publicity offers an additional legal avenue for protecting individuals from unauthorized capture and exploitation of their likenesses in digital spaces, particularly when this data is monetized by private entities.

1. The Evolution of Property Rights

  • Property rights have expanded throughout history to address new economic realities. The enclosure of land, the rise of industrial labor, and the digital revolution all forced societies to redefine ownership and compensation models.

  • Intellectual property laws emerged as a response to the increasing value of non-physical goods: ideas, inventions, and creative works. Similarly, digital labor and data production should be recognized as property that belongs to its creators, rather than the corporations that extract and enclose it.

2. The Fight for Workers’ Rights and Wage Protections

  • The labor movements of the 19th and 20th centuries fought for fair compensation, recognizing that those who produce value should benefit from their labor. These movements led to wage laws, union protections, and limitations on exploitative labor practices.

  • This same principle applies to data. If corporations profit from the behavioral data, preferences, and content generation of users, then individuals should not be treated as unpaid digital laborers. The concept of a data dividend or compensation for digital labor is the natural extension of past labor protections in the physical world.

3. Consumer Protection Laws and Financial Rights

  • Consumer protection laws exist to prevent financial exploitation, ensuring that individuals have transparency and control over their financial assets.

  • Personal data should be recognized as an economic asset, not because of its intrinsic nature, but because corporations have turned it into one. The modern digital economy is built on the extraction, analysis, and commercialization of user data, generating billions in revenue for AI training, targeted advertising, and algorithmic decision-making. If corporations treat data as a monetizable asset, then the individuals who generate it should have rightful claims over its value.

  • If individuals have the right to control financial assets, they should have the right to control their data assets as well. The denial of data ownership is effectively a form of economic disenfranchisement: forcing individuals into an extractive system where they generate wealth but receive none of its benefits.

Toward a New Legal Framework for Data Ownership

The historical evolution of property rights, labor protections, and consumer laws all point to the need for a new legal framework that explicitly recognizes personal data as property. Existing privacy laws offer limited protections but fail to address the fundamental power imbalance between individuals and corporations. For data ownership to be meaningfully established, legal frameworks must:

  • Explicitly define data as property belonging to the individual who generates it. Without legal recognition, corporations will continue to claim ownership by default.

  • Provide mechanisms for compensation, allowing individuals to benefit financially from their data. Just as labor laws ensure wages for work, data laws must prevent the uncompensated extraction of digital labor.

  • Enforce transparency and consent, ensuring users fully understand how their data is used and have meaningful control over it. Consent must be informed, not coerced through vague policies or monopolistic digital ecosystems.

  • Establish penalties for unauthorized data collection, similar to laws protecting financial assets from fraud or theft. Without strong enforcement, corporations will continue to exploit data as an unregulated resource.

Recognizing data as property is not just a legal necessity; it is a moral imperative. Without ownership, individuals will remain passive subjects in a system that extracts value from their digital existence without accountability. In the next section, we will explore the practical steps that individuals, policymakers, and advocacy groups can take to push for data ownership as a fundamental right.


6. Practical Steps Toward Data Ownership

Recognizing the moral and legal imperatives for data ownership is only the first step. Without concrete action, data rights will remain theoretical while corporations continue to extract, commodify, and control the personal data of individuals without accountability.

Implementing meaningful change requires coordinated efforts from individuals, policymakers, technologists, and advocacy groups. This section outlines key steps that can drive the push for data ownership as a fundamental right.

1. Strengthening Legal Protections

  • Advocating for Data Property Laws → Push for legislation that explicitly recognizes personal data as property owned by the individual, preventing corporations from claiming it by default.

  • Expanding Data Compensation Models → Support data dividend programs that ensure individuals receive compensation when their data is monetized.

  • Reforming Consent Practices → Strengthen laws to require active, informed consent before companies collect and use personal data, ensuring individuals are fully aware of what they are agreeing to.

2. Developing Technological Solutions

Technology must be part of the solution, not just the problem. While legal frameworks provide enforcement, technological innovations can offer users direct control over their data.

  • Decentralized Data Storage → Support blockchain-based and decentralized data systems that allow users to retain control over their own data rather than relying on centralized corporations.

  • User-Controlled Data Markets → Develop platforms where individuals can voluntarily sell, license, or manage their data, ensuring both compensation and consent.

  • Privacy-Enhancing Technologies (PETs) → Promote encryption, anonymization, and self-sovereign identity systems that give users greater autonomy over their digital presence and prevent unauthorized tracking.
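To make the last point concrete, the sketch below illustrates one common privacy-enhancing technique: pseudonymizing a user identifier with a salted hash so that data processors never see the raw identifier. This is a minimal illustration of the general idea, not a production design; the function and record names are hypothetical.

```python
import hashlib
import secrets


def pseudonymize(identifier: str, salt: bytes) -> str:
    """Derive a stable pseudonym from an identifier using a salted SHA-256 hash.

    The salt is held only by the data owner; without it, a processor
    cannot link the pseudonym back to the original identifier.
    """
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()


# The data owner generates and keeps the salt; downstream services
# receive only the pseudonymized record.
salt = secrets.token_bytes(16)
record = {"user": pseudonymize("alice@example.com", salt), "pref": "dark_mode"}
```

Because the same salt always yields the same pseudonym, the owner can still correlate their own records across services while deciding who, if anyone, gets the salt needed to re-identify them.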

3. Individual Actions to Protect Data Rights

  • Using Privacy-Focused Tools → Opt for browsers, search engines, and platforms that prioritize user privacy (e.g., DuckDuckGo, Signal, ProtonMail).

  • Opting Out of Data Collection → Take advantage of existing opt-out mechanisms provided by laws like GDPR and CCPA, limiting how corporations track and use personal data.

  • Educating Yourself and Others → Spread awareness of data ownership rights and empower individuals to demand better protections and policies.

4. Collective Action and Advocacy

  • Supporting Digital Rights Organizations → Groups like the Electronic Frontier Foundation (EFF) and Fight for the Future actively work to protect digital rights; supporting them amplifies the cause.

  • Pressuring Corporations for Transparency → Demand that companies adopt ethical data policies through petitions, campaigns, and shareholder activism.

  • Demanding Algorithmic Accountability → Advocate for greater transparency in AI and data-driven decision-making, ensuring individuals are not unknowingly manipulated by opaque systems.


Conclusion: A Call to Action

Data ownership is not just a theoretical ideal; it is a fundamental necessity for protecting individual autonomy in the digital age.

Without legal recognition and technological safeguards, we risk entrenching power imbalances that favor corporations over individuals, allowing unchecked surveillance, economic exploitation, and algorithmic control.

By advocating for stronger laws, supporting technological solutions, taking individual action, and engaging in collective advocacy, we can reshape the digital landscape into one in which individuals have true control over their digital lives.

The fight for data ownership has already begun. If we fail to act now, the power imbalance will only deepen, allowing corporations to entrench predictive surveillance, AI-driven decision-making, and digital exclusion based on hidden data profiles. The longer individuals are denied rightful control over their digital selves, the harder it will be to reclaim autonomy in an era increasingly governed by invisible algorithms. The question is no longer whether data ownership should be recognized, but whether individuals will have any meaningful control over their digital selves in the future.


Motivation for This Paper

This paper was motivated in part by Kate Crawford’s Atlas of AI, which explores how artificial intelligence relies on the mass extraction of resources (data, labor, and the environment) to sustain its operations. Crawford’s analysis of how AI systems commodify human-generated data without consent served as a foundational influence in shaping the arguments presented here. While Atlas of AI critiques the power structures that dominate modern AI development, this paper extends that critique into the legal and philosophical necessity of recognizing personal data as property, a crucial step in reclaiming individual autonomy in the digital age.

You can read my book review of Crawford’s work here: https://paragraph.xyz/@jldart/book-review-atlas-of-ai-by-kate-crawford


References & Additional Reading

The following works were used to inform and guide the thought process of this paper, providing philosophical, legal, and historical context for the arguments presented. Additionally, selected readings offer further exploration into data ownership, digital rights, and the ethics of AI-driven economies.

Primary References (Used to Support the Core Arguments in This Paper)

Philosophical & Ethical Foundations:

  • Immanuel Kant – Groundwork of the Metaphysics of Morals

  • John Locke – Second Treatise of Government

  • Pierre-Joseph Proudhon – What is Property?

    • Critiques monopolized property and explores the enclosure of common resources, providing historical context for data monopolization.

Legal & Policy Frameworks:

  • General Data Protection Regulation (GDPR) – European Union

    • The EU’s comprehensive data protection law, addressing privacy but not ownership.

    • Official Text

  • California Consumer Privacy Act (CCPA) – United States

    • Strengthens consumer rights over data collection but does not establish property rights.

    • Official Information

  • Biometric Information Privacy Act (BIPA) – Illinois

    • Recognizes biometric data as sensitive information requiring explicit consent, aligning with the Right to Publicity.

    • ACLU Illinois BIPA

  • The Right to Publicity – International Trademark Association (INTA)

    • Explains legal protections for name, image, and likeness in commercial contexts, which support the argument for extending these rights to digital footprints.

    • INTA Resource

Economic & Historical Context:

  • Karl Polanyi – The Great Transformation

    • Examines how commodification of essential human activities leads to societal disruption, paralleling data commodification today.

  • Shoshana Zuboff – The Age of Surveillance Capitalism

    • Analyzes how personal data has become the foundation of modern economic exploitation.

Modern AI & Digital Rights Scholarship:

  • Timnit Gebru & Margaret Mitchell – AI Ethics Research

  • Evgeny Morozov – To Save Everything, Click Here

    • A critical analysis of technological power structures and their impact on autonomy.

  • Cambridge Analytica Scandal – Wikipedia Overview

    • A case study on the weaponization of personal data, illustrating the stakes of data ownership.

    • Article Link


Additional Reading (For Further Exploration)

Expanding on Digital Rights & Ownership:

  • Jaron Lanier – Who Owns the Future?

    • Proposes data ownership models where individuals are compensated for their contributions to digital platforms.

  • Cory Doctorow – How to Destroy Surveillance Capitalism

    • A critical response to Zuboff’s work, offering alternative solutions to data monopolization.

Legal Cases & Policy Reports:

  • Facial Recognition & Privacy: Clearview AI Lawsuits

    • Discusses how facial recognition companies scrape and sell biometric data without consent.

    • ACLU Lawsuit Summary

  • EFF (Electronic Frontier Foundation) Digital Rights Reports

    • Regular updates on legal battles surrounding data ownership, privacy, and AI governance.

    • EFF Website
