The Bigger Picture: AI, Data, and Power Structures
As artificial intelligence technology evolves, its ability to learn from vast amounts of data—including artwork—has sparked heated debates among artists, creators, and even people who had previously paid little attention to questions of art or ethics.
These debates often focus on how AI models access data—images, music, and other creative works—without permission. It’s important to clarify that while AI models themselves don’t typically scrape the web, they do rely on data purchased or procured from third-party companies that do. Data scraping, though often criticized, is a legal practice that predates generative AI and underpins many industries beyond art and creativity. This distinction matters because it highlights a broader, long-standing issue beyond the models themselves: how data—creative or otherwise—is collected and commodified in the digital age.
From here, most discussions gravitate toward the financial compensation of artists and creators, as well as the ethics surrounding those who build and use generative AI systems. Additionally, art purists have worked tirelessly to delegitimize AI-generated creations, arguing they lack the authenticity of human-made works. These debates have intensified daily since the rise of diffusion-based image models (a class of AI that generates images by iteratively refining random noise, guided by patterns learned from vast training datasets), like DALL-E, Midjourney, and Stable Diffusion. As someone fortunate enough to be an early adopter and tester of these models, I can attest that the negative narrative surrounding generative AI was almost non-existent before the technology advanced to what it is today. The rhetoric has since become increasingly vitriolic, and these heated arguments show no signs of slowing down.
But what if this entire debate is overlooking a larger issue? The most important conversation may not be about what qualifies as art (a debate as old as time), nor about whether generative AI is good or bad. Instead, we should be asking: Who owns and controls the vast data that AI relies on, and what are the implications of that control?
The Illusion of a Fair Game
There is a common assumption that once art or content is shared publicly online, it remains protected—either by copyright law or the terms of the platforms where it’s posted. This belief creates an illusion of a fair game.
In reality, most social media and content-sharing platforms, through their terms of service, claim certain rights over the content shared, often allowing them to use, distribute, or profit from the work without compensating the creator. The moment someone uploads anything, they may unknowingly surrender these rights, enabling platforms not only to display their content but also to monetize it, often without the creator’s knowledge or consent.
This raises a deeper question of fairness: Is it morally just for platforms to profit from user-generated content while offering creators little to no protection or compensation?
These platforms generate significant revenue through advertising, data collection, and even licensing content, often at the expense of the artists and creators who provide the material. The terms and conditions that many users agree to—often without fully understanding them—allow platforms to commodify both creative work and personal data on a massive scale. Creators become part of a system where their labor or information is monetized, but they are excluded from the chain of compensation.
Many argue that this type of influence—borrowing, referencing, and building upon the work of others—has existed long before AI, and they aren’t wrong. Artists, like philosophers and mathematicians, have always borrowed and built upon the works of others; this is a foundational part of the creative process. As the saying goes, “Good artists copy; great artists steal.”
But AI doesn’t borrow or steal; it consumes.
While AI itself doesn’t make decisions independently, it functions under the direction of the entities controlling it. What differentiates AI is the scale at which it processes and synthesizes data, often without distinguishing between public works and more personal, intimate creations. This mass consumption occurs at a speed and scope unimaginable even a decade ago, disrupting traditional boundaries of artistic influence.
By focusing our ethical debates solely on compensating artists or regulating AI-generated outputs, we risk overlooking the larger issue: the commodification of creative and personal content by both platforms and AI systems, and the increasing exploitation of creators and consumers in the digital age. Platforms and technology companies alike profit from this system, while creators are often left with little recourse or reward. Should we really be pointing fingers at AI, or is the technology being used as a convenient distraction—allowing tech companies to continue enriching themselves and their shareholders at the expense of all social media users?
For example, the Cambridge Analytica scandal demonstrated how data exploitation extends far beyond simple monetization—it can be weaponized for political and social manipulation. Millions of Facebook users had their personal data harvested without consent, and this information was used to influence political campaigns, including the 2016 U.S. presidential election. This scandal highlights the grave societal consequences of unchecked data collection and commodification, where personal information is used not just for profit but to alter the democratic process itself.
This case exemplifies how data is weaponized not only for profit but also for control and manipulation—making the question of who owns and controls data more pressing than ever.
The Real Question: Who Owns the Data?
The deeper, more critical conversation that needs to happen isn’t about tweaking AI models to ensure artists get paid. It's about the fundamental nature of data in the digital age.
Who owns the data that AI models are built upon?
Who stores and controls it?
Who profits from it?
Who holds the power to control how this data is used, affecting personal, social, and even political outcomes?
AI has exposed an uncomfortable truth: the internet has become a vast reservoir where individual creations—whether they are art, writing, or personal information—are scooped up, commodified, and profited from by those who hold the power to control data.
These entities are not just corporations; they represent a broader and more insidious network of economic and political power structures.
A small number of major tech companies and data brokers control immense troves of data, giving them outsized influence over both the digital and physical world. For example, Google has faced legal scrutiny for its AI training practices, which include scraping public data such as images and text without explicit permission from creators. This practice has led to lawsuits accusing Google of profiting from copyrighted content and personal data through its AI models, like the Bard chatbot, without compensating or even informing the original creators.
Lawsuits such as those filed by Gannett and others claim that tech giants are using their data monopolies to exploit artists' work while simultaneously consolidating control over the digital economy.
Their ability to collect, aggregate, and manipulate vast amounts of data puts them at the center of the modern digital economy, where information is the most valuable commodity.
This raises a critical ethical question: In a world where everything we create and share becomes data, who truly benefits from this system?
The status quo allows tech giants to siphon personal and artistic data without consent, transforming creators and everyday users into cogs in a machine that profits from their labor, often without their knowledge or permission. The terms and conditions we accept without reading, the platforms we rely on to showcase our work, and even the devices we use for regular daily tasks are all part of a larger system designed to extract value from the digital representation of nearly every aspect of our lives.
I am naturally and philosophically inclined to challenge and question these power structures.
The issue isn’t just that AI is scraping data; it’s that the very structure of the internet and new digital economy is designed to concentrate power and wealth in the hands of a few corporations. AI, in this context, is simply a reflection of deeper inequalities in how data—and, by extension, power—is controlled.
This, to me, is the more pressing ethical issue.
Current AI isn’t just a technological tool; it is a manifestation of the larger system that commodifies everything, now including human activity itself. To focus only on artist compensation or AI regulation is to miss the broader issue: that the same structures exploiting AI today have been quietly profiting from our data for years.
If all human activity is monitored, controlled, and commodified, can we still consider our actions truly free?
For example, the rise of surveillance capitalism (a system where companies collect personal data, often without explicit consent, to build detailed consumer profiles and sell targeted advertising, maximizing profits while reducing user autonomy) illustrates how our actions are shaped by unseen forces.
Platforms track every click, interaction, and purchase, subtly shaping our choices based on algorithms that prioritize corporate profit over individual autonomy. This data is then sold or used to maximize profit through targeted advertising, with the platforms often earning billions while users remain unaware of the ways their data has been exploited. Whether it’s the ads we see, the recommendations we receive, or even the news we’re exposed to, we often unknowingly participate in systems that control and commodify our digital lives.
Given these realities, it’s clear that the problem extends far beyond compensating creators for their work. The deeper issue is that individuals have lost control over their personal data and creative content, allowing tech companies and platforms to commodify human activity for profit. This erosion of individual control necessitates the creation of a new ethical framework—one that prioritizes transparency, consent, and personal autonomy in the digital ecosystem. We can no longer rely on outdated regulations or minor adjustments to existing models; a systemic overhaul is required to restore power to the individuals who generate the data.
Moving Forward: A New Ethical Framework
While no single person or group has all the answers, I believe the starting point is clear. Artists, creators, and everyday internet users and consumers must have a say in how their data is collected and used. Consent, transparency, and autonomy must be central to any ethical framework governing AI and data usage. This isn’t just an art-world issue; it’s a critical issue for the entire human-digital ecosystem—affecting how we interact online and how we maintain control over our personal information.
Governments have already begun taking steps to address data control. For example, the GDPR in Europe and the CCPA in California have been landmark policies in regulating how companies collect, process, and share personal data. These laws ensure that users have the right to know what data is being collected about them, request that it be deleted, and opt out of having their data sold to third parties. Such regulations represent a growing recognition of the need for individual control in a data-driven world, yet more comprehensive global solutions are necessary to tackle the broader ethical issues posed by tech companies leveraging AI for digital commodification. These laws have led to billion-dollar fines for companies like Google and Facebook and forced greater transparency, but the lack of uniform global regulations leaves major loopholes for exploitation.
The focus must shift away from how much an artist’s work influences an AI model and toward how ALL individuals can gain control over their data. In a world where so much of our human activity is commodified, gaining control of our data is a matter of reclaiming autonomy and freedom. We must ask: What systems can we create to ensure that data cannot be exploited without permission? What legal or decentralized structures could provide individuals with greater autonomy over their digital footprint? What financial benefits are average creators and consumers entitled to?
Blockchain technology, known for its decentralized, transparent, and secure nature, is increasingly being recognized as a tool for empowering individuals in the digital age.
For instance, open-source services like Ascribe.io use blockchain to attribute ownership of creative works by assigning each piece a unique cryptographic ID. This allows artists to securely register and track the ownership and distribution of their content, creating a verifiable record they can point to if a work is used without consent.
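The core of this attribution scheme is easy to illustrate. A cryptographic hash of a work's bytes acts as a stable fingerprint: the same file always yields the same ID, and any alteration produces a different one. Below is a minimal sketch in Python of the fingerprinting step only; the function name is illustrative, and the on-chain registration that a service like Ascribe.io performs is assumed and not shown:

```python
import hashlib

def content_id(artwork_bytes: bytes) -> str:
    """Derive a deterministic fingerprint for a creative work.

    Identical bytes always yield the same ID, so anyone can verify
    that a registered ID matches the file they hold; changing even
    one byte produces a completely different ID.
    """
    return hashlib.sha256(artwork_bytes).hexdigest()

# Identical files produce the same ID; any edit changes it.
original = content_id(b"my artwork, v1")
duplicate = content_id(b"my artwork, v1")
edited = content_id(b"my artwork, v2")

assert original == duplicate
assert original != edited
```

Note that a fingerprint alone proves *what* a work is, not *who* made it; that second claim is what the blockchain registration layer is meant to supply, by recording which address registered the ID first.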
Platforms such as Zora and Objkt also let creators attach smart contracts (immutable, self-executing rules) to their visual art, poetry, and music, while also establishing provenance and enabling direct peer-to-peer sales. These platforms leverage blockchain technology to ensure transparency in transactions, granting artists more control over the resale of their works and ensuring they receive royalties from secondary market sales. This new approach empowers creators by removing intermediaries, enabling them to directly monetize their work while retaining control over its use and distribution.
By leveraging smart contracts, these platforms ensure that creators are compensated each time their work is resold, providing a financial structure that protects artists' long-term interests—unlike traditional platforms where resale royalties are uncommon.
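The royalty logic such contracts enforce fits in a few lines. The sketch below mirrors the integer arithmetic real contracts use (amounts in wei, the smallest ETH unit, and the royalty rate in basis points, a common on-chain convention); the function name and the 10% rate are illustrative assumptions, not any specific platform's contract:

```python
def settle_resale(sale_price_wei: int, royalty_bps: int = 1000) -> dict:
    """Split a secondary-market sale between seller and original creator.

    Uses integer arithmetic, as on-chain contracts do, to avoid
    floating-point rounding: 1000 basis points = 10%. The rate is fixed
    by the creator at minting time and applied automatically on every
    resale, with no intermediary able to waive or skim it.
    """
    royalty = sale_price_wei * royalty_bps // 10_000
    return {"creator": royalty, "seller": sale_price_wei - royalty}

# A resale at 2 ETH (2 * 10**18 wei) with a 10% royalty:
payout = settle_resale(2 * 10**18)
# creator receives 0.2 ETH, seller receives 1.8 ETH
```

Because the split is computed inside the contract at the moment of sale, the creator's share never passes through a platform that could withhold it, which is the structural difference from traditional marketplaces.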
Beyond artistic circles, platforms like Farcaster are pioneering decentralized social networks that give users control over their digital identity and content. Unlike traditional social media platforms that lock users into their ecosystems, Farcaster ensures users retain ownership of their data and the freedom to move their content across different applications. Built on Ethereum, Farcaster allows users to post, interact, and communicate while maintaining autonomy over their profiles and contributions.
Imagine a user creating content on Farcaster—whether a blog post or a social media thread—without worrying about censorship or losing control over their posts if they switch platforms or share divisive opinions. This freedom stands in stark contrast to the 'platform lock-in' users experience on centralized platforms like Facebook, Threads, and X (formerly Twitter).
This decentralized model represents a significant shift in power, offering a solution to the platform lock-in typical of centralized networks, where user data is often controlled and monetized by the platform itself. Farcaster gives individuals more control, ensuring that users—not corporations—decide how their content is used and shared within the digital ecosystem.
By removing intermediaries, these decentralized platforms allow individuals to regain control over their content and data, offering an antidote to the corporate gatekeeping that has long dominated the digital landscape.
The conversation about AI ethics is not only about compensating creators; it’s about confronting the broader implications of how power structures commodify personal and creative data. Without addressing the underlying inequities, any attempt to regulate AI will merely treat the symptom while leaving the disease intact. To move forward, we must demand systems that prioritize consent, autonomy, and transparency, and actively participate in shaping the digital future.
Call to Action: Regaining Control Over Data
The time to take control of our data is now. The digital age is evolving rapidly, and without action, the imbalance of power between corporations and individuals will only widen.
If you are an artist, a crypto enthusiast, or simply someone who values privacy, creativity, and autonomy in the digital age, I invite you to join this crucial conversation. We need to push for a decentralized, transparent, and ethical future—one where individuals, not corporations, have control over the data that AI and digital systems rely upon. Consent and autonomy must be non-negotiable principles in any ethical framework we build for the future.
This is not just about protecting the rights of creators; it’s about reshaping the power dynamics of the digital age for everyone. In the next few articles, I’ll explore how AI, data, and power intersect in ways that challenge our traditional understanding of privacy, creativity, and autonomy. Together, we can work toward systems that are more just and equitable—not just for artists and creators, but for all of us navigating the complexities of the digital world.