Introduction: When Protest Becomes Necessity
In a striking act of rebellion, a group of artists recently leaked OpenAI's Sora video model to protest unpaid labor. Their actions, reported by The Decoder, are a clarion call to reevaluate the relationship between large AI corporations and the creative community. The protestors emphasized they were not against AI itself but against the exploitative practices of corporations like OpenAI, which, despite its staggering valuation of $150 billion, continues to rely on uncompensated labor from artists and creators to refine its tools. This incident exemplifies a broader corporate trend: prioritizing profit over fair treatment.
As an artist who participated in OpenAI’s closed Alpha testing group for DALL-E, I understand this frustration intimately. Like many others, I entered the program hoping it would amplify my visibility and career in the NFT space. Instead, the experience hindered my progress, leaving me to grapple with the harsh realities of how corporations exploit the very people they claim to empower.
A Firsthand Account of Exploitation
During my time with OpenAI’s Alpha group, I devoted countless hours to testing and refining the DALL-E model. I poured creative energy into crafting prompts, analyzing outputs, and providing feedback to improve the tool. In the process, I conducted multiple studies to test for bias, uncovering severe issues, particularly regarding race and gender. These findings revealed critical flaws in the model’s design, raising urgent ethical concerns.
Beyond this technical work, I also spent countless hours on artistic research, exploring the model’s viability as a tool with seemingly limitless potential. All of this amounted to unpaid labor. I made no secret of the fact that I was homeless at the time—a homeless artist and veteran striving to build a future. OpenAI knew this, yet instead of offering support, they knowingly took advantage of my precarious circumstances.
Like everyone else in the closed Alpha, I had high hopes that working with OpenAI would lead to a positive outcome. As a homeless artist, I worked constantly, taking any opportunity that came my way in the hope that it would lead to something meaningful. Instead, OpenAI’s promise of participation in an innovative project felt hollow as months passed without tangible benefits. We couldn’t even get something as small as a social media shoutout to help boost our visibility.
Worse, all communication with OpenAI was tightly monitored. Conversations among testers were restricted, with moderators actively shutting down discussions on critical topics like NFTs, blockchain, or crypto—key areas of interest for many of us in the group. Questions about the duration of the testing phase or potential rewards for participants were met with silence. This lack of transparency left us feeling more like tools than collaborators.
Meanwhile, OpenAI reaped the benefits of our labor, fine-tuning a product that would go on to generate significant public and private interest. For an artist in the NFT space, where visibility and first-mover advantage are critical, this period of unpaid work represented a lost opportunity. Far from being a career boost, my involvement hurt my standing, leaving me feeling exploited rather than empowered.
The fact that OpenAI knowingly exploited the labor of a homeless veteran, alongside others in precarious circumstances, speaks volumes about their moral code. It illustrates the extent to which large corporations are willing to disregard the humanity of individuals in pursuit of their goals. This is not just a personal grievance—it is a glaring example of how unchecked corporate practices harm those who contribute the most to innovation and progress.
AI Is Not the Villain
It would be easy to blame artificial intelligence itself for these exploitative practices, but that would miss the point entirely. The problem lies not with the tools but with the corporations that wield them. AI is, at its core, a neutral technology—a set of algorithms capable of remarkable things when guided by human intent. It is the intent behind its development and deployment that we must scrutinize.
When large corporations position themselves as innovators while relying on unpaid or underpaid labor, they perpetuate a cycle of inequity. This behavior is not unique to AI companies; it mirrors exploitative practices in other industries, from unpaid internships to crowdsourced creative competitions. These models exploit human effort under the guise of collaboration while funneling profits and recognition to the corporate entity alone.
This pattern of exploitation is particularly insidious in the creative industries, where labor is often undervalued. Artists are invited to contribute their time and expertise with promises of exposure or future opportunities—promises that rarely materialize. In the case of AI, the ethical stakes are even higher because the tools being developed have the potential to reshape entire industries. If these tools are built on the backs of exploited creators, what does that say about the values embedded in their design?
An Ethical Framework for AI Development
As a former student of ethics, I, like many others, have openly discussed the need for ethical frameworks around AI rollouts since I came onto the scene in 2021. Sadly, ethics remains an afterthought for many, which is precisely why we end up reacting to these problems rather than preventing them. We must be more proactive in demanding new and applied ethics.
To address these issues, we need to hold corporations accountable for their treatment of artists and contributors. Here are three key principles that should guide the ethical development of AI:
Transparency and Consent
Companies must be upfront about the scope and terms of participation in testing or development programs. Contributors should know how long they will be expected to work, what their contributions will be used for, and what they will receive in return.
Fair Compensation
Unpaid labor should never form the foundation of multibillion-dollar innovations. Artists and testers must be fairly compensated, whether through direct payment, profit-sharing, licensing agreements, or other forms of financial equity. Given that building, training, and tuning AI models is impossible without essential human labor, contributors should have the right to meaningful compensation—be it commissions, stock options, or equity stakes in the resulting technology.
Collaborative Agency
Rather than treating contributors as tools, corporations should recognize them as collaborators. This includes fostering open communication, respecting participants' expertise, and allowing meaningful input into the development process.
Reclaiming AI’s Potential
AI holds immense promise for artists, creators, and humanity as a whole. In the creative realm, it has the power to democratize access to tools, unlock new forms of expression, and push the boundaries of artistic possibility. However, for this potential to be realized ethically, we must hold corporations accountable for their actions. The rollout of generative AI in creative fields is a critical litmus test—one that reveals whether corporations will choose to harness this technology responsibly or prioritize profit by exploiting the very people whose labor makes it possible.
The current model, where a few large entities reap the rewards of collective labor while sidelining the very people who make their success possible, is not sustainable. By holding these corporations accountable, we can ensure that AI is a force for empowerment rather than exploitation.
The protestors who leaked OpenAI’s Sora model were clear in their message: the issue is not AI itself but the practices of those who control it. This distinction is crucial. AI is a tool, and like any tool, its impact depends on how it is used. Let us direct our critique where it belongs—toward the corporate behaviors that undermine fairness and equity—and work together to build a future where technology truly serves the many, not just the few.
Conclusion and Call to Action
The story of OpenAI’s Sora leak and my own experience with DALL-E underscore the urgent need for ethical accountability in AI development. Unfortunately, these issues are not new—they have become all too common in recent years. Artists and creators deserve to be treated as collaborators, not as resources to exploit. Addressing these systemic issues is essential to reclaiming AI as a tool for creativity, empowerment, and equity.
To ensure that the future of AI is shaped not by corporate greed but by a commitment to fairness and collaboration, it is up to those who use these models to speak up and act accordingly. Artists, AI enthusiasts, and creative technologists must demand fair corporate practices and insist on ethical guidelines. By refusing to use, promote, or train AI models under exploitative conditions, we can push back against harmful corporate behaviors and help create a fairer, more equitable AI landscape.
The market is full of viable alternatives, and more options are being created constantly. By choosing not to use the tools of companies with unfair and unethical business practices, we set the tone not only for all creatives, but for all people who will inevitably be impacted by AI in some way. The power to shape the future of this technology lies in collective action, and we must seize it to ensure AI serves humanity, not just corporate interests.