Who Owns the Work You Create with Generative AI? A Legal Minefield Worth Understanding
Or Why We Still Don't Know Who Really Owns What We Create with AI.
A few weeks ago, I had the pleasure of speaking at IA Hub Barcelona with a talk that, judging by the faces in the audience, touched a rather exposed nerve in the creative sector. The question I posed was deceptively simple: do you own the work you produce with generative artificial intelligence?
The short answer is no. The long answer is, "it depends on where you are and exactly what you've done." Welcome to the wonderful world of legal ambiguity in the age of AI.
The Big Melon Nobody Wants to Open
Marco Petrucci defined it perfectly during one of the previous talks at IA Hub: this is "the big melon right now," acknowledging that all agencies are trapped in this debate. César Pesquera, in another intervention, confirmed he encounters it every day in his work, mentioning the need for models trained with clean data and admitting something we all know: the law lags behind.
María Vinagre and Hugo Barbera brought the perspective of those working with major agencies, which impose strict legal requirements: declaring the tools and prompts used, and proving that reference images carry valid copyright. Mauricio Tonon was even more forthright, stating that current copyright legislation should be rewritten from the ground up.
The Requirement of Human Authorship
The current legal framework, both in Spain and in most Western jurisdictions, is quite clear on one point: copyright protection is only recognized for works created by a natural person. Article 5 of the Spanish Intellectual Property Law leaves little room for interpretation.
It's the same principle that led to Naruto the macaque's famous selfie receiving no copyright protection. If a monkey can't be an author, neither can a machine.
In January 2025, the US Copyright Office published a report attempting to provide some clarity. The key criterion is the existence of "significant human creative intervention." And this is where things get interesting, because what on earth does "significant" mean?
According to this report, writing the prompt, however elaborate it may be, does not give the user sufficient creative control to make them the author of the output. In other words, although some prompts can be creative and protectable in themselves, they don't grant copyright over the generated work. Nor is selecting and arranging the results sufficient.
However, when you include input you created yourself (a photograph of yours, an original sketch), or when the final result derives from a process of adaptation and editing that introduces significant creative changes, then you can claim at least partial authorship.
The International Mosaic
The problem becomes more complicated when we look beyond the United States and Europe. UK legislation establishes that the author is "whoever makes the necessary arrangements for the creation of the work," but it doesn't clarify whether that person is the user or the software developer. A very British ambiguity, if I may say so.
China, meanwhile, has taken a completely different path: it considers the selection of prompts, parameter configuration, and image iteration to represent adequate "intellectual investment" by the human user to grant them full authorship. If you produce AI-generated content for the Chinese market, your rights are more protected than in the West. Ironic, isn't it?
What the European AI Act Requires
Beyond the question of ownership, the European AI Regulation establishes transparency obligations that should already be on everyone's radar. Generated content must be identified as being of artificial origin when it resembles real people, objects, places, entities, or events.
And simply posting a disclaimer in small print won't cut it: the regulation explicitly requires that AI-generated content marking be machine-readable. Non-compliance can result in fines of up to 15 million euros or 3% of total worldwide turnover. These obligations will be enforceable from August 2026.
The Fine Print of Each Tool
One of the sections of my presentation that generated the most interest was the comparative analysis of the terms and conditions of the major tools. The differences are substantial.
Adobe Firefly is probably the most favorable option for professional commercial use. Both input and output content belong to you, provided you don't use material protected by third-party intellectual property without permission and don't generate it with a beta version of the applications. Additionally, in enterprise accounts, Adobe guarantees protection for the company and assumes financial costs in case of a copyright infringement claim. However, if you publish to the Firefly gallery, you grant Adobe a broad, non-exclusive, perpetual, irrevocable, worldwide, royalty-free license.
Midjourney allows commercial use of generated images, provided you own the assets used in their creation. But there are exceptions in the fine print: if you work for a company with over one million dollars in annual revenue, you need a Pro or Mega plan to claim ownership. More worryingly, if you create an output that infringes someone else's copyright, you not only have to defend yourself against any lawsuit, you must also compensate Midjourney for its legal expenses if it is sued over your actions. By default, both generated images and prompt details (including visual references) are publicly visible; full privacy requires the Stealth mode of Pro and Mega accounts. And to top it off, you grant the platform rights over the generated content perpetually and worldwide.
OpenAI has an interesting position: they assign you all their rights, title, and interest in the output, "if any." That nuance matters: they implicitly acknowledge there may be no rights to assign. For ChatGPT Enterprise and API users, they offer Copyright Shield, which protects against claims related to both generated results and data used for training.
Google with Gemini allows commercial use in most tools (Gemini, Vertex AI), but not in experimental versions of AI Test Kitchen (MusicFX, VideoFX) without specific agreements. Their indemnification covers both claims about training data and generated outputs, provided you haven't intentionally attempted to create that infringement.
Suno and Stability AI share a similar pattern: you own the outputs if you have a paid subscription, but they retain broad licenses to use your content and make you responsible for any legal consequences arising from your use. Larger companies need specifically negotiated licenses.
How to Protect Your Work and Comply with Legislation
To meet the machine-readable marking requirement, the C2PA standard (Coalition for Content Provenance and Authenticity) is emerging as the reference. It works like a nutrition label for digital content, letting you trace the creation and editing history of any file. Adobe, Amazon, BBC, Google, Intel, Meta, Microsoft, OpenAI, Publicis, Sony, and Truepic are all part of the project.
Content Credentials allow you to embed verifiable metadata in images, including information about the producer, tools used, ingredients (base images), and actions performed. LinkedIn already displays this information when available. Google has its own system, SynthID, which allows you to mark and identify AI-generated content from its tools.
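To get a feel for how this metadata travels with a file: C2PA manifests are embedded as JUMBF boxes labeled "c2pa" inside the file itself (in JPEG, within APP11 segments). The sketch below is only an illustrative presence check in plain Python; real verification, including validation of the cryptographic signature, should use the official c2pa SDK or the c2patool utility.

```python
def has_c2pa_manifest(path: str) -> bool:
    """Return True if the file appears to contain C2PA metadata.

    C2PA manifests live in JUMBF boxes labeled 'c2pa'. Scanning the
    raw bytes for those markers is a crude heuristic only: it can
    tell you a manifest is probably present, but it does NOT verify
    the signature or parse the provenance history.
    """
    with open(path, "rb") as f:
        data = f.read()
    return b"jumb" in data and b"c2pa" in data


if __name__ == "__main__":
    import sys
    for name in sys.argv[1:]:
        status = "has" if has_c2pa_manifest(name) else "has no"
        print(f"{name}: {status} C2PA marker")
```

A positive result here only means the marker bytes exist; a stripped or re-encoded copy of the image (a screenshot, for example) will lose the manifest entirely, which is one of the known limitations of metadata-based marking.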
Four Actions You Should Start Applying Yesterday
Study in detail the terms and conditions of every tool you use as they relate to ownership, legal liability, and commercial use. It's not thrilling reading, but it's essential.
Review and adapt contracts with clients and suppliers to the new reality of artificial intelligence. The World Federation of Advertisers has published a guide of best practices for generative AI contracts that's worth consulting.
Document the complete production process of all your AI work. Keep prompts, visual references, intermediate iterations, and final versions.
Digitally sign all work done in part or entirely with AI, using standards such as C2PA.
We Need to Open This Melon Soon
The terrain of intellectual property in the era of generative AI is, as César Pesquera would say, in a state where the law is clearly lagging behind technology. But that doesn't mean we can ignore the issue while waiting for legislators to reach an agreement. The decisions we make today about documentation, transparency, and responsible use of these tools will determine our position when regulation finally matures.
And if there's one thing I've learned in more than twenty years in this sector, it's that you're better off being prepared before the melon explodes in your hands.
Understanding intellectual property and commercial rights of content we generate with AI is key. In my training programs, I help teams and companies identify what processes to automate, how to implement generative AI without putting your foot in it, and above all, how to do it safely to ensure privacy and correct use of generated content.
If your organization is exploring how to integrate AI into your processes without it becoming another pilot project that dies in two months, let's talk. I design sessions tailored to your needs, from introductory workshops to complete digital transformation programs with AI.