The Truth Behind Signatures on AI-Generated Art
Why a Signature on AI Art Isn’t a Sign of Copyright Theft
The results from generative art models, colloquially known as “AI art,” have been making waves on social media and in the art community. Many people are upset, asserting that these models are stealing the work of human artists. One of the most common pieces of ‘proof’ cited is the presence of a distorted signature in the bottom corner of an image. I’d like to clear up this and other misconceptions about how AI art is made and explain why it’s not as sinister as it may seem.
How do AI Art Models Think?
The key thing to understand is that when you see AI art with a signature, it’s not because the model is copying that artist’s work. It’s because training images associated with the phrase “by [famous artist]” tend to be traditional artworks that have signatures on them. When using prompts like “in the style of [famous artist],” you rarely see signatures, because the model interprets that prompt differently, drawing on a different subset of training images.

The AI model breaks up a prompt into many little pieces, each of which is tied to “memories” within the model. Those memories are all collected, and each contributes to the final image. This is why adding “as a sticker” to “A painting of a dragon” drastically changes the output. The model references memories around paintings, dragons, and stickers, and combines those three memories, rather than copying from a dragon painting sticker it was trained on.
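The idea of combining “memories” can be sketched in code. Real models use a trained text encoder (such as CLIP) to map prompt pieces to high-dimensional vectors; the tiny vectors below are made-up stand-ins, purely to show how adding a concept shifts the combined representation rather than pasting in a stored image.

```python
import numpy as np

# Toy concept "memories": each phrase maps to a vector.
# Real models learn these embeddings; these values are invented.
memories = {
    "a painting": np.array([1.0, 0.0, 0.0]),
    "dragon":     np.array([0.0, 1.0, 0.0]),
    "sticker":    np.array([0.0, 0.0, 1.0]),
}

def prompt_embedding(pieces):
    """Combine per-piece vectors into one conditioning vector."""
    return np.mean([memories[p] for p in pieces], axis=0)

base = prompt_embedding(["a painting", "dragon"])
with_sticker = prompt_embedding(["a painting", "dragon", "sticker"])

# Adding "sticker" moves the conditioning vector, which steers the
# entire generation process -- no stored sticker image is copied.
print(base)
print(with_sticker)
```

In a real model, this conditioning vector guides every step of image generation, which is why a single added word can reshape the whole output.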

It’s also important to understand that AI art models are not capable of copying and pasting parts of existing artwork into their output.
Generative AI models like Stable Diffusion, MidJourney (now on version 6), and DALL·E (recently updated to DALL·E 3) are trained on vast amounts of art and other visual data, just like how a human artist would be exposed to different concepts and styles throughout life. However, there’s an important distinction to be made here: while human artists actively choose what to study and reference, AI models passively learn from the data they are trained on. This training data often includes copyrighted and publicly available images scraped from the web. This has been a contentious point, with ongoing debates about whether using such data without direct permission constitutes copyright infringement or falls under fair use.
So how exactly does the AI model go about creating an image from its wealth of trained knowledge?
It actually starts with random noise, which is essentially a set of random pixel values. The model then applies a process called “diffusion” to this noise, using the data it’s been trained on to gradually transform the noise into an image over dozens or hundreds of iterations (steps). The model applies mathematical operations, such as convolutions and upsampling, adjusting the pixel values with each iteration. Each iteration brings the image closer to the desired outcome until the model reaches a point where it considers the image complete.
This process, although similar across many generative models, has improved significantly in the latest versions. DALL·E 3 is now capable of understanding far more complex prompts with much better accuracy, producing fewer artifacts, and avoiding many of the issues seen in earlier versions, such as bizarre hands or text-like gibberish. MidJourney v6, likewise, has pushed the envelope in terms of artistic quality and coherence, producing stunningly detailed images that were previously difficult to achieve.
A Thought Exercise
Imagine I asked you to draw a dragon, something you’ve obviously never seen in person and have only been exposed to through the art and interpretation of others. You would have to rely on your imagination and the knowledge you have about dragons from movies, books, and other media. You might think about what features a dragon usually has, such as wings, scales, and fire-breathing abilities. You might reference other artworks you’ve seen of creatures with similar features to dragons, such as birds or reptiles. You might also use your imagination to develop unique features your dragon could have.
The final result would be a unique representation of a dragon inspired by previous artwork but not a copy of any one dragon you’ve seen. Similarly, when AI generates art, it doesn’t copy a specific image it has seen; it creates something new based on the data it has been trained on and a healthy dose of randomness (noise).
Ethical Considerations
While it’s true that AI models don’t directly copy and paste parts of existing images, the question of whether training on copyrighted material without consent is ethical remains a complex issue. Some artists feel that their work has been used to train these models without their knowledge, devaluing their creativity and labor. Others argue that this process is similar to how human artists learn from their environment, suggesting that generative models are simply part of the evolution of art-making tools.
The companies behind these models, like OpenAI and Stability AI, have started making efforts to address these concerns. For example, DALL·E 3 allows artists to opt out of having their work used in training future models, and there’s a broader push toward creating datasets that prioritize public domain or ethically sourced images. These changes are aimed at giving artists more control over how their work is used and preventing the models from inadvertently imitating a living artist’s unique style.
Is AI-Generated Art Protected?
A recent lawsuit has brought this question into focus. Digital artist Jason M. Allen, who won a state art fair competition in 2022 with a stunning piece titled Théâtre d’Opéra Spatial, recently requested that a Colorado federal court reverse the U.S. copyright office’s decision to reject his copyright protection application. Allen created the piece using MidJourney and made further modifications in Adobe Photoshop. He argues that the image is an expression of his creativity and deserves copyright protection, as the process involved hundreds of iterations and artistic decision-making.
Allen’s work has become a topic of controversy as some people have started directly copying and using his AI-generated piece to make money, making it an ironic twist in the copyright debate. While there are arguments about whether training on copyrighted images is fair use or infringement, the more direct copying of Allen’s work raises new questions about the nature of copyright for AI-assisted creations. Should AI-generated art be copyrightable? This case may set a precedent, but it’s a complex question that deserves its own deep dive in a separate article.
Closing Thoughts
I think it’s important to understand how this technology works before forming opinions and assumptions about what it is or is not capable of doing. I hope I’ve provided a useful surface-level overview for anyone concerned about the outright stealing of art by these generative models. The debate around AI art will continue as the technology evolves, but understanding the mechanics and ethical complexities behind these models is a good first step toward an informed discussion.
Author’s Note
The technical process of AI image generation has been simplified in this article to make the concepts more approachable. Human-centric words like “thought” and “understanding” have been used intentionally in place of deeper technical terms, even though models are not yet capable of actual thought or understanding.