By Neil Marion

Image platforms powered by artificial intelligence (AI) have gained popularity, and apps like DALL-E, Midjourney and Stable Diffusion are exciting tools with seemingly endless creative possibilities. But can AI itself be creative? And what are the legal and ethical implications of using AI to create content?

To begin to answer these questions, we must understand that AI isn’t capable of making something out of nothing. It can only reference existing art to create its new images. These generative AI tools use algorithms that learn from existing images in order to produce new content. DALL-E 2, for example, was trained on approximately 650 million image-text pairs that its creator, OpenAI, collected from the internet. The owners of these technologies are not saying much, but it seems likely that millions of copyrighted images were used to train the AI. Now these companies offer paid subscriptions that give users full usage rights to reprint, sell and merchandise the images they create, which has opened the floodgates for AI-generated images to be used publicly.

Numerous major brands have already engaged in AI image creation. Cosmopolitan magazine partnered with an AI prompt artist (yes, that’s a new job title) to make the first AI-created cover photo for a magazine. At first glance it’s really cool, but then you start to wonder what source images the AI used to create the final image.

Heinz created a commercial using AI image generation technology and asked other AI prompt artists to submit their own creations. The Heinz example is interesting because it points to a thorny issue for trademark holders: The source images clearly included Heinz’s logo and iconic bottle shape.

With so much creative opportunity at our fingertips, how are marketers and brands to navigate the legal issues of AI image creation?

[Image: a headshot of W. Keith Robinson, a Black man smiling, with short curly hair]

Phone an Expert

We sat down with W. Keith Robinson, a legal expert in artificial intelligence. Robinson is a professor of law at the Wake Forest University School of Law. Thomson Reuters has twice recognized Professor Robinson’s articles as the best of the year in intellectual property law. His recent work has explored how artificial intelligence may impact obtaining U.S. patents. See his bio and disclaimer below.

What Is Art?

We began our conversation talking about art—specifically whether a computer can make art. Professor Robinson said, and we agree, that this is both a philosophical and a legal question. “Art is basically what other humans perceive it to be and take out of it. To the extent artificial intelligence could create something that humans enjoy, or invoke emotion in a person, I think that would meet that definition philosophically.”

But does this mean AI can legally author original works? Robinson continued, “According to the United States Copyright Office and past Supreme Court precedent, works that are eligible for copyright protection are original works of authorship. An author is a human. An author can’t be a nonhuman. For example, neither an animal nor artificial intelligence can be an author.”

[Image: a comic book-style illustration of a menacing man with long hair and glasses]

Is the Idea of John Wick Protected?

Can the prompt artist claim copyright in the prompts entered to create the AI work? The law treats a prompt as comparable to an idea, and we know from copyright law that ideas are not protectable.

Robinson gave a great example, “I have the idea for a story about an invincible hitman [who’s] trying to get out of an organized crime syndicate. I can’t sue the producers of John Wick for stealing my idea. Because ideas aren’t protectable. What copyright law has said is that it’s the expression of that idea that is protectable. Based on this line of reasoning, I would say, someone entering their idea into the software is not afforded copyright protection under this framework.”

The Fair Use of Prince’s Image

Fair use promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances. How is the concept of fair use applied with text-to-image generators and the resulting creations that are based on existing images?

The AI scrapes the internet for images, and copyright-protected images are included in the data collection. Robinson said, “When using these images, whether or not this is going to be considered infringement of copyright protection, that’s where the fair use defense comes into play. Currently we don’t have court opinions to determine how the courts will rationalize this. However, we do have a fair use case that the Supreme Court heard. You may be familiar with Andy Warhol Foundation v. Lynn Goldsmith. Andy Warhol was commissioned to create an image of Prince in 1984. Andy Warhol used a photograph taken by Lynn Goldsmith as a base.” The Supreme Court will be ruling on this case in spring 2023. “The court could change the way we think about fair use with its ruling—if this is a fair use of the photograph or not, which will have implications in the AI world as well.”

What About Brad Pitt?

The photographer, not the celebrity, typically owns the copyright in a photo of a celebrity. Paparazzi have sued celebrities for copyright infringement over the reposting of those photos, and the celebrities have lost because the photographer owns the photograph. The general idea is that if a celebrity is out in public, there’s no expectation of privacy.

The problem with AI is that we aren’t really sure where the source material comes from. If we generate an image of Brad Pitt, a photographer could say, “Hey, I took the photo that image is derived from. You are infringing my copyright.”

A great example of this is the Fairey v. Associated Press case over Shepard Fairey’s famous “Hope” image of then-senator Barack Obama. The original photo was taken at a summit. The two parties reached a settlement: Fairey agreed not to use any other Associated Press photos without a license, and he and the Associated Press agreed to share the profits from sales of posters based on the image.

Let’s bring it back to Brad. If someone creates an AI image of Brad Pitt without breaking any copyright laws and then uses that image in marketing materials, Pitt may still have a case.

The right of publicity is very powerful. The argument is that the agency or brand is associating Pitt with a product or service. It looks as if he is endorsing the product or service when he has not actually given the company any rights and has nothing to do with it.

There’s an example from Twitter involving the actress Katherine Heigl, who was photographed walking out of a Duane Reade drugstore in New York after shopping there. The official Duane Reade Twitter account tweeted the photo with the caption, “Even Katherine Heigl can’t resist shopping #NYC’s favorite drugstore.” Because the photographer took the image, copyright wasn’t the issue. Instead, Heigl sued on her right of publicity, arguing that Duane Reade made it look like she was endorsing the store. She clearly had shopped there (she had Duane Reade bags in her hands), but she had not endorsed it. The case ended in an out-of-court agreement, with Duane Reade contributing to Heigl’s animal welfare foundation.

[Image: an antique scale]

The Courts Will Decide

We are already seeing the U.S. courts define the guardrails when it comes to AI image usage.

In Thaler v. Vidal, the U.S. Court of Appeals for the Federal Circuit held that the Patent Act requires inventors to be human, so AI cannot be listed as an inventor on a patent application. Professor Robinson said, “So far, we have not seen any major decisions that I think would give artificial intelligence any sort of rights.”

He continued, “Congress can come in and legislate and say we’re going to pass a law saying there is copyright protection in works created by AI to stay competitive globally. We’ve already seen this in other countries such as New Zealand, where they recognize AI as an inventor. China recognizes copyright protection for AI-generated news articles.”

It takes a disagreement over something important for the guardrails to be defined. Just as with other technologies, such as the internet, smartphones and social media, it will take two parties with a lot at stake litigating against each other to define those guardrails.

What the Future Holds

[Image: a crystal ball surrounded by smoke]

We asked Robinson to look into his crystal ball and tell us what the future of AI holds. He said that, first, he expects regulators and lawmakers to get involved on the issue of deepfakes and whether there will be disclosure requirements.

“For written words, audio and visual images, a disclosure ‘this was created with artificial intelligence,’ or ‘this was created solely by artificial intelligence.’ The AI will cite sources or give attribution if it uses these existing images to create its own image.”

Second, companies can develop a “clean model” approach, in which the AI is trained only on images the developer has rights to. Adobe, for example, is developing an AI that relies on a clean model trained only on images to which Adobe has full rights.

“Lastly, programmers can allow creators to opt in, allowing their work to be used and potentially collect royalties when another user creates work in their style using the AI tools.”

The Bottom Line

Experiment with AI image creation, but be mindful of the legal implications and read the apps’ licensing agreements. For brands and companies using AI image creation publicly for marketing or internally for brainstorming, it’s another tool in the creative toolkit. There’s still a lot of room for improvement in both the technology and the legal framework.

Robinson wrapped up by saying that he is not giving legal advice. “This is legal information. Work with your legal counsel to determine the right path for your brand, and monitor both the technology and legal advancements.”

Many thanks to W. Keith Robinson for sharing his expertise.


About W. Keith Robinson

W. Keith Robinson is a Professor of Law and Faculty Director for Intellectual Property, Technology, Business, and Innovation at the Wake Forest University School of Law. Professor Robinson is a nationally recognized patent scholar who researches how legal institutions govern emerging technology. He has commented on issues of intellectual property law in media outlets and given more than seventy presentations around the world on patent law. Thomson Reuters has twice recognized Professor Robinson’s articles as the best of the year in intellectual property law. His recent work has explored how artificial intelligence may impact obtaining U.S. patents. Robinson’s work has been cited in briefs before the U.S. Court of Appeals for the Federal Circuit (the court that hears all U.S. appeals arising under the patent laws), and the Federal Circuit has also cited his work favorably. His most recent article is published in the Nevada Law Journal. In addition, he has published articles in the Florida Law Review, DePaul Law Review, and the American University Law Review. Robinson graduated from Duke University with a BS in electrical engineering and received his JD, cum laude, from Duke University. After law school, he worked for the Washington, DC, law firm of Foley & Lardner LLP, where his practice focused on patent law. While at Foley & Lardner, Robinson was an adjunct professor at The George Washington University Law School. Before joining Wake Forest, Professor Robinson was an Associate Professor at the SMU Dedman School of Law for ten years. There, he was an Altshuler Distinguished Teaching Professor, a founding Co-Director of the Tsai Center for Law, Science and Innovation, and Faculty in Residence in Kathy Crow Commons on the campus of SMU.
