March 7, 2026

Why AI-generated art needs a label

By Zachary Steiman
Staff Writer

As artificial intelligence becomes increasingly popular, AI-generated images and art are spreading along with it. However, amid criticism of the ethics of using AI to generate art, a clear indicator of whether an image is AI-generated has increasingly become a necessity. The solution to this problem: AI watermarks.

Since its inception, AI has been a subject of controversy. This controversy is especially prevalent among artists, who worry about how AI may affect their work and even their livelihoods.

According to a study by UNESCO, 77 percent of artists believe AI models represent a threat to art workers. That such a significant majority of artists is concerned about AI models' effects on art underscores the need for AI watermarks.

With AI watermarks, people would be able to easily distinguish art generated by artificial intelligence from art made by a human hand. This simple addition could reduce misinformation and protect both artists and viewers from the growing prominence of AI art.

Some platforms, such as Meta, have already started doing this. “We will label images that users post to Facebook, Instagram, and Threads when we can detect industry standard indicators that they are AI-generated,” said Nick Clegg, Meta’s former president of global affairs.

However, this initiative for labeling AI-generated content is undermined by the fact that users can remove the labels on their own. Simply by editing the post, anybody who posts AI-generated content can toggle the AI label off.

California’s government has also addressed AI watermarks, though in a less visible way. Under the California AI Transparency Act, which took effect January 1, large-scale AI companies must embed a hidden watermark within all AI-generated images.

This means that while the watermark will not be immediately visible, anyone with AI detection software can still identify whether an image was generated by AI. Although this doesn’t allow for recognition at a glance, it may prove especially useful in the future as AI-generated images become progressively harder to spot by eye.

While progress has been made at the state level, it has proven more difficult for the rest of the country. Various bills have been introduced, but there is still no federal requirement for the disclosure of artificially generated content.

While artificial intelligence remains as controversial as ever, its growth is undeniably not slowing down. Every day, AI becomes more intelligent and more accurate in image generation. Even if AI-generated images seem easy to spot now, there is no telling what they may look like in the future. As AI continues to advance, measures like AI watermarks may become essential not just for transparency, but for preserving authenticity and trust in artistic communities.

About Zachary Steiman
Zachary Steiman is a junior staff writer at La Vista, where they cover restaurants and food. Steiman brings a passion for government and opinion writing to their reporting. When not reporting, Zachary enjoys music, food, and spending time with friends and family.
