Getty Images advocates for regulatory and industry action for AI-generated images

Getty's push comes as Meta develops capabilities to detect and label AI-generated content.

Getty Images has called for regulatory and industry action to ensure the data sets used to train AI are openly disclosed and that AI-generated images are clearly identified. 

The call follows Meta's announcement that it will begin detecting and labelling AI-generated images from other companies, using invisible markers embedded in image files, and will apply the labels across its platforms Facebook, Instagram and Threads.  

Grant Farhall, Getty Images' chief product officer, told Digital Nation, "Transparency starts with how AI models are trained."

“Our AI tools are trained solely from Getty Images’ and iStock's vast creative library, including exclusive premium content. Unfortunately, not all AI models today are transparent about the data they used for training,” Farhall said. 

"That is why Getty Images is advocating for regulatory and industry action that considers transparency as to the makeup of all training sets used to create AI learning models and requires generative AI models to clearly and persistently identify AI outputs."

Farhall said the organisation believes "that individuals should know when they are interacting with AI-created content or an AI system."

“First, a unique ‘AI-Generated’ watermark will appear on preview images. In addition, the metadata on AI-generated images will include a new field that indicates that the image was AI-generated, and which model and model version were used.”
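The metadata approach Farhall describes, a persistent field recording that an image was AI-generated, along with the model and model version, can be sketched in general terms. The snippet below is illustrative only: the field names (`AIGenerated`, `GenerationModel`, `GenerationModelVersion`) are assumptions, not Getty's actual schema, and it uses plain PNG tEXt chunks rather than whatever embedding Getty's tools employ.

```python
import struct
import zlib

def png_text_chunk(keyword: str, value: str) -> bytes:
    """Build a PNG tEXt chunk: length, type, keyword\\0value, CRC."""
    data = keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")
    return (struct.pack(">I", len(data)) + b"tEXt" + data
            + struct.pack(">I", zlib.crc32(b"tEXt" + data) & 0xFFFFFFFF))

def tag_png(png_bytes: bytes, fields: dict) -> bytes:
    """Insert provenance tEXt chunks just before the IEND chunk."""
    iend = png_bytes.rfind(b"IEND") - 4  # back up over the 4-byte length field
    chunks = b"".join(png_text_chunk(k, v) for k, v in fields.items())
    return png_bytes[:iend] + chunks + png_bytes[iend:]

def read_text_fields(png_bytes: bytes) -> dict:
    """Walk the chunk list and collect tEXt keyword/value pairs."""
    fields, pos = {}, 8  # skip the 8-byte PNG signature
    while pos < len(png_bytes):
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        if ctype == b"tEXt":
            k, _, v = png_bytes[pos + 8:pos + 8 + length].partition(b"\x00")
            fields[k.decode("latin-1")] = v.decode("latin-1")
        pos += 12 + length  # length + type + data + CRC
    return fields
```

A downstream viewer or platform could then read these fields back to decide whether to show an "AI-Generated" label, which is the kind of persistent, machine-readable disclosure Farhall argues for.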

He said Getty Images “believes that the development of generative AI tools and services should not come at the expense of creators, as they are vital for a vibrant and progressive society.”

“Creators, contributors and right holders should consent to the use of their work as training data, which we’ve seen is not happening in many instances.

“Allowing generative AI to be trained on copyrighted works without consent allows for unlimited and immediate content creation at little to no cost, which directly competes with the original training data. 

“We also believe that opt-out schemes proposed by certain AI developers are not sufficient,” Farhall said.  

He explained that in the absence of regulation, the company is also “concerned” about a potential flood of disinformation and synthetic content. 

“This influx of content has the potential to undermine public trust in institutions and in each other. 

“Without an obligation to identify generative content, the burden of verification falls on the public. 

“Further, the debate over the authenticity of generative AI content often occurs after it has been published, if it happens at all, which could give harmful content the opportunity to replicate and influence,” Farhall said. 

He said that while the future of AI is still uncertain, the company remains "optimistic about the possibilities for creatives", especially smaller businesses. 

According to Farhall, many small businesses are experimenting with various tools already, with its VisualGPS research finding 42 percent of small businesses are using AI-generated content to support marketing work. 

“Our main goal at Getty Images is to help businesses and individuals create at a higher level while saving time and money and help them mitigate risk as best as we can. 

“In some cases, this could be done through our AI tools, in other cases our creative libraries prevail as a source of authenticity, diversity, creativity and quality, in a manner that stands above what AI can do. 

“This is seen through our Custom Content offering that allows us to create unique and powerful visuals from our global network of photographers adjusted precisely to customer needs,” Farhall said. 

He said, “Ultimately, it's about allowing customers to elevate their entire end-to-end creative process to find the right visual content for different needs.”

At present, no customer-generated image will be added back to the Getty library for licence. 

"The industry as a whole is still learning how to navigate a world of generative AI," Farhall said. 

“Our main focus was to get a commercially safe tool in the hands of customers that provides legal indemnification.”

Farhall said Getty’s approach included adding automatic Not Safe For Work (NSFW) filters into its AI tools.

Its "commercially safe" tools mean "a viable product that businesses can use to ideate and create in their final work because it is built with content that is safe to use."

He explained Getty tools “do not know who Taylor Swift or SpongeBob SquarePants are, or what a Nike swoosh represents, and therefore cannot create visuals that contain those types of elements that could create legal risk if used for commercial purposes.”

“This means businesses using generative AI by iStock can create new, never-seen-before content, without the fear that something that is legally protected has ended up in their work.”

© Digital Nation