Adobe caught selling AI-generated images of Israel-Palestine violence

Australian news outlet Crikey reported that Adobe has been selling AI-generated images of the Israel-Hamas war, a disturbing and ethically repugnant example of a company profiting from online falsehoods.

A search for “conflict between Israel and Palestine” on Adobe Stock, a subscription service that offers generic stock images and now AI-generated ones, returns photorealistic images of explosions in dense urban environments that closely resemble scenes of the devastation in Gaza.

Another result is an AI-generated image titled “mother and child in destroyed city in Palestine Israel war conflict,” a grim frame; in fact, 33 images share a similar composition.

Still another depicts “destroyed and burnt buildings in the Israel city.”

These images appear to have been uploaded by Adobe Stock contributors rather than created by Adobe itself.

While these images are formally labeled “generated with AI,” a requirement for all user-submitted works, Crikey reported that several are already circulating online, where they could mislead unsuspecting viewers.

A reverse image search on Google shows that several small news outlets have already used one photorealistic AI image of a massive explosion.

Unless a viewer looks closely for telltale AI artifacts, such as misaligned windows or inconsistent lighting and shadows, these images can easily pass for real photographs.

Over the past year, image generators such as OpenAI’s DALL-E, Stable Diffusion, and Midjourney have advanced significantly; the obvious errors and nightmarish animal deformities of earlier models are largely gone.

As a result, AI-generated images are now widely shared online. Futurism found last year that the top Google Images result for the painter Edward Hopper was an AI fake.

Adobe has enthusiastically embraced generative AI instead of treading carefully.

Last month, it brought Firefly, its generative AI model, out of beta and made it an integral part of Photoshop. The company also established an annual bonus program for Adobe Stock contributors whose work is used to train its AI models.

But that enthusiasm doesn’t benefit everyone. By selling AI-generated war imagery, Adobe is undercutting photojournalists; in many respects, it is another example of AI threatening the livelihoods of the very people whose photographs these models were trained on.

Given the risks war photographers take to chronicle human conflict, it is an especially troubling and unethical example.

Worse, such images not only spread disinformation but also erode our trust in the news we read every day.

“Once the line between truth and fake is eroded, everything will become fake,” USC engineering professor Wael Abd-Almageed told the Washington Post last year. “We will not be able to believe anything.”
