Artificial Intelligence May Be Able to Pimp the Pope, but What Else Can It Do?

Last weekend, pictures of Pope Francis dressed in a long, white puffer jacket began circulating on the internet. In awe of the Pope’s sudden sense of fashion, I sent the photo to my friends and family without giving much thought to its legitimacy. Given that it’s now spring, and Rome is probably not cold enough to require a full-body puffer jacket, I should have examined the photo more closely. Unbeknownst to me, the photo of the Pope decked out in a Balenciaga jacket outside the Vatican was not real. Like countless other internet users, I fell for the trickery of an artificial intelligence-generated image.

Source: CBS News

Pablo Xavier, a 31-year-old utility worker and Chicago native, had been high on mushrooms when he designed the iconic Pope photo with the AI software Midjourney. Though created a little less than a year ago by a self-funded team of 11 employees, Midjourney has sent ripples across the internet and the AI community. The software generates images from text prompts, much like OpenAI’s DALL-E, and has become especially popular with science-fiction writers and artists. Its popularity can be attributed in part to the fact that it is far more user-friendly than other AI tools and can quickly generate realistic images at scale from little more than a simple text prompt. In addition to the photo of the Pope, the software is also responsible for several other fake images that went viral, such as the pictures of former U.S. president Donald Trump being arrested that circulated online in early March and faux paparazzi photos of Elon Musk holding hands with Alexandria Ocasio-Cortez.
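For readers curious what “generating an image from a text prompt” actually looks like, here is a minimal sketch using OpenAI’s Python library for DALL-E (Midjourney itself is operated through Discord commands such as /imagine rather than a public API). The prompt text, the output file name, and the assumption of a 0.x-style openai package with an API key in the environment are all illustrative, not details from the story:

```python
# Minimal text-to-image sketch using OpenAI's DALL-E API (0.x-style openai library).
# The prompt and output filename are illustrative, not taken from this article.
import os

import openai
import requests

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a key is already configured

response = openai.Image.create(
    prompt="an elderly man in a long white designer puffer jacket in a city square",
    n=1,               # number of images to generate
    size="1024x1024",  # DALL-E accepts 256x256, 512x512, and 1024x1024
)

# The API returns a temporary URL for each generated image; download the first one.
image_url = response["data"][0]["url"]
with open("generated.png", "wb") as f:
    f.write(requests.get(image_url).content)
```

A single short prompt is all it takes, which is exactly why convincing fakes can now be churned out so quickly.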

Source: BBC News

Though the relatively harmless images fooled many people and generated much controversy, do Americans have real reasons to be concerned about falling for AI trickery? 

The ability to manipulate photos and create fake images isn’t exactly new in American culture. For several years now, Americans have been fooled by deepfakes, produced with earlier generations of AI tools and used to create convincing image, audio, and video hoaxes. However, many deepfakes had obvious signs of illegitimacy, such as unusual skin tones, eyes that never blink, blurred ears, stains, strange lighting, and oddly positioned shadows.

Today’s more advanced AI programs, such as Midjourney and OpenAI’s DALL-E, have far fewer giveaways. Many AI experts have noted that Midjourney struggles to render human hands accurately, often adding extra fingers or leaving them out entirely. But in photos like the puffer-jacket Pope, where the hands aren’t the focus of the image, those common giveaways are much harder to spot.

This recent slew of fake AI photos, on top of ChatGPT’s release last fall, has raised many concerns about the ethics of generative AI models. Last week, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and former presidential candidate Andrew Yang joined hundreds of others in an open letter calling for a six-month pause on giant AI experiments, warning that these programs pose “profound risks to society and humanity.” The letter was posted on the website of the Future of Life Institute, a nonprofit dedicated to steering technology toward benefiting human life. According to the letter, the pause should be used by developers to make AI systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal, and to work with lawmakers on AI governance systems.

Last Tuesday, Midjourney CEO David Holz announced on Discord that the company would stop offering the software for free; the service now requires a $10-per-month subscription. According to Holz, though, the sudden change had little to do with the recent controversial Midjourney-generated images and more to do with demand: the software’s available computing power simply couldn’t handle the influx of users, some of whom were abusing the free trial by creating multiple accounts to get more free images.

While many people use these emerging AI tools earnestly to create art and express themselves, the spread of AI-generated media also threatens to further taint information channels in the digital age. Bad actors could eventually generate illegitimate AI content in bulk, or conversely claim that legitimate content is AI-generated, to confuse internet users and provoke particular behaviors. There are also concerns that AI-generated images could be used for harassment or to drive already divided internet users further apart.

Many AI companies and industry groups are beginning to roll out transparency measures that will help users better discern when a piece of content was generated by AI. For instance, the Partnership on AI, along with partners like OpenAI, TikTok, Adobe, Bumble, and the BBC, has published recommendations for different categories of stakeholders on their roles in developing, creating, and distributing synthetic media. Platforms including Meta’s Facebook and Instagram, Twitter, and YouTube also have policies restricting or prohibiting the sharing of manipulated media that could mislead users. Other measures, like requiring watermarks on AI-generated images so users can judge their legitimacy, are also underway.
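To make the watermarking idea concrete, here is a minimal sketch that stamps a visible “AI-generated” label onto an image using the Pillow library. The function name and file paths are hypothetical, and real industry proposals are generally more sophisticated than a simple visible stamp, but the snippet shows how little it takes for a generator to label its own output:

```python
# Minimal sketch: stamp a visible "AI-generated" label onto an image with Pillow.
# The function name and file paths are hypothetical examples.
from PIL import Image, ImageDraw, ImageFont


def stamp_ai_label(path_in: str, path_out: str, label: str = "AI-generated") -> None:
    img = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()

    # Measure the label and place it in the lower-right corner.
    left, top, right, bottom = draw.textbbox((0, 0), label, font=font)
    text_w, text_h = right - left, bottom - top
    x, y = img.width - text_w - 12, img.height - text_h - 12

    # Semi-transparent backdrop keeps the label readable on any background.
    draw.rectangle((x - 6, y - 4, x + text_w + 6, y + text_h + 6), fill=(0, 0, 0, 160))
    draw.text((x, y), label, font=font, fill=(255, 255, 255, 230))

    Image.alpha_composite(img, overlay).convert("RGB").save(path_out)


stamp_ai_label("pope_puffer.png", "pope_puffer_labeled.png")
```

Of course, a visible stamp like this can be cropped out, which is why much of the industry discussion focuses on labeling at the point of generation and on platform-level policies rather than on any single technical fix.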

However, AI software and social media companies may have to clear many hurdles to improve the transparency of their content. Blanket rules for generative AI tools may be hard to implement because of freedom-of-speech protections and other factors, and it is a slippery slope for these companies to take it upon themselves to decide whether a particular piece of content deserves protection. With synthetic images becoming increasingly difficult to distinguish from real ones, better public awareness and education will be vital to combating misinformation.
