Let's kick off the week by catching up on some science news.
For Science Quickly, I'm Rachel Feltman.
If you enjoyed photos of the red-carpet fashion from the Met Gala last Monday, chances are good you encountered at least one artificial-intelligence-generated fake.
Katy Perry posted screenshots suggesting that even her own mother got duped by an AI-generated image appearing to show the pop singer in a floral gown.
It just so happens that on Tuesday ChatGPT developer OpenAI announced a new tool designed to detect images made using the company's DALL-E 3 generator.
OpenAI says internal tests found that the tool was able to identify about 98 percent of images generated by DALL-E 3.
The creators did note, however, that edits made to images after generation, including shifts in coloring, made the tool more likely to fail.
Sam Gregory, executive director of technology-focused human rights nonprofit Witness, told NPR that good media literacy and common sense make a better defense against deepfakes than currently available digital tools.
While a fake photo of Katy Perry may not pose a global threat, other images could have more serious implications, particularly during an election year.
Gregory noted that if just one image or video is circulating from a supposed event, internet users should ask themselves why there aren't any corroborating sources.