White House pushes tech industry to shut down market for sexually abusive AI deepfakes

THE PROGRAM WOULD WORK. KHIREE. WELL, THIS RIGHT HERE IS THE KIND OF THING WE'RE TALKING ABOUT WHEN IT COMES TO DEEPFAKES. THIS VIDEO WAS UPLOADED BY MCAFEE, AND IT SHOWS AN EXAMPLE OF A DEEPFAKE GENERATED BY AI. AS YOU CAN SEE, IT SHOWS SOMEONE LOOKING LIKE TAYLOR SWIFT GIVING AWAY COOKWARE. NOW, THIS IS THEIR EXAMPLE OF THE KIND OF VIDEO YOU JUST SHOULDN'T TRUST. AND THE PROFESSOR I SPOKE WITH AT UMD IS WORKING ON SOFTWARE THAT CAN HELP YOU TELL THE DIFFERENCE. THE DANGER CAN RANGE ANYWHERE FROM FUN TO A SERIOUS ISSUE, LIKE MANIPULATING OPINIONS IN A DEMOCRACY. AND THOSE ARE THE THINGS THAT MAKE US MORE WORRIED. NIRUPAM ROY IS AN ASSISTANT PROFESSOR AT THE UNIVERSITY OF MARYLAND IN COLLEGE PARK. HE AGREES WITH NATIONAL SECURITY EXPERTS THAT DEEPFAKES ARE A GROWING ISSUE. A DEEPFAKE IS THE MANIPULATION OF A PERSON'S LIKENESS IN A VIDEO THAT CAN BE USED TO SPREAD MALICIOUS OR FALSE INFORMATION. TO COMBAT THAT PROBLEM, ROY AND HIS RESEARCH AND DEVELOPMENT TEAM OF FOUR STUDENTS ARE DEVELOPING TALKLOCK. IT'S A CRYPTOGRAPHIC QR CODE-BASED SYSTEM THAT CAN VERIFY WHETHER CONTENT HAS BEEN EDITED FROM ITS ORIGINAL FORM. HERE'S AN EXAMPLE OF HOW TALKLOCK WORKS. IF I'M ABOUT TO GIVE A LIVE SPEECH, THE APP WILL GENERATE A QR CODE LIKE THIS DEMO RIGHT HERE. AS I KEEP TALKING, THE CODE KEEPS CHANGING. THE QR CODE IS DISPLAYED NEXT TO THE PERSON THAT'S SPEAKING AND RECORDS THE SPEAKER'S AUDIO. ANYONE CAN TRIGGER A VERIFICATION: THEY CAN TAKE CONTENT WITH OUR TALKLOCK QR CODE, AND OUR SERVER CAN VERIFY WHETHER THE SPEECH THAT IS IN THE CONTENT AND THE QR CODE THAT IS SHOWING ON THE CONTENT MATCH OR NOT. AND IT CAN AUTHENTICATE WHETHER THE SPEECH HAS BEEN MANIPULATED OR NOT. IT ALSO WORKS FOR ORIGINAL CONTENT THAT A CREATOR WANTS TO MAKE SURE CAN'T GET MANIPULATED.
THEY CAN UPLOAD THIS TO THE TALKLOCK SERVER, AND THE TALKLOCK SERVER WILL PROCESS THE DATA, CREATE A DYNAMIC QR CODE, AND EMBED IT INTO EVERY FRAME OF THAT MEDIA CONTENT. HIS TEAM OF STUDENTS SAYS THAT WE SHOULD BE CONCERNED ABOUT DEEPFAKES, BECAUSE CREATING ONE ISN'T AS HARD AS YOU MAY THINK. THESE DAYS, YOU CAN JUST DOWNLOAD AN APP ON YOUR PHONE AND TELL IT, OKAY, CHANGE THIS VIDEO AND SWAP THE FACE WITH THIS PERSON, HE SAYS. CREATING THIS KIND OF SOFTWARE IS IMPORTANT BECAUSE DEEPFAKES COULD LEAD TO LIFE-CHANGING CONSEQUENCES IN TODAY'S WORLD. AND THE PROFESSOR TOLD ME THAT HE IS LOOKING TO ROLL OUT THE FIRST FREE VERSION OF THIS FOR ANDROID AND IPHONE BY THE BEGINNING OF THE SUMMER.
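The transcript describes TalkLock only at a high level: a QR code that changes as the speaker talks, and a server that checks whether the recorded speech matches the displayed codes. The actual TalkLock protocol is not detailed in this article; the following is a minimal sketch of the general idea only, using chained message authentication codes over audio chunks, with a made-up session key and hypothetical function names:

```python
import hashlib
import hmac

SESSION_KEY = b"demo-session-key"  # hypothetical per-speech secret, not TalkLock's real scheme

def rolling_codes(audio_chunks, key=SESSION_KEY):
    """Produce one chained MAC per audio chunk; each payload would be
    rendered as the on-screen QR code while the speaker talks."""
    state = hashlib.sha256(key).digest()
    codes = []
    for chunk in audio_chunks:
        # Chain the previous state into each MAC so codes depend on all
        # speech so far, not just the current chunk.
        state = hmac.new(key, state + chunk, hashlib.sha256).digest()
        codes.append(state.hex()[:16])  # shortened payload for display
    return codes

def verify(audio_chunks, codes, key=SESSION_KEY):
    """Server-side check: recompute the chain from the recorded audio and
    compare it against the codes captured in the video."""
    return rolling_codes(audio_chunks, key) == list(codes)

chunks = [b"hello everyone", b"today we discuss", b"deepfake detection"]
codes = rolling_codes(chunks)
print(verify(chunks, codes))                              # True: untampered
print(verify([chunks[0], b"edited", chunks[2]], codes))   # False: altered audio breaks the chain
```

Because each code folds in the previous state, splicing, reordering, or replacing any stretch of audio invalidates every subsequent code, which matches the transcript's claim that edited speech fails verification.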

President Joe Biden's administration is pushing the tech industry and financial institutions to shut down a growing market of abusive sexual images made with artificial intelligence technology.

Video above: Maryland professor working on software to identify deepfakes


New generative AI tools have made it easy to transform someone's likeness into a sexually explicit AI deepfake and share those realistic images across chatrooms or social media. The victims, be they celebrities or children, have little recourse to stop it.

The White House is putting out a call Thursday looking for voluntary cooperation from companies in the absence of federal legislation. By committing to a set of specific measures, officials hope the private sector can curb the creation, spread and monetization of such nonconsensual AI images, including explicit images of children.

"As generative AI broke on the scene, everyone was speculating about where the first real harms would come. And I think we have the answer," said Biden's chief science adviser Arati Prabhakar, director of the White House's Office of Science and Technology Policy.

She described to The Associated Press a "phenomenal acceleration" of nonconsensual imagery fueled by AI tools and largely targeting women and girls in a way that can upend their lives.

"If you're a teenage girl, if you're a gay kid, these are problems that people are experiencing right now," she said. "We've seen an acceleration because of generative AI that's moving really fast. And the fastest thing that can happen is for companies to step up and take responsibility."

A document shared with AP ahead of its Thursday release calls for action from not just AI developers but payment processors, financial institutions, cloud computing providers, search engines and the gatekeepers, namely Apple and Google, that control what makes it onto mobile app stores.

Video below: Middle school students in Wisconsin targeted in AI nude photo scam

The private sector should step up to "disrupt the monetization" of image-based sexual abuse, restricting payment access particularly to sites that advertise explicit images of minors, the administration said.

Prabhakar said many payment platforms and financial institutions already say that they won't support the kinds of businesses promoting abusive imagery.

"But sometimes it's not enforced; sometimes they don't have those terms of service," she said. "And so that's an example of something that could be done much more rigorously."

Cloud service providers and mobile app stores could also "curb web services and mobile applications that are marketed for the purpose of creating or altering sexual images without individuals' consent," the document says.

And whether it is AI-generated or a real nude photo put on the internet, survivors should more easily be able to get online platforms to remove them.

The most widely known victim of pornographic deepfake images is Taylor Swift, whose ardent fanbase fought back in January when abusive AI-generated images of the singer-songwriter began circulating on social media. Microsoft promised to strengthen its safeguards after some of the Swift images were traced to its AI visual design tool.

A growing number of schools in the U.S. and elsewhere are also grappling with AI-generated deepfake nudes depicting their students. In some cases, fellow teenagers were found to be creating AI-manipulated images and sharing them with classmates.

Last summer, the Biden administration brokered voluntary commitments by Amazon, Google, Meta, Microsoft and other major technology companies to place a range of safeguards on new AI systems before releasing them publicly.

That was followed by Biden signing an ambitious executive order in October designed to steer how AI is developed so that companies can profit without putting public safety in jeopardy. While focused on broader AI concerns, including national security, it nodded to the emerging problem of AI-generated child abuse imagery and finding better ways to detect it.

But Biden also said the administration's AI safeguards would need to be supported by legislation. A bipartisan group of U.S. senators is now pushing Congress to spend at least $32 billion over the next three years to develop artificial intelligence and fund measures to safely guide it, though it has largely put off calls to enact those safeguards into law.

Encouraging companies to step up and make voluntary commitments "doesn't change the underlying need for Congress to take action here," said Jennifer Klein, director of the White House Gender Policy Council.

Longstanding laws already prohibit making and possessing sexual images of children, even if they're fake. Federal prosecutors brought charges earlier this month against a Wisconsin man they said used a popular AI image-generator, Stable Diffusion, to make thousands of AI-generated realistic images of minors engaged in sexual conduct. An attorney for the man declined to comment after his arraignment hearing Wednesday.

But there's almost no oversight over the tech tools and services that make it possible to create such images. Some are on fly-by-night commercial websites that reveal little information about who runs them or the technology they're based on.

The Stanford Internet Observatory in December said it found thousands of images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that's been used to train leading AI image-makers such as Stable Diffusion.

London-based Stability AI, which owns the latest versions of Stable Diffusion, said this week that it "did not approve the release" of the earlier model reportedly used by the Wisconsin man. Such open-sourced models, because their technical components are released publicly on the internet, are hard to put back in the bottle.

Prabhakar said it's not just open-source AI technology that's causing harm.

"It's a broader problem," she said. "Unfortunately, this is a category that a lot of people seem to be using image generators for. And it's a place where we've just seen such an explosion. But I think it's not neatly broken down into open source and proprietary systems."