
Taylor Swift AI deepfakes raise alarm, regulation questions

X, formerly known as Twitter, has blocked searches for Taylor Swift, following a recent incident involving AI-generated explicit images on various social media platforms.

The deepfake images were likely generated with Microsoft’s text-to-image generator, according to a report from 404 Media. While Microsoft stated that it has not been able to reproduce the specific case, the company acknowledged the need for better safeguards and has implemented improved guardrails on its platforms.

Yahoo Finance’s Dan Howley reports on this story and weighs in on the increasing calls for US regulation of the quickly evolving world of AI.

For more expert insight and the latest market action, click here to watch this full episode of Yahoo Finance Live.

Editor’s note: This article was written by Eyek Ntekim

Video Transcript

JOSH LIPTON: Meanwhile, searches for Taylor Swift on Twitter have been blocked following the social media site being flooded with AI-generated explicit images of the singer over the weekend. The deepfakes sparked outrage with fans and lawmakers and have since been traced back to being generated on Microsoft’s text-to-image generator. Here with more is Dan Howley. Dan.

DAN HOWLEY: That’s right. This kind of generated image kind of splashed across the internet, most prominently showing up on X, but on other platforms as well, and is really kind of stirring the pot as far as how regulators want to respond to this kind of content online. Obviously, these kinds of deepfake images are generated using AI. 404 Media reported that they came from one of Microsoft’s products, but Microsoft says they haven’t been able to recreate this on their own but nevertheless have improved their guardrails for their platforms.

And I tried to go on Microsoft Designer, typed in Taylor Swift, and it wouldn’t even let me do that. But you can still see these images on X. They have blocked searches for Taylor Swift, as well as Taylor AI. But if you search for something like Swift AI, it’s still going to come up. That’s because Swift is a programming language.

They’re also going to continue to circulate regardless of what X does at this point, because, as we all know, once it’s on the internet, it’s basically impossible to get rid of. So now regulators have to come up with a plan, if they can, as to how to put some kind of law into place around this. The EU is working on laws about that. But the US is notoriously slow when it comes to putting out any kind of AI law through Congress.

And it’s important to point out that this kind of content obviously is done without the victim’s consent at all. But it’s not the first time something like this has happened. We’ve seen generative AI used for nefarious purposes before, whether that’s the deepfake voice message impersonating President Biden during the New Hampshire vote. And we’ve also seen an AI-generated image of an explosion outside of the Pentagon that hurt markets for a time in 2023. So this kind of problem is going to continue to grow until regulators come up with a way of dealing with it.

JULIE HYMAN: Well, Dan, to your point, we have seen this in recent years. I’m curious if this Taylor Swift situation is different in any way. Is it just because she’s Taylor Swift and she’s so popular? Is it because there is technologically something different going on in this particular case, all the above?

DAN HOWLEY: Yeah. I mean, it’s all of the above. Obviously, Taylor Swift’s popularity, especially at this moment, has a lot to do with it. I mean, this kind of happened before, obviously not the same situation, but her kind of standing led to those hearings about ticket prices and ticket scalping and things along those lines. But this has unfortunately been around for some time. People have been photoshopping heads of celebrities onto other people forever, right?

It’s just the fact that it’s so easy to do now with this technology means that the spread of it is going to be far faster and the volume of it will be far greater. So I think that’s why we’re seeing this kind of reaction. It’s– a, it’s Taylor Swift and people are going to have a response because of that and, b, just because of the simplicity of the technology that allows people to do this and then quickly spread it across social media. So I think those are the real reasons why it’s generating so much buzz and why X had to take action.

JOSH LIPTON: Do we know– speaking of taking action, Dan, my understanding was these images, and correct me if I’m wrong, they were created using a Microsoft tool. Did Microsoft respond to this?

DAN HOWLEY: Yeah. They said that they weren’t able to reproduce this on their own and that they’re regardless putting better guardrails into place to ensure this doesn’t happen. And this has been something that companies have been talking about, these kinds of guardrails to prevent the misuse of this technology, for some time. The kind of technologist side of things will say, look, it’s a technology. It can be used for good and bad. It depends on the person that’s using it, which is a fair thing to say. But when you’re a victim of something like this, you probably have, rightfully so, a much different kind of take on it.

So it’s going to be interesting to see how these two kind of opposing directions kind of come together if they can and if Congress can actually do something. So far, we haven’t seen anything from them of significance. The Biden administration has released its AI executive order looking at ways to safeguard against certain types of AI misuse, whether that’s discriminatory uses, things along those lines. But when it comes to this issue of deepfakes and how they’re used and if they can be controlled, that’s kind of still out there.

And while Microsoft says they haven’t recreated this, it’s important to point out that there are other models out there that people could use, could train, and could put out deepfakes with, regardless of whether a large company is doing this or not. So it’s something that’s going to be here with us now going forward. It’s not– there’s really no putting the genie back in the bottle here. It’s just going to be a permanent problem from here on out.

JOSH LIPTON: All right. We’ll keep watching. Dan, thank you so much for your time. Appreciate that.

