The European Union is preparing to force American AI companies to comply with a new law governing “hate speech.” Glenn speaks with Justin Haskins, the co-author of his “Great Reset” book series, about how this will “force AI developers, no matter where they are in the world, to adopt all of these EU ESG rules and to embed them into their AI systems.” Will this turn AI algorithms even more woke? Apparently, ChatGPT has already begun making the changes. But the law isn’t entirely bad …
Transcript
Below is a rush transcript that may contain errors
GLENN: Anyway, so we were talking about AI, what we have just learned in -- from polling, which is both good and bad. People are at least really starting to understand this, which is good.
If they don't understand this, or they only understand it, like we understood social media when we got on. Have you seen the Twitters? I'm on Facebook.
If we approach AI like that, we're doomed. Because, by the time you figure it out, everything will be different. And so it looks like people are starting to pay attention to it, before we get there.
JUSTIN: And they want rights and protections embedded in it. That's what they want.
GLENN: That's great.
Now, talk to me about the EU. Because the EU has decided, we're not playing this game. We don't have to be the first. First of all, you wouldn't be. But we don't have to be the first.
We're just going to be the ones that set all of the rules.
JUSTIN: Yeah.
The expansion of power in the European Union, from the Biden administration into today, has been slightly unbelievable. This is just another example. We talked about the EU ESG law before, where they're trying to impose ESG on all of America and the whole world.
Well, this is similar to that. They passed a law called the AI Act in 2024.
So last year, they passed it. A lot of the law. Some of the law has already gone into effect earlier this year. More of it goes into effect in August. Penalties and things like that for non-compliance go into effect over the next couple of years.
GLENN: Is this part of the stuff where you can't say anything -- you know, where if the government disagrees with you, you go to jail? Or are these laws basically all for the high-tech companies?
JUSTIN: These are for big tech companies. That's what the purpose is. The idea behind these is to force AI developers. No matter where they are in the world, they could be in America or Canada or the EU or someplace else.
GLENN: Just not Canada because they don't care.
JUSTIN: Yeah. Well, they let them slide. To adopt all of these EU sort of like ESG rules.
And to embed them into their AI systems. And their way of determining whether this might apply to you is not whether you have an AI system offered in the EU.
It's not even whether someone in the EU uses your AI system from the EU.
It's based on the use of an output from an AI system.
So, in other words, if I have ChatGPT produce an offensive, transphobic meme, and that is used by somebody in the EU -- because you sent it to somebody in the EU, and then some influencer in the EU spreads it all over the place -- that's enough to bring your AI system under their law. Okay?
GLENN: Oh, my gosh.
JUSTIN: Now, it's a super complicated, absurdly ridiculous law.
GLENN: I have to tell you, you know what I would do, if that were the case and we had a president here that was going to allow those punishments to stand?
I would find a way to embed some sort of code that would not allow it to be spread in the EU.
I don't even know if that's possible.
Or you would just say, cut them off.
I don't want anything in the EU.
JUSTIN: What's crazy about this is that it applies even if you didn't offer your service in the EU -- just the output alone. It's like, you'd have to force big tech companies not to operate in the EU at all. And why would they do that? They can make money in the EU.
That's the genius of the EU.
So what's amazing about this is -- well, before we get into that, the requirements. What are the things you have to do if you're a covered company under this? There are all these obligations. You have to conduct detailed risk assessments. You have to build in human oversight, maintain logs and testing protocols, and submit extensive documentation to EU regulators.
The worst of it all: you have to build a risk mitigation system, to avoid actions considered harmful by the EU, into your AI systems.
Okay? Now, what is a risk mitigation system, exactly? How do they define risk under this law? Well, risk is so broadly defined that almost anything could be covered.
It's essentially whatever the EU regulators want to ask you to stop doing.
They can ask you to stop doing it.
Or else you pay these massive fines.
How big are the fines, you might wonder?
Well, depends on the violation.
But for most of the violations that I think we would be concerned about, it would be 3 percent of total worldwide revenue for that company.
GLENN: Not profit!
Revenue.
STU: Revenue.
GLENN: If you run a regular company -- I mean, a good return is usually six to eight percent on your dollar?
JUSTIN: We're talking about billions of dollars.
GLENN: We're talking about gigantic proportions of -- if you're taking straight revenue, your profit could be gone.
JUSTIN: Yes. Exactly. So it forces compliance. Forces it. Very similar to the ESG law that we had talked about before. Except, it's focused on AI.
So they're requiring these companies to adhere to these absurd rules. And they're going to do it, because they will make money off of it. So I want to give you some specifics -- some specific language from the law.
So in the law, there's this section where they basically lay out their intent. And in their intent, which is pages and pages and pages long, they talk about risks.
Now, some of the risks that they're concerned about and trying to stop with this law -- this is what is in their mind, this is how courts in the EU will interpret it, this is why it's important. When they're talking about general-purpose AI models, they're talking about the systemic risk that a big one might cause -- so they're talking about ChatGPT or something like that. It says that it could pose systemic risks which include, but are not limited to, any actual or foreseeable negative effects in relation to major accidents, disruptions of critical sectors, and serious consequences to public health and safety.
Any actual or reasonably foreseeable negative effects on democratic processes.
Public and economic security, and the dissemination of illegal, false, or discriminatory content. Then there's a whole bunch of other things I'll skip past. Risks from models making copies of themselves, or self-replicating. Ways in which models can give rise to harmful bias and discrimination, with risks to individuals, communities, or societies.
The facilitation of disinformation, or harming privacy, with threats to democratic values and human rights. A risk that a particular event could lead to a chain reaction with considerable negative effects that could affect up to an entire city, an entire domain of activity, or an entire community.
STU: What!
JUSTIN: Now, this is a law. This is what they put into the law. So -- so --
GLENN: I don't even know what that last one means.
JUSTIN: It's anything, Glenn.
It summarizes to: whatever we want.
So I asked ChatGPT: Are you concerned? Is this something that you're concerned with? It's already supposed to go into effect. Are you doing anything?
ChatGPT says, yes. OpenAI has already been changing its algorithms to conform to this and other laws like it.
It's already happening.
So we've been sitting here, talking about, why is AI doing all this crazy, woke stuff. And why is it only promoting certain views of various issues or things like that.
Well, because it's being forced to do that.
In part, at least, because of the European Union.
So while China and America are in a race to -- you know, the AI race, to see who will develop it first.
Europe is just like, we don't really care who develops it first.
GLENN: Okay. So that's a really cute idea.
May I take it to the last big, potentially destructive force: the Manhattan Project.
Europe could be sitting there, going, well, you know, Russia might develop their own nuclear weapons. And America. But we will make the rules. Hmm.
Well, maybe -- if Russia and America were like, okay, you can make the rules.
I mean, China and the United States are like, I don't really care. We're not -- we're not doing it.
JUSTIN: Agreed. And so, one of the things that is in here that is good -- we're talking about some of the good stuff that is in it -- there are direct prohibitions on the use of AI.
So it's banned. Not a penalty.
Like, you can't do it.
That deals with AI that exploits human vulnerabilities like age or disability. Certain social scoring.
GLENN: What does that even mean, though?
JUSTIN: Of course. That's always the problem with these things.
The use of AI for subliminal, mind-altering apps and things like that.
They don't want that happening.
GLENN: Sure, yeah.
Right.
JUSTIN: I mean, there's stuff in there.
Well, yeah. I don't want -- I don't want to be subliminally --
GLENN: It's all ephemeral.
How are you going to prove that? It's happening now!
JUSTIN: It's whatever they want. That's the power!
GLENN: It is, show me the person and I will show you the crime.
JUSTIN: Yes.
And by the way, there's a Minority Report-type thing in here. It says, you can't use it to predict crime and arrest people for it.
GLENN: Why not?
Just use the cameras.
JUSTIN: It's real time. You don't need to predict it.
GLENN: Yeah.
JUSTIN: I think the main question in all of this is: why would America allow the EU to dictate how it's designing its AI programs?
GLENN: Easy.
If you're the progressive counterpart to the EU, then you can't get things through your House and Senate.
Okay? We couldn't pass any of that crap.
So let them do it. And then we'll just go, okay.
Well, we are one with the EU. And we will just follow their rules.
And we will just do that.
So it allows you to destroy all of American rights.
Just by blaming it on them.
I think that's why.
JUSTIN: That's 100 percent why.
I think at some point, this became the actual strategy for whoever was running the Biden administration.
That was the strategy. Like, we're not getting anything done.
It's over for us.
We will just let Europe do whatever it wants. And we will have to comply with it. And we will allow them to impose ESG on us, and all these other things.
GLENN: Because we can't get it through.
JUSTIN: Because we can't get it through.
GLENN: And I know, because the guy who is running the administration was Mike.
JUSTIN: Is it really? Wow.
Was he an okay guy?
GLENN: I don't know. His name was Mike. You know, at least he ran it on Mondays.
I don't know who ran it the rest of the week.