The WallBuilders Show

The Changing Landscape of Technology - AI Being Used to Shape DEI, part 2 - With Justin Haskins and Don Kendall

May 01, 2024
Tim Barton, David Barton & Rick Green

Could artificial intelligence be perpetuating a new wave of discrimination under the guise of promoting fairness and equality? We tackle this paradox head-on at the ProFamily Legislators Conference, where we scrutinize a 2023 report on AI and discrimination and dissect the left's appetite for 'fairness-aware algorithms,' a push for AI to deliver equitable outcomes that has been echoed by the Biden administration.

The implications of AI are profound, stretching far into the realms of banking and the criminal justice system. We uncover how AI in banking might be affecting lending practices due to CO2 emission reduction commitments, potentially sidelining some businesses in favor of others. The criminal justice system is not immune to AI's reach either; we discuss how AI influences decisions on sentencing and bail, posing a risk of embedding ideological biases from its developers. The episode also sheds light on the broader societal implications, including ties to the ESG movement and the potential emergence of a social credit score-like system.

Zooming in on the criminal justice system, the episode features a critical look at the organizations behind risk assessment tools used in legal settings, such as those funded by Arnold Ventures. We explore how these tools could be shaping legal outcomes and the real-world impact through a case study from Virginia, showing AI's power to alter judicial decisions. The need for legislation to curb the ideological influence of AI in the justice system is paramount. We wrap up by discussing legislative responses aimed at preserving unbiased justice in an age where technology's influence is inescapable. Join us as we confront these urgent issues, calling for a balanced approach to integrating AI into the fabric of our society.

Support the Show.

Rick Green

Welcome to the Intersection of Faith and Culture. It's the WallBuilders Show with David Barton, Tim Barton, and I'm Rick Green. Normally it's the three of us pontificating on whatever the hot topic of the day is from a biblical, historical, and constitutional perspective, but today we're going to take you right back to the ProFamily Legislators Conference. Yesterday we started this presentation about the Uniform Law Commission, some of the CBDC issues, AI, lots of cool stuff, and we're going to jump right back into it. So here we go, back to the ProFamily Legislators Conference.

Justin Haskins

They produced a report in 2023 about AI and discrimination. Now, these are the people who are advising BlackRock and pension funds and CalPERS and all these different institutions on how they should be voting when it comes to artificial intelligence companies. Okay, and what they said is that a primary way to improve AI model fairness is the specification of fairness-aware algorithms. This means that, in addition to other objectives such as predicting high job performance, user engagement, or other successful outcomes, the model also factors in fairness metrics such as gender balance. These constraints encourage predictions that are equitable across certain protected attributes, thereby mitigating discrimination. Well, no, I would say that encourages discrimination. That actually is the definition of discrimination: we're going to rig the system to give us the result we want, and we're going to build it into the design. Again, this is a powerful Wall Street firm, and I could show you other quotes that are just like this.
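
To make concrete what a "fairness-aware algorithm" of the kind quoted above can look like, here is a minimal sketch in Python: a toy logistic hiring model whose training loss adds a penalty on the score gap between two groups. The data, feature names, and penalty strength are hypothetical illustrations, not anything taken from the report itself.

# A minimal sketch of a "fairness-aware" training objective, assuming a toy
# logistic hiring model with a demographic-parity penalty. All data and
# weights here are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 applicants, 3 job-performance features, a binary
# protected attribute, and a hire/no-hire label.
X = rng.normal(size=(200, 3))
group = rng.integers(0, 2, size=200)
X[:, 0] += 0.8 * group  # one performance feature correlates with the group
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, lam):
    p = sigmoid(X @ w)
    # Standard prediction objective: binary cross-entropy on hiring outcomes.
    bce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # Fairness constraint: squared gap between the groups' average predicted
    # scores. Raising lam trades predictive accuracy for equal outcomes
    # across the protected attribute, the trade-off the quote describes.
    gap = p[group == 0].mean() - p[group == 1].mean()
    return bce + lam * gap ** 2

def train(lam, steps=500, lr=0.5, eps=1e-5):
    # Numerical gradient descent keeps the sketch short and dependency-free.
    w = np.zeros(3)
    basis = np.eye(3)
    for _ in range(steps):
        grad = np.array([(loss(w + eps * basis[i], lam)
                          - loss(w - eps * basis[i], lam)) / (2 * eps)
                         for i in range(3)])
        w -= lr * grad
    return w

for lam in (0.0, 10.0):
    p = sigmoid(X @ train(lam))
    print(f"lam={lam:4.1f}  group score gap={abs(p[group == 0].mean() - p[group == 1].mean()):.3f}")

With the penalty weight at zero the model scores the groups differently because the underlying data differ; raising it forces the group averages together regardless of what the data say, which is the design choice the speakers are objecting to.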

Don Kendall

Yeah, and this is just another thing I want to point out: a lot of this information is spoken about very openly. They have these conferences, they talk about all of this stuff, they put it in their white papers, they release press releases about it, and it just kind of flies under the radar. You don't have a lot of people like Justin and me who are laser-focused on reading through all of this stuff, which seems very boring just based on the covers but is very revealing of their agenda. But we do have a bit more, some views from the Biden administration.

Justin Haskins

Yeah, so just real quick, because I want to make sure we get to everything. This came from Lael Brainard. She was on the Fed Board of Governors when she said this; she gave a speech about AI. She's now the director of Biden's National Economic Council, so this is a very powerful, influential person. She was talking about banking and AI, and she said it is our collective responsibility to build appropriate guardrails and protections against this, she's talking about racial bias in this case, including, quote, to ensure that AI is designed to promote equitable outcomes. Outcomes have to be equitable. That's called socialism, right? So we're going to design our AI so that it produces some kind of socialistic outcome, in accordance with whatever we think is the right socialistic outcome.

Don Kendall

And, like I said, it seems like everybody's on board. This isn't just kind of these academic elites sitting out in Davos talking about this stuff. They have buy-in from basically everyone that's influential in this game, right, Justin?

Justin Haskins

Right. So the World Economic Forum created an AI Governance Alliance. This is a public-private partnership between governments and the largest, most powerful AI developers and big tech companies in the entire world, all agreeing with the Davos model for ESG and social justice and all of these different things. All of the companies listed here on the right have already signed up for this. Some of you are going to have trouble reading these, so I'll read some of them to you. Okay, so they're with Davos, they're working hand in hand with them: Microsoft; Meta, that's Facebook; Google; IBM; Salesforce; DeepLearning.AI; University of Oxford; University of Texas; University of Pennsylvania; Duke; Sony; Lenovo; Hewlett Packard; Intel; Amazon; Adobe. We could go on and on and on. It's everybody, okay? Everyone who matters in the AI space, almost everyone, has agreed to sign up with the Davos people. Even after the Great Reset debacle, they've agreed to sign up with them to manipulate AI so that it produces Davos values. This is well documented.

Don Kendall

Right, right, and again, if you look through the papers, you will see this stuff in black and white. There was another thing that we didn't put in the slides, that I actually just discovered a couple of weeks ago when I was looking through press releases from OpenAI, which is who created ChatGPT. They're associated with Microsoft, and they're putting together something they call the Frontier Model Forum, where they're able to kind of come together and decide on what best practices are and make sure they're not doing anything that's going to harm society. But in it, it says it supports efforts to develop applications that can help meet society's greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats. So again, built into the foundations of these things is going to be: how do we fight back against climate change? And a number of other things as well.

You know, we're talking about all this stuff and it all seems so theoretical. Yes, we have these algorithms that are on our phones; Netflix suggests something we should watch based on all the other stuff we've watched in the past. But the implications of this, the reach that all of this type of technology has, is going to get increasingly more impactful and spread into all of the different industries, and not just in terms of job displacement, but in terms of the way we come to decisions in a whole bunch of different industries. So let's talk about AI and banking, Justin.

Justin Haskins

Okay so it isn't theoretical, it's already happening. The discrimination part is already happening. We're not just talking about AI being used. They're already using AI to discriminate against people. So I'm going to show you this.

Rick Green

Quick break, folks. We'll be right back. You're listening to the WallBuilders Show.

 

Break

Rick Green

Welcome back to The WallBuilders Show. Let's jump right back in with Justin Haskins and Don Kendall out at the ProFamily Legislators Conference.

Justin Haskins

AI is used in banking widely for speaking with customers, fraud detection, predictive analytics, credit risk management, and lending decisions. That last one is the big one. Okay, that's how you control society. If you cut people off from credit and lending, if you cut people off from bank accounts, they can't survive; they can't function as a business without that, okay? So how do we know that there's bias in AI banking? Well, because they've admitted that there's bias in AI banking. They're really happy about it; they celebrate it all the time. So we know it because they've admitted it, but we also know it because banks have signed up for certain initiatives that require discrimination.

 

So, for example, every large bank in the United States has already agreed that they're going to go net neutral on CO2 emissions by XYZ date, and not just in their own operations but in their entire business model, in everything: in lending, in banking. That means they can't give you a loan for a gasoline-powered car and be in compliance with that. It's impossible. It means they can't give you a checking account if you drive a gasoline-powered car and you're making payments, because it would be impossible. It means that if you're a business and you have anything to do with fossil fuels at all, or just CO2 emissions, which are not even necessarily from burning gasoline or something like that, then you cannot get access to a bank account. This is what they've already promised, every large bank: JPMorgan Chase, Wells Fargo, Bank of America. They've all made this pledge. Well, the only way you can do that while you're using AI is if you rig the AI to make sure it is denying people access to those banking services.
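
As one illustration of how a pledge like that could be operationalized in an automated lending system, here is a minimal sketch in which a loan decision blends a traditional credit score with subjective ESG-style metrics. Every field name, weight, and threshold is hypothetical, not drawn from any actual bank's system.

# A minimal sketch of a lending decision that blends an objective credit
# metric with subjective ESG-style metrics. Every field name, weight, and
# threshold is hypothetical, not drawn from any actual bank's system.

def lending_decision(credit_score: int, esg_metrics: dict,
                     esg_weight: float = 0.5) -> bool:
    """Approve a loan from a weighted blend of credit risk and ESG scoring."""
    # Objective component: traditional credit score normalized from 300-850.
    credit_component = (credit_score - 300) / 550
    # Subjective component: average of ESG metrics, each scored 0.0 to 1.0.
    esg_component = sum(esg_metrics.values()) / len(esg_metrics)
    blended = (1 - esg_weight) * credit_component + esg_weight * esg_component
    return blended >= 0.7

# A borrower with strong traditional credit but a poor emissions profile.
applicant = {"emissions_alignment": 0.1, "diversity_score": 0.4, "governance": 0.7}
print(lending_decision(780, applicant, esg_weight=0.0))  # True: credit alone qualifies
print(lending_decision(780, applicant, esg_weight=0.5))  # False: ESG blend screens out

The same borrower passes on repayment risk alone and fails once the subjective metrics are blended in, which is the screening mechanism described above.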

Okay, so let's go to the next slide here. This is a quote that comes from a Harvard Business Review article that was published in 2020, so three years ago. It comes from a business data consultant, Sian Townson, and she says that she loves helping financial services companies, quote, reverse past discrimination by, quote, building AI-driven systems designed to encourage less historic accuracy but greater equity.

Well, this is exactly what I said. What does that mean? She says that means training and testing AI systems not merely on the loans or mortgages issued in the past, but instead on how the money should have been lent in a more equitable world. So they're building AI banking systems that are trying to figure out if you're likely to repay your loan, and what they're doing is saying: don't look at the data; imagine what the data would have been if we weren't all so racist for all these decades, and then let's build that into the AI. Let's trick the AI into believing that was the world that actually happened. It wasn't, but that's what we're going to teach it. All right, so that's just scratching the surface on that.
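
A minimal sketch of what training "on how the money should have been lent" can look like mechanically, assuming a toy dataset: historical approval labels are rewritten toward a counterfactual target rate before any model is fit. The data, the 50% target, and the relabeling rule are all hypothetical, not taken from the article quoted above.

# A minimal sketch of counterfactual relabeling: historical lending labels
# are rewritten toward an imposed target rate before a model is trained.
# Dataset, target rate, and relabeling rule are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
income = rng.lognormal(mean=10.0, sigma=0.5, size=n)
group = rng.integers(0, 2, size=n)
# Toy historical decisions: group 1 applicants were approved at a lower rate.
approved = ((income > np.median(income))
            & ~((group == 1) & (rng.random(n) < 0.3))).astype(int)

target_rate = 0.5  # counterfactual "equitable world" approval rate
labels = approved.copy()
for g in (0, 1):
    idx = np.where(group == g)[0]
    shortfall = int(target_rate * len(idx)) - labels[idx].sum()
    if shortfall > 0:
        # Flip the highest-income historical denials in this group to
        # "approved", producing the equalized history the model will see.
        denied = idx[labels[idx] == 0]
        flip = denied[np.argsort(-income[denied])[:shortfall]]
        labels[flip] = 1

for g in (0, 1):
    print(f"group {g}: historical rate={approved[group == g].mean():.2f}, "
          f"relabeled rate={labels[group == g].mean():.2f}")
# A model fit on `labels` learns the counterfactual lending pattern rather
# than the actual repayment history, the substitution Haskins objects to.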

Don Kendall

Yeah, and one other thing: as he was talking about that, it reminded me of what we were talking about last year, when we had a presentation on ESG, and ESG replacing the traditional ways of doing business and profit and all of that with kind of a social credit score type system. That's what this is. This is what facilitates that; this is what allows them to collect all these different metrics and make their decisions based on those subjective metrics instead of objective reality. So this is the next step of ESG, essentially. But it doesn't just stop at banking. Like I said, it's every industry, and this is the one that got on our radar first, I think, out of all of the AI issues: AI and criminal justice. So, Justin, you tried to be a lawyer at some point. Explain.

Justin Haskins

I did. I failed miserably. So this is incredible. A lot of people have never heard this before, but criminal justice systems all across the country are already using AI and algorithms to help them make sentencing determinations and to help them with pretrial decisions, so determining whether you should get bail or not, or what the bail should look like, and all of that stuff. This is widely used. So this comes from Mapping Pretrial Injustice, which is a project from left-wing organizations, because there are a bunch of left-wing people who don't like this either; they're convinced that it promotes racism and other things, and in this case I kind of agree with them. So they say that most states now use at least one of these risk assessment algorithm tools, or RATs, including various forms of artificial intelligence, to quote, help judges and magistrates decide everything from bail and pretrial release or supervision, to sentencing and gravity of parole or probation supervision. Okay, so we can go to the next slide. Now, I'm not trying to say that the judge must do whatever the AI tells them in these jurisdictions, but they're using it as a way of influencing their decisions. They're going to the AI and saying, what should we do? And then the judge ultimately decides, but the judge is depending on these things to give them the answers, right?

So these RATs are based on aggregate data. They take a data set, which is often a large sample of historic information about a group of people, and they find factors consistent with the results they are trying to predict, which means that a RAT's outcome tries to predict what someone with certain characteristics, such as where someone is from, how many times they were arrested or convicted, or how old they are, might do. They are not individualized for each person. So the whole point is: let's take groups of people, and when they come before a criminal court, let's make decisions about them based on the data. The AI is going to tell us what the right answer is. We're not going to base our decision on the individual person, which is completely insane. Obviously, the court should make a decision based on each unique person who comes before it, because you can't just lump people into these broad categories and then make decisions based on them. But that's exactly what they're doing.
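
For a concrete picture of how such a tool turns group-level factors into a score for one individual, here is a minimal points-based sketch. The factors, weights, and tier cutoffs are hypothetical illustrations, not the actual formula of any deployed risk assessment tool.

# A minimal points-based sketch of how a risk assessment tool maps an
# individual's characteristics onto weights fit to aggregate historical
# data. All factors, weights, and cutoffs are hypothetical.

# Weights fit to aggregate historical outcomes (illustrative values only).
WEIGHTS = {
    "age_under_25": 2,
    "prior_arrests": 1,       # points per prior arrest
    "prior_convictions": 2,   # points per prior conviction
    "failed_to_appear": 3,    # points per prior failure to appear
}

def risk_tier(defendant: dict) -> str:
    """Map one defendant's characteristics to a coarse risk tier."""
    points = (
        WEIGHTS["age_under_25"] * (defendant["age"] < 25)
        + WEIGHTS["prior_arrests"] * defendant["prior_arrests"]
        + WEIGHTS["prior_convictions"] * defendant["prior_convictions"]
        + WEIGHTS["failed_to_appear"] * defendant["failed_to_appear"]
    )
    # The tier summarizes what people with similar characteristics did in
    # the historical data; nothing here is individualized beyond the inputs.
    if points >= 6:
        return "high"
    return "moderate" if points >= 3 else "low"

print(risk_tier({"age": 22, "prior_arrests": 1,
                 "prior_convictions": 0, "failed_to_appear": 1}))  # -> high

The score is driven entirely by group-level factor weights, which is the "not individualized" point made above.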

Don Kendall

Yeah, and just listening to you speak, I'm thinking of an analogy to an accountant. The idea of relying on some type of algorithm to help determine sentencing isn't, in itself, the most horrifying thing to me, because, just like an accountant sitting down with a spreadsheet and punching some numbers into a calculator, two plus two equals four. That's just the reality of the situation, objective reality. Whereas with this, and we're talking about embedding values into the foundations of this artificial intelligence, it's no longer representing objective reality. So now the accountant hitting two plus two is supposed to get four? No, no, that's not what we think it should be; it should be five. That's what we're talking about here. That is changing the basic foundations of these things, and it's going to lead to these people being able to control society in whichever way fits their agenda.

 

Break

Justin Haskins

This is a map of jurisdictions that use one of these AI tools, and what you'll see is that in some places, whole states or almost whole states use them. In other places, you have some jurisdictions within the state that use them, but most don't. In some states, most jurisdictions use them, but there's still a minority of places that don't. So Texas, for example, has some jurisdictions that use them, but most don't. In Florida, a lot of the really populated places use them, but the non-populated places typically don't. Okay, and then the slide says that in Alaska, Arizona, California, Hawaii, Utah, Nevada, Minnesota, Indiana, Kentucky, Ohio, Virginia, West Virginia, Vermont, Connecticut, Delaware, and Rhode Island, most jurisdictions use at least one of these AI tools. So it's being used all over the place. Most states have it; in some places almost all the jurisdictions use it. So the question we have to ask ourselves is, can we trust these things, right? Can we trust them? Maybe the AI is better than the person, I don't know. Maybe it is. But there are several reasons why I think we should not trust them. The first is that research shows that judges do not adhere to RAT recommendations at equal rates for all demographic groups. Remember what I said: the judge doesn't have to do what the artificial intelligence tells them. So, for example, there was one study that found that, on average, judges are more lenient with female defendants than they are with males. In other words, when the AI says throw this woman in jail, they're more likely to say, I don't know. But when a man comes up and it says throw this man in jail, they're much more likely to do it. Okay, so they're going to be harsher; they're going to listen. They're not applying it equally. So they're just picking and choosing when they want to listen to the AI anyway, and then they get to hide behind the AI when they make the wrong decision. They say, well, this is just what the math told me. I'm just going with what the math said, right?

There are a bunch of other reasons why we shouldn't listen to this. Another one is that some of the most popular risk assessment tools have been developed by organizations that espouse openly left-wing values or receive funding from left-wing special interests. One widely used artificial intelligence algorithm tool for pretrial decisions was created by the Laura and John Arnold Foundation in 2013. They're now called Arnold Ventures; I believe it's a for-profit charitable organization. Arnold Ventures has a long track record of funding liberal organizations, including the Center for American Progress, that's the Think Progress people, the Environmental Defense Action Fund, and Planned Parenthood. So these are the people designing the AI that's used when someone comes before a judge to determine whether they should go to jail or get bail or something like that.

Now, that in and of itself doesn't mean they're doing a bad job, but you should probably be suspicious, right? I mean, I don't know that I want Planned Parenthood people being the ones designing the AI algorithms in a deep red state. That doesn't necessarily make a whole lot of sense. And then there's this really in-depth case study from Virginia. They looked at 56,000 cases handled with this artificial intelligence program, and they found that AI recommendations significantly increased the probability of offering alternative punishments, lowered the probability of incarceration, and shortened the length of imprisonment. In other words, the AI in that state is designed to make prison sentences less likely and shorter. That's a problem when you have a crime issue, which is what we have right now in the United States, and maybe this is a contributing factor to that.

Don Kendall

Yeah. So I think we've painted a pretty comprehensive picture of the potential impacts of all of this and the way everything could be shifted toward the agenda by embedding their values into the foundations. That's their plan; we've shown you all of that. So we've got to talk about potential fixes. Do you have any, Justin?

 

Justin Haskins

I have some.

 

Don Kendall

Okay, all right, good. Yeah, we've got a couple here, so we're not going to leave you in the lurch. Well, let's rapid-fire through these. We only have a few more minutes.

Justin Haskins

Yeah, so we don't have a ton of time, but there is some really easy, low-hanging fruit that I think would be effective. Okay, so the first is that states could pass bills that ban state and local agencies from using AI systems that have been altered to promote an ideological agenda or that rely on a manipulated data set. So if you're going to use AI, and maybe you shouldn't, but if you're going to have the government using AI in whatever capacity, it shouldn't be rigged, it shouldn't be biased; it should be based on math. That seems like common sense, so I think that's an easy one. Number two: state policymakers could enact new rules or revise existing processes to ensure that financial institutions, including banks and insurance companies, are not embedding AI systems with ESG scoring metrics to discriminate against some customers over others. Banks shouldn't be able to use their AI to discriminate against people. That should just be common sense.

Now, in many states, most states, banks are allowed to discriminate against people on non-financial factors. One of the problems we came across when we were dealing with the ESG issue is that you had lots of banks using ESG as a way to screen out certain industries or businesses. So if you're a gun owner or you own a gun shop or something like that, the bank is less likely to give you a loan, because they don't like guns; they're trying to get rid of them in society, right? So fair access legislation. The only fair access legislation covering banking services that has been passed in America to date, or really in the whole world, is in Florida, which this past year passed fair access legislation that prevents banks from doing that kind of discrimination. They're the only state that has done this so far. A couple of other states have done it on insurance, Texas and North Dakota. Betty's nodding her head, so that means I'm right. They've done it with insurance companies but not with banks.

 

Break

Justin Haskins

In West Virginia they've done fair access, but only on the Second Amendment, only on issues related to guns. And then Utah is the other state. I don't really understand what they did in Utah, to be totally honest, but it's not the right thing; they need to do more. Okay, so that's one potential option. If you pass a fair access bill that deals with banking services generally, it covers the AI issue, because those bills cover whatever tools the bank is using to make its decisions. Another potential solution: lawmakers could ban state and local criminal justice systems from using AI to help with sentencing decisions. That sort of seems like common sense to me at this point in time. Why is AI helping them make these decisions? It's rigged. If it were completely unbiased, then maybe you could make a case for it at least being one of many options. Maybe I don't even like that, but at least you could make an argument for it. But we know it's rigged. So why are we letting rigged AI help courts make bad decisions? That doesn't make any sense to me.

 

Another one is that we could get AI out of schools. We didn't have a chance to talk about this because we didn't have time, but AI is being put into public schools all across the country as a way, they argue, to help improve learning outcomes. Because teachers are so bad at their jobs in many of these public, government-run schools, now we have to turn to computers to do it for them, while we continue to pay them more and more money, by the way, for doing less and less.

 

Don Kendall

 How do you really feel, Justin?

 

Justin Haskins

I know, don't get me started; I could go another 40 minutes on just teachers. But look, the issue is this: Are we going to have AI educating our kids? Does that make any sense? No, of course it makes no sense. Why would we do that? So I don't know why we couldn't have bans that prevent that from happening.

And then, at the very least, at the very least, can't we just look into how AI is being used in these states? Can't we just investigate it? Can't we just see how it's being used? Does anybody even know what's going on? We've got all these powerful institutions, governments everybody is using AI. It's going to get worse and worse and worse. You've got the Biden administration crafting rules on it. You've got the AI developers working with Davos, for God's sake, and nobody's even looking under the rock. Like I mean, can't we just look? I feel like that's another really easy thing to move forward.

Don Kendall

Yeah, and the last thing we want to discuss very briefly, because we only left ourselves a minute and ten seconds for this very important announcement, is basically that, like I said, we're always looking for the next thing. What's the next thing down the road that we have to pay attention to? Justin and I want to dedicate more, if not all, of our professional time to figuring those things out and disseminating the information to the people who can make a difference. So we are announcing the launch of the Henry Dearborn Liberty Network. Justin, you've got 40 seconds.

Justin Haskins

Right, mylibertynetwork.com. If you go there, you're going to be able to get access to policy tip sheets that cover a lot of the information I talked about today. In your packets for this conference, you're going to see some of these tip sheets printed out, covering the issues we talked about related to AI and ESG and AI and banking, and there are going to be a whole bunch of other topics we cover. It's not just going to be AI; believe it or not, we actually don't spend most of our time on AI, we spend it on other things. But this is going to cover a wide range of issues, and each tip sheet is not only going to lay out the problem, with footnotes so you can do more research, but it's also going to have some legislative recommendations, things that we think might be solutions.

We're not lobbyists. We're not trying to force anybody to do anything or encouraging you to vote for one bill over another or anything like that. It's just ideas. All we're trying to provide people with is ideas and information, and we're happy to take requests as well. So if you go to mylibertynetwork.com, you'll be able to see these things, and you can also reach out to us, which is on the next slide, at Heartland.org. If you just Google my name, Justin Haskins, you'll be able to find this as well. Leave Donald alone. He's a private guy and he gets really nervous when people start emailing him. No, I'm kidding, you can email him. In fact, I encourage you to bother him more than me.

Don Kendall

And I'm happy to talk about this, and Justin's happy to talk about this. It's an issue that we're very interested in. Thank you all for paying attention and focusing on this thing that we think is very important.

Justin Haskins

Thank you. Thank you very much. Appreciate it.

Rick Green

All right, everybody, we're out of time for today. By the way, if you joined us today and missed yesterday, today was the second part of that ProFamily Legislators Conference presentation, so if you want the first part, you can go to our website, where all of the archives are housed. That's wallbuilders.show. I'm Rick Green. You've been listening to the WallBuilders Show.

 

Chapter Markers

AI Bias in Socialistic Outcomes
Bias and Equity in AI Banking
Judges and AI in Criminal Justice
AI and Left Wing Influence