States Rush to Bridge Oversight Gap as AI Becomes Integral to Daily Life
Even though AI systems have become integral to daily life and some have been shown to favor certain races, genders, or income levels, the government still doesn’t keep a close eye on them.
Lawmakers in at least seven states are taking big steps to control bias in AI, stepping in because Congress has yet to act. Their proposals open a conversation, decades in the making, about how to balance the well-known risks of this new technology against its less obvious benefits.
Whether the lawmakers succeed will depend on how well they handle complicated problems and negotiate with an industry worth hundreds of billions of dollars and growing at lightning speed.
Amazon stopped using a hiring algorithm almost ten years ago after it was found to favor male candidates. The system evaluated new resumes by comparing them to those of current employees, most of whom were men.
The algorithm downgraded resumes that included the word “women’s” or mentioned women’s organizations, because women were underrepresented in the historical data it learned from, even though it was never told the applicants’ genders.
Christine Webber, a lawyer for class-action plaintiffs, says that AI systems used to score rental applications have been biased against African American and Hispanic applicants.
What else does the AI have in store for us?
Court papers describe how a third-party screening service turned down the application of Mary Louis, a Black woman, to rent an apartment in Massachusetts. She was also told she had no way to challenge the result of the tenant screening.
BSA The Software Alliance says that only 12 of the nearly 200 AI-related bills introduced in state legislatures last year became law, and those mainly targeted narrower aspects of AI, such as deepfakes and chatbots. More than 400 AI-related bills are under consideration this year.
States have been slow to put in place protections against AI’s tendency toward bias. These “automated decision tools” are routinely used to make important choices, yet most people don’t know it.
According to the Equal Employment Opportunity Commission, 83% of businesses use hiring algorithms, as do 99% of Fortune 500 companies. Most Americans, however, are unaware of these tools or of how they might be biased.
Laws that keep up with technology benefit consumers: they build trust and ensure businesses know their obligations. But efforts to stop AI discrimination have been weak so far; proposals have already died in committee in Washington state and California.
California Assemblywoman Rebecca Bauer-Kahan has revised her bill, which failed last year despite backing from tech companies such as Workday and Microsoft.
Besides Vermont, the states of Colorado, Rhode Island, Illinois, Connecticut, and Virginia are also likely to pass laws in this area. While these measures represent progress, it remains unclear how much impact they will have and how effectively they can detect bias.
Requiring bias audits, which test whether an AI system discriminates, and making the results public would make discrimination easier to detect. But the industry opposes this, arguing it would force companies to reveal trade secrets.
Most legislative proposals do not require AI systems to be tested regularly, yet they mark the first step toward politicians and voters reckoning with AI as a permanent technology.