This state is allowing AI to help rule on its unemployment claims


[Image: The Google logo on a dark phone screen.]

Nevada will become the first state to pilot a generative AI system designed to help rule on unemployment claims, marketed as a way to speed up appeals and tackle the nation’s overwhelming backlog of cases. It’s a risky, first-of-its-kind experiment in integrating AI into higher-level decision making.

Google is behind the program’s tech, which runs transcripts of unemployment appeals hearings through Google’s AI servers and analyzes them to produce claim decisions and benefit recommendations for “human referees,” Gizmodo reported. Nevada’s Board of Examiners approved the contract on behalf of the state’s Department of Employment, Training and Rehabilitation (DETR) in July, despite broader legal and political pushback against integrating AI into bureaucracy.
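Nevada has not published technical details of the system, but as a rough, hypothetical sketch of the kind of transcript-to-recommendation call the article describes, a request through Google's Vertex AI Python SDK could look something like the following. The project ID, model choice, and prompt wording here are illustrative assumptions, not details from the contract.

```python
# Hypothetical sketch only: Nevada's actual pipeline is not public.
# Assumes the Vertex AI Python SDK (pip install google-cloud-aiplatform)
# and a Google Cloud project with Vertex AI enabled.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="example-detr-project", location="us-central1")  # placeholder project ID

model = GenerativeModel("gemini-1.5-pro")  # assumed model; the contract does not name one


def draft_recommendation(hearing_transcript: str) -> str:
    """Send an appeals-hearing transcript to the model and return a draft
    recommendation intended for review by a human referee."""
    prompt = (
        "You are assisting an unemployment appeals referee. Read the hearing "
        "transcript below and draft a recommended decision, citing the relevant "
        "testimony. A human referee will review, edit, or reject this draft.\n\n"
        f"TRANSCRIPT:\n{hearing_transcript}"
    )
    response = model.generate_content(prompt)
    return response.text
```

In a setup like this, the model output is only a draft; the human referee remains the decision maker of record, which is the arrangement DETR describes below.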

Christopher Sewell, director of DETR, told Gizmodo that humans will still be heavily involved in unemployment decision making. “There’s no AI [written decisions] that are going out without having human interaction and that human review. We can get decisions out quicker so that it actually helps the claimant,” said Sewell.

But Nevada legal groups and scholars have argued that any time saved by gen AI would be canceled out by the time needed to conduct a thorough human review of each claim decision. Many have also raised concerns that private, personal information (including tax information and Social Security numbers) could leak through Google’s Vertex AI Studio, even with safeguards. Others have reservations about the type of AI itself, known as retrieval-augmented generation (RAG), which has been found to produce incomplete or misleading answers to prompts.
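For context on how RAG works, the self-contained Python sketch below uses invented policy snippets and naive keyword scoring in place of a real retriever. It illustrates the pattern behind the criticism: the generation step only sees whatever the retrieval step returns, so gaps in retrieval become gaps or distortions in the answer.

```python
# Minimal RAG sketch with invented data, for illustration only.
# Real systems use vector embeddings and a hosted model; the point here is
# that the model's context is limited to whatever the retriever surfaces.

POLICY_SNIPPETS = [
    "Claimants who quit without good cause are generally ineligible for benefits.",
    "Good cause may include unsafe working conditions documented by the claimant.",
    "Benefits may be reduced if the claimant received severance pay for the same weeks.",
]


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval: rank documents by shared words with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(question: str, context: list[str]) -> str:
    """The generator is handed only the retrieved context plus the question."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using ONLY this context:\n{joined}\n\nQuestion: {question}"


if __name__ == "__main__":
    question = "Is a claimant who quit over unsafe conditions eligible?"
    context = retrieve(question, POLICY_SNIPPETS)
    print(build_prompt(question, context))
    # If retrieval misses the "good cause" snippet, the answer will be
    # incomplete or misleading, and nothing in the output flags that.
```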

Across the country, AI-based tools have been quietly rolled out or tested at various social services agencies as gen AI becomes further embedded in the administrative ecosystem. In February, the federal Centers for Medicare and Medicaid Services (CMS) ruled against using AI (including generative AI and algorithms) as a decision maker in determining patient care or coverage. The ruling followed a lawsuit from two patients who alleged their insurance provider used a “fraudulent” and “harmful” AI model (known as nH Predict) that overrode physician recommendations.

Earlier this year, Axon, a police technology and weapons manufacturer, introduced Draft One, a first-of-its-kind tool built on a generative large language model (LLM) that helps law enforcement write “faster, higher quality” reports. Still in a trial period, the technology has already sounded alarms, raising concerns about the AI’s ability to parse the nuance of tense police interactions and about the potential to further erode transparency in policing.

