Saturday, April 27, 2024

VP Harris says US agencies must show their AI tools aren’t harming people’s safety or rights



U.S. federal agencies must show that their artificial intelligence tools aren’t harming the public, or stop using them, under new rules unveiled by the White House on Thursday.

“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” Vice President Kamala Harris told reporters ahead of the announcement.


Each agency by December must have a set of concrete safeguards that guide everything from facial recognition screenings at airports to AI tools that help control the electric grid or determine mortgages and home insurance.

The new policy directive being issued to agency heads Thursday by the White House’s Office of Management and Budget is part of the more sweeping AI executive order signed by President Joe Biden in October.

While Biden’s broader order also attempts to safeguard the more sophisticated commercial AI systems made by leading technology companies, such as those powering generative AI chatbots, Thursday’s directive targets AI tools that government agencies have been using for years to help with decisions about immigration, housing, child welfare and a range of other services.


For example, Harris said, “If the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses.”

Agencies that can’t apply the safeguards “must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations,” according to a White House announcement.

The new policy also carries two other “binding requirements,” Harris said. One is that federal agencies must hire a chief AI officer with the “experience, expertise and authority” to oversee all of the AI technologies used by that agency, she said. The other is that each year, agencies must publish an inventory of their AI systems that includes an assessment of the risks they might pose.


Some rules exempt intelligence agencies and the Department of Defense, which is having a separate debate about the use of autonomous weapons.

Shalanda Young, the director of the Office of Management and Budget, said the new requirements are also meant to strengthen positive uses of AI by the U.S. government.

“When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services, improve accuracy and expand access to essential public services,” Young said.

Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.