
Generative AI could raise questions for federal records laws

FedScoop

But the agency’s provisional approval of a few generative AI products (which include ChatGPT, Bing Chat, Claude 2, DALL·E 2, and Grammarly, per a privacy impact assessment) calls for closer examination with regard to federal transparency. OpenAI did not respond to a request for comment.


Procuring AI without understanding it. Way to go?

How to Crack a Nut

The one finding that should definitely not go unnoticed is that, according to DRCF, ‘Buyers can lack the technical expertise to effectively scrutinise the [algorithmic systems] they are procuring, whilst vendors may limit the information they share with buyers’ (at 9).


Trending Sources


OMB Releases Final Guidance Memo on the Government’s Use of AI

Government Contracts Legal Forum

The AI impact assessment must: state the intended purpose of the AI and its expected benefit; identify the potential risks of using the AI and any mitigation measures beyond the minimum practices outlined in the memo; and evaluate the quality of the data used in the AI’s design and development.


Racial Equity Budgeting: A Powerful Tool to Reduce Inequalities

Inter-American Development Bank

This tool helps agencies assess how their budget might benefit and/or burden local communities, particularly Black, Indigenous and People of Color communities, and identify programs and services with the greatest capacity to move the needle on closing racial equity gaps. [2] Invest in capacity building.


Anthropic eyes FedRAMP accreditation in quest to sell more AI to government

FedScoop

“We’ve built a lot of capacity within DOE to just conduct this kind of red-teaming and evaluation. We have our societal impact research team that develops, implements and [conducts] evaluations to look for things like bias, discrimination, and other potential negative implications of how the technology could be used.”