Harnessing AI is a useful way to advance modernization goals, but AI governance, including ethical considerations, data security, and compliance with federal regulations, must remain a top priority. Increased AI implementation also demands that organizations rethink how they staff, develop, and run their day-to-day operations.
There seemed to be some recognition of the need for more State intervention through regulation, for more regulatory control of standard-setting, and for more attention to be paid to testing and evaluation in the procurement context. Public procurement is an opportunity to put into practice how we will evaluate and use technology.
However, its findings are sufficiently worrying as to require a much more robust policy intervention than the proposals in the recently released White Paper 'AI regulation: a pro-innovation approach' (for discussion, see here). None of this features in the White Paper.
This opportunity allows industry to submit white papers at any time, provided they align with one of the DPA's areas of focus, including sustaining critical production, commercializing research and development investments, and scaling emerging technologies. Access the recording here.
Luke Keller, Chief Innovation Officer at the US Census Bureau, highlighted using NIST guidelines, including bias-reduction frameworks, to ensure ethical and accurate AI deployment. Risk Mitigation: Risks vary by application, and high-quality, diverse datasets are essential. Use Cases: Start small with proofs of concept to test limitations and risks.