Promote trustworthy AI and ML and identify best practices for scaling your AI


By John P. Desmond, AI Trends Editor

Advancing trustworthy AI and machine learning to mitigate agency risk is a priority for the U.S. Department of Energy (DOE), and identifying best practices for implementing AI at scale is a priority for the U.S. General Services Administration (GSA).

Those were the takeaways from two sessions at AI World Government, held live and virtually last week in Alexandria, Virginia.

Pamela Isom, Director of AI and Technology Office, DOE

Pamela Isom, director of the DOE's AI and Technology Office, spoke about advancing trustworthy AI and ML techniques to mitigate agency risk, and has been involved in expanding the use of AI across the agency for several years. With an emphasis on applied AI and data science, she oversees risk-mitigation policies and standards, and works on applying AI to save lives, combat fraud and strengthen cybersecurity infrastructure.

She highlighted the need for AI projects to be part of a strategic portfolio. "My office is there to promote a holistic view on AI and to mitigate risk by bringing us together to address the challenges," she said. The effort is supported by the DOE's AI and Technology Office, which is focused on transforming the DOE into a world-leading AI enterprise by accelerating AI research, development, delivery and adoption.

"I'm telling my organization to keep in mind that you can have lots of data, but it may not be representative," she said. Her team hears examples from international partners, industry, academia and other institutions about the trustworthiness of outcomes from systems that incorporate AI.

"We know that AI is disruptive, in trying to do what humans do and doing it better," she said. "It goes beyond human capability. It goes beyond data in spreadsheets. It can tell you what to do next before you have thought about it yourself. It's very powerful."

That power means you need to pay close attention to your data sources. "AI is vital to the economy and to national security. We need accuracy. We need reliable algorithms. We don't need bias," Isom said. "And remember that once you deploy a model, you need to monitor its output over the long term."

Executive Orders Guide the AI Work

Executive Order 14028, issued in May of this year, detailing measures to address cybersecurity for government agencies, and Executive Order 13960, issued in December 2020, promoting the use of trustworthy AI in the federal government, provide valuable guides for her work.

To manage the risks of AI development and deployment, Isom has created an AI risk management playbook. It provides guidance on system features and mitigation techniques, includes a filter of ethical and trustworthy principles that is applied across AI lifecycle stages and risk types, and is tied to the related executive orders.

The playbook also walks through examples, such as a result that came in at 80% accuracy when 90% was required. "Something's wrong there," Isom said. "The playbook helps us look at these kinds of issues, what we can do to mitigate risk, and what factors to consider when designing and building a project."
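Isom's 80%-versus-90% example amounts to a simple evaluation gate: measure a model's accuracy against a stated requirement before it ships. A minimal sketch of that idea is below; the function names and the sample data are hypothetical, not from the DOE playbook.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

def meets_requirement(predictions, labels, required=0.90):
    """Gate a model release on a required accuracy threshold.

    Returns (passed, measured_accuracy) so reviewers can log the gap
    between what the model delivers and what the project requires.
    """
    acc = accuracy(predictions, labels)
    return acc >= required, acc

# Hypothetical evaluation set: the model gets 8 of 10 right (80%),
# which fails a 90% requirement -- the playbook's "something's wrong" case.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
passed, acc = meets_requirement(preds, labels, required=0.90)
# acc == 0.8, passed == False
```

The point is less the arithmetic than the habit: the required threshold is set up front by the project, and the deployed model is checked against it rather than accepted on faith.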

The playbook is for now internal to the DOE, and the agency is weighing an external version as a next step. "Then we'll share it with other federal agencies," she said.

GSA best practices for scaling AI projects are outlined

Anil Chaudhry, Director of Federal AI Implementation, AI Center of Excellence (COE), GSA

Anil Chaudhry, director of federal AI implementation at the GSA's AI Center of Excellence (COE), spoke about best practices for implementing AI at scale. He has over 20 years of experience in technology delivery, operations and program management in the defense, intelligence and national security sectors.

The COE's mission is to accelerate technology modernization across government, improve the public's experience and increase operational efficiency. "Our business model is to partner with industry subject matter experts to solve problems," Chaudhry said, adding, "We're not in the business of taking industry solutions and simply replicating them."

Because the federal government is heavily involved in AI development, the COE provides recommendations to partner agencies and works alongside them to implement AI systems. "For AI, the government landscape is vast. Every federal agency has some kind of AI project underway right now," he said, and the maturity of agencies' AI experience varies widely.

Typical use cases he sees include AI focused on increasing speed and efficiency, cutting and avoiding costs, improving response times, and increasing quality and compliance. As one best practice, he recommended that agencies vet industry partners' commercial experience with the kinds of large datasets they will encounter in government.

"We're talking petabytes here, of structured and unstructured data," Chaudhry said. [Ed. Note: A petabyte is 1,000 terabytes.] "We also ask industry partners about their strategies and processes for macro- and micro-trend analysis, their experience with bot deployments such as robotic process automation, and how they sustain performance in the face of data drift."
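The data-drift question Chaudhry raises can be made concrete with even a crude check: compare the distribution of live inputs against the baseline the model was trained on, and flag when they diverge. The sketch below uses a simple mean-shift test with hypothetical numbers; production systems typically use per-feature statistical tests such as the population stability index or Kolmogorov-Smirnov.

```python
import statistics

def mean_shift_drift(baseline, live, threshold=2.0):
    """Flag drift when the live mean moves more than `threshold`
    baseline standard deviations away from the baseline mean.

    A deliberately simple univariate check, meant only to illustrate
    the monitoring habit, not a production drift detector.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        # Any movement off a constant baseline counts as drift.
        return statistics.mean(live) != mu
    z = abs(statistics.mean(live) - mu) / sigma
    return z > threshold

# Hypothetical sensor readings: a stable window and a shifted one.
baseline = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9, 10.3, 10.0]
stable   = [10.1, 10.0, 9.9, 10.2]
shifted  = [13.0, 13.4, 12.8, 13.1]

mean_shift_drift(baseline, stable)   # False: inputs look like training data
mean_shift_drift(baseline, shifted)  # True: inputs have drifted
```

Running a check like this on a schedule against fresh production data is one way an agency could verify a vendor's claim that its models remain sustainable as the underlying data shifts.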

He also asks potential industry partners to describe the AI talent on their teams, or the talent they can access. If a company is weak on AI talent, Chaudhry asks, "If you buy something, how will you know you got what you wanted, when you have no way to evaluate it?"

Another best practice in implementing AI, he said, is defining how the workforce will be trained in AI tools, techniques and practices, and how employees will grow and mature, particularly through AI projects.

Another best practice Chaudhry recommends is vetting an industry partner's access to financial capital. "AI is a field where the flow of capital is very volatile. You can't forecast that spending X dollars this year will get you where you want to be," he said. AI development teams may need to explore alternative hypotheses, or clean up data that turns out not to be transparent or to be potentially biased.

Another best practice is access to logistical capital, such as the data collected by sensors for AI IoT systems. "AI requires an enormous volume of pristine, timely data, and direct access to that data is important," Chaudhry said. He recommended having data-sharing agreements in place with the organizations connected to an AI system. "You may not need the data right away, but having access to it so you can use it immediately, and having thought through the privacy issues before you need it, is a good habit for scaling AI programs," he said.

A final best practice is planning for physical infrastructure, such as data center space. "When you're in a pilot, you need to know how much capacity to reserve in your data center, and how many endpoints you'll need to manage," Chaudhry said.

Learn more at AI World Government.
