By John P. Desmond, AI Trends Editor
Two experiences of how AI developers within the federal government are pursuing AI accountability practices were outlined at the AI World Government event held this week in Alexandria, Va.
Taka Ariga, chief data scientist and director at the US Government Accountability Office, described an AI accountability framework he uses within his agency and plans to make available to others.
And Bryce Goodman, chief strategist for AI and machine learning at the Defense Innovation Unit (DIU), a unit of the Department of Defense founded to help the US military make faster use of emerging commercial technologies, described work in his unit to translate principles of AI development into terms that an engineer can apply.
Ariga, the first chief data scientist appointed to the US Government Accountability Office and director of GAO's Innovation Lab, discussed the AI Accountability Framework he helped to develop by convening a forum of experts from government, industry, and nonprofits, as well as federal inspector general officials and AI experts.
"We are adopting an auditor's perspective on the AI accountability framework," Ariga said. "GAO is in the business of verification."
The effort to produce a formal framework began in September 2020 and included participants, 60% of them women and 40% underrepresented minorities, working over two days. The effort was spurred by a desire to ground the AI accountability framework in the reality of an engineer's day-to-day work. The resulting framework was first published in June as what Ariga described as "version 1.0."
Seeking to Bring a "High-Altitude Posture" Down to Earth
"We found the AI accountability framework has a very high-altitude posture," Ariga said. "These are laudable ideals and aspirations, but what do they mean to the day-to-day AI practitioner? There is a gap, while we see AI proliferating across the government."
"We landed on a lifecycle approach," he said, which proceeds through the stages of design, development, deployment and continuous monitoring. The effort stands on four "pillars": governance, data, monitoring and performance.
Governance reviews what the organization has put in place to oversee its AI efforts. "The chief AI officer might be in place, but what does it mean? Can the person make changes? Is it multidisciplinary?" At a system level within this pillar, the team reviews individual AI models to see whether they were "purposely deliberated."
For the data pillar, his team examines how the training data was evaluated, including how representative it is and whether it is functioning as intended.
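GAO's framework poses the data questions at the level of an audit, not a toolkit. As a rough illustration of the kind of representativeness check such an audit might prompt, the short Python sketch below compares a training set's composition against a reference population; the "group" column and the reference shares are assumptions made for the example, not anything GAO has published.

```python
# Minimal sketch: compare the composition of a training set against a
# reference population to flag under-represented groups.
# The "group" column and the reference shares are hypothetical.
import pandas as pd

def representation_gap(train_df: pd.DataFrame, population_shares: dict, column: str = "group") -> pd.DataFrame:
    """Return each group's share in the training data versus the reference population."""
    observed_shares = train_df[column].value_counts(normalize=True)
    rows = []
    for group, expected in population_shares.items():
        observed = float(observed_shares.get(group, 0.0))
        rows.append({"group": group,
                     "expected_share": expected,
                     "observed_share": observed,
                     "gap": observed - expected})
    return pd.DataFrame(rows).sort_values("gap")

# Example with made-up numbers: group "C" is under-represented relative to its 10% share.
train = pd.DataFrame({"group": ["A"] * 700 + ["B"] * 250 + ["C"] * 50})
print(representation_gap(train, {"A": 0.60, "B": 0.30, "C": 0.10}))
```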
For the performance pillar, the team considers the "societal impact" the AI system will have in deployment, including whether it risks violating civil rights. "Auditors have a long-standing track record of evaluating equity. We grounded the evaluation of AI to a proven system," Ariga said.
Emphasizing the importance of continuous monitoring, he said, "AI is not a technology you deploy and forget." "We are preparing to continuously monitor for model drift and the fragility of algorithms, and we are scaling the AI appropriately." The evaluations will determine whether the AI system continues to meet the need "or whether a sunset is more appropriate," Ariga said.
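Ariga described the monitoring expectation, not a specific technique. One minimal way a team might implement such a drift check is sketched below, using a two-sample Kolmogorov-Smirnov test on a single input feature; the feature values, sample sizes, and alert threshold are all assumptions for illustration.

```python
# Minimal sketch of continuous monitoring for input drift: compare a production
# feature's distribution against its training-time baseline.
# The feature, sample sizes, and alert threshold are assumptions for illustration.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline: np.ndarray, recent: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Flag drift when the two samples are unlikely to come from the same distribution."""
    _statistic, p_value = ks_2samp(baseline, recent)
    return p_value < p_threshold

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature values seen at training time
recent = rng.normal(loc=0.4, scale=1.0, size=1_000)    # shifted values seen in production
if drift_alert(baseline, recent):
    print("Feature drift detected; trigger a re-evaluation or retraining review.")
```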
He is part of the discussion with NIST on a government-wide AI accountability framework. "We don't want an ecosystem of confusion," Ariga said. "We want a whole-government approach. We feel this is a useful first step in pushing high-level ideas down to an altitude meaningful to the practitioners of AI."
DIU Assesses Whether Proposed Projects Meet Ethical AI Guidelines
At DIU, Goodman is involved in a similar effort to develop guidelines for developers of AI projects within the government.
Projects Goodman has been involved with include implementations of AI for humanitarian assistance and disaster response, predictive maintenance, counter-disinformation, and predictive health. He heads the Responsible AI Working Group. He is a faculty member of Singularity University, has a wide range of consulting clients from inside and outside the government, and holds a PhD in AI and Philosophy from the University of Oxford.
The DOD in February 2020 adopted five areas of Ethical Principles for AI after 15 months of consulting with AI experts in commercial industry, government, academia and the American public. The areas are: Responsible, Equitable, Traceable, Reliable and Governable.
"Those are well-conceived, but it's not obvious to an engineer how to translate them into a specific project requirement," Goodman said in a presentation on Responsible AI Guidelines at the AI World Government event. "That's the gap we are trying to fill."
Before the DIU even considers a project, they run through the ethical principles to see whether it passes muster. Not all projects do. "There needs to be an option to say the technology is not there or the problem is not compatible with AI," he said.
All project stakeholders, including commercial vendors and those within the government, need to be able to test and validate and to go beyond minimum legal requirements to meet the principles. "The law is not moving as fast as AI, which is why these principles are important," he said.
Also, collaboration is going on across the government to ensure values are being preserved and maintained. "Our intention with these guidelines is not to try to achieve perfection, but to avoid catastrophic consequences," Goodman said. "It can be difficult to get a group to agree on what the best outcome is, but it's easier to get the group to agree on what the worst-case outcome is."
The DIU guidelines, along with case studies and supplemental materials, will be published on the DIU website "soon," Goodman said, to help others leverage the experience.
Questions DIU Asks Before Development Starts
The first step in the guidelines is to define the task. "That's the single most important question," he said. "Only if there is an advantage should you use AI."
Next is the benchmark, which needs to be set up front so the team knows whether the project has delivered.
Next, he evaluates ownership of the candidate data. "Data is critical to the AI system and is the place where a lot of problems can exist," Goodman said. "We need a certain contract on who owns the data. If that is ambiguous, it can lead to problems."
Next, Goodman's team wants a sample of the data to evaluate. Then, they need to know how and why the information was collected. "If consent was given for one purpose, we cannot use the data for another purpose without re-obtaining consent," he said.
Next, the team asks whether the responsible stakeholders have been identified, such as pilots who could be affected if a component fails.
Next, the responsible mission owners must be identified. "We need a single individual for this," Goodman said. "Often there is a trade-off between the performance of an algorithm and its explainability. We might have to decide between the two. Those kinds of decisions have an ethical component and an operational component."
Finally, the DIU team requires a process for rolling back if things go wrong. "We need to be mindful about abandoning the previous system," he said.
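The guidelines treat rollback as a process requirement rather than code. Purely as an illustration, the sketch below keeps a previous model registered so a team can switch back when a pre-agreed benchmark is violated; the version names, stand-in models, and threshold are hypothetical.

```python
# Minimal sketch of a rollback path: keep the previous model registered and
# switch back to it when the new model violates a pre-agreed benchmark.
# The version names, stand-in models, and threshold are hypothetical.
class ModelRegistry:
    def __init__(self):
        self.versions = {}   # version name -> model callable
        self.active = None   # name of the version currently serving

    def register(self, name, model):
        self.versions[name] = model

    def promote(self, name):
        self.active = name

    def rollback(self, previous_name):
        # Only possible because the previous system was never discarded.
        self.active = previous_name

registry = ModelRegistry()
registry.register("baseline_v1", lambda features: 0)
registry.register("candidate_v2", lambda features: 1)
registry.promote("candidate_v2")

observed_accuracy = 0.62   # made-up post-deployment measurement
agreed_benchmark = 0.80    # benchmark set before development began
if observed_accuracy < agreed_benchmark:
    registry.rollback("baseline_v1")
print("Active model:", registry.active)   # -> baseline_v1
```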
Once all these questions are answered in a satisfactory way, the team advances to the development stage.
Among the lessons learned, Goodman said, "Metrics are key. Simply measuring accuracy may not be adequate. We need to be able to measure success."
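Goodman did not tie the point to particular tooling. As a hedged illustration of why accuracy alone can mislead, the sketch below scores an imbalanced, made-up set of labels with scikit-learn and shows a model that looks accurate while missing every positive case.

```python
# Minimal sketch: on an imbalanced problem, accuracy alone can look strong while
# the model misses every case that matters. Labels and predictions are made up.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0] * 95 + [1] * 5   # 5% positive class
y_pred = [0] * 100            # a model that never predicts the positive class

print("accuracy :", accuracy_score(y_true, y_pred))                     # 0.95
print("precision:", precision_score(y_true, y_pred, zero_division=0))   # 0.0
print("recall   :", recall_score(y_true, y_pred, zero_division=0))      # 0.0
print("f1       :", f1_score(y_true, y_pred, zero_division=0))          # 0.0
```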
Also, fit the technology to the task. "High-risk applications require low-risk technology. And when potential harm is significant, we need to have high confidence in the technology," he said.
Another lesson learned is to set expectations with commercial vendors. "We need vendors to be transparent," he said. "When someone says they have a proprietary algorithm they cannot tell us about, we are very wary. We view the relationship as a collaboration. It's the only way we can ensure the AI is developed responsibly."
Lastly, "AI is not magic. It will not solve everything. It should only be used when necessary."
Learn more at AI World Government, at the Government Accountability Office, at the AI Accountability Framework and at the Defense Innovation Unit site.