Best Practices for Building an AI Development Platform in Government

John P. Desmond, Editor of AI Trends

The AI stack, as defined by Carnegie Mellon University, is the basis of the approach the U.S. Army is taking for its AI development platform effort, according to Isaac Faber, Chief Data Scientist at the U.S. Army AI Integration Center, speaking at the AI World Government event held in person last week in Alexandria, Virginia.

Isaac Faber, Chief Data Scientist, US Army AI Integration Center

“One of the biggest issues I have found in moving the Army from legacy systems through digital modernization is the difficulty of abstracting away the differences in applications,” he said. “The most important part of digital transformation is the middle layer, the platform that makes it easier to run on the cloud or on a local computer.” The goal is to be able to move a software platform to another platform with the same ease with which a new smartphone carries over a user’s contacts and history.

Ethics cuts across all layers of the AI application stack, which places the planning stage at the top, followed by decision support, modeling, machine learning, data management at scale, and the device layer or platform at the bottom.

“I am advocating that we think of the stack as a core infrastructure and a way for applications to be deployed, not to be siloed in our approach,” he said. “We need to create a development environment for a globally distributed workforce.”

The Army has been working on a Common Operating Environment Software (COES) platform, first announced in 2017, a design for DOD work that is scalable, agile, modular, portable, and open. “It is suitable for a broad range of AI projects,” Faber said. As for executing the effort, “The devil is in the details,” he said.

The Army is working with CMU and private companies on a prototype platform, including Visimo of Coraopolis, Pennsylvania, which provides AI development services. Faber said he prefers to collaborate with private industry rather than buying products off the shelf. “The problem with that is that you are stuck with the value provided by that one vendor, which is usually not designed for the challenges of DOD networks,” he said.

The Army Trains a Range of Tech Teams in AI

The Army is engaged in AI workforce development efforts for several teams, including: leadership professionals with graduate degrees; technical staff, who are put through training to get certified; and AI users.

Tech teams in the Army have different areas of focus, including general-purpose software development, operational data science, deployment including analytics, and machine learning operations, such as the large team required to build a computer vision system. “As people come through the workforce, they need a place to collaborate, build, and share,” Faber said.

Project types include diagnostic, which might combine streams of historical data, predictive, and prescriptive. “AI is at the far end; you don’t start with that,” Faber said. The developer has to solve three problems: data engineering, the AI development platform, which he called the “green bubble,” and the deployment platform, which he called the “red bubble.”

“These are mutually exclusive and all interconnected. Those teams of different people need to coordinate programmatically. Usually a good project team will have people from each of those bubble areas,” he said. “If you have not done this yet, do not try to solve the green bubble problem. It makes no sense to pursue AI until you have an operational need.”

Asked by a participant which group is the most difficult to reach and train, Faber said without hesitation, “The hardest to reach are the executives. They need to learn what the value is to be provided by the AI ecosystem. The biggest challenge is how to communicate that value,” he said.

Panel Discusses the AI Use Cases with the Most Potential

In a panel discussion on the foundations of emerging AI, moderator Curt Savoie, program director for global smart city strategies at market research firm IDC, asked which emerging AI use cases have the most potential.

“I would point to decision advantage at the edge, supporting pilots and operators, and decision advantage at the back, for mission and resource planning,” said Jean-Charles Lede, autonomy technology advisor for the US Air Force Office of Scientific Research.

Krista Kinnard, Chief of Emerging Technology, Department of Labor

“Natural language processing is an opportunity to open the doors to AI at the Department of Labor,” said Krista Kinnard, the department’s Chief of Emerging Technology. “Ultimately, we are dealing with data about people, programs, and organizations.”

Savoie asked what big risks and dangers the panelists see when implementing AI.

Anil Chaudhry, Director of Federal AI Implementations at the General Services Administration (GSA), said that in a typical IT organization using traditional software development, the impact of a developer’s decisions is local and limited. With AI, “you have to think about the impact on a whole class of people, constituents, and stakeholders. With a simple change to an algorithm, you could be delaying benefits to millions of people or making incorrect inferences at scale. That is the most important risk,” he said.

He said he asks his contract partners to have “humans in the loop and humans on the loop.”

Kinnard seconded this, saying, “We have no intention of removing humans from the loop. It is really about empowering people to make better decisions.”
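
As a concrete illustration of the two terms, the minimal Python sketch below routes low-confidence model outputs to a human reviewer (“in the loop”) and logs every decision for later oversight (“on the loop”). The `Decision` type, the `decide` function, and the 0.90 confidence threshold are illustrative assumptions, not anything the panelists described.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

REVIEW_THRESHOLD = 0.90  # assumed cutoff; would be tuned to mission risk

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def decide(
    scored: Tuple[str, float],        # (label, confidence) from any model
    ask_human: Callable[[str], str],  # callback to a human reviewer
    audit_log: List[Decision],
) -> Decision:
    label, confidence = scored
    if confidence < REVIEW_THRESHOLD:
        # Human in the loop: a person makes the final call on uncertain cases.
        decision = Decision(ask_human(label), confidence, "human")
    else:
        # Human on the loop: the model decides; the log supports later review.
        decision = Decision(label, confidence, "model")
    audit_log.append(decision)  # every decision stays auditable
    return decision

# Example: a claim scored at 0.62 confidence is routed to a reviewer.
log: List[Decision] = []
print(decide(("approve", 0.62), lambda suggested: "deny", log))
# Decision(label='deny', confidence=0.62, decided_by='human')
```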

She emphasized the importance of monitoring AI models after they are deployed. “Models can drift as the underlying data changes,” she said. “So you need a level of critical thinking, not only to do the task, but to assess whether what the AI model is doing is acceptable.”
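
The panel named the practice of drift monitoring but no specific method. As one hedged example, the sketch below flags a numeric input feature whose live distribution has moved away from its training distribution, using a two-sample Kolmogorov-Smirnov test from SciPy; the function name, threshold, and choice of test are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_feature: np.ndarray,
                live_feature: np.ndarray,
                alpha: float = 0.01) -> bool:
    """Flag a feature whose live distribution no longer matches training."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha  # small p-value: distributions likely differ

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
live = rng.normal(loc=0.4, scale=1.0, size=5_000)   # shifted live feature
print(drift_alert(train, live))  # True: the underlying data has changed
```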

She added, “We have built out use cases and partnerships across the government to make sure we are implementing responsible AI. We will never replace people with algorithms.”

Lede of the Air Force said, “We often have use cases where the data does not exist. We cannot explore 50 years of war data, so we use simulation. The risk is in teaching an algorithm in simulation; there is a ‘sim-to-real’ gap, and you are never sure how the algorithms will map to the real world.”

Chaudhry emphasized the importance of a testing strategy for AI systems. He warned of developers who “get enamored with a tool and forget the purpose of the exercise.” He recommended that development managers design in independent verification and validation. “Your testing, that is where you have to focus your energy as a leader. Before committing resources, the leader needs an idea in mind of how to justify whether the investment was a success.”
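
One way to make Chaudhry’s advice concrete is to write the success criteria as an executable check before development begins, evaluated on a holdout set controlled by an independent verification and validation team. The sketch below is a hypothetical version of such an acceptance gate; the metric names and thresholds are invented for illustration.

```python
# Thresholds agreed with leadership before committing resources (invented here).
ACCEPTANCE_CRITERIA = {"accuracy": 0.92, "false_positive_rate": 0.05}

def evaluate(predictions, labels):
    """Compute the metrics named in the acceptance criteria."""
    tp = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(predictions, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(predictions, labels))
    return {
        "accuracy": (tp + tn) / len(labels),
        "false_positive_rate": fp / max(fp + tn, 1),
    }

def investment_was_a_success(predictions, labels) -> bool:
    metrics = evaluate(predictions, labels)
    return (metrics["accuracy"] >= ACCEPTANCE_CRITERIA["accuracy"]
            and metrics["false_positive_rate"]
                <= ACCEPTANCE_CRITERIA["false_positive_rate"])

# Holdout data held back by an independent verification and validation team.
holdout_labels = [1, 0, 1, 1, 0, 0, 1, 0]
model_output   = [1, 0, 1, 0, 0, 0, 1, 0]
print(investment_was_a_success(model_output, holdout_labels))
# False: accuracy is 0.875, below the 0.92 bar agreed up front
```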

Lede of the Air Force spoke about the importance of explainability. “I am a technologist. I don’t do laws. The ability of the AI function to explain itself in a way a human can interact with is important. The AI should be a partner we have a dialogue with, instead of the AI coming up with a conclusion we have no way of verifying,” he said.

Learn more at AI World Government.
