John P. Desmond, Editor of AI Trends
Engineers tend to see things in unambiguous terms, which some might call black-and-white terms, such as a choice between right or wrong and good and bad. The consideration of ethics in AI is highly nuanced, with vast gray areas, making it challenging for AI software engineers to apply it in their work.
That was a takeaway from a session on the future of standards and ethical AI at the AI World Government conference, held in person and virtually this week in Alexandria, Virginia.
An overall impression from the conference is that the discussion of AI and ethics is happening in virtually every quarter of AI across the vast federal government, and the consistency of the points being made across all these different and independent efforts stood out.
“We engineers often think of ethics as a fuzzy thing that no one has really explained,” said Beth-Anne Schuelke-Leech, an associate professor of engineering management and entrepreneurship at the University of Windsor, Ontario, Canada, speaking in the Future of Ethical AI session. “It can be hard for an engineer looking for solid constraints to be told to be ethical. That gets really complicated, because we don’t know what it really means.”
Schuelke-Leech began her career as an engineer before completing a PhD in public policy. “I have a PhD in social science and am involved in AI projects, but I have been pulled back into the engineering world, based in a mechanical engineering faculty,” she said.
An engineering project has a goal that describes its purpose, a set of needed features and functions, and a set of constraints, such as budget and timeline. “The standards and regulations become part of the constraints,” she said. “If I know I have to comply with it, I will do that. But if you tell me it’s a good thing to do, I may or may not adopt it.”
Schuelke-Leech also chairs the IEEE Society’s committee on the social implications of technology standards. She commented, “Voluntary compliance standards, such as those from the IEEE, are essential for people in the industry to come together and say, this is what we think we should do as an industry.”
Some standards, such as those around interoperability, do not have the force of law, but engineers comply with them so that their systems will work. Other standards are described as good practices but are not required to be followed. “Whether it helps me achieve my goal or hinders me from reaching it is how the engineer looks at it,” she said.
The pursuit of AI ethics described as “messy and difficult”
In the session with Schuelke-Leech, Sara Jordan, senior counsel with the Future of Privacy Forum, works on the ethical challenges of AI and machine learning and is an active member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “Ethics is messy and difficult, and it is context-laden. We have a proliferation of theories, frameworks, and constructs,” she said. “The practice of ethical AI will require repeatable, rigorous thinking in context.”
Schuelke-Leech said, “Ethics is not an end result; it is a process that is followed. But I am also looking for someone to tell me what I need to do to do my job, how to be ethical, what rules I am supposed to follow, to take away the ambiguity.”
“Engineers shut down when you get into funny words that they don’t understand, like ‘ontological.’ They have been taking math and science since they were 13 years old,” she said.
She has found it difficult to get engineers involved in attempts to draft standards for ethical AI. “Engineers are missing from the table,” she said. “The debates about whether we can get to 100% ethical are conversations engineers do not have.”
She said, “If their managers tell them to figure it out, they will. We need to help the engineers cross the bridge halfway. It is essential that social scientists and engineers don’t give up on this.”
Leaders’ panel discussed the integration of ethics into AI development practices
The topic of ethics in AI is coming up more in the curriculum of the US Naval War College of Newport, R.I., which was established to provide advanced study for US Navy officers and now educates leaders of all services. Ross Coffey, a military professor of national security affairs at the institution, joined a leaders’ panel on AI, ethics, and smart policy at AI World Government.
“The ethical literacy of students increases over time as they work through these ethical issues, which makes it an urgent matter, because it will take a long time,” Coffey said.
Panel member Carol Smith, a senior research scientist at Carnegie Mellon University who studies human-machine interaction, has been involved in integrating ethics into AI systems development since 2015.
“My interest is in understanding what kinds of interactions we can create in which humans appropriately trust the system they are working with, neither over-trusting nor under-trusting it,” she said. “In general, people have higher expectations of systems than they should.”
As an example, she cited the Tesla Autopilot features, which implement some self-driving capabilities but not complete autonomy. “People assume the system can do a much broader set of activities than it was designed to do. Helping people understand the limitations of a system is important. Everyone needs to understand the expected outcomes of a system and what some of the mitigating circumstances might be,” she said.
Panel member Taka Ariga, the first chief data scientist appointed to the US Government Accountability Office and director of the GAO’s Innovation Lab, sees a gap in the AI literacy of the young workforce coming into the federal government. “Data scientist training does not always include ethics. Accountable AI is a laudable construct, but I’m not sure everyone buys into it. We need their responsibility to go beyond the technical aspects and extend to accountability to the end users we are trying to serve,” he said.
Panel moderator Alison Brooks, PhD, research vice president of smart cities and communities at the market research firm IDC, asked whether principles of ethical AI can be shared across national boundaries.
“We will have a limited ability for every nation to align on exactly the same approach, but we have to align in some ways on what we will not allow AI to do and on what people will be responsible for,” said Smith of CMU.
The panelists credited the European Commission with being out front on these ethical issues, particularly in the realm of enforcement.
Coffey of the Naval War College acknowledged the importance of finding common ground around AI ethics. “From a military perspective, our interoperability needs to go to a whole new level. We need to find common ground with our partners and allies about what we will allow AI to do and what we will not allow AI to do.” Unfortunately, “I don’t know if that discussion is happening,” he said.
Discussion of AI ethics could perhaps be pursued as part of certain existing treaties, Smith suggested.
The many AI ethics principles, frameworks, and roadmaps offered by federal agencies can be challenging to follow and to make consistent. “I am hopeful that over the next year or two we will see a coalescing,” Ariga said.
For more information and access to recorded sessions, see AI World Government.