
Microsoft outlines AI ethics framework strategy

Microsoft Corp. has made great strides in the field of AI, and as it continues to innovate, it is keeping an eye on the ethics of AI, said Michael Phillips, assistant general counsel for Microsoft, at its recent AI & Tech Immersion event in Seattle.

“We’re very enthusiastic and optimistic about the fact that we are enabling anybody and everybody to utilize these tools. But we are also very aware of the fact that once you do that, you open up all kinds of issues that may have been traditionally sort of confined to human debate,” said Phillips. “And so as we confront this, we know and certainly the regulatory environment knows, that when we are working on this technology and deploying this technology, the trustworthiness is really the critical piece of this.”

Phillips co-chairs an internal group within Microsoft that was created to tackle the challenges of implementing AI ethically. The group consists of leaders from Microsoft’s research, legal, and engineering departments.

He describes Microsoft’s approach to ethical AI as resting on five essential aspects: fairness, reliability & safety, privacy & security, inclusiveness, and transparency & accountability.

Fairness

Defining the term fairness in the context of technology is quite a challenge, said Phillips.

“The topic of fairness is incredibly broad, and you know, it starts with sort of a realization that even defining what we mean by fairness is extremely difficult, and in fact, can’t really be done,” said Phillips. “And then from that, sort of understanding that fairness or various problems can surface in artificial intelligence in any number of ways.”

He said one way to work towards that goal is to ensure AI systems do not reproduce stereotypes, noting that racial bias has been a recurring problem when relying on algorithmic analysis of data.

Using the example of Eric L. Loomis, who was sentenced to six years in prison after being labelled high risk by COMPAS (an algorithmic tool that assesses the risk of re-offence, whose use in sentencing was upheld by the Wisconsin Supreme Court), Phillips pointed to the need to ensure the data leveraged by such systems is accurate and as free of bias as possible.

He emphasized the need for greater oversight when assessing data before it is put into use, as AI and algorithmic predictors will increasingly be used to assess things such as bank loans and access to housing.
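
Phillips did not point to any specific tooling for that kind of oversight, but one way to make it concrete is to audit a model’s outputs for gaps between demographic groups before it goes into use. The sketch below is purely illustrative and is not drawn from Microsoft or the COMPAS case: the records are made up, and it checks a single, deliberately simple signal (per-group selection rates and their ratio, often called the disparate impact ratio, with the commonly cited but debated 0.8 threshold).

```python
# Minimal, illustrative bias check on a model's outputs (hypothetical data).
from collections import defaultdict

# Hypothetical records: (demographic group, was the person flagged high risk?)
records = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
]

flagged = defaultdict(int)   # high-risk flags per group
totals = defaultdict(int)    # people assessed per group
for group, high_risk in records:
    totals[group] += 1
    if high_risk:
        flagged[group] += 1

# Selection rate: share of each group labelled high risk.
rates = {g: flagged[g] / totals[g] for g in totals}
print("selection rates:", rates)

# Disparate impact ratio: lowest selection rate divided by the highest.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # a common rule of thumb, not a legal or scientific threshold
    print("flag for human review: outcomes differ noticeably across groups")
```

A real audit would look at several metrics and, as Phillips suggests, at the data feeding the system as well as its outputs.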

Reliability & Safety

Just as AI is beginning to affect human lives through decision-making in the digital space, it will also begin to affect them in the physical world.

The prime example of this is the innovation in the space of self-driving cars.

This has become a hot-button issue in Chandler, Ariz., where residents have at times violently protested the rollout of self-driving cars by Waymo, a subsidiary of Alphabet Inc.

And Phillips says he believes this sort of backlash is likely to continue as long as the general public remains in the dark about how AI works and how it makes the decisions it does.

“There are various studies that suggest that human beings will forgive the mistakes of other human beings at a pretty high rate. Human beings will not forgive the mistakes of machines at a similar rate. Why is that?” said Phillips. “Certainly, I think part of the challenge is the lack of the ability to understand what the machine does and why. The sort of fear that machines are operating in a different space than humans can understand and at a speed that they can[not] understand.”

Phillips said these fears can be alleviated through a thorough system of testing and by remaining transparent and open with the public, keeping them informed as the technology progresses.

Privacy & Security

Data is being collected at higher rates than ever before. In fact, Phillips said the amount of data in existence has begun to increase by 200 per cent every two years, and is likely to grow even more dramatically in the future.

The ability to leverage data in the use of AI and algorithmic prediction will continue to increase as the scope of data increases.

And Phillips says it is essential that regulatory frameworks governing data, and the methods by which it is collected, keep pace with this exponential growth.

We have already seen this happening, with the EU’s 2018 implementation of its General Data Protection Regulation (GDPR) being a prime example, alongside other emerging regulatory efforts.

While Phillips said this is the area in which he sees the most advancement, he added that the momentum needs to continue.

“This ends up being a space I think that probably compared to the others is relatively mature. And yet, is still developing,” said Phillips. “But we certainly understand that ensuring that people have the ability to consent to the use of the data and to be able to control the use of that data, these are going to remain critical issues.”

Phillips pointed to a case in 2012 in which a man found out his daughter was pregnant because of shopping and coupon suggestions that Target had sent the family based on her purchasing habits.

He said he believes further advancements in regulatory frameworks can begin to eliminate privacy issues like this.

Inclusiveness

The potential for AI technology to assist people with disabilities is beginning to take shape, said Phillips, adding that Microsoft is firmly committed to this idea.

“For AI to really augment human capabilities and really become indispensable, it has to be designed from the beginning for human needs in mind and all human beings,” he said. “So certainly, there’s tremendous promise of the potential for AI for those with disabilities.”

While Phillips used the example of technology to assist the visually impaired in interacting with the world around them, there has also been talk of devices that can assist those with physical disabilities by intuitively predicting the movements of the user.

To create devices that people will trust and rely on, Phillips said Microsoft must work towards technology that improves users’ lives without changing how they live them, while also maintaining an avenue of education so that users understand the technology.

“This principle, [in] terms of maintaining the trustworthiness, is about ensuring that people feel like, yes, I actually want this technology in my life because it serves a purpose that I understand. And that it is focused on me versus the other way around. I’m not having to adapt who I am and what I do to fit within what this technology is doing,” Phillips said. “And so this will remain a really key part of what we do.”

Transparency & Accountability

The idea of transparency was a key aspect of everything Phillips spoke about, and he reiterated just how essential it is, saying it is constantly on the minds of those making important AI-related decisions at Microsoft.

And he says this comes down to not only allowing the public to understand how the technology works, but also what the technology is intended to do and how those purposes are meant to be achieved.

“Where’s our defining line on transparency? And I think the answer is we’re not entirely sure yet,” said Phillips. “There are a couple of elements that we’re looking at. On one hand, we think of this broader context, and that is really going beyond just what the models do and what’s inside the system. But really transparency about what the system is designed to do and how we intended it to work. Are we being as forthright as we can be about system limitations and purposes and that sort of thing?”

Future Outlook

And while Phillips said Microsoft has monitored and will continue to monitor these issues and the best ways to tackle them, the true test will be instilling the need for ethical policies in those who will use Microsoft’s AI technologies, as well as in those who will create the regulatory frameworks governing this technology in the future.

“There’s going to be a lot of variety in terms of how people approach these issues. So that’s going to be an important point of our outreach; working with regulators to get deeper. And as we have what we consider breakthroughs on issues like transparency, educating our partners in government to understand that as well.”

 

Microsoft Corp. paid for Buckley Smith’s travel and accommodations to attend Azure Data & AI Tech Immersion. The editorial content was not reviewed before publication.
