The genesis of an ethical digital blueprint for the post-pandemic economy | An interview with Danny Lange, SVP of AI at Unity Technologies

Co-Written by Robert Brennan Hart and Anthony Bulk, Associate Director of Data and AI at Sierra Systems

Artificial Intelligence is leading to the fundamental reengineering of all aspects of our human experience and how we as a species interact with our universe. The inequalities, incumbencies, and biases of the innovators designing AI have a direct impact on how the technology guides human information, perception, and action.

As AI leads society towards the next phase of human evolution, it is becoming increasingly evident that we are at risk of creating a future in the flawed image of its makers; and, perhaps, a societal blueprint constructed from Silicon Valley-encoded digital eugenics. Can we create an intelligence unconstrained by the limitations and prejudices of its creators, so that AI serves all of humanity, or will it become the latest and most powerful tool for perpetuating and magnifying racism and inequality?

We recently chatted with Danny Lange, senior vice president of AI at Unity Technologies, to get his thoughts on the role of artificial intelligence in advancing a more ethical blueprint for the post-pandemic economy.

As head of machine learning at Unity, Lange leads the company’s innovation around AI and machine learning, focusing on bringing AI to simulation and gaming. Prior to joining Unity, Lange was the head of machine learning at Uber, where he led efforts to build the world’s most versatile machine learning platform to support the company’s hyper-growth. Lange also served as general manager of Amazon Machine Learning, an AWS product that offers machine learning as a cloud service. Before that, he was a principal development manager at Microsoft, where he led a product team focused on large-scale machine learning for big data.

As artificial intelligence further progresses into mainstream corporate culture, how can organizations ensure the fair and ethical use of this technology?

I think there is an increasing awareness among business leaders that a responsible approach to AI is required to ensure the proper use of this awesome technology. Leaders and their organizations must define what constitutes the responsible use of AI. This can take the form of a set of principles that guide the design and use of the technology. Drawing up such principles should be structured around the business value of AI and the mitigation of potentially negative factors such as bias, privacy, fairness, sustainability, and safety. Major companies have already moved in this direction and published their principles for ethical AI. Other organizations should follow their example. Drafting such principles is good business, as it creates the foundation of a healthy AI business strategy, detailing how organizations responsibly deploy AI in their products and business processes.

Mark Cuban recently announced that he would commit $2 million to expand a program he founded that aims to teach artificial-intelligence skills at no cost to high school students in low-income communities across the country. What other steps can the private sector take to help meet the demand for skilled workers in the areas of artificial intelligence and machine learning?

At Unity, our mission is to empower creators of all backgrounds. We have made many of our education products free, and most of our AI technologies are available as open-source packages. We have set out to make Unity easy to use for students in the field of AI. Our engineering team has created programming interfaces that make it easy to explore and learn complex techniques such as reinforcement learning, or to generate large amounts of fully labelled, photographic-quality images for training computer vision systems. For our teams at Unity, it is all about lowering the barriers for students and other users to access, explore, and learn the ins and outs of AI in a hands-on setting. Some of our AI environments have seen worldwide adoption amongst undergraduate and graduate students. AI is too important a technology to be left in the hands of the few.

What criteria should organizations consider when evaluating different use cases for AI/ML? What areas are often overlooked when moving AI out of the lab and into production?

Many applications of AI fail when they are brought out of the lab and into the real world. While AI is one of the most powerful technological innovations of recent times, it is not magic. Many applications perform well in carefully orchestrated settings but face challenges in a noisy and messy world. Organizations should carefully and critically vet applications of AI. Ask the hard questions and challenge the application design with difficult scenarios: users who are trying to break or even game the system; networks that are unstable; cameras with dirt on the lens; autonomous vehicles on a snowy day; and so on. Don’t just go along and hope for the best.

As the Internet of Things, business intelligence, artificial intelligence, and virtual reality start to converge, what impact will this convergence have on the way solutions are developed and services are delivered? How can we ensure the broader societal picture remains in focus through the hyper-accelerated emergence of so many seemingly fragmented technology mediums?

We are returning to the point of the ethical use of powerful technologies. The solution is found in organizations that are prepared to ask the right questions and empowered to deliver responsible solutions. Changing organizations to operate this way will be hard, but I am an optimist. This approach will persevere because responsible organizations are likely to be the most successful in business. Indeed, the need for the ethical use of AI is not a trend but rather the defining factor that will decide who is worth doing business with.

Danny will be joining John Koetsier, AI contributor at Forbes, and five of the world’s leading experts in AI ethics, on Thursday, December 3, for the third chapter of Interzone, presented by Politik and the Electronic Recycling Association.

Interzone’s closing digital roundtable will also feature Beena Ammanath, executive director of the Deloitte AI Institute; Anima Anandkumar, director of ML research at NVIDIA; Dr. Cindy Gordon, CEO of SalesChoice; and Dr. Lobna Karoui, global AI panel contributor at Forbes and MIT.

Registration is now open.

Robert Brennan Hart
Robert Brennan Hart currently serves as the Executive Vice President of Social Impact at the Electronic Recycling Association and has been recognized by the United Nations Foundation as one of the world's Top 70 Digital Leaders and by Avenue Magazine as a Top 40 Under 40 digital luminary. As the founder and former CEO of the Canadian Cloud Council and Politik, Robert is a globally recognized advocate for the advancement of a more equitable and enlightened digital society and has served as a member of the United Nations Global Digital Council, the DocuSign global advisory board, and HotTopic’s Meaningful Business steering committee. Prior to joining the ERA, Robert served as the Chief Community Officer at Abaxx Technologies and executive producer of the Smarter Markets podcast.
