Knowing the Difference*

*This article is based on a Fortune magazine article I read recently.

In March 2019, aviation authorities around the world grounded the Boeing 737 MAX passenger airliner after two new aeroplanes crashed within five months of each other. Soon afterwards, headlines began appearing that blamed ‘aggressive and riskier’ AI systems – systems making decisions the pilots didn’t understand and didn’t know were wrong. Further investigation suggested that faulty sensor data led to these crashes: the algorithms were doing their jobs, but they were acting on bad inputs. The element of AI surprise, however, was at the top of many people’s minds and didn’t fade quickly.

Beyond Airlines and into Finance

In November 2019, reports that credit limits were being assigned very differently to male and female applicants raised questions of gender profiling and bias in credit scoring. The story led to a high-profile Twitter exchange, with Bloomberg asking ‘What’s in the black box?’ – illustrating how difficult it can be for users, and sometimes even for those offering the service, to explain how decisions are made.

A Surprising Move

Not all AI surprises have been negative, though. In 2016, AlphaGo, a computer Go program developed by Google DeepMind, played the world champion Go player Lee Sedol in a series of widely publicised matches.

On the 37th move of the second game, AlphaGo played a move that caught even the world’s best Go players completely by surprise. It was so out of the ordinary that many thought it was a mistake. AlphaGo went on to win the game, and Move 37 has come to define the way AI is now seen by businesses. It is these seemingly hidden opportunities that, when correctly identified by AI, can change the course of a company. These are the ultimate moves we aim for when using AI and data science.

Helping to Solve the Problem with AI Governance

The obvious question, then, is: how do we learn to trust Move 37s while avoiding the decisions that can lead to reputational damage or worse? I believe AI and model-building governance has a role to play here.

AI governance, done correctly, aims to make data analytics models trustworthy throughout the organisation by ensuring that models – and the inputs they consume – are built and used in accordance with a defined framework. Such frameworks should be put in place to mitigate risk and identify potential errors, while providing guidance on how to build and deploy models. One error that should be addressed especially carefully is faulty input data.
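To make the input-checking side of such a framework concrete, a governance process might require basic sanity checks on model inputs before any prediction is acted on. The sketch below is a minimal, hypothetical illustration in Python; the field names and plausible ranges are assumptions for illustration, not part of any specific governance framework.

```python
# Minimal sketch of an input-validation gate: reject or flag records
# whose values fall outside declared plausible ranges before scoring.
# Field names and ranges below are illustrative assumptions.

def validate_inputs(record, schema):
    """Return a list of problems found; an empty list means the record passes."""
    problems = []
    for field, (lo, hi) in schema.items():
        value = record.get(field)
        if value is None:
            problems.append(f"{field}: missing")
        elif not (lo <= value <= hi):
            problems.append(f"{field}: {value} outside plausible range [{lo}, {hi}]")
    return problems

# Hypothetical plausible-range schema for a sensor feed.
SCHEMA = {
    "angle_of_attack_deg": (-20.0, 40.0),
    "airspeed_knots": (0.0, 600.0),
}

reading = {"angle_of_attack_deg": 74.5, "airspeed_knots": 250.0}
issues = validate_inputs(reading, SCHEMA)
if issues:
    # Governance rule: escalate for human review rather than
    # acting automatically on the model's output.
    print("Input flagged:", issues)
```

The point of the gate is not to make the model smarter, but to stop a well-behaved algorithm from confidently acting on implausible inputs – the failure mode described above.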

Knowing the Difference

When data scientists combine proper AI governance, data management and governance, and a defined analytics framework to build, deploy and manage models, we will be one step closer to telling the difference between outcomes that should be ignored, those that should guide us, and those that will enable us to make our own Move 37s.