
Do You Trust Machines to Make Decisions For You?

Major tech companies continue to grow and their influence on everyday life is spreading.



Mark Ryan

2 years ago | 6 min read

John had an idea. He often had fairly good ideas, but this was a very good idea and he was pretty pleased with himself. He had discovered that his company had been wasting thousands of man-hours every month due to an anomaly in the timecard system that most of his colleagues were using. The processes hadn't been looked at in years and nobody could remember why the current protocol was in place or what purpose it served, but everybody agreed it was a huge waste of time. He decided to develop a simple system that automated most of the procedures while preserving their core functionality.

It was a very, very good idea and he was, as I say, pretty pleased with himself. He proudly submitted his proposal to management on a Monday and, by the following Friday, he had been called to a meeting for what he assumed would be a discussion about the implementation of his plan. Instead, he was faced with an impassive, stone-faced director who flatly told him that the proposal was unsuitable, before leaving the room without another word.

John was pretty miffed. He didn't understand why his proposal was rejected and was particularly annoyed that the director didn't take any time to explain why. Most of us would probably want to understand why our pitch didn't work or what was wrong with our application. Being left in the dark is frustrating and undermines our trust in the decision-making process. This goes double for algorithms: because we tend to anthropomorphise technology, it is deeply unsettling when an AI appears to keep us out of the loop. It's one of the reasons why explainability in Artificial Intelligence is so important as it becomes further embedded in our day-to-day transactions and trust becomes critical.

On any given day you or I could have thousands of data points collected about us. This could include web-browsing behaviour, physical movements or spending patterns. Our personal data is compiled and recorded openly, with the tacit agreement of the user, and sold on to interested parties. There is no restriction on who can purchase material used to predict our future behaviour and assess our suitability for services.

Major tech companies continue to grow and their influence on everyday life is spreading. It is extremely difficult for somebody living in the developed world to go through a day without encountering Google, Apple, Facebook, Amazon or Microsoft. Software imposes itself on almost every aspect of human activity, so the range of collection points is vast.

The more data points that businesses or organisations possess on a person, the more decisions they can make about them. A bank will use a credit report to assess an applicant's suitability for a loan, for example. There are well-established regulations governing what information is available on a credit report, how long it stays there and who can access it. But until recently there was very little legislation governing how an individual's personal data should be treated. The EU's GDPR (in force since May 2018) and California's CCPA, which came into effect in January 2020, have begun to establish rules on the appropriate way to treat what has become an exceptionally valuable commodity.

I have written elsewhere about the problem of how organisations use this data once they have acquired it. Article 22 of GDPR mandates that no individual can be subject to an entirely automated decision-making process that affects them in any meaningful way. This means, for example, that an Uber rider with a low passenger score cannot be excluded from further rides automatically. There must be a human in the loop to make sure the algorithms working within Uber's software are behaving properly and passengers' scores aren't suffering due to glitches, mistakes or biases.

What many people find concerning is the level of trust that these organisations ask of us. There is a fear that individuals could become subject to the unaccountable whims of an unexplainable algorithm when they try to rent a car or take out a phone contract. Charlie Brooker's tech satire Black Mirror illustrated this beautifully in the episode 'Nosedive', where a character suffers a dramatic drop in her algorithmically calculated social score after a series of unfortunate interactions. Consequently, she is unable to rebook a seat on a cancelled flight to attend a wedding and, after several hugely stressful situations, is ultimately forced to accept a lift from a lorry driver, who reveals that her own husband was denied life-saving medical care because of his inferior social score.

As the services provided by private companies are further enmeshed within our everyday lives there must be legitimate human oversight in AI decisions. The alternative could lead us to a grotesque Kafkaesque ‘computer says no’ reality where no decision can be challenged, nobody can account for how the system works and nobody takes responsibility.

EU lawmakers have taken the position that accountability is a prerequisite for consumer trust in automated decision-making. Undeniably, the consequence of the action is relevant: there is no requirement for explainability when Google Maps selects one route over another to get you home at night. Traditional AI is perfectly appropriate for simple or trivial tasks like song recommendations on Spotify or voice assistant software.

Decisions made by self-driving cars will need to be explained in the event of a collision (Credit: THE ASSOCIATED PRESS)

A police AI system equipped with facial-recognition technology, however, could result in instances of false arrest (or worse). Modern military systems like drones use AI processes to make split-second decisions on a battlefield. Diagnostic AI tools in healthcare or even stock-trading applications all have the potential to cause real damage when they behave in unexpected ways. If the decisions taken by the AI are not transparent or explainable it makes their use difficult to justify. How can we be sure we can trust these systems? Can we be certain that they have been developed without biased training data, or that they haven’t been maliciously manipulated? If a closed AI system is guilty of a miscarriage of justice, a stock market crash or a war crime, who can be held accountable?

Considerations like these have caused major companies like IBM and Google to start taking Explainable AI seriously, although advanced applications of the technology are still in their infancy. Two of the key factors in any XAI application are inspectability and traceability: an investigator must be able to examine, after the fact, precisely where within an algorithm a decision was taken and why.

DARPA (the agency that develops emerging technology for use by the US military) has determined that an XAI algorithm should produce two outputs: the decision and the model by which it arrived at the decision. Broken down to a basic level, a complete algorithm comprises the training data (input layer), the deep learning process (hidden layer) and, finally, an explanation interface that produces the solution along with the steps by which it arrived there (output layer). This would mean that many modern advanced AI techniques, by their very nature, could never be explainable: the 'black box' of a complex artificial neural network could never produce a model of how it came to a decision. Instead, developers have been working on other techniques that might lend themselves to explainability. These include:

LRP (layer-wise relevance propagation) is one of the simpler techniques for explaining how some machine learning systems reach their conclusions. It works backwards through the neural network to determine which inputs were most relevant to the output (see the first sketch after this list).

LIME (Local Interpretable Model-Agnostic Explanations) is a post-hoc method that slightly changes (or perturbs) the inputs to a model and observes how the outputs change, giving insight into how a particular decision was made (see the second sketch after this list).

RETAIN (REverse Time AttentIoN) was developed with medical diagnostics in mind. It processes a patient's records in reverse chronological order and uses two attention mechanisms to reveal which hospital visits, and which clinical variables within them, most influenced the prediction.
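
To make the first of these concrete, here is a minimal sketch of an LRP-style backward relevance pass over a tiny two-layer ReLU network, using the epsilon rule. The network, its random weights and the example input are illustrative stand-ins rather than any particular production model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 4 inputs -> 3 hidden units (ReLU) -> 1 output score.
# Weights are random placeholders, purely for illustration.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def lrp_epsilon(a_in, W, b, relevance_out, eps=1e-6):
    """Redistribute a layer's output relevance back onto its inputs (epsilon rule)."""
    z = a_in @ W + b                             # pre-activations of the layer
    z = z + eps * np.where(z >= 0, 1.0, -1.0)    # stabiliser to avoid division by ~0
    s = relevance_out / z                        # relevance per unit of pre-activation
    return a_in * (s @ W.T)                      # each input's share of the relevance

# Forward pass for one example: this is the "decision"
x = np.array([1.0, -0.5, 2.0, 0.3])
h = np.maximum(0.0, x @ W1 + b1)                 # hidden layer (ReLU)
y = h @ W2 + b2                                  # output score

# Backward relevance pass: start from the output and walk back to the inputs
R_hidden = lrp_epsilon(h, W2, b2, y)
R_input = lrp_epsilon(x, W1, b1, R_hidden)

print("prediction:", y)
print("relevance attributed to each input feature:", R_input)
```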
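
And here is a rough sketch of the LIME idea: perturb a single instance, query the black-box model on the perturbed samples, and fit a locally weighted linear surrogate whose coefficients act as the explanation. The random forest and synthetic dataset below are assumed purely for illustration; the real LIME library adds sampling, weighting and feature-selection refinements omitted here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train a stand-in "black box" classifier on synthetic data
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The single prediction we want to explain
instance = X[0]

# 1. Perturb: sample points in the neighbourhood of the instance
rng = np.random.default_rng(0)
samples = instance + rng.normal(scale=X.std(axis=0), size=(1000, X.shape[1]))

# 2. Query the black box on the perturbed points
preds = black_box.predict_proba(samples)[:, 1]

# 3. Weight each sample by its proximity to the original instance
distances = np.linalg.norm(samples - instance, axis=1)
weights = np.exp(-(distances ** 2) / (2 * distances.std() ** 2))

# 4. Fit a simple, interpretable surrogate model on the weighted samples
surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)

# The surrogate's coefficients approximate each feature's local influence
for i, coef in enumerate(surrogate.coef_):
    print(f"feature {i}: local weight {coef:+.3f}")
```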

This recent post from Venture Radar gives a more detailed summary of companies leading the way in XAI technology, including DarwinAI, which uses a process known as "generative synthesis" to enable developers to understand their models; Flowcast, which uses machine learning to create predictive credit-risk models for lenders; and Factmata, which aims to combat fake news online.
