
ALGORITHMS BORN OF OUR PREJUDICES

Norbert Biedrzycki

Are algorithms capable of discrimination? I am afraid they are. What complicates the question is the fact that algorithm developers can hardly be accused of malicious intent.

How then could a mathematical formula put individuals and communities in harm’s way?

As distant and aloof as mathematical equations may seem, they are also commonly associated with reliable, hard science. Every now and then, it nevertheless turns out that a sequence of numbers and symbols conceals a more ominous potential.

What is it that causes applications, which otherwise serve a good cause, to go bad? There could be any number of reasons. One of the first ones that spring to mind has to do with human nature.

People are known to let stereotypes and prejudices guide their lives, applying them to other individuals, social groups, and value systems.

Such cognitive patterns are often driven by a lack of imagination and a reluctance to give matters proper consideration, and the resulting mixture spawns negative consequences.

People who blindly trust computer data fail to see the complexity of situations and readily forgo their own assessment of events.

Once that happens, unfortunate events unfold, causing serious problems for everyone involved.

Algorithms in the service of the police

The police are ideally suited to testing intelligent technologies. Such technologies have their quirks, and the industry is well aware that a useful algorithm can at times cause problems.

But let us be fair. Smart data processing allows police computers to effectively group crimes, historical data and circumstances into categories and datasets.

There is no disputing the usefulness of applications that help associate places, people, psychological profiles, the time crimes were committed and the instruments used. Criminologists and data processing scholars at the University of Memphis have chosen to use IBM software designed for predictive analyses.

The project team created an analytical mechanism that takes into account such variables as air temperature, local geographies, population distribution, the locations of stores and restaurants, resident preferences and crime statistics.

The underlying algorithms use these variables to identify potential flashpoints in the city. And they actually work. Tests of the system show it is indeed possible to predict the future with a certain degree of accuracy, although no details are given on what that degree might be.

That accuracy is nevertheless deemed sufficient to justify sending police officers to the “high-risk” zones identified in this manner. Claims are also made that this reduces police response time, measured from the moment an incident is reported, by a factor of three.
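
To make the mechanics a little more concrete, here is a minimal sketch in Python. It is not the IBM SPSS system the Memphis team used; it merely illustrates, with invented features and weights, how variables of the kind listed above could be turned into a per-zone risk score.

```python
# Purely illustrative sketch (not the IBM SPSS model used in Memphis):
# a generic classifier scores city zones by risk from made-up features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_zones = 1_000

# Hypothetical per-zone features for one evening.
temperature = rng.normal(15, 8, n_zones)             # degrees Celsius
population_density = rng.lognormal(8, 0.5, n_zones)  # residents per km^2
venues = rng.poisson(5, n_zones)                      # stores, bars, restaurants
past_incidents = rng.poisson(3, n_zones)              # crimes recorded last month

X = np.column_stack([temperature, population_density, venues, past_incidents])

# Synthetic "ground truth", used only to make the toy example trainable:
# warmer, denser, busier zones with more history see more incidents.
logits = (0.03 * temperature + 0.0004 * population_density
          + 0.15 * venues + 0.4 * past_incidents - 6.0)
incident_tonight = rng.random(n_zones) < 1.0 / (1.0 + np.exp(-logits))

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, incident_tonight)
risk = model.predict_proba(X)[:, 1]

# The zones a patrol planner might be pointed to.
print("highest-risk zones:", np.argsort(risk)[-5:][::-1])
```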

I can only imagine that mere police presence in such locations could deter criminal activity. And although this example may be difficult for a layman to understand, it shows that modern technology offers “dynamite” innovations with the potential to produce spectacular results.

When computers get it wrong 

The HunchLab system from the startup Azavea, which has been rolled out in the United States, sifts through massive amounts of data of various types (including phases of the moon) to help the police investigate crimes.

As in the previous example, the idea is to create a map of locations where the probability of a crime occurring is particularly high. The program focuses on the locations of bars, schools and bus stops across the city.

And it is proving helpful. While some of its findings are quite obvious, others can be surprising. It is easy to explain why fewer crimes are committed on a colder day.

It is considerably harder though to find the reasons why cars parked near Philadelphia schools are more likely to get stolen.

Would it ever occur to a police officer without such software to look into the connection between schools and auto theft?
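
As an aside, the basic idea behind such a probability map is simple to picture. The sketch below is purely illustrative and has nothing to do with HunchLab’s actual methodology: historical incident coordinates are binned into grid cells and the counts read off as empirical probabilities.

```python
# Crude illustration (not HunchLab's methodology): bin historical incident
# coordinates into grid cells and read off empirical crime probabilities.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical incident coordinates (km) inside a 10 km x 10 km city,
# clustered around one neighbourhood for the sake of the example.
incidents = rng.normal(loc=[6.0, 4.0], scale=1.5, size=(500, 2))

# Count incidents per cell of a 10 x 10 grid and normalise to probabilities.
counts, x_edges, y_edges = np.histogram2d(
    incidents[:, 0], incidents[:, 1], bins=10, range=[[0, 10], [0, 10]]
)
risk_map = counts / counts.sum()

# The five cells a dispatcher might treat as "high-risk" zones.
flat = risk_map.ravel()
for cell in np.argsort(flat)[-5:][::-1]:
    row, col = divmod(cell, risk_map.shape[1])
    print(f"cell ({row}, {col}): share of incidents {flat[cell]:.3f}")
```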

The above are all positive scenarios. However, it is difficult to get over the fact that smart machines not only make mistakes in their processing but also contribute to wrong interpretations.

Quite often, they are unable to understand situational contexts. Not entirely unlike people.

The shaky credibility of software

In 2016, the independent newsroom ProPublica, which brings together investigative journalists, published the article “Machine Bias” on US courts’ use of specialist software from Northpointe to profile criminals.

Designed to assess the chances that prior offenders will re-offend, the software proved highly popular with US judges, the article noted. Northpointe’s tool estimated the likelihood of black convicts committing another crime at 45 percent.

Meanwhile, the risk of a white person re-offending was put at 24 percent. To reach these conclusions, the algorithms treated predominantly black neighborhoods as a higher criminal-behavior risk than predominantly white districts.

The presumptions propagated by the software were called into question, ultimately putting an end to the analytical career of Northpointe’s software suite.

The root cause of the problem lay in basing assessments on historical data alone, and in the failure, born perhaps of a lack of awareness, to design algorithms that account for the latest demographic trends.
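
A toy simulation can make the point about historical data concrete. The sketch below is hypothetical and has nothing to do with Northpointe’s actual model: two groups with identical true re-offence rates, one of them historically over-policed, end up with very different predicted risk scores.

```python
# Hypothetical toy simulation (not Northpointe's model): a risk model trained
# only on historical records inherits the disparities baked into those records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with the SAME true re-offence rate.
group = rng.integers(0, 2, n)
true_reoffend = rng.random(n) < 0.30   # identical base rate for both groups

# Historical labels: re-offences in group 1's neighbourhoods were recorded
# (policed) twice as often, so the training data over-represents them.
record_prob = np.where(group == 1, 0.9, 0.45)
recorded = true_reoffend & (rng.random(n) < record_prob)

# The model sees group membership only via a correlated "neighbourhood" feature.
neighborhood = group + rng.normal(0, 0.3, n)
X = neighborhood.reshape(-1, 1)

model = LogisticRegression().fit(X, recorded)
scores = model.predict_proba(X)[:, 1]

print("mean risk score, group 0:", round(float(scores[group == 0].mean()), 3))
print("mean risk score, group 1:", round(float(scores[group == 1].mean()), 3))
# Despite identical true rates, group 1 is assigned roughly double the risk.
```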

Algorithms and white faces

In her 2016 book “Weapons of Math Destruction”, Cathy O’Neil explores the premise that algorithms greatly influence various areas of people’s lives. She suggests that people tend to give mathematical models too much credit.

This, she claims, gives rise to biases, which are formed in many ways and on many levels. Prejudices, she says, originate early, even before the data that algorithms use for analysis is collected. The very same mechanism was discovered by Amazon managers.

They noticed that the recruitment programs they were using regularly discriminated against women. Searches for promising prospects would always have women in the minority among the suggested hits.

What caused the bias? Reliance on historical data showing that more men had applied for specific positions. This tipped the scales in men’s favor, disrupting gender parity in employment and ultimately leading to the formulation of biased employment policies.
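
A hypothetical sketch (emphatically not Amazon’s actual system) shows how such a bias can arise. Train a résumé scorer on a handful of historical hiring decisions in which the hires skew male, and the model learns to penalise a word as innocuous as “women’s”.

```python
# Hypothetical sketch (not Amazon's system): a resume scorer trained on
# historical hiring decisions learns whatever patterns those decisions contain.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: past hires skew heavily male, so terms that appear
# mostly on women's resumes end up associated with "not hired".
resumes = [
    "software engineer python men's rugby team",          # hired
    "backend developer java chess club",                   # hired
    "data engineer sql men's rowing club",                  # hired
    "software engineer python women's chess club",          # not hired
    "machine learning engineer women's coding society",     # not hired
    "frontend developer javascript hiking club",            # hired
]
hired = np.array([1, 1, 1, 0, 0, 1])

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative, i.e. the model
# penalises resumes that merely mention women's activities.
idx = vec.vocabulary_["women"]
print("weight for 'women':", round(float(model.coef_[0][idx]), 3))
```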

Algorithms not getting cultural change

The assessment software described above relied on algorithms developed in an era in which gross gender-based inequalities plagued employment.

That specific moment in time was characterized by an over-representation of men. Trained on historical data, the algorithms worked on the “belief” that the world had not changed.

This meant that their assumptions and simplifications (such as that being black implies a higher probability of crime, or that men are more likely to be excellent professionals) were misguided.

Disturbing questions

If you think that mechanisms similar to those described above may be common in professional and personal life, you may well be on to something.

How many cases are there, unknown to us, in which data is organized around erroneous assumptions? How often do algorithms fail to account for economic and cultural changes?

The black box is a term used to describe human helplessness in the face of what happens in the “brains” of artificial intelligence. Our ignorance and the increasing autonomy of algorithms, which turn out to be far from infallible, generate a disturbing mix.

The prejudices of algorithms will not vanish at the wave of a magic wand. The key question therefore is whether their developers, who often do their design and training work all by themselves, will rise to the task and realize just how easily human biases and behavior patterns can rub off on software.

.    .   .

Works cited:

IBM and Memphis Police Department, “IBM SPSS: Memphis Police Department. A detailed ROI case study”, Link, 2015.

Maurice Chammah, with additional reporting by Mark Hansen, “Policing the Future: In the aftermath of Ferguson, St. Louis cops embrace crime-predicting software”, The Verge, Link, 2018.

Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks”, ProPublica, Link, 2018.
.    .   .

Related articles:

– Learn like a machine, if not harder

– Time we talked to our machines

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Hello. Are you still a human?

– Artificial intelligence is a new electricity

– How machines think
