Retune 2016, Part 2: Algorithmic Decision Making, Machine Bias, Creativity and Diversity.

Memo Akten
Oct 14, 2016

--

This is an extract from my talk (i.e. Sermon, as it was in a Church) at Retune 2016. I’ve split it into multiple posts, based on theme. Part 1 is a summary of ideas which I explore in more detail in my Resonate 2016 talk; the rest is a collage of things I’ve been thinking about for the past few years, but am presenting for the first time.

Introduction

AI is getting a lot of bad press lately, and rightly so. I don’t mean the ridiculous hype-driven press related to The Singularity and robots taking over. I mean AI getting it wrong, screwing it up.

I use the term ‘AI’ quite loosely here. What I’m referring to is what is broadly referred to as ‘algorithms’. To be more precise: ‘algorithmic decision making’ — algorithms deciding what ads you see, labelling your photos, deciding whether you’re a high-risk offender or not, deciding what you should see in your Facebook feed, etc.

Usually these are some kind of machine learning algorithms — algorithms that learn how to behave by analysing ‘training data’. And the training data is us: our opinions, our values, our culture, embedded in the data that we produce. And these algorithms are learning our bad habits. They’re picking up our prejudices and biases. And as a result, they’re making incredibly sexist, racist, ableist, ageist, discriminatory and damaging decisions; not only reinforcing those very biases, but at times destroying individuals’ lives. There are a lot of really shocking, appalling and unacceptable examples of this. These images are just a tiny selection.
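
To make the mechanism concrete, here is a minimal sketch (my own toy illustration, with entirely made-up data and a scikit-learn logistic regression; it is not taken from any of the systems pictured): a classifier trained on historical decisions that were themselves biased will happily learn to use the protected attribute.

```python
# A toy sketch of bias flowing from training data into a model.
# All names and numbers are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two inputs: a genuinely relevant skill score, and a protected
# attribute (0 or 1) that should be irrelevant to the decision.
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical labels: past human decisions favoured group 0,
# independently of skill. This is the bias embedded in the data.
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Identical skill, different group -> different predicted outcome.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
```

Nothing in the code ‘intends’ to discriminate; the discrimination is entirely inherited from the labels it was asked to imitate.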

https://www.propublica.org/

Fortunately, there’s a growing number of people researching and writing about this. I really strongly recommend you check out propublica.org, who do quite extensive research into machine bias, and its broader implications.

http://www.katecrawford.net/, http://technosociology.org/

And researchers such as Kate Crawford at Microsoft Research, and Zeynep Tufekci who writes for the NYTimes, also expose a lot of these problems, looking at the social impact of algorithmic decision making, and calling for more regulations and transparency.

Critical Algorithm Studies: a Reading List

In fact a network of researchers called The Social Media Collective (SMC) put together this great list of research which I also highly recommend. You can find it if you search for “Critical Algorithm Studies: a Reading List”.

This is of course quite a complex issue. The output of a machine learning model is the result of the interplay between many different factors, and it isn’t always trivial to point out exactly why an algorithm makes a particular decision. For example, it might be that a particular ad is shown to men more often than to women because men were seen to click on that ad more often, and so that’s what the algorithm learnt. Or it might be that the advertiser explicitly asked to target men, and the algorithm just complies — since targeted advertising is what the whole business is about. Or it might be that the algorithm inferred, for some other unknown ‘complex’ reason, that the advertised product would be more suited to men.
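
As a toy illustration of that first mechanism, here is a sketch with made-up numbers (it does not describe any real ad platform): a naive click-optimiser locks in an early skew in the data, even though the two groups are almost equally interested in the ad.

```python
# A toy click-optimiser: always show the ad to whichever group has the
# higher observed click-through rate so far. Numbers are invented.
import random
random.seed(1)

true_ctr = {"men": 0.050, "women": 0.048}  # nearly identical interest
shown = {"men": 1, "women": 1}             # a slightly skewed start:
clicks = {"men": 1, "women": 0}            # one early click from a man

for _ in range(100_000):
    target = max(shown, key=lambda g: clicks[g] / shown[g])
    shown[target] += 1
    clicks[target] += random.random() < true_ctr[target]

print(shown)  # after the skewed start, every further impression goes to men
```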

So at times there seems to be some confusion as to where exactly the responsibility for these problems lies. There might be a tendency for a developer or company to claim that their algorithm is objective, that it’s the data which is biased, or maybe even that it’s the constraints set by the client which cause the problem.

So when developers, or companies that deploy AI products, claim that their technology is ‘neutral’, I think the simplest, most concise answer is a sentence I came across hidden in a paper, and which is absolutely critical…

“The amoral status of an algorithm does not negate its effects on society.”

— A. Datta, M.C. Tschantz, A. Datta, “Automated experiments on ad privacy settings”, 2015

I find it quite shocking that anyone deploying an algorithm for real-world use would for some reason assume that it’s not their responsibility to research and understand, from all angles, the wider impact of their technology — including any potential undesired consequences — before they deploy it. They should be subjected to the same level of rigour and regulation that other industries are subjected to before releasing a product or service: the food or drug industries (who still manage to push out unsafe products), or the construction industry.

But of course, unfortunately this goes against the current neoliberal agenda, which might accuse such criticism of ‘hindering innovation’. This is the agenda which goes by the mantra “maximise profit, launch product ASAP, roll out updates if something goes catastrophically wrong, ignore or deny any other wrongdoing. Move fast and break things”.

Society has become the beta-tester, the playground of Silicon Valley.

Imagine the same mentality in the food industry. “We’ve made this new chocolate bar, and only my family and I have tested it. Let’s launch it ASAP. If anyone gets ill or dies, we’ll just recall it and roll out an update.” Believe it or not, I was trained as a civil engineer. Can you imagine a construction company building a bridge with new and untested technology, opening it to the public without thorough testing, and just looking to see how it performs?

Of course I’m being a bit unfair. The negative consequences of food poisoning or a bridge failing are a lot easier to spot than the negative consequences of an algorithm failing. And herein lies the problem. It’s more difficult to quantify the negative consequences of an algorithm, especially when we’re all so ignorant as to what’s happening.

But I don’t want to dwell on this angle. There’s actually a growing amount of writing on this topic, by people a lot more qualified than I, on the sites and resources I mentioned before. I want to ask, how did this happen? How did we get here, and let this happen? And why is it happening? I want to look at this briefly from a historic perspective.

These learning algorithms are very heavily based on statistics — or rather, statistical inference — augmented through computer science into computational statistics, and applied to these particular domains.

And statistics isn’t a new field. In fact, the whole field of statistics was born and developed throughout the 17th, 18th and 19th centuries, out of the realisation that a sample of data is inherently biased and does not accurately represent the population from which it was sampled. That’s why we have the field of statistics to begin with! So I’d like to take you on another brief journey through time.
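
Here is a small numerical illustration of that point (made-up numbers, nothing to do with any particular study): an estimate based on a biased sampling process does not get better with more data, whereas an estimate from a random sample does.

```python
# Sampling bias in miniature: more data does not fix a biased sample.
import numpy as np
rng = np.random.default_rng(42)

population = rng.normal(loc=50.0, scale=10.0, size=1_000_000)
print("true mean:", round(population.mean(), 2))

# A biased sampling process: we only ever reach individuals whose value
# exceeds 45 (think: surveying only the people who answer the phone).
reachable = population[population > 45]

for n in (100, 10_000, 500_000):
    random_sample = rng.choice(population, size=n, replace=False)
    biased_sample = rng.choice(reachable, size=n, replace=False)
    print(n, round(random_sample.mean(), 2), round(biased_sample.mean(), 2))

# The random sample converges to the true mean (~50); the biased sample
# converges, just as confidently, to the wrong answer.
```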

A very brief history of Statistical bias

In the late 1600s, Jacob Bernoulli, the great mathematician and one of the founders of probability theory, said…

“I cannot conceal the fact here that in the specific application of these rules [probability theory], I foresee many things happening which can cause one to be badly mistaken if he does not proceed cautiously.” — Jacob Bernoulli (1654–1705)

I saw this quote by Bernoulli in the book “Probability Theory” by Edwin Thompson Jaynes, a physicist working in statistical mechanics and probability. In the same book Jaynes went on to say…

“A false premise built into a model which is never questioned cannot be removed by any amount of new data.” — Edwin Thompson Jaynes (1922–1998), “Probability Theory: The Logic of Science”, 2003

The evolutionary biologist Stephen Jay Gould said…

“… misunderstanding of probability, may be the greatest of all general impediments to scientific literacy.” — Stephen Jay Gould (1941–2002), “Dinosaur in a Haystack: Reflections on Natural History”, 1995

And perhaps one of the reasons for that, is given to us by the economist Ronald Coase…

“If you torture the data long enough, it will confess.” — Ronald Coase (1910–2013), early 1960s

Also…

“Prediction is very difficult, especially about the future.” — Niels Bohr (1885–1962)

…as made famous by Niels Bohr (who laid the foundations of quantum mechanics), though it comes from earlier sources.

Very famously of course there’s…

“There are three kinds of lies: lies, damned lies, and statistics.”

The exact source isn’t known, but already in 1891 this was printed in the National Observer…

“It has been wittily remarked that there are three kinds of falsehood: the first is a ‘fib,’ the second is a downright lie, and the third and most aggravated is statistics.”

— Eliza Gutch?, National Observer 1891

Of course, Mark Twain also said…

“All generalizations are false, including this one.”

— Mark Twain

In fact, these qualities of statistics were so well known that in 1954 this book came out…

“How to lie with statistics”, by Darrell Huff.

And interestingly, as of 2005…

…it has sold more copies than any other statistical text. Of course, the irony of looking at the statistics of a book about ‘lying with statistics’ is rather sweet.

Then why Statistics?

But again, I don’t want to be an unfair harbinger of negativity. Just bashing on statistics wouldn’t be fair. It isn’t rubbish. It exists for a reason.

The great mathematician Laplace said…

“… the theory of probabilities is basically just common sense reduced to calculus; it makes one appreciate with exactness that which accurate minds feel with a sort of instinct, often without being able to account for it.” — Pierre-Simon Laplace (1749–1827), “Théorie Analytique des Probabilités”, 1812

Echoed by the great physicist Maxwell…

“The true logic for this world is the calculus of Probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a reasonable man’s mind.” — James Clerk Maxwell (1831–1879)

Also echoed with similar sentiment but different perspective by the great pioneering nurse and statistician Florence Nightingale…

“To understand God’s Thoughts we must study statistics for these are the measure of His purpose.” — Florence Nightingale (1820–1910)

In a world governed by uncertainty, there is, and should be, a place for statistics. As dangerous as it is, we depend on it. And this can be summed up, with one of the most oft quoted statements in statistics, by George Box…

“All models are wrong, but some are useful.” — George E. P. Box (1919–2013), c. 1976

Which he later expanded with…

“Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.” — George E. P. Box (1919–2013)

That’s the question we should be asking, and perhaps aren’t asking enough today. In fact there seems to be a lot of responsibility-evading: “It’s the data, not me”. But again, it’s been known for a while that that isn’t a legitimate argument…

“The statistician cannot evade the responsibility for understanding the process he applies or recommends.” — Sir Ronald A. Fisher (1890–1962)

In short, in a world of uncertainty like ours, statistics is useful, if not essential. But it has to be handled with care. Or to quote me again…

“Statistics is like [a] laser. Careless use can blind. Skilled use can fix retina & help see clearer. Or make holograms to present false visions.”

A generation of computational data enthusiasts are forgetting this lineage of thinking, and jumping blindly into a delusional techno-utopian world; perhaps fueled by ignorance, perhaps fueled by the gold-rush mentality of tech businesses and start-up culture, the desire to urgently get something out there without thinking about the implications, driven purely by profit and neoliberal values which are not always aligned with the well-being of the wider population.

As the brilliant Ursula Franklin — who sadly passed away earlier this year — said in the “Real World of Technology” (a lecture she gave in 1989, also a book)…

“Technology is not the sum of the artifacts, of wheels and gears and rails and electronic transmitter. For me technology is a system. It entails far more than the individual material components. Technology involves organization, procedure, symbols, new words, equations, and most of all it involves a mindset…” — Ursula Franklin (1921–2016), “The Real World of Technology”, 1989
“While we should not forget that these prescriptive technologies are exceedingly effective and efficient, they come with an enormous social mortgage. The mortgage means that we live in a culture of compliance, that we are ever more conditioned to accept orthodoxy as normal, and to accept that there is only one way of doing ‘it’.” — Ursula Franklin (1921–2016), “The Real World of Technology”, 1989
“The acculturation to compliance and conformity has, in turn, accelerated the use of prescriptive technologies in administrative, government, and social services. The same development has diminished resistance to the programming of people.” — Ursula Franklin (1921–2016), “The Real World of Technology”, 1989

And through centuries of subtle social programming; with the clever incremental designs of manufacturing and industrial processes; the hierarchies imposed through the development and applications of new technologies; the effects of these emerging social, political and economic structures on our values; we become accustomed to accepting certain things as ‘normal’. They become so normal, that we can’t even fathom questioning their normality. Like the neoliberal agenda of what we call the ‘tech industries’, and their ability to evade social responsibility in the name of ‘innovation’ and ‘economic growth’. Or — perhaps even worse — when they do take on social responsibility, it’s through the incredibly narrow lens of a young-to-middle-aged rich white boy.

(I cannot recommend The Real World of Technology enough. Franklin is also a champion of diversity in science and engineering, which I’m going to come to shortly).

Furthermore, both a consequence of and a reason for this is expressed in this incredibly prophetic statement by H.G. Wells, from 1903…

“The time may not be very remote when it will be understood that for complete initiation as an efficient citizen of one of the new great complex world wide states that are now developing, it is as necessary to be able to compute, to think in averages and maxima and minima, as it is now to be able to read and to write.” — H.G. Wells (1866–1946), “Mankind in the Making”, 1903

He’s imagining a world where being literate in just reading and writing isn’t enough; a world so dominated by these new paradigms, involving huge amounts of complex data, that we will need to be able to think in terms of computation (not necessarily modern-day computers, which didn’t exist back then, but that form of mathematical, computational thinking). And of course by ‘averages, maxima, minima’ he’s referring to statistical thinking: dealing with big data.

How do we understand and interact with a world full of machines and big data? To quote me again …

Empathising with machines through the shared language of maths.

In a world of big data, without a certain level of computational or statistical literacy, we are just tourists. And we end up with situations like this…

Charles Babbage (1791–1871)

…perhaps one of my favourite quotes of all time, on the gap between science and technology on one side, and general public understanding on the other.

Charles Babbage, called by some “the inventor of computers” (there are many), designed mechanical computers in the 1800s. Most famously the Difference Engine (which was a polynomial calculator), and the Analytical Engine shown here (which was a general purpose computer). He didn’t actually build them, as he always ran out of funding (there were no world wars which relied on these machines at the time), but he did design them. And he said…
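
For the curious, here is a minimal sketch of the principle the Difference Engine mechanised (my own illustration in modern code, obviously not Babbage’s design): once the finite differences of a polynomial have been set up, every further value of the table can be produced using nothing but addition.

```python
def difference_engine(poly, x0, count):
    """Tabulate a polynomial (coefficients, lowest power first) at
    x0, x0+1, x0+2, ... using only additions after the setup step."""
    d = len(poly) - 1
    f = lambda x: sum(c * x ** k for k, c in enumerate(poly))

    # Seed the difference table from the first d+1 values of f.
    col = [f(x0 + i) for i in range(d + 1)]
    diffs = []
    while col:
        diffs.append(col[0])
        col = [b - a for a, b in zip(col, col[1:])]

    out = []
    for _ in range(count):
        out.append(diffs[0])
        # "Turn the crank": each register adds the one above it.
        for i in range(d):
            diffs[i] += diffs[i + 1]
    return out

# f(x) = 2x^2 + 3x + 1, tabulated from x = 0:
print(difference_engine([1, 3, 2], 0, 6))  # [1, 6, 15, 28, 45, 66]
```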

“On two occasions I have been asked [by members of Parliament], ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.” — Charles Babbage (1791–1871), “Passages from the Life of a Philosopher”, 1864

And believe it or not, that’s exactly what we’re doing today.

We’re putting in wrong figures, and we’re hoping to get the right frigging answers.

Machines as Innovators

That was a very brief journey through time, looking at some historic views on statistics, and the responsibilities of the statistician. I’d now like to change direction a bit, and this provides a nice link to the next section. Because Babbage wasn’t alone on this particular journey.

Ada Lovelace (1815–1852)

Lady Ada Lovelace, mathematician and theoretical computer programmer, was only 17 when she met Babbage, who was 42 at the time; despite their age gap, she was his intellectual collaborator and peer. It’s through her notes that we know so much about these machines. And in her notes is also what is considered to be the very first complex computer program, which you can see here: a program to calculate Bernoulli numbers.
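
For a sense of the task her program tackled, here is a minimal modern sketch that computes Bernoulli numbers exactly (it uses a standard recurrence, not Lovelace’s actual step-by-step program from Note G).

```python
# Bernoulli numbers via a standard recurrence (convention B_1 = -1/2).
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n as exact fractions."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(Fraction(-1, m + 1) * s)
    return B

for i, b in enumerate(bernoulli(8)):
    print(f"B_{i} = {b}")
# B_0 = 1, B_1 = -1/2, B_2 = 1/6, B_3 = 0, B_4 = -1/30, ...
```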

Also in her notes are some beautiful comments with incredible foresight, centuries of foresight, perhaps even foreshadowing the computational, generative art movement of the past few decades.

“The Analytical Engine weaves algebraic patterns just as the Jacquard loom weaves flowers and leaves” — Ada Lovelace (1815–1852), “Notes (on The Analytical Engine)”, 1843

One of her most profound insights was that, while Babbage was mostly interested in designing a machine that could calculate anything, i.e. work on numbers, she saw the potential of this machine to go beyond that and operate on symbols: to do true general purpose computing.

But even beyond that she went on to say…

“… the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.” — Ada Lovelace (1815–1852), “Notes (on The Analytical Engine)”, 1843

Also amongst her notes, is this very famous, oft-quoted, and somewhat controversial statement…

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis, but it has no power of anticipating any analytical revelations or truths. Its province is to assist us in making available what we are already acquainted with.” — Ada Lovelace (1815–1852), “Notes (on The Analytical Engine)”, 1843

And almost two centuries later, we are still grappling with this statement, and still trying to understand our relationship with the ‘creative’ machine. Ever since, researchers have been trying to prove her wrong: researchers of so-called ‘Strong AI’, who are trying to build ‘truly intelligent’ machines, and researchers of so-called ‘Computational Creativity’, who are trying to build ‘truly creative’ machines. ‘Intelligence’ and ‘creativity’: two vague concepts that are often intertwined and lack clear, universally accepted definitions. I’m not going to attempt to define them today.

A problem with the field

But here we encounter an interesting problem in these fields. Throughout the history of AI, there has been a pattern. A problem or task is presented as the epitome of intelligence, as something that only a human-level intelligence could accomplish. And if and only if a machine could perform that task, then it would be considered truly intelligent.

Once upon a time this task was calculations. Simple arithmetic was thought to be something that only a human could do. That was proven wrong centuries ago with mechanical calculators. Then during the birth of AI in the mid 20th century with digital computers, it was mathematical proofs. Surely a machine could not perform the required logical reasoning to prove a mathematical theorem? Surely that was something only a human could do?

Proof from “Principia Mathematica” by Alfred North Whitehead and Bertrand Russell, 1910

Turns out it’s not that hard (if the problem is confined to a limited space). By the mid 1950s, ‘The Logic Theorist’ software by Allen Newell, Herbert A. Simon and Cliff Shaw was proving theorems. Some people thought AI was done, mission accomplished: within a few years we’d have full human-level intelligence. Others quickly realised that this wasn’t really ‘intelligence’; it was just some code that some guys wrote, searching through possible solutions with some basic rules of thumb — or “heuristics” as they’re called — chucked in there.

“Chess, now that’s a true challenge. If only a computer could beat a human expert at Chess”, it was proposed, “then it could be considered intelligent”.

IBM’s Deep Blue vs Garry Kasparov, 1997

But in 1997 when IBM’s Deep Blue beat chess Grandmaster and World Champion Garry Kasparov, many were quick to point out that Deep Blue still wasn’t really ‘intelligent’. It was just a really frigging fast computer. It was just brute force power trying all of the moves to see what works (again with a bit of heuristics). That didn’t count. Not intelligence.
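
For the curious, the ‘brute force plus a bit of heuristics’ idea boils down to something like the following minimax sketch: search every line of play to a fixed depth, then fall back on a crude hand-written evaluation. This is the general technique, not Deep Blue’s actual, massively engineered implementation.

```python
def minimax(state, depth, maximising, moves, play, evaluate):
    """Plain fixed-depth game-tree search with a heuristic at the leaves."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)   # heuristic guess of "who's winning"
    scores = (minimax(play(state, m), depth - 1, not maximising,
                      moves, play, evaluate) for m in options)
    return max(scores) if maximising else min(scores)

# Toy usage (a made-up game): players alternately add 1, 2 or 3 to a
# running total; the maximiser wants the total high, the minimiser low.
print(minimax(0, 4, True,
              moves=lambda s: [1, 2, 3],
              play=lambda s, m: s + m,
              evaluate=lambda s: s))   # -> 8
```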

Now Go, on the other hand, is a game where having brute force power just isn’t enough; there are more possible board states than atoms in the universe (as everyone likes to say). Go is a game that requires intuition, and a level of planning and gut feel combined with intelligence, that only a human — or human-level intelligence — could have.

IBM’s Deep Blue vs Garry Kasparov, 1997, Google DeepMind’s AlphaGo vs Lee Sedol, 2016

But when Google DeepMind’s AlphaGo beat Fan Hui in 2015 and Lee Sedol in 2016… well, it’s still not really ‘intelligence’, is it? It’s just some fancy statistics applied to a search tree, with some pattern recognition algorithms trying to predict optimal moves and evaluate who’s winning just by looking at the board. Still doesn’t count as ‘intelligence’. And when some people refer to AlphaGo as ‘creative’, others are quick to shoot that down, citing sentiment similar to Lovelace’s.
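
The ‘fancy statistics applied to a search tree’ can be glimpsed in the move-selection rule described in the AlphaGo paper (a PUCT-style score); the sketch below uses invented numbers just to show the trade-off between simulation results and the policy network’s prior.

```python
# PUCT-style move selection: balance the average result of previous
# simulations (q) against a policy-network prior and visit counts.
# The candidate moves and numbers below are made up for illustration.
from math import sqrt

def puct_score(q, prior, visits, parent_visits, c_puct=1.5):
    return q + c_puct * prior * sqrt(parent_visits) / (1 + visits)

# Each move: (average value so far, prior probability, visit count)
candidates = {"A": (0.52, 0.40, 120), "B": (0.55, 0.05, 40), "C": (0.00, 0.30, 0)}
parent_visits = sum(v for _, _, v in candidates.values())

best = max(candidates, key=lambda m: puct_score(*candidates[m], parent_visits))
print(best)  # the unvisited but promising move "C" gets explored next
```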

So there is a bit of a moving-goalposts problem in AI. Trying to pin down what exactly ‘intelligence’ entails is very tricky. This is articulated very nicely by cognitive scientist Douglas Hofstadter in his seminal book “Gödel, Escher, Bach: An Eternal Golden Braid” from 1979…

“Sometimes it seems as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.” — Douglas R. Hofstadter, “Gödel, Escher, Bach: An Eternal Golden Braid”, 1979

Or to quote me again…

If you know how it works, it ain’t intelligence.

I find this very fascinating. Knowing the algorithm that computes an output, seems to destroy all sense of intelligence and creativity. I wonder, if we ever do figure out exactly how the human brain works, how human intelligence and creativity functions, will we cease to see ourselves as intelligent or creative as well?

Of course I don’t know how likely that is, because…

“If the human brain were so simple that we could understand it, we would be so simple, that we couldn’t.”

— Emerson M. Pugh (c. 1938), as quoted in George E. Pugh, “The Biological Origin of Human Values”, 1977

Is ‘Learning’ the answer?

But I’d like to return to Lovelace, and her somewhat controversial statement…

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis, but it has no power of anticipating any analytical revelations or truths. Its province is to assist us in making available what we are already acquainted with.” — Ada Lovelace (1815–1852)

In his seminal 1950 essay, “Computing Machinery and Intelligence”, another “Pioneer of Computers” (there are many), Alan Turing, addresses this statement. He starts the essay with the question “Can machines think?”.

“Can machines think?” vs “Lady Lovelace’s Objection” — Alan Turing (1912–1954), “Computing Machinery and Intelligence”, 1950

Seven decades, zillions of debates, research papers, PhDs and many dead-ends later, we still don’t have a concrete answer. But in that essay, Turing refers to Lovelace’s claim that ‘The Analytical Engine has no pretensions to originate anything’, as “Lady Lovelace’s Objection”.

To cut a long story short, he proposes that in order to be considered to ‘originate’ anything, a machine should be able to surprise people, even its creator (i.e. the programmer). His main proposition — echoing his contemporary Douglas Hartree — was that in order for a machine to really create something original and surprise even its creator, it should have a property which would not have been available to Lovelace or Babbage. And that property, he concluded, was the ability to learn. Because two years prior, in a report called “Intelligent Machinery”, he had already theorised something that neither Babbage nor Lovelace had thought of: machines that can learn, like a young child learning and developing into the mind of an adult. He called these ‘unorganised machines’, loosely inspired by the neurons in the brain.

“Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain.”

— Alan Turing, “Computing Machinery And Intelligence”, 1950

And he adds…

“Machines take me by surprise with great frequency.” — Alan Turing (1912–1954), “Computing Machinery and Intelligence”, 1950

“An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside.” — Alan Turing (1912–1954), “Computing Machinery and Intelligence”, 1950

And this is precisely where the biggest dangers of using machine learning algorithms potentially lie, especially with regard to algorithmic decision making in critical situations. It’s partly why we’re having the problems we have today: we’re unable to predict how the trained algorithms might behave.

But simultaneously, this is also the biggest advantage of machine learning systems. This is how machines can potentially help propel us to new places, to make us see things that we otherwise wouldn’t be able to see.

Like finding a tiny perturbation at around 126 GeV amongst petabytes of data, to confirm the Higgs Boson.

LIGO Update on the Search for Gravitational Waves

Or isolating a chirp lasting a fraction of a second, amongst years of deafening background noise, identifying the remnants of gravitational waves emitted from two black holes colliding over a billion years ago.

And who knows, maybe one day in the near future, helping us find a cure for Leukemia or Alzheimer’s. Or maybe crazier things that I can’t even begin to fathom, like building bio-cars that run directly off photosynthesis. Or even genetically modifying ourselves to photosynthesize. How insanely amazing would that be!? (Assuming it works as planned, of course.)

“An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside.” — Alan Turing (1912–1954), “Computing Machinery and Intelligence”, 1950

Who is building this? Who is shaping the future?

But the most important question is: are the people who are developing and deploying these technologies able to foresee the wider social impact of what they’re doing? Even if they can foresee it, is it aligned with your best interests? Are the decisions they’re making in line with a direction that you approve of or desire? What kind of values do they have? Do they represent you, and your well-being?

This is why it’s absolutely essential that these teams have the diversity to represent as wide a range as possible of professional and personal experiences, perspectives, values, knowledge and opinions. Everybody’s voice is crucial in steering this, so that progress is made not only in directions that benefit a few (often at a huge cost to others) while propagating a culture of compliance, but in directions that benefit us all.
