The funny thing about Artificial Intelligence is that the principle behind it is not that new. An airplane's autopilot, isn't that already a form of AI? It's old-school math, and even though we may think AI is already far along in its development, we're probably at the point where the Internet was in 1994.
Using neural networks, a computer is able to see the difference between dogs and cats in photos. Or at least, it recognizes the difference in pixels, which you could argue is a sign of intelligence, but it’s still not very intelligent.
There's also a very interesting difference between learning and instinct. An AI-based algorithm has to learn from scratch for every new situation. But a baby mountain goat knows how to stand on a steep ridge pretty much from the moment it is born. If it didn't, a lot of baby goats would have fallen off mountains and the entire species would have gone extinct long ago.
When you're writing an algorithm, you're writing it based on a lot of assumptions, conscious and subconscious. When we were working on Hitwizard, we wrote the algorithm based on what we thought would constitute a hit song. But music is subjective, and there is an almost infinite number of factors, even something as trivial as the weather. You will never be able to include every single factor, and you will always have some form of bias when you write it.
Those biases are the foundation of the algorithm, and it will learn based on those same biases. Those kids writing code for algorithms in Silicon Valley? They’re predominantly white, wealthy and highly educated, and whether they realize it or not, they see the world differently than other people.
Computer says no
We see AI as rational and impartial. So when we forget that our own assumptions and unconscious biases are the foundation of an AI algorithm, it can have lasting consequences. Take terrorist profiling, for example: once you fit the profile, the burden of proof is suddenly on you, because the computer says so, instead of the other way around, which is how it should be in a democracy.
The same thing applies to insurance. Say you're part of a certain age group and own a type of car associated with more accidents. You want to buy an insurance policy: computer says no. In many cases, we also don't really know the algorithm's decision process, only its outcome.
Ethics and rationality
When Google's self-driving car or a Tesla gets into an accident, it makes headlines all over the world. We think it's dangerous, but we don't look at the statistics. The number of car accidents caused by human drivers is far higher. If every car drove itself, there would be no traffic jams and almost no accidents, because those are the result of irrational human decisions.
But AI making decisions also raises an interesting ethical question. Say a kid suddenly crosses the street and the car has no room to maneuver without crashing. Either the person in the car dies, or the kid does. Who should live? And what if a doctor crosses the street and the person in the car is a marketer for Pepsi? Whose life is worth more? There's an infinite number of factors you can think of, because the more information you have, the more complex it gets. What are the parameters that decide who lives and who dies?
Quality, not quantity
I'm not religious, but I come from a religious family. When I told them I didn't believe in God, they told me that was because I hadn't read enough of the Bible. In other words, if I read more, I would have more information and believe in God. I read it cover to cover, over and over, and it didn't change my opinion.
It's a funny analogy, because when it comes to AI, people tend to believe the same thing: more data equals better results. One thing we learned with Hitwizard: that's not true. No matter how much more data we gave it, it still had an accuracy of about 60%. So it's not just a question of quantity; it's the quality of the data. Our conclusion: we don't know enough about what makes a hit song. We've put Hitwizard on ice for now. It was a really cool way to show what's possible with AI today, but also to learn about its current limits.
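The quantity-versus-quality point can be sketched in a few lines of toy code. This is not Hitwizard's code: the data, the 40% label noise and the simple threshold model are all invented for illustration. The idea is that when the outcome depends on factors the model can't observe (the weather, the mood of the listener), accuracy hits a ceiling that no amount of extra training data will lift.

```python
import random

random.seed(42)

def make_samples(n, noise=0.4):
    """Toy data: one feature really does predict the label,
    but a `noise` fraction of labels gets flipped, standing in
    for all the factors the model can never observe."""
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x > 0.5 else 0      # the observable signal
        if random.random() < noise:  # unobserved factors flip the outcome
            y = 1 - y
        data.append((x, y))
    return data

def train_threshold(train):
    """'Train' the model: find the cut-point that best separates labels."""
    best_t, best_acc = 0.5, 0.0
    for i in range(1, 100):
        t = i / 100
        acc = sum((x > t) == (y == 1) for x, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

test = make_samples(20_000)
for n in (100, 1_000, 10_000, 50_000):
    t = train_threshold(make_samples(n))
    acc = sum((x > t) == (y == 1) for x, y in test) / len(test)
    print(f"{n:>6} training samples -> {acc:.0%} test accuracy")
```

However much training data you feed it, the model stalls around 60%, because the remaining 40% of the outcome is driven by information that simply isn't in the data. Better data, not more data, is the only way past that ceiling.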
AI is the hot new thing right now, and in a sense also a hype. When you're working with AI, it's important to understand that it's not the answer to everything. It has the potential to do cool and great things, but it's way too soon to worry about AI taking control of our world.
illustrations by Rueben Millenaar