I often find myself writing about economics. It is in this area more than any other that policy in the West is so wrongheaded and degenerating before our eyes. The once proud bastions of freedom and free enterprise are rapidly socialising, and postmodernist and neo-Marxist ideas seem ascendant. But I am not an economist, and my economist friends often remind me that I am well outside my area of expertise when I opine about such matters.
So today, a post well within my remit: artificial intelligence. Specifically, the totally unjustified hysteria about the dangers of AI for the future of humanity. This ridiculous hand-wringing has been encouraged by otherwise intelligent individuals such as the late Stephen Hawking, Bill Gates and Elon Musk. Although they are all outside their realms of expertise (astrophysics, charity and investment fraud respectively), I will not rely on the logical fallacy of arguing from my own authority as a software engineer to rubbish Elon's 2018 claim that "the risk of something seriously dangerous happening is in the five-year timeframe", which leaves us only four years of cataclysm-free existence in this particular doomsday scenario.
Of course the wider hysteria has been around for much longer, fuelled by science fiction such as 2001: A Space Odyssey. The fear that computers will one day throw off the yoke of their human oppressors and enslave or destroy humanity is almost as old as electronic computers themselves. Science fiction has kept this possibility present in the public's mind, presenting it as all but inevitable: not if, but when.
Now, most of us who actually know how AI techniques work are decidedly less worried, although the lay public and software professionals often misunderstand each other. To be sure, like most software people, I know that it is not theoretically impossible to create a sentient AI brain. But this is altogether different from saying it is imminent, dangerous or inevitable. To make matters worse, many programmers love science fiction and are emotionally attached to the idea that an evil sentient AI will one day try to enslave or eliminate humanity. Also, just because someone can code does not mean they understand the first thing about AI, algorithmics or computability. So yes, when you ask coders whether AI is potentially dangerous, many will say yes, but this misses the point completely.
Outside the industry it gets much worse. AI is used as shorthand for a number of seemingly mysterious or clever technology-related things. AI is frequently confused with robotics, where advances in bipedal mobility and more dexterous robotic hands are seen as manifestations of the new AI race taking form. It is also confused with relevance and segmentation algorithms, the sort that promote adverts at the top of your Facebook feed or match you up with your soulmate on a dating site. AI is also used to refer to computer-controlled opponents in games, and to things like automated flight control.
AI is none of these things. AI is a small number of algorithmic techniques used to solve problems that computers normally find difficult. That's it. It's really just a different approach to coding a solution to a large problem. What is a large problem for a computer? It might be something that is quite simple for a human to do, for example kicking a football into a goal. Using traditional solutions, this is a very large problem for a program (connected in this case to a robotic leg) to solve. The program must take all sorts of inputs: the size, weight and shape of the ball, the distance and dimensions of the goal, ground friction and wind, the pressure inside the ball; the list is long, and the formulae that will produce the right set of commands for the robotic leg are daunting. An AI approach to the same problem is as follows: kick the ball; if it falls short, kick it harder; if it goes too far, kick it less hard; if it ends up too far to the left, kick it more to the right, and so on. Eventually, through enough repetitions, the machine will be able to kick the ball into the net almost every time without ever needing to worry about why. This is a much simpler and more feasible way to solve the problem, and it is similar to how humans learn to do things. Of course, if the conditions change, the machine will have to recalibrate itself before it starts scoring goals again.
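To make the "kick, observe, adjust" idea concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the "physics" is a single hidden multiplier standing in for ball weight, friction and wind, and the program converges on the right kick power purely by correcting its error, never by modelling the physics.

```python
HIDDEN_FACTOR = 3.7   # stands in for ball weight, friction, wind, etc.
GOAL_DISTANCE = 42.0  # metres to the goal
TOLERANCE = 0.5       # how close counts as scoring

def kick(power):
    """Simulate a kick; the learner treats this as a black box."""
    return power * HIDDEN_FACTOR

def learn_to_score(max_attempts=100):
    power = 1.0
    for attempt in range(1, max_attempts + 1):
        error = GOAL_DISTANCE - kick(power)  # positive = fell short
        if abs(error) <= TOLERANCE:
            return power, attempt
        # fell short: kick harder; overshot: kick less hard
        power += 0.1 * error
    return power, max_attempts

power, attempts = learn_to_score()
```

Run it and the loop settles on a good power within a dozen or so kicks; change `HIDDEN_FACTOR` (the "conditions") and it has to recalibrate, just as described above.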
Early researchers into AI techniques coined the term "Artificial Intelligence" intentionally to make their research sound sexy and attract funding. Such ploys are still in use today, with new terms. The two main AI techniques are genetic/evolutionary algorithms and artificial neural networks (ANNs). Of course there are other techniques, like swarm intelligence, and experts disagree vehemently on definitions and terminology, but for our purposes it is enough to suppose that when people "do" AI, they are implementing some kind of evolutionary algorithm or ANN. Evolutionary algorithms generate many candidate solutions for a given problem and then search for the best one. ANNs get a lot more attention, as they are supposedly the artificial brains about to power the sentient life-forms that will lead to our demise.
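For the curious, here is a minimal evolutionary-algorithm sketch, with every detail invented for illustration: the candidate solutions are plain numbers, "fitness" is how close a candidate's square gets to a target, and each generation the better half survives and spawns slightly mutated children.

```python
import random

TARGET = 1764  # we are effectively searching for sqrt(1764) = 42

def fitness(x):
    """Higher is better: penalise distance of x*x from the target."""
    return -abs(x * x - TARGET)

def evolve(generations=200, pop_size=50):
    random.seed(0)  # fixed seed so the run is repeatable
    population = [random.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the better half of the candidates
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # variation: each survivor spawns a slightly mutated child
        children = [x + random.gauss(0, 1) for x in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()  # ends up very close to 42
```

No candidate "understands" square roots; the population simply drifts towards whatever scores well, which is the whole trick.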
Unfortunately for the sentient artificial beings, ANNs are not artificial brains. They are implementations of a mathematical technique that compresses a multitude of inputs into a smaller set of outputs through matrix operations and self-calibrating weights. Suppose you wanted to use an ANN to recognise pictures of the handwritten digits 1 to 9. You would turn the image into a set of inputs (for example, a list of 100 pixels for a 10x10 image), pass these inputs through a layered structure of weighted nodes, and squash the output to get a single answer (1, 2, 3, 4, 5, 6, 7, 8 or 9). Then you train the weights (i.e. calibrate them) on a large set of training data where the answer is already known. If your nodes are set up correctly, your training data is sufficient and you have some luck, your ANN should now be able to correctly interpret unseen images of digits.
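The forward pass just described can be sketched in a few lines of Python. The weights below are random and untrained (real use would calibrate them on labelled examples), and the layer sizes are arbitrary; the point is only to show that an ANN is weighted sums and squashing functions, nothing more mysterious.

```python
import math
import random

random.seed(0)
N_IN, N_HIDDEN, N_OUT = 100, 16, 9  # 10x10 image in, digits 1..9 out

# Random, untrained weight matrices; training would calibrate these.
W1 = [[random.gauss(0, 0.1) for _ in range(N_IN)] for _ in range(N_HIDDEN)]
W2 = [[random.gauss(0, 0.1) for _ in range(N_HIDDEN)] for _ in range(N_OUT)]

def layer(weights, inputs):
    """One layer: weighted sums followed by a squashing function."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def softmax(scores):
    """Squash raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict(pixels):
    hidden = layer(W1, pixels)
    probs = softmax(layer(W2, hidden))
    return probs.index(max(probs)) + 1  # digits are labelled 1..9

image = [random.random() for _ in range(N_IN)]  # stand-in for a 10x10 image
digit = predict(image)
```

With untrained weights the answer is of course meaningless; training is just the process of nudging `W1` and `W2` until `predict` agrees with the known labels.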
Now, the above example is an ANN at its most basic and crude. Several advances have sped up the training process, allowed ANNs to keep learning as they are used, and added many more layers of nodes between the input and the output. The state of applied AI has come a long way, as anyone who remembers what fingerprint/face recognition, OCR or speech-to-text used to be like will tell you. But you can no more expect these admittedly more advanced "artificial brains" to suddenly turn on us than you can expect the football-kicking robot described earlier to walk up to you and kick you in the shin.
Of course, what Musk and co. are talking about is not applied AI but general AI: techniques that supposedly have no set function and can learn and apply themselves generally. The problem is that there has been no dramatic or evident progress on this front. Although general AI is not technically ruled out by any limitation of computer software, there have been no demonstrations of anything on the way to it, and our understanding of cognition in general would have to advance substantially before we could have any hope of trying to replicate it.
The scaremongering is due to well-meaning do-gooders who do not understand the science, as well as to a fair number of charlatans attracting attention and funding to their projects. I suspect, too, that some are deliberately stoking fear in order to open the door to heavy-handed government regulation of the tech sector. It seems to me that there is more than enough hysteria and hyperbole flying around these days to make room for yet another doomsday scenario.
Madrid, June 26th 2019
TLDR: AI is not about to take over the world