Simply put, the concept of artificial intelligence is strange in that it is counter-intuitive. I believe we have, in general, started to formulate certain ideas about what it means to be intelligent at the level of software and machines which are erroneous in their essence. This is because we get carried away by the term ‘intelligence’, which we normally conceptualise in its human form. It is the same type of mistake (albeit in a different way) that we make when trying to think about animal intelligence.
For example, when we consider ‘intelligent’ animals such as dolphins, monkeys or elephants, we characterise them as intelligent based on several parameters, a few of which are:
- Problem solving skills
- Awareness of self
- Ability to plan for the future (and perhaps to imagine)
- Level of communication skills (not language per se)
Based on the animals’ performance on such parameters we can label their level of intelligence. The mistake we make is when we start to formulate their intelligence in terms of human intelligence. The biggest problem is that animals do not have the ability to use language as humans do. This may seem a small matter at first, but when given proper thought it is probably the biggest difference there is. Without this ability they are likely unable to use symbolic and abstract systems of thought (which challenges any human-like ideas we form of their state of being, since we are unable to process our own thoughts without relying on these systems to begin with).
This is especially problematic when scientists at times infer that a bird’s ability to plan for the future in certain experiments demonstrates a level of imagination, because imagination as humans understand it is based on abstract and symbolic thought processes.
A discussion of three ideas regarding the concept of artificial intelligence:
1. Ability to solve problems and communicate with humans
Every computer, even a calculator, can solve simple mathematical problems that a human asks of it. Taking it further, it is possible to formulate other problems as mathematical ones and solve them easily enough as well. For example, if I want Google to tell me the best restaurant near me, it can formulate this using three predefined, human-assigned parameters:
- What ‘near me’ means (e.g. within 1 mile)
- Your geo-positioning (latitude and longitude)
- The user-generated numerical ratings of restaurants in the Google Maps database
Google can calculate an answer for me using these three numerical parameters, and can tweak the calculation in innumerable ways to give more meaningful information. Then, using variations of a generic sentence such as “the best restaurant near you is ‘X’, which is ‘Y’ miles away”, Google can replace ‘X’ with the name and ‘Y’ with the distance, and you get a sensible answer. Furthermore, Google might add text-to-speech software (a sub-field of artificial intelligence) and make the phone talk to you.
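The three-parameter scheme described above can be sketched as a toy ranking function. To be clear, this is a hypothetical illustration and not Google’s actual method: the restaurant records, the crude distance formula and the “highest rating wins” rule are all invented stand-ins.

```python
import math

# Hypothetical restaurant records: (name, latitude, longitude, user rating)
RESTAURANTS = [
    ("Luigi's", 51.5074, -0.1278, 4.6),
    ("Spice Hut", 51.5100, -0.1200, 4.8),
    ("Corner Cafe", 51.5300, -0.1500, 4.2),
]

def distance_miles(lat1, lon1, lat2, lon2):
    # Crude equirectangular approximation; adequate for "near me" distances.
    dlat = (lat2 - lat1) * 69.0  # roughly 69 miles per degree of latitude
    dlon = (lon2 - lon1) * 69.0 * math.cos(math.radians(lat1))
    return math.hypot(dlat, dlon)

def best_restaurant(user_lat, user_lon, near_me_miles=1.0):
    # Parameter 1: 'near me' threshold; parameter 2: user's geo-position;
    # parameter 3: the numerical ratings stored in the database.
    nearby = [
        (name, rating, distance_miles(user_lat, user_lon, lat, lon))
        for name, lat, lon, rating in RESTAURANTS
        if distance_miles(user_lat, user_lon, lat, lon) <= near_me_miles
    ]
    if not nearby:
        return "No restaurants found near you."
    name, rating, dist = max(nearby, key=lambda r: r[1])  # highest rating wins
    # The generic sentence with 'X' and 'Y' filled in:
    return f"The best restaurant near you is {name}, which is {dist:.1f} miles away."

print(best_restaurant(51.5080, -0.1250))
```

The point of the sketch is that every step is arithmetic and string substitution; the “sensible answer” emerges from filling slots in a template, not from any understanding of restaurants.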
If we conceptualise artificial intelligence in this sense then it seems sensible, as in essence it is just mathematical problem-solving software at a more complex level, tweaked with human-friendly attributes.
2. Machines can ‘think’ on a human level spectrum
By this we mean that artificially intelligent machines are able to ‘think’, and that this thinking sits at a certain level: currently at a cat’s or a baby’s level, eventually at a human level, or even at a super-human level. Conceptualising synthetic intelligence in this way seems mistaken for a number of reasons:
- Human thought is what we imply by the notion of ‘thinking’. As far as we know, and as I discussed above, this is only possible for humans, not even for animals. We are unable to forge the concept of thinking without the human reference for what thought is.
- A spectrum implies a quantitative difference between the various levels (e.g. cat, baby, human, super-human). This does not seem to be the case, as the difference between how cats are and how humans are appears qualitative. Much is still not understood about how babies develop adult-level thought, but one of the most recognised accounts is Jean Piaget’s theory of cognitive development, according to which cognitive development does not occur gradually but in sudden qualitative jumps.
- Super-human performance seems possible in many respects, such as speed of calculation, but it would again be wrong to place distinct super-human feats on a spectrum as if it were a continuum with humanness.
3. Artificial Intelligence can feel like a human does
This one goes a step further than the previous. As far as we understand, humans can solve problems and so can machines, so there is some similarity in that respect. It is a big stretch to think this automatically means machines can think, but at least there is something we can relate to.
As far as emotions and feelings go, there is not even a remote link we can draw to machines or software. We can have virtual characters that show emotions in video games, or even robots that seem to have emotional capabilities. This has, in fact, been programmed into them using predefined rules: they are designed to produce an image or animation which we then relate to an emotion we already recognise. The emotion itself exists only in our own (human) repository; the character is similar to a picture drawn on a piece of paper and has no emotions of its own. Emotions and feelings such as pain, happiness, worry and desire are phenomenological experiences, or qualia, which I feel cannot be understood by breaking them down; they are only uniquely understood through each person’s own subjective experience.
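The “predefined rules” point can be made concrete with a minimal sketch: a hypothetical game character whose “emotions” are nothing but a lookup table mapping events to canned animation names. The event and animation names below are invented for illustration; nothing in this code feels anything, it is string-in, string-out.

```python
# Hypothetical event -> animation lookup table for a game character.
# The "emotion" is just a label attached to a canned animation;
# the interpretation happens entirely in the human viewer.
EMOTION_RULES = {
    "player_wins": "smile_animation",
    "player_loses": "frown_animation",
    "loud_noise": "startled_animation",
}

def react(event):
    # Unknown events fall back to a neutral face: a fixed rule, not a feeling.
    return EMOTION_RULES.get(event, "neutral_animation")

print(react("player_wins"))  # smile_animation
print(react("rainy_day"))    # neutral_animation
```

However elaborate the rule table becomes, the character remains like the drawn picture in the paragraph above: a trigger for emotions that exist only in us.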
The issue with the concept of artificial intelligence is that if you look at the deepest level, most machine intelligences work on mathematical functions and logic circuits, and we can increase the quantity of such functions to immense levels (far surpassing human abilities to perform such tasks), but that does not change what they are, i.e. an increasingly bigger collection of logic gates. Logic and mathematics are only one aspect of human nature, and being able to emulate human-like characteristics such as speech or emotion does not imply humanness. In fact it is more a means of communicating with humans, because the interpretation of such speech or emotion occurs within the human, much like seeing an emotion-provoking picture (the picture here is the means, and is not itself able to feel emotion).
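The “bigger and bigger collection of logic gates” claim can be illustrated quite literally. The sketch below, a standard textbook construction, defines a single NAND gate and builds every other gate, and then a one-bit adder, by composing it. More gates yield more elaborate functions, but at every scale it remains the same kind of thing.

```python
def nand(a, b):
    # The only primitive: everything below is compositions of this one gate.
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    # A more capable-looking function is just more of the same gates.
    return xor(a, b), and_(a, b)  # (sum bit, carry bit)

print(half_adder(1, 1))  # (0, 1)
```

Quantitative growth in the gate count never, by itself, turns logic into anything other than logic, which is the qualitative point the paragraph above makes.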
Now, I don’t feel this is the whole story with respect to the concept of artificial intelligence, as there are interesting aspects such as language ability and consciousness to consider which deserve more focus, so I will attempt to discuss these later in this artificial intelligence series and then in the consciousness series.