Philosophy of AI
what is intelligence?
- classically, you test this using the Turing Test
- interrogation game
- interrogate two parties (one human, one computer); both try to convince the interrogator that they are the human
- if the interrogator cannot reliably tell which one is the human, the computer is deemed intelligent
- the objections:
- the test is subjective
- why base intelligence on human intelligence? metaphor with flight: we only managed to get off the ground once we stopped imitating natural flight (flapping wings)
intelligence is everything a computer can’t do yet.
can a computer be intelligent?
- substitution argument: yes. if you replace the neurons in a human brain with computer chips one at a time, you eventually end up with a computer, without your consciousness or thought process changing at any point.
- medium argument: no. there is something special about the biological medium that lets us do things computers can’t; critics dismiss this as “carbon chauvinism”.
- formal systems argument: no. by Gödel’s incompleteness theorems, formal mathematical systems are inherently limited in some way; since computers are formal systems, they inherit those limitations. we are not formal systems (that’s debatable), so we do not share those limitations.
- symbol-grounding problem: reasoning systems only manipulate symbols
- symbols only refer to other symbols, so how can a computer ever know what “red”, “heavy”, or “sad” mean in the ‘real’ world?
- so simulated intelligence ≠ real intelligence
- thought experiment – the Chinese Room (Searle):
- Chinese symbols come into a room
- one person inside uses a rule book to map the incoming symbols to outgoing symbols, producing sensible Chinese replies
- the person doesn’t understand Chinese, and neither does the book: nothing in this system understands Chinese
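The point of the thought experiment can be sketched as a toy program (all rules and symbols here are invented for illustration): the system maps input symbols to output symbols purely by their shape, and nowhere is any meaning attached to them.

```python
# Toy "Chinese Room": the rule book is just a lookup table from input
# symbols to output symbols. The program follows the rules purely
# syntactically; no component of the system attaches meaning to them.
RULE_BOOK = {
    "你好": "你好！",        # invented example rules
    "你会说中文吗": "会。",
}

def chinese_room(symbols: str) -> str:
    """Return the rule book's reply for the input, or a fallback symbol."""
    return RULE_BOOK.get(symbols, "？")

print(chinese_room("你好"))  # from outside, the replies look competent
```

From the outside, the replies may look like understanding; inside, there is only symbol shuffling.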
Mind-body problem:
- we have a physical body and non-physical (mental) thoughts
- what could the relationship between the physical and the mental be?
- opinions:
- mind-body dualism, interactionism: we consist of two parts (physical and metaphysical) – Descartes
- materialism: the mind and body are one thing (the mind is just brain activity)
- gradualism: we evolved the mind (intelligence, consciousness) over time
Intentional stance:
- intelligence/consciousness is “attributed” and “gradual”
- so the question isn’t “will computers ever be conscious?”, but rather “will we ever use consciousness-related words to describe them?”
- if it’s useful to talk about consciousness, motivation, feeling, etc., then we are allowed to (or should) do so equally for both humans and machines
- people have a strong tendency to take the intentional stance, so we will call our computers “intelligent”
Free will:
- reasons why it can’t be true:
- physics is deterministic: given the current state, the next states are fixed, so your (physical) brain leaves no room for free will
- it is inconsistent with psychology and neuroscience – motor areas become active up to ~2 seconds before we are aware of deciding to act ([[https://www.youtube.com/watch?v=IQ4nwTTmcgs|Libet’s experiment]])