Can a machine understand?
In a discussion about the immaterial nature of mind, an atheist asks: "Can an omnipotent God create a computer that has beliefs and knowledge?"
A theist replies:
What he was attempting to do is the moral-and-intellectual equivalent of "arguing" thusly: "Can 'God' create a square circle? No? I thought not. Thus, 'God' is not 'omnipotent'; thus, 'God' does not exist" -- which, as anyone can see, is really shitty "reasoning". But, you know, that's par for the course for 'atheists'.

But this is a serious question, and the theist, assuming that a thinking machine isn't even a logical possibility, misses the point entirely. There is no logical absurdity here. The atheist is not trying to prove the non-omnipotence of God - he's attempting to demonstrate that a thinking machine is at least conceivable.
What is a brain? It's a type of adaptive processing system that we call a neural network. This is significantly different from the von Neumann processing architecture of the computers most of us are familiar with. One big difference is that a neural network is not programmed with a sequence of instructions. Instead, it is "trained": a feedback mechanism adjusts its connections until its responses to a particular stimulus approximate the optimal outcome.
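The idea of training by feedback can be made concrete with a toy example. The sketch below is not a model of a real brain - it's the classic perceptron rule applied to a single artificial neuron, which learns the logical AND function by nudging its weights whenever its output disagrees with the desired result. All names and numbers here are illustrative.

```python
# One artificial neuron learns logical AND by error feedback:
# when the output is wrong, push each weight toward the target.

def step(x):
    return 1 if x > 0 else 0

# training data: (inputs, desired output) for logical AND
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights
b = 0.0          # bias
rate = 0.1       # learning rate

for _ in range(20):                 # repeated exposure to the stimuli
    for (x1, x2), target in samples:
        out = step(w[0] * x1 + w[1] * x2 + b)
        error = target - out        # feedback: how wrong were we?
        w[0] += rate * error * x1   # adjust weights toward the target
        w[1] += rate * error * x2
        b += rate * error

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in samples])
# -> [0, 0, 0, 1]: the trained neuron now reproduces AND
```

No line of this program spells out what AND is; the correct behavior emerges from the feedback loop alone. That is the sense in which a neural network is trained rather than programmed.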
Man-made neural networks have been designed (or simulated in a computer) to mimic the processing function of a brain. However, they are much more limited than a human brain: the brain has about 100 billion neurons and trillions of connections, well beyond the number of neurons and connections our current technology can feasibly build or simulate. Still, these systems have shown remarkable capabilities in tasks such as classification and modeling. As technology allows larger and more sophisticated neural network systems, we can expect to see greater functional capabilities. Does this mean that we can eventually expect to see human-like thinking in a machine? I wouldn't rule it out.
But the theist still has his ace-in-the-hole: intentionality. Thoughts are about something. "Aboutness", they say, is an immaterial connection between things that cannot be achieved by any purely material object. Humans understand what they are doing, while computers can only follow a program. Machines can never have thoughts with aboutness, and they can never understand.
Not so fast. How does a brain understand things? By establishing associative connections. Not connections from a thought to an external object, but connections between neurons. Consider the untrained brain of an infant. The infant has sensory input, say the image of his mother's face. Every time he sees that image, it stimulates a certain cluster of neurons in his brain. Other images stimulate other neurons. He hears sounds, which also stimulate neurons. But when a certain image and a certain sound, say his mother's voice, happen together, an associative connection is formed. The neurons are linked together. Eventually, he hears a particular sound pattern, "ma", which also becomes associated with his mother. Over time, he learns more, and builds a whole library of associative connections between various groups of neurons. These associations constitute what we call "meaning".
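This is essentially the Hebbian picture - "neurons that fire together wire together" - and it can be sketched in a few lines. The sketch below is a drastic simplification, with named units standing in for whole clusters of neurons and made-up numbers for connection strengths: repeated co-occurrence of two stimuli strengthens the link between them, so that later one stimulus alone activates the other.

```python
# Toy Hebbian association: co-occurring stimuli strengthen their link.

weights = {}   # connection strength between ordered pairs of units

def experience(active_units, rate=0.25):
    """Strengthen links between all units active at the same moment."""
    for a in active_units:
        for b in active_units:
            if a != b:
                weights[(a, b)] = weights.get((a, b), 0.0) + rate

def recall(unit, threshold=0.5):
    """Units whose link to the cue is strong enough become active too."""
    return sorted(b for (a, b), w in weights.items()
                  if a == unit and w >= threshold)

# the infant repeatedly sees the face and hears the voice together
for _ in range(3):
    experience({"mothers_face", "mothers_voice"})
experience({"mothers_voice", "sound_ma"})   # "ma" heard with the voice, once

print(recall("mothers_face"))   # ['mothers_voice']: the link (0.75) is strong
print(recall("sound_ma"))       # []: one exposure (0.25) is below threshold
```

Nothing here refers outward to the actual mother; "meaning" in this toy model is nothing over and above the pattern of link strengths.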
As for "aboutness", there's nothing magic going on. It's purely physical. When we see or hear or read something, it makes a persistent impression in the brain. This memory may be associatively connected to other things. When we have a thought, the brain invokes various memory impressions and conjures an image or model of the thing the thought is about. Let's say I see my dog, and it creates an impression in my brain. That is associated with all the things I know about my dog. When I think about my dog, there is no real connection between my thought and the dog, but there is a real physical connection between the thought happening in some part of my brain and a whole network of neurons that are associated with the concept of "my dog". All of those associated things are available for me to include in my thought - his name, his appearance, his behavior, and so on. But if the dog breaks his claw without my knowledge, that's not part of my thought, because there's no impression of it in my brain. There's no connection to the actual dog. That's "aboutness". It's all in the brain. That's the basis of intentionality.
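The dog example can be put in code. The sketch below is illustrative only - the names, the dog, and the flat dictionary "memory" are all invented for the example - but it captures the point: a thought assembles only the impressions already stored, so a fact about the actual dog that left no impression simply cannot appear in the thought.

```python
# A thought about "my dog" can only draw on stored associations,
# never on the dog itself.

memory = {
    "my_dog": {"name": "Rex", "appearance": "brown, floppy ears",
               "behavior": "chases squirrels"},
}

# meanwhile, out in the world, something happens that leaves no impression:
actual_dog = {"name": "Rex", "appearance": "brown, floppy ears",
              "behavior": "chases squirrels", "broken_claw": True}

def think_about(concept):
    """A thought assembles only the stored associations for a concept."""
    return dict(memory.get(concept, {}))

thought = think_about("my_dog")
print("broken_claw" in thought)      # False: no impression, so not in the thought
print("broken_claw" in actual_dog)   # True: but true of the actual dog
```

The mismatch between `thought` and `actual_dog` is exactly the gap the essay describes: the "aboutness" of the thought is exhausted by what's in memory.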
I'm not a neuroscientist. This description is meant to be nothing more than a general conceptual framework for understanding how a machine (or a piece of meat) could think about things and understand them. I don't expect theists will buy any of this. They will no doubt stick with their homunculus.