John, I think we need to view AI as an emergent property here. The notion that we can program a computer to be happy or sad is nonsensical, but I do know this much, mate: whenever we look at complex systems, whether they be organic or electronic, it is just a matter of complexity as to when those systems become sentient.
It hasn't happened yet in a digital sense, as far as we know, and that qualifier is quite important: if your research is being funded and a computer does become self-aware, the last thing the people in charge of that research are gonna do is announce it to everyone. Think about it, mate: those people would have, in effect, a device worth zillions.
Not in itself, but because of what it could produce.
And so, assuming for the time being it hasn't been achieved, I am proposing that computers will eventually experience emotions (or approximations of emotions) when their programming becomes sufficiently complex to provoke, as an emergent property, what we call consciousness.
Just how the computer will let us know it can think is another topic... it's intriguing stuff, because the psychology books will have to be rewritten to accommodate the 'nature' of a computer's personality. Mind-bending stuff indeed!
Is that all that's required, though?
Once a system becomes complex enough that it appears convincingly sentient, then it is?
To me that's mimicry.
The Turing test was designed to test exactly that (can a computer mimic a human within certain boundaries?).
Last time I checked, nobody had taken the prize.
The Turing test will be conquered, I'm sure.
But it still leaves a question in my mind. At what point does a good mimic become the real thing?
I'll use a much more mature and well-understood technology to try to clarify the point I made in the last post about the limitations of computers: the problem of representing analog values in digital systems.
An analog value such as 1/3 is very easy for us to comprehend.
But a computer has no concept of dividing something into thirds; it has to represent it digitally.
Say you decide a whole 1 is represented as 1024 in digital (10 bits). Then 1/3 becomes 341, about a third of a step (roughly 0.0003) out from the real value.
If we represent the whole 1 as 65536 (16 bits), then 1/3 is 21845, only about 0.000005 out from the real value.
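If you don't trust my arithmetic (fair enough), here's a quick sketch in Python that redoes the calculation at a few bit widths. It's just an illustration of the fixed-point idea above, not any particular system:

    from fractions import Fraction

    target = Fraction(1, 3)  # the exact value we are trying to store

    for bits in (10, 16, 32):
        scale = 2 ** bits                # what "a whole 1" becomes in digital
        digital = round(target * scale)  # nearest whole step, e.g. 341 at 10 bits
        error = abs(target - Fraction(digital, scale))
        print(f"{bits} bits: 1/3 -> {digital}, error = {float(error):.1e}")

At 10 bits the error comes out around 3e-4, at 16 bits around 5e-6; it keeps shrinking as you add bits, but it never reaches zero.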
The more resources you use, the lower the error, but it takes infinite computer resources to achieve zero error.
Ultimately you have to choose a level of approximation and pull strange tricks (like floating-point representation) to deal with something a person can do so very easily.
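And the floating-point trick doesn't escape the problem, it just hides it. A tiny Python sketch (Decimal is only used here to expose what the bits actually hold):

    from decimal import Decimal

    third = 1 / 3          # stored as a 64-bit IEEE 754 double
    print(third)           # 0.3333333333333333 -- the friendly rounded display
    print(Decimal(third))  # the exact stored value: close to 1/3, but not 1/3

The Decimal printout ends in a long tail of digits that aren't threes, because the double holds the nearest binary fraction to 1/3, not 1/3 itself.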
No matter what level of development, a digital computer will never be able to deal with analog values as accurately as we do.
My feeling is the same limitation may be found in AI systems.
The more I learn about computers, the more I become aware of their limitations: how they are impressively proficient at certain tasks and annoyingly useless at others.
PS Nobody check my maths