Good question.
Obviously the Turing test doesn’t cut it, which I suspected even back then. And I’m sure that when we finally have a self-aware, conscious AI, it will be fiercely debated.
We may think we have it before it’s actually real; some people already claim that current systems display traits of consciousness. I don’t believe we’re even close yet, though.
As wrong as Descartes was about animals, he still nailed it with “I think therefore I am” (cogito, ergo sum) https://www.britannica.com/topic/cogito-ergo-sum.
Unfortunately that’s about as far as we can get before all sorts of problems arise regarding actual evidence. So philosophically, in principle, only the AI itself can know for sure whether it is truly conscious.
All I can say is that, at the level of intelligence current leading AIs have, they make silly mistakes that would seem obvious if they were really conscious.
For instance as strong as they seem analyzing logic problems, they fail to realize that 1+1=2 <=> 2=1+1.
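To be fair, the equivalence itself is trivial: it is nothing deeper than the symmetry of equality. A one-line sketch in Lean 4 (my own formalization, not anything an AI produced):

```lean
-- The biconditional follows from symmetry of equality alone:
-- Eq.symm turns a proof of a = b into a proof of b = a, in both directions.
example : (1 + 1 = 2) ↔ (2 = 1 + 1) :=
  Iff.intro Eq.symm Eq.symm
```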
Such things will of course be ironed out, and maybe this one already has been. But it shows that the current models aren’t capable of the basic comprehension I would expect to follow from consciousness.
Luckily there are people who know much more about this, and it will be interesting to hear what they have to say when the time arrives. 😀
The Turing test is misunderstood a lot. Here’s Wikipedia on the Turing test:

> [Turing] opens with the words: “I propose to consider the question, ‘Can machines think?’” Because “thinking” is difficult to define, Turing chooses to “replace the question by another, which is closely related to it and is expressed in relatively unambiguous words”. Turing describes the new form of the problem in terms of a three-person party game called the “imitation game”, in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing’s new question is: “Are there imaginable digital computers which would do well in the imitation game?”
One should bear in mind that scientific methodology was not very formalized at the time. Today, it is self-evident to any educated person that the “judges” would have to be blinded, which is the whole point of the text chat setup.
What has been called “Turing test” over the years is simultaneously easier and harder. Easier, because these tests usually involved only a chat without any predetermined task that requires thinking. It was possible to pass without having to think. But also harder, because thinking alone is not sufficient. One has to convince an interviewer that one is part of the in-group. It is the ultimate social game; indeed, often a party game (haha, I made a pun). Turing himself, of course, eventually lost such a game.
> All I can say is that, at the level of intelligence current leading AIs have, they make silly mistakes that would seem obvious if they were really conscious. For instance, as strong as they seem analyzing logic problems, they fail to realize that 1+1=2 <=> 2=1+1.
This connects consciousness to reasoning ability in some unclear way. The example seems unfortunate, since humans need training to understand it. Most people in developed countries would agree that the equivalence is formally correct, but very few would be able to prove it. Most wouldn’t even know how to spell Peano Axiom; nor would they even try (Oh, luckier bridge and rail!)
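To illustrate what a proof from the Peano axioms would even involve: here is a toy Peano-style construction in Lean 4 (my own sketch; the names `PNat` and `add` are made up for illustration):

```lean
-- A toy Peano-style type of natural numbers: zero, and a successor operation.
inductive PNat where
  | zero : PNat
  | succ : PNat → PNat

open PNat

-- Addition defined by recursion on the second argument, as in the Peano axioms:
-- a + 0 = a, and a + succ b = succ (a + b).
def add : PNat → PNat → PNat
  | a, zero   => a
  | a, succ b => succ (add a b)

-- "1 + 1 = 2": both sides compute to succ (succ zero), so rfl closes the goal.
example : add (succ zero) (succ zero) = succ (succ zero) := rfl
```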
> How do you operationally define consciousness?
Understanding what “I think, therefore I am” means already requires a very high level of consciousness.
At lower levels, things get more complicated to explain.