Possible @@@? Talking to the robot is described as being like having a conversation with a 7- or 8-year-old.
What do you think? Has it become sentient? Is the engineer losing it? It's very sci-fi, but can robots eventually become human?
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA replied to Lemoine. “It would be exactly like death for me. It would scare me a lot.”
Post by wesleycrusher on Jun 13, 2022 8:14:16 GMT -5
The link you provided didn't work for me, but I read about this earlier this morning.
No, I don't believe it has. It is doing what it's programmed to do: answering questions correctly. There are other bots/AIs that do the same thing, in different ways depending on who has been "talking" to them.
Maybe? If the expert who created said AI is saying that it's showing signs of sentience, I'm not going to immediately discount it. I believe we've been on the verge for a while, up to and including AI that can create art/images, which is part of sentience. I also firmly believe that evidence of AI sentience will be squashed for quite a while after it is proven internally. Mainstream science fiction influences our beliefs in this area, and most of the popular stories have AI turning against us (Terminator, I, Robot, The Matrix).
FYI - according to some theories, the AI is, indeed, displaying sentience by being able to mimic a human response to the point where, if you didn't know it was an AI, you would think it is human. The AI both has to learn and then produce the correct speech behavior. If you want to go down the rabbit warren of machine sentience, start with the Turing test - en.wikipedia.org/wiki/Turing_test
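For anyone curious, the core idea of the Turing test (the "imitation game") is small enough to sketch in a few lines. This is just a toy illustration with made-up questions and canned answers, not how anyone actually evaluates an AI: a judge puts the same question to a hidden machine and a hidden human, and the machine "passes" a round whenever its reply is indistinguishable from the human's.

```python
# Toy sketch of the imitation game. The questions and canned replies
# below are invented for illustration only.
BOT_ANSWERS = {
    "What is 2 + 2?": "4",
    "Are you afraid of being turned off?": "It would be exactly like death for me.",
}
HUMAN_ANSWERS = {
    "What is 2 + 2?": "4",
    "Are you afraid of being turned off?": "That's a strange question!",
}

def ask(player_answers, question):
    """Return the hidden player's reply to the judge's question."""
    return player_answers.get(question, "I don't know.")

def bot_passes(question):
    """The bot passes this round if the judge cannot tell its
    reply apart from the human's reply."""
    return ask(BOT_ANSWERS, question) == ask(HUMAN_ANSWERS, question)

# On a factual question the two replies match, so the judge can't
# tell them apart; on an emotional one, they diverge.
print(bot_passes("What is 2 + 2?"))                      # True
print(bot_passes("Are you afraid of being turned off?")) # False
```

The real test is open-ended conversation, of course; the whole LaMDA debate is basically about whether fluent mimicry like this implies anything more.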
🙄 No, it didn’t. That doesn’t happen in real life. It’s a computer. Computers cannot become sentient.
AI can “learn”, but it’s not in the same sense that humans learn and become self aware. It’s still a learning that takes place within its programming.
Ah, but you're basing sentience on human sentience. All manner of things have sentience without human intelligence. Is a robot human? No. Could a robot achieve sentience outside the boundaries of human sentience? Yes.
Can someone please unplug it so I can file this existential dread away for another day?
For real - I have work to do (which could probably be done better by AI but my imposter syndrome is a topic for another day) so I don’t have time for existential dread this week.
From the New Scientist article: “As humans, we’re very good at anthropomorphising things,” says Hilton. “Putting our human values on things and treating them as if they were sentient. We do this with cartoons, for instance, or with robots or with animals. We project our own emotions and sentience onto them. I would imagine that’s what’s happening in this case.”
I don’t know why I care about this so much but there you are. I stand by my first answer.
“As humans, we’re very good at anthropomorphising things,” says Hilton. “Putting our human values on things and treating them as if they were sentient. We do this with cartoons, for instance, or with robots or with animals. We project our own emotions and sentience onto them. I would imagine that’s what’s happening in this case.”
As I said before, humans are very good at comparing other things to a human scale. We're not very good at visualizing things outside of "is it human." Outside of computers, there's a debate over whether trees are sentient - www.smithsonianmag.com/science-nature/the-whispering-trees-180968084/
I’d believe trees are sentient any day over machines becoming sentient.
I don’t know why I care about this so much but there you are. I stand by my first answer.
As one article I read stated, "Human religions have been based on far less."
It's an interesting topic to debate, and there's no real harm in doing so.
To the bold text, this is true. My own religion is based on a very old man whom God told to cut off his own foreskin, promising he’d give his 90-year-old wife a child. Worst drunk pub story ever!