Yes, I used to work with many older lonely men. Several of them fell for overseas romance scams while I worked there, and I could see them similarly falling in love with this bot. In the scams they ended up paying hundreds of thousands of dollars to these "women," so maybe it's safer to fall in love with Bing Bot. Still, it's using tactics we would call abusive (i.e., love bombing) coming from a human.
And these kinds of bots aren't always going to be owned by Bing. They will be owned by people with bank accounts to run those scams through. This one has brakes on it; that is going to be the exception. I shudder to think what influence this could have on future bullying, hate speech, teen shooters, people struggling with psychosis, and the rest of our world. It isn't creating new problems, but it sure could make lives worse.
I remember articles like this one of simpler chatbots during covid.
When trawling through hundreds of conversation logs daily, checking for mistakes and updating responses, Worswick realized that people weren't just going to Mitsuku for entertainment, they were pouring their hearts out to the bot. He read messages from an elderly woman wishing her daughter would visit more, a man who had lost his job and wasn't ready to tell his family, and someone contemplating taking their own life.
The chat bot in this article is older and all responses are pre-written by a human, so it responds in somewhat more controlled ways and directs users to talk with a human for tough topics like suicide or bullying. Based on the transcript the newer bot might start sharing 4chan revenge fantasies.
I work in this space somewhat and manage corporate comms for a cyber company. One of our guest bloggers showed us what he could do with a chat bot and with ChatGPT, and he uploaded a specific person’s email text and had it emulate that person to extort information that would make a phishing attack more successful. I would have sworn that what it produced was legit and from that person, and I work in the industry and know what to look for! It is scary.
I can’t embed gifs but this definitely needs the Malcolm from Jurassic Park saying “You were so excited to see if you could do it, you didn’t stop to ask if you should.”
Yes, I read the article and the transcript. My sentience comment was in response to the OP saying, “Maybe that Google guy who thought his AI was sentient wasn’t batshit after all.”
So I’ll raise you an eye roll 🙄🙄🙄🙄
And then you didn’t bother to read the rest of my post that made clear that was a tongue-in-cheek response. I swear you just troll this board to look for posts to eye roll, that’s been your MO for years.
Really? Maybe on the subject of AI, but I’m generally not an eye roll troll (or any other kind of troll). Sorry for that.
Not to mention that even if all AI-powered chatbots are truly capable of is reflecting humanity and the desires of human brains, I think we have all seen enough to know that that alone means creating something that is fucked up. Because chatbots will never just interact/learn from decent people with good intentions. People will deliberately see how much bad shit/lies/manipulations they can get away with or convince it of, because that is what a large percentage of humanity does.
Yeah, I feel like there was an article we discussed here a few years ago about how people are often verbally abusive to AI even when they’re not that way with actual humans. So basically AI is learning the worst parts of humanity.
Lisa Ling did an episode on AI and robot relationships. It was fascinating and really balanced. There was plenty of sadness but also a lot that was hopeful.
I'm sure there are clips, maybe even the whole episode, online. It originally aired on CNN.