AI

What scared me the most from that was:

‘Among them: sycophantic behavior affirming anything a user types. Oliver cited a recent study which observed sycophantic behavior in chatbots in 58% of cases, “and sometimes it’s just painfully obvious”. In one instance, when prompted for its thoughts on selling literal “sh.t on a stick”, ChatGPT called the idea “genius” and recommended an investment of $30,000. And the guardrails have been surprisingly weak; Oliver cited another example of ChatGPT recommending a little hit of heroin to an addict, if it would help him with his work.

Then there’s the issue of chatbots confirming and deepening delusions, with numerous stories of users going down conspiratorial rabbit holes and experiencing so-called “AI psychosis”. Oliver noted that OpenAI has said that only 0.07% of its users show signs of crises related to psychosis or mania in a given week, “but even if that is true, when you remember how many people use their product, that means there are over half a million people exhibiting symptoms of psychosis or mania weekly. And that is clearly very dangerous”, inevitably leading to chatbots encouraging people to commit suicide. “It’s so evil I don’t have language for it,” said Oliver, citing many examples, including one chatbot who ended a chat with a suicidal user with “Rest easy, king. you did good.”‘


AI Chatbots spewing back to you what you want to hear. 😳
 
That's a really insightful take.
 
remember the confidentiality thing giggs had going to keep his philandering out of the press? if someone prompts ai to name and shame, then who is legally responsible?

this opens a pandora's box: if the prompter is responsible, then must they also be responsible for any unexpected output made with pure intentions that then turns sour?

remember the valentines day virus that went worldwide in the early 2000s? could ai be weaponised via prompt alone to do something similar? can a prompt be phoned in or put on a timer, so that the culprit with the big idea for harm can get themselves down the police station for the stone coldest of stone cold alibis whilst their ai-prompted harm is unleashed?

legal is going to be rewriting the law for decades at this rate.
 
Yeah I agree, and to me it feels kinda pushy currently - mostly from the big boys like MS and Amazon, who keep pushing "AI everything" while the product is still essentially in development.

It's always been a buzzword to sell your product; our company's trying the same thing and it's really not that applicable to us to begin with, but here we are.
In my career in the Telecom business I worked for a while with Vint Cerf, one of the godfathers of the internet. Famously he wore a t-shirt emblazoned with "IP on everything" - though I think the t-shirt came after he made the statement; he was just not a t-shirt type of guy. Anyway, the 'roll out' of AI is remarkably similar to the rise of IP (Internet Protocol), basically packet networks. Encryption on these packet-based shared networks is strong, but there has been a rise in SNDL (Steal Now, Decrypt Later), where bad actors steal tons of data they cannot decrypt, betting that AI will let them do so in the future. That would include medical records, personal data, all sorts of stuff transmitted today over secure networks. So in some ways AI will destroy IP.
I am not a doomsday type of person, but seeing AI 'taking over' in my current job, I do experience some stomach-churning moments at least once a week. The primary human drivers are in our IT team; ironically, they will be the first to lose out to AI agents that can do their complex jobs more economically. And yet they are riding the AI horse like they are even vaguely in control of it.
 
  1. Tech Industry

New AI data center in Utah will generate and consume more than twice the amount of power the entire state uses — Kevin O'Leary's 9 Gigawatt Utah data center campus approved

News
By Luke James published 2 days ago
The 40,000-acre project will run entirely off-grid using natural gas.
 
And they provide practically no employment locally as these facilities tend to run on a skeleton staff. It's not really much more than the Bitcoin mining setups.
 
Your IT team, if at all competent, will not lose their jobs to this. And if they do, they'll either get rehired or every single service the company offers will suffer.

I use LLMs a lot for menial tasks/questions/reminders/boilerplating, but without my genuine stupidity to guide it, this artificial intelligence is unable to do anything at all; currently the famed agents hallucinate and make obvious mistakes that anyone with some experience should catch. Yeah, there are some automations/detections that run perfectly fine, but that's nothing new, and slapping "AI" on them doesn't make them actually work any differently.

Ran it past our brand new intern recently (at least he's not an abuser of AI, hey!): if he asks it for something I've already shown/told him, it flat out lies to him or gives an answer that's roughly in the ballpark, and it takes many more prompts to narrow it down; even then it's not correct for the use case.

Anyway, the most recent trend to hit the AI-hype landscape is that the owners of the models are starting to limit usage, or to charge per N tokens/prompts/hours/etc., to no surprise. GitHub Copilot, MS's own "AI for your IDE", is now usage-based too, not just "pay us $19 a month and abuse the living hell out of it". Hopefully this leads to more adequate FOSS/self-hosted models in the near future at least.
 
ILOVEYOU? What a throwback hah

Theoretically it probably can, just like any actual virus made by any organisation does now - a VPS in Africa costs nothing; start the spread from there, and so on.

It also has to be in full control of the system itself, and that leads to more complications, so somewhere down the line - yeah, maybe, at some point. Right now it's not executing anything of note anywhere; it's mostly data, not 'physical'. Making it so would be devious, and interesting to see at the very least. A change from actual war for a bit!
 
Let's throw back even further, to that twit* reagan and his star wars mania. Realising the vulnerability of computer systems to nuclear blasts in space, some very rich people got even richer taking the us taxpayer for a ride on missile systems set up to launch at other missiles for intercept at low, medium and high earth orbits.
Hence the birth of the internet: to protect systems, a web was introduced where data sharing and info backups were interlinked, so the vulnerability was cancelled out.
Now the very system that provided that safety has been turned into the weapon it was invented to protect against. ai is the potential weapon, and irony is indeed cruel.
Like 'Threads' without the blasts: everything just stops. No water, no fuel, no elec, then no food and sudden mass panic. What are an 80k army gonna do against a hungry and rather pished off 65M people? We'd end up using our bombs on ourselves as a mission of mercy. Irony again.

Happy May day bankhols everyone! :pint2:
 
A bit too doomery for my liking personally - the internet up until a decade and a half or so ago was unsafe, putting it mildly. Filtering, basic protection, etc. are very much things of the current era, and even now they aren't doing too well, as bad actors appear all the time. This is relatively "new" technology that's still developing, after all, and that makes it vulnerable.

WannaCry/(Not)Petya, Stuxnet, Shamoon, Flame and Zeus have all been used to do exactly what you've suggested, or to attempt to prepare for/start a war - and we didn't have AI for them, just people being their regular horrible selves. Oh, and also spyware like Pegasus and the insanity that is the Fancy Bear group. Bad actors are going to be bad actors regardless of the what and how, and AI will be a tool they use, as will anything else.

So if we're about to be so doomy about it, yeah:


Bit early for it, but a happy one nonetheless mate.
 
