What was that crypto craze where companies would sell people jpgs and convince them it was worth something? Bloody idiots everywhere.
> NFTs. Absolute insanity.

Also 'DAOs', as part of the whole blockchain community, were quite big at one stage. They looked like a Ponzi scheme more than anything else.
Lol, don’t knock flesh lights, mate…
> Absolute figures are being worked on, but we are already seeing feedback from site teams about their ability to focus on other areas of value-add, with time freed up from the mundane activities through using agentic AI. The measurements will only be meaningful after a number of years, but the metrics arising from use, and the benefits being seen, are already exciting the SLT and the shareholders, demonstrably so.

I'm curious, because that's been a general trend in the research I've seen thus far: individuals say it's been useful, but organisational productivity hasn't budged.
Organisational productivity isn't very useful as a trend-metric currently, I've found from experience in our organisation (so obviously anecdotal, but I've seen the same with others), because it comes down to how individual people, teams and departments use it. My team (IT/DevOps), just about all of our devs and some account managers are finding it great: devs and us for coding, admin, agents and documentation; account managers for summaries, sales-pitch improvement and so on. It did mean we had to "train" people a bit first, basically, but there are obvious use cases.

You would think, though, that if people are finding it useful individually, this would translate into organisations being more productive/profitable, but we're not seeing that. While the tech firms are trying to use it as justification for layoffs, that's really just rowing back on the Covid-era hiring. Even in coding, a lot of the data to date suggests that it's really quite bad for anything but very simple tasks, with many devs reporting that they spend more time checking and correcting the code than they saved in producing it.
Then there are people, in our company too, who despite the training use it like it's supposed to think for them, and who report it back as useless. Some of the things they've asked, and the ways they've asked them, are pretty idiotic, and we've had to re-explain that these tools need context (see the sketch below), not "write me a program to stop this error" or something of that sort, which will spit out some result, but one that will 99.9% of the time be wrong.
It's the classic communism/Stalin quote: "when there's a person, there's a problem; no person, no problem".
Then again, I do see an overreliance on it around me - you're still supposed to have the knowledge to know what you're doing, at least generally if not fully, and it very plainly shows when you don't. If it's something in my area(s), I can even tell you how bad your prompt was for the situation based on the answer!
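To make the context point concrete, here's a rough sketch of the difference (using the OpenAI Python client purely as an example - the helper, the model name and the error are all invented for illustration, and any chat-style API looks much the same):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Send one user prompt to a chat model and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; pick whatever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The lazy version: no context, so the model can only guess.
bad = ask("write me a program to stop this error")

# The useful version: the actual error, the offending code, and the intent.
good = ask(
    "I'm getting this error from the Python snippet below.\n"
    "Error: KeyError: 'user_id'  (invented example)\n"
    "Code: record = payload['user_id']\n"
    "The payload comes from a webhook and sometimes omits user_id. "
    "I want to skip those records rather than crash. "
    "Suggest a minimal fix and explain any trade-offs."
)
print(good)
```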
> You would think, though, that if people are finding it useful individually, this would translate into organisations being more productive/profitable, but we're not seeing that […]

My caveat to this would be that for many users, they will not be getting the quality output from AI that they require, due to their own poor-quality or...
That's another thing. In the early days, it was viewed as something that would shrink the skills gap, i.e. make lower-skilled people as productive as higher-skilled people. Now the data suggests it's actually the opposite, because higher-skilled people are better able both to provide the right context and to critically assess the outputs. Of course, the way many organisations are using it at the moment (to automate routine tasks) denies lower-skilled employees the opportunity to gain that knowledge.
... erroneous input. If your prompt is poor (I'll go into that shortly), then the current LLMs will likely produce something that isn't what you expect.
A Google engineer was quite blunt with me when explaining that a prompt of fewer than 25 words will show a marked percentage divergence from what you expected.
He also explained that the prompt should (edging towards must) be written in a way that allows the system to adequately understand the parameters.
We saw a HUGE difference in how impactful AI can be (and now is) after working with them, but a lot of this knowledge isn't shared widely.
There are reasons for that, apparently...
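To give a flavour of what "understanding the parameters" meant for us in practice: the prompts that worked spelled out the task, the context, the constraints and the expected output, rather than leaving the model to guess. A throwaway sketch (the field names are my own invention for this post, not anything Google prescribes):

```python
def build_prompt(task: str, context: str, constraints: str, output_format: str) -> str:
    """Assemble a prompt that states its parameters explicitly
    instead of leaving the model to infer them."""
    return (
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Constraints: {constraints}\n"
        f"Expected output: {output_format}"
    )

prompt = build_prompt(
    task="Summarise the attached incident report for a non-technical audience.",
    context="Internal IT post-mortem; the readers are account managers.",
    constraints="No jargon, under 200 words, neutral tone.",
    output_format="Three short paragraphs: what happened, impact, next steps.",
)
print(prompt)  # comfortably past the ~25-word mark, and unambiguous
```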
> You would think, though, that if people are finding it useful individually, this would translate into organisations being more productive/profitable, but we're not seeing that […]

I can see where you're coming from, but - would it? If you're in a company of 10 people and you're an Excel god, but the rest have never opened it, does that make Excel a bad tool, or do the people in the entire company just not know how to use it? Regarding the layoffs too - that's just tech companies finding an excuse; any company that has done layoffs because "AI can do it better" faces incredible backlash, deterioration of the product and a sudden and "surprising" drop in the stock, then re-hires afterwards, but at a more relaxed/focused rate, aiming at senior or high-skill applicants (to combat the Covid over-hiring too, I guess). As long as you recognise the tool as a tool, you're fine.
> My caveat to this would be that for many users, they will not be getting the quality output from AI that they require, due to their own poor-quality or erroneous input […]

This really goes well with what I said in my previous post, but written without all my usual waffle.
> I can see where you're coming from, but - would it? If you're in a company of 10 people and you're an Excel god, but the rest have never opened it, does that make Excel a bad tool […]

I've been tracking AI in the workplace for around 15 years, and that period has been consistent in that tech companies have overhyped their wares and the impact has been negligible (certainly in terms of layoffs). So I default to looking at what the evidence shows rather than what the tech companies say. It's almost inevitable that, as with every other technology, people will initially try to transpose AI onto how we currently work (and we see this with straightforward automation of tasks). This will have minimal impact. Then we'll start reorganising how we do things around what the technology enables us to do. That typically takes 10-20 years, though.
For where I work, we're about 100 people; roughly 20 don't like or want to use it, fair enough, but some of the rest are exactly those devs who end up checking a lot more, mostly because (I've genuinely seen how) they ask it idiotic things with no context. Yeah, the results are poor then, for sure - "fix this error" will make it hallucinate a solution if it's not interacting with your codebase.
Also, the tempo at which it's moving: ChatGPT Codex, Claude Code and GitHub Copilot advance by the month, in ways that I, as a user, find unbelievable, so a lot of that data is, I'd hate to say it, "old news", in a bizarre way?
As another personal anecdote: Claude recently did pretty much an entire small app for me in about 3 minutes, and within about an hour of troubleshooting (which it also helped with) it was deployed and working. No shot I do that in an hour or two overall, at least to that quality. But the prompt for this was roughly a whitepaper for the app I had in mind, and Claude followed it up with several questions to clarify parts I hadn't put clearly in the prompt itself. It thought for 3 minutes, spat it out working, then the rest was fine-tuning as I said, and now it's in "production". In that regard it is/was/can be a massive help. If I'd said "make me an app that does <this>", it would definitely spit out some random crap that may or may not work, but the models themselves know better nowadays too, and will either push back or ask whether XYZ is what you meant (Claude's option with the interactive questions is also great).
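For anyone wanting to try the same approach, the "whitepaper" prompt was essentially a structured spec plus an explicit invitation to ask questions first. A stripped-down sketch using the Anthropic Python client (the app details below are invented placeholders, not my actual project, and the model name is just whatever happens to be current):

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# The spec itself: placeholder content, but the shape is the point -
# purpose, features, stack, constraints, and an explicit invitation
# to ask clarifying questions before writing any code.
SPEC = """Build a small internal web app.
Purpose: let the team log and search support tickets.
Features: create ticket, full-text search, CSV export.
Stack: Python, Flask, SQLite; single-file deployment if possible.
Constraints: no external services; must run behind our proxy.
Before writing any code, ask me about anything that's ambiguous."""

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whatever's current
    max_tokens=4096,
    messages=[{"role": "user", "content": SPEC}],
)
print(message.content[0].text)
```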