AI in the Workplace

It's very bad news for the likes of clerical professions and the legal professions, as it'll render a lot of what they do redundant.

There is going to be a huge spike in patent filings (my field) by cowboy companies. Most will get nowhere after filing, and the 0.001% that do get anywhere will end up in a legal battle over who the inventor actually is, the named inventors or the people running the AI.

The US is encouraging AI-written patents at the moment. They are going to make so much money for the government on things that will never grant, and it will leave the firms who came up with the idea bankrupt.
 

Genius me mate.
True story.

AI can suck balls.
Not one aspect of my job can it do. Get in the bin.
Yeah, tradesmen it can't replace, but it often gives decent enough advice on stuff tradesmen do or don't do (gathered from the web, obvs), or at least sets you in the right direction to look for things yourself, which is pretty easy. Basically, if you use it as a research or help tool, it does its job. I do a ton of stuff around the house we've got and it does help with planning/prepping in some cases, or with "can/should I use XYZ if this is my case?" scenarios - it's turbogoogle that helps you research. Also, tradesmen around our parts can get to ffffffffffffffff, greedy bastards, but that's a different topic.

Anyway, I also work in IT and, honestly, everyone saying we're raising a population of brainless idiots who can't think is massively wrong - we've never raised anything else. It's just one flavour of stupidity being replaced with another; common sense was never a strength, and calculators and Excel have done most of your (and mine, obvs) calculations for decades. In the progressively digital world we live in, the general populace is hilariously unaware of how things actually work, and getting closer to stupid at times, but AI is just the new punching bag, is all. I've had the pleasure of working in the field both pre and post the recent AI craze and, my god, the things I've seen people do or think, jesus wept.

Anyway, this isn't even true AI; it's just called AI because that's more of a buzzword than LLM. As I said above, it's a tool to help you, not to think so you don't have to. If you use it as you should, you'll be grand, it'll help you; if you use it like an idiot - you're already an idiot, mate, this just further proves it.

The REAL annoying part is the shilling at literally every opportunity about Any Application's new "AI" model to help with writing/reading/listening or whatever, as well as search engines' AI previews. I work in a heavily Microsoft environment and the amount of Copilot this-and-that is also insane, even if you opt out of it; M$ go the extra step and name everything "Copilot", even when it's functionally different.
 

Not used it for anything, doubt I will tbh.
I understand the research angle but Google was perfectly fine doing that as it was.
The day I need something, or someone, else to think for me is the day to put me in a box.
The world is nurturing a generation of braindead humanoids completely reliant on a phone handset, unable to think for themselves, do even the most menial of tasks, or hold a basic conversation at any kind of level.

Glad I'm on my way out of this place and not on my way in.
 
A big problem is that it won't always be cheap/free, so those who depend on it to the extent of forgetting (or never knowing) how to do their job will have a nasty wake-up call.
 

I've been in automation/computer engineering for 20 years.

AI is kind of junk, from my direct experience trying my damnedest to actually make it useful. Hallucinations are not solvable, and they are incredibly destructive. It cannot do your job unless I could already have written a system that would do your job (and a custom system would do it a hell of a lot more effectively and efficiently).

It's a parlour trick of statistics. It's basically an open interface to a massive analytics model that has been custom-trained off the internet. Consider what that means: the model itself is the same basic math/structure as the models that judge football players and generate xG, only trained on less accurate data and with a less specific programmatic purpose.

It is a bull machine. It has no object permanence (and can't have), it has no idea what it has already done. It builds a probabilistic model of the thing you're most likely to approve of, based on the instructions you give it and its baked-in training data. It cannot learn, it cannot experiment, it cannot tell correct from incorrect.
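
To make that concrete, here's a minimal toy sketch (nothing in it is a real model; the weights and scores are invented purely for illustration): an xG-style shot model and a next-token predictor are both just learned numbers going in and a probability coming out.

```python
# Toy illustration only - not any real model, just made-up weights/scores
# to show that both cases are "learned numbers in, probability out".
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# xG-style shot model: probability a shot becomes a goal, from two features.
def toy_xg(distance_m, angle_deg, w=(-0.15, 0.03, 0.5)):
    return sigmoid(w[0] * distance_m + w[1] * angle_deg + w[2])

# Next-token "model": turn raw scores for candidate words into probabilities
# (a softmax), which is the same flavour of arithmetic.
def toy_next_token(scores):
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

print(toy_xg(12, 30))  # roughly 0.40 - about a 40% chance of a goal
print(toy_next_token({"goal": 2.1, "miss": 0.3, "post": -1.0}))
```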

It's a bubble, and it will pop.

I thought I was negative on AI but this is far harsher than anything I've suggested
 
The potential of AI is really exciting. I use it daily for work and, far from replacing me, it allows me to work more efficiently. AI will remove the need to hire lots more people, but even pre-AI I was always looking to streamline. I'm looking at minimal headcount growth, but it won't create layoffs. If anything, my people will do things that create real value rather than spending time on data entry and reading through long email chains. Their work experience should be enhanced as they do more worthwhile tasks. It's already generating significant tangible monetary benefits for my teams.

Use it and embrace it or you're right, it will replace you.
That's just management speak for being a mingebag
 
Tell that to Sarah Connor 😱
 
There are useful things too.

Once you set up a RAG system, it kind of allows for custom training your own statistical models, which can be incredibly useful and significantly improve behaviour and reduce hallucinations. It also gets a lot of benefit from clear task structure that's built in a way that's token-efficient to parse, which is a whole discipline to learn in and of itself...
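
For anyone wondering what the retrieval half of that actually looks like, here's a toy sketch; the word-overlap scoring is only a stand-in for a real embedding model, and the documents and question are made up.

```python
# Toy RAG retrieval sketch. Word overlap stands in for a real embedding
# model; the docs and question are invented. The shape of the pipeline is
# the point: index -> retrieve the most relevant bits -> build a terse prompt.
def tokenise(text):
    return set(text.lower().split())

def retrieve(query, docs, k=2):
    q = tokenise(query)
    # Crude relevance score: shared words with the query (a stand-in for
    # cosine similarity over embeddings in a real system).
    return sorted(docs, key=lambda d: len(q & tokenise(d)), reverse=True)[:k]

def build_prompt(query, docs):
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    # Keep the scaffolding terse so the token budget goes on context,
    # not boilerplate.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The staging server is rebuilt every Sunday at 02:00.",
    "Deployments to production require two approvals.",
    "The canteen menu changes weekly.",
]
print(build_prompt("When is the staging server rebuilt?", docs))
```

A real setup swaps the overlap score for embeddings, adds chunking and so on, but the shape stays the same: index, retrieve, stuff the most relevant bits into a tight prompt.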

I have found it useful to help me generate the first 80% of work that I probably would have blown off as unimportant before. Stuff like status reports or transforming my notes into a design doc for others to use. Only the first 80% though, because it's still not even close to a replacement for a human that knows what they're talking about. It's also cool for doing some deep context searches of an unknown codebase (though it still hallucinates pretty heavily here, and you need to be vigilant).

Without RAG it's at best a teenager* in their first week on the job, equipped with OG Google search from 2002-2010 (when it was good enough to basically do the same thing AI is doing now for research, before Google needed to make the product worse to improve their monetization of it). You can make it do stuff for you, but you wouldn't trust it with anything except the clearest instructions.

It's a cool tool that people who are good at building systems can use to build systems differently. Clever people can now do clever things slightly faster (maybe; the evidence is pointing to it actually being slower than just doing it by hand). And purpose-built statistical models will, shock, continue to do amazing things (see all the AI "breakthroughs" in the sciences)... none of that's really new, though. That's the same thing we've been seeing for decades, packaged differently.

People that are not good at building systems will build horrifying monstrosities with it that will require much engineering to undo in the coming decades.

It's also wildly socially toxic because it's being touted as actual AI and a replacement for people. And as noted above, loads of people are offloading critical thinking to it because it's easier and it does get things right a good percentage of the time, especially simple things with well-documented, truth-based evidence that's extant on the internet.

Unfortunately, the important parts of life are rarely simple things with well-documented, truth-based evidence extant on the internet.

ETA: *A properly dim teenager that keeps forgetting the corrections you have to constantly give them.
 
