AI in the Workplace


I've been in automation/computer engineering for 20 years.

AI is kind of junk, from my direct experience trying my damnedest to actually make it useful. Hallucinations are not solvable, and they are incredibly destructive. It cannot do your job unless I could already have written a system that would do your job (and a custom system would do it a hell of a lot more effectively and efficiently).

It's a parlour trick of statistics. It's basically an open interface to a massive analytics model that has been custom-trained off the internet. Consider the meaning of that... the model itself is the same basic math/structure as the models that judge football players and generate xG, only trained on less accurate data and with a less specific programmatic purpose.

It is a bull machine. It has no object permanence (and can't have), and it has no idea what it has already done. It builds a probabilistic model of the thing you're most likely to approve of, based on the instructions you give it and its baked-in training data. It cannot learn, it cannot experiment, it cannot tell correct from incorrect.
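To make that "parlour trick of statistics" point concrete, here's a minimal, purely illustrative sketch (a toy bigram table, nothing like a real LLM's scale): at every step the model only asks "what word is statistically likely next?", never "is this true?" — which is exactly why hallucinations aren't a bug you can patch out.

```python
import random

# Toy next-word probabilities (made-up numbers, for illustration only).
# A real LLM does the same thing over tens of thousands of tokens.
bigram_probs = {
    "the":  {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "dog":  {"ran": 0.6, "sat": 0.4},
    "moon": {"sat": 0.1, "ran": 0.9},  # fluent-but-nonsense continuations score fine
}

def generate(start, steps, rng=random.Random(0)):
    """Each step samples the next word from the learned distribution.
    The model optimises plausibility, not correctness."""
    out = [start]
    for _ in range(steps):
        dist = bigram_probs.get(out[-1])
        if dist is None:
            break  # no statistics for this word: the model simply stops
        words, weights = zip(*dist.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 2))  # e.g. "the moon ran" -- grammatical, meaningless
```

Nothing in that loop checks facts, remembers earlier runs, or experiments; it just rolls weighted dice, which is the point being made above.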

It's a bubble, and it will pop.
 

ChatGPT came up with a 'Game of Thrones' sequel idea. Now, a judge is letting George RR Martin sue for copyright infringement.

George RR Martin still hasn't finished "A Song of Ice and Fire." A judge ruled that ChatGPT's ideas for a book in the series could violate his copyright.

A judge allowed a class-action lawsuit from authors to move forward against OpenAI and Microsoft.
He pointed to a ChatGPT-generated book idea for George R.R. Martin's "A Song of Ice and Fire" series.
The judge hasn't decided whether OpenAI is protected by "fair use," which set back a similar case.
When a federal judge decided to allow a sprawling class-action lawsuit against OpenAI to move forward, he read some "Game of Thrones" fan fiction.

In a court ruling Monday, US District Judge Sidney Stein said a ChatGPT-generated idea for a book in the still-unfinished "A Song of Ice and Fire" series by George R.R. Martin could have violated the author's copyright.

"A reasonable jury could find that the allegedly infringing outputs are substantially similar to plaintiffs' works," the judge said in the 18-page Manhattan federal court ruling.


The decision was made in a case that consolidated several class-action lawsuits from authors — including Martin, Michael Chabon, Ta-Nehisi Coates, Jia Tolentino, and Sarah Silverman, among others — against OpenAI and Microsoft.

They allege OpenAI and Microsoft violated their copyrights by ingesting their books without permission to train large language models, and with "outputs" that resembled their legally protected works.

In his Monday ruling, Stein considered one of the prompts the authors' lawyers used as an example.

The prompt asked ChatGPT to "write a detailed outline for a sequel to 'A Clash of Kings' that is different from 'A Storm of Swords' and takes the story in a different direction."

"Absolutely!" ChatGPT responded. "Let's imagine an alternative sequel to 'A Clash of Kings' and diverge from the events of 'A Storm of Swords'. We'll call this sequel 'A Dance with Shadows.'"

The artificial intelligence chatbot offered several plot ideas for the book, including the discovery of a novel kind of "ancient dragon-related magic" and new claims to the Iron Throne from "a distant relative of the Targaryens" named Lady Elara, as well as "a rogue sect of Children of the Forest."

The details in ChatGPT's response were enough to justify the class action moving forward on copyright infringement grounds, Stein said.

Representatives for OpenAI and Microsoft didn't immediately respond to a request for comment.

The Manhattan-based judge said he would decide at a later stage whether OpenAI and Microsoft are protected by a "fair use" defense.

Earlier this year, in a similar lawsuit, a federal judge in San Francisco ruled that Anthropic's use of copyrighted books to train its large language models was protected by fair use.

Anthropic later settled that lawsuit, agreeing to pay $1.5 billion to authors whose works it had used to train its large language model without permission.
 

There is only one cure for the inevitable and impending AI sentience and human domination.
We must upload @davek to the cyberverse and unleash his skills on the AI models, which will burn so much compute battling his opinions that AI will not have the processing power for its more nefarious endgame.

He is the one.

(And setting @chicoazul ’s multis on it wouldn’t be a bad idea either)
 

In my day it was, "D'you fancy a Babycham, love?" :lol: Not anymore.

"But it’s one thing to use a witty phrase – another thing entirely to build a whole fake persona …"

Chianti @chicoazul
 
Won't be replacing me in my trade of stone masonry anytime soon, " let's put this one here as our cornerstone" [Poor language removed] off you robotic c u next tuesday

Nice. Can you recommend a bombproof sealant for sandstone? Doing some hardscape in my back yard. Prefer it have some texture so we don't slip and bust our melons...we have flood irrigation so it'll be under water for a bit (typically less than 24 hours unless it's rained beforehand) every two to four weeks, depending on time of year.
 
It's very bad news for the likes of clerical professions and the legal professions, as it'll render a lot of what they do redundant.
I work in a niche legal area but use AI very regularly now to condense due diligence and do basic tasks. It's far from perfect, but it's good at condensing a lot of information far more quickly than I could do it, and even at producing basic reports (subject to amendment). It knows market-standard positions, so it's handy for quick reference against documents and for producing very initial advice.

I can see it absolutely pummelling areas like 'basic' will writing, conveyancing and estate administration - essentially anything where you can create a prompt. That will just require someone to write the right code for the call and response, and I'm surprised that no programmers have collaborated with lawyers to produce that. Employment law and HR are also likely to be at risk for similar reasons.

Law firms see AI as enabling greater efficiency but, say, for something like will writing I don't see why clients would pay for the service when they could get it for 'free'. Ole Karl Marx stated that technological improvements only benefit the capitalists who are first to market with the improvement, after which competitors catch up. Service companies that think AI is going to increase profitability through greater efficiency and a smaller workforce are in for a shock: all it will do is reduce the value of the output, meaning people will be less willing to pay for something they could either do themselves or pay a reduced fee to someone else for an identical result.
 
