AI isn’t coming for your job, and it’s not even really capable of doing so

I am sick of hearing about artificial intelligence. It’s not coming for your newsroom job. It’s not going to revolutionize anything, and it’s probably the biggest hype-induced lie that has been forced upon us since the supposed rise of cryptocurrency.

Remember when we were all going to stop using the bank? Sure, Tom Brady, I hope you enjoyed the endorsement checks before that scam went to complete shit and screwed countless people out of their hard-earned money. Just more reasons Patrick Mahomes is your future G.O.A.T., but that isn’t for this blog.

“Artificial intelligence” isn’t even an accurate name for the thing. To be intelligent would inherently require thought, and these large language models don’t think at all and aren’t ever going to be capable of doing so.

Sorry if you had “AI is going to kill us all” on your bingo card. It probably isn’t possible in my lifetime and certainly is impossible using a large language model, which is all these groups have right now.

Not one “AI” company is even close to actual general artificial intelligence based on current information available to the public.

I was recently asked my opinion on several things in regard to AI and the journalism industry, and while I know this is a hot topic and something people seem excited about, I probably came off as a crotchety old man in many of my responses.

“Get off my lawn, you teenagers and your fancy chat bots!”

Chatbot is really the best way to describe AI or, more accurately, a large language model (LLM). What OpenAI and many of these groups are doing is telling the public that these LLMs are going to be capable of so many things and are going to revolutionize everything from cars to search to journalism.

It’s mostly hype and lies.

The funny thing: these LLMs aren’t capable of doing most of the functions companies keep telling the public they are on the cusp of doing, and I am not sure they are even close. Goldman Sachs agrees with me, by the way.

https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf

I’m sticking with journalism, which is my industry and my area of mild expertise.

Are LLMs fun to play with? Sure. They can be kind of fun to mess around with if you can get over the fact that they were built by stealing tons of data from copyrighted sites and the work of your friends.

Can they potentially make mind-numbing rewrites of press releases faster? Sure. Refer to the above ethical issues, and add the fact that their output needs to be edited very carefully, always remembering that these LLMs hallucinate regularly.

What’s that? The thing that hallucinates constantly, in ways you can’t always catch without double-checking everything, is going to revolutionize journalism? Sure. I’m certain America would be just as quick to trust someone in the middle of a mushroom high to write accurately about local government.

If something sounds too good to be true, it probably is. AI is exactly that. The promise is it’s going to free us from all the mind-numbing work and allow us to do far more fulfilling tasks.

The problem is that it’s just not very good at doing anything in our field yet. It’s OK at a few things, but it’s not great at anything in journalism, except for stealing data.

ChatGPT could be useful for a few things in journalism; I am not saying it couldn’t. But talking like we will be using LLMs to write much of what goes in the paper is foolhardy given the current state of the technology. Polls show readers don’t like the idea much, either.

Look no further than Gannett and its utter failure after rushing into using LLMs to write sports stories. If you want an example of what not to do, you can usually turn to Gannett in all matters, not just AI.

The company pulled the program and apologized, losing even more credibility with its readers than it already had through its other business practices.

Rushing LLMs into your newspaper or publication is silly and not something that should be done without a lot of thought and discussion, and probably some training from someone who is smart about new tech, like Kevin Slimp. Keep in mind, too, the ethical issues of using something that was trained on copyrighted material, some of which might be yours.

I could see some long-term path forward for LLMs being used regularly in newsrooms if the technology were profitable for the companies behind it, but spoiler alert: it’s not. All of this hype from business leaders about AI revolutionizing everything is just that, hype. A ton of rich people have a lot of money invested in what they think is the “next big thing,” but there is no guarantee that LLMs will ever be profitable or the “next big thing.” When the rich guys get fed up with losing tons of money every quarter propping up what amounts to a fancier Google, they will move on to the next thing they think they can get richer off of.

All of that means that if the rich guys funding these LLMs go away because the hype never actually pays off, the chatbots do, too. They require tons of data, energy and resources to run; they aren’t a typical website using a typical amount of resources. The industry is on track to spend upward of a trillion dollars building and running them, and they aren’t solving trillion-dollar problems. If they are around for a while, they won’t stay free or cheap to use, either.

My guess is the investment money goes away, as the juice won’t be worth the squeeze much longer. I am not investing any amount of my time and energy in a technology that I don’t see having staying power in my newsroom, and certainly not one that is constantly on mushrooms with no possible way of ever getting clean. The hallucinations are permanent, with no known fix on the horizon, based on recent reporting.

I might be naive, and AI might become a daily tool in our newsrooms, leaving me to eat my words, but I am not seeing that happen anytime soon, or likely ever.

Test cases for this technology in other businesses haven’t gone super well. I don’t know anyone who loves Meta AI being part of Facebook search, for instance, and Google’s own AI search experiment is laughable at best. Both products have only gotten worse recently, not better, so why would I believe AI is going to be good for anything other than making user experiences worse?

Fast-food chains using chatbots aren’t having the best time, either; the bots require a decent amount of human intervention, and several chains have abandoned them altogether.

If LLMs are here for our jobs, so be it, but I’ll believe it will happen when I see it. Not when some rich guy tells me it’s right around the corner during a hype speech, hoping for another round of funding.
