I'm generally an AI skeptic, but it seems awfully early to make this call. Aside from the obvious ones (frontline support, artists, junior coders, etc.), a whole bunch of white collar "pay me for advice on X" jobs (dietician, financial advice, tax agent, etc.), where the advice follows set patterns only mildly tailored for the recipient, seem to be severely at risk.
Example: I recently used Gemini for some tax advice that would have cost hundreds of dollars to get from a licensed tax agent. And yes, the answer was supported by actual sources pointing to the tax office website, including a link to the office's well-hidden official calculator of precisely the thing I thought I would have to pay someone to figure out.
1vuio0pswjnm7 5 hours ago [-]
"I'm generally an AI skeptic, but it seems awfully early to make this call."
What call? Maybe some readers miss the (perhaps subtle) difference between "Generative AI is not ..." and "Generative AI is not going to ..."
The first can be based on fact, e.g., what has happened so far. The second is based on pure speculation. No one knows what will happen in the future. HN is continually being flooded with speculation, marketing, hype.
In contrast, this article, i.e., the paper it discusses, is based on what has happened so far. There is no "call" being made. Only an examination of what has happened so far. Facts, not opinions.
bredren 3 hours ago [-]
> According to the study, "users report average time savings of just 2.8 percent of work hours" from using AI tools. That's a bit more than one hour per 40 hour work week.
Could be the data is lagging, as a sibling comment said, but this seems like a wildly difficult number to report on.
It also doesn't take into account the benefits to colleagues of active users of LLMs (second order savings).
My use of LLMs often means I'm saving other people time because I can work through issues without communications loops and task switching. I can ask about much more important, novel items of discussion.
This is an important omission that lowers the paper's overall value and sets it up for headlines like this.
jaredklewis 3 hours ago [-]
The headline is about wages and jobs. It’s very possible that AI could result in time savings of 50% of work hours in a week and still have no impact on wages or jobs.
This is because the economy is not a static thing. If one variable changes (productivity), it’s not a given that GDP will remain constant and jobs/wages will consequently be reduced. More likely is that all of the variables are always in flux, reacting and responding to changes in the market.
bredren 3 hours ago [-]
Very well, I acknowledge this point.
However, the parent comment is about an examination of what has happened so far and facts that feed into the paper and its conclusions.
I was focused on what I see as important gaps in measuring impact of AI, and its actual (if difficult to measure) impact right now.
jaredklewis 2 hours ago [-]
The paper analyzes facts re: wages and jobs, which I think are (comparatively) easy to measure as compared with productivity, and are also an area where people have concerns about the impact of AI.
Mostly people aren't worried about productivity itself, which would be weird. "Oh no, AI is making us way more productive, and now we're getting too much stuff done and the economy is growing too much." The major concern is that the productivity is going to impact jobs and wages, and at least so far (according to this particular paper) that seems to not be happening.
catlikesshrimp 1 hour ago [-]
Won't doing the same job in half the time lead to having to pay half the salary? Indirectly, I mean. You can now hire half the people or pay the same number of people half.
Unless twice the work is suddenly required, which I doubt.
mfitton 45 minutes ago [-]
I think it's quite common that a company has way too many things it could work on compared to what the number of people it can reasonably hire could get done. And working on more things actually generates more work itself. The more products you have, or the more infrastructure capabilities you build out, the more possible work you can do.
So you could work on more things with the same number of employees, make more money as a result, and either further increase the number of things you do, or if not, increase your revenue and hopefully profits per-employee.
jaredklewis 46 minutes ago [-]
No one knows, but if history is any guide, it is very unlikely.
I would also be surprised if the twice the work was "suddenly" required, but would you be surprised if people buy more of something if it costs less? In the 1800s ordinary Americans typically owned only a few outfits. Coats were often passed down several generations. Today, ordinary Americans usually own dozens of outfits. Did Americans in the 1800s simply not like owning lots of clothing? Of course not. They would have liked to own more clothing, but demand was constrained by cost. As the price of clothing has gone down, demand for clothing has increased.
With software, won't it be the same? If engineers are twice as productive as before, competitive pressure will push the price of software down. Custom software for businesses (for example) is very expensive now. If it were less expensive, maybe more businesses will purchase custom software. If my Fastmail subscription becomes cheaper, maybe I will have more money to spend on other software subscriptions. In this way, across the whole economy, it is very ordinary for productivity gains to not reduce employment or wages.
Of course demand is not infinitely elastic (i.e. there is a limit on how many outfits a person will buy, no matter how cheap), but the effects of technological disruption on the economy are complex. Even if demand for one kind of labor is reduced, demand for other kinds of labor can increase. Even if we need fewer weavers, maybe we need more fashion designers, more cotton farmers, more truckers, more cardboard box factory workers, more logistics workers, and so on. Even if we need fewer programmers, maybe we need more data center administrators?
No one knows what the future economy will look like, but so far the long term trends in economic history are very promising.
jrflowers 2 hours ago [-]
> It’s very possible that AI could save 50% of work hours in a week and still have no impact on wages or jobs.
I like this sentence because it is grammatically and syntactically valid but has the same relationship to reality as say, the muttering of an incantation or spell has, in that it seeks to make the words come true by speaking them.
Aside from simply hoping that, if somebody says it it could be true, “If everyone’s hours got cut in half, employers would simply keep everyone and double wages” is up there with “It is very possible that if my car broke down I’d just fly a Pegasus to work”
jaredklewis 2 hours ago [-]
The cited statistic refers to time saved (as a percent of work hours), not a reduction in paid working hours.
But more generally, my comment is not absurd; it's a pattern that has played itself out in economic history dozens of times.
Despite the fact that modern textile and clothing machinery is easily 1000x more efficient than weaving cloth and sewing shirts by hand, the modern garment industry employs more people today than that of medieval Europe.
Will AI be the same? I don't know, but it wouldn't be unusual if it was.
jrflowers 60 minutes ago [-]
> The cited statistics is in reference to time saved (as a percent of work hours), not a reduction in paid working hours.
This makes sense. If everyone’s current workloads were suddenly cut in half tomorrow, there would simply be enough demand to double their workloads. This makes sense across the board because much like clothing and textiles, demand for every product and service scales linearly with population.
I was mistaken, you did not suggest that employers would gift workers money commensurate with productivity, you simply posit that demand is conceptually infinite and Jevons paradox means that no jobs ever get eliminated.
catlikesshrimp 1 hour ago [-]
That is the result of using fuel. In spite of the efficiency gained, more work is possible, and more work is demanded, too: a dress is no longer expected to last a decade, but hopefully one season.
More people are also available, since the fields are producing by themselves, comparatively. Not to mention fewer of us die to epidemics, famines and swords.
jononor 2 hours ago [-]
Doubling the wages is not going to happen... But it could be that output gets doubled at the same personnel cost (jobs*wages). Ref Jevons paradox.
datpuz 2 hours ago [-]
IMO, if you're gaining a significant amount of productivity from LLMs in a technical field, it's because you were either very junior and lacked much of the basic knowledge required of your role, or you performed like you were.
Arctic_fly 32 minutes ago [-]
Definitely not the case for coding. I'm a capable senior engineer, and I know many other very experienced senior engineers who are all benefitting immensely from AI, both in the code editor and chat interfaces.
My company just redid our landing page. It would probably have taken a decent developer two weeks to build it out. Using AI to create the initial drafts, it took two days.
treis 23 minutes ago [-]
I disagree. Maybe there's savants out there that can write SQL, K8s auto scaling yaml, dockerfiles, React components, backend code, and a dozen other things. But for the rest of us LLMs are helpful for the things we wade into every so often.
It's not miraculous but I feel like it saves me a couple hours a week from not going on wild goose chases. So maybe 5% of my time.
I don't think any engineering org is going to notice 5% more output and layoff 1/20th of their engineers. I think for now most of the time saved is going back to the engineers.
grandmczeb 23 minutes ago [-]
IMO, if you haven’t been getting a significant productivity boost from LLMs in a technical field, it’s because you lack the basic brain plasticity to adapt to new tools, or feel so psychologically threatened by change that you act like you do.
sanderjd 24 minutes ago [-]
Sorry but this just definitely isn't true.
I would (similarly insultingly) suggest that if you think this is true, you're spending time doing things more slowly that you could be doing more productively by using contemporary tools.
Early on in a paradigm shift, when you have small moves, or people are still trying to figure out the tech, it's likely that individual moves are hard to distinguish from noise. So I'd argue that a broad-based, "just look at the averages" approach is simply the wrong approach to use at this point in the tech lifecycle.
FWIW, I'd have to search for it, but there were economic analyses done that said it took decades for the PC to have a positive impact on productivity. IMO, this is just another article about "economists using tools they don't really understand". For decades they told us globalization would be good for all countries, they just kinda forgot about the massive political instability it could cause.
> In contrast, this article, i.e., the paper it discusses, is based on what has happened so far.
Not true. The article specifically calls into question whether the massive spending on AI is worth it. AI is obviously an investment, so to determine whether it's "worth it", you need to consider future outcomes.
rightbyte 2 hours ago [-]
> economic analyses done that said it took decades for the PC to have a positive impact on productivity
I honestly think computers have a net negative productivity impact in many organizations. Maybe even "most".
7qW24A 1 hour ago [-]
It’s an interesting rabbit hole to go down. If you use the BLS’s definition of productivity, then computers seem to be a net drag on productivity.
Even more surprising for me is that productivity growth declined during the ZIRP era. How did we take all that free money and produce less?
Arctic_fly 28 minutes ago [-]
This is an excellent question. My very unscientific suspicion is that the decreases in average attention span and ability to concentrate zero out the theoretical possible increases in productivity that computers allow.
exe34 1 hours ago [-]
> globalization would be good for all countries, they just kinda forgot about the massive political instability it could cause
Could you say a few more words on this please? Are you referring to the rise of China?
mac-attack 4 hours ago [-]
> In contrast, this article, i.e., the paper it discusses, is based on what has happened so far.
What happened in 2023 and 2024, actually.
Nitpicky but it's worth noting that last year's AI capabilities are not the April 2025 AI capabilities and definitely won't be the December 2025 capabilities.
It's using deprecated/replaced technology to make a statement that is not forward-projecting. I'm struggling to see the purpose. It's like announcing that the sun is still shining at 7pm, no?
rafaelmn 4 hours ago [-]
I feel like model improvement is severely overstated by the benchmarks and the last release cycle basically made no difference to my use cases. If you gave me Claude 3.5 and 3.7 I couldn't really tell the difference. OpenAI models feel like they are regressing, and LLAMA 4 regressed even on benchmarks.
And the hype was insane in 2023 already - it's useful to compare actual outcomes vs historic hype to gauge how credible the hype sellers are.
kypro 3 hours ago [-]
That's interesting. I think there's been some pretty significant improvements in the rate of hallucinations and accuracy of the models, especially when it comes to rule following. Perhaps the biggest improvement though is in the size of context windows which are huge compared to this time last year.
Maybe progress over the last 2-3 months is hard to see, but progress over the last 6 is very clear.
freejazz 2 hours ago [-]
Well, it's a given that the sun is shining when it is out. Not so much with the AI.
colinmorelli 7 hours ago [-]
This is the real value of AI that, I think, we're just starting to get into. It's less about automating workflows that are inherently unstructured (I think that we're likely to continue wanting humans for this for some time).
It's more about automating workflows that are already procedural and/or protocolized, but where information gathering is messy and unstructured (I.e. some facets of law, health, finance, etc).
Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs, your medical history, your preferences, etc. But gathering all of that information requires a mix of collecting medical records, talking to the patient, etc. Once that information is available, we can execute a fairly procedural plan to put together a diet that will likely work for you.
These are cases to which I believe LLMs are actually very well suited, if the solution can be designed in such a way as to limit hallucinations.
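As a toy illustration of how formulaic the planning step can be once the inputs are gathered, here's a sketch using the Mifflin-St Jeor equation (a standard resting-energy estimate; the activity factor and numbers are illustrative only, not dietary advice):

    # Toy sketch: estimate a daily calorie target from structured intake data.
    # Mifflin-St Jeor resting energy expenditure, scaled by an activity level.
    def daily_calorie_target(weight_kg: float, height_cm: float,
                             age: int, male: bool,
                             activity_factor: float = 1.4) -> float:
        bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if male else -161)
        return bmr * activity_factor

    print(round(daily_calorie_target(80, 180, 40, True)))  # -> 2422

The hard part is getting the weight, history, and preferences out of messy records and conversations; the arithmetic at the end is the easy bit.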
karpour 5 hours ago [-]
I recently tried looking up something about local tax law in ChatGPT. It confidently told me a completely wrong rule. There are lots of sources for this, but since some probably unknowingly spread misinformation, ChatGPT just treated it as correct. Since I always verify what ChatGPT spits out, it wasn't a big deal for me, just a reminder that it's garbage in, garbage out.
freehorse 5 hours ago [-]
Yeah, I also find that LLMs very often say something wrong just because they found it on the internet. The problem is that we know not to trust a random website, but LLMs make wrong info more believable. So the problem in some sense is not exactly the LLM, as they pick up on wrong stuff people or "people" have written, but they are really bad at figuring these errors out and particularly good at covering for them or backing them up.
mediaman 4 hours ago [-]
Out of curiosity, did you try this in o3?
O3's web research seems to have gotten much, much better than their earlier attempts at using the web, which I didn't like. It seems to browse in a much more human way (trying multiple searches, noticing inconsistencies, following up with more refined searches, etc).
But I wonder how it would do in a case like yours where there is conflicting information and whether it picks up on variance in information it finds.
SpicyLemonZest 3 hours ago [-]
I just asked o3 how to fill out a form 8949 for a sale with an incorrect 1099-B basis not reported to the IRS. It said (with no caveats or hedging, and explicit acknowledgement that it understood the basis was not reported) that you should put the incorrect basis in column (e) with adjustments in (f) and (g), while the IRS instructions are clear (as much as IRS instructions can be...) that in this scenario you should put the correct basis directly in column (e).
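For concreteness, with made-up numbers (say proceeds of $1,000, a broker-reported basis of $500 that was never sent to the IRS, and a correct basis of $600), the instructions have you do this:

    # Hypothetical Box E sale (basis NOT reported to the IRS):
    proceeds = 1000
    reported_basis = 500   # what the incorrect 1099-B shows
    correct_basis = 600    # what your own records support

    # Per the IRS instructions for this scenario: put the correct basis
    # directly in column (e); columns (f)/(g) stay empty.
    column_e = correct_basis
    gain = proceeds - column_e   # 400, not the 500 the 1099-B implies

What o3 described (reported basis in (e), adjustments in (f)/(g)) is, as far as I can tell, the procedure for the case where the basis was reported to the IRS, which is exactly the kind of subtle conflation you'd miss without reading the instructions yourself.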
vjvjvjvjghv 4 hours ago [-]
I think this will be fixed by having LLMs trained not on the whole internet but on well curated content. To me this feels like the internet in maybe 1993. You see the potential and it's useful. But a lot of work and experimentation has to be done to work out use cases.
I think it’s weird to reject AI based on its current form.
throwaway743 5 hours ago [-]
ChatGPT isn't any good these days. Try switching to Claude or Gemini 2.5 Pro.
calmoo 5 hours ago [-]
ChatGPT is still good. Try o3.
dingnuts 5 hours ago [-]
"Hallucination" implies that the LLM holds some relationship to truth. Output from an LLM is not a hallucination, it's bullshit[0].
> Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs
No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive. And I would know: I've had to use one to help me manage an eating disorder!
There is already so much bullshit in the diet space that adding AI bullshit (again, using the technical definition of bullshit here) only stands to increase the value of an interaction with a person with knowledge.
And that's without getting into what happens when brand recommendations are baked into the training data.
Exactly! All LLMs do is “hallucinate”. Sometimes the output happens to be right, same as a broken clock.
colinmorelli 5 hours ago [-]
I find this way of looking at LLMs to be odd. Surely we all are aware that AI has always been probabilistic in nature. Very few people seem to go around talking about how their binary classifier is always hallucinating, but just sometimes happens to be right.
Just like every other form of ML we've come up with, LLMs are imperfect. They get things wrong. This is more of an indictment of yeeting a pure AI chat interface in front of a consumer than it is an indictment of the underlying technology itself. LLMs are incredibly good at doing some things. They are less good at other things.
There are ways to use them effectively, and there are bad ways to use them. Just like every other tool.
SketchySeaBeast 4 hours ago [-]
The problem is they are being sold as everything solutions. Never write code / google search / talk to a lawyer / talk to a human / be lonely again, all here, under one roof. If LLM marketing was staying in its lane as a creator of convincing text we'd be fine.
vjvjvjvjghv 4 hours ago [-]
I think a lot of problems will be solved by explicitly training on high quality content and probably injecting some expert knowledge in addition
henryaj 4 hours ago [-]
You imply that, like a stopped clock, LLMs are only right occasionally and randomly. Which is just nonsense.
mathgeek 3 hours ago [-]
Although I get what you're saying, it's still true that if something is wrong randomly at any point, it is always "randomly wrong".
habinero 3 hours ago [-]
It's true, though. It strings together plausible words using a statistical model. If those words happen to mean something, it's by chance.
mordymoop 4 hours ago [-]
Same is true of humans fwiw.
colinmorelli 5 hours ago [-]
> "Hallucination" implies that the LLM holds some relationship to truth. Output from an LLM is not a hallucination, it's bullshit[0].
I understand your perspective, but the intention was to use a term we've all heard to reflect the thing we're all thinking about. Whether or not this is the right term to use for scenarios where the LLM emits incorrect information is not relevant to this post in particular.
> No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive.
No, this is not why real dietitians are expensive. Real dietitians are expensive because they go through extensive training on a topic and are a licensed (and thus supply constrained) group. That doesn't mean they're operating without a grounding fact base.
Dietitians are not making up nutritional evidence and guidance as they go. They're operating on studies that have been done over decades of time and millions of people to understand in general what foods are linked to what outcomes. Yes, the field evolves. Yes, it requires changes over time. But to suggest we "don't know" is inconsistent with the fact that we're able to teach dietitians how to construct diets in the first place.
There are absolutely cases in which the confounding factors for a patient are unique enough such that novel human thought will be required to construct a reasonable diet plan or treatment pathway for someone. That will continue to be true in law, health, finances, etc. But there are also many, many cases where that is absolutely not the case, the presentation of the case is quite simple, and the next step actions are highly procedural.
This is not the same as saying dietitians are useless, or physicians are useless, or attorneys are useless. It is to say that, due to the supply constraints of these professions, there are always going to be fundamental limits to the amount they can produce. But there is a credible argument to be made that if we can bolster their ability to deliver the common scenarios much more effectively, we might be able to unlock some of the capacity to reach more people.
jakubmazanec 6 minutes ago [-]
Awfully early? We have had "useful" LLMs for almost three years. Where is the productivity increase? Also, your example is not very relevant. 1) Would you pay the professional if you couldn't find the answer yourself? 2) If search engines were still useful (I'm assuming you googled first), wouldn't they be able to find the official calculator too?
ozgrakkurt 7 hours ago [-]
It can’t replace a human for support, it is not even close to replacing a junior developer. It can’t replace any advice job because it lies instead of erroring.
As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.
Main value you get from a programmer is they understand what they are doing and they can take the responsibility of what they are developing. Very junior developers are hired mostly as an investment so they become productive and stay with the company. AI might help with some of this but doesn’t really replace anyone in the process.
For support, there is massive value in talking to another human and having them trying to solve your issue. LLMs don’t feel much better than the hardcoded menu style auto support there already is.
I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
franticgecko3 6 hours ago [-]
I agree with most of your points but this one
>I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
No way. NFTs did not make any headway in "the real world": their value proposition was that their cash value was speculative, like most other Blockchain technologies, and that understandably collapsed quickly and brilliantly. Right now developers are using LLMs and they have real tangible advantages. They are more successful than NFTs already.
I'm a huge AI skeptic and I believe it's difficult to measure their usefulness while we're still in a hype bubble but I am using them every day, they don't write my prod code because they're too unreliable and sloppy, but for one shot scripts <100 lines they have saved me hours, and they've entirely replaced stack overflow for me. If the hype bubble burst today I'd still be using LLMs tomorrow. Cannot say the same for NFTs
catdog 5 hours ago [-]
LLMs are somewhat useful compared to NFTs and other blockchain bullshit which is nearly completely useless.
It will be interesting what happens when the money from the investment bubble dries out and the real costs need to be paid by the users.
atrus 7 hours ago [-]
> As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.
How exactly is this different from getting advice from someone who acts confidently knowledgeable? Diet advice is an especially egregious example, since I can have 40 different dieticians give me 72 different diet/meal plans with them saying 100% certainty that this is the correct one.
It's bad enough the AI marketers push AI as some all knowing, correct oracle, but when the anti-ai people use that as the basis for their arguments, it's somehow more annoying.
Trust but verify is still a good rule here, no matter the source, human or otherwise.
mcmcmc 6 hours ago [-]
In the case of dieticians, investment advisors, and accountants they are usually licensed professionals who face consequences for misconduct. LLMs don’t have malpractice insurance
zo1 1 hour ago [-]
Good luck getting any of that to happen. All that does is raise the barrier for proof and consequence, because they've got accreditation and "licensing bodies" with their own opaque rules and processes. Accreditation makes it seem like these people are held to some amazing standard with harsh penalties if they don't comply, but really they just add layers of abstraction and places for incompetence, malice and power-tripping to hide.
E.g. Next time a lawyer abandons your civil case and ghosts you after being clearly negligent and down-right bad in their representation. Good luck holding them accountable with any body without consequences.
DharmaPolice 6 hours ago [-]
If a junior developer lies about something important, they can be fired and you can try to find someone else who wouldn't do the same thing. At the very least you could warn the person not to lie again or they're gone. It's not clear that you can do the same thing with an LLM as they don't know they've lied.
atrus 6 hours ago [-]
You're falling into the mistake of "correct" or "lied" though. Being wrong isn't lying.
bluefirebrand 6 hours ago [-]
Inventing answers is lying
If I ask it how to accomplish a task with the C standard library and it tells me to use a function that doesn't exist in the C standard library, that's not just "wrong" that is a fabrication. It is a lie
atrus 6 hours ago [-]
Lying requires intent to deceive.
If you ask me to remove whitespace from a string in Python and I mistakenly tell you use ".trim()" (the Java method, a mistake I've made annoyingly too much) instead of ".strip()", am I lying to you?
It's not a lie. It's just wrong.
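For the record, a Python interpreter makes the difference obvious:

    >>> "  hi  ".strip()     # the Python method
    'hi'
    >>> "  hi  ".trim()      # the Java spelling; Python strings have no trim()
    Traceback (most recent call last):
      ...
    AttributeError: 'str' object has no attribute 'trim'

A wrong answer given in good faith, and one that blows up the moment you check it.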
bluefirebrand 5 hours ago [-]
You are correct that there is a difference between lying and making a mistake, however
> Lying requires intent to deceive
LLMs do have an intent to deceive, built in!
They have been built to never admit they don't know an answer, so they will invent answers based on faulty premises
I agree that for a human mixing up ".trim()" and ".strip()" is an honest mistake
In the example I gave you are asking for a function that does not exist. If it invents a function, because it is designed to never say "you are wrong that doesn't exist" or "I don't know the answer" that seems to qualify to me as "intent to deceive" because it is designed to invent something rather than give you a negative sounding answer
Smeevy 5 hours ago [-]
An LLM is not "just wrong" either. It's just bullshit.
The bullshitter doesn't care about if what they say is true or false or right or wrong. They just put out more bullshit.
sharemywin 6 hours ago [-]
which is interesting because AI doesn't have intent and is therefore incapable of lying.
asadotzler 2 hours ago [-]
Of course it has intent. It was literally designed to never say "I don't know" and to instead give whatever string of words best fits the pattern. That's intent. It was designed with the intent to deceive rather than to offer any confidence levels or caveats. That's lying.
naming_the_user 5 hours ago [-]
It’s more like bullshitting, which is in between the two. Basically, like that guy who always has some story to tell. He’s not lying as such, he’s just waffling.
ta20240528 6 hours ago [-]
" since I can have 40 different dieticians give me 72 different diet/meal plans with them saying 100% certainty that this is the correct one."
Because, as Brad Pilon of intermittent fasting fashion repeatedly stresses, "All diets work."*
* Once there is an energy deficit.
econ 2 hours ago [-]
OT but funny: I see a YouTube video with a lot of before and after photos where the coach guarantees results in 60 days. It was entirely focused on avoiding stress and strongly advised against caloric restriction. Something like sleeping is many times more important than exercise and exercise is many times more important than diet.
From what I know, dieticians don't design exercise plans. (If true) the LLM has better odds of figuring it out.
catdog 5 hours ago [-]
I would not say all of them but in general I agree, there is not one correct one but many correct ones.
munksbeer 6 hours ago [-]
> Trust but verify is still a good rule here
I wouldn't have a clue how to verify most things that get thrown around these days. How can I verify climate science? I just have to trust the scientific consensus (and I do). But some people refuse to trust that consensus, and they think that by reading some convincing sounding alternative sources they've verified that the majority view on climate science is wrong.
The same can apply for almost anything. How can I verify dietary studies? Just having the ability to read scientific studies and spot any flaws requires knowledge that only maybe 1 in 10000 people could do, if not worse than that.
blackoil 3 hours ago [-]
Ironic, but keep asking LLMs till you can connect the answer to your "known truth" knowledge. For many topics I spend ~15-60 mins asking for details, questioning any contradictory answers, and verifying assumptions to get what feels like the right answer. I've talked with them about topics varying from democracy and economics, to irrational number proofs, to understanding rainbows.
tempfile 6 hours ago [-]
Do people actually behave this way with you? If someone presents a plan confidently without explaining why, I tend to trust them less (even people like doctors, who just happen to start with a very high reputation). In my experience people are very forthcoming with things they don't know.
atrus 6 hours ago [-]
Someone can present a plan, explain that plan, and be completely wrong.
People are forthcoming with things they know they don't know. It's the stuff that they don't know that they don't know that get them. And also the things they think they know, but are wrong about. This may come as a shock, but people do make mistakes.
dml2135 5 hours ago [-]
And if someone presents a plan, explains that plan, and is completely wrong repeatedly and often, in a way that makes it seem like they don’t even have any concept whatsoever of what they may have done wrong, wouldn’t you start to consider at some point that maybe this person is not a reliable source of information?
Workaccount2 5 hours ago [-]
I trust cutting edge models now far more than the ones from a few years ago.
People talk a lot of about false info and hallucinations, which the models do in fact do, but the examples of this have become more and more far flung for SOTA models. It seems that now in order to elicit bad information, you pretty much have to write out a carefully crafted trick question or ask about a topic so on the fringes of knowledge that it basically is only a handful of papers in the training set.
However, asking "I am sensitive to sugar, make me a meal plan for the week targeting 2000cal/day and high protein with minimally processed foods" I would totally trust the output to be on equal footing with a run of the mill registered dietician.
As for the junior developer thing, my company has already forgone paid software solutions in order to use software written by LLMs. We are not a tech company, just old school manufacturing.
asadotzler 2 hours ago [-]
I get wrong answers for basic things like how to fill out a government form or the relationship between two distant historical figures, things I'm actually working on directly and not some "trick" to get the machine to screw up. They get a lot right a lot of the time, but they're inherently untrustworthy because they sometimes get things subtly or catastrophically wrong and without some kind of consistent confidence scoring, there's no way to tell the difference without further research, and almost necessarily on some other tool because LLMs like to hold onto their lies and it's very difficult to convince them to discard a hallucination.
victorbjorklund 4 hours ago [-]
NFTs never had any real value. It was just speculation, hoping some bigger sucker would come after you.
LLMs create real value. I save a bunch of time coding with an LLM vs without one. Is it perfect? No, but it does not have to be to still create a lot of value.
Are some people hyping it up too much? Sure, and reality will set in, but it won't blow up. It will rather be like the internet. In the 2000s everyone thought "slap some internet on it and everything will be solved". They overestimated the (short-term) value of the internet. But the internet was still useful.
asadotzler 2 hours ago [-]
NFTs weren't a trillion dollar black hole that's yet to come close to providing value anywhere near that investment level. Come back when AI companies are actually profitable. Until then, LLM AI value is negative, and if the companies can't turn that around, they'll be as dead as NFTs and you won't even get the heavily subsidized, company-killing free or cheap features you think are solid.
gokhan 3 hours ago [-]
> I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
Can't disagree more (on LLMs. NFTs are of course rubbish). I'm using them with all kinds of coding tasks with good success, and it's getting better every week. Also created a lot of documents using them, describing APIs, architecture, processes and many more.
Lately working on creating an MCP for an internal mid-sized API of a task management suite that manages a couple hundred people. I wasn't sure about the promise of AI handling your own data until starting this project, now I'm pretty sure it will handle most of the personal computing tasks in the future.
econ 3 hours ago [-]
> It can’t replace a human for support,
It doesn't have to. It can replace having no support at all.
It would be possible to run a helpdesk for a free product. It might suck but it could be great if you are stuck.
Support call centers usually work in layers. Someone to pick up the phone who started 2 days ago and knows nothing. They forward the call to someone who managed to survive for 3 weeks. Eventually you get to talk to someone who knows something but can't make decisions.
It might take 45 minutes before you get to talk to only the first helper. Before you penetrate deep enough to get real support you might lose an hour or two. The LLM can answer instantly and do better than tortured minimum wage employees who know nothing.
There may be large waves of similar questions if someone or something screwed up. The LLM can handle that.
The really exciting stuff will come where the LLM can instantly read your account history and has a good idea what you want to ask before you do. It can answer questions you didn't think to ask.
This is especially great if you've had countless email exchanges with miles of text repeating the same thing over and over. The employee can't read 50 pages just to get up to speed on the issue, and if they had the time, you don't, so you explain again for the 5th time that delivery should be to address B, not A, and on these days between these times, unless they are type FOO orders.
Stuff that would be obvious and easy if they made actual money.
steamrolled 5 hours ago [-]
> It can’t replace a human for support
But it is replacing it. There's a rapidly-growing number of large, publicly-traded companies that replaced first-line support with LLMs. When I did my taxes, "talk to a person" was replaced with "talk to a chatbot". Airlines use them, telcos use them, social media platforms use them.
I suspect what you're missing here is that LLMs here aren't replacing some Platonic ideal of CS. Even bad customer support is very expensive. Chatbots are still a lot cheaper than hundreds of outsourced call center people following a rigid script. And frankly, they probably make fewer mistakes.
> and it will blow up like NFTs
We're probably in a valuation bubble, but it's pretty unlikely that the correct price is zero.
NeutralCrane 2 hours ago [-]
> As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.
Have you somehow managed to avoid the last several decades of human-sourced dieting advice?
DoughnutHole 5 hours ago [-]
> It can’t replace a human for support
It doesn’t wholly replace the need for human support agents but if it can adequately handle a substantial number of tickets that’s enough to reduce headcount.
A huge percentage of problems raised in customer support are solved by otherwise accessible resources that the user hasn’t found. And AI agents are sophisticated enough to actually action on a lot of issues that require action.
The good news is that this means human agents can focus on the actually hard problems when they’re not consumed by as much menial bullshit. The bad news for human agents is that with half the workload we’ll probably hit an equilibrium with a lot fewer people in support.
unquietwiki 4 hours ago [-]
I already know of at least one company that's pivoted to using a mix of AI and off-shoring for their support, as well as some other functions; that's underway, with results unclear, aside from layoffs that took place. There was also a brouhaha a year or two ago when a mental health advocacy group tried using AI to replace their support team... it did not go as planned when it suggested self-harm to some users.
vjvjvjvjghv 4 hours ago [-]
LLM is already very useful for a lot of tasks. NFT and most other crypto has never been useful for anything other than speculation.
chgs 7 hours ago [-]
I tend to use AI for the same things I’d have used Google for in 2005.
Google is pretty much useless now as it changed into an ad platform, and I suspect AI will go the same way soon enough.
flmontpetit 6 hours ago [-]
It seems like an obvious thing on the surface, but I've already noticed that when asked questions on LLM usage (eg building RAG pipelines and whatnot), ChatGPT will exclusively refer you to OpenAI products.
erkt 6 hours ago [-]
I just asked O3 for a software stack for deploying AI in a local application and it recommended llama over OpenAI API.
It has always been easy to imagine how advertising could destroy the integrity of LLM's. I can guarantee that there will be companies unable to resist the temporary cash flows from it. Those models will destroy their reputation in no time.
I'm an AI pessimist, yet I don't see this happening (at least not without some major advancements in how LLMs work).
One major problem is the payment mechanism. The nature of LLMs means you just can't really know or force it to spit out ad garbage in a predictable manner. That'll make it really tricky for an advertiser to want to invest in your LLM advertising (beyond being able to sell the fact that they are an AI ad service).
Another is going to be regulations. How can you be sure to properly highlight "sponsored" content in the middle of an AI hallucination? These LLM companies run a very real risk of running afoul of FTC rules.
handfuloflight 3 hours ago [-]
> The nature of LLMs means you just can't really know or force it to spit out ad garbage in a predictable manor.
You certainly can with middleware on inference.
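A minimal sketch of that idea, with hypothetical names (a real system would likely rewrite or rank responses rather than just appending to them):

    # Hypothetical post-inference middleware: the ad placement is
    # deterministic because it happens *after* the model runs, so the
    # advertiser doesn't depend on the LLM reliably emitting ad copy.
    def serve(prompt: str, model, ad_picker) -> str:
        answer = model.generate(prompt)          # normal inference
        ad = ad_picker.match(prompt)             # relevant sponsor, or None
        if ad is None:
            return answer
        return f"{answer}\n\n[Sponsored] {ad}"   # disclosure stays predictable

That also sidesteps the FTC-labeling worry upthread: the "[Sponsored]" tag is added by ordinary code, not generated by the model.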
rurp 6 hours ago [-]
It matters a lot how much of the market they capture before then though. Oracle and Google are two companies that have spent years torching their reputation but they are still ubiquitous and wildly profitable.
JoshTko 6 hours ago [-]
My bet is that free versions of models will become sponsor aligned.
notahacker 2 hours ago [-]
A friend of mine is working on a "RAG for tax advisers" startup. He's selling it to the tax accountants, as a way for them to spot things they wouldn't otherwise have the time to review and generate more specialist tax advisory work. There's a lot more work they could do if only it's affordable to the businesses they advise (none of whom could do their own taxes even if they wanted to!)
Jevons law in action: some pieces of work get lost, but lower cost of doing work generates more demand overall...
dcchuck 7 hours ago [-]
I actually paid for tax advice from one of those big companies (it was recommended - last time I will take that person's recommendations!). I was very disappointed in the service. It felt like the person I was speaking to on the phone would have been better off just echoing the request into AI. So I did just that as I waited on the line. I found the answer and the tax expert "confirmed" it.
ninetyninenine 6 hours ago [-]
According to the article the Tax expert still has a job though.
BeFlatXIII 35 minutes ago [-]
> set patterns only mildly tailored for the recipient
If that’s true, probably for the best that those jobs get replaced. Then again, the value may have been in the personal touch (pay to feel good about your decisions) rather than quality of directions.
vishnugupta 5 hours ago [-]
I use it everyday to an extent that I’ve come to depend on it.
For copywriting, analyzing contracts, exploring my business domain, etc etc. Each of those tasks would have required me to consult with an expert a few years ago. Not anymore.
HeyLaughingBoy 2 hours ago [-]
> tax advice that would have cost hundreds of dollars to get from a licensed tax agent
But are those really the same? You're not paying the tax agent to give you the advice per se: even before Gemini, you could do your own research for free. You're really paying the tax agent to provide you advice that you can trust without having to go to the extra steps of doing deep research.
One of the most important bits of information I get from my tax agent is, "is this likely to get me audited if we do it?" It's going to be quite some time before I trust AI to answer that correctly.
freedomben 2 hours ago [-]
Also, frequently you're buying some protection by using the licensed agent. If they give you bad advice, there's a person to go to and in the most extreme cases, maybe file a lawsuit or insurance claim against.
m3047 2 hours ago [-]
The biggest harm today is to people in training for "people interaction" specialties requiring a high degree of empathy / ability to read others: psychology, counseling, forensic interviewing. They pay a lot of money (or it's invested in them) to get trained and then have to do a practical residency / internship. They need to do those internships NOW, and a significant proportion of the population they'd otherwise interact with to do so is off faffing about with AI. The fact that anecdotally they can't seem to create "convincing" enough AIs to take the place of subjects is damning.
gilbetron 7 hours ago [-]
How do you know it was correct without being a tax expert? And consulting a tax expert would give you legal recourse if it was wrong.
colinmorelli 7 hours ago [-]
As for correctness, they mentioned the LLM citing links that the person can verify. So there is some protection at that level.
But, also, the threshold of things we manage ourselves versus when we look to others is constantly moving as technology advances and things change. We're always making risk tradeoff decisions measuring the probability we get sued or some harm comes to us versus trusting that we can handle some tasks ourselves. For example, most people do not have attorneys review their lease agreements or job offers, unless they have a specific circumstance that warrants they do so.
The line will move, as technology gives people the tools to become better at handling the more mundane things themselves.
ryandrake 7 hours ago [-]
If it’s also returning links, wouldn’t it be faster and more authoritative to just go read the official links and skip the LLM slop entirely?
blendergeek 6 hours ago [-]
No. The LLM in the story found the necessary links. In this case the LLM was a better search engine.
krisoft 6 hours ago [-]
Sure. But often you don’t know how to find the information, or what the right technical terms for your problem are.
In a more general sense, sometimes, but not always, it is easier to verify something than to come up with it in the first place.
victorbjorklund 4 hours ago [-]
But if you don't know anything about programming, a link to a library/etc. is not so useful. Same if you don't know about tax law and it cites the tax code and how it should be understood (the code is correct but the interpretation is not).
andy99 6 hours ago [-]
I think in many cases, chatbots may make information accessible to people who otherwise wouldn't have it, like in the OP's case. But I'm more sceptical that they're replacing experts in specialized subjects who had previously been making a living at them. They would be serving different markets.
conductr 7 hours ago [-]
I’m a bit of a skeptic too and kind of agree on this. Also, the human employee displacement will be slow. It will start not by eliminating existing jobs but by eliminating the need for additional headcount, so it caps the growth of these labor markets. As it does that, the folks in the roles leveraging AI the most will start slowly stealing share of demand as they find more efficient and cheaper ways to perform the work. Meanwhile, core demand is shrinking as self service by customers is increasingly enabled. Then at some step pattern, perhaps the next global business cycle downturn, the headcount starts trending downward. This will repeat a handful of times, probably taking decades to be measured in aggregate by this type of study.
nzeid 3 hours ago [-]
> Aside from the obvious frontline support, artist, junior coder etc, a whole bunch of white collar "pay me for advice on X" jobs (dietician, financial advice, tax agent, etc), where the advice follows set patterns only mildly tailored for the recipient, seem to be severely at risk.
These examples aren't wrong but you might be overstating their impact on the economy as a whole.
E.g. the overwhelming majority of people do not pay solely for tax advice, or have a dietician, etc. Corporations already crippled their customer support so there's no remaining damage to be dealt.
Your tax example won't move the needle on people who pay to have their taxes done in their entirety.
datpuz 2 hours ago [-]
Nearly every job, even a lot of creative ones, require a degree of accuracy and consistency that gen AI just can't deliver. Until some major breakthrough is achieved, not many people doing real work are in danger.
ericmcer 4 hours ago [-]
This is a great point. I was just using it to understand various DMV procedures. It is invaluable for navigating bureaucracy, so if your job is to ingest and regurgitate a bunch of documents and procedures you may be highly at risk here.
That is a great use for it too, rather than replacing artists we have personal advisors who can navigate almost any level of complex bureaucracy instantaneously. My girlfriend hates AI, like rails against it at any opportunity, but after spending a few hours on the DMV website I sat down and fed her questions into Claude and had answers in a few seconds. Instant convert.
soared 7 hours ago [-]
Similarly, while not perfect, I use AI to help redesign my landscaping by uploading a picture of my yard and having it come up with different options.
Also took a picture of my tire while at the garage and asked it if I really needed new tires or not.
Took a picture of my sprinkler box and had it figure out what was going on.
Potentially all situations where I would’ve paid (or paid more than I already was) a local laborer for that advice. Or at a minimum spent much more time googling for the info.
mmmBacon 5 hours ago [-]
For the tire you can also use a penny. If you stick the penny in the tread with Lincoln's head down and his hair isn't covered, then you need new tires. No AI. ;)
olyjohn 49 minutes ago [-]
You don't even need a penny, or have to remember where on the penny you're supposed to be looking... There are wear bars in the tread in every single tire. If the tire tread is flush with them, the tires are shot. Also there is a date code on the side, and if your tires are getting near 10 years old, it's probably a good time to replace them.
notTooFarGone 7 hours ago [-]
So in the coming few years, the answer to the question of whether or not to change your tires will come with suggestions for shops in your area and a recommendation to change them. Do you think you would trust the outcome?
handfuloflight 3 hours ago [-]
God forbid people attempt to facilitate legitimate commerce!
Workaccount2 5 hours ago [-]
I am hoping that there will always be premium paid options for LLMs, and thus the onus would be on the user whether or not they want biased answers.
These will likely be cell-phone-plan level expensive, but the value prop would still be excellent.
dist-epoch 5 hours ago [-]
Why do you think that's not a problem today when you ask a car mechanic?
redwall_hp 3 hours ago [-]
My mechanic takes a video of a tire tread depth gauge being inserted into each wheel and reports the values, when doing the initial inspection and tests before every oil change.
It's something that can be empirically measured instead of visually guessed at by a human or magic eight-ball. Using a tool that costs only a few dollars, no less, like the pressure gauge you should already keep in your glovebox.
loloquwowndueo 6 hours ago [-]
Once they have you hooked they’ll start jacking up the prices.
msp26 6 hours ago [-]
It's a race to the bottom for pricing. They can't do shit. Even if the American companies colluded to stop competing and raise prices, Chinese providers will undermine that.
There is no moat. Most of these AI APIs and products are interchangeable.
asadotzler 1 hour ago [-]
OK, so they won't raise prices; they'll simply EOL their too-expensive-to-maintain services, and users won't feel the impact on their wallets, they'll just lose their tool and historical data and whatever else of theirs was actually the property of the company.
quickthrowman 6 hours ago [-]
> Also took a picture of my tire while at the garage and asked it if I really needed new tires or not.
You can use a penny and your eyeballs to assess this, and all it costs is $0.01
ninetyninenine 6 hours ago [-]
I find it easily hallucinates this stuff. Its understanding of a picture is decidedly worse than its understanding of words. Be careful here about asking if it needs a tire change; it is likely giving you an answer that only looks real.
bluefirebrand 6 hours ago [-]
It's also something so trivial to determine yourself.
It blows my mind the degree that people are offloading any critical thinking to AI
tbrownaw 5 hours ago [-]
There's a reason that people have to be told to not just believe everything they read on the Internet. And there's a reason some people still do that anyway.
> And yes, the answer was supported by actual sources pointing to the tax office website, including a link to the office's well-hidden official calculator of precisely the thing I thought I would have to pay someone to figure out.
Sounds like reddit could also do a good job at this, though nobody said "reddit will replace your jobs". Maybe because not as many people actively use reddit as they use generative AI now, but I cannot imagine any other reason than that.
mr_toad 7 hours ago [-]
> I recently used Gemini for some tax advice that would have cost hundreds of dollars to get from a licensed tax agent.
That’s like buying a wrench and changing your own spark plugs. Wrenches are not putting mechanics out of business.
Avicebron 7 hours ago [-]
Depends on how good the wrench is, if I can walk over to the wrench, kick it, say change my spark plugs now you fuck, and it does so instantly and for free and doesn't complain....
jaredcwhite 32 minutes ago [-]
> including a link to the office's well-hidden official calculator
So…all you needed was a decent search engine, which in the past would have been Google before it was completely enshittified.
worik 13 minutes ago [-]
> So…all you needed was a decent search engine,
Yes.
"...all you need" A good search engine is a big ask. Google at its height was quite good. LLMs are shaping up to be very good search engines
That would be enough, for me to be very pleased with them
mchusma 5 hours ago [-]
Yeah, in 2023 I would expect no effect. In 2024 I think generally not; it wasn't good or deployed enough. I think 2025 might show the first signs, but I still think there is a lot of plumbing and working with these things. 2026, though, I expect to show an effect.
asadotzler 1 hour ago [-]
That's not what the AI boosters and shills were saying in 2021. Might be worth a refresher if you've got the time, but nearly every timeline that's been floated by any of the leadership of the OG LLM makers has been as hallucinated as the worst answers coming from their bots.
Jeff_Brown 5 hours ago [-]
2024 was already madness for translators and graphic artists, according to my personal anecdata.
jmull 2 hours ago [-]
> ...where the advice follows set patterns only mildly tailored for the recipient, seem to be severely at risk
I doubt it.
Search already "obsoletes" these fields in the same way AI does. AI isn't really competing against experts here, but against search.
It's also really not clear that AI has an overall advantage over dumb search in this area. AI can provide more focused/tailored results, but it costs more. Keep in mind that AI hasn't been enshittified yet like search. The enshittification is inevitable and will come fast and hard considering the cost of AI. That is, AI responses will be focused and tailored to better monetize you, not better serve you.
eqmvii 7 hours ago [-]
Your tax example isn't far off from what's already possible with Google.
The legal profession specifically saw the rise of computers, digitization of cases and records, and powerful search... it's never been easier to "self help" - yet people still hire lawyers.
ravenstine 5 hours ago [-]
I don't trust what anyone says in this space because there is so much money to be made (by a fraction of people) if AI lives up to its promise, and money to be made to those who claim that AI is "bullshit".
The only thing I can remotely trust is my own experience. Recently, I decided to have some business cards made, which I haven't done in probably 15 years. A few years ago, I would have either hired someone on Fiverr to design my business card or pay for a premade template. Instead, I told Sora to design me a business card, and it gave me a good design the first time; it even immediately updated it with my Instagram link when I asked it to.
I'm sorry, but I fail to see how AI, as we now know it, doesn't take the wind out of the sails of certain kinds of jobs.
epicureanideal 5 hours ago [-]
But didn't we have business card template programs, and even free suggested business card designs from the companies that sell business cards, almost immediately after they opened for business on the internet?
ravenstine 5 hours ago [-]
That's missing the point, and is kind of like saying why bother paying someone to build you a house when there are DIY home building kits. (or why even buy a home when you can live in a van rent-free)
The point is that I would have paid for another human being's time. Why? Because I am not a young man anymore, and have little desire to do everything myself at this point. But now, I don't have to pay for someone's time, and that surplus time doesn't necessarily transfer to something equivalent like magic.
JohnMakin 4 hours ago [-]
You do pay for it though. Compute isn't free.
ravenstine 4 hours ago [-]
Could I really have been more clear?
I am not talking about whether I have to pay more or less for anything. My problem is not paying. I want to pay so that I don't have to make something myself or waste time fiddling with a free template.
What I am proposing is that, in the current day, a human being is less likely to be at the other end of the transaction when I want to spend money to avoid sacrificing my time.
Sure, one can say that whoever is working for one of these AI companies benefits, but they would be outliers, and AI is effectively homogenizing labor units in that case. Someone with creative talent isn't going to feasibly spin up a competitive AI business the way they could have started their own business selling their services directly.
asadotzler 1 hours ago [-]
For your amateur use case, maybe. For real professions in the real economy, the article you're commenting under disagrees.
ravenstine 1 hours ago [-]
> real professions in the real economy
That's both pompous and bizarre. The "real" economy doesn't end at the walls of corporate offices. Far from it.
pclmulqdq 7 hours ago [-]
Ironically, your example is what you used to get from a Google search back when Google wasn't aggressively monetized and enshittified.
empath75 2 hours ago [-]
This thought process sort of implies that there's a limited amount of work that's available to do, and once AI is doing all of it, that everyone else will just sit on their hands and stop using their brains to do stuff.
Even if every job that exists today were currently automated _people would find other stuff to do_. There is always going to be more work to do that isn't economical for AIs to do for a variety of reasons.
boredtofears 5 hours ago [-]
I thought the value of using a licensed tax agent is that if they give you advice that ends up being bad, they have an ethical/professional obligation to clean up their mess.
cynicalsecurity 7 hours ago [-]
Not consulting a real tax advisor is probably going to cost you much more.
I wouldn't be saving on tax advisors. Moreover, I would hire two different tax advisors, so I could cross check them.
ryandrake 7 hours ago [-]
Most people’s (USA) taxes are not complex, and just require basic arithmetic to complete. Even topics like stock sales, IRA rollovers, HSAs, and rental income (which the vast majority of taxpayers don’t have) are straightforward if you just read the instructions on the forms and follow them. In 30 years of paying taxes, I’ve only had a tax professional do it once: as an experiment after I already did them myself to see if there was any difference in the output. I paid a tax professional $400 and the forms he handed me back were identical to the ones I filled out myself.
aaronbaugher 6 hours ago [-]
I'm one of those weird kids who liked doing those puzzles where you had to walk through a list of tricky instructions and end up with the right answers, so I'm pretty good at that sort of thing. I also have fairly simple finances: a regular W-2 job and a little side income that doesn't have taxes withheld. But last year the IRS sent me a $450 check and a note that said I'd made a mistake on my taxes and paid too much. Sadly, they didn't tell me what the mistake was, so I couldn't be sure to correct it this year.
Technically, all you have to do is follow the written instructions. But there are a surprising number of maybes in those instructions. You hit a checkbox that asks whether you qualify for such-and-such deduction, and find yourself downloading yet another document full of conditions for qualification, which aren't always as clear-cut as you'd like. You can end up reading page after page to figure out whether you should check a single box, and that single box may require another series of forms.
My small side income takes me from a one-page return to several pages, and next year I'm probably going to have to pay estimated taxes in advance, because that untaxed income leaves me owing more at the end of the year than some acceptable threshold, which could result in fines. All because I make an extra 10% doing some evening freelancing.
Most people's taxes shouldn't be complex, but in practice they're more complex than they should be.
ryandrake 5 hours ago [-]
I don't think it makes you weird, and taxes really aren't that much of a puzzle to put together, outside of the many deduction-related edge cases (which you can skip if you just take the standard deduction). My federal and state returns last year added up to 36 pages, not counting the attachments listing investment sales. Still, they're pretty straightforward. I now at least use online software to do them, but that's only to save time filling out forms, not for the software's "expertise." I have no doubt I could do them by hand if I wanted to give myself more writing to do.
If I can do this, most people can do a simple 2-page 1040EZ.
dismalaf 4 hours ago [-]
Google (non-Gemini) has always been a great source for tax advice, at least here in Canada because, if nothing else, the government's website appears to leave all its pages available for indexing (even if it's impossible to navigate on its own).
bamboozled 5 hours ago [-]
Here's why I don't think it matters: the machine is paying for everyone's productivity boost, even your accountant's. So maybe this tide will lift all boats. Time will tell.
Your accountant also is probably saving hundreds of dollars in other areas using AI assistance.
Personally I still think you should cross check with a professional.
doctorpangloss 5 hours ago [-]
The kind of person who wants to pay nothing for advice wasn’t going to hire a lawyer or an accountant anyway.
This fact is so simple and yet here we are having arguments about it. To me people are conflating an economic assessment - whose jobs are going to be impacted and how much - with an aspirational one - which of your acquaintances personally could be replaced by an AI, because that would satisfy a beef.
andrewmutz 5 hours ago [-]
"AI is all hype and is going to destroy the labor market"
jll29 4 hours ago [-]
Read Paul Tetlock's research about so-called "experts" and their inability to make good forecasts.
Here's my own take:
- It is far too early to tell.
- The roll-out of ChatGPT caused a mind-set revolution. People now "get" what is already possible, and it encourages conceiving and pursuing new use cases based on what they have seen.
- I would not recommend anyone train to become a translator, for sure; even before LLMs, people were paid penny amounts per word or line translated, and rates plummeted further due to tools that cache translations from previous versions of documents (SDL TRADOS etc.). The same decline is not to be expected for interpreters.
- Graphic designers that live off logo designs and similar work may see fewer requests.
- Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.
- LLMs are a basic technology that will now be embedded into various products, from email clients and word processors to workflow tools and chat clients. This will take 2-3 years, and after that it may reduce the number of people needed in an office with a secretarial/admin/"analyst" type background.
- Industry is already working on the next-gen version of smarter tools for medics and lawyers. This is more of a 3-5 year development, but then again some early adopters started 2-3 years ago. Once this is rolled out, there will be less demand for assistant-type jobs such as paralegals.
asadotzler 1 hours ago [-]
We're ten years and a trillion dollars into this. When we were 10 years and $1T into the massive internet buildout between 1998 and 2008, that physical network had added over a trillion dollars to the economy, and then about a trillion more every year after. How's the nearly ten years of LLM AI stacking up? Do we expect it'll add a trillion dollars a year to the economy in a couple years? I don't. Not even close. It'll still be a net-drain industry, deeply in the red. That trillion dollars could have done so much good if spent on something more serious than man-child dreams about creating god computers.
meroes 4 hours ago [-]
My dentist already uses something called OverJet(?) that reads X-rays for issues. They seem to trust it, and it agreed with what they suspected on the X-rays. Personally, I've been misdiagnosed through X-rays by a medical doctor, so even being an LLM skeptic, I'm slightly favorable to AI in medicine.
But I already trust my dentist. A new dentist deferring to AI is scary, and obviously will happen.
aaronbaugher 4 hours ago [-]
I had a misread X-ray once, and I can see how a machine could be better at spotting patterns than a tired technician, so I'm favorable too. I think I'd like a human to at least take a glance at it, though.
The mistake on mine was caught when a radiologist checked over the work of the weekend X-ray technician who missed a hairline crack. A second look is always good, and having one look be machine and the other human might be the best combo.
weatherlite 2 hours ago [-]
> A second look is always good, and having one look be machine and the other human might be the best combo
For now I agree.
2-4 years from now it could be 20 ultra-strong models, each trained somewhat differently, that converse about the X-ray and reach a conclusion. I don't think technicians will have much to add to the accuracy.
spondylosaurus 4 hours ago [-]
> Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.
This is such a broad category that I think it's inaccurate to say that all editors will be automated, regardless of your outlook on LLMs in general. Editing and proofreading are pretty distinct roles; the latter is already easily automated, but the former can take on a number of roles more akin to a second writer who steers the first writer in the correct direction. Developmental editors take an active role in helping creatives flesh out a work of fiction, technical editors perform fact-checking and do rewrites for clarity, etc.
sxg 4 hours ago [-]
> Read Paul Tetlock's research about so-called "experts" and their inability to make good forecasts
Do you mean Philip Tetlock? He wrote Superforecasting, which might be what you're referring to?
voxl 4 hours ago [-]
Name a better duo: software engineering hype cycles and anti-intellectualism
nearbuy 16 minutes ago [-]
Isn't your post anti-intellectual, since you're denigrating someone without justification just for referencing the work of a professor you disagree with?
42lux 4 hours ago [-]
We were the stochastic parrots all along.
kubb 2 hours ago [-]
And because we are very smart, so must it be.
nradov 4 hours ago [-]
Video VFX artists are already suffering from lower demand.
empath75 2 hours ago [-]
> - Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.
It has been a very, very long time since editors have been proof-reading prose for typos and grammar mistakes, and you don't need LLMs for that. Good editors do a lot more creative work than that, and LLMs are terrible at it.
thatjoeoverthr 9 hours ago [-]
My primary worry since the start has been not that it would "replace workers", but that it can destroy value of entire sectors. Think of resume-sending. Once both sides are automated, the practice is actually superfluous. The concept of "posting" and "applying" to jobs has to go. So any infrastructure supporting it has to go. At no point did it successfully "do a job", but the injury to the signal-to-noise ratio wipes out the economic value of a system.
This is what happened to Google Search. It, like cable news, does kinda plod along because some dwindling fraction of the audience still doesn't "get it", but decline is decline.
JumpCrisscross 8 hours ago [-]
> it can destroy value of entire sectors. Think of resume-sending. Once both sides are automated, the practice is actually superfluous
"Like all ‘magic’ in Tolkien, [spiritual] power is an expression of the primacy of the Unseen over the Seen and in a sense as a result such spiritual power does not effect or perform but rather reveals: the true, Unseen nature of the world is revealed by the exertion of a supernatural being and that revelation reshapes physical reality (the Seen) which is necessarily less real and less fundamental than the Unseen" [1].
The writing and receiving of resumes has been superfluous for decades. Generative AI is just revealing that truth.
Interesting: At first I was objecting in my mind ("Clearly, the magic - LLMs - can create effect instead of only revealing it.") but upon further reflecting on this, maybe you're right:
First, LLMs are a distillation of our cultural knowledge. As such they can only reveal our knowledge to us.
Second, they are limited even more so by the user's knowledge. I found that you can barely escape your "zone of proximal development" when interacting with an LLM.
(There's even something to be said about prompt engineering in the context of what the article is talking about: It is 'dark magic' and 'craft-magic' - some of the full potential power of the LLM is made available to the user by binding some selected fraction of that power locally through a conjuration of sorts. And that fraction is a product of the craftsmanship of the person who produced the prompt).
mjburgess 6 hours ago [-]
My view has been something of a middle ground. It's not exactly that it reveals relevant domains of activity to be merely performative, but it's a kind of "accelerationism of the almost performative". So it pushes these almost-performative systems into a death spiral of pure uselessness.
In this sense, I have rarely seen AI have negative impacts. Insofar as an LLM can generate a dozen lines of code, it forces developers to engage in less "performative copy-paste of stackoverflow/code-docs/examples/etc." and to engage the mind in what those lines should be. Even if that engagement of the mind happens via a prompt.
krainboltgreene 4 hours ago [-]
Yeah man, I'm not so sure about that. My father made good money writing resumes in his college years studying for his MFA. Same for my mother. Neither of them were under the illusion that writing/receiving resumes was important or needed. Nor were the workers or managers. The only people who were confused about it were capitalists who needed some way to avoid losing their sanity under the weight of how unnecessary they were in the scheme of things.
Rebuff5007 8 hours ago [-]
I'm not sure this is a great example... yes, the infrastructure of posting and applying to jobs has to go, but the cost of recruitment in this world would actually be much higher... you likely need more people and more resources to recruit a single employee.
In other words, there is a lot more spam in the world. Efficiencies in hiring that implicitly existed until today may no longer exist because anyone and their mother can generate a professional-looking cover letter or personal web page or w/e.
Attrecomet 6 hours ago [-]
I'm not sure that is actually a bad thing. Being a competent employee and writing a professional-looking resume are two almost entirely distinct skill sets held together only by "professional-looking" being a rather costly marker of being in the in-group for your profession.
BrtByte 8 hours ago [-]
Resume-sending is a great example: if everyone's blasting out AI-generated applications and companies are using AI to filter them, the whole "application" process collapses into meaningless busywork
Attrecomet 6 hours ago [-]
No, the whole process is revealed to be meaningless busywork. But that step has been taken for a long time, as soon as automated systems and barely qualified hacks were employed to filter applications. I mean, they're trying to solve a hard and real problem, but those solutions are just bad at it.
tbrownaw 6 hours ago [-]
Doesn't this assume that a resume has no actual relation to reality?
Attrecomet 5 hours ago [-]
The technical information on the cv/resume is, in my opinion, at most half of the process. And that's assuming that the person is honest, and already has the cv-only knowledge of exactly how much to overstate and brag about their ability and to get through screens.
Presenting soft skills is entirely random, anyway, so the only marker you can have on a cv is "the person is able to write whatever we deem well-written [$LANGUAGE] for our profession and knows exactly which meaningless phrases to include that we want to see".
So I guess I was a bit strong on the low information content, but you better have a very, very strong resume if you don't know the unspoken rules of phrasing, formatting and bragging that are required to get through to an actual interview. For those of us stuck in the masses, this means we get better results by adding information that we basically only get by already being part of the in-group, not by any technical or even interpersonal expertise.
Edit: If I constrain my argument to CVs only, I think my statement holds: They test an ability to send in acceptably written text, and apart from that, literally only in-group markers.
mprovost 4 hours ago [-]
For some applications it feels like half the signal of whether you're qualified is whether the CV is set in Computer Modern, ie was produced via LaTeX.
osigurdson 7 hours ago [-]
input -> ai expand -> ai compress -> input'
Where input' is a distorted version of input. This is the new reality.
We should start to be less impressed by volume of text and instead focus on density of information.
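A toy way to make that density point concrete (my own sketch, not the parent's, using zlib-compressed size as a crude proxy for information content):

    # Hedged illustration: padding text with boilerplate inflates its
    # volume far more than its information content, using compressed
    # size as a rough proxy for the latter.
    import zlib

    def info_bytes(text: str) -> int:
        return len(zlib.compress(text.encode("utf-8")))

    terse = "Q3 revenue fell 4% on weak EU demand; we cut guidance by 2%."
    filler = "It is worth noting that, broadly speaking, in today's landscape, "
    padded = filler * 10 + terse  # the "ai expand" step, caricatured

    print(len(terse), info_bytes(terse))    # ~61 raw bytes, roughly as many compressed
    print(len(padded), info_bytes(padded))  # ~12x the raw bytes, nowhere near 12x compressed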
blitzar 6 hours ago [-]
> the whole "application" process collapses into meaningless busywork
Always was.
bambax 8 hours ago [-]
> This is what happened to Google Search
This is completely untrue. Google Search still works, wonderfully. It works even better than other attempts at search by the same Google. For example, there are many videos that you will NEVER find on YouTube search that come up as the first results on Google Search. Same for Maps: it's much easier to find businesses on Google Search than on Maps. And it's even more true for non-Google websites; searching Stack Overflow questions on SO itself is an exercise in frustration. Etc.
weatherlite 2 hours ago [-]
Yeah I agree. But this is a strong perception and why Google stock is quite cheap (people are afraid Search is dying).
I think Search has its place for years to come (while it will evolve as well with AI) and that Google is going to be pretty much unbeatable unless it is broken up.
BeFlatXIII 34 minutes ago [-]
In the specific case of résumé-sending, the decline of the entire sector is a good thing. Nothing but make-work.
weatherlite 2 hours ago [-]
> This is what happened to Google Search. It, like cable news, does kinda plod along because some dwindling fraction of the audience still doesn't "get it", but decline is decline.
Well, their Search revenue actually went up last quarter, as it has every quarter. Overall traffic might be a bit down (they don't release that data, so we can't be sure) but not revenue.
While I do take tons of queries to LLMs now, the kind of queries Google actually makes a lot of money on (searching flights, restaurants etc) I don't go to an LLM for - either because of habit or because of fear these things are still hallucinating.
If Search was starting to die I'd expect to see it in the latest quarter earnings but it isn't happening.
thehappypm 9 hours ago [-]
Are you sure Google Search is in decline? The latest Google earnings call suggests it's still growing.
Zanfa 8 hours ago [-]
Google Search is distinct from Google's expansive ad network. Google search is now garbage, but their ads are everywhere and more profitable than ever.
OtherShrezzing 7 hours ago [-]
On Google's earnings call - within the last couple of weeks - they explicitly stated that their stronger-than-expected growth in the quarter was due to a large unexpected increase in search revenues[0]. That's a distinct line-item from their ads business.
>Google’s core search and advertising business grew almost 10 per cent to $50.7bn in the quarter, surpassing estimates for between 8 per cent and 9 per cent.[0]
The "Google's search is garbage" paradigm is starting to get outdated, and users are returning to their search product. Their results, particularly the Gemini overview box, are (usually) useful at the moment. Their key differentiator over generative chatbots is that they have reliable & sourced results instantly in their overview. Just concise information about the thing you searched for, instantly, with links to sources.
> The "Google's search is garbage" paradigm is starting to get outdated
Quite the opposite. It's never been more true. I'm not saying using LLMs for search is better, but as it stands right now, SEO spammers have beat Google, since whatever you search for, the majority of results are AI slop.
Their increased revenue probably comes down to the fact that they no longer show any search results in the first screenful at all for mobile and they've worked hard to make ads indistinguishable from real results at a quick glance for the average user. And it's not like there exists a better alternative. Search in general sucks due to SEO.
Workaccount2 5 hours ago [-]
Can you give an example of an everyday person search that generates a majority of AI slop?
If anything my frustration with google search comes from it being much harder to find niche technical information, because it seems google has turned the knobs hard towards "Treat search queries like they are coming from the average user, so show them what they are probably looking for over what they are actually looking for."
Zanfa 4 hours ago [-]
Basically any product comparison or review for example.
yreg 2 hours ago [-]
Let's try "samsung fridge review". The top results are a reddit thread, consumer reports article, Best Buy listing, Quora thread and some YouTube videos by actual humans.
Where is this slop you speak of?
disgruntledphd2 7 hours ago [-]
> Quite the opposite. It's never been more true. I'm not saying using LLMs for search is better, but as it stands right now, SEO spammers have beat Google, since whatever you search for, the majority of results is AI slop.
It's actually sadder than that. Google appear to have realised that they make more money if they serve up ad infested scrapes of Stack Overflow rather than the original site. (And they're right, at least in the short term).
Jensson 8 hours ago [-]
Most Google ads come from Google search; it's a misconception that Google derives most of its profits from third-party ads, which are just a minor part of Google's revenue.
philipov 8 hours ago [-]
You are talking past each other. They say "Google search sucks now" and you retort with "But people still use it." Both things can be true at the same time.
otabdeveloper4 4 hours ago [-]
You misunderstand. Making organic search results shittier will drive up ad revenue as people click on sponsored links in the search results page instead.
Not a sustainable strategy in the long term though.
nottorp 8 hours ago [-]
I've all but given up on google search and have Gemini find me the links instead.
Not because the LLM is better, but because the search is close to unusable.
hgomersall 8 hours ago [-]
We're in the phase of yanking hard on the enshittification handle. Of course that increases profits whilst sufficient users can't or won't move, but it devalues the product for users. It's in decline insomuch as it's got notably worse.
InDubioProRubio 8 hours ago [-]
The line goes up, democracy is fine, the future will be good. Disregard reality
benterix 8 hours ago [-]
GenAI is like plastic surgery for people who want to look better - it looks good only if you can do it in a way that doesn't show it's plastic surgery.
Resume filtering by AI can work well on the first line (if implemented well). However, once we get to the real interview rounds and I see the CV is full of AI slop, it immediately suggests the candidate will have a loose attitude to checking the work generated by LLMs. This is a problem already.
noja 4 hours ago [-]
> looks good only if you can do it in a way that doesn't show it's plastic surgery.
I think the plastic surgery users disagree here: it seems like visible plastic surgery has become a look, a status symbol.
cornholio 8 hours ago [-]
Probably the first significant hit is going to be drivers, delivery men, truckers, etc., a demographic of 5 million jobs in the US and double that in the EU, with ripple effects costing millions of other jobs in industries such as roadside diners and hotels.
The general tone of this study seems to be "It's 1995, and this thing called the Internet has not made TV obsolete"; same for the Acemoglu piece linked elsewhere in the thread. Well, no, it doesn't work like that: it first comes for your Blockbuster, your local shops and newspaper and so on, and transforms those middle-class jobs vulnerable to automation into minimum wages in some Amazon warehouse. Similarly, AI won't come for lawyers and programmers first, even if some fear it.
The overarching theme is that the benefits of automation flow to those who have the bleeding-edge technological capital. Historically, labor has managed to close the gap, especially through public education; it remains to be seen if this process can continue, since eventually we're bound to hit the "hardware" limits of our wetware, whereas automation continues to accelerate.
So at some point, if the economic paradigm is not changed, human capital loses and the owners of the technological capital transition into feudal lords.
Ekaros 7 hours ago [-]
I think that drivers are probably pretty late in the cycle. Many environments they operate in are somewhat complicated, even if you do a lot to make automation possible. Say, with garbage, you move to containers that can simply be lifted either by crane or by forks; still, the places where those containers sit might need a lot of individual training to navigate to.
A similar thing goes for delivery: moving a single pallet to a store, replacing carpets, or whatever. Lots of complexity if you do not offload it to the receiver.
The more regular the environment, the easier it is to automate. Shelving in a store might, in my mind, be simpler than all the environments vehicles need to operate in.
And I think we know who's first to go: average or below-average "creative" professionals. Copywriters, artists and so on.
ringeryless 8 hours ago [-]
LLMs are the least deterministic means you could possibly ever have for automation.
What you are truly seeking is high level specifications for automation systems, which is a flawed concept to the degree that the particulars of a system may require knowledgeable decisions made on a lower level.
However, CAD/CAM, and infrastructure as code are true amplifiers of human power.
LLMs destroy the notion of direct coupling, or of having any layered specifications or actual levels involved at all: you try to prompt a machine trained to ascertain the important datapoints of a given model by itself, when the correct model is built up with human specifications and intention at every level.
Wrongful roads lead to erratic destinations, when it turns out that you actually have some intentions you wish to implement IRL.
cornholio 2 hours ago [-]
If you give the same subject to two different journalists, or even to the same one under different "temperature" settings - say, whether he's had lunch or not, or he's in a different mood - the outputs and approaches to the subject will be completely different, totally nondeterministic.
But that doesn't mean the article they wrote in each of those scenarios is not useful and economically valuable enough for them to maintain a job.
pixl97 6 hours ago [-]
If you want to get to a destination you use google maps.
If you want to reach the actual destination because conditions changed (there is a wreck in front of you) you need a system to identify changes that occur in a chaotic world and can pick from an undefined/unbounded list of actions.
otabdeveloper4 8 hours ago [-]
Generative AI has failed to automate anything at all so far.
(Racist memes and furry pornography doesn't count.)
jcelerier 8 hours ago [-]
Yeah no, I'm seeing more and more shitty AI-generated ads, shop logos, and interior design & graphics, for instance in barber shops, fast food places, etc.
The sandwich shop next to my work has a music playlist which is 100% ai generated repetitive slop.
Do you think they'll be paying graphic designers, musicians etc. from now on, when something certainly shittier than what a good artist does, but also much better than what a poor one is able to achieve, can be had in five minutes for free?
dgfitz 7 hours ago [-]
> Do you think they'll be paying graphic designers, musicians etc. from now on
People generating these things weren't ever going to be customers of those skillsets. Your examples are small business owners basically fucking around because they can, because it's free.
Most barber shops just play the radio, or "spring" for satellite radio, for example. AI generated music might actively lose them customers.
otabdeveloper4 4 hours ago [-]
That's not automation, that's replacing a product with a cheaper and shittier version.
pydry 8 hours ago [-]
Given that the world is fast deglobalizing there will be a flood of factory work being reshored in the next 10 years.
There's also going to be a shrinkage in the workforce caused by demographics (not enough kids to replace existing workers).
At the same time education costs have been artificially skyrocketed.
Personally, the only scenario in which I see mass unemployment happening is a "Russia-in-the-90s" style collapse caused by an industrial rugpull (supply chains being cut off way before we are capable of domestically substituting them) and/or the continuation of policies designed to make wealth inequality even worse.
cornholio 8 hours ago [-]
The world is not deglobalizing, US is.
clarionbell 8 hours ago [-]
The world is deglobalizing. The EU has been cutting itself off from Russia since the war started, and forcing medical industries to reshore since COVID. At the same time it has begun a drive to remilitarize itself. This means more heavy industry, and all of it local.
There is brewing conflict across continents. India and Pakistan, Red sea region, South China sea. The list goes on and on. It's time to accept it. The world has moved on.
ringeryless 8 hours ago [-]
navel gazing will be shown to be a reactionary empty step, as all current global issues require more global cooperation to solve, not less.
the individual phenomena you describe are indeed detritus of this failed reaction to an increasing awareness of all humans of our common conditions under disparate nation states.
nationalism is broken by the realization that everyone everywhere is paying roughly 1/4 to 1/3 of their income in taxes, however what you receive for that taxation varies.
your nation state should have to compete with other nation states to retain you.
the nativist movement is wrongful in the usa for the reason that none of the folks crying about foreigners is actually native american,
but it's globally in error for not presenting the truth: humans are all your relatives, and they are assets, not liabilities: attracting immigration is a good thing, but hey feel free to recycle tired murdoch media talking points that have made us nothing but trouble for 40 years.
smallnix 7 hours ago [-]
> Global connectedness is holding steady at a record high level based on the latest data available in early 2025, highlighting the resilience of international flows in the face of geopolitical tensions and uncertainty.
Source for counter argument is in the page that you just linked here. You have cherry picked one sentence.
boredtofears 5 hours ago [-]
"Nothing to see here, folks! Keep shipping your stuff internationally!"
munksbeer 6 hours ago [-]
> The world is deglobalizing.
We have had thousands of years of globalising. The trend has always been towards a more connected world. I strongly suspect the current Trump movement (and to an extent Brexit, depending on which Brexit version you choose to listen to) will be blips in that continued trend. That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.
clarionbell 5 hours ago [-]
But it doesn't make sense to be dependent on your enemies either.
pydry 5 hours ago [-]
>We have had thousands of years of globalising.
It happens in cycles. Globalization has followed deglobalization before and vice versa. It's never been one straight line upward.
>That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.
It'll break down into blocs, not 200 individual countries.
Ask Estonia why they buy overpriced LNG from America and Qatar rather than cheap gas from their next door neighbor.
If you think the inability to source high-end microchips from anywhere apart from Taiwan is going to prevent a future conflict (the Thomas Friedman(tm) golden arches theory), then I'm afraid I've got bad news.
pydry 8 hours ago [-]
Much of the globalized system is dependent upon US institutions which currently don't have a substitute.
The BRICS have been trying to substitute for some of them and have made some nonzero progress, but they're still far, far away from stuff like a reserve currency.
nemo44x 7 hours ago [-]
Yeah you need a global navy that can assure the safe passage of thousands of ships daily. Now, how do you ensure that said navy will protect your interests? Nothing is free.
oytis 8 hours ago [-]
What's the alternative here? Apart from the well-known but not-so-useful advice to have a ton of friends who can hire you, or to be so famous as to not need an introduction.
bilsbie 8 hours ago [-]
Making dumb processes dumber to the point of failure is actually a feature.
lysecret 5 hours ago [-]
Funny - you call it value, I call it inefficiency.
paulsutter 9 hours ago [-]
Why is this a worry? Sounds wonderful
jspdown 8 hours ago [-]
I'm a bit worried about the social impacts.
When a sector collapses and become irrelevant, all its workers no longer need to be employed. Some will no longer have any useful qualifications and won't be able to find another job. They will have to go back to training and find a different activity.
It's fine if it's an isolated event. Much worse when the event is repeated in many sectors almost simultaneously.
9rx 6 hours ago [-]
> They will have to go back to training
Why? When we've seen a sector collapse, the new jobs that rush in to fill the void are new, never seen before, and thus don't have training. You just jump in and figure things out along the way like everyone else.
The problem, though, is that people usually seek out jobs that they like. When that collapses they are left reeling and aren't apt to want to embrace something new. That mental hurdle is hard to overcome.
throwaway35364 5 hours ago [-]
What if no jobs, or fewer jobs than before, rush in to fill the void this time? You only need so many prompt engineers when each one can replace hundreds of traditional workers.
9rx 4 hours ago [-]
> What if no jobs, or fewer jobs than before, rush in to fill the void this time?
That means either:
1. The capitalists failed to redeploy capital after the collapse.
2. We entered into some kind of post-capitalism future.
To explore further, which one are you imagining?
twoodfin 7 hours ago [-]
As others in this thread have pointed out, this is basically what happened in the relatively short period of 1995 to 2015 with the rise of global wireless internet telecommunications & software platforms.
Many, many industries and jobs transformed or were relegated to much smaller niches.
Overall it was great.
paulsutter 46 minutes ago [-]
Good thing that we have AI tools that are tireless teachers
ninetyninenine 6 hours ago [-]
Until we solve the hallucination problem, google search still has a place of power as something that doesn't hallucinate.
And even if we solve this problem of hallucination, the ai agents still need a platform to do search.
If I was Google I’d simply cut off public api access to the search engine.
pixl97 6 hours ago [-]
>google search still has a place of power as something that doesn’t hallucinate.
Google search is fraught with its own list of problems and crappy results. Acting like it's infallible is certainly an interesting position.
>If I was Google I’d simply cut off public api access to the search engine.
The convicted monopolist Google? Yea, that will go very well for them.
voidspark 5 hours ago [-]
LLMs are already grounding their results in Google searches with citations. They have been doing that for a year already. Optional with all the big models from OpenAI, Google, xAI
asadotzler 1 hours ago [-]
And yet they still hallucinate and offer dead links. I've gotten wrong answers to simple questions about historical events and people, with sources that are entirely fabricated, referencing a dead link to an irrelevant site. Google results don't do that. This is why I use LLMs to help me come up with better searches that I perform and tune myself. That's valuable: the wordsmithing they can do, given their solid word and word-part statistics.
voidspark 1 hours ago [-]
Is that using the state of the art reasoning models with Google search enabled?
OpenAI o3
Gemini 2.5 Pro
Grok 3
Anything below that is obsolete or dumbed down to reduce cost
I doubt this feature is actually broken and returning hallucinated links
People talk about LLM hallucinations as if they're a new problem, but content mill blog posts existed 15 years ago, and they read like LLM bullshit back then, and they still exist. Clicking through to Google search results typically results in lower-quality information than just asking Gemini 2.5 pro. (which can give you the same links formatted in a more legible fashion if you need to verify.)
What people call "AI slop" existed before AI and AI where I control the prompt is getting to be better than what you will find on those sorts of websites.
belter 8 hours ago [-]
I had similar thoughts, but then remembered that companies still burn billions on Google Ads, sure that humans - and not bots - click them, and that in 2025 most people browse without ad-blockers.
disgruntledphd2 7 hours ago [-]
Most people do browse without ad blockers, otherwise the entire DR ads industry would have collapsed years ago.
Note also that ad blockers are much less prevalent on mobile.
theshackleford 7 hours ago [-]
People will pay for what works. I consult for a number of ecommerce companies and I assure you they get a return on their spend.
akshaybhalotia 5 hours ago [-]
I humbly disagree. I've seen team members and sometimes entire teams being laid off because of AI. It's also not just layoffs, the hiring processes and demand have been affected as well.
As an example, many companies have recently shifted their support to "AI first" models. As a result, even if the team or certain team members haven't been fired, the general trend of hiring for support is pretty much down (anecdotal).
I agree that some automation helps humans do their jobs better, but this isn't one of those cases. When you're looking for support, something has clearly gone wrong. Speaking or typing to an AI which responds with random unrelated articles or "sorry I didn't quite get that" is just evading responsibility in the name of "progress", "development", "modernization", "futuristic", "technology", <insert term of choice>, etc.
johnfn 5 hours ago [-]
How do you know that these layoffs are the result of AI, rather than AI being a convenient place to lay the blame? I've seen a number of companies go "AI first" and stop hiring or have layoffs (Salesforce comes to mind) but I suspect they would have been in a slump without AI entirely.
danans 5 hours ago [-]
> How do you know that these layoffs are the result of AI, rather than AI being a convenient place to lay the blame?
Both of those can be true, because companies are placing bets that AI will replace a lot of human work (by layoffs and reduced hiring), while also using it in the short term as a reason to cut short term costs.
ponector 5 hours ago [-]
AI is not hurting jobs in Denmark they said.
Software development jobs there have bigger threat: outsourcing to cheaper locations.
As well for teachers: it is hard to replace a person supervising kids with a chatbot.
pc86 4 hours ago [-]
Has any serious person ever suggested replacing teachers with chatbots? Seems like a non sequitur.
geraneum 5 hours ago [-]
> I humbly disagree
Both your experience and what the article (research) says can be valid at the same time. That’s how statistics works.
venantius 4 hours ago [-]
It is possible for all of the following to be true:
1. This study is accurate
2. We are early in a major technological shift
3. Companies have allocated massive amounts of capital to this shift that may not represent a good investment
4. Assuming that the above three will remain true going forward is a bad idea
The .com boom and bust is an apt reference point. The technological shift WAS real, and the value to be delivered ultimately WAS delivered…but not in 1999/2000.
It may be we see a massive crash in valuations but AI still ends up the dominant driver of software value over the next 5-10 years.
hangonhn 4 hours ago [-]
That's a repeating pattern with technologies. Most of the early investments don't pay off and the transformation does happen but also quite a bit later than people predicted. This was true of the steam engine, the telegraph, electricity, and the railroad. It actually tends to be the later stage investors who reap most of the reward because by then the lessons have been learned and solutions developed.
asadotzler 1 hours ago [-]
The dot com boom gave us $1T in physical broadband, fiber, and cellular networking that's added many many trillions to the economy since. What's LLM-based AI gonna leave us when its bubble pops? Will that AI infrastructure be outliving its creators and generating trillions for the economy when all the AI companies collapse and are sold off for parts and scrap?
venantius 35 minutes ago [-]
Among other things the big tech companies are literally planning to build nuclear power plants off this so I think the infrastructure investments will likely be pretty good.
Sammi 1 hours ago [-]
People overestimate what can be done in the short term and underestimate what can be done in the long term.
lpolovets 21 minutes ago [-]
This feels premature -- we've only had great AI capabilities for a little while, and jobs are not replaced overnight.
This reminds me of some early stage startup pitches. During a pitch, I might ask: "what do you think about competitor XYZ?" And sometimes the answer is "we don't think highly of them, we have never even seen them in a single deal we've competed for!" But that's almost a statistical tautology: if you both have .001% market share and you're doubling or tripling annually, the chance that you're going to compete for the same customers is tiny. That doesn't mean you can just dismiss that competitor. Same thing with the article above dismissing AI as a threat to jobs so quickly.
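To put rough numbers on that tautology (my own back-of-the-envelope arithmetic, with invented figures):

    # If competitor B holds a tiny market share, the expected number of
    # head-to-head deals per year is roughly A's deal count times B's share.
    share_b = 0.00001        # B's market share: 0.001% (invented)
    deals_a_per_year = 500   # deals A competes in annually (invented)

    expected_overlap = deals_a_per_year * share_b
    print(expected_overlap)  # 0.005 -> about one collision every 200 years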
To give a concrete example of a job disappearing: I run a small deep tech VC fund. When I raised the fund in early '24, my plan was to hire one investor and one researcher. I hired a great investor, but given all of the AI progress I'm now 80% sure I won't hire a researcher. ChatGPT is good enough for research. I might end up adding a different role in the near future, but this is a research job that likely disappeared because of AI.
paulvnickerson 45 minutes ago [-]
This line of worry has never panned out. There are two points:
1) AI/automation will replace jobs. This is 100% certain in some cases. Look at the industrial revolution.
2) AI/automation will increase unemployment. This has never happened and it's doubtful it will ever happen.
The reason is that humans always adapt and find ways to be helpful that automation can't match. That is why, 250 years after the industrial revolution started, we still have single-digit unemployment.
Kbelicius 17 minutes ago [-]
> 2) AI/automation will increase unemployment. This has never happened and it's doubtful it will ever happen.
> The reason is that humans always adapt and find ways to be helpful that automation can't match. That is why, 250 years after the industrial revolution started, we still have single-digit unemployment.
Horses, for thousands of years, were very useful to humans. Even with the various technological advances through that time, their "unemployment" was very low. Until the invention and perfection of the internal combustion engine.
To say that it is doubtful that it will ever happen to us is basically saying that human cognitive and/or physical capabilities are without bounds and that there is some reason that with our unbounded cognitive capabilities we will never be able to create a machine that could replicate those capabilities. That is a ridiculous claim.
DebtDeflation 6 hours ago [-]
The study looks at 11 occupations in Denmark in 2023-24.
Maybe instead look at the US in 2025. EU labor regulations make it much harder to fire employees. And 2023 was mainly a hype year for GenAI. Actual Enterprise adoption (not free vendor pilots) started taking off in the latter half of 2024.
That said, a lot of CEOs seem to have taken the "lay off all the employees first, then figure out how to have AI (or low cost offshore labor) do the work second" approach.
cogman10 6 hours ago [-]
The 2025 US has some really big complicating factors that make the job-market impact of AI really hard to gauge.
For example, the mass layoffs of federal employees.
mlnj 6 hours ago [-]
>"lay off all the employees first, then figure out how to have AI (or low cost offshore labor) do the work second"
Surprisingly, Denmark is one of the easiest countries in which to fire someone.
Sammi 1 hours ago [-]
Workers in Denmark are almost all unionised and get unemployment benefits from their union. So it's pretty directly because of the unions that being laid off is such a small issue for someone in Denmark.
WillAdams 9 hours ago [-]
Have any of these economists ever tried to scrape by as an entry-level graphic designer/illustrator?
Apparently not, since the sort of specific work which one used to find for this has all but vanished --- every AI-generated image one sees represents an instance where someone who might have contracted for an image did not (ditto for stock images, but that's a different conversation).
lolinder 7 hours ago [-]
> every AI-generated image one sees represents an instance where someone who might have contracted for an image did not
This is not at all true. Some percentage of AI generated images might have become a contract, but that percentage is vanishingly small.
Most AI generated images you see out there are just shared casually between friends. Another sizable chunk are useless filler in a casual blog post and the author would otherwise have gone without, used public domain images, or illegally copied an image.
A very very small percentage of them are used in a specific subset of SEO posts whose authors actually might have cared enough to get a professional illustrator a few years ago but don't care enough to avoid AI artifacts today. That sliver probably represents most of the work that used to exist for a freelance illustrator, but it's a vanishingly small percentage of AI generated images.
probably_wrong 5 hours ago [-]
There is more to entry-level illustrators than SEO posts. In my daily life I've witnessed a bakery, an aspiring writer of children's books, and two University departments go for self-made AI pictures instead of hiring an illustrator. Those jobs would have definitely gone to a local illustrator.
asadotzler 1 hours ago [-]
I've seen the same more times than I can count, having been in that business decades ago. Back then it was clip art and bad Illustrator work, no different from what you're seeing today with AI -- and to a trained professional, the delta between the two "home made" approaches and professional ones is clearly evident. We'll look at the AI slop in 10 years the way we look at clip art from 1995.
TheRealQueequeg 6 hours ago [-]
> That sliver probably represents most of the work that used to exist for a freelance illustrator, but it's a vanishingly small percentage of AI generated images.
I prefer to get my illegally copied images from only the most humanely trained LLM instead of illegally copying them myself like some neanderthal or, heaven forbid, asking a human to make something. Such a thought is revolting; humans breathe so loud and sweat so much and are so icky. Hold on - my wife just texted me. "Hey chat gipity, what is my wife asking about now?" /s
jelder 8 hours ago [-]
I miss the old internet, when every article didn't have a goofy image at the top just for "optimization." With the exception of photography in reporting, it's all a waste of time and bandwidth.
Most of it wasn't bespoke assets created by humans, but stock art picked - if you were lucky - by a professional photo editor, though more often by the author themselves.
myaccountonhn 7 hours ago [-]
Yeah, I saw an investment app that was filled with obviously AI-generated images. One of the more recommended choices in my country.
It feels very short-sighted from the company side because I nope'd right out of there. They didn't make me feel any trust for the company at all.
mattlondon 8 hours ago [-]
It looks like the writing is on the wall for other menial and low-value creative jobs too - basic music and videos - and I fully expect that 90+% of video adverts will be entirely AI generated within the next year or two. See Google Veo: they have the tech already, they have YouTube already, and they have the ad network already...
Instead of uploading your video ad you already created, you'll just enter a description or two and the AI will auto-generate the video ads in thousands of iterations to target every demographic.
Google is going to run away with this with their ecosystem - OpenAI etc al can't compete with this sort of thing.
lambdaba 8 hours ago [-]
People will develop an eye for how AI-generated looks and that will make human creativity stand out even more. I'm expecting more creativity and less cookie-cutter content, I think AI generated content is actually the end of it.
Workaccount2 5 hours ago [-]
>People will develop an eye for how AI-generated looks
People will think they have an eye for AI-generated content, and miss all the AI that doesn't register. If anything it would benefit the whole industry to keep some stuff looking "AI" so people build a false model of what "AI" looks like.
This is like the ChatGPT image gen of last year, which purposely put a distinct style on generated images (that shiny plasticy look). Then everyone had an "eye for AI" after seeing all those. But in the meantime, purpose made image generators without the injected prompts were creating indistinguishable images.
It is almost certain that every single person here has laid eyes on an image already, probably in an ad, that didn't set off any triggers.
pllbnk 7 hours ago [-]
Given that the goal of generative AI is to generate content that is virtually indistinguishable from expert creative people, I think it's one of these scenarios:
1. If the goal is achieved, which is highly unlikely, then we get very very close to AGI and all bets are off.
2. If the goal is not achieved and we stay in this uncanny valley territory (not at the bottom of it but not being able to climb out either), then eventually in a few years' time we should see a return to many fragmented almost indie-like platforms offering bespoke human-made content. The only way to hope to achieve the acceptable quality will be to favor it instead of scale as the content will have to be somehow verified by actual human beings.
Topfi 7 hours ago [-]
> If the goal is achieved, which is highly unlikely, then we get very very close to AGI and all bets are off.
Question on two fronts:
1. Why do you think, considering the current rate of progress, that it is very unlikely LLM output becomes indistinguishable from expert creatives? Especially considering a lot of the tells people claim to see are easily alleviated by prompting.
2. Why do you think a model whose output reaches that goal would rise in any way to what we’d consider AGI?
Personally, I feel the opposite. The output is likely to reach that level in the coming years, yet AGI is still far away from being reached once that has happened.
pllbnk 1 hours ago [-]
Interesting thoughts, to which I partially agree.
1. The progress is there, but it's been slowing down, yet the downsides have largely remained.
1.1. With the LLMs, while the models can keep track of longer conversations better thanks to the larger context window (mostly achieved via hardware, not software), the hallucinations are as bad as ever; I use them eagerly, yet I haven't felt any significant improvements to the outputs in a long time. Anecdotally, a couple days ago I decided to try my luck and vibe-code a primitive messaging library, and it led me down the wrong path even though I was challenging it along the way; it was so convincing that I wouldn't have noticed had my colleague not told me there was a better way. Granted, the colleague is extremely smart, but the LLM should have told me the right approach because I was specifically questioning it.
1.2. Image generation has also barely improved. The biggest improvement during the past year has been 4o, which can be largely attributed to the move from diffusion to autoregression, but it's far from perfect and still suffers from hallucinations even more than LLMs do.
1.3. I don't think video models are even worth discussing because you just can't get a decent video if you can't get a decent still in the first place.
2. That's speculation, of course. Let me explain my thought process. A truly expert level AI should be able to avoid mistakes and create novel writings or research just by the human asking it to do it. In order to validate the research, it can also invent the experiments that need to be done by humans. But if it can do all this, then it could/should find the way to build a better AI, which after an iteration or two should lead to AGI. So, it's basically a genius that, upon human request, can break itself out of the confines.
mattlondon 6 hours ago [-]
People already know what the ads are and what is content, and yet advertisers keep on paying for ads on videos, so they must be working.
It feels to me that the SOTA video models today are pretty damn good already, let alone in another 12 months when SOTA will no doubt have moved on significantly.
ninetyninenine 6 hours ago [-]
This eye will be a driving force for improving AI until it reaches parity with real, non-generated pictures.
nottorp 8 hours ago [-]
> fully expect that 90+% of video adverts will be entirely AI generated within the next year or two
And on the other end we'll have "AI" ad blockers, hopefully. They can watch each other.
pj_mukh 7 hours ago [-]
I don't know. Even with these tools, I don't want to be doing this work.
I'd still hire an entry level graphic designer. I would just expect them to use these tools and 2x-5x their output. That's the only changing I'm sensing.
ninetyninenine 6 hours ago [-]
Also pay them less, because they don't need to be as skilled anymore since AI is covering it.
Dumblydorr 7 hours ago [-]
Probably not; economists generally stay in school straight through to becoming professors, or they go into finance right after school.
That said I don’t think entry level illustration jobs can be around if software can do their job better than they do. Just like we don’t have a lot of calculators anymore, technological replacement is bound to occur in society, AI or not.
markus_zhang 7 hours ago [-]
AI is different. It impacts everything directly. It's like the computer, but on boost. It's like trains taking over from horses, but for every job out there.
Well, at least that's the potential.
surement 6 hours ago [-]
> Have any of these economists ever tried to scrape by as an entry-level graphic designer/illustrator?
"Equip yourself with skills that other people are willing to pay for." –Thomas Sowell
pixl97 6 hours ago [-]
The general thought works well, until it doesn't.
fhd2 9 hours ago [-]
> The economists found for example that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves."
For me, the most interesting takeaway. It's easy to think about a task, break it down into parts, some of which can be automated, and count the savings. But it's more difficult to take into account any secondary consequences from the automation. Sometimes you save nothing because the bottleneck was already something else. Sometimes I guess you end up causing more work down the line by saving a bit of time at an earlier stage.
This can make automation a bit of a tragedy of the commons situation: It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
chii 9 hours ago [-]
> you end up causing more work down the line by saving a bit of time at an earlier stage
in this case, the total cost would've gone up, and thus, eventually the stakeholder (aka, the person who pays) is going to not want to pay when the "old" way was cheaper/faster/better.
> It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
not really, as long as the precondition i mentioned above (the total cost dropping) is true.
fhd2 9 hours ago [-]
That's probably true as long as the workers generally cooperate.
But there's also adversarial situations. Hiring would be one example: Companies use automated CV triaging tools that make it harder to get through to a human, and candidates auto generate CVs and cover letters and even auto apply to increase their chance to get to a human. Everybody would probably be better off if neither side attempted to automate. Yet for the individuals involved, it saves them time, so they do it.
chii 7 hours ago [-]
Right, so it's like advertising when the market is already saturated (see coca cola vs pepsi advertising).
BrtByte 8 hours ago [-]
Short-term gains for individuals can gradually hollow out systems that, ironically, worked better when they were a little messy and human
nirui 8 hours ago [-]
There are a few problems with this research. First:
> AI chatbots have had no significant impact on earnings or recorded hours in any occupation
But generative AI is not just AI chatbots. There are models that generate sounds/music, models that generate images, etc.
Another thing: the research only looked at Denmark, a nation with a fairly healthy attitude toward work-life balance, not a nation that takes pride in people working their asses off.
And the research also doesn't cover the effect of AI-generated products: if music or a painting can be created by an AI within a minute from a prompt typed by a 5-year-old, then your expected value for "art work" decreases, and you won't pay the same price when buying from a human artist.
asadotzler 1 hours ago [-]
For that last point, as a graphic designer competing with the first generation of digital printmaking and graphic design tools, I experienced the opposite. DIY people and companies are DIY people and companies. The ones that would have paid a real designer continued to do so, and my rates even went up because I offered something that stuck out even from the growing mass of garbage design from the amateurs with PageMaker or Illustrator. I adopted the same tools and my game was elevated far beyond the non-professionals with those tools, further separating my high-value work from the low-value producers. It also gave me a few years of advantage over other professionals who still worked at a drawing table with pen and paper.
causal 7 hours ago [-]
That last point is especially important.
nico 6 hours ago [-]
Also in the news today:
> Duolingo will replace contract workers with AI. The company is going to be ‘AI-first,’ says its CEO.
> von Ahn’s email follows a similar memo Shopify CEO Tobi Lütke sent to employees and recently shared online. In that memo, Lütke said that before teams asked for more headcount or resources, they needed to show “why they cannot get what they want done using AI.”
raincole 9 hours ago [-]
> We examine the labor market effects of AI chatbots using two large-scale adoption surveys (late 2023 and 2024) covering 11 exposed occupations (25,000 workers, 7,000 workplaces), linked to matched employer-employee data in Denmark.
It sounds like they didn't ask those who got laid off.
causal 7 hours ago [-]
Yeah this is like counting horses a few years after the automobile was invented.
tunesmith 1 hours ago [-]
I think the effects are more indirect. For instance, GenAI can enable Google to serve summarized content (rather than just search results) that users find useful, which then cuts in to the margins of companies that manually generate that content. Those companies lose revenue, and lay off head count, inhibiting their ability to generate that custom content. So they start using GenAI instead.
At no point did that company choose to pivot to GenAI to cut costs and reduce headcount. It's more reactive than that.
roenxi 9 hours ago [-]
Based on the speed most companies operate at - no surprises here. The internet also didn't have most of its impact in its first decade. And as is fairly well understood, most of the current generation of AI models are a bit dicey in practice. There isn't much question that this is an early phase where AI is likely to create new jobs and opportunities. The real question is what happens when AI is reliably intellectually superior to humans in all domains and this has been proven to everyone's satisfaction, which is still some uncertain time away.
It is like expecting cars to replace horses before anyone starts investing in the road network and getting international petroleum supply chains set up - large capital investment is an understatement when talking about how long it takes to bring in transformative tech and bed it in optimally. Nonetheless, time passed and workhorses are rare beasts.
asadotzler 1 hours ago [-]
At the end of the massive investment period - about a trillion dollars of broadband, fiber, and cellular build-out between 1998 and 2008 - that infrastructure had already added a trillion back to the economy, and would add that much nearly every year after. LLM AI is nearing 10 years of massive investment, approaching $1T. Where are the trillions in returns? And what amazing economy-wide infrastructure will that trillion in AI investment leave us with when the bubble pops, these AI companies are sold for parts and scrap, and the not-AI companies boosting AI all pull back? When the dot-com boom collapsed, we still got value from all that investment, value that continues to lead the global economy today. What will LLMs leave us with?
4ndrewl 8 hours ago [-]
Does the same follow for The Metaverse, or for Blockchain?
throw310822 8 hours ago [-]
My absolutely unqualified opinion is that blockchain will survive but won't find many uses apart from those it already has, while the metaverse - or VR usage and content - will see explosive growth at some point, especially when mixed with AI-generated and AI-rendered worlds, which will be lifelike and almost infinitely flexible. Which, btw, is also a great way to spend your time when your job has been replaced by another AI and you have little money for anything else.
roenxi 8 hours ago [-]
If they end up going somewhere? Absolutely, we haven't seen anything out of the crypto universe yet compared to what'll start to happen when the tech is a century old and well understood by the bankers.
rspoerri 9 hours ago [-]
All 11 jobs they looked at are of at least medium complexity and involve delegating tasks. These are the people who hand time-consuming, low-level work to cheap labour (assistants etc.). They can save time and money by doing that work directly with AI assistants instead of waiting for a human assistant to be available.
I am 100% convinced that AI will destroy, and already has destroyed, lots of jobs. We will likely encounter world-order-disrupting changes in the coming decades as computers get another 1000 times faster and more powerful over the next 10 years.
The jobs described might be lost (made obsolete or replaced) in the longer term as well, if AI gets better than the people doing them. For example, just now another article was mentioned on HN: "Gen Z grads say their college degrees were a waste of time and money as AI infiltrates the workplace" - which would make teachers obsolete.
They are saying that, and yet on one of the last earnings calls the VP of Sales admitted that they are shifting the weight of their sales force from peddling Copilot to traditional money-makers like migrations or updates. This could merely speak to Copilot being a dogshit product, but that never really stopped Microsoft from trying, so it could also signal a certain shaky belief in Enterprise AI being that revolutionary.
jofzar 9 hours ago [-]
Not replacing jobs yet.
I've seen a whole lot of gen AI deflecting customer questions that would previously have been tickets. That's reduced ticket volume that would have gone to a junior support engineer.
We are a couple of years away from the death of the level 1 support engineer. I can't even imagine what's going to happen to the level 0 IT support.
asadotzler 37 minutes ago [-]
We saw that happening before LLM bots with pre-LLM chatbots, FAQs, support wizards, and even redirects to site-specific or web-wide search. If you save more money avoiding human support costs than you lose from dissatisfied customers, it's a win. Same for outsourcing support to low-wage countries. Same for LLM chatbots. It's not some seismic event, it's a gradual move from high quality bespoke output to low quality mass production, same as it ever was.
Cthulhu_ 7 hours ago [-]
> We are a couple of years away from the death of the level 1 support engineer.
And this trend isn't new; a lot of investments into e.g. customer support is to need less support staff, for example through better self-service websites, chatbots / conversational interfaces / phone menus (these go back decades), or to reduce expenses by outsourcing call center work to low-wage countries. AI is another iteration, but gut feeling says they will need a lot of training/priming/coaching to not end up doing something other than their intended task (like Meta's AIs ending up having erotic chats with minors).
One of my projects was to replace the "contact" page of a power company with a wizard - basically, get the customers to check for known outages first, then check their own fuse boxes etc, before calling customer support.
BrtByte 8 hours ago [-]
Yeah, exactly. It's not about a sudden "mass firing" event - it's more like a slow erosion of entry-level roles
FilosofumRex 7 hours ago [-]
Those types of jobs are mostly in India & Philippines, not the US or Denmark, so let them deal with it.
Clubber 8 hours ago [-]
Perhaps briefly. Companies tried this with offshoring support. Some really took a hit and had to bring it back. Some didn't though, so it's not all or nothing in the medium term. In the short term, most of the execs will buy into the hype and try it. I suspect the lower quality companies will use it, but the companies whose value is in their reputation for quality will continue to use people.
admissionsguy 5 hours ago [-]
I have had AI support agents deflect my questions, but not resolve them. It is more companies ending customer support under the guise of automation than AI obsoleting the support workers.
oytis 8 hours ago [-]
I mean, if it really works in the end, we just redefine which levels humans need to deal with. There are lots of problems with AI, but I can't see one here.
davidkl 9 hours ago [-]
That person apparently didn't talk to copywriters, photographers, content creators, and authors.
sct202 9 hours ago [-]
Or customer service. My last few online store issues have been fully chatbot when they used to be half chatbot for intake and half person.
thehoff 9 hours ago [-]
Same, after a little back and forth it became obvious I was not talking to a real person.
Drakim 8 hours ago [-]
I like to get the chatbot to promise me massive discounts just to get whoever is reading the logs to sweat a little.
sph 9 hours ago [-]
I have survived until today using the shibboleth "let me speak to a human" [1]. The day this doesn't work any more is the day I stop paying for that service. We should make a list of companies that still have actual customer service.
1: https://xkcd.com/806/ - from an era when the worst that could happen was having to speak with incompetent, but still human, tech support.
xnorswap 8 hours ago [-]
It no longer works for virgin media (UK cable monopoly).
I got myself into a loop where no matter what I did, there was no human in the loop.
Even the "threaten to cancel" trick didn't work, still just chatbots / automated services.
Thankfully more and more of the UK is getting FTTH. Sadly for me I accidentally misunderstood the coverage checker when I last moved house.
pixl97 5 hours ago [-]
> is the day I stop paying for that service.
You're acting like it's not the companies that are monopolies that implement these systems first.
rspoerri 9 hours ago [-]
> Many of these occupations have been described as being vulnerable to AI: accountants, customer support specialists, financial advisors, HR professionals, IT support specialists, journalists, legal professionals, marketing professionals, office clerks, software developers, and teachers.
BurningFrog 3 hours ago [-]
New technology has replaced human jobs since the start of the Industrial Revolution 250 years ago. The replaced workforce have always moved to other jobs, often in entirely new professions.
For all those 250 years most people have predicted that the next new technology will make the replaced workforce permanently unemployed, despite the track record of that prediction. We constantly predict poverty and get prosperity.
I kinda get why: The job loss is concrete reality while the newly created jobs are speculation.
Still, I'm confident AI will continue the extremely strong trend.
asadotzler 57 minutes ago [-]
Long before the industrial revolution. The ox-drawn plow was invented about 6000 years ago in Mesopotamia or India (my memory's poor, sorry) and it put a lot of workers with hoes out of work, while also growing prosperity and the number of people who gained work thanks to that increased prosperity and population growth it supported. It has always been this way and always will be.
jruohonen 9 hours ago [-]
The results are basically what Acemoglu and others have also been saying; e.g.,
I think the methods here are highly questionable; they appear to be based on self-reports from a small number of employees in Denmark a year ago.
The overall rate of participation in the labor force is falling. I expect this trend to continue as AI makes the economy more and more dynamic and sets a higher and higher bar for participation.
Overall GDP is rising while the labor participation rate is falling. This clearly points to more productivity with fewer people participating. At this point one of the main factors is clearly technological advancement, and within that, I believe that if you surveyed CEOs and asked what technological change has allowed them to get more done with fewer people, the resounding consensus would be AI.
jmacd 2 hours ago [-]
I am currently dealing with a relatively complex legal agreement. It's about 30 pages. I have a lawyer working on it who I consider the best in the country for this domain.
I was able to pre-process the agreement, clearly understand most of the major issues, and come up with a proposed set of redlines all relatively easily. I then waited for his redlines and then responded asking questions about a handful of things he had missed.
I value a lawyer being willing to take responsibility for their edits, and he also has a lot of domain specific transactional knowledge that no LLM will have, but I easily saved 10 hours of time so far on this document.
jhp123 4 hours ago [-]
The thing about AI is that it doesn't work, you can't build on top of it, and it won't get better.
It doesn't work: even for the tiny slice of human work that is so well defined and easily assessed that it is sent out to freelancers on sites like Fiverr, AI mostly can't do it. We've had years to try this now, the lack of any compelling AI work is proof that it can't be done with current technology.
You can't build on top of it: unlike foundational technologies like the internet, AI can only be used to build one product, a chatbot. The output of an AI is natural language and it's not reliable. How are you going to meaningfully process that output? The only computer system that can process natural language is an AI, so all you can do is feed one AI into another. And how do you assess accuracy? Again, your only tool is an AI, so your only option is to ask AI 2 if AI 1 is hallucinating, and AI 2 will happily hallucinate its own answer. It's like The Cat in the Hat Comes Back, Cat E trying to clean up the mess Cat D made trying to clean up the mess Cat C made and so on.
And it won't get any better. LLMs can't meaningfully assess their training data, they are statistical constructions. We've already squeezed about all we can from the training corpora we have, more GPUs and parameters won't make a meaningful difference. We've succeeded at creating a near-perfect statistical model of wikipedia and reddit and so on, it's just not very useful even if it is endlessly amusing for some people.
bhelkey 3 hours ago [-]
> [LLMs] won't get any better.
Can you pinpoint the date at which LLMs stagnated?
More broadly, it appears to me that LLMs have improved up to and including this year.
If you consider LLMs to not have improved in the last year, I can see your point. However, then one must consider ChatGPT 4.5, Claude 3.5, Deepseek, and Gemini 2.5 to not be improvements.
jhp123 2 hours ago [-]
Sept 2024 was when OpenAI announced its new model - not an LLM but a "chain of thought" model built on LLMs. This represented a turn away from the "scale is all you need to reach AGI" idea by its top proponent.
bhelkey 2 hours ago [-]
If September 2024 marks the date in your mind stagnation was obvious, surely the last improvement must have come before?
Whatever the case, there are open platforms that give users a chance to compare two anonymous LLMs and rank the models as a result [1].
What I observe when I look for these rankings is that none of the top ranked models come from before your stagnation cut off date of September 2024 [2].
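For the curious, these arenas are typically described as converting such pairwise votes into an Elo-style rating. A minimal sketch of that update rule in Python - the K-factor, starting ratings, and the pure-Elo formulation are illustrative assumptions on my part, not the platform's actual parameters:

    def elo_update(r_a, r_b, a_wins, k=32):
        # Expected score of A under the Elo model, then nudge both ratings
        # toward the observed outcome of one pairwise comparison.
        e_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
        s_a = 1.0 if a_wins else 0.0
        return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

    # Example: a 1200-rated model beats a 1250-rated one and gains ~18 points.
    print(elo_update(1200, 1250, a_wins=True))

Averaged over many votes, stronger models drift upward regardless of which opponents they happen to draw.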
The survey questions they asked are bad questions if you're attempting to answer questions about the future state of labor. But they didn't ask that; they asked existing employees how LLMs have changed their workplace.
This is the wrong question.
The question should be to hiring managers: Do you expect LLM based tools to increase or decrease your projected hiring of full time employees?
LLM workflows are already *displacing* entry-level labor because people are reaching for copilot/windsurf/CGPT instead of hiring a contract developer, researcher, BD person. I’m watching this happen across management in US startups.
It's displacing job growth in entry-level positions, primarily in writing copy, admin tasks, and research.
You’re not going to find it in statistics immediately because it’s not a 1:1 replacement.
Much like the 1971 labor-productivity separation that everyone scratched their head about (answer: labor was outsourced and capital kept all value gains), we will see another asymptote to that labor productivity graph based on displacement not replacement.
ajb 7 hours ago [-]
A bold assumption, that it will continue not to.
I have a 185-year-old treatise on wood engraving. At the time, reproducing any image required that it be engraved in wood or metal for the printer; the best wood engravers were not mere reproducers, as they used some artistry when reducing the image to black and white to preserve the impression of continuous tones. (And some, of course, were also original artists in their own right.) The wood engraving profession was destroyed by the invention of photo-etching. (There was a weird interval before photo-etching, in which cameras existed but photos still had to be engraved manually for printing.)
Maybe all the wood engravers found employment, although I doubt it. But at this speed, there will be a lot of people who won't be able to retrain while employed, and will either have to use up their savings while doing so or take lower-paid jobs.
asadotzler 17 minutes ago [-]
As a graphic designer, my work didn't evaporate because Aldus shipped PageMaker. It didn't collapse when Office and the Web made clip art available to everyone. It didn't disappear when credit card, letterhead, and logo templates and generators came online. Every time a new tool allowed more DIY, the gulf between the low-effort stuff and my stuff grew and I was able to secure even more and better paying work. And using those tools early myself, I also gained advantage over my professional competition for various lengths of time.
This is how engraving went too. It wasn't overnight. The tools were not distributed evenly and it was a good while before amateurs could produce anything like what the earlier professionals did.
Buying a microwave and pizza rolls doesn't make you a chef. Maybe in 100 years the tooling will make you as good as the chefs of our time, but by then they'll all be doing even better work, and there are people who will pay for higher quality no matter how high the bar is raised for baseline quality - so eliminating all work in a profession is rare.
it_citizen 8 hours ago [-]
No opinion on the topic but "say economists" doesn't inspire trust
kurtis_reed 8 hours ago [-]
Thank you
Animats 48 minutes ago [-]
Data is from late 2023 and 2024. Year-old data.
ChatGPT was released in late November, 2022.
solfox 8 hours ago [-]
Extrapolating from my current experience with AI-assisted work: AI just makes work more meaningful. My output has increased 10x, allowing me to focus on ideas and impact rather than repetitive tasks. Now apply that to entire industries and whole divisions of labor: manual data entry, customer support triage, etc. Will people be out of those jobs? Most certainly. But it gives all of us a chance to level up—to focus on more meaningful labor.
As a father, my forward-thinking vision for my kids is that creativity will rule the day. The most successful will be those with the best ideas and most inspiring vision.
asadotzler 12 minutes ago [-]
I keep seeing these "my output is 10X with LLMs" but I'm not seeing any increase in quality or decrease in price for any of the very many tech products I've updated or upgraded in the last couple of years.
We're coming up on 3 years of ChatGPT, and well over a year since I started seeing the proliferation of these 10X claims, and yet LLM users seem to be bearing none of the fruit one might expect from a 10X increase in productivity.
I'm beginning to think that this 10X thing is overstated.
jplusequalt 5 hours ago [-]
>The most successful will be those with the best ideas and most inspiring vision.
This has never been the truth of the world, and I doubt AI will make it come to fruition. The most successful people are by and large those with powerful connections, and/or access to capital. There are millions of smart, inspired people alive right now who will never rise above the middle class. Meanwhile kids born in select zip codes will continue to skate by unburdened by the same economic turmoil most people face.
begueradj 8 hours ago [-]
What about the technical debt in the generated code?
Cthulhu_ 8 hours ago [-]
First off, is there any? That's an assumption, and one that can just as easily be applied to human-written code. Nobody writes debt-free code; that's why you have many checks and reviews before things go to production - ideally.
Second, in theory, future generations of AI tools will be able to review the previous generations' code and improve upon it. If it needs improving, anyway.
But yeah, tech debt isn't unique to AIs, and I haven't seen anything conclusive showing that AIs generate more tech debt than regular people do - but please share if you've got sources for the opposite.
(Disclaimer: I'm very skeptical about using AI to generate code myself, but I will admit to using it for boring tasks like unit test outlines.)
Capricorn2481 1 hours ago [-]
> Second, in theory, future generations of AI tools will be able to review previous generations and improve upon the code. If it needs to, anyway.
Is that what's going to happen? These are still LLMs. There's nothing in future generations that guarantees those changes would be better rather than flat-out regressions. Humans can't even agree on what good code looks like, as it's very subjective and heavily dependent on the skills of the team.
Likely, you ask gpt-6 to improve your code and it just makes up piddly architecture changes that don't fundamentally improve anything.
happymellon 7 hours ago [-]
Presumably as a father they are thinking about ways for their children to be employed.
Cthulhu_ 8 hours ago [-]
If it actually works like that, it'll be just like all labor-saving innovations, going back to the loom and printing press and the like; people will lose their job, but it'll be local / individual tragedies, the large scale economic impact will likely be positive.
It'd still suck to lose your job / vocation though, and some of those won't be able to find a new job.
solfox 6 hours ago [-]
Honestly, much of work under capitalism is meaningless (see: The Office). The optimistic take is that many of those same paper-pushing roles could evolve into far more meaningful work—with the right training and opportunity (also AI).
When the car was invented, entire industries tied to horses collapsed. But those that evolved, leveled up: Blacksmiths became auto mechanics and metalworkers, etc.
As a creatively minded person with entrepreneurial instincts, I’ll admit: my predictions are a bit self-serving. But I believe it anyway—the future of work is entrepreneurial. It’s creative.
jplusequalt 5 hours ago [-]
>the future of work is entrepreneurial. It’s creative.
How is this the conclusion you've come to when the sectors impacted most heavily by AI thus far have been graphic design, videography, photography, and creative writing?
throwaway35364 5 hours ago [-]
> The optimistic take is that many of those same paper-pushing roles could evolve into far more meaningful work—with the right training and opportunity (also AI).
There already isn't enough meaningful work for everyone. We see people with the "right training" failing to find a job. AI is already making things worse by eliminating meaningful jobs — art, writing, music production are no longer viable career paths.
__MatrixMan__ 4 hours ago [-]
My biggest concern about AI is that it will make us better at things we're already doing - things we would have stopped doing if we hadn't had such a slow introduction to their consequences, consequences we're now accustomed to but not adapted to. Frog-in-slowly-warming-water stuff, like the troubling relationship between advertising and elections, or the lack of consent in our monetary systems.
I'm worried the shock will not be abrupt enough to encourage a proper rethink.
tough 2 hours ago [-]
Let's be honest here, to all the productivity enjoyers: you're not gaining any -hours- or productivity for your company just by using AI unless your profit also increases
the rest is fugazi
crabsand 3 hours ago [-]
I believe LLMs will create more jobs than they eliminate, by raising standards in various fields, including software development.
We will have to get to 100% test coverage, document everything, and add more bells and whistles to UIs, etc. The day-to-day activity may change, but there will always be developers.
asadotzler 54 seconds ago [-]
What I'm seeing is a marked decrease in standards and quality everywhere I see LLMs (and diffusion models - any of the transformer-based stuff).
Sometimes that decrease in quality is matched by an increase in reach/access, and so the benefits can outweigh the costs. Think about language translation in web browsers and even smart spectacles, for example. Language translation has been around forever but was generally limited to popular books or small-scale proprietary content, because it was expensive to use multilingual humans to do that work.
Now even my near-zero readership blog can be translated from English to Portuguese (or most other widely used languages) for a reader in Brazil with near-zero cost/effort for that user. The quality isn't as good as human translation, often losing nuance and style and sometimes even with blatant inaccuracies, but the increased access offered by language translation software makes the lower standard acceptable for lots of use cases.
I wouldn't depend on machine translation for critical financial, healthcare, or legal use cases, though I might start there to get the gist, but for my day-to-day reading on the web, it's pretty amazing.
Software at scale is different than individuals engaging in leisure activities. A loss of nuance and occasional catastrophic failures in a piece of software with hundreds of millions or billions of users could have devastating impacts.
lonelyasacloud 6 hours ago [-]
The report looks "at the labor market impact of AI chatbots on 11 occupations, covering 25,000 workers and 7,000 workplaces in Denmark in 2023 and 2024."
As with all other technologies, the jobs it removes are not normally in the country that introduces it - but that doesn't mean they never happen elsewhere.
For example, the automated looms that the Luddites were protesting didn't result in significant job losses in the UK. But how much clothing manufacturing has been curtailed in Africa because of them and the similar innovations since, which have led to cheap mass-produced clothes making it uneconomic to produce there?
As suggested by this report, Denmark and the West will probably make good elsewhere and be largely unaffected.
However, places like India and Vietnam, with large industries based on call centres and outsourced development servicing the West, are likely to be more vulnerable.
colinmorelli 7 hours ago [-]
FYI: The actual study may not quite say what this article is suggesting. Unless I'm missing something, the study seems to focus on employee use of chat-based assistants, not on company-wide use of AI workflow solutions. The answers come from interviewing the employees themselves. There is an analysis of impacts on the labor market, but that is likely flawed if the companies are segmented based on employee use of chat assistants versus company-wide deployment of AI technology.
In other words, this more likely answers the question "If customer support agents all use ChatGPT or some in-house equivalent, does the company need fewer customer support agents?" than it answers the question "If we deploy an AI agent for customers to interact with, can it reduce the volume of inquiries that make it to our customer service team and, thus, require fewer agents?"
qoez 4 hours ago [-]
Chatbots probably won't be the final interface. But machine learning in general is a full-on revolutionary tech (much clearer now than ten years ago) that hasn't been fully explored and will eventually be quite disruptive to the economy, probably on the scale of computers. Though it likely won't take the form it's taking today (chatbots etc).
jalev 4 hours ago [-]
The headline is a bit baity (in that the article describes no job losses because there hasn't been any economic benefit from LLM/GenAI to justify them), but what if we re-ran the study in a country _without_ exceptionally strong union participation? Would we see the same results?
mg 9 hours ago [-]
One thing nobody seems to discuss is:
In the future, we will do a lot more.
In other words: there will be a lot more work. So even if robots do 80% of it, if we do 10x more, the amount of work we need humans to do will double.
We will write more software, build more houses, build more cars, planes and everything down the supply chain to make these things.
When you look at planet earth, it is basically empty, while rent in big cities is high. But nobody needs to sleep in a big city. We just do so because getting in and out of it is cumbersome and building houses outside the city is expensive.
When robots build those houses and drive us into town in the morning (while we work in the car), that will change. I have done a few calculations of how much more mobility we could achieve with the existing road infrastructure if we used electric autonomous buses, and it is staggering.
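Here's a back-of-envelope version of that kind of calculation in Python. Every figure - lane throughput, car occupancy, bus headway, bus capacity - is an assumed, illustrative number, not a measured one:

    # Passengers per lane-hour: private cars vs. electric autonomous buses.
    CARS_PER_LANE_HOUR = 2000   # rough highway lane throughput (assumed)
    CAR_OCCUPANCY = 1.5         # average passengers per car (assumed)
    BUS_HEADWAY_S = 10          # one autonomous bus every 10 seconds (assumed)
    BUS_CAPACITY = 60           # passengers per bus (assumed)

    car_pax = CARS_PER_LANE_HOUR * CAR_OCCUPANCY
    bus_pax = (3600 / BUS_HEADWAY_S) * BUS_CAPACITY
    print(f"cars:  {car_pax:.0f} passengers/lane-hour")
    print(f"buses: {bus_pax:.0f} passengers/lane-hour ({bus_pax / car_pax:.1f}x)")

Under these assumptions a bus lane moves roughly 7x the people of a car lane, which is the kind of headroom I mean.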
Another way to look at it: currently, most of the matter of planet earth has not been transformed into infrastructure used by humans. As work becomes cheaper, more and more of it will be. There is almost infinitely much to do.
WillAdams 8 hours ago [-]
For my part, I would like for there still to be wild and quiet places to go to when I need time away from my fellow man, and I don't envision a world paved over for modern infrastructure as desirable, but rather the stuff of nightmares such as the movie _Silent Running_ envisioned.
That said, the fact that I can't find an open-source LLM front-end which will accept a folder full of images, run a prompt on each sequentially, and then return the results in aggregate is incredibly frustrating.
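In the meantime, the workflow is scriptable in a few lines. A minimal sketch against the OpenAI Python SDK - the model name, folder, file extension, and prompt are placeholders, and any vision-capable endpoint would do:

    import base64
    import pathlib
    from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY

    client = OpenAI()
    PROMPT = "Describe this image in one sentence."  # placeholder prompt

    results = []
    for img in sorted(pathlib.Path("images").glob("*.jpg")):  # placeholder folder
        b64 = base64.b64encode(img.read_bytes()).decode()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any vision-capable model
            messages=[{"role": "user", "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ]}],
        )
        results.append((img.name, resp.choices[0].message.content))

    for name, answer in results:  # the aggregated report
        print(f"{name}: {answer}")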
patapong 8 hours ago [-]
I agree! People will become more productive, meaning fewer people can do more work. That said, I hope this does not result in the production of evermore things at the cost of nature!
I think we are at a crossroads as to what this will result in, however. In one case, the benefits will accrue at the top, with corporations earning greater profits while employing less people, leaving a large part of the population without jobs.
In the second case, we manage to capture these benefits, and confer them not just on the corporations but also the public good. People could work less, leaving more time for community enhancing activities. There are also many areas where society is currently underserved which could benefit from freed up workforce, such as schooling, elderly care, house building and maintenance etc etc.
I hope we can work toward the latter rather than the former.
jspdown 8 hours ago [-]
> That said, I hope this does not result in the production of evermore things at the cost of nature!
It will for sure!
Even today, the impact is colossal.
As an example, people used to read technical documentation; now they ask LLMs, which replaces a simple static file with 50k matrix multiplications.
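For scale, a rough order-of-magnitude comparison - the model size and answer length are assumed round figures:

    # Serving a static doc page vs. generating an answer with a large LLM.
    PARAMS = 70e9    # assumed model size, parameters
    TOKENS = 500     # assumed length of the generated answer, tokens
    flops = 2 * PARAMS * TOKENS  # rule of thumb: ~2 FLOPs per parameter per token
    print(f"~{flops:.0e} FLOPs per answer")  # ~7e13 FLOPs, vs ~1e5 bytes
                                             # to serve a static page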
poisonborz 6 hours ago [-]
...and saves humongous amounts of time in the process. Documentation is rarely a good read (sadly - I like good docs), and we should waste less engineering time reading it.
ringeryless 7 hours ago [-]
the earth is not the property of humans, nor is any of it empty until you show zero ecosystem or wildlife or plants there.
for sure, we are doing our best to eradicate the conditions that make earth habitable, however i suggest that the first needed change is for computer screen humans to realize that other life forms exist. this requires stepping outside and questioning human hubris, so it might be a big leap, but i am fairly confident that you will discover that absolutely none of our planet is empty.
jplusequalt 5 hours ago [-]
Yes, lets extract even more resources from the Earth when we're already staring down the barrel of long term environmental issues.
10729287 9 hours ago [-]
I like that some places are empty.
mg 9 hours ago [-]
Would you be ok if, instead of 97% of earth being empty, 94% were empty and your rent were cut in half? Another plus point of the future: an electric autonomous bus at your disposal every 5 minutes, bringing you to whatever nice lonely place you wish.
poisonborz 6 hours ago [-]
Rents, or any living costs going down? But everything is based on "stocks only go up".
tgv 8 hours ago [-]
I've got no idea what you're going on about, but 97% of the Earth isn't empty in any useful sense. For starters, almost 70% is ocean. There are also large parts which are otherwise uninhabitable, and large parts which have agricultural use. Buses don't go to uninhabited places, since that costs too much. Every five minutes is a frequency which no form of public transport can afford.
mg 8 hours ago [-]
The nature of technological progress is that it makes formerly uninhabitable areas inhabitable.
The costs of buses are mostly the driver, which will go away. The rest is mostly building and maintaining them, which will be done by robots. The rest is energy: the sun sends more energy to earth in an hour than humans use in a year.
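That last claim checks out on the back of an envelope, using round figures for the solar constant, Earth's radius, and global primary energy use (~600 EJ/year):

    import math

    SOLAR_CONSTANT = 1361       # W/m^2 at the top of the atmosphere
    EARTH_RADIUS = 6.371e6      # m
    GLOBAL_USE_PER_YEAR = 6e20  # J, roughly 600 EJ of primary energy

    intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2  # W hitting Earth
    per_hour = intercepted * 3600                               # J per hour
    print(f"{per_hour:.1e} J/hour vs {GLOBAL_USE_PER_YEAR:.1e} J/year")
    # ~6.2e20 J arrives per hour: about one year of human energy use.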
tgv 2 hours ago [-]
I've done a quick check on the 2023 financial statement of the Amsterdam public transport company, and personnel (which is absolutely not just drivers) is 1/3 of the total.
And the use of solar energy is absolutely unrelated to doubling the living area. That can, and should, be done anyway.
WillAdams 8 hours ago [-]
How will this 3% be selected?
Which of the few remaining wild creatures will be displaced?
Planet earth is still resource constrained. This is easy to forget when skills availability is more frequently the bottleneck and you live in a society that for the time being has fairly easy access to raw materials.
mbesto 6 hours ago [-]
When new technology seemingly replaces human effort, it often doesn't directly replace humans (e.g., businesses don't rush to immediately replace them with the technology). More often than not, these systems are put in place to help scale a business. We've seen this time and time again, and AI seems to be no different.
Anecdotal situation - I use ChatGPT daily to rewrite sentences in the client reports I write. I would have traditionally had a marketing person review these and rewrite them, but now AI does it.
dfxm12 5 hours ago [-]
AI can't replace jobs or hurt wages. AI doesn't make these decisions & wages have been suppressed for a very long time, well before general AI adoption. Managers make these decisions. Don't blame AI if you get laid off or if your wages aren't even keeping up with inflation, let alone your productivity. Blame your manager.
Be wary of people trying to deflect blame away from the managerial class for these issues.
Remember when economists didn't see the 2008 crash coming, even though a 5th grader could see it was mathematically impossible for it not to happen?
Either mathematics sucks or economists suck. Real hard choice.
Eliezer 4 hours ago [-]
Translators? Graphic artists? The omission of the most obviously impacted professions immediately identifies this as a cooked study, along with talking about LLMs as "chatbots". I wonder who paid for it.
jimmyjazz14 4 hours ago [-]
Are graphic artists actually getting replaced by AI? If so, that would surprise me; as impressive as AI image generation is, very little of what it does seems like it would replace a graphic artist.
thinkingtoilet 7 hours ago [-]
This is just objectively false. My friend is a freelance copywriter and lives in the freelance world. It is 100% replacing writing jobs, editing jobs, and design jobs.
AstroBen 6 hours ago [-]
Since when? If they're writing online content, that was wiped out somewhat recently by Google changing their search algorithm and killing a huge number of content-based sites.
schnitzelstoat 7 hours ago [-]
To be fair, those jobs were already pretty precarious.
Ever since the explosion in popularity of the internet in the 2000s, anything journalism-related has been in terminal decline. The arrival of smartphones accelerated this process.
tummler 7 hours ago [-]
This is shameless "AI is not bad, we swear" propaganda. Study looked at 11 occupations, 25k workers, in Denmark, in 2023-2024. How this says anything of consequence for the world at large (or even just the US) with developments moving as fast as they are, in such an unstable economic environment, is beyond me. What I do know is that I have plenty of first-hand anecdotal evidence to the contrary.
fajmccain 5 hours ago [-]
AI scaring students away from the software field, and simultaneously making it hard for new developers to learn (because it's too tempting to click a button rather than struggle for 30 minutes), could be balancing out some job losses as well.
whoomp12342 2 hours ago [-]
Maybe not directly, but you can't deny that this new wave of AI is allowing job applicants to SPAM openings at an unprecedented rate, which makes it much harder for real humans to be seen...
cjbgkagh 3 hours ago [-]
How long until we can replace economists with AI? It would be hard to do worse than what we already have.
godzillabrennus 7 hours ago [-]
November 30th, 2022 is when ChatGPT burst onto the world stage and upended what people thought AI was capable of doing. It's been less than three years since then. The technology is still imperfect but improving at an exponential rate.
I know it’s replaced marketing content writers in startups. I know it has augmented development in startups and reduced hiring needs.
The effects as it gains capability will be mass unemployment.
sharemywin 6 hours ago [-]
At what point would anyone trust an AI to do a job versus just giving advice? Even when you have it "write" code, it's really just giving advice.
even customer service bots are just nicer front ends for knowledge bases.
tmvphil 6 hours ago [-]
It's seemed to me that all the productivity gains will be burned up by making our jobs more and more BS, not by reducing hours worked, just like with previous technology. I expect more meetings, not less work.
emsign 3 hours ago [-]
Oh, it's because it's not as useful and productive as the hype is trying to convince us of.
carabiner 42 minutes ago [-]
"When the anecdotes don't match the data, it's usually the anecdotes that are correct" - Jeff Bezos
rincebrain 6 hours ago [-]
n=small, but I've had multiple friends who did freelance technical writing and copyediting work tell me that the market died when genAI became easily available. Repeat clients were no longer interested in their work, and the new work postings weren't even really worth the cost, even if you just handed back unmodified genAI output instantly.
So I find this result improbable, at best, given that I personally know several people who had to scramble to find new ways of earning money when their opportunities dried up with very little warning.
specproc 3 hours ago [-]
Work is gaseous, it expands to fill the available space.
5cott0 5 hours ago [-]
In contrast to statements like the following, from the dweebs sucking Harry Potter's farts out of the LessWrong bubble:
>Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days
Economists can't figure out that skyrocketing corporate profits result in skyrocketing inflation for workers, because their models don't let them consider that the majority of workers have given up the power to negotiate wages - so I certainly would not trust their determinations about the job market with respect to AI. Burying one's head in the sand makes everything look A-OK, and that perspective error skews their entire field's work.
seydor 2 hours ago [-]
Maybe it's transitory
zelon88 5 hours ago [-]
> "My general conclusion is that any story that you want to tell about these tools being very transformative, needs to contend with the fact that at least two years after [the introduction of AI chatbots], they've not made a difference for economic outcomes."
I'm someone who tries to avoid AI tools. But this paper is literally basing its whole assessment on just two things: wages and hours. That makes its assertion disingenuous.
Lets assume that I work 8 hours per day. If I am able to automate 1h of my day with AI, does that mean I get to go home 1 hour early? No. Does that mean I get an extra hour of pay? No.
So the assertion that there has been no economic impact assumes that the AI is a separate agent that would normally be paid in wages for time. That is not the case.
The AI is an augmentation of an existing human agent. It has the potential to increase the efficiency of a human agent by n%. So we need to measure the impact it has on effectiveness and efficiency. It will never offset wages or hours; it will just increase the productivity for a given wage or number of hours.
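To put numbers on it: one automated hour in an eight-hour day shows up as output per wage-hour, not in the wage or hours columns the paper measures. A tiny illustrative calculation:

    # One hour of an 8-hour day automated away: wages and recorded hours
    # are unchanged, but output per wage-hour rises.
    hours_paid = 8.0
    hours_automated = 1.0
    output_gain = hours_paid / (hours_paid - hours_automated) - 1
    print(f"output per wage-hour: +{output_gain:.1%}")  # +14.3%

The same pay buys ~14% more output, and a wages-and-hours dataset records exactly nothing.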
jandrese 5 hours ago [-]
I think it's time for OpenAI to release an AI economist.
givemeethekeys 6 hours ago [-]
These monkeys should look into the recent history of the music industry.
blitzar 6 hours ago [-]
It's also doing no meaningful quantity of "work".
ramesh31 4 hours ago [-]
This is a completely meaningless study with no correlation at all to reality in the US right now. The hockey stick started around 2/25. We are in a completely different world now for devs.
bluecheese452 7 hours ago [-]
Economists are PR people. Of course they would say that.
paulsutter 9 hours ago [-]
I spend much more time coding now that I can code 5x faster
Demand for software has high elasticity
wisty 5 hours ago [-]
Tools can either increase or decrease employment.
Imagine if a tool made content writers 10x as productive. You might hire more, not less, because they are now better value! You might eventually realise you spent too much, but this will come later.
AFAIK no company I know of starts a shiny new initiative by firing; they start by hiring, then cutting back once they have their systems in place or hit a ceiling. Even Amazon runs projects fat, then makes them lean, AFAIK.
There's also pent up demand.
You never expect a new labour-saving device to cost jobs while the project managers are in the empire-building phase.
meltyness 7 hours ago [-]
Also economists, during every bubble ever:
Ir0nMan 3 hours ago [-]
Alternative 1915 headline: "The motorcar is not replacing jobs or hurting wages in the horse-and-carriage industry, say economists".
Pxtl 4 hours ago [-]
Because it doesn't do anything useful.
nemo44x 4 hours ago [-]
AI makes people more productive, which incentivizes me to hire more people, not fewer. In many cases, anyhow.
If each of my developers is 30% more productive, that means we can ship 30% more functionality, which means more budget to hire more developers. If you think you'll just pocket that surplus, you have another thing coming.
deadbabe 4 hours ago [-]
Companies have been wanting to lay people off. Using AI as an excuse is a convenient way to turn a negative into a positive.
Truth is, companies that don’t need layoffs are pushing employees to use AI to supercharge their output.
You don’t grow a business by just cutting costs, you need to increase revenue. And increasing revenue means more work, which means it’s better for existing employees to put out more with AI.
gigel82 4 hours ago [-]
It's not replacing jobs, but it's definitely the scarecrow invoked in layoff decisions across the tech industry. I suspect whatever metrics they use are simply too slow to measure the actual impact this is having in the job market.
cess11 9 hours ago [-]
'"The adoption of these chatbots has been remarkably fast," Humlum told The Register. "Most workers in the exposed occupations have now adopted these chatbots. Employers are also shifting gears and actively encouraging it. But then when we look at the economic outcomes, it really has not moved the needle."'
So, as of yet, according to these researchers, the main effect is that of a data pump: certain corporations get deep insight into people's and other corporations' inner lives.
piva00 9 hours ago [-]
I was discussing with a colleague over the past months my view on how and why all these AI tools are being shoved down our throats (just look at Google's Gemini push into all enterprise tools; it's like Google+ for B2B) before there are clear-cut use cases you can point to and say "yes, this would have been much harder to do without an LLM". My view is that training data is the most valuable asset: all these tools are just data-collection machines with some bonus features that make them look somewhat useful.
I'm not saying that I think LLMs are useless, far from it, I use them when I think it's a good fit for the research I'm doing, the code I need to generate, etc., but the way it's being pushed from a marketing perspective tells me that companies making these tools need people to use them to create a data moat.
Extremely annoying to be getting these pop-ups to "use our incredible Intelligence™" at every turn, it's grating on me so much that I've actively started to use them less, and try to disable every new "Intelligence™" feature that shows up in a tool I use.
Macha 8 hours ago [-]
It seems like very simple cause and effect from an economic standpoint. Hype about AI is very high, so investors ask boards what they're doing about AI and whether they're using it, because they think AI will disrupt investments that don't.
The boards in turn instruct the CEOs to "adopt AI", and so all the normal processes for deciding what/if/when to do things get short-circuited, and you get AI features that no one asked for, or mandates for employees to adopt AI with very shallow KPIs to claim success.
The hype really distorts both sides of the conversation. You get the boosters, for whom any use of AI is a win no matter how inconsequential the results, and then you get things like the original article, which treats the absence of job losses so far as a sign that nothing has changed. And while it might disprove the hype (especially the "AI is going to replace all mental labour in $SHORT_TIMEFRAME" kind), it really doesn't indicate that it won't replace anything.
Like, when has a technology making the customer support experience worse for users or employees ever had its rollout stopped, if there were cost savings to be had?
I think this is why AI is so complicated for me. I've used it, and I can see some gains. But it's on the order of when IDE autocomplete went from substring matches of single methods to autocompleting chains of method calls based on types. The agent stuff fails on anything but the most bite-size work when I've tried it.
Clearly some people see it as something more transformative than that. There are other times when people have seen something as transformative and it's been so clearly nothing of value (NFTs, for example) that it's easy to ignore the hype train. The reason AI is challenging for me is that it's clearly not nothing, but it's also so far away from the vision that others have that it's not clear how realistic that vision is.
bwfan123 5 hours ago [-]
LLMs have mesmerized us because they are able to communicate meaning to us.
Fundamentally, we (the recipients of LLM output) generate the meaning from the words given. I.e., LLMs are great when the recipient of their output is a human.
But when the recipient is a machine, the model breaks down, because machine-to-machine interaction has to be deterministic. This is the weakness I see, regardless of all the hype about LLM agents: fundamentally, LLMs are not deterministic machines.
LLMs also lack a fundamental human capability, deterministic symbolization: creating NEW symbols with associated rules which can deterministically model the worlds we interact with. They have a long way to go on this.
namaria 7 hours ago [-]
Bingo. Especially with the 'coding assistants', these companies are getting great insight into how software features are described and built, and how software is architected across the board.
It's very telling that we sometimes see "we won't use your data for training" and opt-outs, but never "we won't collect your data". 'Training' is at best ill-defined.
cess11 6 hours ago [-]
Most likely they can identify very good software developers, or at least acquire this ability in the short term. That information has immediate value.
partiallypro 5 hours ago [-]
I think if we go into a sharp recession, companies will use this as an excuse to replace workers with other workers who use AI effectively, cutting down on overhead. It just seems obvious this will happen. I don't think it's the doom-and-gloom scenario, but many CEOs etc. are chomping at the bit.
wonderwonder 7 hours ago [-]
I see AI not replacing all workers but reducing head count.
On a software team, I could see a team of 8 reduced to a team of 4 with AI.
Especially in smaller, leaner companies.
You already see attorneys using it to write briefs, often to hilarious effect. These are clearly the precursor, though, to a much reduced need for junior/associate-level attorneys at firms.
6510 7 hours ago [-]
An LLM wouldn't intentionally confuse "didn't" with "isn't"
intellectronica 7 hours ago [-]
"Life is awesome", said the frog, "the owners arranged a jacuzzi for me, it's warm and lovely in the water, not dangerous at all".
yieldcrv 4 hours ago [-]
you’re not going to see the firing but you’re also not going to see the hiring
watch out for headcount lacking in segments of the market
FilosofumRex 7 hours ago [-]
Keep in mind this kind of drivel is produced by economists and the tail end of CS, who are desperately trying to stay relevant in the emerging workplace.
The wise will displace economists and consultants with LLMs, but the trend followers will hire them to prognosticate about the future impact - such that the net effect could be zero.
insane_dreamer 2 hours ago [-]
Still much too early to tell. Give it another couple of years.
user9999999999 8 hours ago [-]
It shouldn't. It's propaganda spread by VCs and AI 'thought leaders' who are finally seeing a glimmer of their fantastical imaginings coming to life (it isn't).
poulpy123 8 hours ago [-]
LMAO it's too early and too small to see anything yet
windex 9 hours ago [-]
Right now AI's impact is the equivalent of giving the ancient Egyptians a couple of computer chips. People will eventually figure out what they are, but until then they will only be used as combs, paperweights, pendants, etc.
I would say the use cases are only coming into view.
bilsbie 8 hours ago [-]
I’m starting to think most jobs are performative. Hiring is just managers wanting more people in the office to celebrate their birthdays.
And any important jobs won’t be replaced because managers are too lazy and risk averse to try AI.
We may never see job displacement from AI. Did you know bank teller jobs actually increased in the decades following the rollout of ATMs?
jvanderbot 8 hours ago [-]
You should take time to learn what those jobs are for. You'd be surprised what it takes to keep a business running past any reasonable level of scale.
bilsbie 4 hours ago [-]
I’ve worked 10+ of those jobs, guy.
jvanderbot 3 hours ago [-]
Not convinced.
A lot of jobs have tons of slack until something goes wrong.
But even then, I'm not saying all are equally vital, I'm just saying that the statement, "most jobs are performative" doesn't even come close to being supported by "I've worked 10 performative jobs".
Unless twice the work is suddenly required, which I doubt.
So you could work on more things with the same number of employees, make more money as a result, and either further increase the number of things you do, or if not, increase your revenue and hopefully profits per-employee.
I would also be surprised if twice the work was "suddenly" required, but would you be surprised if people buy more of something when it costs less? In the 1800s ordinary Americans typically owned only a few outfits. Coats were often passed down several generations. Today, ordinary Americans usually own dozens of outfits. Did Americans in the 1800s simply not like owning lots of clothing? Of course not. They would have liked to own more clothing, but demand was constrained by cost. As the price of clothing has gone down, demand for clothing has increased.
With software, won't it be the same? If engineers are twice as productive as before, competitive pressure will push the price of software down. Custom software for businesses (for example) is very expensive now. If it were less expensive, maybe more businesses will purchase custom software. If my Fastmail subscription becomes cheaper, maybe I will have more money to spend on other software subscriptions. In this way, across the whole economy, it is very ordinary for productivity gains to not reduce employment or wages.
Of course demand is not infinitely elastic (i.e. there is a limit on how many outfits a person will buy, no matter how cheap), but the effects of technological disruption on the economy are complex. Even if demand for one kind of labor is reduced, demand for other kinds of labor can increase. Even if we need fewer weavers, maybe we need more fashion designers, more cotton farmers, more truckers, more cardboard box factory workers, more logistics workers, and so on. Even if we need fewer programmers, maybe we need more data center administrators?
No one knows what the future economy will look like, but so far the long term trends in economic history are very promising.
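To make the elasticity point concrete, here's a minimal back-of-the-envelope sketch (Python; the numbers are purely illustrative assumptions, not estimates from the paper):

    # Constant-elasticity demand: Q = A * P**eps.
    # Suppose a productivity gain halves the labor needed per unit of
    # output and competition passes it through as a 50% price cut.
    def relative_labor_demand(price_cut, productivity_gain, eps):
        quantity = (1 - price_cut) ** eps    # demand response to the cut
        return quantity / productivity_gain  # labor needed per unit falls

    print(relative_labor_demand(0.5, 2.0, eps=-1.2))  # elastic demand: ~1.15x, more jobs
    print(relative_labor_demand(0.5, 2.0, eps=-0.5))  # inelastic demand: ~0.71x, fewer jobs

Whether the weavers (or the programmers) come out ahead rides entirely on that elasticity, which is exactly the crux of the disagreement here.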
I like this sentence because it is grammatically and syntactically valid but has the same relationship to reality as, say, the muttering of an incantation or spell, in that it seeks to make the words come true by speaking them.
Aside from simply hoping that, if somebody says it, it could be true, "If everyone's hours got cut in half, employers would simply keep everyone and double wages" is up there with "It is very possible that if my car broke down I'd just fly a Pegasus to work".
But more generally, my comment is not absurd; it's a pattern that has played itself out in economic history dozens of times.
Despite the fact that modern textile and clothing machinery is easily 1000x more efficient than weaving cloth and sewing shirts by hand, the modern garment industry employs more people today than that of medieval Europe did.
Will AI be the same? I don't know, but it wouldn't be unusual if it was.
This makes sense. If everyone’s current workloads were suddenly cut in half tomorrow, there would simply be enough demand to double their workloads. This makes sense across the board because much like clothing and textiles, demand for every product and service scales linearly with population.
I was mistaken, you did not suggest that employers would gift workers money commensurate with productivity, you simply posit that demand is conceptually infinite and Jevons paradox means that no jobs ever get eliminated.
More people are also available, since the fields are producing by themselves, comparatively. Not to mention fewer of us die to epidemics, famines and swords.
My company just redid our landing page. It would probably have taken a decent developer two weeks to build it out. Using AI to create the initial drafts, it took two days.
It's not miraculous but I feel like it saves me a couple hours a week from not going on wild goose chases. So maybe 5% of my time.
I don't think any engineering org is going to notice 5% more output and lay off 1/20th of their engineers. I think for now most of the time saved is going back to the engineers.
I would (similarly insultingly) suggest that if you think this is true, you're spending time doing things more slowly that you could be doing more productively by using contemporary tools.
But here's the thing - there is already plenty of documented proof of individuals losing their job to ChatGPT. This is an article from 2 years ago: https://www.washingtonpost.com/technology/2023/06/02/ai-taki...
Early on in a paradigm shift, when you have small moves, or people are still trying to figure out the tech, it's likely that individual moves are hard to distinguish from noise. So I'd argue that a broad-based, "just look at the averages" approach is simply the wrong approach to use at this point in the tech lifecycle.
FWIW, I'd have to search for it, but there were economic analyses done that said it took decades for the PC to have a positive impact on productivity. IMO, this is just another article about "economists using tools they don't really understand". For decades they told us globalization would be good for all countries, they just kinda forgot about the massive political instability it could cause.
> In contrast, this article, i.e., the paper it discusses, is based on what has happened so far.
Not true. The article specifically calls into question whether the massive spending on AI is worth it. AI is obviously an investment, so to determine whether it's "worth it", you need to consider future outcomes.
I honestly think computers have a net negative productivity impact in many organizations. Maybe even "most".
https://usafacts.org/articles/what-is-labor-productivity-and...
Even more surprising for me is that productivity growth declined during the ZIRP era. How did we take all that free money and produce less?
Could you say a few more words on this please? Are you referring to the rise of China?
What happened in 2023 and 2024, actually?
Nitpicky but it's worth noting that last year's AI capabilities are not the April 2025 AI capabilities and definitely won't be the December 2025 capabilities.
It's using deprecated, since-replaced technology to make a statement that is not forward-projecting. I'm struggling to see the purpose. It's like announcing that the sun is still shining at 7pm, no?
And the hype was insane in 2023 already - it's useful to compare actual outcomes vs historic hype to gauge how credible the hype sellers are.
Maybe progress over the last 2-3 months is hard to see, but progress over the last 6 is very clear.
It's more about automating workflows that are already procedural and/or protocolized, but where information gathering is messy and unstructured (i.e. some facets of law, health, finance, etc.).
Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs, your medical history, your preferences, etc. But gathering all of that information requires a mix of collecting medical records, talking to the patient, etc. Once that information is available, we can execute a fairly procedural plan to put together a diet that will likely work for you.
These are cases that I believe LLMs are actually very well suited, if the solution can be designed in such a way as to limit hallucinations.
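As a hedged sketch of what "LLM for the messy intake, plain code for the protocol" could look like, the division of labor below keeps generation out of the decision step entirely (the llm_complete callable and all field names are hypothetical):

    import json

    INTAKE_PROMPT = ("From the patient notes below, extract JSON with keys "
                     "calorie_target (int) and allergies (list of strings). "
                     "Answer with JSON only.\nNotes: {notes}")

    def extract_profile(notes, llm_complete):
        # llm_complete is any text-completion callable (hypothetical).
        profile = json.loads(llm_complete(INTAKE_PROMPT.format(notes=notes)))
        # Sanity-check the extraction; a hard failure beats a silent hallucination.
        assert 1000 <= profile["calorie_target"] <= 4000
        return profile

    def build_meal_plan(profile, food_db):
        # Deterministic, auditable protocol: no generation happens here.
        allowed = [f for f in food_db
                   if not set(f["allergens"]) & set(profile["allergies"])]
        plan, total = [], 0
        for food in sorted(allowed, key=lambda f: -f["protein"]):
            if total + food["calories"] <= profile["calorie_target"]:
                plan.append(food)
                total += food["calories"]
        return plan

The LLM only touches the unstructured-to-structured step, so a hallucination can at worst produce a rejected profile, not a bad diet.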
O3's web research seems to have gotten much, much better than their earlier attempts at using the web, which I didn't like. It seems to browse in a much more human way (trying multiple searches, noticing inconsistencies, following up with more refined searches, etc).
But I wonder how it would do in a case like yours where there is conflicting information and whether it picks up on variance in information it finds.
I think it’s weird to reject AI based on its current form.
> Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs
No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive. And I would know, I've had to use one to help me manage an eating disorder!
There is already so much bullshit in the diet space that adding AI bullshit (again, using the technical definition of bullshit here) only stands to increase the value of an interaction with a person with knowledge.
And that's without getting into what happens when brand recommendations are baked into the training data.
[0] https://link.springer.com/article/10.1007/s10676-024-09775-5
Just like every other form of ML we've come up with, LLMs are imperfect. They get things wrong. This is more of an indictment of yeeting a pure AI chat interface in front of a consumer than it is an indictment of the underlying technology itself. LLMs are incredibly good at doing some things. They are less good at other things.
There are ways to use them effectively, and there are bad ways to use them. Just like every other tool.
I understand your perspective, but the intention was to use a term we've all heard to reflect the thing we're all thinking about. Whether or not this is the right term to use for scenarios where the LLM emits incorrect information is not relevant to this post in particular.
> No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive.
No, this is not why real dietitians are expensive. Real dietitians are expensive because they go through extensive training on a topic and are a licensed (and thus supply constrained) group. That doesn't mean they're operating without a grounding fact base.
Dietitians are not making up nutritional evidence and guidance as they go. They're operating on studies that have been done over decades of time and millions of people to understand in general what foods are linked to what outcomes. Yes, the field evolves. Yes, it requires changes over time. But to suggest we "don't know" is inconsistent with the fact that we're able to teach dietitians how to construct diets in the first place.
There are absolutely cases in which the confounding factors for a patient are unique enough such that novel human thought will be required to construct a reasonable diet plan or treatment pathway for someone. That will continue to be true in law, health, finances, etc. But there are also many, many cases where that is absolutely not the case, the presentation of the case is quite simple, and the next step actions are highly procedural.
This is not the same as saying dietitians are useless, or physicians are useless, or attorneys are useless. It is to say that, due to the supply constraints of these professions, there are always going to be fundamental limits to the amount they can produce. But there is a credible argument to be made that if we can bolster their ability to deliver the common scenarios much more effectively, we might be able to unlock some of the capacity to reach more people.
As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.
The main value you get from a programmer is that they understand what they are doing and can take responsibility for what they are developing. Very junior developers are hired mostly as an investment, so that they become productive and stay with the company. AI might help with some of this but doesn’t really replace anyone in the process.
For support, there is massive value in talking to another human and having them trying to solve your issue. LLMs don’t feel much better than the hardcoded menu style auto support there already is.
I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
>I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
No way. NFTs did not make any headway in "the real world": their value proposition was that their cash value was speculative, like most other Blockchain technologies, and that understandably collapsed quickly and brilliantly. Right now developers are using LLMs and they have real tangible advantages. They are more successful than NFTs already.
I'm a huge AI skeptic and I believe it's difficult to measure their usefulness while we're still in a hype bubble but I am using them every day, they don't write my prod code because they're too unreliable and sloppy, but for one shot scripts <100 lines they have saved me hours, and they've entirely replaced stack overflow for me. If the hype bubble burst today I'd still be using LLMs tomorrow. Cannot say the same for NFTs
How exactly is this different from getting advice from someone who acts confidently knowledgeable? Diet advice is an especially egregious example, since I can have 40 different dieticians give me 72 different diet/meal plans with them saying 100% certainty that this is the correct one.
It's bad enough the AI marketers push AI as some all knowing, correct oracle, but when the anti-ai people use that as the basis for their arguments, it's somehow more annoying.
Trust but verify is still a good rule here, no matter the source, human or otherwise.
E.g., next time a lawyer abandons your civil case and ghosts you after being clearly negligent and downright bad in their representation: good luck holding them accountable with any professional body and seeing real consequences.
If I ask it how to accomplish a task with the C standard library and it tells me to use a function that doesn't exist in the C standard library, that's not just "wrong" that is a fabrication. It is a lie
If you ask me to remove whitespace from a string in Python and I mistakenly tell you use ".trim()" (the Java method, a mistake I've made annoyingly too much) instead of ".strip()", am I lying to you?
It's not a lie. It's just wrong.
> Lying requires intent to deceive
LLMs do have an intent to deceive, built in!
They have been built to never admit they don't know an answer, so they will invent answers based on faulty premises
I agree that for a human mixing up ".trim()" and ".strip()" is an honest mistake
In the example I gave, you are asking for a function that does not exist. If it invents a function, because it is designed to never say "you are wrong, that doesn't exist" or "I don't know the answer", that seems to qualify to me as "intent to deceive", because it is designed to invent something rather than give you a negative-sounding answer.
The bullshitter doesn't care about if what they say is true or false or right or wrong. They just put out more bullshit.
Because, as Brad Pilon of intermittent fasting fame repeatedly stresses, "All diets work."*
* Once there is an energy deficit.
From what I know, dieticians don't design exercise plans. If that's true, the LLM has better odds of figuring it out.
I wouldn't have a clue how to verify most things that get thrown around these days. How can I verify climate science? I just have to trust the scientific consensus (and I do). But some people refuse to trust that consensus, and they think that by reading some convincing sounding alternative sources they've verified that the majority view on climate science is wrong.
The same can apply for almost anything. How can I verify dietary studies? Just having the ability to read scientific studies and spot any flaws requires knowledge that only maybe 1 in 10000 people could do, if not worse than that.
People are forthcoming with things they know they don't know. It's the stuff that they don't know that they don't know that get them. And also the things they think they know, but are wrong about. This may come as a shock, but people do make mistakes.
People talk a lot of about false info and hallucinations, which the models do in fact do, but the examples of this have become more and more far flung for SOTA models. It seems that now in order to elicit bad information, you pretty much have to write out a carefully crafted trick question or ask about a topic so on the fringes of knowledge that it basically is only a handful of papers in the training set.
However, asking "I am sensitive to sugar, make me a meal plan for the week targeting 2000cal/day and high protein with minimally processed foods" I would totally trust the output to be on equal footing with a run of the mill registered dietician.
As for the junior developer thing, my company has already forgone paid software solutions in order to use software written by LLMs. We are not a tech company, just old school manufacturing.
LLMs create real value. I save a bunch of time coding with an LLM vs without one. Is it perfect? No, but it does not have to be to still create a lot of value.
Are some people hyping it up too much? Sure, and reality will set in, but it won't blow up. It will rather be like the internet. In the 2000s everyone thought "slap some internet on it and everything will be solved". They overestimated the (short-term) value of the internet. But the internet was still useful.
Can't disagree more (on LLMs. NFTs are of course rubbish). I'm using them with all kinds of coding tasks with good success, and it's getting better every week. Also created a lot of documents using them, describing APIs, architecture, processes and many more.
Lately working on creating an MCP for an internal mid-sized API of a task management suite that manages a couple hundred people. I wasn't sure about the promise of AI handling your own data until starting this project, now I'm pretty sure it will handle most of the personal computing tasks in the future.
It doesn't have to. It can replace having no support at all.
It would be possible to run a helpdesk for a free product. It might suck but it could be great if you are stuck.
Support call centers usually work in layers. Someone to pick up the phone who started 2 days ago and knows nothing. They forward the call to someone who managed to survive for 3 weeks. Eventually you get to talk to someone who knows something but can't make decisions.
It might take 45 minutes before you get to talk to only the first helper. Before you penetrate deep enough to get real support you might lose an hour or two. The LLM can answer instantly and do better than tortured minimum wage employees who know nothing.
There may be large waves of similar questions if someone or something screwed up. The LLM can handle that.
The really exciting stuff will come where the LLM can instantly read your account history and has a good idea what you want to ask before you do. It can answer questions you didn't think to ask.
This is especially great if you've had countless email exchanges with miles of text repeating the same thing over and over. The employee can't read 50 pages just to get up to speed on the issue; even if they had the time, you don't, so you explain for the 5th time that delivery should be to address B not A, on these days between these times, unless they are type FOO orders.
Stuff that would be obvious and easy if they made actual money.
But it is replacing it. There's a rapidly-growing number of large, publicly-traded companies that replaced first-line support with LLMs. When I did my taxes, "talk to a person" was replaced with "talk to a chatbot". Airlines use them, telcos use them, social media platforms use them.
I suspect what you're missing here is that LLMs here aren't replacing some Platonic ideal of CS. Even bad customer support is very expensive. Chatbots are still a lot cheaper than hundreds of outsourced call center people following a rigid script. And frankly, they probably make fewer mistakes.
> and it will blow up like NFTs
We're probably in a valuation bubble, but it's pretty unlikely that the correct price is zero.
Have you somehow managed to avoid the last several decades of human-sourced dieting advice?
It doesn’t wholly replace the need for human support agents but if it can adequately handle a substantial number of tickets that’s enough to reduce headcount.
A huge percentage of problems raised in customer support are solved by otherwise accessible resources that the user hasn’t found. And AI agents are sophisticated enough to actually action on a lot of issues that require action.
The good news is that this means human agents can focus on the actually hard problems when they’re not consumed by as much menial bullshit. The bad news for human agents is that with half the workload we’ll probably hit an equilibrium with a lot fewer people in support.
Google is pretty much useless now as it changed into an ad platform, and I suspect AI will go the same way soon enough.
It has always been easy to imagine how advertising could destroy the integrity of LLM's. I can guarantee that there will be companies unable to resist the temporary cash flows from it. Those models will destroy their reputation in no time.
https://www.washingtonpost.com/technology/2025/04/17/llm-poi...
One major problem is the payment mechanism. The nature of LLMs means you just can't really know or force it to spit out ad garbage in a predictable manner. That'll make it really tricky for an advertiser to want to invest in your LLM advertising (beyond being able to sell the fact that they are an AI ad service).
Another is going to be regulations. How can you be sure to properly highlight "sponsored" content in the middle of an AI hallucination? These LLM companies run a very real risk of running afoul of FTC rules.
You certainly can with middleware on inference.
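For what it's worth, a minimal sketch of that kind of middleware (pick_ad is a hypothetical stand-in for a real ad auction): the model never generates the ad copy itself, so placement is deterministic and the disclosure label is guaranteed.

    def pick_ad(query, inventory):
        # Hypothetical keyword matcher; a real system would run an auction.
        q = query.lower()
        for sponsor, keywords, copy in inventory:
            if any(k in q for k in keywords):
                return sponsor, copy
        return None

    def answer_with_ads(query, model_answer, inventory):
        ad = pick_ad(query, inventory)
        if ad is None:
            return model_answer
        sponsor, copy = ad
        # Deterministic placement and explicit labeling, outside the model.
        return f"{model_answer}\n\n[Sponsored - {sponsor}] {copy}"

That solves the predictability and labeling problems, though not the deeper one of whether users will tolerate it.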
Jevons paradox in action: some pieces of work get lost, but the lower cost of doing work generates more demand overall...
If that’s true, probably for the best that those jobs get replaced. Then again, the value may have been in the personal touch (pay to feel good about your decisions) rather than quality of directions.
For copywriting, analyzing contracts, exploring my business domain, etc etc. Each of those tasks would have required me to consult with an expert a few years ago. Not anymore.
But are those really the same? You're not paying the tax agent to give you the advice per se: even before Gemini, you could do your own research for free. You're really paying the tax agent to provide you advice that you can trust without having to go to the extra steps of doing deep research.
One of the most important bits of information I get from my tax agent is, "is this likely to get me audited if we do it?" It's going to be quite some time before I trust AI to answer that correctly.
But, also, the threshold of things we manage ourselves versus when we look to others is constantly moving as technology advances and things change. We're always making risk tradeoff decisions measuring the probability we get sued or some harm comes to us versus trusting that we can handle some tasks ourselves. For example, most people do not have attorneys review their lease agreements or job offers, unless they have a specific circumstance that warrants they do so.
The line will move, as technology gives people the tools to become better at handling the more mundane things themselves.
In a more general sense: sometimes, but not always, it is easier to verify something than to come up with it in the first place.
These examples aren't wrong but you might be overstating their impact on the economy as a whole.
E.g. the overwhelming majority of people do not pay solely for tax advice, or have a dietician, etc. Corporations already crippled their customer support so there's no remaining damage to be dealt.
Your tax example won't move the needle on people who pay to have their taxes done in their entirety.
That is a great use for it too, rather than replacing artists we have personal advisors who can navigate almost any level of complex bureaucracy instantaneously. My girlfriend hates AI, like rails against it at any opportunity, but after spending a few hours on the DMV website I sat down and fed her questions into Claude and had answers in a few seconds. Instant convert.
Also took a picture of my tire while at the garage and asked it if I really needed new tires or not.
Took a picture of my sprinkler box and had it figure out what was going on.
Potentially all situations where I would’ve paid (or paid more than I already was) a local laborer for that advice. Or at a minimum spent much more time googling for the info.
These will likely be cell-phone-plan level expensive, but the value prop would still be excellent.
It's something that can be empirically measured instead of visually guessed at by a human or magic eight-ball. Using a tool that costs only a few dollars, no less, like the pressure gauge you should already keep in your glovebox.
There is no moat. Most of these AI APIs and products are interchangeable.
You can use a penny and your eyeballs to assess this, and all it costs is $0.01
It blows my mind the degree that people are offloading any critical thinking to AI
Sounds like reddit could also do a good job at this, though nobody said "reddit will replace your jobs". Maybe because not as many people actively use reddit as they use generative AI now, but I cannot imagine any other reason than that.
That’s like buying a wrench and changing your own spark plugs. Wrenches are not putting mechanics out of business.
So…all you needed was a decent search engine, which in the past would have been Google before it was completely enshittified.
Yes.
"...all you need" A good search engine is a big ask. Google at its height was quite good. LLMs are shaping up to be very good search engines
That would be enough, for me to be very pleased with them
I doubt it.
Search already "obsoletes" these fields in the same way AI does. AI isn't really competing against experts here, but against search.
It's also really not clear that AI has an overall advantage over dumb search in this area. AI can provide more focused/tailored results, but it costs more. Keep in mind that AI hasn't been enshittified yet like search. The enshittification is inevitable and will come fast and hard considering the cost of AI. That is, AI responses will be focused and tailored to better monetize you, not better serve you.
The legal profession specifically saw the rise of computers, digitization of cases and records, and powerful search... it's never been easier to "self help" - yet people still hire lawyers.
The only thing I can remotely trust is my own experience. Recently, I decided to have some business cards made, which I haven't done in probably 15 years. A few years ago, I would have either hired someone on Fiverr to design my business card or pay for a premade template. Instead, I told Sora to design me a business card, and it gave me a good design the first time; it even immediately updated it with my Instagram link when I asked it to.
I'm sorry, but I fail to see how AI, as we now know it, doesn't take the wind out of the sails of certain kinds of jobs.
The point is that I would have paid for another human being's time. Why? Because I am not a young man anymore, and have little desire to do everything myself at this point. But now, I don't have to pay for someone's time, and that surplus time doesn't necessarily transfer to something equivalent like magic.
I am not talking about whether I have to pay more or less for anything. My problem is not paying. I want to pay so that I don't have to make something myself or waste time fiddling with a free template.
What I am proposing is that, in the current day, a human being is less likely to be at the other end of the transaction when I want to spend money to avoid sacrificing my time.
Sure, one can say that whomever is working for one of these AI companies benefits, but they would be outliers and AI is effectively homogenizing labor units in that case. Someone with creative talent isn't going to feasibly spin up a competitive AI business the way they could have started their own business selling their services directly.
That's both pompous and bizarre. The "real" economy doesn't end at the walls of corporate offices. Far from it.
Even if every job that exists today were currently automated _people would find other stuff to do_. There is always going to be more work to do that isn't economical for AIs to do for a variety of reasons.
I wouldn't be saving on tax advisors. Moreover, I would hire two different tax advisors, so I could cross check them.
Technically, all you have to do is follow the written instructions. But there are a surprising number of maybes in those instructions. You hit a checkbox that asks whether you qualify for such-and-such deduction, and find yourself downloading yet another document full of conditions for qualification, which aren't always as clear-cut as you'd like. You can end up reading page after page to figure out whether you should check a single box, and that single box may require another series of forms.
My small side income takes me from a one-page return to several pages, and next year I'm probably going to have to pay estimated taxes in advance because that non-taxed income leaves me owing at the end of the year more than some acceptable threshold that could result in fines. All because I make an extra 10% doing some evening freelancing.
Most people's taxes shouldn't be complex, but in practice they're more complex than they should be.
If I can do this, most people can do a simple 2-page 1040EZ.
Your accountant also is probably saving hundreds of dollars in other areas using AI assistance.
Personally I still think you should cross check with a professional.
This fact is so simple and yet here we are having arguments about it. To me people are conflating an economic assessment - whose jobs are going to be impacted and how much - with an aspirational one - which of your acquaintances personally could be replaced by an AI, because that would satisfy a beef.
Here's my own take:
- It is far too early to tell.
- The roll-out of ChatGPT caused a mind-set revolution. People now "get" what is already possible, and that encourages conceiving and pursuing new use cases based on what people have seen.
- I would not recommend anyone train to become a translator, for sure; even before LLMs, people were paid penny amounts per word or line translated, and rates plummeted further due to tools that cache translations in previous versions of documents (SDL TRADOS etc.). The same decline is not to be expected for interpreters.
- Graphic designers that live from logo designs and similar works may suffer fewer requests.
- Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.
- LLMs are a basic technology that will now be embedded into various products, from email clients to word processors to workflow tools and chat clients. This will take 2-3 years, and it may reduce the number of people needed in an office with a secretarial/admin/"analyst" type background after that.
- Industry is already working on the next-gen version of smarter tools for medics and lawyers. This is more of a 3-5 year development, but then again some early adopters started already 2-3 years ago. Once this is rolled out, there will be less demand for assistant-type jobs such as paralegals.
But I already trust my dentist. A new dentist deferring to AI is scary, and obviously will happen.
The mistake on mine was caught when a radiologist checked over the work of the weekend X-ray technician who missed a hairline crack. A second look is always good, and having one look be machine and the other human might be the best combo.
For now I agree. 2-4 years from now it can be 20 ultra strong models each trained somewhat differently that converse on the X-ray and reach a conclusion. I don't think technicians will have much to add to the accuracy.
This is such a broad category that I think it's inaccurate to say that all editors will be automated, regardless of your outlook on LLMs in general. Editing and proofreading are pretty distinct roles; the latter is already easily automated, but the former can take on a number of roles more akin to a second writer who steers the first writer in the correct direction. Developmental editors take an active role in helping creatives flesh out a work of fiction, technical editors perform fact-checking and do rewrites for clarity, etc.
Do you mean Philip Tetlock? He wrote Superforecasting, which might be what you're referring to?
It has been a very, very long time since editors have been proof-reading prose for typos and grammar mistakes, and you don't need LLMs for that. Good editors do a lot more creative work than that, and LLMs are terrible at it.
This is what happened to Google Search. It, like cable news, does kinda plod along because some dwindling fraction of the audience still doesn't "get it", but decline is decline.
"Like all ‘magic’ in Tolkien, [spiritual] power is an expression of the primacy of the Unseen over the Seen and in a sense as a result such spiritual power does not effect or perform but rather reveals: the true, Unseen nature of the world is revealed by the exertion of a supernatural being and that revelation reshapes physical reality (the Seen) which is necessarily less real and less fundamental than the Unseen" [1].
The writing and receiving of resumes has been superfluous for decades. Generative AI is just revealing that truth.
[1] https://acoup.blog/2025/04/25/collections-how-gandalf-proved...
First, LLMs are a distillation of our cultural knowledge. As such they can only reveal our knowledge to us.
Second, they are limited even more so by the user's knowledge. I found that you can barely escape your "zone of proximal development" when interacting with an LLM.
(There's even something to be said about prompt engineering in the context of what the article is talking about: It is 'dark magic' and 'craft-magic' - some of the full potential power of the LLM is made available to the user by binding some selected fraction of that power locally through a conjuration of sorts. And that fraction is a product of the craftsmanship of the person who produced the prompt).
In this sense, I have rarely seen AI have negative impacts. Insofar as an LLM can generate a dozen lines of code, it forces developers to engage in less "performative copy-paste of stackoverflow/code-docs/examples/etc." and engage the mind in what those lines should be. Even if this engagement of the mind is a prompt.
In other words, there is a lot more spam in the world. Efficiencies in hiring that implicitly existed until today may no longer exist because anyone and their mother can generate a professional-looking cover letter or personal web page or w/e.
Presenting soft skills is entirely random, anyway, so the only marker you can have on a cv is "the person is able to write whatever we deem well-written [$LANGUAGE] for our profession and knows exactly which meaningless phrases to include that we want to see".
So I guess I was a bit strong on the low information content, but you better have a very, very strong resume if you don't know the unspoken rules of phrasing, formatting and bragging that are required to get through to an actual interview. For those of us stuck in the masses, this means we get better results by adding information that we basically only get by already being part of the in-group, not by any technical or even interpersonal expertise.
Edit: If I constrain my argument to CVs only, I think my statement holds: They test an ability to send in acceptably written text, and apart from that, literally only in-group markers.
Where input' is a distorted version of input. This is the new reality.
We should start to be less impressed by volume of text and instead focus on density of information.
Always was.
This is completely untrue. Google Search still works, wonderfully. It works even better than other attempts at search by the same Google. For example, there are many videos that you will NEVER find on Youtube search that come up as the first results on Google Search. Same for maps: it's much easier to find businesses on Google Search than on Maps. And it's even more true for non-google websites; searching Stack Overflow questions on SO itself is an exercise in frustration. Etc.
Well their Search revenue actually went up last quarter, as in all quarters. Overall traffic might be a bit down (they don't release that data so we can't be sure) but not revenue. While I do take tons of queries to LLMs now, the kind of queries Google actually makes a lot of money on (searching flights, restaurants etc) I don't go to an LLM for - either because of habit or because of fear these things are still hallucinating. If Search were starting to die I'd expect to see it in the latest quarterly earnings, but it isn't happening.
>Google’s core search and advertising business grew almost 10 per cent to $50.7bn in the quarter, surpassing estimates for between 8 per cent and 9 per cent.[0]
The "Google's search is garbage" paradigm is starting to get outdated, and users are returning to their search product. Their results, particularly the Gemini overview box, are (usually) useful at the moment. Their key differentiator over generative chatbots is that they have reliable & sourced results instantly in their overview. Just concise information about the thing you searched for, instantly, with links to sources.
[0] https://www.ft.com/content/168e9ba3-e2ff-4c63-97a3-8d7c78802...
Quite the opposite. It's never been more true. I'm not saying using LLMs for search is better, but as it stands right now, SEO spammers have beat Google, since whatever you search for, the majority of results are AI slop.
Their increased revenue probably comes down to the fact that they no longer show any search results in the first screenful at all for mobile and they've worked hard to make ads indistinguishable from real results at a quick glance for the average user. And it's not like there exists a better alternative. Search in general sucks due to SEO.
If anything my frustration with google search comes from it being much harder to find niche technical information, because it seems google has turned the knobs hard towards "Treat search queries like they are coming from the average user, so show them what they are probably looking for over what they are actually looking for."
Where is this slop you speak of?
It's actually sadder than that. Google appear to have realised that they make more money if they serve up ad infested scrapes of Stack Overflow rather than the original site. (And they're right, at least in the short term).
Not a sustainable strategy in the long term though.
Not because the LLM is better, but because the search is close to unusable.
Resume filtering by AI can work well on the first line (if implemented well). However, once we get to the real interview rounds and I see the CV is full of AI slop, it immediately suggests the candidate will have a loose attitude to checking the work generated by LLMs. This is a problem already.
I think the plastic surgery users disagree here: it seems like visible plastic surgery has become a look, a status symbol.
The general tone of this study seems to be "It's 1995, and this thing called the Internet has not made TV obsolete"; same for the Acemoglu piece linked elsewhere in the thread. Well, no, it doesn't work like that: it first comes for your Blockbuster, your local shops and newspaper and so on, and transforms those middle class jobs vulnerable to automation into minimum wages in some Amazon warehouse. Similarly, AI won't come for lawyers and programmers first, even if some fear it.
The overarching theme is that the benefits of automation flow to those who have the bleeding-edge technological capital. Historically, labor has managed to close the gap, especially through public education; it remains to be seen if this process can continue, since eventually we're bound to hit the "hardware" limits of our wetware, whereas automation continues to accelerate.
So at some point, if the economic paradigm is not changed, human capital loses and the owners of the technological capital transition into feudal lords.
A similar thing goes for delivery: moving a single pallet to a store, or replacing carpets, or whatever. There is a lot of complexity if you do not offload it to the receiver.
The more regular the environment, the easier it is to automate. Shelving in a store might, to my mind, be simpler than all the environments vehicles need to operate in.
And I think we know who will be first to go: average or below-average "creative" professionals. Copywriters, artists and so on.
What you are truly seeking is high level specifications for automation systems, which is a flawed concept to the degree that the particulars of a system may require knowledgeable decisions made on a lower level.
However, CAD/CAM, and infrastructure as code are true amplifiers of human power.
LLMs destroy the notion of direct coupling, or of having any layered specifications or actual levels involved at all: you try to prompt a machine trained to ascertain important datapoints for a given model itself, when the correct model is built up from human specifications and intention at every level.
Wrongful roads lead to erratic destinations when it turns out that you actually have some intentions you wish to implement IRL.
But that doesn't mean the article they wrote in each of those scenarios is not useful and economically valuable enough for them to maintain a job.
If you want to reach the actual destination because conditions changed (there is a wreck in front of you) you need a system to identify changes that occur in a chaotic world and can pick from an undefined/unbounded list of actions.
(Racist memes and furry pornography doesn't count.)
The sandwich shop next to my work has a music playlist which is 100% ai generated repetitive slop.
Do you think they'll be paying graphic designers, musicians etc. for now on when something certainly shittier than what a good artist does, but also much better than what a poor one is able to achieve, can be used in five minutes for free?
People generating these things weren't ever going to be customers of those skillsets. Your examples are small business owners basically fucking around because they can, because it's free.
Most barber shops just play the radio, or "spring" for satellite radio, for example. AI generated music might actively lose them customers.
There's also going to be a shrinkage in the workforce caused by demographics (not enough kids to replace existing workers).
At the same time education costs have been artificially skyrocketed.
Personally the only scenario I see mass unemployment happening is under a "Russia-in-the-90s" style collapse caused by an industrial rugpull (supply chains being cut off way before we are capable of domestically substituting them) and/or the continuation of policies designed to make wealth inequality even worse.
There is brewing conflict across continents. India and Pakistan, Red sea region, South China sea. The list goes on and on. It's time to accept it. The world has moved on.
the individual phenomena you describe are indeed detritus of this failed reaction to an increasing awareness of all humans of our common conditions under disparate nation states.
nationalism is broken by the realization that everyone everywhere is paying roughly 1/4 to 1/3 of their income in taxes, however what you receive for that taxation varies. your nation state should have to compete with other nation states to retain you.
the nativist movement is wrongful in the usa for the reason that none of the folks crying about foreigners is actually native american,
but it's globally in error for not presenting the truth: humans are all your relatives, and they are assets, not liabilities: attracting immigration is a good thing, but hey feel free to recycle tired murdoch media talking points that have made us nothing but trouble for 40 years.
https://www.dhl.com/global-en/microsites/core/global-connect...
Source for counter argument?
We have had thousands of years of globalising. The trend has always been towards a more connected world. I strongly suspect the current Trump movement (and to an extent brexit depending on which brexit version you chose to listen to) will be blips in that continued trend. That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.
It happens in cycles. Globalization has followed deglobalization before and vice versa. It's never been one straight line upward.
>That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.
It'll break down into blocs, not 200 individual countries.
Ask Estonia why they buy overpriced LNG from America and Qatar rather than cheap gas from their next door neighbor.
If you think the inability to source high end microchips from anywhere apart from Taiwan is going to prevent a future conflict (the Milton Friedman(tm) golden arches theory) then I'm afraid I've got bad news.
BRICs have been trying to substitute for some of them and have made some nonzero progress, but they're still far, far away from stuff like a reserve currency.
When a sector collapses and become irrelevant, all its workers no longer need to be employed. Some will no longer have any useful qualifications and won't be able to find another job. They will have to go back to training and find a different activity.
It's fine if it's an isolated event. Much worse when the event is repeated in many sectors almost simultaneously.
Why? When we've seen a sector collapse, the new jobs that rush in to fill the void are new, never seen before, and thus don't have training. You just jump in and figure things out along the way like everyone else.
The problem, though, is that people usually seek out jobs that they like. When that collapses they are left reeling and aren't apt to want to embrace something new. That mental hurdle is hard to overcome.
That means either:
1. The capitalists failed to redeploy capital after the collapse.
2. We entered into some kind of post-capitalism future.
To explore further, which one are you imagining?
Many, many industries and jobs transformed or were relegated to much smaller niches.
Overall it was great.
And even if we solve this problem of hallucination, the AI agents still need a platform to do search.
If I was Google I’d simply cut off public api access to the search engine.
Google search is fraught with it's own list of problems and crappy results. Acting like it's infallible is certainly an interesting position.
>If I was Google I’d simply cut off public api access to the search engine.
The convicted monopolist Google? Yea, that will go very well for them.
OpenAI o3
Gemini 2.5 Pro
Grok 3
Anything below that is obsolete or dumbed down to reduce cost
I doubt this feature is actually broken and returning hallucinated links
https://ai.google.dev/gemini-api/docs/grounding
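For reference, grounded search per the docs linked above looks roughly like this with the google-genai Python SDK (a sketch; the model name and response fields are as documented at the time of writing and may change):

    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    resp = client.models.generate_content(
        model="gemini-2.5-pro",
        contents="Which official calculator does the tax office provide for this?",
        config=types.GenerateContentConfig(
            tools=[types.Tool(google_search=types.GoogleSearch())],
        ),
    )
    print(resp.text)
    # Grounded answers carry their source URLs, which is what makes the
    # links verifiable rather than hallucinated:
    meta = resp.candidates[0].grounding_metadata
    for chunk in meta.grounding_chunks or []:
        print(chunk.web.uri, chunk.web.title)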
What people call "AI slop" existed before AI, and AI where I control the prompt is getting to be better than what you will find on those sorts of websites.
Note also that ad blockers are much less prevalent on mobile.
As an example, many companies have recently shifted their support to "AI first" models. As a result, even if the team or certain team members haven't been fired, the general trend of hiring for support is pretty much down (anecdotal).
I agree that some automation helps the humans do their jobs better, but this isn't one of those. When you're looking for support, something has clearly gone wrong. Speaking or typing to an AI which responds with random unrelated articles or "sorry I didn't quite get that" is just evading responsibility in the name of "progress", "development", "modernization", "futuristic", "technology", <insert term of choice>, etc.
Both of those can be true, because companies are placing bets that AI will replace a lot of human work (by layoffs and reduced hiring), while also using it in the short term as a reason to cut short term costs.
Software development jobs there face a bigger threat: outsourcing to cheaper locations.
The same goes for teachers: it is hard to replace a person supervising kids with a chatbot.
Both your experience and what the article (research) says can be valid at the same time. That’s how statistics works.
The .com boom and bust is an apt reference point. The technological shift WAS real, and the value to be delivered ultimately WAS delivered…but not in 1999/2000.
It may be we see a massive crash in valuations but AI still ends up the dominant driver of software value over the next 5-10 years.
This reminds me of some early stage startup pitches. During a pitch, I might ask: "what do you think about competitor XYZ?" And sometimes the answer is "we don't think highly of them, we have never even seen them in a single deal we've competed for!" But that's almost a statistical tautology: if you both have .001% market share and you're doubling or tripling annually, the chance that you're going to compete for the same customers is tiny. That doesn't mean you can just dismiss that competitor. Same thing with the article above dismissing AI as a threat to jobs so quickly.
To give a concrete example of a job disappearing: I run a small deep tech VC fund. When I raised the fund in early '24, my plan was to hire one investor and one researcher. I hired a great investor, but given all of the AI progress I'm now 80% sure I won't hire a researcher. ChatGPT is good enough for research. I might end up adding a different role in the near future, but this is a research job that likely disappeared because of AI.
1) AI/automation will replace jobs. This is 100% certain in some cases. Look at the industrial revolution.
2) AI/automation will increase unemployment. This has never happened and it's doubtful it will ever happen.
The reason is that humans always adapt and find ways to be helpful that automation can't do. That is why after 250 years after the industrial revolution started, we still have single-digit unemployment.
> The reason is that humans always adapt and find ways to be helpful that automation can't do. That is why after 250 years after the industrial revolution started, we still have single-digit unemployment.
Horses, for thousands of years, were very useful to humans. Even with the various technological advances through that time, their "unemployment" was very low. Until the invention and perfection of internal combustion engines.
To say that it is doubtful that it will ever happen to us is basically saying that human cognitive and/or physical capabilities are without bounds and that there is some reason that with our unbounded cognitive capabilities we will never be able to create a machine that could replicate those capabilities. That is a ridiculous claim.
Maybe instead look at the US in 2025. EU labor regulations make it much harder to fire employees. And 2023 was mainly a hype year for GenAI. Actual Enterprise adoption (not free vendor pilots) started taking off in the latter half of 2024.
That said, a lot of CEOs seem to have taken the "lay off all the employees first, then figure out how to have AI (or low cost offshore labor) do the work second" approach.
For example, the mass layoffs of federal employees.
Case in point: Klarna.
2024: "Klarna is All in on AI, Plans to Slash Workforce in Half" https://www.cxtoday.com/crm/klarna-is-all-in-on-ai-plans-to-...
2025: "Klarna CEO “Tremendously Embarrassed” by Salesforce Fallout and Doubts AI Can Replace It" https://www.salesforceben.com/klarna-ceo-tremendously-embarr...
Apparently not, since the sort of specific work which one used to find for this has all but vanished --- every AI-generated image one sees represents an instance where someone who might have contracted for an image did not (ditto for stock images, but that's a different conversation).
This is not at all true. Some percentage of AI generated images might have become a contract, but that percentage is vanishingly small.
Most AI generated images you see out there are just shared casually between friends. Another sizable chunk are useless filler in a casual blog post and the author would otherwise have gone without, used public domain images, or illegally copied an image.
A very very small percentage of them are used in a specific subset of SEO posts whose authors actually might have cared enough to get a professional illustrator a few years ago but don't care enough to avoid AI artifacts today. That sliver probably represents most of the work that used to exist for a freelance illustrator, but it's a vanishingly small percentage of AI generated images.
I prefer to get my illegally copied images from only the most humanely trained LLM instead of illegally copying them myself like some neanderthal or, heaven forbid, asking a human to make something. Such a thought is revolting; humans breathe so loud and sweat so much and are so icky. Hold on - my wife just texted me. "Hey chat gipity, what is my wife asking about now?" /s
Most of it wasn't bespoke assets created by humans but stock art picked by, if lucky, a professional photo editor, but more often the author themselves.
It feels very short-sighted from the company side because I nope'd right out of there. They didn't make me feel any trust for the company at all.
Instead of uploading your video ad you already created, you'll just enter a description or two and the AI will auto-generate the video ads in thousands of iterations to target every demographic.
Google is going to run away with this with their ecosystem - OpenAI et al. can't compete with this sort of thing.
People will think they have an eye for AI-generated content, and miss all the AI that doesn't register. If anything it would benefit the whole industry to keep some stuff looking "AI" so people build a false model of what "AI" looks like.
This is like the ChatGPT image gen of last year, which purposely put a distinct style on generated images (that shiny plasticy look). Then everyone had an "eye for AI" after seeing all those. But in the meantime, purpose made image generators without the injected prompts were creating indistinguishable images.
It is almost certain that every single person here has laid eyes on an image already, probably in an ad, that didn't set off any triggers.
1. If the goal is achieved, which is highly unlikely, then we get very very close to AGI and all bets are off.
2. If the goal is not achieved and we stay in this uncanny valley territory (not at the bottom of it but not being able to climb out either), then eventually in a few years' time we should see a return to many fragmented almost indie-like platforms offering bespoke human-made content. The only way to hope to achieve the acceptable quality will be to favor it instead of scale as the content will have to be somehow verified by actual human beings.
Question on two fronts:
1. Why do you think, considering the current rate of progress think it is very unlikely that LLM output becomes indistinguishable from expert creatives? Especially considering a lot of tells people claim to see are easily alleviated by prompting.
2. Why do you think a model whose output reaches that goal would rise in any way to what we’d consider AGI?
Personally, I feel the opposite. The output is likely to reach that level in the coming years, yet AGI is still far away from being reached once that has happened.
1. The progress is there but it's been slowing down yet the downsides have largely remained.
1.1. With LLMs, while the models can keep track of longer conversations better thanks to the larger context window (mostly achieved via hardware, not software), the hallucinations are as bad as ever; I use them eagerly yet I haven't felt any significant improvement in the outputs in a long time. Anecdotally, a couple days ago I decided to try my luck and vibe-code a primitive messaging library, and it led me down the wrong path even though I was challenging it along the way; it was so convincing that I wouldn't have noticed had my colleague not told me there was a better way. Granted, the colleague is extremely smart, but the LLM should have told me the right approach because I was specifically questioning it.
1.2. The image generation has also barely improved. The biggest improvement during the past year has been with 4o, which can be largely attributed to move from diffusion to autoregression but it's far from perfect and still suffers from hallucinations even more than LLMs.
1.3. I don't think video models are even worth discussing because you just can't get a decent video if you can't get a decent still in the first place.
2. That's speculation, of course. Let me explain my thought process. A truly expert level AI should be able to avoid mistakes and create novel writings or research just by the human asking it to do it. In order to validate the research, it can also invent the experiments that need to be done by humans. But if it can do all this, then it could/should find the way to build a better AI, which after an iteration or two should lead to AGI. So, it's basically a genius that, upon human request, can break itself out of the confines.
It feels to me that the SOTA video models today are pretty damn good already, let alone in another 12 months when SOTA will no doubt have moved on significantly.
And on the other end we'll have "AI" ad blockers, hopefully. They can watch each other.
I'd still hire an entry level graphic designer. I would just expect them to use these tools and 2x-5x their output. That's the only change I'm sensing.
That said, I don't think entry level illustration jobs can stick around if software can do the job better. Just as we don't have many calculators anymore, technological replacement is bound to occur in society, AI or not.
Well at least that's the potential.
"Equip yourself with skills that other people are willing to pay for." –Thomas Sowell
For me, the most interesting takeaway. It's easy to think about a task, break it down into parts, some of which can be automated, and count the savings. But it's more difficult to take into account any secondary consequences from the automation. Sometimes you save nothing because the bottleneck was already something else. Sometimes I guess you end up causing more work down the line by saving a bit of time at an earlier stage.
This can make automation a bit of a tragedy of the commons situation: It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
In this case, the total cost would've gone up, and thus eventually the stakeholder (i.e., the person who pays) is going to stop wanting to pay when the "old" way was cheaper/faster/better.
> It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
Not really, as long as the precondition I mentioned above (the total cost dropping) holds.
But there's also adversarial situations. Hiring would be one example: Companies use automated CV triaging tools that make it harder to get through to a human, and candidates auto generate CVs and cover letters and even auto apply to increase their chance to get to a human. Everybody would probably be better off if neither side attempted to automate. Yet for the individuals involved, it saves them time, so they do it.
> AI chatbots have had no significant impact on earnings or recorded hours in any occupation
But generative AI is not just AI chatbots. There are models that generate sound/music, models that generate images, etc.
Another thing is that the research only looked at Denmark, a nation with a fairly healthy attitude towards work-life balance, not a nation that prides itself on people working their asses off.
And the research also doesn't cover the effect of AI-generated products: if music or a painting can be created by an AI within a minute from a prompt typed by a five-year-old, then your expected value for "art work" will decrease, and you won't pay the same price when buying from a human artist.
> Duolingo will replace contract workers with AI. The company is going to be ‘AI-first,’ says its CEO.
https://www.theverge.com/news/657594/duolingo-ai-first-repla...
-
And within that article:
> von Ahn’s email follows a similar memo Shopify CEO Tobi Lütke sent to employees and recently shared online. In that memo, Lütke said that before teams asked for more headcount or resources, they needed to show “why they cannot get what they want done using AI.”
It sounds like they didn't ask those who got laid off.
At no point did that company choose to pivot to GenAI to cut costs and reduce headcount. It's more reactive than that.
It is like expecting cars to replace horses before anyone starts investing in the road network and getting international petroleum supply chains set up - large capital investment is an understatement when talking about how long it takes to bring in transformative tech and bed it in optimally. Nonetheless, time passed and workhorses are rare beasts.
I am 100% convinced that AI will destroy, and already has destroyed, lots of jobs. We will likely encounter world-order-disrupting changes in the coming decades as computers get another 1,000 times faster and more powerful over the next 10 years.
The jobs described might also be lost (made obsolete or replaced) in the longer term if AI gets better than the people doing them. For example, just now another article was mentioned on HN: "Gen Z grads say their college degrees were a waste of time and money as AI infiltrates the workplace", which would make teachers obsolete.
I've seen a whole lot of gen AI deflecting customer questions that would previously have become tickets. That is reduced ticket volume that would have been handled by a junior support engineer.
We are a couple of years away from the death of the level 1 support engineer. I can't even imagine what's going to happen to the level 0 IT support.
And this trend isn't new; a lot of investment into e.g. customer support aims to need fewer support staff, for example through better self-service websites, chatbots / conversational interfaces / phone menus (these go back decades), or by reducing expenses through outsourcing call center work to low-wage countries. AI is another iteration, but gut feeling says these systems will need a lot of training/priming/coaching to not end up doing something other than their intended task (like Meta's AIs ending up having erotic chats with minors).
One of my projects was to replace the "contact" page of a power company with a wizard - basically, get the customers to check for known outages first, then check their own fuse boxes etc, before calling customer support.
1: https://xkcd.com/806/ - from an era when the worst that could happen was having to speak with incompetent, but still human, tech support.
I got myself into a loop where no matter what I did, there was no human in the loop.
Even the "threaten to cancel" trick didn't work, still just chatbots / automated services.
Thankfully more and more of the UK is getting FTTH. Sadly for me I accidentally misunderstood the coverage checker when I last moved house.
You're acting like it's not the companies that are monopolies that implement these systems first.
For all those 250 years most people have predicted that the next new technology will make the replaced workforce permanently unemployed, despite the track record of that prediction. We constantly predict poverty and get prosperity.
I kinda get why: The job loss is concrete reality while the newly created jobs are speculation.
Still, I'm confident AI will continue the extremely strong trend.
https://economics.mit.edu/news/daron-acemoglu-what-do-we-kno...
The overall labor force participation rate is falling. I expect this trend to continue as AI makes the economy more and more dynamic and sets a higher and higher bar for participation.
Overall GDP is rising while the labor participation rate is falling, which clearly points to more productivity from fewer participants. At this point one of the main factors is clearly technological advancement, and within that, I believe if you surveyed CEOs and asked what technological change has allowed them to get more done with fewer people, the resounding consensus would be AI.
I was able to pre-process the agreement, clearly understand most of the major issues, and come up with a proposed set of redlines all relatively easily. I then waited for his redlines and then responded asking questions about a handful of things he had missed.
I value a lawyer being willing to take responsibility for their edits, and he also has a lot of domain specific transactional knowledge that no LLM will have, but I easily saved 10 hours of time so far on this document.
It doesn't work: even for the tiny slice of human work that is so well defined and easily assessed that it is sent out to freelancers on sites like Fiverr, AI mostly can't do it. We've had years to try this now, the lack of any compelling AI work is proof that it can't be done with current technology.
You can't build on top of it: unlike foundational technologies like the internet, AI can only be used to build one product, a chatbot. The output of an AI is natural language and it's not reliable. How are you going to meaningfully process that output? The only computer system that can process natural language is an AI, so all you can do is feed one AI into another. And how do you assess accuracy? Again, your only tool is an AI, so your only option is to ask AI 2 if AI 1 is hallucinating, and AI 2 will happily hallucinate its own answer. It's like The Cat in the Hat Comes Back, Cat E trying to clean up the mess Cat D made trying to clean up the mess Cat C made and so on.
And it won't get any better. LLMs can't meaningfully assess their training data, they are statistical constructions. We've already squeezed about all we can from the training corpora we have, more GPUs and parameters won't make a meaningful difference. We've succeeded at creating a near-perfect statistical model of wikipedia and reddit and so on, it's just not very useful even if it is endlessly amusing for some people.
Can you pinpoint the date which LLMs stagnated?
More broadly, it appears to me that LLMs have improved up to and including this year.
If you consider LLMs to not have improved in the last year, I can see your point. However, then one must consider ChatGPT 4.5, Claude 3.5, Deepseek, and Gemini 2.5 to not be improvements.
Whatever the case, there are open platforms that give users a chance to compare two anonymous LLMs and rank the models as a result [1].
What I observe when I look for these rankings is that none of the top ranked models come from before your stagnation cut off date of September 2024 [2].
[1] https://arxiv.org/abs/2403.04132
[2] https://lmarena.ai/
This is the wrong question.
The question should be to hiring managers: Do you expect LLM based tools to increase or decrease your projected hiring of full time employees?
LLM workflows are already *displacing* entry-level labor because people are reaching for copilot/windsurf/CGPT instead of hiring a contract developer, researcher, BD person. I’m watching this happen across management in US startups.
It’s displacing job growth in entry-level positions, primarily in writing copy, admin tasks, and research.
You’re not going to find it in statistics immediately because it’s not a 1:1 replacement.
Much like the 1971 labor-productivity decoupling that everyone scratched their heads over (answer: labor was outsourced and capital kept all the value gains), we will see the labor-productivity graph approach another asymptote, based on displacement rather than replacement.
I have a 185-year-old treatise on wood engraving. At the time, reproducing any image required that it be engraved in wood or metal for the printer; the best wood engravers were not mere reproducers, as they used some artistry when reducing the image to black and white to preserve the impression of continuous tones. (And some, of course, were also original artists in their own right.) The wood engraving profession was destroyed by the invention of photo-etching. (There was a weird interval before the invention of photo-etching in which cameras existed but photos still had to be engraved manually for printing.)
Maybe all the wood engravers found employment, although I doubt it. But at this speed, there will be a lot of people who won't be able to retrain while employed and will either have to use up their savings to do so or take lower-paid jobs.
This is how engraving went too. It wasn't overnight. The tools were not distributed evenly and it was a good while before amateurs could produce anything like what the earlier professionals did.
Buying a microwave and pizza rolls doesn't make you a chef. Maybe in 100 years the tooling will make you as good as the chefs of our time, but by then they'll all be doing even better work, and there are people who will pay for higher quality no matter how high the bar is raised for baseline quality, so eliminating all work in a profession is rare.
As a father, my forward-thinking vision for my kids is that creativity will rule the day. The most successful will be those with the best ideas and most inspiring vision.
We're coming up in 3 years of ChatGPT and well over a year since I started seeing the proliferation of these 10X claims, and yet LLM users seem to be bearing none of the fruit one might expect from a 10X increase in productivity.
I'm beginning to think that this 10X thing is overstated.
This has never been the truth of the world, and I doubt AI will make it come to fruition. The most successful people are by and large those with powerful connections, and/or access to capital. There are millions of smart, inspired people alive right now who will never rise above the middle class. Meanwhile kids born in select zip codes will continue to skate by unburdened by the same economic turmoil most people face.
Second, in theory, future generations of AI tools will be able to review previous generations' output and improve upon the code. If they need to, anyway.
But yeah, tech debt isn't unique to AIs, and I haven't seen anything conclusive showing that AIs generate more tech debt than regular people - but please share if you've got sources to the contrary.
(Disclaimer: I'm very skeptical about using AI to generate code myself, but I will admit to using it for boring tasks like unit test outlines.)
Is that what's going to happen? These are still LLMs. There's nothing guaranteeing that future generations' changes would be better rather than flat-out regressions. Humans can't even agree on what good code looks like, as it's very subjective and heavily dependent on context and the skills of the team.
Likely, you ask gpt-6 to improve your code and it just makes up piddly architecture changes that don't fundamentally improve anything.
It'd still suck to lose your job / vocation though, and some of those won't be able to find a new job.
When the car was invented, entire industries tied to horses collapsed. But those that evolved, leveled up: Blacksmiths became auto mechanics and metalworkers, etc.
As a creatively minded person with entrepreneurial instincts, I’ll admit: my predictions are a bit self-serving. But I believe it anyway—the future of work is entrepreneurial. It’s creative.
How is this the conclusion you've come to when the sectors impacted most heavily by AI thus far have been graphic design, videography, photography, and creative writing?
There already isn't enough meaningful work for everyone. We see people with the "right training" failing to find a job. AI is already making things worse by eliminating meaningful jobs — art, writing, music production are no longer viable career paths.
I'm worried the shock will not be abrupt enough to encourage a proper rethink.
the rest is fugazi
We will have to get to 100% test coverage and document everything and add more bells and whistles to UI etc. The day to day activity may change but there will always be developers.
Sometimes that decrease in quality is matched by an increase in reach/access, and so the benefits can outweigh the costs. Think about language translation in web browsers and even smart spectacles, for example. Language translation has been around forever but was generally limited to popular books or small-scale proprietary content, because it was expensive to have multilingual humans do that work.
Now even my near-zero readership blog can be translated from English to Portuguese (or most other widely used languages) for a reader in Brazil with near-zero cost/effort for that user. The quality isn't as good as human translation, often losing nuance and style and sometimes even with blatant inaccuracies, but the increased access offered by language translation software makes the lower standard acceptable for lots of use cases.
I wouldn't depend on machine translation for critical financial, healthcare, or legal use cases, though I might start there to get the gist, but for my day-to-day reading on the web, it's pretty amazing.
Software at scale is different than individuals engaging in leisure activities. A loss of nuance and occasional catastrophic failures in a piece of software with hundreds of millions or billions of users could have devastating impacts.
As with other technologies, the jobs it removes are not normally in the country that introduces it; rather, they never come into existence elsewhere.
For example, the automated looms the Luddites were protesting didn't result in significant job losses in the UK. But how much clothing manufacturing has been curtailed in Africa because of them, and because of similar innovations since, which have led to cheap mass-produced clothes making local production uneconomic?
As this report suggests, Denmark and the West will probably make up any losses elsewhere and be largely unaffected.
However, places like India, Vietnam with large industries based on call centres and outsourced development servicing the West are likely to be more vulnerable.
In other words, this more likely answers the question "If customer support agents all use ChatGPT or some in-house equivalent, does the company need fewer customer support agents?" than it answers the question "If we deploy an AI agent for customers to interact with, can it reduce the volume of inquiries that make it to our customer service team and, thus, require fewer agents?"
In the future, we will do a lot more.
In other terms: There will be a lot more work. So even if robots do 80% of it, if we do 10x more - the amount of work we need humans to do will double.
We will write more software, build more houses, build more cars, planes and everything down the supply chain to make these things.
When you look at planet Earth, it is basically empty, yet rent in big cities is high. But nobody needs to sleep in a big city. We just do so because getting in and out of one is cumbersome and building houses outside the city is expensive.
When robots build those houses and drive us into town in the morning (while we work in the car), that will change. I have done a few calculations on how much more mobility we could achieve with the existing road infrastructure if we used electric autonomous buses, and it is staggering.
Another way to look at it: Currently, most matter of planet earth has not been transformed to infrastructure used by humans. As work becomes cheaper, more and more of it will. There is almost infinitely much to do.
That said, the fact that I can't find an open-source LLM front-end that will accept a folder full of images, run a prompt on each sequentially, and then return the results in aggregate is incredibly frustrating.
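For what it's worth, that workflow is only a few dozen lines against a vision-capable API. Below is a minimal sketch assuming the OpenAI Python client; the model name, prompt, and folder path are placeholders:

    # Run one prompt over every image in a folder, then print the results
    # in aggregate. Assumes `pip install openai` and OPENAI_API_KEY set in
    # the environment; model, prompt, and folder are placeholders.
    import base64
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()
    PROMPT = "Describe this image in one sentence."

    def ask(path: Path) -> str:
        # Encode the image as a base64 data URL and send it with the prompt.
        b64 = base64.b64encode(path.read_bytes()).decode()
        resp = client.chat.completions.create(
            model="gpt-4o",  # any vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": PROMPT},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content

    # Aggregate: one answer per file, keyed by filename.
    results = {p.name: ask(p) for p in sorted(Path("images").glob("*.png"))}
    for name, answer in results.items():
        print(f"{name}: {answer}")

A real front-end would add rate limiting and error handling, but the run-over-a-folder-and-aggregate part is the easy bit.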
I think we are at a crossroads as to what this will result in, however. In one case, the benefits will accrue at the top, with corporations earning greater profits while employing fewer people, leaving a large part of the population without jobs.
In the second case, we manage to capture these benefits, and confer them not just on the corporations but also the public good. People could work less, leaving more time for community enhancing activities. There are also many areas where society is currently underserved which could benefit from freed up workforce, such as schooling, elderly care, house building and maintenance etc etc.
I hope we can work toward the latter rather than the former.
It will for sure! Even today the impact is colossal.
As an example, people used to read technical documentation; now they ask LLMs, which replaces serving a simple static file with 50K matrix multiplications.
For sure, we are doing our best to eradicate the conditions that make Earth habitable. However, I suggest that the first needed change is for computer-screen humans to realize that other life forms exist. This requires stepping outside and questioning human hubris, so it might be a big leap, but I am fairly confident you will discover that absolutely none of our planet is empty.
Costs of buses are mostly the driver. Which will go away. The rest is mostly building and maintaining them. Which will be done by robots. The rest is energy. The sun sends more energy to earth in an hour than humans use in a year.
And the use of solar energy is entirely unrelated to doubling the living area. That can, and should, be done anyway.
Which of the few remaining wild creatures will be displaced?
https://www.worldwildlife.org/press-releases/catastrophic-73...
Anecdotal situation - I use ChatGPT daily to rewrite sentences in the client reports I write. I would have traditionally had a marketing person review these and rewrite them, but now AI does it.
Be wary of people trying to deflect the blame for these issues away from the managerial class.
Either mathematics sucks or economists suck. Real hard choice.
Ever since the explosion in popularity of the internet in the 2000s, anything journalism-related has been in terminal decline. The arrival of smartphones accelerated this process.
I know it’s replaced marketing content writers in startups. I know it has augmented development in startups and reduced hiring needs.
The effects as it gains capability will be mass unemployment.
even customer service bots are just nicer front ends for knowledge bases.
So I find this result improbable, at best, given that I personally know several people who had to scramble to find new ways of earning money when their opportunities dried up with very little warning.
>Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days
https://ai-2027.com/
I'm someone who tries to avoid AI tools. But this paper is literally basing its whole assessment on two things: wages and hours. That makes the assertion disingenuous.
Let's assume that I work 8 hours per day. If I am able to automate 1 hour of my day with AI, does that mean I get to go home 1 hour early? No. Does that mean I get an extra hour of pay? No.
So the assertion that there has been no economic impact assumes that the AI is a separate agent that would normally be paid in wages for time. That is not the case.
The AI is an augmentation of an existing human agent. It has the potential to increase the efficiency of a human agent by n%. So we need to measure the impact it has on effectiveness and efficiency. It will never offset wages or hours; it will just increase the productivity for a given wage or number of hours.
Demand for software has high elasticity
Imagine if a tool made content writers 10x as productive. You might hire more, not less, because they are now better value! You might eventually realise you spent too much, but this will come later.
AFAIK no company starts a shiny new initiative by firing; they start by hiring, then cut back once they have their systems in place or hit a ceiling. Even Amazon runs projects fat and then makes them lean.
There's also pent up demand.
You never expect a new labour-saving device to cost jobs while the project managers are in the empire-building phase.
If each of my developers is 30% more productive, that means we can ship 30% more functionality, which means more budget to hire more developers. If you think you’ll just pocket that surplus, you have another thing coming.
Truth is, companies that don’t need layoffs are pushing employees to use AI to supercharge their output.
You don’t grow a business by just cutting costs, you need to increase revenue. And increasing revenue means more work, which means it’s better for existing employees to put out more with AI.
So, as of yet, according to these researchers, the main effect is that of a data pump: certain corporations get deep insight into people's and other corporations' inner lives.
I'm not saying that I think LLMs are useless, far from it, I use them when I think it's a good fit for the research I'm doing, the code I need to generate, etc., but the way it's being pushed from a marketing perspective tells me that companies making these tools need people to use them to create a data moat.
Extremely annoying to be getting these pop-ups to "use our incredible Intelligence™" at every turn, it's grating on me so much that I've actively started to use them less, and try to disable every new "Intelligence™" feature that shows up in a tool I use.
The boards in turn instruct the CEOs to "adopt AI", and so all the normal processes for deciding what/if/when to do things get short-circuited, and you get AI features that no one asked for, or mandates for employees to adopt AI with very shallow KPIs to claim success.
The hype really distorts both sides of the conversation. You get the boosters for which any use of AI is a win, no matter how inconsequential the results, and then you get things like the original article which indicate it hasn't caused job losses yet as a sign that it hasn't changed anything. And while it might disprove the hype (especially the "AI is going to replace all mental labour in $SHORT_TIMEFRAME" hype), it really doesn't indicate that it won't replace anything.
Like, when has a technology making the customer support experience worse for users or employees ever stopped its rollout if there were cost savings to be had?
I think this is why AI is so complicated for me. I've used it, and I can see some gains. But it's on the order of when IDE autocomplete went from substring matches of single methods to autocompleting chains of method calls based on types. The agent stuff fails on anything but the most bite-size work when I've tried it.
Clearly some people see it as something more transformative than that. There have been other times when people saw something as transformative and it was so clearly of no value (NFTs, for example) that it was easy to ignore the hype train. The reason AI is challenging for me is that it's clearly not nothing, but it's also so far away from the vision others have that it's not clear how realistic that vision is.
Fundamentally, we (the recipients of LLM output) are generating the meaning from the words given; i.e., LLMs are great when the recipient of their output is a human.
But when the recipient is a machine, the model breaks down, because machine-to-machine requires deterministic interactions. This is the weakness I see, regardless of all the hype about LLM agents: fundamentally, LLMs are not deterministic machines.
LLMs lack a fundamental human capability, deterministic symbolization: the ability to create NEW symbols with associated rules that can deterministically model the worlds we interact with. They have a long way to go on this.
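To make that concrete, here is a minimal sketch of what machine-to-machine consumption of LLM output tends to look like in practice; `complete()` is a hypothetical stand-in for any LLM call. The determinism has to be bolted on from the outside, and even then only the structure is guaranteed, not the meaning:

    # The consuming machine can only validate and retry; it cannot make
    # the model deterministic. `complete()` is a hypothetical stand-in
    # for any LLM call that returns a string.
    import json

    REQUIRED_KEYS = {"action", "target"}

    def parse_command(prompt: str, retries: int = 3) -> dict:
        for _ in range(retries):
            raw = complete(prompt)  # non-deterministic: same prompt, varying output
            try:
                data = json.loads(raw)
            except json.JSONDecodeError:
                continue  # not even valid JSON; ask again
            if REQUIRED_KEYS <= data.keys():
                return data  # structurally valid, semantically unverified
        raise ValueError("model never produced output matching the contract")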
It's very telling that we sometimes see "we won't use your data for training" and opt-outs, but never "we won't collect your data". 'Training' is at best ill-defined.
You already see attorneys using it to write briefs, often to hilarious effect. These are clearly the precursor, though, to a much reduced need for Jr/associate-level attorneys at firms.
Watch out for headcount lagging in segments of the market.
The wise will displace economists and consultants with LLMs, but the trend followers will hire them to prognosticate about the future impact, such that the net effect could be zero.
I would say the use cases are only coming into view.
And any important jobs won’t be replaced because managers are too lazy and risk averse to try AI.
We may never see job displacement from AI. Did you know bank teller jobs actually increased in the decades following the rollout of ATMs?
But even then, I'm not saying all are equally vital, I'm just saying that the statement, "most jobs are performative" doesn't even come close to being supported by "I've worked 10 performative jobs".