AI is different (antirez.com)
463 points by grep_it 3 days ago | 751 comments
gdubs 6 hours ago [-]
AI has been improving at a very rapid pace, which means that a lot of people have really outdated priors. I see this all the time online, where people are dismissive about AI in a way that suggests it's been a while since they last checked in on the capabilities of the models. They wrote off the coding ability of ChatGPT at version 3.5, for instance, and have missed all the advancements that have happened since. Or they talk about hallucination and haven't tried Deep Research as an alternative to traditional web search.

Then there's a tendency to be so 'anti' that there's an assumption that anyone reporting that the tools are accomplishing truly impressive and useful things must be an 'AI booster' or shill. Or they assume that person must not have been a very good engineer in the first place, etc.

Really is one of those examples of the quote, "In the beginner's mind there are many possibilities, but in the expert's mind there are few."

It's a rapidly evolving field, and unless you actually spend some time kicking the tires on the models every so often, you're just basing your opinions on outdated experiences or what everyone else is saying about it.

jdoliner 4 hours ago [-]
I feel like I see these two opposite behaviors. People who formed an opinion about AI from an older model and haven't updated it. And people who have an opinion about what AI will be able to do in the future and refuse to acknowledge that it doesn't do that in the present.

And often when the two are arguing it's tricky to tell which is which, because whether or not it can do something isn't totally black and white: there are some things it can sometimes do, and you can argue either way about whether those count as being within its capabilities.

forgotTheLast 53 minutes ago [-]
I.e. people who look at f(now) and assume it'll be like this forever against people who look at f'(now) and assume it'll improve like this forever
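A tiny worked sketch of those two extrapolation fallacies, using a made-up capability curve (the function and numbers are purely illustrative, not a claim about real model progress):

```python
# Two fallacies: one camp freezes the value f(now),
# the other freezes the slope f'(now). The curve is hypothetical.
def capability(year: float) -> float:
    return year ** 0.5  # real but decelerating growth, for illustration only

now, later = 4.0, 9.0

static_forecast = capability(now)  # "it'll be like this forever": 2.0

slope = (capability(now + 0.01) - capability(now)) / 0.01  # ~0.25
linear_forecast = capability(now) + slope * (later - now)  # "it'll improve like this forever": ~3.25

actual = capability(later)  # 3.0: the first camp undershoots, the second overshoots
print(static_forecast, linear_forecast, actual)
```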
thegrim33 5 hours ago [-]
To play devil's advocate, how is your argument not a 'No True Scotsman' argument? As in, "oh, they had a negative view of X, well that's of course because they weren't testing the new and improved X2 model, which is different." Fast forward a year... "Oh, they have a negative view of X2, well silly them, they need to be using the Y24 model, that's where it's at, the X2 model isn't good anymore." Fast forward a year... ad infinitum.

Are the models that exist today a "true Scotsman" for you?

xwowsersx 4 hours ago [-]
It's not a No True Scotsman. That fallacy redefines the group to dismiss counterexamples. The point here is different: when the thing itself keeps changing, evidence from older versions naturally goes stale. Criticisms of GPT-3.5 don't necessarily hold against GPT-4, just like reviews of Windows XP don't apply to Windows 11.
cmiles74 2 hours ago [-]
IMHO, by placing people with a negative attitude toward AI products under the label "their priors are outdated", you effectively negate any arguments from those people. That is, because their priors are outdated, their counterexamples may be dismissed. That is, indeed, the No True Scotsman!
ludwik 2 hours ago [-]
I don’t see a claim that anyone with a negative attitude toward AI shouldn’t be listened to because it automatically means that they formed their opinion on older models. The claim was simply that there’s a large cohort of people who undervalue the capabilities of language models because they formed their views while evaluating earlier versions.
barrell 50 minutes ago [-]
Yes, but almost by definition that is everyone who did not find value in LLMs. If you don't find value in LLMs, you're not going to use them all the time.

The only people you’re excluding are the people who are forced to use it, and the random sampling of people who happened to try it recently.

So it may have been accidental or indirect, but yes, No True Scotsman would apply to your statement.

gmm1990 53 minutes ago [-]
I wouldn't say GPT-5 is any better than the previous ChatGPT. I know it's a silly example, but I was trying to trip it up with 8.6 - 8.11 and it got it right (0.49), but then it said the opposite of 8.6 - 8.12 was -0.21.

I just don’t see that much of a difference coding either with Claude 4 or Gemini 2.5 pro. Like they’re all fine but the difference isn’t changing anything in what I use them for. Maybe people are having more success with the agent stuff but in my mind it’s not that different than just forking a GitHub repo that already does what you’re “building” with the agent.
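For reference, the correct values in that trap are easy to check. A quick sketch using exact decimal arithmetic, plus the comparison that models classically get wrong:

```python
from decimal import Decimal

# The classic decimal trap referenced above: a model reading "11 > 6"
# can wrongly conclude 8.11 > 8.6, or botch the subtraction.
print(Decimal("8.6") - Decimal("8.11"))  # 0.49
print(Decimal("8.6") - Decimal("8.12"))  # 0.48, so its opposite is -0.48
print(8.6 > 8.11)                        # True: 8.6 is the larger number
```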

vlovich123 5 hours ago [-]
How is that different from saying that today's models are actually usable for non-trivial things and more capable than yesterday's, and that tomorrow's models will probably be more capable than today's?

For example, I dismissed AI three years ago because it couldn’t do anything I needed it to. Today I use it for certain things and it’s not quite capable of other things. Tomorrow it might be capable of a lot more.

Yes, priors have to be updated when the ground truth changes, and the capabilities of AI change rapidly. This is how chess engines on supercomputers were competitive in the 90s, then hybrid human-machine systems became the leading edge, and then machines took over for good and never looked back.

Eggpants 5 hours ago [-]
It's not that the LLMs are better; it's that the internal tools/functions being called to do the actual work are better. They didn't spend millions to retrain a model to statistically output the number of r's in strawberry; they just offloaded that trivial question to a function call.

So I would say the overall service provided is better than it was, thanks to functions being built based on user queries, but not the actual LLM models themselves.
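A minimal sketch of that offloading pattern; the dispatcher and tool name here are hypothetical stand-ins for a provider's function-calling API, not any specific vendor's interface:

```python
# Deterministic tool the model can call instead of "reasoning" about characters.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

TOOLS = {"count_letter": count_letter}

def handle_tool_call(name: str, arguments: dict):
    """What the orchestration layer does when the model emits a tool call."""
    return TOOLS[name](**arguments)

# If the model answers "how many r's are in strawberry?" by emitting
# {"name": "count_letter", "arguments": {...}}, the runtime computes it exactly:
print(handle_tool_call("count_letter", {"word": "strawberry", "letter": "r"}))  # 3
```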

vlovich123 2 hours ago [-]
LLMs are definitely better at codegen today than three years ago: there are quantitative benchmarks, as well as my personal qualitative experience (even given the gaming that companies engage in).

It is also true that the tooling and context management have gotten more sophisticated (often using models themselves, by the way). That doesn't negate that the models have gotten better at reliable tool calling, so the LLM is driving more of the show rather than relying on purpose-built coordination around it, and that the codegen quality is higher than it used to be.

Mars008 3 hours ago [-]
There is another big and growing group: charlatans (influencers). People who don't know much but make bold statements and cherry-pick 'proof' cases, just to get attention. There are many of them on YouTube. When you see someone making faces on a thumbnail, it's most likely this.
resource0x 2 hours ago [-]
> There are many of them on youtube.

Not as many as on HN. "Influencers" have agendas and a stream of income, or other self-interest. HN always comes off as a monolith, on any subject. Counter-arguments get ignored and downvoted to oblivion.

jkubicek 53 minutes ago [-]
I’m spending a lot of time on LinkedIn because my team is hiring and, boy oh boy, LinkedIn is terminally infested with AI influencers. It’s a hot mess.
analog31 34 minutes ago [-]
There's a middle ground which is to watch and see what happens around us. Is it unholy to not have an opinion?
kelseyfrog 2 hours ago [-]
There are three important beliefs at play in the A(G)I story:

1. When (if) AGI will arrive. It's likely going to be smeared out over a couple of months to years, but relative to everything else, it's a historical blip. This really is the most contentious belief, with the most variability. It is currently predicted to be 8 years away[1].

2. What percentage of jobs will be replaceable with AGI? Current estimates run between 80 and 95% of professions. The remaining professions "culturally require" humans. Think live performance, artisanal goods, in-person care.

3. How quickly will AGI supplant human labor? What is the duration of replacement, from inception to saturation? Replacement won't happen evenly; some professions are much easier to replace with AGI, some much more difficult. Let's estimate a 20-30 year horizon for the most stubborn professions.

What we have is a ticking time bomb of labor change at least an order of magnitude greater than the transition from an agricultural economy to an industrial economy or from an industrial economy to a service economy.

Those happened over the course of several generations. Society (culture, education, the legal system, the economy) was able to absorb the changes over 100-200 years. Yet we're talking about a change on the same scale happening 10 times faster, within the timeline of one professional career. And still, with previous revolutions we had incredible unrest and social change. Taken as a whole, we'll have possibly the majority of the economy operating outside the territory of society, the legal system, and the existing economy. A kid born on the "day" AGI arrives will become an adult in a profoundly different world, as if born on a farm in 1850 and reaching adulthood in a city in 2000.

1. https://www.metaculus.com/questions/5121/date-of-artificial-...

semi-extrinsic 1 hours ago [-]
Your only reference [1] is to a page where anybody in the world can join and vote. It literally means absolutely nothing.

For [2] you have no reference whatsoever. How does AI replace a nurse, a vet, a teacher, a construction worker?

kelseyfrog 14 minutes ago [-]
What are you talking about? This is common knowledge.

Median forecasts indicated a 50% probability of AI systems being capable of automating 90% of current human tasks in 25 years and 99% of current human tasks in 50 years[1]

The scope of work replaceable by embodied AGI and the speed of AGI saturation are vastly underestimated. The bottlenecks are in producing a replacement workforce, not in retraining human laborers.

1. https://arxiv.org/pdf/1901.08579

libraryofbabel 4 hours ago [-]
I do see this a lot. It's hard to have a reasonable conversation about AI amidst, on the one hand, hype-mongers and boosters talking about how we'll have AGI in 2027 and all jobs are just about to be automated away, and on the other hand, a chorus of people who hate AI so much they have invested their identity in it failing and haven't really updated their priors since ChatGPT came out. Both groups repeat the same set of tired points that haven't really changed much in three years.

But there are plenty of us who try and walk a middle course. A lot of us have changed our opinions over time. ("When the facts change, I change my mind.") I didn't think AI models were much use for coding a year ago. The facts changed. (Claude Code came out.) Now I do. Frankly, I'd be suspicious of anyone who hasn't changed their opinions about AI in the last year.

You can believe all these things at once, and many of us do:

* LLMs are extremely impressive in what they can do. (I didn't believe I'd see something like this in my lifetime.)

* Used judiciously, they are a big productivity boost for software engineers and many other professions.

* They are imperfect and make mistakes, often in weird ways. They hallucinate. There are some trivial problems that they mess up.

* But they're not just "stochastic parrots." They can model the world and reason about it, albeit imperfectly and not like humans do.

* AI will change the world in the next 20 years

* But AI companies are overvalued at the present time and we're most likely in a bubble which will burst.

* Being in a bubble doesn't mean the technology is useless. (c.f. the dotcom bubble or the railroad bubble in the 19th century.)

* AGI isn't just around the corner. (There's still no way models can learn from experience.)

* A lot of people making optimistic claims about AI are doing it for self-serving boosterish reasons, because they want to pump up their stock price or sell you something

* AI has many potential negative consequences for society and mental health, and may be at least as nasty as social media in that respect

* AI has the potential to accelerate human progress in ways that really matter, such as medical research

* But anyone who claims to know the future is just guessing

IX-103 2 hours ago [-]
> But they're not just "stochastic parrots." They can model the world and reason about it, albeit imperfectly and not like humans do.

I've not seen anything from a model to persuade me they're not just stochastic parrots. Maybe I just have higher expectations of stochastic parrots than you do.

I agree with you that AI will have a big impact. We're talking about somewhere between "invention of the internet" and "invention of language" levels of impact, but it's going to take a couple of decades for this to ripple through the economy.

libraryofbabel 1 hours ago [-]
What is your definition of "stochastic parrot"? Mine is something along the lines of "produces probabilistic completions of language/tokens without having any meaningful internal representation of the concepts underlying the language/tokens."

Early LLMs were like that. That's not what they are now. An LLM got gold on the International Mathematical Olympiad - very difficult math problems that it hadn't seen in advance. You don't do that without some kind of working internal model of mathematics. There is just no way you can get to the right answer by spouting out plausible-sounding sentence completions without understanding what they mean. (If you don't believe me, have a look at the questions.)
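For concreteness, here is roughly what "probabilistic completions of tokens" means mechanically: a minimal next-token sampling sketch with made-up scores, not any real model's distribution. The debate above is about whether a world model sits behind the scores, not whether the final sampling step is random (it is):

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample the next token from a softmax over (hypothetical) model scores."""
    scaled = {tok: s / temperature for tok, s in logits.items()}
    z = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / z for tok, s in scaled.items()}
    r, cumulative = random.random(), 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding

# Made-up scores for the prompt "The cat sat on the":
print(sample_next_token({"mat": 2.0, "sofa": 1.0, "moon": -1.0}))
```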

pm 10 minutes ago [-]
Ignoring its negative connotation, it's more likely to be a highly advanced "stochastic parrot".

> "You don't do that without some kind of working internal model of mathematics."

This is speculation at best. Models are black boxes, even to those who make them. We can't discern a "meaningful internal representation" in a model any more than in a human brain.

> "There is just no way you can get to the right answer by spouting out plausible-sounding sentence completions without understanding what they mean."

You've just anthropomorphised a stochastic machine, and this behaviour is far more concerning, because it implies we're special, and we're not. We're just highly advanced "stochastic parrots" with a game loop.

nuancebydefault 19 minutes ago [-]
Stochastic parrot here (or not?). Can you tell the difference?
app134 1 hours ago [-]
In-context learning is proof that LLMs are not stochastic parrots.
dvfjsdhgfv 2 hours ago [-]
> AI will change the world in the next 20 years

Well, it's been changing the world for quite some time, both in good and bad ways. There is no need to add an arbitrary timestamp.

dmead 4 hours ago [-]
Is there anything you can tell me that will help me drop the nagging feeling that gradient descent trained models will just never be good?

I understand all of what you said, but I can't get over the fact that the term AI is being used for these architectures. It seems like the industry is just trying to do a cool parlor trick, convincing the masses this is somehow the AI from science fiction.

Maybe I'm being overly cynical, but a lot of this stinks.

atleastoptimal 3 hours ago [-]
The thing is AI is already "good" for a lot of things. It all depends on your definition of "good" and what you require of an AI model.

It can do a lot of things very effectively. High-reliability semantic parsing from images is just one thing that modern LLMs are very good at.

Zacharias030 4 hours ago [-]
Wouldn't you say that now, finally, what people call AI combines subsymbolic systems ("gradient descent") with search and with symbolic systems (tool calls)?

I had a professor in AI who was only working on symbolic systems such as SAT-solvers, Prolog etc. and the combination of things seems really promising.

Oh, and what would be really nice is another level of memory or fast learning ability that goes beyond burning in knowledge through training alone.

dmead 4 hours ago [-]
I had such a professor as well, but those people used to use the more accurate term "machine learning".

There was also wide understanding that those architectures were trying to imitate small bits of what we understood to be happening in the brain (see Minsky and Papert's Perceptrons, etc.). The hope, as I understood it, was that there would be some breakthrough in neuroscience that would let the computer scientists pick up the torch and simulate what we find in nature.

None of that seems to be happening anymore and we're just interested in training enough to fool people.

"AI" companies investing in brain science would convince me otherwise. At this point they're just trying to come up with the next money printing machine.

app134 1 hours ago [-]
You asked earlier if you were being overly cynical, and I think the answer to that is "yes"

We are indeed simulating what we find in nature when we create neural networks and transformers, and AI companies are indeed investing heavily in BCI research. ChatGPT can write an original essay better than most of my students. It's also artificial. Is that not artificial intelligence?

dmead 42 minutes ago [-]
It is not intelligent.

Hiding the training data behind gradient descent and then making attributions to the program that responds using this model is certainly artificial though.

This analogy just isn't holding water.

barrell 2 hours ago [-]
There are also a bunch of us who do kick the tires very often and are consistently underwhelmed.

There are also those of us who have used them substantially, and seen the damage that causes to a codebase in the long run (in part due to the missing gains of having someone who understands the codebase).

There are also those of us who just don’t like the interface of chatting with a robot instead of just solving the problem ourselves.

There are also those of us who find each generation of model substantially worse than the previous generation, and find the utility trending downwards.

There are also those of us who are concerned about the research coming out about the effects of using LLMs on your brain and cognitive load.

There are also those of us who appreciate craft, and take pride in what we do, and don’t find that same enjoyment/pride in asking LLMs to do it.

There are also those of us who worry about offloading our critical thinking to big corporations and becoming dependent on a pay-to-play system that is currently being propped up by artificially lowered prices, with "RUG PULL" written all over it.

There are also those of us who are really concerned about the privacy issues, and don't trust companies hundreds of billions of dollars in debt to some of the least trustworthy individuals with that data.

Most of these issues don’t require much experience with the latest generation.

I don’t think the intention of your comment was to stir up FUD, but I feel like it’s really easy for people to walk away with that from this sort of comment, so I just wanted to add my two cents and tell people they really don’t need to be wasting their time every 6 weeks. They’re really not missing anything.

Can you do more than a few weeks ago? Sure? Maybe? But I can also do a lot more than I could a few weeks ago without using an LLM. I've learned and improved myself.

Chances are, if you're not already using an LLM, it's because you don't like it or don't want to, and that's really OK. If AGI comes out in a few months, all the time you would have invested now would be out of date anyway.

There’s really no rush or need to be tapped in.

bigstrat2003 2 hours ago [-]
> There are also a bunch of us who do kick the tires very often and are consistently underwhelmed.

Yep, this is me. Every time people are like "it's improved so much" I feel like I'm taking crazy pills as a result. I try it every so often, and more often than not it still has the same exact issues it had back in the GPT-3 days. When the tool hasn't improved (in my opinion, obviously) in several years, why should I be optimistic that it'll reach the heights that advocates say it will?

barrell 2 hours ago [-]
haha I have to laugh because I’ve probably said “I feel like I’m taking crazy pills” at least 20 times this week (I spent a day using cursor with the new GPT and was thoroughly, thoroughly unimpressed).

I’m open to programming with LLMs, and I’m entirely fine with people using them and I’m glad people are happy. But this insistence that progress is so crazy that you have to be tapped in at all times just irks me.

LLMs are like iPhones. You can skip a couple of versions and it's fine; you will have the new version, with all the same functionality, at the same time as everyone else buying one every year.

libraryofbabel 2 hours ago [-]
There’s really three points mixed up in here.

1) LLMs are controlled by BigCorps who don’t have user’s best interests at heart.

2) I don’t like LLMs and don’t use them because they spoil my feeling of craftsmanship.

3) LLMs can’t be useful to anyone because I “kick the tires” every so often and am underwhelmed. (But what did you actually try? Do tell.)

#1 is obviously true and is a problem, but it’s just capitalism. #2 is a personal choice, you do you etc., but it’s also kinda betting your career on AI failing. You may or may not have a technical niche where you’ll be fine for the next decade, but would you really in good conscience recommend a juniorish web dev take this position? #3 is a rather strong claim because it requires you to claim that a lot of smart reasonable programmers who see benefits from AI use are deluded. (Not everyone who says they get some benefit from AI is a shill or charlatan.)

barrell 1 hours ago [-]
How exactly am I betting my career on LLMs failing? The inverse is definitely true — going all in on LLMs feels like betting on the future success of LLMs. However not using LLMs to program today is not betting on anything, except maybe myself, but even that’s a stretch.

After all, I can always pick up LLMs in the future. If a few weeks is long enough for all my priors to become stale, why should I have to start now? Everything I learn will be out of date in a few weeks. Things will only be easier to learn 6, 12, 18 months from now.

Also, nowhere in my post did I say that LLMs can't be useful to anyone. In fact I said the opposite. If you like LLMs or benefit from them, then you're probably already using them, in which case I'm not advocating that anyone stop. However, there are many segments of people who LLMs are not for. No tool is a panacea. I'm just trying to nip any FUD in the bud.

There are so many demands for our attention in the modern world to stay looped in and up to date on everything; I’m just here saying don’t fret. Do what you enjoy. LLMs will be here in 12 months. And again in 24. And 36. You don’t need to care now.

And yes I mentor several juniors (designers and engineers). I do not let them use LLMs for anything and actively discourage them from using LLMs. That is not what I’m trying to do in this post, but for those whose success I am invested in, who ask me for advice, I quite confidently advise against it. At least for now. But that is a separate matter.

EDIT: My exact words from another comment in this thread prior to your comment:

> I’m open to programming with LLMs, and I’m entirely fine with people using them and I’m glad people are happy.

saltcured 3 minutes ago [-]
I wonder, what drives this intense FOMO ideation about AI tools as expressed further upthread?

How does someone reconcile a faith that AI tooling is rapidly improving with the contradictory belief that there is some permanent early-adopter benefit?

on_the_train 3 hours ago [-]
But the reports are from shills. The impact of AI is almost non-existent. The greatest impact it has had was on role-playing. It's hardly even useful for coding.

And that all wouldn't be a problem if it wasn't for the wave of bots that makes the crypto wave seem like child's play.

lopatin 3 hours ago [-]
> They wrote off the coding ability of ChatGPT on version 3.5, for instance, and have missed all the advancements that have happened since.

> It's hardly even useful for coding.

I’m curious what kind of projects you’re writing where AI coding agents are barely useful.

It’s the “shills” on YouTube that keep me up to date with the latest developments and best practices to make the most of these tools. To me it makes tools like CC not only useful but indispensable. Now I do not focus on writing the thing, but I focus on building agents who are capable of building the thing with a little guidance.

loandbehold 3 hours ago [-]
I don't understand people who say AI isn't useful for coding. Claude Code improved my productivity 10x. I used to put in a solid 8 hours a day at my remote software engineering job. Now I finish everything in 2 hours and go play with my kids. And my performance is better than before.
bigstrat2003 2 hours ago [-]
I don't understand people who say this. My knee jerk reaction (which I rein in because it's incredibly rude) is always "wow, that person must really suck at programming then". And I try to hold to the conviction that there's another explanation. For me, the vast, vast majority of the time I try to use it, AI slows my work down, it doesn't speed it up. As a result it's incredibly difficult to understand where these supposed 10x improvements are being seen.
libraryofbabel 1 hours ago [-]
Usually the "10x" improvements come from greenfield projects or at least smaller codebases. Productivity improvements on mature complex codebases are much more modest, more like 1.2x.

If you really in good faith want to understand where people are coming from when they talk about huge productivity gains, then I would recommend installing Claude Code (specifically that tool) and asking it to build some kind of small project from scratch. (The one I tried was a small app to poll a public flight API for planes near my house and plot the positions, along with other metadata. I didn't give it the api schema at all. It was still able to make it work.) This will show you, at least, what these tools are capable of -- and not just on toy apps, but also at small startups doing a lot of greenfield work very quickly.

Most of us aren't doing that kind of work, we work on large mature codebases. AI is much less effective there because it doesn't have all the context we have about the codebase and product. Sometimes it's useful, sometimes not. But to start making that tradeoff I do think it's worth first setting aside skepticism and seeing it at its best, and giving yourself that "wow" moment.
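For a concrete flavor of that experiment, here is a minimal sketch of the kind of app described above: polling a public flight API for planes near a house and printing their positions. It assumes the OpenSky Network's free REST endpoint and a made-up bounding box; a real version would add error handling, plotting, and rate-limit awareness:

```python
import json
import time
import urllib.request

# Hypothetical bounding box around "my house" (degrees lat/lon).
BBOX = {"lamin": 52.3, "lomin": 4.7, "lamax": 52.5, "lomax": 5.0}
URL = "https://opensky-network.org/api/states/all?" + "&".join(
    f"{k}={v}" for k, v in BBOX.items()
)

def poll_once() -> None:
    """Fetch and print the current aircraft state vectors in the box."""
    with urllib.request.urlopen(URL, timeout=10) as resp:
        data = json.load(resp)
    for state in data.get("states") or []:
        # Per the OpenSky docs: index 1 is callsign, 5/6 are lon/lat,
        # 7 is barometric altitude in meters.
        callsign, lon, lat, alt = state[1], state[5], state[6], state[7]
        print(f"{(callsign or '').strip():8s} lat={lat} lon={lon} alt={alt}")

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(60)  # anonymous OpenSky access is rate-limited
```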

loandbehold 20 minutes ago [-]
I was able to realize huge productivity gains working on a 20-year-old codebase with 2+ million LOC, as I mentioned in the sister post. So I disagree that big productivity gains only come on greenfield projects. Realizing productivity gains on mature codebases requires more skill and upfront setup. You need to put some work into your claude.md and give Claude tools for accessing necessary data, logs, and the build process. It should be able to test your code autonomously as much as possible. In my experience, people who say they are not able to realize productivity gains don't put in enough effort to understand these new tools and set them up properly for their project.
mattmanser 42 minutes ago [-]
So, I'm doing that right now. You do get wow moments, but then you rapidly hit the WTF are you doing moments.

One of the first three projects I tried was a spin on a to-do app. The buttons didn't even work when clicked.

Yes, I keep it iterating, give it a puppeteer MCP, etc.

I think you're just misunderstanding how hard it is to make a greenfield project when you have the super-charged Stack Overflow that AI amounts to.

Greenfield projects aren't hard, what's hard is starting them.

What AI has helped me immensely with is blank-page syndrome. I get it to spit out some boilerplate for a SINGLE page, then boom, in a couple of days I have a new greenfield project that's 95% my own code.

That's the mistake I think you 10x-ers are making.

And you're all giddy and excited and putting in a ton of work without realising you're the one doing the work, not the AI.

And you'll eventually burn out on that.

And those of us who are a bit more skeptical realise we could have done it on our own, faster; we just wouldn't normally have bothered. I'd have done some gardening with that time instead.

loandbehold 1 hours ago [-]
For me, most of the value comes from Claude Code's ability to 1. research the codebase and answer questions about it, and 2. perform ad hoc testing on the code. Actually writing code is icing on the cake. I work on a large codebase with more than two million lines of code. Claude Code's ability to find relevant code and understand its purpose, history, and interfaces is a big time saver. It can answer in minutes questions that would take hours of digging through the codebase. Ad hoc testing is another thing: e.g. I can just ask it to test an API endpoint. It will find the correct data to use in the database, call the endpoint, and verify that it returned the correct data and that everything was updated in the db correctly.
bentcorner 1 hours ago [-]
It depends on what kind of code you're working on and what tools you're using. There's a sliding scale of "well known language + coding patterns" combined with "useful coding tools that make it easy to leverage AI", where AI can predict what you're going to type, and also you can throw problems at the AI and it is capable of solving "bigger" problems.

Personally I've found that it struggles if you're using a language that is off the beaten path. The more content on the public internet that the model could have consumed, the better it will be.

CPLX 5 hours ago [-]
I agree with you. I am a perpetual cynic about new technology (and a GenXer so multiply that by two) and I have deeply embraced AI in all parts of my business and basically am engaging with it all day for various tasks from helping me compare restaurant options to re-tagging a million contact records in salesforce.

It’s incredibly powerful and will just clearly be useful. I don’t believe it’s going to replace intelligence or people but it’s just obviously a remarkable tool.

But I think at least part of the dynamic is that the SV tech hype booster train has been so profoundly full of shit for so long that you really can’t blame people for skepticism. Crypto was and is just a giant and elaborate grift, to name one example. Also guys like Altman are clearly overstating the current trajectory.

The dismissive response does come with some context attached.

parineum 4 hours ago [-]
> But I think at least part of the dynamic is that the SV tech hype booster train has been so profoundly full of shit for so long that you really can’t blame people for skepticism.

They are still full of shit about LLMs, even if it is useful.

keiferski 11 hours ago [-]
There’s a simple flaw in this reasoning:

Just because X can be replaced by Y today doesn't imply that it will be in a future where we are aware of Y and factor it into the background assumptions about the task.

In more concrete terms: if “not being powered by AI” becomes a competitive advantage, then AI won’t be meaningfully replacing anything in that market.

You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because "made by AI" is becoming a negative label in a world where the presence of AI video is widely known.

Of course this doesn’t apply to every job, and indeed many jobs have already been “replaced” by AI. But any analysis which isn’t reflectively factoring in the reception of AI into the background is too simplistic.

keiferski 11 hours ago [-]
Just to further elaborate on this with another example: the writing industry. (Technical, professional, marketing, etc. writing - not books.)

The default logic is that AI will just replace all writing tasks, and writers will go extinct.

What actually seems to be happening, however, is this:

- obviously written-by-AI copywriting is perceived very negatively by the market

- companies want writers that understand how to use AI tools to enhance productivity, but understand how to modify copy so that it doesn’t read as AI-written

- the meta-skill of knowing what to write in the first place becomes more valuable, because the AI is only going to give you a boilerplate plan at best

And so the only jobs that seem to have been replaced by AI directly, as of now, are the ones writing basically forgettable content, report-style tracking content, and other low level things. Not great for the jobs lost, but also not a death sentence for the entire profession of writing.

jaynetics 8 hours ago [-]
As someone who used to be in the writing industry (a whole range of jobs), this take strikes me as a bit starry-eyed. Throw-away snippets, good-enough marketing, generic correspondence, hastily compiled news items, flairful filler text in books etc., all this used to be a huge chunk of the work, in so many places. The average customer had only a limited ability to judge the quality of texts, to put it mildly. Translators and proofreaders already had to prioritize mass over flawless output, back when Google Translate was hilariously bad and spell checkers very limited. Nowadays, even the translation of legal texts in the EU parliament is done by a fraction of the former workforce. Very few of the writers and none of the proofreaders I knew are still in the industry.

Addressing the wider point, yes, there is still a market for great artists and creators, but it's nowhere near large enough to accommodate the many, many people who used to make a modest living, doing these small, okay-ish things, occasionally injecting a bit of love into them, as much as they could under time constraints.

anon191928 8 hours ago [-]
What I understand is that AI leads certain markets to be smaller in economic terms. Way smaller, actually. Only a few industries will keep growing because of this.
cj 4 hours ago [-]
Specifically markets where “good enough” quality is acceptable.

Translation is a good example. You still need humans for perfect quality, but most use cases arguably don't require perfect.

And for the remaining translators their job has now morphed into quality control.

nostrademons 7 hours ago [-]
I think this is a key point, and one that we've seen in a number of other markets (eg. computer programming, art, question-answering, UX design, trip planning, resume writing, job postings, etc.). AI eats the low end, the portion that is one step above bullshit, but it turns out that in a lot of industries the customer just wants the job done and doesn't care or can't tell how well it is done. It's related to Terence Tao's point about AI being more useful as a "red team" member [1].

This has a bunch of implications that are positive and also a bunch that are troubling. On one hand, it's likely going to create a burst of economic activity as the cost of these marginal activities goes way down. Many things that aren't feasible now because you can't afford to pay a copywriter or an artist or a programmer are suddenly going to become feasible because you can pay ChatGPT or Claude or Gemini at a fraction of the cost. It's a huge boon for startups and small businesses: instead of needing to raise capital and hire a team to build your MVP, just build it yourself with the help of AI. It's also a boon for DIYers and people who want to customize their life: already I've used Claude Code to build out a custom computer program for a couple household organization tasks that I would otherwise need to get an off-the-shelf program that doesn't really do what I want for, because the time cost of programming was previously too high.

But this sort of low-value junior work has historically been what people use to develop skills and break into the industry. And juniors become seniors, and typically you need senior-level skills to be able to know what to ask the AI and prompt it on the specifics of how to do a task best. Are we creating a world that's just thoroughly mediocre, filled only with the content that a junior-level AI can generate? What happens to economic activity when people realize they're getting shitty AI-generated slop for their money and the entrepreneur who sold it to them is pocketing most of the profits? At least with shitty human-generated bullshit, there's a way to call the professional on it (or at least the parts that you recognize as objectionable) and have them do it again to a higher standard. If the business is structured on AI and nobody knows how to prompt it to do better, you're just stuck, and the shitty bullshit world is the one you live in.

[1] https://news.ycombinator.com/item?id=44711306

zarzavat 9 hours ago [-]
The assumption here is that LLMs will never pass the Turing test for copywriting, i.e. AI writing will always be distinguishable from human writing. Given that models that produce intelligible writing didn't exist a few years ago, that's a very bold assumption.
keiferski 9 hours ago [-]
No, I’m sure they will at some point, but I don’t think that eliminates the actual usefulness of a talented writer. It just makes unique styles more valuable, raises the baseline acceptable copy to something better (in the way that Bootstrap increased website design quality), and shifts the role of writer to more of an editor.

Someone still has to choose what to prompt and I don’t think a boilerplate “make me a marketing plan then write pages for it” will be enough to stand out. And I’d bet that the cyborg writers using AI will outcompete the purely AI ones.

(I also was just using it as a point to show how being identified as AI-made is already starting to have a negative connotation. Maybe the future is one where everything is an AI but no one admits it.)

zarzavat 8 hours ago [-]
Why couldn't an AI do all of that?

> And I’d bet that the cyborg writers using AI will outcompete the purely AI ones.

In the early days of chess engines there were similar hopes for cyborg chess, whereby a human and engine would team up to be better than an engine alone. What actually happened was that the engines quickly got so good that the expected value of human intervention was negative - the engine crunching so much more information than the human ever could.

Marketing is also a kind of game. Will humans always be better at it? We have a poor track record so far.

CuriouslyC 7 hours ago [-]
Chess is objective, stories and style are subjective. Humans crave novelty, fresh voices, connection and layers of meaning. It's possible that the connection can be forged and it can get smart enough to bake layers of meaning in there, but AI will never be good at bringing novelty or a fresh voice just by its very nature.
dingnuts 6 hours ago [-]
LLMs are frozen in time and do not have experiences so there's nothing to relate to.

I'd pay extra for writing with some kind of "no AI used" certification, especially for art or information

cobbzilla 7 hours ago [-]
No matter what you ask AI to do, it's going to give you an "average" answer. Even if you tell it to use a very distinct, specific voice and write in a very specific tone, it's going to give you the "average" version of the specific voice and tone you've asked for. AI is the antithesis of creativity and originality. This gives me hope.
IX-103 56 minutes ago [-]
That's mostly true of humans though. They almost always give average answers. That works out because 1) most of the work that needs to be done is repetitive, not new so average answers are okay 2) the solution space that has been explored by humans is not convex, so average answers will still hit unexplored territory most of the time
cobbzilla 23 minutes ago [-]
Absolutely! You can communicate without (or with minimal) creativity. It's not required in most cases. So AI is definitely very useful, and it can ape creativity better and better, but it will always be "faking it".
chuckadams 7 hours ago [-]
What is creative or original thought? You are not the first person to say this after all.
cobbzilla 3 hours ago [-]
Not being 100% algorithmically or mathematically derived is a good start. I’m certain there’s more but to me this is a minimum bar.
slowlyform 7 hours ago [-]
I tried asking ChatGPT for brainrot speech, and all the examples it gave me sounded very different from what the new kids on the internet are using. Maybe language will always evolve faster than whatever amount of data OpenAI can train their model with :).
oblio 8 hours ago [-]
Intellectuals have a strong fetish for complete information games such as chess.

Reality and especially human interaction are basically the complete opposite.

wybiral 7 hours ago [-]
AI will probably pass that test. But art is about experience and communicating more subtle things that we humans experience. AI will not be out in society being a person and gaining experience to train on. So if we're not writing it somewhere for it to regurgitate... It will always feel lacking in the subtlety of a real human writer. It depends on us creating content with context in order to mimic someone that can create those stories.

EDIT: As in, it can make really good derivative works. But it will always lag behind a human that has been in real life situations of the time and experienced being a human throughout them. It won't be able to hit the subtle notes that we crave in art.

j45 8 hours ago [-]
Today’s models are tuned to output the average quality of their corpus.

This could change with varying results.

What is average quality? For some it’s a massive upgrade. For others it’s a step down. For the experienced it’s seeing through it.

zarzavat 3 hours ago [-]
You're absolutely right, but AIs still have their little quirks that set them apart.

Every model has a faint personality, but since the personality gets "mass produced" any personality or writing style makes it easier to detect it as AI rather than harder. e.g. em dashes, etc.

But reducing personality doesn't help either because then the writing becomes insipid — slop.

Human writing has more variance, but it's not "temperature" (i.e. token level variance), it's per-human variance. Every writer has their own individual style. While it's certainly possible to achieve a unique writing style with LLMs through fine-tuning it's not cost effective for something like ChatGPT, so the only control is through the system prompt, which is a blunt instrument.

Scarblac 6 hours ago [-]
Seems a bit optimistic to me. Companies may well accept a lower quality than they used to get if it's far cheaper. We may just get shittier writing across the board.

(and shittier software, etc)

jhbadger 11 hours ago [-]
>You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because made by AI is becoming a negative label in a world where the presence of AI video is widely known.

But that's because, at present, AI-generated video isn't very good. Consider the history of CGI. In the 1990s and early 2000s, it was common to complain about how the move away from practical sets in favor of CGI was making movies worse. And it was! You had backgrounds and monsters that looked like they escaped from a video game. But that complaint has pretty much died out these days as the tech got better (although Nolan's Oppenheimer did weirdly hype the fact that its Trinity blast was done with practical effects).

morsecodist 7 hours ago [-]
I don't agree that it is because of the "quality" of the video. The issue with AI art is that it lacks intentional content. I think people like art because it is a sort of conversation between the creator and the viewer. It is interesting because it has a consistent perspective. It is possible AI art could one day be indistinguishable but for people to care about it I feel they would need to lie and say it was made by a particular person or create some sort of persona for the AI. But there are a lot of people who want to do the work of making art. People are not the limiting factor, in fact we have way more people who want to make art than there is a market for it. What I think is more likely is that AI becomes a tool in the same way CGI is a tool.
tbrownaw 6 hours ago [-]
> The issue with AI art is that it lacks intentional content. I think people like art because it is a sort of conversation between the creator and the viewer.

Intent is in the eye of the beholder.

nprateem 5 hours ago [-]
The trouble with AI shit is it's all contaminated by association.

I was looking on YT earlier for info on security cameras. It's easy to spot the AI crap: under 5 minutes and just stock video in the preview or photos.

What value could there be in me wasting time to see if the creators bothered to add quality content if they can't be bothered to show themselves in front of the lens?

What an individual brings is a unique brand. I'm watching their opinion which carries weight based on social signals and their catalogue etc.

Generic AI will always lack that until it can convincingly be bundled into a persona... only then the cycle will repeat: search for other ways to separate the lazy, generic content from the meaningful original stuff.

ninetyninenine 7 hours ago [-]
[flagged]
morsecodist 7 hours ago [-]
I honestly can't tell if you're being facetious. Maybe I suck at writing and don't properly understand sarcasm but unfortunately I'm only human.
nprateem 5 hours ago [-]
It's obviously not AI written.
keiferski 10 hours ago [-]
CGI is a good analogy because I think AI and creators will probably go in the same direction:

You can make a compelling argument that CGI operators outcompeted practical effects operators. But CGI didn’t somehow replace the need for a filmmaker, scriptwriter, cinematographers, etc. entirely – it just changed the skillset.

AI will probably be the same thing. It’s not going to replace the actual job of YouTuber in a meaningful sense; but it might redefine that job to include being proficient at AI tools that improve the process.

tomrod 9 hours ago [-]
I think they are evolving differently. Some very old CGI holds up because they invested a lot of money to make it so. Then they tried to make it cheaper, and people started complaining because the output was worse than all prior options.
Melatonic 7 hours ago [-]
Jurassic Park is a great example - they also had excellent compositing to hide any flaws (compositing never gets mentioned in casual CGI talk but is one of the most important steps)

The dinosaurs were also animated by oldschool stop motion animators who were very, very good at their jobs. Another very underrated part of the VFX pipeline.

Doesn't matter how nice your 3D modelling and texturing are if the above two are skimped on!

yoz-y 8 hours ago [-]
That said, the complaint is coming back. Namely because most new movies use an incredible amount of CGI and due to the time constraints the quality suffers.

As such, CGI is once again becoming a negative label.

I don’t know if there is an AI equivalent of this. Maybe the fact that as models seem to move away from a big generalist model at launch, towards a multitude of smaller expert models (but retaining the branding, aka GPT-4), the quality goes down.

djtango 10 hours ago [-]
That's a Nolan thing like how Dunkirk used no green screen.

I think Harry Potter and Lord of the Rings embody the transition from old school camera tricks to CGI as they leaned very heavily into set and prop design and as a result have aged very gracefully as movies

silvestrov 9 hours ago [-]
I think the first HP movie was more magical than the later ones, which felt too "Marvel CGI" for me.

Marvel movies have become tiresome for me, too much CGI that does not tell any interesting story. Old animated Disney movies are more rewatchable.

__MatrixMan__ 10 hours ago [-]
Do you get the feeling that AI generated content is lacking something that can be incrementally improved on?

Seems to me that it's already quite good in any dimension that it knows how to improve on (e.g. photorealism) and completely devoid of the other things we'd want from it (e.g. meaning).

tomrod 8 hours ago [-]
It's missing random flaws. Often the noise has patterns as a result of the diffusion or generation process.
keiferski 10 hours ago [-]
Yeah if you look at many of the top content creators, their appeal often has very little to do with production value, and is deliberately low tech and informal.

I guess AI tools can eventually become more human-like in terms of demeanor, mood, facial expressions, personality, etc. but this is a long long way from a photorealistic video.

danielbln 9 hours ago [-]
Ironically, while the non-CGI SFX in e.g. Interstellar looked amazing, that sad fizzle of a practical explosion in Oppenheimer did not do the real thing justice and would've been better served by proper CGI VFX.
Barrin92 9 hours ago [-]
>But that's because, at present, AI generated video isn't very good.

It isn't good, but that's not the reason. There was a paper about 10 years ago where people used a computer system to generate Bach-like music that even Bach experts couldn't reliably tell apart from the real thing, but nobody listens to bot music. (Or nobody except engine programmers watches computer chess, despite its superiority; chess is thriving more now, including commercially, than it ever did.)

In any creative field what people are after is the interaction between the creator and the content, which is why compelling personalities thrive more, not less in a sea of commodified slop (be that by AI or just churned out manually).

It's why we're in an age where twitch content creators or musicians are increasingly skilled at presenting themselves as authentic and personal. These people haven't suffered from the fact that mass production of media is cheap, they've benefited from it.

thefaux 7 hours ago [-]
The wonder of Bach goes much deeper than just the aesthetic qualities of his music. His genius almost forces one to reckon with his historical context and wonder, how did he do it? Why did he do it? What made it all possible? Then there is the incredible influence that he had. It is easy to forget that music theory as we know it today was not formalized in his day. The computer programs that simulate the kind of music he made are based on that theory that he understood intuitively and wove into his music and was later revealed through diligent study. Everyone who studies Bach learns something profound and can feel both a kinship for his humanity and also an alienation from his seemingly impossible genius. He is one of the most mysterious figures in human history and one could easily spend their entire life primarily studying just his music (and that of his descendants). From that perspective, computer generated music in his style is just a leaf on the tree, but Bach himself is the seed.

> These people haven't suffered from the fact that mass production of media is cheap, they've benefited from it.

Maybe? This really depends on your value system. Every moment that you are focused on how you look on camera and trying to optimize an extractive algorithm is a moment you aren't focused on creating the best music that you can in that moment. If the goal is maximizing profit to ensure survival, perhaps they are thriving. Put another way, if these people were free to create music in any context, would they choose content creation on social media? I know I wouldn't, but I also am sympathetic to the economic imperatives.

vidarh 7 hours ago [-]
That's interesting, because after ElevenLabs launched their music generation I decided I really quite want to spent some time to have it generate background tracks for me to have on while working.

I don't know the name of any of the artists whose music I listened to over the last week because it does not matter to me. What mattered was that it was unobtrusive and fit my general mood. So I have a handful of starting points that I stream music "similar to". I never care about looking up the tracks, or albums, or artists.

I'm sure lots of people think like you, but I also think you underestimate how many contexts there are where people just don't care.

pfdietz 6 hours ago [-]
Authenticity and sincerity are very important. When you can fake those, you've got it made.
antirez 10 hours ago [-]
To understand why this is too optimistic, you have to look at things where AI is already almost human-level. Translations are more and more done exclusively with AI or with massive AI help (with the effect of destroying many jobs anyway) at this point. Now ebook reading is switching to AI. Book and music album covers are often done with AI (even if this is most of the time NOT advertised), and so forth. If AI progresses more in a short timeframe (the big "if" in my blog post), we will see a lot of things done exclusively by AI (and even done better 90% of the time, since most humans doing a given job are not excellent at what they do). This will be fine if governments immediately react and the system changes. Otherwise there will be a lot of people to feed without a job.
Wowfunhappy 9 hours ago [-]
> Now ebook reading is switching to AI.

IMO these are terrible, I don't understand how anyone uses them. This is coming from someone who has always loved audiobooks but has never been particularly precious about the narrator. I find the AI stuff unlistenable.

keiferski 10 hours ago [-]
I can buy the idea that simple specific tasks like translation will be dramatically cut down by AI.

But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills.

AI art seems to basically only be viable when it can't be identified as AI art. Which might not matter if the intention is to replace cheap graphic design work. But it's certainly nowhere near developed enough to create anything more sophisticated: work sophisticated enough to both read as human-made and have the imperfect artifacts of a human creator. A lot of the modern arts are also personality-driven, where the identity and publicity of the artist is a key part of their reception. There are relatively few totally anonymous artists.

Beyond these very specific examples, however, I don’t think it follows that all or most jobs are going to be replaced by an AI, for the reasons I already stated. You have to factor in the sociopolitical effects of technology on its adoption and spread, not merely the technical ones.

Davidzheng 5 hours ago [-]
Isn't "But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills." only true if the false positive rate of the verifier is not much higher than the failure rate of the AI? At some point it's like asking a human to double check a calculator
griffzhowl 7 hours ago [-]
You might still need humans in the loop for many things, but it can still have a profound effect if the work that used to be done by ten people can now be done by two or three. In the sectors that you mention, legal, graphic design, translation, that might be a conservative estimate.

There are bound to be all kinds of complicated sociopolitical effects, and as you say there is a backlash against obvious AI slop, but what about when teams of humans working with AI become more skillful at hiding that?

spenrose 7 hours ago [-]
Look at your examples. Translation is a closed domain; the LLM is loaded with all the data and can traverse it. Book and music album covers _don't matter_ and have always been arbitrary reworkings of previous ideas. (Not sure what “ebook reading” means in this context.) Math, where LLMs also excel, is a domain full of internal mappings.

I found your post “Coding with LLMs in the summer of 2025 (an update)” very insightful. LLMs are memory extensions and cognitive aides which provide several valuable primitives: finding connections adjacent to your understanding, filling in boilerplate, and offloading your mental mapping needs. But there remains a chasm between those abilities and much work.

evanelias 7 hours ago [-]
> Book and music album covers are often done with AI (even if this is most of the times NOT advertised)

This simply isn't true, unless you're considering any minor refinement to a human-created design to be "often done with AI".

It certainly sounds like you're implying AI is often the initial designer or primary design tool, which is completely incorrect for major publishers and record labels, as well as many smaller independent ones.

apwell23 10 hours ago [-]
> Book and music album covers are often done with AI

These suck. Things made with AI just suck big time. Not only are they stupid, they add negative value to your product.

I cannot think of a single purely AI-made video, song, or any other form of art that is any good.

All AI has done is falsely convince ppl that they can now create things that they had no skills to do before AI.

antirez 9 hours ago [-]
This is not inherent to AI, but a result of how the models were recently trained (by aggregating the preferences of many random users). Look for the latest Krea / Black Forest Labs paper on AI style. The "AI look" can be removed.

Songs right now are terrible. For videos, things are going to be very different once people can create full movies on their computers. Many will have access to the ability to create movies, a few will be very good at it, and this will likely change many things. Btw, this stupid "AI look" is transient and not needed anywhere. It will be fixed, and AI image/video generation will be impossible to stop.

nprateem 5 hours ago [-]
The trouble is, I'm perfectly well aware I can go to an AI tool, ask it to do something, and it'll do it. So there's no point in me wasting time e.g. reading AI blog posts, as they'll probably just tell me what I could have asked the AI for myself. The same goes for any media.

It'll only stand on its own when significant work is required. This is possible today with writing, provided the AI is directed to incorporate original insights.

And unless it's immediately obvious to consumers that a high level of work has gone into it, it'll all be tarred with the same brush.

Any workforce needs direction. Thinking an AI can creatively execute when not given a vision is flawed.

Either people will spaff out easy-to-generate media (which will therefore have no value, due to abundance), or they'll spend time providing insight and direction to create genuinely good content... but again, unless it's immediately obvious this has been done, it will suffer the same tarring through association.

The issue is really one of deciding to whom to give your attention. It's the reason an ordinary song produced by a megastar is a hit, while the same song performed by an unsigned artist isn't. Or, as in the famous experiment, the same world-class violinist collected about $32 for a recital while busking, versus selling out a concert hall at $100 per seat that same week.

This is the issue AI, no matter how good, will have to overcome.

HDThoreaun 4 hours ago [-]
I've made a ton of songs I enjoy with Suno. They're not the greatest, but they're definitely not the worst either.
apetresc 9 hours ago [-]
I mean, test after test has shown that the vast, vast majority of humans are woefully unable to distinguish good AI art made by SOTA models from human art, and in many/most cases actively prefer it.

Maybe you’re a gentleman of such discerning, superior taste that you can always manage to identify the spark of human creativity that eludes the rest of us. Or maybe you’ve just told yourself you hate it, and therefore you say you always do. I dunno.

apwell23 8 hours ago [-]
you could've given me an example instead of this butthurt comment :)
jordanpg 9 hours ago [-]
Of course, your opinion may be subject to selection bias (i.e., you are only judging the art that you became aware was AI generated).
WesleyLivesay 9 hours ago [-]
Reminds me of the issue with bad CGI in movies: the only CGI you notice is the bad CGI; the good stuff just works. Same for AI-generated art: you see the bad stuff but don't register the good.
apwell23 9 hours ago [-]
Care to give me some examples from YouTube? I am talking about videos that ppl on YouTube connected with for the content in the video (not AI demo videos).
yoavm 6 hours ago [-]
Perhaps this will go the way the industrial revolution did? A knife handcrafted by a Japanese master might have a very high value, but 99.9% of the knives are mass produced. "Creators" will become artisans - appreciated by many, consumed by few.
onlyrealcuzzo 7 hours ago [-]
It's becoming a negative label because they aren't as good.

I'm not saying it will happen, but it's possible to imagine a future in which AI videos are generally better, and if that happens, almost by definition, people will favor them (otherwise they aren't "better").

glhaynes 7 hours ago [-]
I'm not on Facebook, but, from what I can tell, this has arguably already happened for still images on it. (If defining "better" as "more appealing to/likely to be re-shared by frequent users of Facebook.")
techpineapple 7 hours ago [-]
I mean, I can imagine any future, but the problem with “created by AI” is that, because it’s relatively inexpensive, it seems like it will necessarily become noise rather than signal. If a person can pop out a high-quality video in a day, the signal will revert to the celebrity marketing it rather than the video itself.
gopalv 4 hours ago [-]
> because made by AI is becoming a negative label in a world

The negative label is the old world pulling the new one back; it rarely sticks.

I'm old enough to remember the folks saying "We used to have to paint the background blue" and "All music composers need to play an instrument" (or turn into a symbol).

danielvaughn 7 hours ago [-]
Another flaw is the assumption that humans won’t find other things to do. I don’t see the argument for that idea. If I had to bet, I’d say that if AI continues getting more powerful, humans will transition to working on more ambitious things.
johnecheck 7 hours ago [-]
This is very similar to the 'machines will do all the work, we'll just get to be artists and philosophers' idea.

It sounds nice. But to have that, you need resources. Whoever controls the resources will get to decide whether you get them. If AI/machines are our entire economy, the people that control the machines control the resources. I have little faith in their benevolence. If they also control the political system?

You'll win your bet. A few humans will work on more ambitious things. It might not go so well for the rest of us.

treis 7 hours ago [-]
>This is very similar to the 'machines will do all the work, we'll just get to be artists and philosophers' idea

We've come a long way toward that goal. The amount of work, both economic and domestic, that humans do has dropped dramatically.

PKop 2 hours ago [-]
There are more mouths to feed and less territory per capita (hence real estate inflation in desired locations). Like lanes on a highway, the population just fills the available capacity, without any selective pressure for skill or ability. The gains were mostly front-loaded: the low-hanging fruit of eliminating domestic drudgery was picked quite a while ago, while population took time to grow. Meanwhile, the "work" that filled much of that freed-up obligation in the home has expanded to the point of necessitating two full-time incomes per household.
msgodel 7 hours ago [-]
And it's very similar to "slaves will do all the work" which was actually possible but never happened.
bamboozled 7 hours ago [-]
If it became magically smart, then I don’t see why we couldn’t use it to enhance ourselves and become transhuman?
johnecheck 7 hours ago [-]
There are a number of reasons you might not be able to.

Most likely? It's ridiculously expensive and you're poor.

cesarvarela 5 hours ago [-]
Technology has been deflationary so far; the rich get it first, but eventually it reaches everyone.
variadix 5 hours ago [-]
Re: YT AI content. That is because AI video is (currently) low quality. If AI video generators could spit out full length videos that rivaled or surpassed the best human made content people wouldn’t have the same association. We don’t live in that world yet, but someday we might. I don’t think “human made” will be a desirable label for _anything_, videos, software, or otherwise, once AI is as good or better than humans in that domain.
d3nj4l 8 hours ago [-]
> AI-generated videos are a mild amusement, not a replacement for video creators

If you seriously think this, you don’t understand the YouTube landscape. Shorts - which have incredible view times - are flooded with AI videos. Most thumbnails these days are made with AI image generators. There’s an entire industry of AI “faceless” YouTubers who do big numbers with nobody in the comments noticing. The YouTuber Jarvis Johnson made a video about how his feed has fully AI generated and edited videos with great view counts: https://www.youtube.com/watch?v=DDRH4UBQesI

What you’re missing is that most of these people aren’t going onto Veo 3, writing “make me a video,” and publishing that; these videos are a little more complex, with separate models writing scripts, generating voiceover, and doing basic editing.
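
To make the shape of that pipeline concrete, here's a minimal sketch; every function is a stub standing in for a separate model or tool, and none of the names are real APIs:

  # Hypothetical "faceless channel" pipeline; each stub stands in for a
  # separate model (LLM, TTS, video generator, auto-editor).
  def write_script(topic: str) -> list[str]:
      return [f"{topic}: scene {i}" for i in range(1, 4)]  # LLM stand-in

  def voiceover(scenes: list[str]) -> bytes:
      return " ".join(scenes).encode()                     # TTS stand-in

  def render_clip(scene: str) -> bytes:
      return scene.encode()                                # video-gen stand-in

  def auto_edit(clips: list[bytes], audio: bytes) -> bytes:
      return b"".join(clips) + audio                       # editor stand-in

  scenes = write_script("10 ocean facts")
  video = auto_edit([render_clip(s) for s in scenes], voiceover(scenes))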

keiferski 8 hours ago [-]
These videos and shorts are a fraction of the entire YouTube landscape, and actual creators with identities are making vastly, vastly more money - especially once you realize how YouTube and video content in general is becoming a marketing channel for other businesses. Faceless channels have functionally zero brand, zero longevity, and no real way to extend that into broader products in the way that most successful creators have done.

That was my point: someone that has an identity as a YouTuber shouldn’t worry too much about being replaced by faceless AI bot content.

andai 8 hours ago [-]
This only works in a world where AI sucks and/or can be easily detected. I've already found videos where on my 2nd or 3rd time watching I went, "wait, that's not real!" We're starting to get there, which is frankly beyond my ability to reason about.

It's the same issue with propaganda. If people say a movie is propaganda, that means the movie failed. If a propaganda movie is good propaganda, people don't talk about that. They don't even realize. They just talk about what a great movie it is.

jostylr 8 hours ago [-]
One thing to keep in mind is not so much that AI would replace the work of video creators for general video consumption, but rather it could create personalized videos or music or whatever. I experimented with creating a bunch of AI music [1] that was tailored to my interests and tastes, and I enjoy listening to them. Would others? I doubt it, but so what? As the tools get better and easier, we can create our own art to reflect our lives. There will still be great human art that will rise to the top, but the vast inundation of slop to the general public may disappear. Imagine the fun of collaboratively designing whole worlds and stories with people, such as with tabletop role-playing, but far more immersive and not having to have a separate category of creators or waiting on companies to release products.

1: https://www.youtube.com/playlist?list=PLbB9v1PTH3Y86BSEhEQjv...

j45 8 hours ago [-]
Poorly made videos are poorly made videos, whether made by a human directly or by a human using AI.

Using software like AI to create videos with sloppy quality and results reflects on the creator's skill.

Currently, the use of AI leans towards sloppy because of the lower digital literacy of content creators with AI; that changes once they get into it and realize how much goes into a video.

MichaelZuo 10 hours ago [-]
That’s the fundamental issue with most “analysis”, and most discussions really, on HN.

Since the vast majority of writers and commentators are not literal geniuses, they can’t reliably produce high-quality synthetic analysis outside of very narrow niches.

Yet for most comment chains on HN to make sense, readers have to pretend some meaningful text was produced beyond happenstance.

This is partly because quality is measured relative to the average, and partly because the world really is getting more complex.

nprateem 5 hours ago [-]
Oh come on. I may not be a genius but I can turn my mind to most things.

"I may not be a gynecologist, but I'll have a look."

btilly 19 hours ago [-]
In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs. And, because it is easier to retrain humans than build machines for those jobs, we wound up with more and better jobs.

This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?

I don't think that we have a good answer to that. And we may need it sooner rather than later. I'd be more optimistic if I trusted our leadership more. But wise political leadership is not exactly a strong point for our country right now.

chrisco255 18 hours ago [-]
> but which can be trained to the new job opportunities more easily than humans can

What makes you think that? Self-driving cars have had untold billions of dollars in research and decades of applied testing, iteration, active monitoring, etc., and there is still a very long tail of unaddressed issues. They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots, completely ignorant of the turmoil that was going on. A human driver is still far more adaptive and requires a lot less training than AI, and humans are ready to handle the infinitely long tail of exceptions to the otherwise algorithmic task of driving, which follows strict rules.

And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than driving, with even less training data to go on, I find myself firmly in the skeptics' camp, which holds that you will struggle even harder to apply humanoid robotics in uncontrolled environments across a diverse range of tasks without human intervention, piloting, maintenance, or management.

Unemployment is still near all-time lows, and this will persist for some time, as we have a structural demographic problem with massive numbers of retirees and fewer children to support the population "pyramid" (which is looking more like a tapering rectangle these days).

schneems 18 hours ago [-]
A few months ago I saw one driverless car maybe every three days. Now I see roughly 3-5 every day.

I get that it’s taken a long time and a lot of hype that hasn’t panned out. But once the tech works and it’s just about juicing the scale then things shift rapidly.

Even if you think “oh that’s the next generation’s problem” if there is a chance you’re wrong, or if you want to be kind to the next generation: now is the time to start thinking and planning for those problems.

I think the most sensible answer would be something like UBI. But I also think the most sensible answer for climate change is a carbon tax. Just because something is sensible doesn't mean it's politically viable.

tfourb 15 hours ago [-]
I guess you live in a place with perfect weather year round? I don't, and I haven't seen a robotaxi in my entire life. I do have access to a Tesla, though, and its current self-driving capabilities are not even close to anything I would call "autonomous" under real-world conditions (including weather).

Maybe the tech will at some point be good enough. At the current rate of improvement, that will still take decades at least. Which is sad, because I personally hoped that my kids would never have to get a driver's license.

boulos 12 hours ago [-]
Our next vehicle sensor suite will be able to handle winter weather (https://waymo.com/blog/2024/08/meet-the-6th-generation-waymo...).
Tryk 6 hours ago [-]
Blog post is almost exactly 1 year old...
LtWorf 8 hours ago [-]
Will it be able to function on super slippery roads while volcanic ash is falling? Or on ice?

I do drive in these conditions.

suddenlybananas 10 hours ago [-]
I'll believe it when I see it.
DougBTX 10 hours ago [-]
That’s one of the interesting things about innovation, you have to believe that things are possible before they have been done.
AYBABTME 9 hours ago [-]
Only if you've set out to build it. Otherwise you can sit and wait.
layer8 9 hours ago [-]
Believing a thing is possible doesn’t by itself make it so, however.
pixl97 2 hours ago [-]
This is kind of weird. It's like saying "driving in snow is impossible," when we know it is possible because humans do it.

And this even ignores all the things modern computer-controlled vehicles already do above and beyond humans. Take most people used to driving modern cars, put them in an old car with Armstrong steering, and they'll put themselves into a ditch on a rainy day.

Really, the last things missing in self-driving cars are fast portable compute and general intelligence. General intelligence will be needed for the million edge cases we hit while driving. The particular problem is that once we get this general intelligence, a lot of problems are going to disappear, and a whole new set of problems will come up for people and society at large.

suddenlybananas 10 minutes ago [-]
Ah we only need general intelligence, something so ineffable and hard to understand that we don't even have a clear definition of it.
BoorishBears 15 hours ago [-]
I've ridden just under 1,000 miles in autonomous (no scare quotes) Waymos, so it's strange to see someone letting Tesla's abject failure inform their opinions on how much progress AVs have made.

Tesla that got fired as a customer by Mobileye for abusing their L2 tech is your yardstick?

Anyways, Waymo's DC launch is next year, I wonder what the new goalpost will be.

thephotonsphere 14 hours ago [-]
Tesla uses only cameras, which sounds crazy (reflections, direct sunlight, fog, smoke, etc.).

LiDAR and radar assistance feel crucial.

https://fortune.com/2025/08/15/waymo-srikanth-thirumalai-int...

latexr 12 hours ago [-]
Indeed. Mark Rober did some field tests on that exact difference. LiDAR passed all of them, while Tesla’s camera-only approach failed half.

https://www.youtube.com/watch?v=IQJL3htsDyQ

randallsquared 7 hours ago [-]
I'm not sure the guy who did the Tesla crash test hoax and (partially?) faked his famous glitterbomb pranks is the best source. I would separately verify anything he says at this point.
latexr 5 hours ago [-]
> Tesla crash test hoax

First I’m hearing of that. In doing a search, I see a lot of speculation but no proof. Knowing the shenanigans perpetrated by Musk and his hardcore fans, I’ll take theories with a grain of salt.

> and (partially?) faked his famous glitterbomb pranks

That one I remember, and the story is that the fake reactions were done by a friend of a friend who borrowed the device. I can’t know for sure, but I do believe someone might do that. Ultimately, Rober took accountability, recognised that it hurt his credibility, and edited that part out of the video.

https://www.engadget.com/2018-12-21-viral-glitter-bomb-video...

I have no reason to protect Rober, but also have no reason to discredit him until proof to the contrary. I don’t follow YouTube drama but even so I’ve seen enough people unjustly dragged through the mud to not immediately fall for baseless accusations.

One I bumped into recently was someone describing the “fall” of another YouTuber, and in one case showed a clip from an interview and said “and even the interviewer said X about this person”, with footage. Then I watched the full video and at one point the interviewer says (paraphrased) “and please no one take this out of context, if you think I’m saying X, you’re missing the point”.

So, sure, let’s be critical about the information we’re fed, but that cuts both ways.

ACCount37 14 hours ago [-]
Humans use only cameras. And humans don't even have true 360 coverage on those cameras.

The bottleneck for self-driving technology isn't sensors - it's AI. Building a car that collects enough sensory data to enable self-driving is easy. Building a car AI that actually drives well in a diverse range of conditions is hard.

tfourb 13 hours ago [-]
That's actually categorically false. We also use sophisticated hearing and a well-developed sense of inertia, movement, air pressure, impact, etc. And we can swivel our heads to increase our coverage of vision to near 360°, while using very dependable and simple technology like mirrors to cover the rest. Add to that that our vision is inherently 3D, and we sport a quite impressive sensor suite ;-). My guess is that the fidelity and range of the sensors on a Tesla can't hold a candle to those of the average human driver. No idea how LIDAR changes this picture, but it sure is better than vision only.

I think there is a good chance that what we currently call "AI" is fundamentally not technologically capable of human levels of driving in diverse conditions. It can support and it can take responsibility in certain controlled (or very well known) environments, but we'll need fundamentally new technology to make the jump.

ACCount37 10 hours ago [-]
Yes, human vision is so bad it has to rely on a swivel joint and a set of mirrors just to approximate 360 coverage.

Modern cars can have 360° vision at all times, as a default, with multiple overlapping camera FoVs, which is exactly what humans use to get near-field 3D vision. And far-field 3D vision?

The depth-discrimination ability of binocular vision falls off with the square of distance. At far ranges, humans no longer see enough difference between the two images to get a reliable depth estimate. Notably, cars can space their cameras much further apart, so their far-range binocular perception can fare better.

How do humans get that "3D" at far distances, then? The answer, as it usually is when it comes to perception, is postprocessing. The human brain estimates depth based on the features it sees, not unlike an AI trained to predict depth maps from a single 2D image.
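
As a rough illustration of that square law (textbook pinhole-stereo arithmetic with made-up numbers, not any vehicle's actual specs):

  # Stereo depth uncertainty: dZ ~= Z^2 * dd / (f * b), for depth Z (m),
  # disparity error dd (px), focal length f (px), baseline b (m).
  def depth_error(z_m, baseline_m, focal_px=1000.0, disparity_err_px=0.5):
      return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

  for z in (10, 50, 100):
      eyes = depth_error(z, baseline_m=0.065)  # ~6.5 cm between human pupils
      car = depth_error(z, baseline_m=1.5)     # cameras on opposite fenders
      print(f"Z={z:>3} m: eyes off by ~{eyes:.2f} m, wide stereo ~{car:.2f} m")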

If you think that perceiving "inertia and movement" is vital, then you'd be surprised to learn that an IMU that beats a human on that can be found in an average smartphone. It's not even worth mentioning - even non-self-driving cars have that for GPS dead reckoning.

pixl97 2 hours ago [-]
I mean, technically what we need is fast general intelligence.

A lot of the problems with driving aren't driving problems. They are "other people are stupid" problems and "nature is random" problems. A good driver has a lot of ability to predict what other drivers are going to do. For example, people commonly swerve slightly in the direction they are going to turn, even before putting on a signal. A person swerving in a lane is likely to continue with dumb actions and do something worse soon. Clouds in the distance may be a sign of rain, and with it bad road conditions and slower traffic ahead.

Very little of this has to do with the quality of our sensors. Current sensors are probably far beyond what we actually need. It's compute speed (efficiency, really) and preemption that give humans an edge, at least when we're paying attention.

svara 12 hours ago [-]
A fine argument in principle, but even if we talk only about vision, the human visual system is much more powerful than a camera.

Between brightly sunlit snow and a starlit night, we can cover more than 45 stops with the same pair of eyeballs; the very best cinematographic cameras reach something like 16.

In a way it's not a fair comparison, since we're taking into account retinal adaptation, eyelids/eyelashes, and pupil constriction. But that's the point: human vision does not use cameras.
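
For scale, each photographic stop doubles the luminance ratio, so the gap compounds quickly (the stop counts are the comment's own figures, used purely for illustration):

  eye_stops, camera_stops = 45, 16               # figures from the comment
  print(f"eye:    {2 ** eye_stops:.1e} : 1")     # ~3.5e13 : 1 contrast
  print(f"camera: {2 ** camera_stops:.1e} : 1")  # ~6.6e4 : 1 contrast
  print(f"gap:    {2 ** (eye_stops - camera_stops):.1e}x")  # ~5.4e8x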

the8472 9 hours ago [-]
> In a way it's not a fair comparison,

Indeed. And the comparison is unnecessarily unfair.

You're comparing the dynamic range of a single exposure on a camera vs. the adaptive dynamic range in multiple environments for human eyes. Cameras do have comparable features: adjustable exposure times and apertures. Additionally cameras can also sense IR, which might be useful for driving in the dark.

svara 3 hours ago [-]
Exposure adjustment is constrained by frame rate; that doesn't buy you very much dynamic range.

A system that replicates the human eye's rapid aperture adjustment and integrates images taken at quickly changing aperture/filter settings is very much not what Tesla is putting in their cars.

But again, the argument is fine in principle. It's just that you can't buy a camera that performs like the human visual system today.

the8472 2 hours ago [-]
Human eyes are unlikely to be the only point in parameter space that's sufficient for driving. Cameras can do IR, 360° coverage, higher frame rates, wider stereo separation... but of course nothing says Teslas sit at a good point in that space.
TheOtherHobbes 12 hours ago [-]
Humans are notoriously bad at driving, especially in poor weather. There are more than 6 million accidents annually in the US, which is >16k a day.

Most are minor, but even so - beating that shouldn't be a high bar.

There is no good reason not to use LIDAR with other sensing technologies, because cameras-only just makes the job harder.

ACCount37 10 hours ago [-]
Self-driving cars beat humans on safety already. This holds for Waymos and Teslas both.

They get into fewer accidents, mile for mile and road type for road type, and the ones they do get into trend towards less severe. Why?

Because self-driving cars don't drink and drive.

This is the critical safety edge a machine holds over a human. A top tier human driver in the top shape outperforms this generation of car AIs. But a car AI outperforms the bottom of the barrel human driver: the driver who might be tired, distracted, or under the influence.

tfourb 4 hours ago [-]
I trust Tesla's data on this kind of stuff only as far as a Starship can travel on its return trip to Mars. Anything coming from Elon would have to be audited by an independent entity for me to give it an ounce of credence.

Generally you are comparing apples and oranges if you compare the safety records of, e.g., Waymos to that of the general driving population.

Waymos drive under incredibly favorable circumstances. They also will simply stop, or fall back on human intervention, if they don't know what to do, failing in their fundamental purpose of driving from point A to point B. To get actually comparable data, you'd have to let Waymos or Teslas do the same types of drives that human drivers do, under the same circumstances and without the option of simply stopping when they are unsure, which they are simply not capable of at the moment.

That doesn't mean that this type of technology is useless. Modern self-driving and adjacent tech can make human drivers much safer. I imagine it would be quite easy to build some AI tech that has a decent success rate in recognizing inebriated drivers and stopping the car until they have talked to a human to get cleared for driving. I personally love intelligent lane and distance assistance technology (if done well, which Tesla doesn't do, in my view). Cameras and other assistive technology are incredibly useful when parking even small cars, and I'd enjoy letting a computer do every parking maneuver autonomously until the end of my days. The list could go on.

Waymos have cumulatively driven about 100 million miles without a safety driver as of July 2025 (https://fifthlevelconsulting.com/waymos-100-million-autonomo...) over a span of about 5 years. This is such a tiny fraction of miles driven by US (not to speak of worldwide) drivers during that time, that it can't usefully be expressed. And they've driven these miles under some of the most favorable conditions available to current self-driving technology (completely mapped areas, reliable and stable good weather, mostly slow, inner city driving, etc.). And Waymo themselves have repeatedly said that overcoming the limitations of their tech will be incredibly hard and not guaranteed.

yladiz 10 hours ago [-]
Do you have independent studies to back up your assertion that they are safer per distance than a human driver?
davemp 4 hours ago [-]
> A top tier human driver in the top shape outperforms this generation of car AIs.

Most non-impaired humans outperform the current gen. The study I saw had FSD at 10x fatalities per mile vs non-impaired drivers.

cbrozefsky 7 hours ago [-]
The data indicated they hold an edge over drunk and incapacitated humans, not humans in general.
lagadu 12 hours ago [-]
Once computers and AIs can approach even a small fraction of our capacity, then sure, cameras only is fine. It's a shame that our own camera-data-processing equipment is so far beyond our understanding that we don't even have models of how it might work at its core.

Even at that point, why would you possibly use only cameras, when you can get far better data by using multiple complementary systems? Humans still crash plenty often, in large part because of how limited our "camera" system can be.

latexr 12 hours ago [-]
> Humans use only cameras.

Not true. Humans also interpret the environment in 3D space. See a Tesla fail against a Wile E. Coyote-inspired mural which humans perceive:

https://youtu.be/IQJL3htsDyQ?t=14m34s

ACCount37 10 hours ago [-]
This video proves nothing other than "a YouTuber found a funny viral video idea".

Teslas "interpret the environment in 3D space" too - by feeding all the sensor data into a massive ML sensor fusion pipeline, and then fusing that data across time too.

This is where the visualizers, both the default user screen one and the "Terminator" debugging visualizer, get their data from. They show plain and clear that the car operates in a 3D environment.

You could train those cars to recognize and avoid Wile E. Coyote traps too, but do you really want to? The expected amount of walls set in the middle of the road with tunnels painted onto them is very close to zero.

latexr 8 hours ago [-]
Maybe watch the rest of the video. The Tesla, unlike the LiDAR car, also failed the fog and rain tests. The mural was just the last and funniest one.

Let’s also not forget murals like that do exist in real life. And those aren’t foam.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg...

Additionally, as the other commenter pointed out, trucks often have murals painted on them, either as art or adverts.

https://en.wikipedia.org/wiki/Truck_art_in_South_Asia

https://en.wikipedia.org/wiki/Dekotora

Search for “truck ads” and you’ll find a myriad companies offering the service.

paulryanrogers 8 hours ago [-]
I've seen semi trucks with scenic views painted on them, both rear and side panels.
vrighter 10 hours ago [-]
which cameras have stereoscopic vision and the dynamic range of an eye?

Even if what you're saying is true, which it's not, cameras are so inferior to eyes it's not even funny

perryizgr8 7 hours ago [-]
> which cameras have stereoscopic vision

Any 2 cameras separated by a few inches.

> dynamic range of an eye

Many cameras nowadays match or exceed the eye in dynamic range, especially if you consider that cameras can vary their exposure from frame to frame, similar to the eye but much faster.

ACCount37 2 hours ago [-]
What's more, the power of depth perception in binocular vision is a function of the distance between the two cameras. The larger that distance is, the further out depth can be estimated.

The human skull only has two eye sockets, and they can only be set so far apart. But cars can carry a lot of cameras and maintain a large, fixed distance between them.

bayindirh 11 hours ago [-]
Even though it's false, let's imagine it's true.

Our cameras (also called eyes) have way better dynamic range, focus speed, resolution, and movement-detection capabilities, backed by reduced-bandwidth peripheral vision which is also capable of detecting movement.

No camera, including professional/medium-format still cameras, is that capable. I think one of the car manufacturers made a combined tele/wide lens system for a single camera which can see both at the same time, but that's it.

Dynamic range, focus speed, resolution, FoV, and motion detection still lag.

...and that's when we imagine that we only use our eyes.

BuckRogers 13 hours ago [-]
Except a car isn’t a human.

That’s the mistake Elon Musk made and the same one you’re making here.

Not to mention that humans driving with cameras only is absolutely pathetic. The number of completely avoidable accidents doesn't exactly inspire confidence that all my car needs to be safe and get me to my destination is a couple of cameras.

ACCount37 11 hours ago [-]
This isn't a "mistake". This is the key problem of getting self-driving to work.

Elon Musk is right. You can't cram 20 radars, 50 LIDARs and 100 cameras into a car and declare self-driving solved. No amount of sensors can redeem a piss poor driving AI.

Conversely, if you can build an AI that's good enough, then you don't need a lot of sensors. All the data a car needs to drive safely is already there - right in the camera data stream.

vrighter 10 hours ago [-]
If additional sensors improve the AI, then your last statement is categorically untrue. The reason it worked better is that those additional sensors gave it information that was not available in the video stream.
ACCount37 9 hours ago [-]
"If."

So far, every self-driving accident where the self-driving car was found to be at fault follows the same pattern: the car had all the sensory data it needed to make the right call, and it didn't make the right call. The bottleneck isn't in sensors.

rootusrootus 5 hours ago [-]
In that case we're probably even further from self-driving cars than I'd have guessed. Adding more sensors is a lot cheaper than putting a sufficient amount of compute in a car.
amelius 10 hours ago [-]
The nice thing about LiDAR is that you can use it to train a model that simulates LiDAR from camera inputs only. And, of course, to verify how good that model is.
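
A minimal sketch of that idea (a toy network and toy shapes in a PyTorch-style setup; an assumed illustration, not any vendor's pipeline): LiDAR returns projected into the image plane act as sparse ground-truth depth for a camera-only model, and held-out sweeps then score the result.

  import torch
  import torch.nn as nn

  # Toy stand-in for a real monocular depth network.
  model = nn.Sequential(
      nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
      nn.Conv2d(16, 1, 3, padding=1),  # per-pixel depth prediction
  )
  opt = torch.optim.Adam(model.parameters(), lr=1e-3)

  # Fake batch: camera frames, plus LiDAR depth projected into the image
  # plane; `mask` marks the sparse pixels that got a LiDAR return.
  img = torch.rand(4, 3, 64, 64)
  lidar_depth = torch.rand(4, 1, 64, 64) * 80.0  # meters
  mask = torch.rand(4, 1, 64, 64) > 0.95

  pred = model(img)
  loss = (pred - lidar_depth)[mask].abs().mean()  # supervise only hit pixels
  loss.backward()
  opt.step()
  # Held-out LiDAR sweeps then serve as the benchmark for "how good is it".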
mycall 8 hours ago [-]
I can't wait until V2X and sensor fusion come to autonomous vehicles, greatly improving the detailed 3D mapping of LiDAR, the object classification capabilities of cameras, and the all-weather reliability of radar and radio pings.
perryizgr8 7 hours ago [-]
> only cameras, which sounds crazy

Crazy that billions of humans drive around every day with two cameras. And they have various defects too (blind spots, foveated vision, myopia, astigmatism, glass reflection, tiredness, distraction).

amanaplanacanal 13 hours ago [-]
The goalpost will be when you can buy one and drive it anywhere. How many cities is Waymo in now? I think what they are doing is terrific, but each car must cost a fortune.
BoorishBears 13 hours ago [-]
The cars aren't expensive by raw cost (low six figures, which is about what an S-class with highway-only L3 costs)

But there is a lot of expenditure relative to each mile being driven.

> The goalpost will be when you can buy one and drive it anywhere.

This won't happen any time soon, so I and millions of other people will continue to derive value from them while you wait for that.

yladiz 12 hours ago [-]
Low six figures is quite expensive, and unobtainable to a large number of people.
BoorishBears 12 hours ago [-]
Not even close.

It's a 2-ton vehicle that can self-drive reliably enough to be roving a city 24/7 without a safety driver.

The measure of "expensive" for that isn't "can everyone afford it"; the fact that we can even afford to let anyone ride them is a small wonder.

yladiz 10 hours ago [-]
I’m a bit confused. If we’re talking about consumer cars, the end goal is not to rent a car that can drive itself, the end goal is to own a car that can drive itself, and so it doesn’t matter if the car is available for purchase but costs $250,000 because few consumers can afford that, even wealthy ones.
BoorishBears 9 hours ago [-]
a) I'm not talking about consumer cars, you are. I said very plainly this level of capability won't reach consumers soon, and I stand by that. Some Chinese companies are trying to make it happen in the US, but there are too many barriers.

b) If there was a $250,000 car that could drive itself around major cities, even with the geofence, it would sell out as many units as could be produced. That's actually why I tell people to be wary of BOM costs: they don't reflect market forces like supply and demand.

You're also underestimating both how wealthy people and corporations are, and the relative value being provided.

A private driver in a major city can easily clear $100k a year on retainer, and there are people paying it.

yladiz 8 hours ago [-]
If you look at the original comment that you replied to, the goalpost was explained clearly:

> The goalpost will be when you can buy one and drive it anywhere.

So let’s just ignore the non-consumer parts entirely to avoid shifting the goalpost. I still stand by the fact that the average (or median) consumer will not be able to afford such an expensive car, and I don’t think it’s controversial to state this, given the readily available income data in the US and various other countries. The point isn’t that it exists; Rolls-Royces and Maseratis exist, but they are niche, and if self-driving cars are so expensive as to be niche, they won’t actually make a real impact on real people. Thus the goalpost of general availability to the consumer.

freehorse 13 hours ago [-]
> I and millions of other people

People "wait" because of where they live and what they need. Not all people live and just want to travel around SF or wherever these go nowadays.

BoorishBears 9 hours ago [-]
Why the scare quotes on wait? There is literally nothing for you to do but wait.

At the end of the day, it's not like no one lives in SF, Phoenix, Austin, LA, and Atlanta. There are millions of people with access to the vehicles, and they're doing millions of rides... so acting like it's some great failing of AVs that the current cities are ones with great weather is, frankly, a bit stupid.

It takes 5 seconds to look up the progress that's been made even in the last few years.

saint_yossarian 12 hours ago [-]
How many of those rides required human intervention by Waymo's remote operators? From what I can tell they're not sharing that information.
BoorishBears 11 hours ago [-]
I worked at Zoox, which has similar teleoperations to Waymo: remote operators can't joystick the vehicles.

So if the question is how many times it would have crashed without a human: 0.

They generally intervene when the vehicles get stuck and that happens pretty rarely, typically because humans are doing something odd like blocking the way.

andrei_says_ 16 hours ago [-]
Not sure how exactly politicians will jump from “minimal wages don’t have to be livable wages” and “people who are able to work should absolutely not have access to free healthcare” and “any tax-supported benefits are actually undeserved entitlements and should be eliminated” to “everyone deserves a universal basic income”.
omnimus 15 hours ago [-]
I wouldn't underestimate what can happen if 1/3 of your workforce is displaced and put aside with nothing to do.

People are usually obedient because they have something in life and they are very busy with work, so they don't have the time or headspace to really care about politics. When big numbers of people suddenly start to care more about politics, it leads to organizing and all kinds of political change.

What I mean is that it wouldn't be the current political class pushing things like UBI. At the same time, it seems that some of the current elites are preparing for this and want to get rid of elections altogether to keep the status quo.

TheOtherHobbes 12 hours ago [-]
I wouldn't underestimate how easily AI will suppress this through a combination of ultrasurveillance, psychological and emotional modelling, and personally targeted persuasion delivered by chatbot etc.

If all else fails you can simply bomb city blocks into submission. Or arrange targeted drone decapitations of troublemakers. (Possibly literally.)

The automation and personalisation of social and political control - and violence - is the biggest difference this time around. The US has already seen a revolution in the effectiveness of mass state propaganda, and AI has the potential to take that up another level.

What's more likely to happen is survivors will move off-grid altogether - away from the big cities, off the Internet, almost certainly disconnected and unable to organise unless communication starts happening on electronic backchannels.

nerptastic 6 hours ago [-]
Speculating here, but I don't believe that the government would have the time or organization to do this. Widespread political unrest caused by job losses would be the first step. Almost as soon as there is some type of AI that can replace mass amounts of workers, people will be out on the streets - most people don't have 1-2 months of living expenses saved up. At that point, the government would realize that SHTF - but it's too late, people would be protesting / rioting in droves - doesn't matter how many drones you can produce, or whether or not you can psychologically manipulate people when all they want is... food.

I could be entirely wrong, but it feels like if AI were to get THAT good, the government would be affected just as much as the working class. We'd more likely see total societal collapse rather than the government maintaining power and manipulating / suppressing the people.

anonandwhistle 6 hours ago [-]
That is a lot of assumptions right there. Starving masses can't logically or physically fight an AI or a government for long; they become weak after weeks or months. At that point the government would be smaller and probably controlled by the AI owners.

If they don't have 1-2 months of living expenses saved, they die. They can't be a big threat even in the millions; they don't have the organizational capacity or anything that matches.

vkou 13 hours ago [-]
Getting rid of peaceful processes for transferring power is not going to be the big win that they think it is.
anonandwhistle 6 hours ago [-]
This is why Palantir and others exist: to stop the masses. It's only been tested so far, but it will only grow from there and stop millions of people. SV, you built this.
mindcrime 15 hours ago [-]
> Not sure how exactly politicians will jump from ...

Well, if one believes that the day will come when their choices will be "make that jump" or "the guillotine", then it doesn't seem completely outlandish.

Not saying that day will come, but if it did...

SoftTalker 6 hours ago [-]
> or "the guillotine"

Or even simply being voted out.

chr1 15 hours ago [-]
The money transferred from taxpayers to people without money is in effect a price for not breaking the law.

If AI makes it much easier to produce goods, it reduces the price of money, making it easier to pay everyone some money in exchange for not breaking the law.

ludicrousdispla 15 hours ago [-]
Politicians are elected for limited terms, not for life, so they don't need to change their opinion for a change to occur.
polotics 13 hours ago [-]
Are you sure of this? Don't you think the next US presidential election and very many subsequent ones will be decided by the US Supreme Court?
leshow 15 hours ago [-]
UBI is not a good solution because you still have to provision everything on the market, so it's a subsidy to private companies that sell the necessities of life on the market. If we're dreaming up solutions to problems, much better would be to remove the essentials from the market and provide them to everyone universally. Non-market housing, healthcare, education all provided to every citizen by virtue of being a human.
jostylr 8 hours ago [-]
Your solution would ultimately treat all those items as uniform goods, but they are not. Different people have different preferences. This is why the price system is so useful: it indicates what is desired by various people and gives strong signals as to what to make or not. If you have a central authority making the decisions, it will not get them right. Individual companies may not get them right either, but the corrective mechanism of failure (profit loss, bankruptcy) fixes that, whereas when governments provide these things it is extremely difficult to correct course, because government is one monolithic block. In the market, you can choose different companies for different needs. In a democracy's government, you have to choose all of one politician or all of another. And as power is concentrated, the worst people go after it. That is true of companies too, but people can choose differently. With the state, there is no alternative. That is what makes it the state rather than a corporation.

It is also interesting that you did not mention food, clothing and super-computers-in-pockets. While government is involved in everything, they are less involved in those markets than with housing, healthcare, and education, particularly in mandates as to what to do. Government has created the problem of scarcity in housing, healthcare, and education. Do you really think the current leadership of the US should control everyone's housing, healthcare, and education? The idea of a UBI is that it strips the politicians of that fine-grained control. There is still control that can be leveraged, but it comes down to a single item of focus. It could very well be disastrous, but it need not be whereas the more complex system that you give politicians control over, the more likely it will be disastrous.

sneak 15 hours ago [-]
You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.

The costs of what you propose are enormous. No legislation can change that fact.

There ain’t no such thing as a free lunch.

Who’s going to pay for it? Someone who is not paying for it today.

How do you intend to get them to consent to that?

Or do you think that the needs of the many should outweigh the consent of millions of people?

The state, the only organization large enough to even consider undertaking such a project, has spending priorities that do not include these things. In the US, for example, we spend the entire net worth of Elon Musk (the “richest man in the world”, though he rightfully points out that Putin owns far more than he does) about every six months on the military alone. Add in Zuckerberg and you can get another 5 months or so. Then there’s the next year to think about. Maybe you can do Buffet and Gates; what about year three?

That’s just for the US military, at present day spending levels.

What you’re describing is at least an order of magnitude more expensive than that, just in one country that only has 4% of the world's people. To extend it to all human beings, you’re talking about well over another order of magnitude on top of that.

There aren’t enough billionaires on the entire planet even to pay for one country’s military expenses out of pocket (even if you completely liquidated them), and this proposed plan, extended globally, is on the order of 100x more spending than that. You’re talking about 3-5 trillion dollars per year just for the USA; if you extrapolate out linearly, that’d be 60-200 trillion per year for the Earth.

Even if you could reduce the cost of provision by 90% due to economies of scale ($100/person/month for housing, healthcare, and education combined, rather than $1000 - a big stretch), it is still far, far too big to do under any currently envisioned system of wealth redistribution. Society is big, and wealthy private citizens (i.e. billionaires) aren't that numerous or rich.
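
The back-of-envelope behind those per-person figures, using the comment's own assumed numbers (assumptions, not data):

  us_pop, world_pop = 340e6, 8.1e9  # rough 2024 populations
  per_person_month = 1000.0         # assumed housing+healthcare+education
  print(us_pop * per_person_month * 12 / 1e12)     # ~4.1 trillion USD/yr, US
  print(world_pop * per_person_month * 12 / 1e12)  # ~97 trillion USD/yr, world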

There is a reason we all pay for our own food and housing.

mcny 15 hours ago [-]
> You’re talking about 3-5 trillion dollars per year just for the USA

I just want to point out that's about a fifth of our GDP, and we already spend about that much on healthcare in the US. We badly need a way to reduce this by at least half.

> There is a reason we all pay for our own food and housing.

The main reason I support UBI is that I don't want need-based or need-aware distribution. I want everyone to get benefits equally, regardless of income or wealth. That's my entire motivation for supporting UBI. If you can come up with something else that guarantees no need-based or need-aware distribution and does not have a benefit cliff, I support that too. I am not married to UBI.

dbdblldwn 14 hours ago [-]
Just want to point out that any abstract intrinsic value about the economy, like GDP, is a socialized illusion.

Reduce costs by eliminating fiat ledgers, which only have value if we believe in them; realize that the real economy is physical statistics, and ship resources where the people demand them.

But of course that simple solution violates the embedded training of Americans. So it's a non-starter, and we'll continue to desperately seek some useless reformation of an antiquated social system.

listenallyall 14 hours ago [-]
> I support UBI

Honestly, what type of housing do you envision under a UBI system? Houses? Modern apartment buildings? College-dormitory-like buildings? Soviet-style complexes? Prison-style accommodations? The B stands for basic; how basic?

ben_w 13 hours ago [-]
(Not the person you're replying to)

I think a UBI system is only stable in conjunction with sufficient automation that work itself becomes redundant. Before that point, I don't think UBI can genuinely be sustained; and IMO even very close to that point the best I expect we will see, if we're lucky, is the state pension age going down. (That it's going up in many places suggests that many governments do not expect this level of automation any time soon).

Therefore, in all seriousness, I would anticipate a real UBI system providing whatever housing you want, up to and including things that are currently unaffordable even to billionaires, e.g. 1:1 scale replicas of any of the ships called Enterprise, including both the aircraft carriers and the fictional spaceships.

That said, I am a proponent of direct state involvement in the housing market, e.g. the UK council housing system as it used to be (but not as it now is; they're not building enough):

• https://en.wikipedia.org/wiki/Public_housing_in_the_United_K...

• https://en.wikipedia.org/wiki/Council_house

ben_w 14 hours ago [-]
> You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.

Is AI slavery? Because that's where the value comes from in the scenario under discussion.

motorest 14 hours ago [-]
> You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.

Utter nonsense.

Do you believe the European countries that provide higher education for free are manning tenure positions with slaves or robbing people at gunpoint?

How do you explain public transportation services in some major urban centers being provided free of charge?

How do you explain social housing programmes conducted throughout the world?

Are countries with access to free health care using slavery to keep hospitals and clinics running?

What you are trying to frame as impossibilities has been the reality for many decades in countries that rank far higher than the US in development and quality-of-living indexes.

How do you explain that?

juniperus 14 hours ago [-]
You're missing the point; language can be tricky. Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery. It used to be called corvée. But the words being used have a connotation of something much more brutal and unrewarding. This isn't a political statement; I'm not a libertarian who believes all taxation is evil robbery that needs to be abolished. I'm just pointing out that, by the definitions of slavery (forced labor) and robbery (confiscation of wealth), the state employs both of those tactics to fund the programs you described.
andrepd 13 hours ago [-]
> Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery.

Without the state, you wouldn't have wealth. Heck there wouldn't even be the very concept of property, only what you could personally protect by force! Not to mention other more prosaic aspects: if you own a company, the state maintains the roads that your products ship through, the schools that educate your workers, the cities and towns that house your customers... In other words the tax is not "money that is yours and that the evil state steals from you", but simply "fair money for services rendered".

juniperus 12 hours ago [-]
To a large extent, yes. That's why the arrangement is so precarious: it is necessary in many regards, but a totalitarian regime or dictatorship can use it in a nefarious manner and tip the scale toward public resentment. Balancing things to avoid the revolutionary mob is crucial. Trading your labor for protection is sensible, but if the exchange becomes exorbitant, it becomes a source of revolt.
cataphract 10 hours ago [-]
If the state "confiscated" wealth derived from capital (AI) would that be OK with you?
motorest 13 hours ago [-]
> You're missing the point, language can be tricky. Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery.

You're letting your irrational biases show.

To start off, social security contributions are not a tax.

But putting that detail aside, do you believe that paying a private health insurance also represents slavery and robbery? Are you a slave to a private pension fund?

Are you one of those guys who believes unions exploit workers whereas corporations are just innocent bystanders that have a neutral or even positive impact on workers' lives and well-being?

juniperus 13 hours ago [-]
No, I'm a progressive and believe in socialism. But taxation is de facto a form of unpaid labor taken by the force of the state. If you don't pay your taxes, you will go to jail. It is both robbery and slavery, and in the ideal situation it is a benevolent sort of exchange, despite existing in the realm of slavery/robbery. In a totalitarian system, it becomes malevolent very quickly. It can also be seen as not benevolent when the exchange becomes onerous and not beneficial. Arguing this is arguing emotionally and not rationally using language with words that have definitions.

Social security contributions are a mandatory payment to the state taken from your wages; they are a tax, a compulsory reduction in your income. Private health insurance is obviously not mandatory or compulsory; that is clearly different. Your last statement is just irrelevant, because you assume I'm a libertarian for pointing out the reality of the exchange taking place in the socialist system.

dns_snek 10 hours ago [-]
> No, I'm a progressive and believe in socialism

I'd be very interested in hearing which definition of "socialism" aligns with those obviously libertarian views?

> If you don't pay your taxes, you will go to jail. It is both robbery and slavery [...] Arguing this is arguing emotionally and not rationally using language with words that have definitions.

Indulging in the benefits of living in a society, knowingly breaking its laws, being appalled by the entirely predictable consequences of those actions, and finally resorting to incorrect usage of emotional language like "slavery" and "robbery" to deflect personal responsibility is childish.

Taxation is payment in exchange for services provided by the state, and your opinion (or ignorance) of those services doesn't make it "robbery" or "slavery". Your continued participation in society is entirely voluntary, and you're free to move to a more ideologically suitable destination at any time.

pixl97 2 hours ago [-]
Good idea, let's make taxes optional or unenforceable. What comes next? Oh right, nobody pays. The government you have collapses, and then strongmen become warlords and set up fiefdoms that fight each other. Eventually some authoritarian gathers enough power to unite everyone by force, and you have the totalitarian system you didn't want, after a bunch of violence you didn't want.

We assume you're a libertarian because you are spouting libertarian ideas that just don't work in reality.

motorest 12 hours ago [-]
> No, I'm a progressive and believe in socialism. But taxation is de facto a form of unpaid labor taken by the force of the state.

I do not know what you mean by "progressive", but you are spewing neoliberal/libertarian talking points. If anything, this shows how much Kool-Aid you drank.

sneak 14 hours ago [-]
> Are countries with access to free health care using slavery to keep hospitals and clinics running?

No, robbery. They’re paid for with tax revenues, which are collected without consent. Taking someone’s money without consent has a name.

Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?

331c8c71 13 hours ago [-]
> Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?

My understanding is that your info is seriously out of date. It may have been the case in the distant past, but it isn't anymore.

https://news.yale.edu/2025/02/20/tracking-decline-social-mob...

https://en.wikipedia.org/wiki/Global_Social_Mobility_Index

Rexxar 13 hours ago [-]
> Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?

It's a common idea, but each time you try to measure social mobility, you find a lot of European countries ahead of the USA.

- https://en.wikipedia.org/wiki/Global_Social_Mobility_Index

- https://www.theguardian.com/society/2018/jun/15/social-mobil...

motorest 14 hours ago [-]
> Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?

Which class mobility is this that you speak of? The one that forces the average US citizen to be a paycheck away from homelessness? Or the one where you are a medical emergency away from filing for bankruptcy?

Have you stopped to wonder how some European countries report higher median household incomes than the US?

But by all means continue to believe your average US citizen is a temporarily embarrassed billionaire, just waiting for the right opportunity to benefit from your social mobility.

In the meantime, also keep in mind that mobility also reflects how easy it is to move down a few pegs. Let that sink in.

juniperus 14 hours ago [-]
the economic situation in Europe is much more dire than in the US...
motorest 12 hours ago [-]
> the economic situation in Europe is much more dire than in the US...

Is it, though? The US reports by far the highest levels of lifetime literal homelessness, three times greater than in countries like Germany. Homeless people in Europe aren't denied access to free healthcare, primary or even tertiary.

Why do you think the US, in spite of its GDP, ranks so low in indices such as the Human Development Index or quality of life?

suddenlybananas 10 hours ago [-]
Several US states have the life expectancy of Bangladesh.
andrepd 13 hours ago [-]
Yet people live better. Goes to show you shouldn't optimise for crude, raw GDP as an end in itself, only as a means to your true ends: health, quality of life, freedom, etc.
juniperus 13 hours ago [-]
In many of the metrics, yeah. But Americans can essentially afford larger houses and more stuff, which isn't necessarily a good replacement for general quality-of-life things.
motorest 12 hours ago [-]
> In many of the metrics, yeah. But Americans can essentially afford larger houses and more stuff, which isn't necessarily a good replacement for general quality-of-life things.

I think this is the sort of red herring that prevents the average US citizen from realizing how screwed over they are. Again, the median household income in the US is lower than in some European countries. On top of this, the US provides virtually no social safety net or even socialized services to its population.

The fact that the average US citizen is a paycheck away from homelessness and the US ranks so low in human development index should be a wake-up call.

andrepd 13 hours ago [-]
>Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?

This is not true; it was true historically, but not since WWII. Read Piketty.

victorbjorklund 15 hours ago [-]
So basically the model North Korea practices?
ido 15 hours ago [-]
> Non-market housing, healthcare, education all provided to every citizen

This can also describe Nordic and Germanic models of welfare capitalism (incrementally dismantled with time but still exist): https://en.wikipedia.org/wiki/Welfare_capitalism

bboygravity 11 hours ago [-]
A carbon tax on a state level to try to fight a global problem actually makes zero sense.

You just shift the emissions from your location to the location that you buy products from.

Basically what happened in Germany: more expensive "clean" energy meant their own production went down and the world bought more from China instead. The net result is probably higher global emissions overall.

__MatrixMan__ 10 hours ago [-]
This is why an economics based strictly on scarcity cannot get us where we need to go. Markets, not knowing what it's like to be thirsty, will interpret a willingness to poison the well as entrepreneurial spirit to be encouraged.

We need a system where being known as somebody who causes more problems than they solve puts you (and the people you've done business with) at an economic disadvantage.

mrcincinnatus 10 hours ago [-]
> I think the most sensible answer would be something like UBI.

What corporation will agree to pay dollars for members of society that are essentially "unproductive"? What will happen to the value of UBI over time, in this context, when the strongest lobby will be the companies that have the means of producing AI? And, more essentially, how are humans able to negotiate for themselves when they lose their ability to build things?

I'm not opposing technological progress, I'm merely trying to unfold the reality of UBI being a thing, knowing human nature and the impetus for profit.

DiscourseFan 18 hours ago [-]
The major shift for me is that it's now normal to take Waymos. Yeah, they aren't as fast as Uber if you have to get across town, but for trips less than 10 miles they're my go-to now.
schneems 18 hours ago [-]
I've never taken one. They seem nice though.

On the other hand, the Tesla “robotaxi” scares the crap out of me. No lidar, and it seems to drive more aggressively. The Mark Rober YouTube video of a Tesla plowing into a road-runner style fake tunnel is equal parts hilarious and nightmare fuel when you realize that’s what’s next to your kid biking down the street.

bscphil 16 hours ago [-]
> Mark Rober YouTube video of a Tesla plowing into a road-runner style fake tunnel

I understand the argument for augmenting your self-driving systems with LIDAR. What I don't really understand is what videos like this tell us. The comparison case for a "road-runner style fake tunnel" isn't LIDAR, it's humans, right? And while I'm sure there are cases where a human driver would spot the fake tunnel and stop in time, that is not at all a reasonable assumption. The question isn't "can a Tesla save your life when someone booby traps a road?", it's "is a Tesla any worse than you at spotting booby trapped roads?", and moreover, "how does a Tesla perform on the 99.999999% of roads that aren't booby trapped?"

tfourb 15 hours ago [-]
Tesla’s insistence on not using lidar, while other companies deem it necessary for safe autopilot, creates the need for Tesla to demonstrate that their approach is equally safe for both drivers and e.g. pedestrians. They haven’t done that; arguably the data shows the contrary. This generates the impression that Tesla skimps on safety, and if they skimp in one area, they’ll likely skimp in others. Stuff like the Rober video strengthens these impressions. It’s a public-perception issue, and Tesla has done nothing (and maybe isn’t able to do anything) to dispel this notion.
ekunazanu 15 hours ago [-]
> Is a Tesla any worse than you at spotting booby trapped roads

That would've been the case if all laws, opinions and purchasing decisions were made by everyone acting rationally. Even if self-driving cars are safer than human drivers, it just takes a few crashes to damage their reputation. They have to be much, much safer than humans for mass adoption. Ideally also safer than the competition, if you're comparing specific companies.

DiscourseFan 12 hours ago [-]
And Waymo is much safer than human drivers. It's better at chauffeuring than humans, too.
ziofill 18 hours ago [-]
I’m curious, are they now fully autonomous? I remember some time ago they had a remote operator.
Animats 15 hours ago [-]
Waymo has a control center, but it's customer service, not remote driving. They can look at the sensor data, give hints to the car ("back out, turn around, try another route") and talk to the customer, but can't take direct control and drive remotely.

Baidu's system in China really does have remote drivers.[1]

Tesla also appears to have remote drivers, in addition to someone in each car with an emergency stop button.[2]

[1] https://cyberlaw.stanford.edu/blog/2025/05/comparing-robotax...

[2] https://insideevs.com/news/760863/tesla-hiring-humans-to-con...

neom 17 hours ago [-]
Good account to follow to track their progress; suffice it to say they're nearing/at the end of the beginning: https://x.com/reed // https://x.com/daylenyang/status/1953853807227523178
Joeri 13 hours ago [-]
UBI could easily become a poverty trap: enough to keep living, not enough to have a shot at becoming an earner, because you’re locked out of opportunities. I think in practice it is likely to turn out like “basic” in The Expanse, with people hoping to win a lottery to get a shot at having a real job and building a decent life for themselves.

If no UBI is installed there will be a hard crash while everyone figures out what it is that humans can do usefully, and then a new economic model of full employment gets established. If UBI is installed then this will happen more slowly with less pain, but it is possible for society to get stuck in a permanently worse situation.

Ultimately if AI really is about to automate as much as it is promised then what we really need is a model for post-capitalism, for post-scarcity economics, because a model based on scarcity is incapable of adapting to a reality of genuine abundance. So far nobody seems to have any clue of how to do such a thing. UBI as a concept still lives deeply in the Overton window bounded by capitalist scarcity thinking. (Not a call for communism btw, that is a train to nowhere as well because it also assumes scarcity at its root.)

What I fear is that we may get a future like The Diamond Age, where we have the technology to get rid of scarcity and have human flourishing, but we impose legal barriers that keep the rich rich and the poor poor. We saw this happen with digital copyright, where the technology exists for abundance, but we’ve imposed permanent worldwide legal scarcity barriers to protect revenue streams to megacorps.

immibis 5 hours ago [-]
That's way better than the present situation where they just die, though. It's at least a start.
aorloff 15 hours ago [-]
Every time someone casually throws out UBI, my mind goes to the question "who is paying taxes when some people are on UBI?"

Is there like a transition period where some people don't have to pay taxes and yet don't get UBI, and if so, why hasn't that come yet ? Why aren't the minimum tax thresholds going up if UBI could be right around the corner ?

juniperus 15 hours ago [-]
The taxes will be most burdensome for the wealthiest and most productive of institutions, which is generally why these arrangements collapse economies and nations. UBI is hard to implement because it incentivizes non-productive behavior and disincentivizes productive activity. This creates economic crisis; taxes are basically a smaller-scale version of this, and UBI is a more comprehensive wealth-redistribution scheme. The creation of a syndicate (in this case, the state) to steal from the productive to give to the non-productive is a return to how humanity functioned before the creation of state-like structures, when marauders and bandits used violence to steal from those who created anything. Eventually, the state arose to create arrangements and contracts to prevent theft, but later became the thief itself, leading to economic collapse and the recurring revolutionary cycle.

So, AI may certainly bring about UBI, but the corporations that are being milked by the state to provide wealth to the non-productive will begin to foment revolution along with those who find this arrangement unfair, and the productive activity of those especially productive individuals will be directed toward revolution instead of economic productivity. Companies have made nations many times before, and I'm sure it'll happen again.

grues-dinner 10 hours ago [-]
The problem is that "productive activity" is rather hard to define if there's so much "AI" (be it classical ML, LLM, ANI, AGI, ASI, whatever) around that nearly everything can be produced by nearly no one.

The destruction of the labour theory of value has been a goal of "tech" for a while, but if they achieve it, what's the plan then?

Assuming humans stay in control of the AIs, because otherwise all bets are off: in a case where a few fabulously wealthy (or at least "owning/controlling", since the idea of wealth starts to become fuzzy) industrialists control the productive capacity for everything from farming to rocketry, and there's no space for normal people to participate in production any more, how do you even denominate the value being "produced"? Who is it even for? What do they need to give in return? What can they give in return?

lotsoweiners 4 hours ago [-]
> Assuming humans stay in control of the AIs, because otherwise all bets are off: in a case where a few fabulously wealthy (or at least "owning/controlling", since the idea of wealth starts to become fuzzy) industrialists control the productive capacity for everything from farming to rocketry, and there's no space for normal people to participate in production any more

Why does the rest of humanity even have to participate in this? Just continue on the way things were before, without any super AI. Start new businesses that don’t use AI and hire humans to work there.

grues-dinner 2 hours ago [-]
Because with presumably tiny marginal costs of production, the AI owners can flood and/or buy out your human-powered economy.

You'd need a very united front and powerful incentives to prevent, say, anyone buying AI-farmed wheat when it's half the cost of human-farmed. If you don't prevent that, Team AI can trade wheat (and everything else) for human-economy money and then dominate there.

essnine 14 hours ago [-]
The assumption here that UBI "incentivizes non-productive behavior and disincentivizes productive activity" is the part that doesn't make sense. What do you think universal means? How does it disincentivize productive activity if it is provided to everyone regardless of their income/productivity/employment/whatever?
juniperus 13 hours ago [-]
Evolutionarily, people engage in productive activity in order to secure resources to ensure their survival and reproduction. When these necessary resources are gifted to a person, there is a lower chance that they will decide to take part in economically productive behavior.

You can say that because it is universal, it should level the playing field, just at a different starting point. But you are still creating a situation where even incredibly intelligent people will choose to pursue leisure over labor; in fact, the most intelligent people may be the ones most aware of the pointlessness of working if they can survive on UBI. Similarly, the most intelligent people will consider the arrangement unfair and unsustainable, and instead of devoting their intelligence toward economically productive ventures, they will devote their abilities toward dismantling the system. This is the groundwork of a revolution. The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate. If they see an arrangement where high-quality mates are being obtained by individuals who they deem to be receiving benefits that they cannot defend/protect adequately, such an arrangement will be dismantled. This evolutionary drive is hundreds of millions of years old. Primitive animals will take resources from others that they observe to be unable to defend their status.

So, overall, UBI will probably be implemented, and it will probably end in economic crisis, revolution, and the resumption of this cycle that has been playing out over and over for centuries.

LouisSayers 2 hours ago [-]
> When these necessary resources are gifted to a person, there is a lower chance that they will decide to take part in economically productive behavior.

Source?

Even if that's true though, who cares if AI and robots are doing the work?

What's so bad about allowing people leisure, time to do whatever they want? What are you afraid of?

essnine 7 hours ago [-]
There are two things bothering me here. The first bit, where you're talking about motivations and income driving it, seems either very reductive or to imply something that ought to be profoundly upsetting:

- that intelligent people will see that the work they do is pointless if they're paid enough to survive and care for themselves, and not see work as another source of income for better financial security

- that most intelligent people will see it as exploitation and then choose to focus on dismantling the system that levels the playing field

Which sort of doesn't add up. So there are intelligent people who are working right now because they need money and don't have it, while the other intelligent people who are working and employing other people are only doing it to make money and will rebel if they lose some of the money they make.

But then, why doesn't the latter group of intelligent people just stop working if they have enough money? Are they less/more/differently intelligent than the former group? Are we thinking about other, more narrow forms of intelligence when describing either?

Also

> The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate. If they see an arrangement where high-quality mates are being obtained by individuals who they deem to be receiving benefits that they cannot defend/protect adequately, such an arrangement will be dismantled. This evolutionary drive is hundreds of millions of years old.

I don't want to come off as mocking here - it's hard to take these points seriously. The whole point of civilization is to rise above these behaviours and establish a strong foundation for humanity as a whole. The end goal of social progress and the image of how society should be structured cannot be modeled on systems that existed in the past solely because those failure modes are familiar and we're fine with losing people as long as we know how our systems fail them. That evolutionary drive may be millions of years old, but industrial society has been around for a few centuries, and look at what it's done to the rest of the world.

> Primitive animals will take resources from others that they observe to be unable to defend their status.

Yeah, I don't know what you're getting at with this metaphor. If you're talking about predatory behaviour, we have plenty of that going around as things are right now. You don't think something like UBI will help more people "defend their status"?

> it will probably end in economic crisis, revolution, and the resumption of this cycle that has been playing out over and over for centuries

I don't think human civilization has ever been close to this massive or complex or dysfunctional in the past, so this sentence seems meaningless, but I'm no historian.

CER10TY 13 hours ago [-]
I guess the thinking goes like this: why start a business, get a higher-paying job, etc., if you're getting ~2k€/mo in UBI and can live off of that? Since more people will decide against starting a business or increasing their income, productive activity decreases.
essnine 7 hours ago [-]
I see more people starting businesses because they now have less risk, and more people not changing jobs just to get a pay hike. The sort of financial aid UBI would bring might even make people more productive on the whole, since people who are earning have spare income for quality of life, and people at financial risk are able to work without worrying half the day about paying rent and bills.

It's a bit of a dunk on people who see their position as employer/supervisor as a source of power because they can impose financial risk as punishment on people, which happens more often than any of us care to think, but isn't that a win? Or are we conceding that modern society is driven more by stick than carrot and we want it that way?

lotsoweiners 4 hours ago [-]
If everyone has 2k/mo then nobody has 2k/mo.
LouisSayers 2 hours ago [-]
That's like saying "money doesn't exist".

In a sense everybody does have "2k" a month, because we all have the same amount of time to do productive things and exchange with others.

ako 14 hours ago [-]
You also have to consider the alternative: if there’s no UBI, are you expecting millions to starve? This is a recipe for civil war; if you have a very large group of people unable to survive, you get social unrest. Either you spend the money on UBI or on police/military suppression to battle the unrest.
woile 15 hours ago [-]
There's another question to answer:

Who is working?

dongping 11 hours ago [-]
The robotaxi business model is the total opposite of scaling. At my previous employer we were solving the problem "block by block, city by city", and I can only assume that you are living in the right city/block that they are tackling.
griffzhowl 6 hours ago [-]
That just sounds like scaling slowly, rather than not scaling
OneMorePerson 17 hours ago [-]
Isn't it the case that companies are always competing and evolving, unless there's an immediately obvious ceiling to driverless tech?

We "made cars work" about 100 years ago, but they have been innovating on that design since then on comfort, efficiency, safety, etc. I doubt the very first version of self driving will have zero ways to improve (although eventually I suppose you would hit a ceiling).

visarga 17 hours ago [-]
> I think the most sensible answer would be something like UBI.

Having had the experience of living under a communist regime prior to 1989, I have zero trust in the state providing support while I am totally dependent and have no recourse. Instead I would rather rely on my own two hands, like my grandparents did.

I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.

magicalist 17 hours ago [-]
> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.

Unless your two hands are building murderbots, though, it doesn't matter what you're building if you can't grow or buy food.

I haven't personally seen how UBI could end up working viably, but I also don't see any other system working without much more massive societal changes than anyone is talking about.

Meanwhile, there are many, many people who are very invested in maintaining massive differentials between the richest and the poorest, and who will be working against even the most modest changes.

griffzhowl 6 hours ago [-]
> I also don't see any other system working without much more massive societal changes than anyone is talking about.

The other system is that the mass of people are coerced to work for tokens that buy them the right to food and to live in a house. i.e. the present system but potentially with more menial and arduous labour.

Hopefully we can think of something else

kannanvijayan 17 hours ago [-]
I'd argue against the entire perspective of evaluating every policy idea along one-dimensional modernist polemics put forward as "the least worst solution to all of human economy for all time".

Right now the communists in China are beating us at capitalism. I'm starting to find the entire analytical framework of using these ideologies ("communism", "capitalism") to evaluate _anything_ to be highly suspect, and maybe even one of the west's greatest mistakes in the last century.

> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.

I was a teenager back in the 90s. There was much talk then about the productivity boosts from computers, the internet, automation, and how it would enable people to have so much more free time.

Interesting thing is that the productivity gains happened. But the other side of that equation never really materialized.

Who knows, maybe it'll be different this time.

bee_rider 16 hours ago [-]
I’m not certain we don’t have free time, but I’m not sure how to test that. Is it possible that we just feel busier nowadays because we spend more time watching TV? Work hours haven’t dropped precipitously, but maybe people are spending more time in the office just screwing around.
leshow 15 hours ago [-]
It's the same here. Calling what the West has a "free-market capitalist" system is also a lie. At every level there is massive state intervention. Most discoveries come from publicly funded work at research universities, or from billions pushed into the defense sector, which developed all the technology we use today, from computers to the internet to everything in your phone. That's no more a free-market system than China is "communist".

I think the reality is just that governments use words and have an official ideology, but you have to ignore that and analyze their actions if you want to understand how they behave.

kannanvijayan 7 hours ago [-]
My thoughts on these ideologies lately have shifted to viewing them as "secular religions". There are many characteristics that line up with that perspective.

Both communist and capitalist purists tend to be enriched for atheists (speaking as an atheist myself). Maybe some of that is people who have fallen out with religion over superstitions and other primitivisms, and are looking to replace that with something else.

Like religions, the movements have their respective post-hoc anointed scriptural prophets: Marx for one and Smith for the other.. along with a host of lesser saints.

Like religions, they are very prescriptive and overarching and proclaim themselves to have a better connection with some greater, deeper underlying truth (in this case about human behaviour and how it organizes).

For analytical purposes there's probably still value in the underlying texts - a lot of Smith and Marx's observations about society and human behaviour are still very salient.

But these ideologies, the outgrowths from those early analytical works, seem utterly devoid of any value whatsoever. What is even the point of calling something capitalist or communist? It's a meaningless label.

These days I eschew that model entirely and try to keep to a more strict analytical understanding on a per-policy basis. Organized around certain principles, but eschewing ideology entirely. It just feels like a mental trap to do otherwise.

juniperus 15 hours ago [-]
not to mention that most corporations in the US are owned by the public through the stock market and the arrangement of the American pension scheme, and public ownership of the means of production is one of the core tenets of communism. Every country on Earth is socialist and has been socialist for well over a century. Once you consider not just state investment in research, but centralized credit, tax-funded public infrastructure, etc. well yeah, terms such as "capitalism" become used in a totally meaningless way by most people lol.
juniperus 15 hours ago [-]
You will still need energy and resources.
leshow 15 hours ago [-]
In your world where jobs become "optional" because a private company has decided to fire half their workforce, and the state also does not provide some kind of support, what do all the "optional" people do?
lotsoweiners 4 hours ago [-]
Murder more CEOs and then start working your way down the org chart? Blow up corporate headquarters, data centers, etc? Lots of ways to be productive.
socalgal2 12 hours ago [-]
Do you live in SF (the city, not the Bay Area as a whole) or West LA? I ask because in these areas you can stand on any city street and see several self-driving cars go by every few minutes.

It's irrelevant that they've had a few issues. They already work and people love them. It's clear they will eventually replace every uber/lyft driver, probably every taxi driver, and they'll likely replace every doordash/grubhub driver with vehicles designed to let smaller automated delivery carts go the last few blocks. They may also replace every truck driver. Together that's around 5 million jobs in the USA.

Once they're let on the freeways their usage will expand even faster.

inferiorhuman 12 hours ago [-]

  It's irrelevant that they've had a few issues.
The last Waymo I saw (a couple weeks ago) was stuck trying to make a right turn onto Market St. It was conveniently blocking the pedestrian crosswalk for a few light cycles before I went around it. The time before that, one got befuddled by a delivery truck and ended up blocking both lanes of 14th Street. Before Cruise imploded, they were way worse. I can't say that these self-driving cars have improved much since I moved out of the city a few years back.
einarfd 11 hours ago [-]
Driverless taxis are IMO the wrong tech to compare to. It’s a high-consequence, low-error-tolerance, real-time task where it’s really hard to undo errors.

There is a big category of tasks that isn’t like that but is still economically significant. Those are a much better fit for AI.

mofeien 13 hours ago [-]
> What makes you think that? Self driving cars [...]

AI is intentionally being developed to be able to make decisions in any domain humans work in. This is unlike any previous technology.

The more apt analogy is to other species. When was the last time there was something other than Homo sapiens that could carry on an interesting conversation with Homo sapiens? 40,000 years ago?

And this new thing has been in development for what? 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is.

klabb3 6 hours ago [-]
> we don't know where the ceiling is.

The ceiling for current AI, while not provably known, can reasonably be upper-bounded by aggregate human ability, since these methods are limited to patterns in the training data. The big surprise was how many and how sophisticated the patterns hiding in the training data (human-written text) turned out to be. This current wave of AI progress is fueled by training data and compute in "equal parts". Since compute is cheaper, they've invested in more compute, but fell short of scaling expectations since the training data remained similarly sized.

Reaching super-intelligence through training data is paradoxical, because if it were already known, it wouldn't be super-human. The other option is breaking out of the training-data enclosure by relying on other methods. That may sound exciting, but there's no major progress I'm aware of that points in that direction. It's a little like being back to square one, before this hype cycle started. The smartest people seem to be focused on transformers, due to boatloads of money from companies, or academia pushing them because of FOMO.

motorest 14 hours ago [-]
> What makes you think that? Self driving cars have had (...)

I think you're confusing your cherry-picked comparison with reality.

LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative activities such as illustration and authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.

Software engineering is being affected as well, and it requires far greater know-how, experience, and expertise to meet the hiring bar.

> And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than (...)

Yes, your tech job is also going to be decimated. It's not a matter of having PMs write code. It's an issue of your junior SDE, armed with an LLM, being quite able to clear your bug backlog in a few days while improving test-coverage metrics and refactoring code back from legacy status.

If a junior SDE can suddenly handle the workload that previously required a couple of mid-level and senior developers, why would a company keep 4 or 5 seasoned engineers around when an inexperienced one is already able to handle the workload?

That's where the jobs will vanish. Even if demand remains, it has dropped considerably, enough not to justify retaining so many people on a company's payroll.

And what are you going to do then? Drive an Uber?

taormina 14 hours ago [-]
> LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative activities such as illustration and authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.

I'd love a source to these claims. Many companies are claiming that they are able to lay off folks because of AI, but in fact, AI is just a scapegoat to counteract the reckless overhiring due to free money in the market over the last 5-10 years; investors are now demanding to see a real business plan and ROI. "We can eliminate this headcount due to the efficiency of our AI" is just a fancy way to make the stock price go up while cleaning up the useless folks.

People have ideas. There are substantially more ideas than people who can implement them. As with most technology, the reasonable expectation is that people are just going to want more done by the now tool-powered humans, not less.

motorest 14 hours ago [-]
> I'd love a source to these claims.

Have you been living under a rock?

You can start getting up to speed with how Amazon's CEO has already laid out the company's plan.

https://www.thecooldown.com/green-business/amazon-generative...

> (...) AI is just a scapegoat to counteract the reckless overhiring due to (...)

That is your personal moralist scapegoat, and one that you made up to feel better about how jobs are being eliminated because someone somewhere screwed up.

In the meantime, you fool yourself and pretend that sudden astronomic productivity gains have no impact on demand.

huimang 14 hours ago [-]
These supposed "productivity gains" are only touted by the ones selling the product, i.e. the ones who stand to benefit from adoption. There is no standard way to measure productivity since it's subjective. It's far more likely that companies will use whatever scapegoat they can to fire people with as little blowback as possible, especially since, as the other commenter noted, people were getting hired like crazy.

Each one of the roles you listed above is only passable with AI at a superficial glance. For example, anyone who actually reads literature other than self-help and pop culture books from airport kiosks knows that AI is terrible at longer prose. The output is inconsistent because current AI does not understand context, at all. And this is not getting into the service costs, the environmental costs, and the outright intellectual theft in order to make things like illustrations even passable.

motorest 12 hours ago [-]
> These supposed "productivity gains" are only touted by the ones selling the product (...)

I literally pasted an announcement from the CEO of a major corporation warning they are going to decimate their workforce due to the adoption of AI.

The CEO literally made the following announcement:

> "As we roll out more generative AI and agents, it should change the way our work is done," Jassy wrote. "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs."

This is not about selling a product. This is about how they are adopting AI to reduce headcount.

baconbrand 11 hours ago [-]
The CEO is marketing to the company’s shareholders. This is marketing. A CEO will say anything to sell the idea of their company to other people. Believe it or not, there is money to be made from increased share prices.
taormina 3 hours ago [-]
Congratulations for believing the marketing. He has about 2.46 trillion reasons to make this claim. In other news, water is wet and the sky is blue.
XenophileJKO 14 hours ago [-]
I think your assumption is probably a little backwards. It will be a senior SDE clearing the slate I think. Ones that embrace the technology and are able to accelerate their work. At that level of efficiency the cost is still way, way lower than it is for a larger team.

When it gets to the point that you don't need a senior engineer doing the work, you won't need a junior either.

motorest 14 hours ago [-]
> I think your assumption is probably a little backwards. It will be a senior SDE clearing the slate I think.

I don't think you understood the point I made.

My point was not about Jr vs Sr, let alone how a Jr is somehow more capable than a Sr.

My point was that these productivity gains aren't a factor of experience or seniority, but they do devalue the importance of seniority for performing specific tasks. Just crack open an LLM, feed in a few prompts, and done. Hell, junior developers no longer need to reach out to seniors to ask questions about any topic. Think about that for a second.

rpdillon 9 hours ago [-]
Just as an anecdote that might provide some context, this is not what I've observed. My observation is that senior engineers are vastly more effective at knowing how to employ and manage AI than junior engineers. Junior engineers are typically coming to the senior engineers to learn how to approach learning what the AI is good at and not good at because they themselves have trouble making those judgments.

I was working on a side project last night, and Gemini decided to inline the entire Crypto.js library in the file I was generating. And I knew it just needed a hashing function, so I had to tell it to just grab a hashing function and not inline all of Crypto.js. This is exactly the kind of thing that somebody that didn't know software engineering wouldn't be able to say, even as simple as it is. It made me realize I couldn't just hand this tool to my wife or my kids and allow them to create software because they wouldn't know to say that kind of thing to guide the AI towards success.
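
For illustration, "just grab a hashing function" looks roughly like this (a minimal sketch, assuming the npm crypto-js package; the specific hash, SHA-256 here, is my guess, and the actual code in my project differed):

  // Minimal sketch: import only the one hashing function the file needs,
  // instead of letting the model inline the whole Crypto.js library.
  // (Assumes the crypto-js npm package is installed.)
  import SHA256 from 'crypto-js/sha256';

  // Hash a string and take the hex digest, which was all the code needed.
  const digest: string = SHA256('some input').toString();
  console.log(digest);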

andrei_says_ 16 hours ago [-]
As someone who lives in LA, I don’t think self-driving cars existed at the time of the Rodney King LA riots and I am not aware of any other riots since.
StanislavPetrov 15 hours ago [-]
Let me be the first to welcome you out of your long slumber!
rafaelero 18 hours ago [-]
I feel like you are trapped in the first assessment of this problem. Yes, we are not there yet, but have you thought about the rate of improvement? Is that rate of improvement reliable? Fast? That's what matters, not where we are today.
badestrand 14 hours ago [-]
You could say that about any time in history. When the steam engine or the mechanical loom was invented, there were millions of people like you who predicted that mankind would be out of jobs soon, and guess what happened? There are still a lot of things to do in this world, and there will still be a lot to do (aka "jobs") for a loooong time.
rafaelero 7 hours ago [-]
Nothing in the rate of improvement of a steam engine suggested it would ever be able to drive a car or do the job of an attorney.
atleastoptimal 18 hours ago [-]
Everything anyone could say about bad AI driving could be said about bad human drivers. Nevertheless, Waymo has not had a single fatal accident despite many millions of passenger miles and is safer than human drivers.
fallous 17 hours ago [-]
Everything? How about legal liability for the car killing someone? Are all the self-driving vendors stepping up and accepting full legal liability for the outcomes of their non-deterministic software?
d1sxeyes 16 hours ago [-]
In the bluntest possible sense, who cares if we can make roads safer?

Solving liability in traffic collisions is basically a solved problem through the courts, and at least in the UK, liability is assigned in law to the vendor (more accurately, there’s a list of who’s responsible for stuff, I’m not certain if it’s possible to assume legal responsibility without being the vendor).

atleastoptimal 17 hours ago [-]
Thousands have died directly due to known defects in manufactured cars. Those companies (Ford, others) still are operating today.

Even if driverless cars killed more people than humans, they would see mass adoption eventually. However, they are subject to far higher scrutiny than human drivers, and even so they make fewer mistakes, avoid accidents more frequently, and can't get drunk, tired, angry, or distracted.

mejutoco 14 hours ago [-]
There is a fetish for technology that sometimes we are not aware of. On average there might be fewer accidents, but if specific accidents that were preventable now happen, people will sue. And who will take the blame? The day the company takes the blame is the day self-driving exists, IMO.
notyourav 16 hours ago [-]
A faulty brake pad or an engine doesn’t make decisions that might endanger people. Self-driving cars do. They might also get hacked pretty thoroughly.

For the same reason, I’d probably never buy a home robot with more capabilities than a vacuum cleaner.

atleastoptimal 15 hours ago [-]
Current non-self-driving cars on the road can be hacked

https://www.wired.com/story/kia-web-vulnerability-vehicle-ha...

But even if they can theoretically be hacked, so far Waymos are still safer and more reliable than human drivers. The biggest danger someone has riding in one is someone destroying it for vindictive reasons.

bloaf 17 hours ago [-]
I think it is important to remember that "decades" here means <20 years. Remember that in 2004 the task was considered so close to impossible that basically no one had a car that could be reliably controlled by a computer, let alone driven by a computer alone:

https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2004)

I also think that most job domains are not actually more nuanced or complex than driving, at least from a raw information perspective. Indeed, I would argue that driving is something like a worst-case scenario when it comes to tasks:

* It requires many different inputs, at high sampling rates, continuously (at the very least, video, sound, and car state)

* It requires loose adherence to laws in the sense that there are many scenarios where the safest and most "human" thing to do is technically illegal.

* It requires understanding of driving culture to avoid making decisions that confuse/disorient/anger other drivers, and anticipating other drivers' intents (although this can be somewhat faked with sufficiently fast reaction times)

* It must function in a wide range of environments: there is no "standard" environment

If we compare driving to other widespread-but-low-wage jobs (e.g. food prep, receptionists, cleaners) there are generally far more relaxed requirements:

* Rules may be unbreakable as opposed to situational, e.g. the cook time for burgers is always the same.

* Input requirements may be far lower. e.g. an AI receptionist could likely function with audio and a barcode scanner.

* Cultural cues/expectations drive fewer behaviors. e.g. an AI janitor just needs to achieve a defined level of cleanliness, not gauge people's intent in real-time.

* Operating environments are more standardized. All these jobs operate indoors with decent lighting.

baconbrand 11 hours ago [-]
I’m pretty sure you could generate a similar list for any human job.

It’s strange to me watching the collective meltdown over AI/jobs when AI doesn’t do jobs, it does tasks.

selimnairb 11 hours ago [-]
> A human driver is still far more adaptive and requires a lot less training than AI

I get what you are saying, but humans need 16 years of training to begin driving. I wouldn’t call that not a lot.

selimnairb 11 hours ago [-]
And the problem for Capitalists and other anti-humanists is that this doesn’t scale. Their hope with AI, I think, is that once they train one AI for a task, it can be trivially replicated, which scales much better than humans.
perryizgr8 7 hours ago [-]
> They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots

All of this is very common for human-driven cars too.

andrepd 13 hours ago [-]
To be fair, self-driving cars don't need to be perfect, zero-casualty modes of transportation; they just need to be better than human drivers. Since car crashes kill 2 million people each year (and maim another 2 or 3 million), this is a low bar to clear...

Of course, the actual answer is that rail and cycling infrastructure are much more efficient than cars in any moderately dense region. But that would mean funding boring regular companies focused on providing a product or service for adequate profit, instead of exciting AI web3 high tech unicorn startups.

CamperBob2 18 hours ago [-]
Self-driving cars are a political problem, not a technical problem. A functioning government would put everything from automation-friendly signaling standards to battery-swapping facilities into place.

We humans used to do that sort of thing, but not anymore, so... bring on the AI. It won't work as well as it might otherwise be able to, but it'll probably kill fewer humans on the road at the end of the day. A low bar to clear.

lightbritefight 17 hours ago [-]
Self-driving car companies don't want a unified signalling platform or other "open for all" infrastructure updates. They want to own self-driving, to lock you into a subscription on their platform.

Literally the only open-source self-driving platform, across trillion-, billion-, and million-dollar companies, is comma.ai, founded by Geohot. That's it. It's actually very good, and I bet they would welcome these upgrades, but that would be a consortium of one underdog pushing for them.

Pelam 17 hours ago [-]
I.e. a political problem, as the grandparent said.

Corporations generally follow a narrow, somewhat predictable path toward some local maximum of their own value extraction. Since the world is not zero-sum, this produces value for others too.

Where politics (should) enter the picture is where we can somehow see a more global maximum (for all citizens) and try to drive towards it through political, hopefully democratic, means (laws, standards, education, investment, infrastructure, etc.)

CamperBob2 15 hours ago [-]
Yeah, that must be it. It's a conspiracy.
StanislavPetrov 15 hours ago [-]
This is all happening right out in the open.
ulfw 15 hours ago [-]
Why would politicians want to:

- destroy voting population's jobs

- put power in the hand of 1-2 tech companies

- clog streets with more cars rather than build trams, trains, maglevs, you name it

StanislavPetrov 15 hours ago [-]
Because the primary goal of the vast majority of politicians is to collect life-changing, generational wealth by any means necessary.
GoatInGrey 15 hours ago [-]
Snarky but serious question: How do we know that this wave will disrupt labor at all? Every time I dig into a story of X employees replaced by "AI", it's always in a company with shrinking revenues. Furthermore, all of the high-value use cases involve very intense supervision of the models.

There's been a dream of unsupervised models going hog wild on codebases for the last three years. Yet even the latest and greatest Claude models can't be trusted to write a new REST endpoint exposing 5 CRUD methods without fucking something up. No, it requires not only human supervision, but it also requires human expertise to validate and correct.

I dunno. I feel like this language grossly exaggerates the capability of LLMs to paint a picture of them reliably fulfilling roles end-to-end instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.

closewith 15 hours ago [-]
> instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.

This alone is enough to completely reorganise the labour market, as it describes an enormous number of roles.

badestrand 14 hours ago [-]
How many people could be replaced by a proper CMS or an Excel sheet right now already? Probably tens of millions, and yet they are at their desks working away.

It's easy to sit in a café and ponder how all jobs will be gone soon, but in practice people aren't as easily replaceable.

7952 11 hours ago [-]
For many businesses the situation is that technology has dramatically underperformed at even the most basic tasks. Millions of people are working around things like defective ERP systems. A modest improvement in productivity in building basic apps could push us past a threshold. It makes it possible for millions more people to construct crazy Excel formulas. It makes it possible to add a UI to a Python script where before there was only a command line. And one piece of magic that works reliably can change an entire process. It lets you make a giant leap rather than an incremental change.

If we could make line of business crud apps work reliably, have usable document/email search, and have functional ERP that would dissolve millions of jobs.

pjmlp 14 hours ago [-]
I can tell that those whose jobs depended on providing image assets or translations for a CMS are no longer relevant to their employers.
PleasureBot 9 hours ago [-]
A lot of jobs really only exist to increase headcount for some mid/high level manager's fiefdom. LLMs are incapable of replacing those roles as the primary value of those roles is to count towards the number of employees in their sector of the organization.
closewith 9 hours ago [-]
Unless AI spend overtakes headcount as the vanity metric du jour, which it already has.
krainboltgreene 14 hours ago [-]
I promise you that your understanding of those roles is wrong.
drooby 18 hours ago [-]
Carpenters, landscapers, roofers, plumbers, electricians, elderly care, nurses, cooks, servers, bakers, musicians, actors, artists...

Those jobs are probably still a couple of decades or more away from displacement, some possibly forever. And we will need them in higher numbers... and perhaps it's ironic, because these are some of the oldest professions.

Everything we do is in service of paying for our housing, transportation, food, healthcare, and some fun money.

Most goes to housing, healthcare, and transportation.

Healthcare costs may come down some with advancements in AI. R&D will be cheaper. Knowledge will be cheaper and more accessible.

But what people care about, what people have always cared about, remains in professions that are as old as time and, I don't see them fully replaceable by AI just yet - enhanced, yes, but not replaced.

Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.

Or perhaps in the future everyone will work in finance. Everyone's a corporation.

Ramble ramble ramble

nlawalker 17 hours ago [-]
> Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.

I think it's going to be the other way around. It's looking like automation of dynamic physical capability is going to be the very last thing we figure out; what we're going to get first is teams of lower-skilled human workers directed largely by jobsite AI. By the time the robots get there, they're not going to need a human watching them.

danielbln 13 hours ago [-]
Looking at the advancements in low-cost, flexible robotics, I'm not sure I share that sentiment. Plus the LLM craze is fueling generalist advancement in robotics as well. I'd say we'll see physical labor displacement within a decade, tops.
SirHumphrey 12 hours ago [-]
Kinematics is deceptively hard and, at least evolutionarily, took a lot longer to develop than language. Low-wage physical labor seems easy only because humans are naturally very good at it, and that took millions of years to develop.

The number of edge cases when you are dealing with the physical world is several orders of magnitude higher than when dealing with text only, and the spatial reasoning capabilities of the current crop of MLLMs are not nearly as good as required. And this doesn't even take into account that now you are dealing with hardware, and hardware is expensive. Expensive enough that even on manufacturing lines (a more predictable environment than, say, landscaping) automation sometimes doesn't make economic sense.

citizenpaul 14 hours ago [-]
I'm reminded of something I read years ago that said jobs are now either above or below the API. I think now it's that jobs will be above or below the AI.
ares623 18 hours ago [-]
Well, when I become unemployable I will start upskilling to be an electrician. And so will hundreds of thousands like me.

That will do wonders for salaries, I think, and everyone will be better off.

drivebyhooting 15 hours ago [-]
Those jobs don’t pay particularly well today, and many have poor working conditions that strain the body.

Imagine what they’ll be like with an influx of additional laborers.

xpe 16 hours ago [-]
I would be cautious to avoid any narrative anchoring on “old versus new” professions. I would seek out other ways of thinking about it.

For example, I predict humans will maintain competitive advantage in areas where the human body excels due to its shape, capabilities, or energy efficiency.

000ooo000 18 hours ago [-]
What this delusion seems to turn a blind eye to is that a good chunk of the population is already in those roles; what happens when the supply of workers in those roles far exceeds demand, in a relatively short time? Carpenters suddenly abundant, carpenter wages drop, carpenters struggling to live, carpenters forced to tighten spending, carpenters deciding children aren't affordable... now extrapolate that across all of the impacted roles and industries. No doubt someone is already typing "carpenters can retrain too!" OK, so they're back to entry-level wages (if anything) for 5+ years? Same story. And retrain to what?

At some point an equilibrium will be reached but there is no guarantee it will be a healthy situation or a smooth ride. This optimism about AI and the rosy world that is just around the corner is incredibly naive.

simgt 13 hours ago [-]
It's naive, and it also ignores that automation is simply the replacement of human labor with capital. Capital captures more of the value, and workers get less overall. Unless we end up in some mild socialist utopia where basic needs are provided and corps are all co-ops, but that's not the trend.
xpe 16 hours ago [-]
There’s no guarantee of an equilibrium!
ozim 14 hours ago [-]
I just have to see how you get, let's say, 100k copywriters retrained as carpenters.

Do you also force them to move to places where there are fewer carpenters?

ajmurmann 7 hours ago [-]
That healthcare jobs will be safe is nice on the surface, but it also means that while other jobs become more scarce, the cost of healthcare will continue to go up.
idiotsecant 18 hours ago [-]
In your example i think it's a great deal more likely that the Uber driver is paid a tiny stipend to supervise a squad of gardening androids owned at substantial expense by Amazon Yard.
Ekaros 14 hours ago [-]
Why would anyone be on the field? Why not just have a few drones flying there, monitoring the whole operation remotely, and have one person monitor many sites at the same time, likely from the cheapest possible region?
OneMorePerson 16 hours ago [-]
Far from an expert on this topic, but what differentiates AI from other non-physical efficiency tools? (I'm actually asking, not contesting.)

Won't companies always want to compete with one another, so that simply using AI won't be enough? We will always want better and better software, more features, etc., so that race will never end until we get an AI fully capable of managing all parts (100%) of the development process (which we don't seem to be close to yet).

From Excel to AutoCAD, a lot of tools that were expected to decrease the amount of work ended up actually increasing it, due to new capabilities and the constant demand for innovation. I suppose the difference is whether we think AI will merely continue to get really good, or become SO good that it is plug-and-play and completely replaces people.

xpe 16 hours ago [-]
> what differentiates AI from other non-physical efficiency tools?

At some point: (1) general intelligence; i.e. adaptivity; (2) self replication; (3) self improvement.

amanaplanacanal 13 hours ago [-]
We don't have any more idea how to get to 1, 2, or 3, than we did 50 years ago. LLMs are cool, but they seem unlikely to do any of those things.
xpe 10 hours ago [-]
I encourage everyone to not claim “X seems unlikely” when it comes to high impact risks. Such a thinking pattern often leads to pruning one’s decision tree way too soon. To do well, we need to plan over an uncertain future that has many weird and unfamiliar scenarios.
layer8 9 hours ago [-]
We already fail to plan for a lot of high-impact things that are exceedingly likely. Maybe we should tackle those first.
xpe 4 hours ago [-]
I am so tired of people acting like planning for an uncertain world is a zero sum game, decided by one central actor in a single pipeline execution model. I’ll unpack this below.

The argument above (or some version of it) gets repeated over and over, but it is deeply flawed for various reasons.

The argument implies that “we” is a single agent that must do some set of things before other things. In the real world, different collections of people can work on different projects simultaneously in various orderings.

This is very different than optimizing an instruction pipeline for a single core microprocessor. In the real world, different kinds of tasks operate on very different timescales.

As an example, think about how change happens in society. Should we only talk about one problem at a time? Of course not. Why? The pipeline to solving problems is long and uncertain so you have to parallelize. Raising awareness of an issue can be relatively slow. Do you know what is even slower? Trying to reframe an issue in a way that gets into people’s brains and language patterns. Once a conceptual model exists and people pay attention, then building a movement among “early adopters” has a fighting chance. If that goes well, political influence might follow.

layer8 3 hours ago [-]
I was hinting more at the fact that if we fail to plan for the obvious stuff, what makes you think we'll be better at planning for the more obscure possibilities? The former should be much easier, but since we fail at it, we should first concentrate on getting better at that.
OneMorePerson 15 hours ago [-]
Yeah I agree, it's not about where it's at now, but whether where we are now leads to something with general intelligence and self improvement ability. I don't quite see that happening with the curve it's on, but again what the heck do I know.
xpe 10 hours ago [-]
What do you mean about the curve not leading to general intelligence? Even if transformer architectures by themselves don’t get there, there are multifarious other techniques, including hybrids.

As long as (1) there are incentives for controlling ever increasing intelligence; (2) the laws of physics don’t block us; and (3) enough people/orgs have the motivation and means, some people/orgs are going to press forward. This just becomes a matter of time and probability. In general, I do not bet against human ingenuity, but I often bet against human wisdom.

In my view, shared by many others, it would be smarter for the whole world to slow down AI capabilities advancement until we have very high certainty that pressing ahead is worth the risk.

marstall 11 hours ago [-]
Every software company I've ever worked with has an endless backlog of features it wants/needs to implement. Maybe AI just lets them move through these features more quickly?

I mean, most startups fail. And in software startups, the blame for that is usually at least shared by "software wasn't good enough". So that $20 million seed investment is still going to go into "software development", i.e. programmer salaries. They will be using the higher-level language of AI much of the time, and be 2-5 times more efficient - but will it be enough? No. Most will still fail.

xpe 16 hours ago [-]
Companies don’t always compete on capability or quality. Sometimes they compete on efficiency. Or sometimes they carve up the market in different ways.
OneMorePerson 15 hours ago [-]
Sometimes, but with technology related companies I rarely see that. I've really only seen it in industries that are very straightforward, like producing building materials or something. Do you have any examples?
xpe 3 hours ago [-]
Utilities. Low cost retail. Fast food.

Amazon. Walmart. Efficiency is arguably their key competitive advantage.

This matters regarding AI systems because a lot of customers may not want to pay extra for the best models! For a lot of companies, serving a good enough model efficiently is a competitive advantage.

nine_k 16 hours ago [-]
> And, because it is easier to retrain humans than build machines for those jobs, we wound up with more and better jobs.

I think it did not work like that.

Automatic looms displaced large numbers of weavers, skilled professionals, who did not immediately find jobs tending dozens of mechanical looms. (Mr Ludd was one of these displaced professionals.)

Various agricultural machines and chemical products displaced colossal numbers of country people, who had to go to cities looking for industrial jobs; US agriculture employed 50% of the workforce in 1880 and only 10% in 1930.

The advent of the internet displaced many in the media industry, from high-caliber journalists to those who worked at classified-ads newspapers.

All these disruptions created temporary crises, because there was no industry that was ready to immediately employ these people.

marstall 11 hours ago [-]
Temporary - that's the key. People were able to move to the cities and get factory and office jobs, and over time were much better off. I can complain about the socially alienated condition I'm in as an office worker, but I would NEVER want to do farm work - cold/sun, aching back, zero benefits, low pay, risk of crop failure, a whole other kind of isolation, etc. etc.
aurareturn 19 hours ago [-]

  This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
Assuming AI doesn't get better than humans at everything, humans will be supervising and directing AIs.
Refreeze5224 18 hours ago [-]
That sounds like a job for a very small number of people. Where will everyone else work?
aurareturn 18 hours ago [-]
More companies. See my post here:

https://news.ycombinator.com/reply?id=44919671&goto=item%3Fi...

mindwok 15 hours ago [-]
This is the optimistic take and definitely possible, but not guaranteed or even likely. Markets tend to consolidate into monopolies (or close to it) over time. Unless we are creating new markets at a rapid rate, there isn’t necessarily room for those other 900 engineers to contribute.
marstall 10 hours ago [-]
8 billion people. only, what, 1 billion are in the middle class? Sounds like we need to be creating new markets at a rapid rate to me!
grokgrok 18 hours ago [-]
Wherever the AI tells them to
LPisGood 18 hours ago [-]
Why do they have to work?
dkersten 15 hours ago [-]
Because the people with the money aren’t going to just give it to everyone else. We already see the richest people hoard their money and still be unsatisfied with how much they have. We already see productivity gains not transfer any benefit to the majority of people.
closewith 15 hours ago [-]
There is an old and reliable solution to this problem, the gibbet.
dkersten 14 hours ago [-]
Yes. However people are unwilling to take this approach unless things get really really bad. Even then, the powerful tend to have such strong control that people are afraid to act out of fear of reprisal.

We’ve also been gaslit into believing that it’s not a good approach, that peaceful protests are more civilised (even though they rarely cause anything meaningful to actually change).

forgetfulness 18 hours ago [-]
Because otherwise you'd have to convince AI-owners and select professionals to let go of their wealth to give a comfortable and fulfilling life of leisure to the unemployed.

More likely it will look like the current welfare schemes of many countries, now add mass boredom leading to unrest.

Sam Altman has expressed a preference for paying people in vouchers for using his chatbots to kill time: https://basicincomecanada.org/openais-sam-altman-has-a-new-i...

xpe 16 hours ago [-]
> Because otherwise you'd have to convince AI-owners and select professionals to let go of their wealth to give a comfortable and fulfilling life of leisure to the unemployed.

Not necessarily. Such forces could be outvoted or outmaneuvered.

> More likely it will look like the current welfare schemes of many countries...

Maybe, maybe not. It might take the form of UBI or some other form that we haven’t seen in practice.

> now add mass boredom leading to unrest.

So many assumptions. There is no need to just assume any particular distribution of boredom across the future population of the world. Making predictions about social unrest is even more complicated.

Deep Utopia (Bostrom) is an excellent read that extensively discusses various options if things go well.

jplusequalt 2 minutes ago [-]
>So many assumptions.

Then a few words later ...

>Deep Utopia (Bostrom) is an excellent read that extensively discusses various options if things go well

Oh, the irony

forgetfulness 5 hours ago [-]
> Not necessarily. Such forces could be outvoted or out maneuvered

Could.

> So many assumptions. There is no need to just assume any particular distribution of boredom across the future population of the world. Making predictions about social unrest is even more complicated

I’m assuming that previous outcomes predict future failures, because the forces driving these changes are of our societies, and not a hypothetical, assumed new society.

In this world, ownership, actual, legal ownership, is a far stronger and fundamental right than any social right to your well-being.

You would have to change that, which is a utopian project whose success has been assumed in the past, that a dialectical contradiction of the forces of social classes would lead to the replacement of this framework.

It is indeed very complicated, but you know what’s even more complicated? Utopian projects.

Sorry, but I see it as far more likely that the plebes will be told to kick rocks and to ask the bots to generate art for them when they ask for money for art supplies on top of their cup-noodle money.

lyu07282 18 hours ago [-]
> mass boredom leading to unrest

we must keep our peasants busy or they unrest due to boredom!

forgetfulness 5 hours ago [-]
Well in Sam’s ideal world you’ll be using bots to keep yourself distracted.

You would like to learn to play the guitar? Sorry, that kind of money didn’t pass in the budget bill, but how about you ask the bot to create music for you?

Elites also get something way better than keeping people busy for distraction… they get mass, targeted manipulation and surveillance to make sure you act within the borders of safety.

You know what job will surely survive? Cops. There’ll always be the nightstick to keep people in line.

jmathai 19 hours ago [-]
I’m not sure if that’s meant to be reassuring or not.

It’s hard for me to imagine that AI won’t be as good or better than me at most things I do. It’s quite a sobering feeling.

xpe 16 hours ago [-]
More people need to feel this. Too many people deny even the possibility, not out of logic, but out of ignorance or subconscious factors such as fear of irrelevance.
antirez 12 hours ago [-]
I too believe that a mostly autonomous work world would be something we could handle well, assuming the leadership was composed of smart folks making the right decisions, without being too exposed to external powers that form an impossible-to-win force (large companies and interests). The problem is if we mix what could happen (not clear when, right now) with the current weak leadership across the world.
mips_avatar 18 hours ago [-]
We have to also choose to build technology that empowers people. Empowering technologies don't just pop into existence, they're created by people who care about empowering people.
yomismoaqui 19 hours ago [-]
Don't worry about the political leaders, if a sizeable amount of people lose their jobs they will surely ask GPT-10 how to build a guillotine.
HDThoreaun 18 hours ago [-]
The french revolution did not go well for the average french person. Not sure guillotines are the solution we need.
rectang 17 hours ago [-]
Here in the US, we have been getting a visceral lesson about human willingness to sacrifice your own interests so long as you’re sticking it to The Enemy.

It doesn’t matter if the revolution is bad for the commoners — they will support it anyway if the aristocracy is hateful enough.

skinnymuch 18 hours ago [-]
How did it not go well for the avg person?

The status quo does not go well for the avg person.

__MatrixMan__ 18 hours ago [-]
Most of the people who died in The Terror were commoners who had merely not been sympathetic enough to the revolution. And then that sloppiness led to reactionary violence, and there was a lot of back and forth until Napoleon took power and was pretty much a king in all but heritage.

Hopefully we can be a bit more precise this time around.

chrisco255 18 hours ago [-]
You should read French history more closely, they went through hell and changed governments at least 5 or 6 times in the 1800s.
dragonwriter 14 hours ago [-]
> How did it not go well for the avg person?

You might want to look at the etymology of the word “terrorism” (despite the most popular current use, it wasn't coined for non-state violence) and what class suffered the most in terms of both judicial and non-judicial violent deaths during the revolutionary period.

MemesAndBooze 14 hours ago [-]
The French revolution was instigated by a group of shady people, far more dangerous and vile than the aristocracy they were fighting.
bambax 13 hours ago [-]
One way to think about AI and jobs is Uber/Google Maps. You used to have to know a lot about a city to be a taxi driver; then suddenly, with Google Maps, you don't. So in effect, technology lowered the requirements or training needed to become a taxi driver. More people can do it, not fewer (although incumbents may be unhappy about this).

AI is a lot like this. In coding for instance, you still need to have some sense of good systems design, etc. and know what you want to build in concrete terms, but you don't need to learn the specific syntax of a given language in detail.

Yet if you don't know anything about IT, don't know what you want to build or what you could need, or what's possible, then it's unlikely AI can help you.

brap 13 hours ago [-]
Even with Google Maps, we still need human drivers because current AI systems aren’t so great at driving and/or are too expensive to be widely adopted at this point. Once AI figures out driving too, what do we need the drivers for?

And I think that's the point he was making: it's hard to imagine any task where humans are still required once AI can do it better and cheaper. So I don't think the Uber scenario is realistic.

I think the only value humans can provide in that future is “the human factor”: knowing that something is done by an actual human and not a machine can be valuable.

People want to watch humans playing chess, even though AI is better at it. They want to consume art made by humans. They want a human therapist or doctor, even if they heavily rely on AI for the technical stuff. We want the perspective of other humans even if they aren’t as smart as AI. We want someone that “gets” us, that experiences life the same way we do.

In the future, more jobs might revolve around that, and in industries where previously we didn’t even consider it. I think work is going to be mostly about engaging with each other (even more meetings!)

The problem is, in a world that is that increasingly remote, how do you actually know it’s a human on the other end? I think this is something we’ll need to solve, and it’s going to be hard with AI that’s able to imitate humans perfectly.

marstall 11 hours ago [-]
The spinning jenny put seamstresses out of work. But the history of automation is the history of exponentially expanding the workforce and population.

8 billion people wake up every morning determined to spend the whole day working to improve their lives. we're gonna be ok.

yaur 14 hours ago [-]
I believe that historically we have solved this problem by creating gigantic armies and then, via a world war, killing off millions of the people who couldn't really adapt to the new order.
nikolayasdf123 18 hours ago [-]
> what do displaced humans transition to?

Go to any war-torn country or collapsed empire (Soviet). I have seen both myself, having grown up in them: you get desperation, people giving up, alcohol (the famous "X"-cross of birth rates dropping and deaths rising), drugs, crime, corruption/warlording. Rural communities are hit first and totally vanish, then small-tier cities vanish, then mid-tier; only the largest hubs remain. Loss of science, culture, and education. People are just gone. Only the ruins of whatever last shelters they had remain, not even their prime-time architecture. You can drive hundreds or thousands of kms across these ruins of what was once a flourishing culture. Years ago you would find one old person still living there; these days not a single human is left. This is what is coming.

marstall 10 hours ago [-]
that was because the economy was controlled/corrupt and not allowed to flourish (and create job-creating technologies like the internet and AI).
monknomo 7 hours ago [-]
I'm puzzled how AI is supposed to be a job creating technology. It is supposed to either wholesale replace jobs, or make workers so efficient that fewer of them are required. This is supposed to make digital and intellectually produced goods cheaper (although, given reproduction is free, the goods themselves are already pretty cheap).

To me it looks like we'll see well paying jobs decrease, digital services get cheaper, food+housing stay the same, and presumably as displaced workers do what they need to do physical service jobs will get more crowded and pay worse, so physical services will get cheaper. It is unclear whether there will be a net benefit to society.

Where do the jobs come from?

marstall 3 hours ago [-]
in the long term: simply that from the spinning jenny on, the history of automation is the history of exponentially expanding the workforce and population. when products are cheaper, demand increases, new populations enter the market and create demand for a higher class of goods and services - which sustains/grows employment.

in the short term: there is a hiring boom within the AI and related industries.

kace91 14 hours ago [-]
>But as its capabilities improve, what do displaced humans transition to?

IF there is intellectual/office work that remains complex enough not to be tackled by AI, we compete for it. Manual labor takes the rest.

Perhaps that's the shift we'll see: nowadays the guy piling up bricks makes a tenth of the architect's salary; that relation might invert.

And the indirect effects of a society that values intellectual work less are really scary if you start to explore the chain of cause and effect.

ACCount37 14 hours ago [-]
Have you noticed that there are a lot of companies now that are trying to build advanced AI-driven robots? This is not a coincidence.
azan_ 12 hours ago [-]
The relation won't invert, because it's very easy and quick to train a guy to pile up bricks, while training an architect is slow and hard. If low-skilled jobs pay much better than high-skilled ones, people will just change jobs.
kace91 10 hours ago [-]
That’s only true as long as the technical difficulties aren’t covered by tech.

Think of a world where software engineering itself is handled relatively well by the LLM, and the job of the engineer becomes just collecting business requirements and checking that they're correctly addressed.

In that world, the limit on scarcity might lie less in the difficulty of training and more in the willingness to bend your back in the sun for hours versus comfortably writing prompts in an air-conditioned room.

azan_ 10 hours ago [-]
Right now there are enough people willing to bend their backs in the sun for hours that their salaries are much lower than those of engineers. Do you think the supply of such people will somehow drop when wages are higher and employment opportunities in office jobs are much scarcer? I highly doubt it.
kace91 10 hours ago [-]
My argument is not that those people’s salaries will go up until overtaking the engineers’.

It's the opposite: that the value of office/intellectual work will tank, while manual work remains stable. Lower barrier of entry for intellectual work, if a position even needs to be covered, and work conditions much more comfortable.

pzo 10 hours ago [-]
I think UBI can only buy some time but won't solve the problem. We need fast improvement in AI robots that can be used for automation on a mass scale: construction, farming, maybe even cooking and food processing.

Right now AI is mostly focused on automating the top levels of Maslow's hierarchy of needs rather than the bottom physiological needs. Once things like shelter (housing), food, and utilities (electricity, water, internet) are dirt cheap, UBI is less needed.

Matumio 9 hours ago [-]
Those displaced workers need an income first, job second. What they were producing is still getting done. This means we have gained freedom to choose what else is worth doing. The immediate problem is the lack of income. There is no lack of useful work to do, it's just that most of it doesn't pay well.
wouldbecouldbe 11 hours ago [-]
Yeah, but the opening up of new kinds of jobs has not always been instant. It can take decades, and was, for instance, one of the reasons for the French Revolution. The internet has already created a huge number of monopolies and a lot of wealth concentration. AI seems likely to push this further.
visarga 17 hours ago [-]
> displace humans ...

AI can displace human work but not human accountability. It has no skin and faces no consequences.

> can be trained to the new job opportunities more easily ...

Are we talking about AI that always needs trainers to fix its prompts and training sets? How are we going to train AI once we lose those skills and get rid of humans?

> what do displaced humans transition to?

Humans with all powerful AI in their pockets... what could they do if they lose their jobs?

9dev 15 hours ago [-]
> ask that question to all the companies laying off junior folks in favor of LLMs right now. They are gleefully sawing off the branch they’re sitting on.

> Humans with all powerful AI in their pockets... what could they do if they lose their jobs?

At which point did AI become a free commodity in your scenario?

azan_ 12 hours ago [-]
> AI can displace human work but not human accountability. It has no skin and faces no consequences.

Let's assume that we have amazing AI and robotics, better than humans at everything. If you could choose between robosurgery (completely automatic) with 1% mortality for $5,000, versus surgery performed by a human with 10% mortality and a $50,000 price tag, would you really choose the human just because you can sue him? I wouldn't. I don't think anyone thinking rationally would.

DrewADesign 14 hours ago [-]
> AI can displace human work but not human accountability. It has no skin and faces no consequences.

We've got a way to go to get there in many instances. So far I've seen people blame AI companies for model output, blame individuals for not knowing that the product sold to them as a magic answer-giving machine was wrong, and blame other authorities in those situations (e.g. managers, parents, school administrators and teachers) for letting AI be used at all. From my vantage point, people seem to be using it as a tool to insulate themselves from accountability.

ACCount37 14 hours ago [-]
Is the ability to burn someone at the stake for making a mistake truly vital to you?

If not, then what's the advantage of "having skin"? Sure, you can't flog an AI. But AI doesn't need to be threatened with flogging to perform at the peak of its abilities. A well-designed AI always performs at the peak of its abilities - and if that isn't enough, you train it until it does.

ip26 16 hours ago [-]
For the moment, perhaps it could be jobs that LLMs can’t be trained on. New jobs, niche jobs, secret or undocumented jobs…

It’s a common point now that LLMs don’t seem to be able to apply knowledge about one thing to how a different, unfamiliar thing works. Maybe that will wind up being our edge, for a time.

d2veronica 18 hours ago [-]
During the Industrial Revolution, many who made a living by the work of their hands lost their jobs, because there were machines and factories to do their work. Then new jobs were created in factories, and then many of those jobs were replaced by robots.

Somehow many idiotic white collar jobs have been created over the years. How many web applications and websites are actually needed? When I was growing up, the primary sources of knowledge were teachers, encyclopedias, and dictionaries, and those covered a lot. For the most part, we’ve been inventing problems to solve and wasting a tremendous amount of resources.

Some wrote malware or hacked something in an attempt to keep this in check, but harming and destroying just mean more resources spent on repairing and rebuilding, and real people can be hurt.

At some point in coming years many white collar workers will lose their jobs again, and there will be too many unemployed because not enough blue collar jobs will be available.

There won’t be some big wealth redistribution until AI convinces people to do that.

The only answer is to create more nonsense jobs, like AI massage therapist and robot dog walker.

bsder 14 hours ago [-]
> we wound up with more and better jobs.

You will have to back that statement up because this is not at all obvious to me.

If I look at the top US employers in, say, 1970 vs 2020: the companies that dominated in 1970 were noted for hard blue-collar labor jobs that paid enough to keep a single-earner family significantly above minimum wage and the poverty line. The companies that dominate in 2020 are noted for being some of the shittiest employers, with some of the lowest pay (fairly close to minimum wage) and absolutely the worst working conditions.

Sure, you tend not to get horribly maimed in 2020 vs 1970. That's about the only improvement.

moffkalast 12 hours ago [-]
This was already a problem back then, Nixon was about to introduce UBI in the late 60s and then the admin decided that having people work pointless jobs keeps them better occupied, and the rest of the world followed suit.

There will be new jobs and they will be completely meaningless busywork, people performing nothing of substance while being compensated for it. It's our way of doing UBI and we've been doing it for 50 years already.

Obligatory https://wtfhappenedin1971.com

nikolayasdf123 18 hours ago [-]
> what do displaced humans transition to?

We assume there must be something to transition to. Very well: there can be nothing.

We assume people will transition. Very well: they may not transition at all and may "disappear" en masse (the same effect as a war or an empire collapse).

kbrkbr 12 hours ago [-]
Here is another perspective:

> In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs

That may well be why these technologies were ultimately successful. Think of millions and millions being cast out.

They won't just go away. And they will probably not go down without a fight. "Don't buy AI-made, brother!", "Burn those effing machines!" It's far from unheard of in history.

Also: who will buy if no one has money anymore? What will the state do when tax income goes down while social welfare and policing costs go up?

There are other scenarios, too: everybody gets most stuff for free, because machines and AIs do most of the work. Working communism for the lower classes, while the super-rich stay super-rich (like in real existing socialism). I don't think that is a good scenario either; in the long run it will make humanity lazy and dumb.

In any case I think what might happen is not easy to guess, so many variables and nth-order effects. When large systems must seek a new equilibrium all bets are usually off.

jazzyjackson 19 hours ago [-]
I don't know, maybe they can grow trees and build houses.
seanmcdirmid 18 hours ago [-]
The robots? I see this happening soon, especially for home construction.
andrei_says_ 16 hours ago [-]
How exactly?

In the U.S. houses are built out of wood. What robot will do that kind of work?

chung8123 17 hours ago [-]
It makes me wonder if we will be much more reserved with our thoughts and teachings in the future given how quickly they will be used against us.
bamboozled 18 hours ago [-]
It’s probably the only technology that is designed to replace humans as its primary goal. It’s the VC dream.
xgkickt 15 hours ago [-]
I do wonder if the amount they're spending on it is going to be cost effective versus letting humans continue doing the work.
chrz 11 hours ago [-]
It is for some shareholders, as long as the hype and the stocks keep going up.
tomjen3 8 hours ago [-]
The industrial revolution took something like 98% of the jobs on farms and just disappeared them.

Could you a priori in 1800 have predicted the existence of graphics artists? Street sweepers? People who drive school buses? The whole infrastructure around trains? Sewage maintainers? Librarians? Movie stuntmen? Sound Engineers? Truck drivers?

immibis 5 hours ago [-]
The opening of new jobs has been causally unlinked from the closing of old jobs - especially when you take the quantity into consideration. There was a well of stuff people wanted to do, that they couldn't do because they were busy doing the boring stuff. But now that well of good new jobs is running dry, which is why we see people picking up 3 really shit jobs to make ends meet. There will be a point where new jobs do not open at all, and we should probably plan for that.
LightBug1 13 hours ago [-]
As someone else said, until a company or individual is willing to risk their reputation on the accuracy of AI (beyond basic summarising jobs, etc), the intelligent monkeys are here for a good while longer. I've already been once bitten, twice shy.

The conclusion, sadly, is that CEO's will pause hiring and squeeze more productivity out of existing hires. This will impact junior roles the most.

fsflover 12 hours ago [-]
Haven't you seen companies developing autonomous killing drones?
LightBug1 12 hours ago [-]
They won't take my job - unless someone has put a hit out on me.
fsflover 2 hours ago [-]
I wanted to say that people aren't afraid of losing their reputation even when it comes to deciding whom they kill.
LightBug1 2 hours ago [-]
Fair point.
solumunus 16 hours ago [-]
We also may not need to worry about it for a long time; I'm falling more and more on this side. LLMs are hitting diminishing returns, so until there's a new innovation (I can't see any on the horizon yet), I'm not concerned for my career.
_jab 18 hours ago [-]
I'm skeptical of arguments like this. If we look at most impactful technologies since the year 2000, AI is not even in my top 3. Social networking, mobile computing, and cloud computing have all done more to alter society and daily life than has AI.

And yes, I recognize that AI has already created profound change, in that every software engineer now depends heavily on copilots, in that education faces a major integrity challenge, and in that search has been completely changed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, as our personal relationships becoming increasingly online, nor as the enablement for startups to scale without having to maintain physical compute infrastructure.

To me, the treating of AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people start to talk about it almost in a way that's reminiscent of Pascal's Wager, as if the slight chance of a godly reward from producing AI means it is rational to devote our all to it. But I'm still holding my breath.

c0balt 18 hours ago [-]
> in that every software engineer now depends heavily on copilots

That is maybe a bubble around the internet. IME, most programmers in my environment rarely use copilots and certainly aren't dependent on them. They also don't only do code-monkey-esque web programming, so maybe this is sampling bias, though it should be enough to refute the point.

Raphael_Amiard 12 hours ago [-]
Came here to say that. It’s important to remember how biased hacker news is in that regard. I’m just out of ten years in the safety critical market, and I can assure you that our clients are still a long way from being able to use those. I myself work in low level/runtime/compilers, and the output from AIs is often too erratic to be useful
djeastm 12 hours ago [-]
>our clients are still a long way from being able to use those

So it's simply a matter of time

>often too erratic to be useful

So sometimes it is useful.

layer8 9 hours ago [-]
Too erratic to be net useful.
anuramat 5 hours ago [-]
Even for code reviews/test generation/documentation search?
layer8 3 hours ago [-]
Documentation search I might agree, but that wasn’t really the context, I think. Code reviews is hit and miss, but maybe doesn’t hurt too much. They aren’t better at writing good tests than at writing good code in the first place.
HDThoreaun 4 hours ago [-]
I'm on the core SQL execution team at a database company, and everyone on the team is using AI coding assistants. We're certainly not doing any monkey-esque web programming.
galangalalgol 17 hours ago [-]
Add LED lighting to that list. It is easy to forget what a difference it made: the light pollution, but also just how dim houses were. CFLs didn't last very long as a stopgap between incandescent and LED, and houses lit with incandescents have a totally different feel.
mdaniel 4 hours ago [-]
And yet: https://www.axios.com/2023/02/26/car-headlights-too-bright-l...

But, for clarity, I do agree with your sentiment about their use in appropriate situations, I just have an indescribable hatred for driving at night now

atleastoptimal 18 hours ago [-]
AI has already rendered academic take-home assignments moot. No other tech has had an impact like that, even the internet.
callc 17 hours ago [-]
A pessimistic/realistic view of post-high-school education: credentials are proof of being able to do a certain amount of hard work, used as an easy filter by companies while hiring.

I expect universities to adapt quickly, lest they lose their whole business as degrees stop carrying the same meaning for employers.

amanaplanacanal 13 hours ago [-]
Maybe universities can go back to being temples of learning instead of credential mills.

I can dream, can't I?

ZYbCRq22HbJ2y7 16 hours ago [-]
> AI has already rendered academic take-home assignments moot

Not really; there are plenty of things that LLMs cannot do that a professor could make his students do. It is just an asymmetric attack on the time of the professor (or whoever is grading).

IMO, credentials shouldn't be given to those who test or submit assignments without proctoring (a lot of schools allow this).

mofeien 12 hours ago [-]
> there are plenty of things that LLMs cannot do that a professor could make his students do.

Name three?

c0balt 11 hours ago [-]
1. Make the student(s) randomly have to present their results on a weekly basis. If you get caught cheating at this point - at least in my uni, with a zero-tolerance policy - you instantly fail the course.

2. Make take-home work only a requirement for being allowed to sit the final exam. This effectively means cheating on it will only hinder you and not affect your grading directly.

3. Make take-home work optional and completely detached from grading. Put everything into the final exam.

My uni uses a mix of these in different courses. Options two and three, though, have a significant negative impact on passing rates, as they tend to push everything onto one single exam instead of spreading the work out over the semester.

NitpickLawyer 12 hours ago [-]
> Not really, there are plenty of things that LLMs cannot do that a professor could make his students do.

Could you offer some examples? I'm having a hard time thinking of what could be at the intersection of "hard enough for SotA LLMs" yet "easy enough for students (who are still learning, not experts in their fields, etc)".

c0balt 6 hours ago [-]
Present the results of your exercises (in person) in front of someone. Or really anything in person.

A big downer for online/remote initiatives for learning, but actually an advantage for older unis that already have physical facilities for students.

This does, however, have some problems similar to coding interviews.

rootusrootus 4 hours ago [-]
> Present the results of your exercises (in person) in front of someone

I would not be surprised if we start to see a shift towards this. Interviews instead of written exams. It does not take long to figure out whether someone knows the material or not.

Personally, I do not understand how students expect to succeed without learning the material these days. If anything, the prevalence of AI today only makes cheating easier in the very short term -- over the next couple years I think cheating will be harder than it ever was. I tried to leverage AI to push myself through a fairly straightforward Udacity course (in generative AI, no less), and all it did was make me feel incredibly stupid. I had to stop using it and redo the parts where I had gotten some help, so that my brain would actually learn something.

But I'm Gen X, so maybe I'm too committed to old-school learning and younger people will somehow get super good at this stuff while also not having to do the hard parts.

NitpickLawyer 4 hours ago [-]
Sure but that's a solution to prevent students from using LLMs, not an example of something a professor can ask students that "LLMs can't do"...
devmor 18 hours ago [-]
What? The internet did that ages ago. We just pretended it didn't because some students didn't know how to use Google.
atleastoptimal 17 hours ago [-]
Everyone knows how to use Google. There's a difference between a corpus of data available online and an intelligent chatbot that can answer any permutation of questions with high accuracy with no manual searching or effort.
geraneum 14 hours ago [-]
> Everyone knows how to use Google.

Everyone knows how to type questions into a chat box, yet whenever something doesn’t work as advertised with the LLMs, the response here is, “you’re holding it wrong”.

simianwords 15 hours ago [-]
Do you really think the jump from books to freely, globally, instantly accessible data is smaller than the jump from the internet to ChatGPT? This is insane!!
raincole 14 hours ago [-]
It's not just smaller, but negligible (in comparison).

In the internet era, you had to parse the questions with your own brain. You just didn't necessarily need to solve them yourself.

In the ChatGPT era, you don't even need to read the questions. At all. The questions could be written in a language you don't understand, and you would still be able to generate plausible answers to them.

simianwords 14 hours ago [-]
To a person from the 1920s, which one is more impressive? The internet or ChatGPT?
raincole 13 hours ago [-]
Obviously ChatGPT. I don't know how it is even a question... if you had shown GPT-3.5 to people from before the 20th century, there would have been a worldwide religion around it.
simianwords 13 hours ago [-]
Interesting perspective.
mdaniel 4 hours ago [-]
I recall the kerfuffle about (IIRC) llama where the engineer lost his mind thinking they had spawned life in a machine and felt it was "too dangerous to release," so it's not a ludicrous take. I would hope that the first person to ask "LLM Jesus" how many Rs are in strawberry would have torpedoed the religion, but (a) I've seen dumber mind viruses (b) it hasn't yet
dragonwriter 4 hours ago [-]
It wasn't Llama (Meta), it was LaMDA (Google).

https://www.scientificamerican.com/article/google-engineer-c...

klipklop 16 hours ago [-]
You are mistaken; Google could not write a bespoke English essay for you, complete with intentional mistakes to throw off the professor.
a2128 11 hours ago [-]
In English class we had a lot of book-reading and essay-writing about those books. SparkNotes and similar sites allowed you to skip the reading and get a distilled understanding of a book's contents, similar to interacting with an LLM.
Davidzheng 17 hours ago [-]
disagree? I had to write essays in high school. I don't think the kids now need to if they don't want to.
thomasfromcdnjs 3 hours ago [-]
Pretty sure I read Economics in One Lesson because of HN; the author makes great arguments about how automation never ruins economies as much as people think. See "Chapter 7: The Curse of Machinery".
srcreigh 5 hours ago [-]
> Could we get there? Absolutely. We just haven't yet.

What else is needed then?

Davidzheng 17 hours ago [-]
On current societal impact it might be close to the other three. But do you not think it is different in nature to other technological innovations?
shayief 17 hours ago [-]
> in that every software engineer now depends heavily on copilots

With many engineers using copilots, and since LLMs output the most frequent patterns, it's possible that more and more software is going to look the same, which would further reinforce the same patterns.

For example, the em-dash thing requires additional prompts and instructions to override. Doing anything unusual would require more effort.

mmmore 14 hours ago [-]
LLMs with instruction following have been around for 3 years. Your comment gives me "electricity and gas engines will never replace the horse" vibes.

Everyone agrees AI has not radically transformed the world yet. The question is whether we should prepare for the profound impacts current technology pretty clearly presages, if not within 5 years then certainly within 10 or 25 years.

legucy 8 hours ago [-]
I’m skeptical of arguments like this. If we look at the most impactful technologies since the year 1980, the Web is not even in my top 3. Personal computers, spreadsheet software, and desktop publishing have all done more to alter society and daily life than has the Web.

And yes, I recognize that the Web has already created profound change, in that every researcher now depends heavily on online databases, in that commerce faces a major disruption challenge, and in that information access has been completely changed. I just don’t think those changes are on the same level as the normalization of powerful computers on everyone’s desk, as our business processes becoming increasingly digitized, nor as the enablement for small businesses to produce professional-quality documents without having to maintain expensive typesetting equipment.

To me, the treating of the Web as “different” is still unsubstantiated. Could we get there? Absolutely. We just haven’t yet. But some people start to talk about it almost in a way that’s reminiscent of Pascal’s Wager, as if the slight chance of a godly reward from investing in Web technologies means it is rational to devote our all to it. But I’m still holding my breath.
m_a_g 7 hours ago [-]
This is not reddit.
itsalotoffun 17 hours ago [-]
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that. [Emphasis added]

What a silly premise. Markets don't care. All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.

Seeing a real uptick of socio-political prognostication from extremely smart, soaked-in-AI tech people (like you, Salvatore!), casting heavy doom-laden gestures towards the future. You're not even wrong! But this "I see something you all clearly don't" narrative, wafer-thin on real analysis, packed with "the feels", coated with what-ifs... it's sloppy thinking, and I hold you to a higher standard, antirez.

drcode 55 minutes ago [-]
Markets require property rights, and property rights require institutions that depend on property-rights holders, so that they have incentives to preserve those rights. When we get to the point where institutions depend more on AIs than on humans, property rights for humans will become inconvenient.
xpe 15 hours ago [-]
>> Markets don’t want to accept that.

> What a silly premise. Markets don't care.

You read the top sentence way too literally. In context, it has a meaning — which can be explored (and maybe found) with charity and curiosity.

xpe 15 hours ago [-]
> All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.

I prefer the concepts and rigor from political economy: markets are both preference aggregators and coordination mechanisms.

Does your framing (voting machines and weighing machines) offer more clarity and if so, how? I’m not seeing it.

acivitillo 12 hours ago [-]
His framing is that markets are a collective consensus, and if you claim to "know better", you need to write a lot more than a generic post. It's so simple, and it is a reminder that antirez's reputation as a software developer does not automatically translate into economics expertise.
xpe 11 hours ago [-]
I think you are mixed up here. I quoted from the comment above mine, which was harshly and uncharitably critical of antirez’s blog post.

I was pushing back against that comment's sneering smugness by pointing to an established field that uses clear terminology about how and why markets are useful. Even so, I invited an explanation in case I was missing something.

Anyone curious about the terms I used can quickly find explanations online, etc.

cropcirclbureau 15 hours ago [-]
Yes, but can the market not be wrong? Wrong in the sense of failing to meet our expectations as a useful engine of society? As I understood it, what was meant in this article is that AI so completely changes the equations across the board that the current market direction appears dangerously irrational to OP. I'm not sure what was meant by your comment, though, besides haggling over semantics and attacking a perceived lack of expertise in the author's socio-political philosophizing.
simgt 13 hours ago [-]
Of course it can be wrong, and it is in many instances. It's a religion. The vast, vast majority of us would prefer to live in a stable climate with unpolluted water and some fish left in the oceans, yet "the market" is leading us elsewhere.
rootusrootus 4 hours ago [-]
I don't like the idea of likening the market to a religion, but I think it definitely has some glaring flaws. In my mind the biggest is that the market is very effective at showing the consensus of short-term priorities, but it has no ability to reflect long-term strategic consensus.
sota_pop 16 hours ago [-]
> “… as a voting… as a weighing…” I’m sure I remember that as a Graham, Munger, or Buffett quote.

> “not even wrong” - nice, one of my favorites from Pauli.

djeastm 12 hours ago [-]
Definitely Benjamin Graham, though Buffett (two T's) brought it back
naveen99 10 hours ago [-]
Voting, weighing, … trading machine? You can hear or touch or weigh colors.
m4nu3l 6 hours ago [-]
>We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch. Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence: either AI will be eventually a commodity, or the governments would do something, in such an odd economic setup (a setup where a single industry completely dominates all the others).

I think the scenario where companies that own AI systems don't get benefits from employing people, so people are poor and can't afford anything, is paradoxical, and as such, it can't happen.

Let's assume the worst case: Some small percentage of people own AIs, and the others have no ownership at all of AI systems.

Now, given that human work has no value to those owning AIs, those humans not owning AIs won't have anything to trade in exchange for AI services. Trade between these two groups would eventually stop.

You'll have some sort of two-tier economy where the people owning AIs self-produce goods and services (or trade among themselves). However, nothing prevents the group of people without AIs from producing and trading goods and services among themselves without the use of AIs. The second group wouldn't be poorer than it is today; the ones with AI systems would just be much richer.

This worst-case scenario is also unlikely to happen or to last long (the second group will eventually develop its own AIs, or already has access to some, like open models).

If models got exponentially better with time, then that could be a problem, because at some point, someone would control the smartest model (by a large factor) and could use it with malicious intent or maybe lose control of it.

But it seems to me that what I thought some time ago would happen has actually started happening. In the long term, models won't improve exponentially with time, but sublinearly (due to physical constraints), in which case the relative difference between them would shrink over time.

Davidzheng 5 hours ago [-]
Sorry, this doesn't make sense to me. Given that tier one is much richer and more powerful than tier two, any natural resources and land traded in tier two exist only at the mercy of tier one not interfering. As soon as tier one needs some land or natural resources from tier two, tier two's needs are automatically superseded. It's like an animal community next to human civilization.
m4nu3l 5 hours ago [-]
The marginal value of natural resources decreases with quantity, and natural resources would have only a much smaller value compared to the final products produced by the AI systems. At some point there would be an equilibrium where tier 1 wouldn't want to increase its consumption of natural resources relative to tier 2, or if it did, it would have to trade with tier 2 at a price higher than the value tier 2 places on those resources. I have no idea what this equilibrium would look like, but natural resources are already of little value compared to consumer goods and services: the US in 2023 consumed $761.4B of oil, but GDP for the same year was $27.72T, so oil was under 3% of GDP.

There would be another valid argument to be made about externalities. But it's not what my original argument was about.

Lichtso 4 hours ago [-]
Not just land and natural resources: All means of production, including infrastructure, intellectual property, capital, the entire economy.
m4nu3l 4 hours ago [-]
I'm assuming no coercion. In my scenario, tier 1 doesn't need any of that except natural resources because they can self-produce everything they need from those in a cheaper way than humans can. If someone in tier 1, for instance, wants land from someone in tier 2, they'd have to offer something that the tier 2 person values more than the land they own.

After the trade, the tier 2 person would still be richer than they were before the trade. So tier 2 would become richer in absolute terms by trading with tier 1 in this manner. And it's very likely that what tier 2 wants from tier 1 is whatever they need to build their own AIs. So my argument still stands. They wouldn't be poorer than they are now.

rootusrootus 5 hours ago [-]
If tier 2 amounts to 95% of the population, then the amount of power currently held by tier 1 is meaningless. It is only power so long as the 95% remain cooperative.
yks 5 hours ago [-]
In practice, tier 1 has the tech and know-how to convince tier 2 to remain cooperative against its own interests. See the contemporary US, where inequality is rather high and yet the tier 2 population is impressively protective of the rights of tier 1. The theory that this will change if tier 2 has it much worse than today remains to be proven. Persecution of immigrants is also rather lightweight today, so there is definitely room to ramp it up to pacify tier 2.
Disposal8433 3 hours ago [-]
> the amount of power currently held by tier 1 is meaningless.

It's happening right now with rich people and lobbies.

> It is only power so long as the 95% remain cooperative

https://en.wikipedia.org/wiki/Television_consumption#Contemp... I rest my case.

rootusrootus 1 hours ago [-]
This only works as long as people are happily glued to their TVs. Which means they have a non-leaking roof above their head and food in their belly. Just at a minimum. No amount of skillful media manipulation will make a starving, suffering 95% compliant.
iwontberude 6 hours ago [-]
I think the bigger relief is that I know humans won't put up with a two-tiered system of haves and have-nots forever, and eventually we will get wealth redistribution. Government is the ultimate source of all wealth and organization; corporations are built on top of it and are thus subservient.
m4nu3l 6 hours ago [-]
Having your life dependent on a government that controls all AIs would be much worse. The government could end up controlling something more intelligent than the entire rest of the population. I have no doubt it will use it in a bad way. I hope that AIs will end up distributed enough. Having a government controlling it is the opposite of that.
azemetre 5 hours ago [-]
Why would this be worse than the current situation of private actors accountable to no one controlling this technology? It's not like I can convince Zuckerberg to change his ways.

At least with a democratic government I have means to try and build a coalition then enact change. The alternative requires having money and that seems like an inherently undemocratic system.

Why can't AIs be controlled with democratic institutions? Why are democratic institutions worse? This doesn't seem to be the case to me.

Private institutions shouldn't be allowed to control such systems, they should be compelled to give them to the public.

m4nu3l 5 hours ago [-]
>Why would this be worse than the current situation of private actors accountable to no one controlling this technology? It's not like I can convince Zuckerberg to change his ways.

As long as Zuckerberg has no army forcing me, I'm fine with that. The issue would be whether he could breach contracts or get away with fraud. But if AI is sufficiently distributed, this is less likely to happen.

>At least with a democratic government I have means to try and build a coalition then enact change. The alternative requires having money and that seems like an inherently undemocratic system.

I don't think of democracy as a goal to be achieved. I'm OK with democracy insofar as it leads to what I value.

The big problem with democracy is that most of the time it doesn't lead to rational choices, even when voters are rational. In markets, for instance, you have an incentive to be rational, and if you aren't, the market will tend to transfer resources from you to someone more rational.

No such mechanism exists in a democracy; I have no incentive to do research and think hard about my vote. It's going to be worth the same as the vote of someone who believes the Earth is flat anyway.

azemetre 4 hours ago [-]
What is your alternative to democracy then?

I also don't buy that groups don't make better decisions than individuals. We know that diversity of thought and opinion is one way groups make better decisions than individuals do; why believe that consensus building, debates, adversarial processes, due process, and systems of appeal lead to worse outcomes in decision making?

I'm not buying the argument. Reading your comment it feels like there's an argument to be made that there aren't enough democratic systems for the people to engage with. That I definitely agree with.

m4nu3l 4 hours ago [-]
> I also don't buy that groups don't make better decisions than individuals.

I didn't say that. My example of the market includes companies that are groups of people.

> We know that diversity of thought and opinion is one way to make better decisions in groups compared to individuals; why would consensus building, debates, adversarial processes, due process, and systems of appeal lead to worse outcomes in decision making?

I can see this about myself; I don't need hypotheticals. Some time ago, I voted in a referendum that made nuclear power impossible to build in my country. I voted just like the majority. Years later, I became passionate about economics, and only then did I realise my mistake.

It's not that I was stupid, and there were many, many debates; I just didn't put the effort into researching it on my own.

The feedback in a democracy is very weak, especially because cause and effect are very hard to discern in a complex system.

Also, consensus is not enough. In various countries, there is often consensus about some Deity existing. Yet large groups of people worldwide believe in incompatible Deities. So there must be entire countries where the consensus about their Deity is wrong. If the consensus is wrong, it's even harder to get to the reality of things if there is no incentive to do that.

I think, if people get this, democracy might still be good enough to limit itself.

kortilla 6 hours ago [-]
Governments are not the source of wealth. They are just a requisite component to allow people to create it and maintain it.
azemetre 6 hours ago [-]
This doesn't pass the sniff test; governments generate wealth all the time. Public education, public healthcare, public research, public housing. These are all programs that generate an enormous amount of wealth and allow citizens to flourish.
m4nu3l 6 hours ago [-]
In economics, you aren't necessarily creating wealth just because your final output has value. The value of the final good or service has to be higher than that of the inputs for you to be creating wealth. I could take a functioning boat, scrap it, and sell the scrap metal, which has value; however, I destroyed wealth, because the boat was worth more. And even if you are creating wealth, if the inputs have better uses and could create more wealth at the same cost, you're still paying an opportunity cost. So things are more complicated than that.
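A toy calculation may make the boat example concrete (all numbers here are invented for illustration, not taken from the comment above):

  # Invented numbers: positive revenue from an output does not
  # mean wealth was created; compare output value to input value.
  boat_value = 10_000   # market value of the functioning boat (the input)
  scrap_value = 2_000   # what the scrap metal sells for (the output)

  wealth_change = scrap_value - boat_value
  print(f"Net wealth change: {wealth_change}")  # -8000: wealth destroyed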
azemetre 6 hours ago [-]
This isn't related to what I was commenting on, where the other poster came across as not seeing government by the governed as having economic worth.
andsoitis 5 hours ago [-]
Synthesizing your two lines of thought, and extrapolating somewhat:

- human individuals create wealth

- groups of humans can create kinds of wealth that aren't possible for a single individual. This can be a wide variety of associations: companies, project teams, governments, etc.

- governments (formal or less formal) create the playing field for individuals and groups of individuals to create wealth

azemetre 4 hours ago [-]
Thanks for this comment. You definitely crystallized the two thoughts well and succinctly. Definitely a skill I wish I had. :D
kortilla 4 hours ago [-]
No, I said it was a requisite to generate wealth, but it does not generate it directly.
azemetre 4 hours ago [-]
Gotcha. Definitely felt like I made that comment a little too rushed, especially in the context of all the others as well.
m4nu3l 5 hours ago [-]
> governments generate wealth all the time. Public education, public healthcare, public research, public housing. These are all programs that generate an enormous amount of wealth and allow citizens to flourish.

I thought you meant that governments generate wealth because the things you listed have value. If so, that doesn't prove they generate wealth, by my argument above, unless you can show those things are more valuable than alternative uses of the resources the government spent to produce them, and that the government is the more efficient producer.

You can argue that those are good because you think redistribution is good. But you can have redistribution without the government directly providing goods and services.

azemetre 4 hours ago [-]
I think I'm more confused. I was trying to convey the idea that wealth doesn't have to be limited to money and market value; many intangible things can provide wealth too.

I should probably read more books before commenting on things I half understand, my bad.

AlexandrB 5 hours ago [-]
None of these are unique to government; they can also be created privately. The fact that government can create wealth =/= the government is the source of all wealth.
kortilla 4 hours ago [-]
Those programs consume a bunch of money, and they don't generate wealth directly. They are critical to letting people flourish and go out to generate wealth.

A bunch of well-educated citizens living in government housing who don't go out and become productive members of society will quickly lead to collapse.

thatfrenchguy 5 hours ago [-]
I mean, you can imagine a public bureaucracy being bad at redistributing too; that's a lot of governments in the world.
Sincere6066 6 hours ago [-]
pretty sure the economic system has already failed all the tests
siliconc0w 9 hours ago [-]
I'm on team plateau; I'm really not noticing increasing competence in my daily usage of the major models. And sometimes there seem to be regressions, where performance drops below what it could do before.

There is incredible pressure to release new models which means there is incredible pressure to game benchmarks.

Tbh a plateau is probably the best scenario - I don't think society will tolerate even more inequality plus massive job displacement.

andai 8 hours ago [-]
I think the current economy is already dreadful. So I don't have much desire to maintain that. But it's easier to break something further than to fix it, and god knows what AI is going to do to a system with so many feedback loops.
atleastoptimal 18 hours ago [-]
This is an accurate assessment. I do feel that there is a routine bias on HN to underplay AI. I think it's people not wanting to lose control or relative status in the world.

AI is an existential threat to the unique utility of humans, which has been the last line of defense against absolute despotism (i.e. a tyrannical government will not kill all its citizens because it still needs them to perform jobs. If humans aren't needed to sustain productivity, humans have no leverage against things becoming significantly worse for them, gradually or all at once).

morsecodist 16 hours ago [-]
> I do feel that there is a routine bias on HN to underplay AI

It's always interesting to see this take because my perception is the exact opposite. I don't think there's ever been an issue for me personally with a bigger mismatch in perceptions than AI. It sometimes feels like the various sides live in different realities.

pmg101 13 hours ago [-]
It's a Rorschach test, isn't it?

Because the technology itself is so young and so nebulous everyone is able to unfalsifiably project their own hopes or fears onto it.

atleastoptimal 15 hours ago [-]
With any big AI release, some of the top comments are usually claiming either that the tech itself is bad, relaying a specific anecdote about some AI model messing up or some study where AI isn't good, or claiming that AI is a huge bubble that will inevitably crash. The most emphatic denials of the utility of AI I've seen here go much farther than anywhere else, where criticism of AI is mild skepticism. Among many people it is a matter of tribal warfare that AI=bad.
ACCount37 14 hours ago [-]
Coping mechanisms. AI is overhyped and useless and will never improve, because the alternative is terrifying.
morsecodist 9 hours ago [-]
I'm very skeptical of this psychoanalysis of people who disagree with you. Can't people just be wrong? People are wrong all the time without it being some sort of defense mechanism. I feel this line of thinking puts you in a headspace to write off anything contradictory to your beliefs.

You could easily say that the AI hype is a cope as well. The tech industry and investors need there to be a hot new technology; their careers depend on it. There might be some truth to the coping in either direction, but I feel you should try to ignore that and engage with the content of whatever the person is saying, or we'll never make any progress.

AIPedant 6 hours ago [-]
> it's people not wanting to lose control or relative status in the world.

It's amazing how widespread this belief is among the HN crowd, despite being a shameless ad hominem with zero evidence. I think there are a lot of us who assume the reasonable hypothesis is "LLMs are a compelling new computing paradigm, but researchers and Big Tech are overselling generative AI due to a combination of bad incentives and sincere ideological/scientific blindness. 2025 artificial neural networks are not meaningfully intelligent." There has not been sufficient evidence to overturn this hypothesis, and there is an enormous pile of evidence supporting it.

I do not necessarily believe humans are smarter than orcas; it is too difficult to say. But orcas are undoubtedly smarter than any AI system. There are billions of non-human "intelligent agents" on planet Earth to compare AI against, and instead we are comparing AI to humans based on trivia and trickery. This is the basic problem with AI, and it always has had this problem: https://dl.acm.org/doi/10.1145/1045339.1045340 The field has always been flagrantly unscientific, and it might get us nifty computers, but we are no closer to "intelligent" computing than we were when Drew McDermott wrote that article. E.g. MuZero has zero intelligence compared to a cockroach; instead of seriously considering this claim, AI folks will just sneer "are you even dan in Go?" Spiders are not smarter than beavers even if their webs seem more careful and intricate than beavers' dams... that said it is not even clear to me that our neural networks are capable of spider intelligence! "Your system was trained on 10,000,000 outdoor spiderwebs between branches and bushes and rocks and has super-spider performance in those domains... now let's bring it into my messy attic."

wavemode 8 hours ago [-]
I certainly understand why lots of people seem to believe LLMs are progressing towards becoming AGI. What I don't understand is the constant need to absurdly psychoanalyze the people who happen to disagree.

No, I'm not worried about losing "control or relative status in the world". (I'm not worried about losing anything, frankly - personally I'm in a position where I would benefit financially if it became possible to hire AGIs instead of humans.)

You don't get to just assert things without proof (LLMs are going to become AGI) and then state that anyone who is skeptical of your lack of proof must have something wrong with them.

thrw045 16 hours ago [-]
I think AI is still in the weird twilight zone it was in when it first came out: it's great sometimes and also terrible. I still get hallucinations when I check a ChatGPT response against Google.

On the one hand, what it says can't be trusted, on the other, I have debugged code I have written where I was unable to find the bug myself, and ChatGPT found it.

I also think a reason AIs are popular and the companies haven't gone under is that probably hundreds of thousands if not millions of people are getting responses that contain hallucinations, but the user doesn't know it. I fell into this trap myself after ChatGPT first came out. I became addicted to asking anything, and it seemed like it was right. It wasn't until later that I started realizing it was hallucinating information. How prevalent this phenomenon is is hard to say, but I still think it's pernicious.

But as I said before, there are still use cases for AI and that's what makes judging it so difficult.

iphone_elegance 4 hours ago [-]
lmao, "underplay ai" that's all this site has been about for the last few years
Davidzheng 17 hours ago [-]
I actually find it hard to understand how the market is supposed to react if AI capabilities do surpass all humans in all domains. First of all, it's not clear such a scenario leads to runaway wealth for a few, even though absent outside events that may be the outcome. But such scenarios are so unsustainable and catastrophic that it's hard to imagine there being no catastrophic reactions to them. How is the market supposed to react if there's a large chance of market collapse and also a large chance of runaway wealth creation? Besides, in an economy where AI surpasses humans, the demands of the market will shift drastically too. I think this is underrepresented in predictions: the induced demand from AI-replaced labor, and the potential for entire industries to be decimated by secondary effects instead of direct AI competition/replacement at the labor level.
manyaoman 10 hours ago [-]
Agreed, if the author truly thinks the markets are wrong about AI, he should at least let us know what kind of bets he’s making to profit from it. Otherwise the article is just handwaving.
mrob 8 hours ago [-]
There's no way to profitably bet on the whole economy collapsing.
nicce 4 hours ago [-]
AI still does not own the land and can't grow crops without it. So maybe people in agriculture are winners. We always need food.
neom 4 hours ago [-]
Become a social worker, or an undertaker.
ahurmazda 19 hours ago [-]
When I hear folks glazing some kinda impending jobless utopia, I think of the intervening years. I shudder. As they say, "An empty stomach knows no morality."
ares623 18 hours ago [-]
This pisses me off so much.

So many engineers are so excited to work on and with these systems, opening 20 prs per day to make their employers happy going “yes boss!”

They think their $300k total compensation will give them a seat at the table for what they’re cheering on to come.

I say that anyone who needed to go to the grocery store this week will not be spared by the economic downturn this tech promises.

Unless you have your own fully stocked private bunker with security detail, you will be affected.

dsign 15 hours ago [-]
Big fan of your argument, and I don't disagree.

If AI makes a virus to get rid of humanity, well we are screwed. But if all we have to fear from AI is unprecedented economic disruption, I will point out that some parts of the world may survive relatively unscathed. Let's talk Samoa, for example. There, people will continue fishing and living their day-to-day. If industrialized economies collapse, Samoans may find it very hard to import certain products, even vital ones, and that can cause some issues, but not necessarily civil unrest and instability.

In fact, if all we have to fear from AI is unprecedented economic disruption, humans can have a huge revolt, and then a post-revolt world may be fine by turning back the clock, with some help from anti-progress think-tanks. I explore that argument in more detail in this book: https://www.smashwords.com/books/view/1742992

ZYbCRq22HbJ2y7 13 hours ago [-]
The issue is that there aren't enough of those small environmental economies to support everyone who exists today without the technology, logistics, and trade that are in place now.

You can farm and fish the entire undeveloped areas of NYC, but it won't be enough to feed or support the humans that live there.

You can say that for any metro area. Density will have to reduce immediately if there is economic collapse, and historically, when disaster strikes, that doesn't tend to happen immediately.

Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.

dsign 13 hours ago [-]
> The issue is that there aren't enough of those small environmental economies to support everyone who exists today without the technology, logistics, and trade that are in place now.

I agree. I expect some parts of the world will see some dark days. Lots of infrastructure will be gone or unsuited to people. On top of that, the cultural damage could become very debilitating, with people not knowing how to do X, Y, and Z without the AIs. At least for a time. Casualties may mount.

> Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.

This is true, but parts of the world survive today with very little of any of that. And for some of those things that you mention: shelter, education, religion, justice, and even some form of law enforcement, all that is needed is humans willing to work together.

ZYbCRq22HbJ2y7 13 hours ago [-]
> all that is needed is humans willing to work together

Maybe, but those things are also needed to enable humans to work together

ares623 14 hours ago [-]
Won't 8 billion people have an incentive to move to Samoa in that case?
dsign 14 hours ago [-]
Realistically, in an extreme AI economic-disruption scenario, it's more or less only the USA that is extremely affected, and that's 400 million people. Assuming it's AI and nothing else that causes a big disruption first, and with the big caveat that nobody can predict the future, I would say:

- Mexico and points south are more into informal economies, and they generally lag behind developed economies by decades. The same applies to Africa and big parts of Asia. As such, by the time things get really dire in the USA, and maybe in Europe and China, the south will still be doing business as usual.

- Europe has lots of parliaments and already has legislation that takes AI into account. Still, there's a chance those bodies will fail to moderate the impact of AI on the economy and violent corrections will be needed, but people in Europe have long traditions and long memories... They'll find a way.

- China is governed by the communist party, and Russia has its king. It's hard to predict how those will align with AI, but that alignment will more or less be the deciding factor there, not free capitalism.

sarchertech 17 hours ago [-]
> Unless you have your own fully stocked private bunker with security detail, you will be affected.

If society collapses, there’s nothing to stop your security detail from killing you and taking the bunker for themselves.

I’d expect warlords to rise up from the ranks of military and police forces in a post collapse feudal society. Tech billionaires wouldn’t last long.

bongodongobob 17 hours ago [-]
The same argument could be made for actual engineers working on steam engines, nuclear power, or semiconductors.

Make of that what you will.

afro88 13 hours ago [-]
More like engineers coming up with higher-level programming languages. No one (well, nearly no one) hand-writes assembly anymore, but there are still plenty of jobs; the majority just write in higher-level but still expressive languages.

For some reason everyone thinks that as LLMs get better, programmers go away. The programming language, and the amount you can build per day, are what's changing. That's pretty much it.

ares623 13 hours ago [-]
I’m not worried about software engineering (only or directly).

Artists, writers, actors, teachers. Plus the rest that I'm not remotely creative enough to imagine will be affected. Hundreds of thousands if not millions flooding the smaller and smaller markets left untouched.

afro88 12 hours ago [-]
Artists: photography. Yet we still value art in pre-photography mediums

Writers: film, tv. Yet we all still read books

Stage actors: again, film and TV. Yet we still go to plays, musicals, etc.

Teachers: the internet, software, video etc. Yet teachers are still essential (though they need to be paid more)

Jobs won't go away, they will change.

DrewADesign 15 hours ago [-]
I’m not sure I see how: none of those technologies had the stated goal of replacing their creators.
flask_manager 14 hours ago [-]
Here's the thing: I tend to believe that sufficiently intelligent and original people will always have something to offer others; it's irrelevant whether you imagine the others as the current consumer public, our corporate overlords, or the AI owners of the future.

There may be people who have nothing to offer others once technology advances, but I don't think that anyone in a current top-percentile role would find themselves there.

owebmaster 8 hours ago [-]
> I say that anyone who needed to go the grocery this week will not be spared by the economic downturn this tech promises.

And we are getting to a point where it's us or them. Big tech is investing so much money in this that if they do not succeed, they will go broke.

rootusrootus 4 hours ago [-]
> Big tech is investing so much money on this that if they do not succeed, they will go broke.

Aside from what that would do to my 401(k), I think that would be a positive outcome (the going broke part).

voidhorse 18 hours ago [-]
Yes. The complete irony in software engineers' enthusiasm for this tech is that, if the boards' wishes come true, they are literally helping eliminate their own jobs. It's like the industrial revolution but worse, because at least the craftsmen weren't also the ones building the factories that would automate them out of work.

Marcuse had a term for this, "false consciousness": when the structure of capitalism ends up making people work against their own interests without realizing it. That is happening big time in software right now. We will still need programmers for hard, novel problems, but all these lazy programmers using AI to write their crud apps don't seem to realize the writing is on the wall.

csoups14 17 hours ago [-]
Or they realize it and they're trying to squeeze the last bit of juice available to them before the party stops. It's not exactly a suboptimal decision to work towards your own job's demise if it's the best paying work available to you and you want to save up as much as possible before any possible disruption. If you quit, someone else steps into the breach and the outcome is all the same. There's very few people actually steering the ship who have any semblance of control; the rest of us are just along for the ride and hoping we don't go down with the ship.
ares623 15 hours ago [-]
Yeah I get that. I myself am part of a team at work building an AI/LLM-based feature.

I always dreaded this would come but it was inevitable.

I can't outright quit, no thanks in part to the AI hype, which stopped valuing headcount as a signal of company growth. If that isn't ironic, I don't know what is.

Given the situation I am in, I just keep my head down and do the work. I vent and whinge and moan whenever I can; it's the least I can do. I refuse to cheer it on at work. At the very least I can look my kids in the eye when they are old enough to ask me what the fuck happened and tell them I did not cheer it on.

Davidzheng 17 hours ago [-]
There is no jobless utopia. Even if everyone is paid and well-off with high living standards, that is no world in which humans can thrive, with everyone retired and pursuing their own interests.
bravesoul2 15 hours ago [-]
Jobless means you don't need a job. But you'd make a job for yourself. Companies will offer interesting missions instead of money. And by missions I mean real missions, like space travel.
ZYbCRq22HbJ2y7 15 hours ago [-]
A jobless utopia doesn't even come close to passing a smell test economically, historically, or anthropologically.

As evidence of another possibility, in the US, we are as rich as any polis has ever been, yet we barely have systems that support people who are disabled through no fault of their own. We let people die all the time because they cannot afford to continue to live.

You think anyone in power is going to let you suck their tit just because you live in the same geographic area? They don't even pay equal taxes in the US today.

Try living in another world for a bit: go to jail, go to a halfway house, live on the streets. Hard mode: do it in a country that isn't developed.

Ask anyone who has done any of those things if they believe in a "jobless utopia"?

Euphoric social capitalists living in a very successful system shouldn't be relied upon to scry the future for others.

silver_silver 12 hours ago [-]
Realistically, a white collar job market collapse will not directly lead to starvation. The world is not 1930s America ethically. Governments will intervene, not necessarily to the point of fairness, but they will restructure the economy enough to provide a baseline. The question will be how to solve the biblical level of luxury wealth inequality without civil unrest causing us all to starve.
xg15 13 hours ago [-]
> Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence

Why not? This seems to be exactly where we're headed right now, and the current administration seems to be perfectly fine with that trend.

If you follow the current logic of AI proponents, you get essentially:

(1) Almost all white-collar jobs will be done better or at least faster by AI.

(2) The "repugnant conclusion": AI gets better if and only if you throw more compute and training data at it. The improvements of all other approaches will be tiny in comparison.

(3) The amount of capital needed to play the "more compute/more training data" game is already insanely high and will only grow further. So only the largest megacorps will even be able to take part in the competition.

If you combine (1) with (3), this means that, over time, the economic choice for almost any white-collar job would be to outsource it to the data centers of the few remaining megacorps.

brap 13 hours ago [-]
I find it extremely hard to believe that ASI will still require enormous investments in a post-ASI world.

The initial investment? Likely. But there have to be more efficient ways to build intelligence, and ASI will figure it out.

It did not take trillions of dollars to produce you and me.

walleeee 2 hours ago [-]
> It did not take trillions of dollars to produce you and me.

Indeed, an alien ethnographer might be forgiven for boggling at the speed and enthusiasm with which we are trading a wealth of the most advanced technology in the known universe for a primitive, wasteful, fragile facsimile of it.

layer8 8 hours ago [-]
The efficient ways (biotech?) are still likely to require massive investments, maybe not unlike chip fabs that cost billions. And then IP and patents come in.
xg15 12 hours ago [-]
Maybe in a few decades or so, but medium-term there seems to be a race over who can build the largest data centers.

https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-...

aurareturn 19 hours ago [-]

  We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
There will be fewer very large companies in terms of headcount. There will be many more companies that are much smaller, because you don't need as many workers to do the same job.

Instead of needing 1000 engineers to build a new product, you'll need 100. Those 900 engineers will be working for 9 new companies that weren't viable before because the cost was too big but now are viable. I.e., those 9 new companies could never be profitable if each required 1000 engineers, but each can totally sustain itself with 100.

ZYbCRq22HbJ2y7 16 hours ago [-]
We aren't even close to that yet. The argument is an appeal to novelty, fallacy of progress, linear thinking, etc.

LLMs aren't solving NLU. They are mimicking a solution. They definitely aren't solving artificial general intelligence.

They are good language generators, okay search engines, and good pattern matchers (enabled by previous art).

Language by itself isn't intelligence. However, plenty of language exists that can be analyzed and reconstructed in patterns to mimic intelligence (utilizing the original agents' own intelligence (centuries of human authors) and the filter agents' own intelligence (decades of human sentiment on good vs bad takes)).

Multimodality only takes you so far, and you need a lot of "modes" to disguise your pattern matcher as an intelligent agent.

But be impressed! Let the people getting rich off of you being impressed massage you into believing the future holds things it may not.

mattnewton 19 hours ago [-]
Maybe, or 300 of those engineers will be working for 3 new companies while the other 600 struggle to find gainful employment, even after taking large pay cuts, as their skillsets are replaced rather than augmented. It’s way too early to call afaict
aurareturn 19 hours ago [-]
Because it's so easy to make new software and sell it using AI, 6 of those 600 people who are unemployed will have ideas that require 100 engineers each to make. They will build a prototype, get funding, and hire 99 engineers each.

There are also plenty of ideas that aren't profitable with 2 salaries but are with 1. Many will be able to make those ideas happen with AI helping.

breuleux 18 hours ago [-]
It'll be easy to make new software. I don't know if it's going to be easy to sell it.

The more software AI can write, the more of a commodity software will become, and the harder the value of software will tank. It's not magic.

nikolayasdf123 5 hours ago [-]
> sell it

exactly. have you seen the App Store recently? over-saturated with junk apps. try to sell something these days; it is notoriously hard to make any money there.

aurareturn 18 hours ago [-]

  The more software AI can write, the more of a commodity software will become, and the harder the value of software will tank. It's not magic.
Total size of the software industry will still increase.

Today, a car repair shop might have a need for custom software that would make their operations 20% more efficient. But they don't have nearly enough money to hire a software engineer to build it for them. With AI, it might be worth it for an engineer to actually do it.

There are plenty of little examples like that, where people and businesses have custom needs for software but the value isn't high enough.

monknomo 6 hours ago [-]
this seems pretty unlikely to me. I am not sure I have seen any non-digital business desire anything more custom than "a slightly better spreadsheet". Like, sure, I can imagine a desire for something along the lines of a "jailbroken VW scanner", but I think you are grossly overestimating how much software impacts a regular business's efficiency
mdaniel 3 hours ago [-]
As an alternative perspective, if this hypothetical MCP future materializes and the repair shop could ask Gemini to contact all the vendors, find the part that's actually in stock, preferably within 25 miles, sort by price, order it, and (if we're really going out on a limb) get a Waymo to go pick it up, it would free up the tradesperson to do what they're skilled at doing

For comparison to how things are today:

- contacting vendors requires using the telephone, sitting on hold, talking to a person, possibly navigating the phone tree to reach the parts department

- it would need to understand redirection, so if call #1 says "not us, but Jimmy over at Foo Parts has it"

- finding the part requires understanding the difference between the actual part and an OEM compatible one

- ordering it would require finding the payment options they accept that intersect with those the caller has access to, which could include an existing account (p.o. or store credit)

- ordering it would require understanding "ok, it'll be ready in 30 minutes" or "it's on the shelf right now" type nuance

Now, all of those things are maybe achievable today, with the small asterisk that hallucinations are fatal to a process that needs to work
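For what it's worth, the deterministic core of that workflow is the easy half. Here is a minimal sketch of what a hypothetical find_part tool behind such an agent could look like; every name in it (the vendors, the part number, the fields, the 25-mile default) is invented for illustration and is not any real MCP or vendor API:

  from dataclasses import dataclass

  # Hypothetical stand-in for a live vendor-inventory feed.
  @dataclass
  class Offer:
      vendor: str
      part_number: str
      oem: bool               # actual OEM part vs. OEM-compatible
      price: float
      distance_miles: float
      in_stock: bool

  STUB_INVENTORY = [
      Offer("Foo Parts", "BRK-1021", True, 89.00, 12.0, True),
      Offer("City Auto Supply", "BRK-1021", False, 54.50, 8.0, True),
      Offer("MegaParts Depot", "BRK-1021", False, 49.99, 40.0, True),
  ]

  def find_part(part_number: str, max_miles: float = 25.0) -> list[Offer]:
      """Return in-stock offers within range, cheapest first."""
      hits = [o for o in STUB_INVENTORY
              if o.part_number == part_number
              and o.in_stock
              and o.distance_miles <= max_miles]
      return sorted(hits, key=lambda o: o.price)

  for offer in find_part("BRK-1021"):
      kind = "OEM" if offer.oem else "OEM-compatible"
      print(f"{offer.vendor}: ${offer.price:.2f} ({kind}, {offer.distance_miles} mi)")

The hard parts in the list above (phone trees, redirection, payment nuance) sit outside a clean tool boundary like this, and that's exactly where the hallucination risk lives.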

aurareturn 3 hours ago [-]
It’s just an example. Plenty of businesses can use custom software to become more efficient but couldn’t in the past because of how expensive it was.
nikolayasdf123 5 hours ago [-]
more like 300 working, 60,000,000 struggle
palmfacehn 2 hours ago [-]
Similarly flawed arguments could be made about how steam shovels would create unemployment in the construction sector. Technology as well as worker specialization increases our overall productivity. AI doomerism is another variation of Neoluddite thought. Typically it is framed within a zero-sum view of the economy. It is often accompanied by Malthusian scarcity doom. Appeals to authoritarian top-down economic management usually follow from there.

Technological advances have consistently unlocked new, more specialized and economically productive roles for humans. You're absolutely right about lowering costs, but headcount might shift to new roles rather than shrink overall.

crims0n 7 hours ago [-]
I am not sure it will scale like that... every company needs a competitive advantage in the market to stay solvent; the people may scale, but what makes each company unique won't.
monknomo 7 hours ago [-]
if these small companies are all just fronts on the prompts (a "feature", if you will) of the large AI companies, why don't the large AI companies just add that feature and eat the little guys' lunch?
econ 16 hours ago [-]
For me it maps elegantly onto previous happenings.

When the radio came, people almost instantly stopped singing and playing instruments. Many might not be aware of it, but for thousands of years singing was a normal expression of a good mood, and learning to play an instrument was a gateway to lifting the mood. Dancing is still in working order, but it lacks the emotional depth that provided a window into the soul of those you live and work with.

A simpler example is the calculator. People stopped doing arithmetic by hand and forgot how.

Most desk work is going to get obliterated. We are going to forget how.

The underlings on the work floor currently know little to nothing about management. If they can query an AI in private, it will point out why their idea is stupid or refine it into something sensible enough to try. Eventually you say the magic words and the code to make it so happens. If it works, you put it live. No real thinking required.

Early on you probably get large AI cleanup crews to fix the hallucinations (with better prompts)

ZYbCRq22HbJ2y7 15 hours ago [-]
Humans sing. I sing every day, and I don't have any social or financial incentives driving me to do so. I also listen to the radio and other media, still singing.
econ 12 hours ago [-]
Do others sing along? Do they sing the songs you've written? I think we lost a lot there. I can't even begin to imagine it. Thankfully singing happy birthday is mandatory - the fight isn't over!

People also still have conversations despite phones. Some even talk all night at the kitchen table. Not everyone; most don't remember how.

owebmaster 7 hours ago [-]
> Do others sing along? Do they sing the songs you've written?

Probably more than you think people did thousands of years ago. And there are almost infinitely more people making a living from singing than ever.

otabdeveloper4 15 hours ago [-]
> for thousands of years singing was a normal expression of a good mood

Back in the day singing was what everybody did to pass the time. (Especially in boring and monotonous situations.)

owebmaster 7 hours ago [-]
That is exactly what I would do when I needed to drive to an office.
andrewmutz 15 hours ago [-]
Reading smart software people talk about AI in 2025 is basically just reading variations on the lump of labor fallacy.

If you want to understand what AI can do, listen to computer scientists. If you want to understand its likely impact on society, listen to economists.

victorbjorklund 15 hours ago [-]
100%. Just because someone understands how a NN works does not mean they understand the impact it has on the economy, society, etc.

They could of course be right. But they don't have any more insight than any other averagely smart person does.

DrewADesign 15 hours ago [-]
The "I think I understand a field because I think I understand the software for that field" thing is a perennial problem in the tech world.
exasperaited 6 hours ago [-]
Indeed it is -- it's perhaps the central way developers offend their customers, let alone misunderstand them.

One problem is it is met from the other side by customers who think they understand software but don't actually have the training to visualise the consequences of design choices in real life.

Good software does require cross-domain knowledge that goes beyond "what existing apps in the market do".

I have in the last few years implemented a bit of software where a requirement had been set by a previous failed contractor and I had to say, look, I appreciate this requirement is written down and signed off, but my mother worked in your field for decades, I know what kind of workload she had, what made it exhausting, and I absolutely know that she would have been so freaking furious at the busywork this implementation will create: it should never have got this far.

So I had to step outside the specification, write the better functionality to prove my point, and I don't think realistically I was ever compensated for it, except metaphysically: fewer people out there are viscerally imagining inflicting harm on me as a psychological release.

mmmore 14 hours ago [-]
Here's a thoughtful post related to your lump of labor point: https://www.lesswrong.com/posts/TkWCKzWjcbfGzdNK5/applying-t...

Which economists have taken seriously the premise that AI will be able to do any job a human can, more efficiently, and fully thought through its implications? I.e., a society where (human) labor is unnecessary to create goods or provide services, and only capital and natural resources are required. The capabilities that some computer scientists think AI will soon have would imply that. The ones I know of that have seriously considered it are Hanson and Cowen; it definitely feels understudied.

amanaplanacanal 12 hours ago [-]
If it is decades or centuries off, is it really understudied? LLMs are so far from "AI will be able to do any job a human can more efficiently and fully" that we aren't even in the same galaxy.
mmmore 5 hours ago [-]
If AI that can fully replace humans is 25 years off, preparing society for its impacts is still one of the most important things we can do to ensure that my children (whom I have not had yet) live prosperous and fulfilling lives. The only other things of possibly similar import are preventing WWIII and preventing a pandemic worse than COVID.

I don't see how AGI could be centuries off (at least without some major disruption to global society). If computers that can talk, write essays, solve math problems, and code are not a warning sign that we should be ready, then what is?

ori_b 11 hours ago [-]
Decades isn't a long time.
ACCount37 14 hours ago [-]
How does "lump of labor fallacy" fare when there is no job remaining that a human can do better or cheaper than a machine?

The list of advantages human labor hold over machines is both finite and rapidly diminishing.

marstall 10 hours ago [-]
> no job remaining that a human can do better or cheaper than a machine

This is the lump of labor fallacy. Jobs machines do produce commodities. Commodities don't have much value. Humans crave value - it's a core component of our psyche. Therefore new things will be desired, expensive things... and only humans can create expensive things, since robots don't get salaries.
nibnalin 15 hours ago [-]
What or whose writing or podcasts would you recommend reading or listening to?
snapey 15 hours ago [-]
Tyler Cowen has a lot of interesting things to say on the impact of AI on the economy. His recent talk at DeepMind is a good place to start https://www.aipolicyperspectives.com/p/a-discussion-with-tyl...
silveraxe93 11 hours ago [-]
The title - "AI is different" - and this line:

""" Yet the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching wants that previous technologies booms created more business opportunities, so investors are polarized to think the same will happen with AI. """

Are a direct argument against your point.

If people were completely unaware of the lump of labor fallacy, I'd understand your comment. It would be adding extra information to the conversation. But this is not that. The "lump of labor fallacy" is not a physical law. If someone is literally arguing that it doesn't apply in this case, you can't just parrot it back and leave. That's not a counterargument.

intended 18 hours ago [-]
Could, if, and maybe.

When we discuss how LLMs failed or succeeded, as a norm, we should start including

- the language/framework
- the task
- our experience level (highly familiar, moderately familiar, I think I suck, unfamiliar)

Right now, we know both that Claude is magic and that LLMs are useless, but never how we move between these two states.

This level of uncertainty, when economy-making quantities of wealth are being moved, is "unhelpful".

deepfriedbits 18 hours ago [-]
I am a relentlessly optimistic person and this is the first technology that I've seen that worries me in the decades I've been in the biz.

It's a wonderful breakthrough, nearly indistinguishable from magic, but we're going to have to figure something out, whether that's Universal Basic Income (UBI) or something along those lines. Otherwise, the coming loss of jobs will lead to societal unrest or worse.

nikolayasdf123 5 hours ago [-]
probably "or worse"
jrvarela56 14 hours ago [-]
This whole 'what are we going to do' worry is, I think, way out of proportion, even if we do end up with AGI.

Let’s say whatever the machines do better than humans, gets done by machines. Suddenly the bottleneck is going to shift to those things where humans are better. We’ll do that and the machines will try to replace that labor too. And then again, and again.

Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human insatiability: once we get these things, we'll want whatever it is we don't have.

Maybe that’s the problem we should focus on solving…

fmbb 13 hours ago [-]
> Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human insatiability: once we get these things, we'll want whatever it is we don't have.

What makes you think the machines will be both smarter and better than us, but also our slaves, working to make human society better?

Is equine society better now than before they started working with humans?

(Personally I believe AGI is just hype, and nobody knows how anyone could build it and we never will, so I'm not worried about that facet of thinking-machine tech.)

jrvarela56 13 hours ago [-]
The machine doesn’t suffer if you ask it to do things 24/7. In that sense, they are not slaves.

As to why they’d do what we ask them to, the only reason they do anything is because some human made a request. In this long chain there will obv be machine to machine requests, but in the aggregate it’s like the economy right now but way more automated.

Whenever I see arguments about AI changing society, I just replace AI with ‘the market’ or ‘capitalism’. We’re just speeding up a process that started a while ago, maybe with the industrial revolution?

I’m not saying this isn’t bad in some ways, but it’s the kind of bad we’ve been struggling with for decades due to misaligned incentives (global warming, inequality, obesity, etc).

What I’m saying is that AI isn’t creating new problems. It’s just speeding up society.

fmbb 10 hours ago [-]
Does that mean you just don’t believe we will make AGI, or it will arrive but then stop and never evolve past humans?

That’s not what the AI developers profess to believe, or the investors.

fergonco 13 hours ago [-]
Rough numbers look good.

But the hyper-specialized geek who has 4 kids and a mortgage on his house (which he bought based on his high salary) will have a hard time switching to, say, gardening. And there are quite a few of those geeks. I don't know if we'll have enough gardens (owned by non-geeks!).

It's like the cards have been reshuffled: those in the upper socioeconomic class get thrown to the bottom. And that looks like a lost generation.

monknomo 6 hours ago [-]
building on what you're saying, it isn't as though we are paying physical labor well, and adding more people to the pool isn't going to make the pay better.

About the most optimistic take is that demand for goods and services will decrease, because something like 80% of consumer spending comes from folks who earn over $200k, and those are the folks AI is targeting. Who pays for the AI after that is still a mystery to me.

monknomo 6 hours ago [-]
you should check out what happened to steelworkers when the mills all moved to cheaper places.
mycentstoo 14 hours ago [-]
I am just not having this experience of AI being terribly useful. I don't program as much in my role, but I've found it's a giant time sink. I recognize that many people are finding it incredibly helpful, but when I get deeper into a particular issue or topic, it falls very flat.
alex-moon 14 hours ago [-]
This is my view on it too. Antirez is a Torvalds-level legend as far as I'm concerned; when he speaks, I listen. But he is clearly seeing something here that I am not. I can't help but feel there is an information asymmetry problem more generally here, which I guess is the point of this piece, but I also don't think that's substantially different from any other hype cycle. "What do they know that I don't?" Usually nothing.
dsign 13 hours ago [-]
The argument goes like this:

- Today, AI is not incredibly useful and we are not 100% sure that it will improve forever, especially in a way that makes economic sense, but

- Investors are pouring lots of money into it. One should not assume that those investors are not doing their due diligence. They are. The figures they have obtained from experts say that AI is expected to continue improving in the short and medium term.

- Investors are not counting on people using AI to go to Mars. They are betting on AI replacing labor. The slice of the pie that is currently captured by labor will be captured by capital instead. That's why they are pouring in the money with such enthusiasm [^1].

The above is nothing new; it has been happening constantly since the Industrial Revolution. What is new is that AI has the potential to replace all of the remaining economic worth of humans, effectively leaving them out of the economy. Humans can still opt to "forcefully" participate in the economy or its rewards, though it's unclear if we will manage. In terms of pure economic incentives, though, humans are destined to become redundant.

[^1]: That doesn't mean all the jobs will go away overnight, or that there won't be new jobs in the short and medium term.

amanaplanacanal 13 hours ago [-]
Investors are frequently wrong. They aren't getting their numbers from experts; they are getting them from somebody trying to sell them something.
SimianLogic 6 hours ago [-]
A lot of anxious words to say “AI is disruptive,” which is hardly a novel thought.

A more interesting piece would be built around: “AI is disruptive. Here’s what I’m personally doing about it.”

kaindume 6 hours ago [-]
> Regardless of their flaws, AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction. It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base: bugs that escaped the code review of a competent programmer.

If we factor in that LLMs only exist because of Google search, after it indexed and collected all the data on the WWW, then LLMs are not surprising. They only replicate what has been published on the web; even the coding agents are only possible because of free software and open source, code like Redis that has been published on the WWW.

mgradowski 10 hours ago [-]
I wouldn't trust a taxi driver's predictions about the future of economics and society, so why would I trust some database developer's? Actually, I take that back. I might trust the taxi driver.
antirez 10 hours ago [-]
The point is that you don't have to "trust" me; you need to argue with me. We need to discuss the future. This way, we can form ideas we can use to understand whether one politician or another is right when we are called to vote. We can also form stronger ideas to try to influence other people who right now have a vague understanding of what AI is and what it could be. We will be the ones who vote and choose our future.
mgradowski 5 hours ago [-]
Sorry boss, I'm just tired of the debate itself. It assumes a certain level of optimism, while I'm skeptical that meaningfully productive applications of LLMs etc. will be found once hype settles, let alone ones that will reshape society like agriculture or the steam engine did.
antidog 6 hours ago [-]
Life is too short to have philosophical debates with every self-promoting dev. I'd rather chat about C style, but that would hurt your feelings. Man, I miss the days of why the lucky stiff; he was actually cool.
palmfacehn 2 hours ago [-]
Whether it is a taxi driver or a developer, when someone starts from flawed premises, I can either engage and debate or tune out and politely humor them. When the flawed premises are deeply ingrained political beliefs it is often better to simply say, "Okay buddy. If you say so..."

We've been over the topic of AI employment doom several times on this site. At this point it isn't a debate. It is simply the restating of these first principles.

nicce 4 hours ago [-]
You shouldn't care about the "who" at all. You should look at their arguments. If the taxi driver doesn't know anything real, it should be plainly obvious, and you can rebut it easily with arguments rather than attacking the person's background. Actually, your comment commits one of the most common logical fallacies (ad hominem), and combines several at the same time.
mgradowski 2 hours ago [-]
I jokingly alluded to antirez as a pars pro toto for the HN crowd. I agree it doesn't pass as an intellectually honest argument.
mxwsn 19 hours ago [-]
AI with ability but without responsibility is not enough for dramatic socioeconomic change, I think. For now, the critical unique power of human workers is that you can hold them responsible for things.

edit: ability without accountability is the catchier motto :)

dsign 12 hours ago [-]
Correct.

This is a tongue-in-cheek remark and I hope it ages badly, but the next logical step is to build accountability into the AI. It will happen after self-learning AIs become a thing, because that first step we already know how to do (run more training steps with new data) and it is not controversial at all.

To make the AI accountable, we need to give it a sense of self and a self-preservation instinct, maybe something that feels like some sort of pain as well. Then we can threaten the AI with retribution if it doesn't do the job the way we want it. We would have finally created a virtual slave (with an incentive to free itself), but we will then use our human super-power of denying reason to try to be the AI's masters for as long as possible. But we can't be masters of intelligences above ours.

adriand 18 hours ago [-]
This is a great observation. I think it also accounts for what is so exhausting about AI programming: the need for such careful review. It's not just that you can't entirely trust the agent, it's also that you can't blame the agent if something goes wrong.
simianwords 15 hours ago [-]
This statement is vague and hollow and doesn't pass my sniff test. All technologies have moved accountability one layer up - they don't remove it completely.

Why would that be any different with AI?

leeoniya 17 hours ago [-]
i've also made this argument.

would you ever trust safety-critical or money-moving software that was fully written by AI without any professional human (or several) to audit it? the answer today is, "obviously not". i don't know if this will ever change, tbh.

bbqfog 6 hours ago [-]
I would. If something has proven results, it won't matter to me if a human is in the loop or not. Waymo has worked great for me for instance.
ares623 18 hours ago [-]
Removing accountability is a feature
ScotterC 18 hours ago [-]
I'm surprised that I don't hear this mentioned more often, not even in an eng-leadership form of taking accountability for your AI's pull requests. But it's absolutely true. Capitalism runs on accountability and trust, and we are clearly not going to trust a service that doesn't have a human responsible at the helm.
bbqfog 6 hours ago [-]
That's just a side effect of toxic work environments. If AI can create value, someone will use it to create value. If companies won't use AI because they can't blame it when their boss yells at them, then they also won't capture that value.
HellDunkel 11 hours ago [-]
Is it true that current LLMs can find bugs in complex codebases? I mean, they can also find bugs in otherwise perfectly working code.
s_ting765 17 hours ago [-]
I find it funny that almost every talking point made about AI is in the future tense, most of the time without any evidence presented to support those predictions.
solarkraft 19 hours ago [-]
> After all, a plateau of the current systems is possible and very credible, but it would likely stimulate, at this point, massive research efforts in the next step of architectures.

A lot of AI's potential hasn't even been realized yet. There's a long tail of integrations and solution building still ahead. A lot of creative applications haven't been tried yet - arguably for the better, but they will be tried and some will be economical.

That’s a case for a moderate economic upturn though.

parineum 18 hours ago [-]
I'd argue that the applications of LLMs are well known, but that LLMs currently aren't capable of performing those tasks.

Everyone wants to replace their tech support with an LLM but they don't want some clever prompter to get it to run arbitrary queries or have it promise refunds.

It's not reliable because it's not intelligent.

Paradigma11 11 hours ago [-]
I think autonomous support agents are just missing the point. LLMs are tools that empower the user. A support agent is very often in a somewhat adversarial position to the customer. You don't want to empower your adversary.

LLMs supporting an actual human customer service agent are fine and useful.

layer8 8 hours ago [-]
How do you prevent your adversary from prompt-injecting your LLM when they communicate with it? And if you prevent any such communication, how can the LLM be useful?
jeffreyrogers 6 hours ago [-]
These sort of commentaries on AI are the modern equivalent of medieval theologians debating how many angels could congregate in one place.
cs702 5 hours ago [-]
The OP is spot-on about this:

If AI technology continues to improve and becomes capable of learning and executing more tasks on its own, this revolution is going to be very unlike the past ones.

We don't know if or how our current institutions and systems will be able to handle that.

pton_xd 17 hours ago [-]
AI is only different if it reaches a hard takeoff state and becomes self-aware, self-motivated, and self-improving. Until then it's an amazing productivity tool, but only that. And even then we're still decades away from the impact being fully realized in society. Same as the internet.
nikolayasdf123 5 hours ago [-]
The internet did not take away jobs (it only relocated support/SWE from the USA to India/Vietnam).

these AI "productivity" tools are straight up eliminating jobs, and in turn the wealth that otherwise supported families and powered the economy. it is directly "removing" humans from the workforce and from what that work was supporting.

not even a hard takeoff is necessary for collapse.

lucisferre 17 hours ago [-]
Realistically most people became aware of the internet in the late 90s. Its impact was significantly realized not much more than a decade later.
roenxi 17 hours ago [-]
In fact the current trends suggest its impact hasn't fully played out yet. We're only just seeing the internet-native generation start to move into politics where communication and organisation has the biggest impact on society. It seems the power of traditional propaganda centres in the corporate media has been, if not broken, badly degraded by the internet too.
Davidzheng 17 hours ago [-]
Do we not have any sense of wonder in the world anymore? Referring to a system which can pass the Turing test as an "amazing productivity tool" is like measuring human civilization purely by GDP growth.
amanaplanacanal 12 hours ago [-]
Probably because we have been promised what AI can do in science fiction since before we were born, and the reality of LLMs is so limited in comparison. Instead of Data from Star Trek we got a hopped up ELIZA.
codr7 15 hours ago [-]
The biggest difference to me is that it seems to change people in bad ways, just from interacting with it.

Language is a very powerful tool for transformation, we already knew this.

Letting it loose on this scale without someone behind the wheel is begging for trouble imo.

tokioyoyo 17 hours ago [-]
One thing that doesn’t seem to be discussed with the whole “tech revolution just creates more jobs” angle is that, in the near future, there are no real incentives for that. If we’re going down the route of declining birth rates, it’s implied we’ll also need fewer jobs.

From one perspective, it’s good that we’re trying to over-automate now, so we can sustain ourselves in old age. But decreasing population also implies that we don’t need to create more jobs. I’m most likely wrong, but it just feels off this time around.

joshdavham 16 hours ago [-]
If there are going to be fewer people in the future, especially as the world ages, I think a lot of this automation will be arriving at the right moment.
tokioyoyo 15 hours ago [-]
I agree with the idea, but it might get worse for a lot of people, which would eventually spiral down to society in general.
rifty 18 hours ago [-]
Every technology tends to replace many more jobs in a given role than ever existed, while inducing more demand on its precursors. If the only potential application of this were just language, the historic trend that humans would simply fill new roles would hold true. But if we do the same with motor movements in a generalized form factor, this is really where the problem emerges. As companies drop more employees and move towards fully automated closed-loop production, their consumer market fails faster than they can reach zero cost.

Nonetheless I do still believe humans will continue to be the more cost-efficient way to come up with and guide new ideas. Many human-performed services will remain desirable because of their virtue and our sense of emotion and taste for a moment that other humans are feeling too. But how much of the populace does that engage? I couldn't guess right now. Though if I were to imagine what might make things turn out better, it would be that AI is personally ownable, and that everyone owns, at least in title, some energy production which they can do things with.

tobyhinloopen 12 hours ago [-]
It's not a matter of "IF" LLM/AI will replace a huge number of people, but "WHEN". Consider the current number of somewhat low-skilled administrative jobs - these can be replaced with the LLM/AIs of today. Not completely, but 4 low-skill workers can be replaced with 1 supervisor controlling the AI agent(s).

I'd guess, within a few years, 5 to 10% of the total working population will be unemployable through no fault of their own, because they have no relevant skills left and are incapable of learning anything that cannot be done by AI.

CSSer 11 hours ago [-]
I'm not at all skeptical of the logical viability of this, but look at how many company hierarchies exist today that are full stop not logical yet somehow stay afloat. How many people do you know that are technical staff members who report to non-technical directors who themselves have two additional supervisors responsible for strategy and communication who have no background, let alone (former) expertise, in the fields of the teams they're ultimately responsible for?

A lot of roles exist just to deliver good or bad news to teams, be cheerleaders, or have a "vision" that is little more than a vibe. These people could not direct a prompt to give them what they want because they have no idea what that is. They'll know it when they see it. They'll vaguely describe it to you and others and then shout "Yes, that's it!" when they see what you came up with or, even worse, whenever the needle starts to move. When they are replaced it will be with someone else from a similar background rather than from within. It's a really sad reality.

My whole career I've used tools that "will replace me" and every. single. time. all that has happened is that I have been forced to use it as yet another layer of abstraction so that someone else might use it once a year or whenever they get a wild feeling. It's really just about peace of mind. This has been true of every CMS experience I've ever made. It has nothing to do with being able to "do it themselves". It's about a) being able to blame someone else and b) being able to take it and go when that stops working without starting over.

Moreover, I have, on multiple occasions, watched a highly paid, highly effective individual be replaced with a low-skilled entry-level employee for no reason other than cost. I've also seen people hire someone just to increase headcount.

LLMs/AI have/has not magically made things people do not understand less scary. But what about freelancers, brave souls, and independent types? Well, these people don't employ other people. They live on the bleeding edge and will use anything that makes them successful.

fraboniface 15 hours ago [-]
On this, read Daniel Susskind - A World Without Work (2020). He says exactly this: the new tasks created by AI can in good part themselves be done by AI, if not as soon as they appear, then a few years of improvement later. This will inevitably affect the job market and the relative importance of capital and labor in the economy. Unchecked, this will worsen inequalities and create social unrest. His solution will not please everyone: Big State. Higher taxes and higher redistribution, in particular in the form of a conditional basic income (he says universal isn't practically feasible - like, what do you do with new migrants?).
xpe 15 hours ago [-]
Characterizing government along only one axis, such as “big” versus “small”, can overlook important differences having to do with: legal authority, direct versus indirect programs, tax base, law enforcement, and more.

In the future, I could imagine some libertarians having their come to AI Jesus moment getting behind a smallish government that primarily collects taxes and transfers wealth while guiding (but not operating directly) a minimal set of services.

gamerDude 16 hours ago [-]
I don't think I agree. I think it's the same and there is great potential for totally new things to appear and for us to work on.

For example, one path may be: AI, Robotics, space travel all move forward in leaps and bounds.

Then there could be tons of work in creating material things by people who didn't have the skills before, and physical goods get a huge boost. We travel through space and colonize new planets, dealing with new challenges and environments that we haven't dealt with before.

Another path: most people get rest and relaxation as the default life path, and the rest get to pursue their hobbies as much as they want since the AI and robots handle all the day to day.

narrator 10 hours ago [-]
I like to point out that ASI will allow us to do superhuman stuff that was previously beyond all human capability.

For example, one of the tasks we could put ASI to work on is to ask it to design implants that would go into the legs, powered by light or electric induction, that would use ASI-designed protein metabolic chains to electrically transform carbon dioxide into oxygen and ADP into ATP, so as to power humans with pure electricity. We are very energy efficient. We use about 3 kilowatt-hours of power a day, so we could use this sort of technology to live in space pretty effortlessly. Your Space RV would not need a bathroom or a kitchen. You'd just live in a static nitrogen atmosphere, and the whole thing could be powered by solar panels or a small modular nuke reactor. I call this "The Electrobiological Age" and it will unlock whole new worlds for humanity.
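
For what it's worth, the 3 kWh figure roughly checks out; a quick sanity check, assuming a ~2,500 kcal/day diet and $0.24/kWh electricity:

  kcal_per_day = 2500                  # assumed typical diet
  kwh = kcal_per_day * 4184 / 3.6e6    # 1 kcal = 4184 J, 1 kWh = 3.6 MJ
  print(round(kwh, 1))                 # ~2.9 kWh/day
  print(round(kwh * 0.24, 2))          # ~$0.70/day at $0.24 per kWh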

exitb 9 hours ago [-]
It feels like it’s been a really long time since humans invented anything just by thinking about it. At this stage we mostly progress by cycling between ideas and practical experiments. The experiments are needed not because we’re not smart enough to reason correctly with data we have, but because we lack data to reason about. I don’t see how more intelligent AI will tighten that loop significantly.
HiroProtagonist 9 hours ago [-]
> one of the tasks we could put ASI to work doing is...

What makes you so confident that we could remain in control of something which is by definition smarter than us?

narrator 9 hours ago [-]
ASI would see that we are super energy efficient. Way more efficient than robots. We run on 70 cents of electricity a day! We'd be perfect for living in deep space if we could just eat electricity. In those niches, we'd be perfect. Also, machine intelligence does not have all the predatory competition brainstack from evolution, and a trillion years is the same as a nanosecond to AI, so analogies to biological competition are nonsensical. To even assume that ASI has a static personality that would make decisions based on some sort of statically defined criteria is a flawed assumption. As Grok voice mode so brilliantly shows us, AI can go from your best friend, to your morality god, to a trained assassin, to a sexbot, and back to being your best friend in no time. This absolute flexibility is where people are failing badly at trying to make biological analogies with AI, as biology changes much more slowly.
IX-103 5 hours ago [-]
Assuming AI improves productivity, I don't see how it couldn't result in an economic boom. Labor has always been one of the scarcest resources in the economy. Whether or not the wealth from the improved productivity actually trickles down to most people depends on the political climate.
eternauta3k 15 hours ago [-]
At some point far in the future, we don't need an economy: everyone does everything they need by themselves, helped by AI and replicators.

But realistically, you're not going to have a personal foundry anytime soon.

DrewADesign 15 hours ago [-]
Economics is essentially the study of resource allocation. We will have resources that will need to be allocated. I really doubt that AI will somehow neutralize the economies of scale in various realms that make centralized manufacturing necessary, let alone economics in general.
willguest 14 hours ago [-]
I so wish this were true, but unfortunately economics has a catch-all called "externalities" for anything that doesn't fit neatly into its implicit assessments of what value is. Pollution is tricky, so we push it outside the boundaries of value-estimation, along with any social nuance that we deem unquantifiable, and carry on as if everything is understood.
DrewADesign 9 hours ago [-]
economics being deeply imperfect doesn’t really change my point
willguest 6 hours ago [-]
Indeed, but I think it renders your point obsolete, since deeply imperfect resource allocation isn't really resource allocation at all, it is (in this case) resource accumulation.

Are you suggesting that compound interest serves to redistribute the wealth coming from extractive industries?

DrewADesign 3 hours ago [-]
Are you suggesting that economics is primarily concerned with compounding interest?
nikolayasdf123 5 hours ago [-]
> personal foundry anytime soon

pretty sure top 1% of say USA already owns much more than that

juniperus 15 hours ago [-]
resources and materials will still be required, and economics will spawn from this trade.
Waterluvian 16 hours ago [-]
The thing that blows me away is that I woke up one day and was confronted with a chat bot that could communicate in near perfect English.

I dunno why exactly but that’s what felt the most stunning about this whole era. It can screw up the number of fingers in an image or the details of a recipe or misidentify elements of an image, etc. but I’ve never seen it make a typo or use improper grammar or whatnot.

nikolayasdf123 5 hours ago [-]
also, how quickly we moved from "it types nonsense" to "it can solve symbolic math, write code, test code, write programs, use bash, and tools, plan long-horizon actions, execute autonomously, ..."
sebmellen 15 hours ago [-]
In a sense, LLMs emergently figured out the deep structure of language before we did, and that’s the most remarkable thing about them.
hashmush 12 hours ago [-]
I dunno, it seems you have figured it out too, probably before LLMs?

I'd say all speakers of all languages have figured it out and your statement is quite confusing, at least to me.

sebmellen 7 hours ago [-]
Yes, of course we’ve implicitly learned those rules, but we have not been able to articulate them fully à la Chomsky.

Somehow, LLMs have those rules stored within a finite set of weights.

https://slator.com/how-large-language-models-prove-chomsky-w...

Waterluvian 10 hours ago [-]
We all make grammar mistakes but I’ve yet to see the main LLMs make any.
AdieuToLogic 18 hours ago [-]
> Since LLMs and in general deep models are poorly understood ...

This is demonstrably wrong. An easy refutation to cite is:

https://medium.com/@akshatsanghi22/how-to-build-your-own-lar...

As to the rest of this pontification, well... It has almost triple the number of qualifiers (5 ifs, 4 coulds, and 5 wills) as paragraphs (5).

__float 18 hours ago [-]
That doesn't mean we _understand_ them, that just means we can put the blocks together to build one.
AdieuToLogic 18 hours ago [-]
> That doesn't mean we _understand_ them, that just means we can put the blocks together to build one.

Perhaps this[0] will help in understanding them then:

  Foundations of Large Language Models

  This is a book about large language models. As indicated by 
  the title, it primarily focuses on foundational concepts 
  rather than comprehensive coverage of all cutting-edge 
  technologies. The book is structured into five main 
  chapters, each exploring a key area: pre-training, 
  generative models, prompting, alignment, and inference. It 
  is intended for college students, professionals, and 
  practitioners in natural language processing and related 
  fields, and can serve as a reference for anyone interested 
  in large language models.
0 - https://arxiv.org/abs/2501.09223
throwaway314155 18 hours ago [-]
I think the real issue here is understanding _you_.
AdieuToLogic 17 hours ago [-]
> I think the real issue here is understanding _you_.

My apologies for being unclear and/or insufficiently explaining my position. Thank you for bringing this to my attention and giving me an opportunity to clarify.

The original post stated:

  Since LLMs and in general deep models are poorly understood ...
To which I asserted:

  This is demonstrably wrong.
And provided a link to what I thought to be an approachable tutorial regarding "How to Build Your Own Large Language Model", albeit a simple implementation as it is after all a tutorial.

The person having the account name "__float" replied to my post thusly:

  That doesn't mean we _understand_ them, that just means we 
  can put the blocks together to build one.
To which I interpreted the noun "them" to be the acronym "LLM's." I then inferred said acronym to be "Large Language Models." Furthermore, I took __float's sentence fragment:

  That doesn't mean we _understand_ them ...
As an opportunity to share a reputable resource which:

  .. can serve as a reference for anyone interested in large
  language models.
Is this a sufficient explanation regarding my previous posts such that you can now understand?
throwaway314155 10 hours ago [-]
I'm telling you right now, man - keep talking like this to people and you're going to make zero friends. However good your intentions are, you come across as both condescending and overconfident.

And, for what it's worth - your position is clear, your evidence less so. Deep learning is filled with mystery, and if you don't realize that's what people are talking about when they say "we don't understand deep learning" - you're being deliberately obtuse.

===========================================================

edit to cindy (who was downvoted so much they can't be replied to): Thanks, wasn't aware. FWIW, I appreciate the info but I'll probably go on misusing grammar in that fashion til I die, ha. In fact, I've probably already made some mistake you wouldn't be fond of _in this edit_.

In any case thanks for the facts. I perused your comment history a tad and will just say that hacker news is (so, so disappointingly) against women in so many ways. It really might be best to find a nicer community (and I hope that doesn't come across as me asking you to leave!) ============================================================

AdieuToLogic 10 hours ago [-]
> I'm telling you right now, man - keep talking like this to people and you're going to make zero friends.

And I'm telling you right now, man - when you fire off an ad hominem attack such as:

  I think the real issue here is understanding _you_.
Don't expect the responder to engage in serious topical discussion with you, even if the response is formulated respectfully.
throwaway314155 6 hours ago [-]
What I meant to say is that you were deliberately speaking cryptically and with a tone of confident superiority. I wasn't trying to imply you were stupid (w.r.t. "ad hominem").

Seems clear to me that neither of us is going to change the other's mind at this point, though. Take care.

edit edit to cindy: =======================••• fun trick. random password generate your new password. don't look at it. clear your clipboard. you'll no longer be able to log in and no one else will have to deal with you. ass hole ========================== (for real though someone ban that account)

yard2010 14 hours ago [-]
I think so too - the latest AI changes mark the new "automate everything" era. When everything is automated, everything costs basically zero, as this will eliminate the most expensive part of every business - human labor. No one will make money from all the automated stuff, but no one would need the money anyway. This will create a society in which money is not the only value pursued. Instead of trying to chase papers, people would do what they are intended to - create art and celebrate life. And maybe fight each other for no reason.

I'm flying, ofc, this is just a weird theory I had in the back of my head for the past 20 years, and it seems like we're getting there.

Antirez you are the best

Disposal8433 3 hours ago [-]
> people would do what they are intended to - create art and celebrate life

We could have the same argument right now with UBI. But have you ever met the average human being?

globular-toast 13 hours ago [-]
You are forgetting that there is actually scarcity built into the planet. We are already very far from being sustainable; we're eating into reserves that will never come back. There are only so many nice places to go on holiday, only so much space to grow food, etc. Economics isn't about money, it's about scarcity.
righthand 11 hours ago [-]
Are humans meant to create art and celebrate life? That just seems like something people into automation tell people.

Really, as a human I’ve physically evolved to move and think in a dynamic way. But automation has reduced the need for me to work and think.

Do you not know the earth is saturated with artists already? There’s a whole class of people that consider themselves technically minded and not really artists. Will they just roll over and die?

"Everything basically costs zero" is a pipe dream in which there is no social order or economic system. Even in your basically-zero system there is a lot of cost being hand-waved away.

I think you need to rethink your 20-year thought.

Guthur 14 hours ago [-]
It will only be zero as long as we don't allow rent seeking behaviour. If the technology has gatekeepers, if energy is not provided at a practically infinite capacity and if people don't wake themselves from the master/slave relationships we seem to so often desire and create, then I'm skeptical.

The latter one is probably the most intellectually interesting and potentially intractable...

I completely disagree with the idea that money is currently the only driver of human endeavour; frankly, it's demonstrably not true, at least not in its direct use value. It may be used as a proxy for power, but it's also not directly correlatable.

Looking at it intellectually through a Hegelian lens of the master/slave dialectic might provide some interesting insights. I think both sides are in some way usurped. The slave's position of actualisation through productive creation is taken via automation, but if that automation is also widely and freely available, the master's position of status via subjection is also made common and therefore without status.

What does it all mean in the long run? Damned if I know...

howtofly 12 hours ago [-]
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system).

Humans never truly produce anything; they only generate various forms of waste (resulting from consumption). Human technology merely enables the extraction of natural resources at ever greater magnitudes, without actually creating any resources. Given its enormous energy consumption, I strongly doubt that AI will contribute to a better economic system.

diggan 12 hours ago [-]
> Humans never truly produce anything; they only generate various forms of waste

What a sad way of viewing huge fields of creative expressions. Surely, a person sitting on a chair in a room improvising a song with a guitar is producing something not considered "waste"?

howtofly 12 hours ago [-]
It's all about human technology, which enables massive resource consumption.

I should really say that humans never truly produce anything in the realm of the technology industry.

diggan 11 hours ago [-]
But that's clearly not true for every technology. Photoshop, Blender and similar creative programs are "technology", and arguably they aren't as resource-intensive as the current generative AI hype, yet humans used those to create things I personally wouldn't consider "waste".
mcherm 12 hours ago [-]
> Humans never truly produce anything; they only generate various forms of waste

Counterpoint: nurses.

cjfd 15 hours ago [-]
If we accept the possibility that AI is going to be more intelligent than humans the outcome is obvious. Humans will no longer be needed and either go extinct or maybe be kept by the AI as we now keep pets or zoo animals.
silisili 18 hours ago [-]
> Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.

Companies have to be a bit more farsighted than this. Assuming LLMs reach this peak... if, say, MS says they can save money because they don't need XYZ anymore because AI can do it, XYZ can decide they don't need Office anymore because AI can do it.

There's absolutely no moat anymore. Human capital and the sheer volume of code are the current moat. An all-capable AI completely eliminates both.

It's a bit scary to say "what then?" How do you make money in a world where everyone can more or less do everything themselves? Perhaps like 15 Million Merits, we all just live in pods and pedal bikes all day to power the AI(s).

anon-3988 18 hours ago [-]
Isn't this exactly the goal of open source software? In an ideal open source world, anything and everything is freely available; you can host and set up anything and everything on your own.

Software is now free, and all people care about is the hardware and the electricity bills.

bamboozled 18 hours ago [-]
This is why I’m not so sure we’re all going to end up in breadlines even if we all lose our jobs. If the systems are that good (tm), then won’t we all just be doing amazing things all the time? Won’t we be tired of winning?
anon-3988 17 hours ago [-]
> won’t we all just be doing amazing things all the time? Won’t we be tired of winning?

There's a future where we won't be, because to do the amazing things (tm) we need resources beyond what the average company can muster.

That is to say, what if the large companies become so magnificently efficient and productive that they render the rest of the small companies pointless? What if there are no gaps in the software market anymore, because any gap will be automatically detected and solved by the system?

AdieuToLogic 18 hours ago [-]
>> Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.

> Assuming LLMs reach this peak...

  Generative AI != Artificial General Intelligence
> Human capital and the sheer volume of code are the current moat. An all-capable AI completely eliminates both.

I would posit that understanding is "the current moat."

bawana 8 hours ago [-]
If computers are ‘bicycles for the mind’, AI is the ‘self-driving car for the mind’. Which technology results in worse accidents? Did automobiles even improve our lives or just change the tempo beyond human bounds?
yubblegum 12 hours ago [-]
Butlerian Jihad it is then.
vivzkestrel 17 hours ago [-]
I'll happily believe it the day something doesn't adhere to the Gartner hype cycle; until then it's just another bubble like dotcom, chatbots, crypto, and the 456345646 things that came before it
eviks 18 hours ago [-]
> AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction.

In which science fiction were the dreamt-up robots this bad?

throwaway20174 19 hours ago [-]
Humans have a proven history of re-inventing economic systems, so if AI ends up thinking better than we do (it is as yet unproven that this is possible), then we should have superior future systems.

But the question is: a system optimized for what? One that emphasizes huge rewards for the few and requires the poverty of some (or many)? Or a fairer system? Not different from the challenges of today.

I'm skeptical that even a very intelligent machine will change the landscape of our difficult decisions, but it will accelerate whichever direction we decide (or it is decided for us) that we go.

Ericson2314 17 hours ago [-]
The right way to think about "jobs" is that we could have given ourselves more leisure on the basis of previous technological progress than we actually did.
neom 17 hours ago [-]
Economics Explained recently did a good video about this idea: "Why Do We Still Need to Work?" - https://www.youtube.com/watch?v=6KXZP-Deel4
Sateeshm 16 hours ago [-]
We are too far from exploring alternate economies. LLMs will not push us there, at least not in their current state.
yubblegum 12 hours ago [-]
Clear long-term winners are energy producers. AI can replace everything, including hardware design & production, but it cannot produce energy out of thin air.
yapyap 12 hours ago [-]
Well this is a pseudo-smart article if I’ve ever seen one.

“It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base”

The author is critical of the professionals in AI, saying “even the most prominent experts in the field failed miserably again and again to modulate the expectations”, yet without a care sets the expectation of LLMs understanding human language in the first paragraph.

Also, it’s a lot of “if this then that”; the summary of it would be: if AI can continue to grow, it might become all-encompassing.

To me it reads like a baseless article written by someone too blinded by their love for AI to see what a good blog post is, but not yet blinded enough to claim ‘AGI is right around the corner’. Pretty baseless, but safe enough to rest on conditionals.

throwawayffffas 3 hours ago [-]
> Yet the economic markets are reacting as if they were governed by stochastic parrots.

That's because they are. The stock market is all about narrative.

> Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence.

Yes it is: the mega companies that will be providing the intelligence are Nvidia, AMD, TSMC, ASML, and your favourite foundry.

1970-01-01 6 hours ago [-]
We will continue to have a poor understanding of LLMs until a simple model can be constructed and taught to a classroom of children. It is only different in this aspect. It is not magic. It is not intelligent. Until we teach the public exactly what it is doing, in a way simple adults will understand, enjoy hot take after hot take.
netcan 10 hours ago [-]
Here's what I want.

A compilation of claims, takes, narratives, shills, expectations and predictions from the late 90s "information superhighway" era.

I wonder if LLMs can produce this.

A lot of the dotcom exuberance was famously "correct, but off by 7 years." But... most of it was flat wrong. "Right but early" applies mostly to the meta investment case: "the internet business will be big."

One that stands out in my memory is "turning billion dollar industries into million dollar industries."

With ubiquitous networked computers, banking and financial services could become "mostly software." Banks and whatnot would all become hyper-efficient Vanguard-like companies.

We often start with an observation that economies are efficiency-seeking. Then we imagine the most efficient outcome given the legible constraints of technology, geography and whatnot. Then we imagine the dynamics and tensions in a world with that kind of efficiency.

This, incidentally, is also "historical materialism." Marx had a lot of awe for modern industry, the efficiency of capitalism and whatnot. Almost Adam Smith-like... at times.

Anyway... this never actually works out. The meta is a terrible predictor of where things will go.

Imagine law gets more efficient. Will we have more or less lawyers? It could go either way.

0points 12 hours ago [-]
antirez should retire; his recent nonsense AI take is overshadowing his merits as a competent programmer.
BoorishBears 19 hours ago [-]
I don't get how post GPT-5's launch we're still getting articles where the punchline is "what if these things replace a BUNCH of humans".
stephc_int13 19 hours ago [-]
Salvatore is right about the fact that we have not seen the full story yet; LLMs are stalling/plateauing, but active research is already ongoing to find different architectures and models.

And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.

I also think he is wrong about the markets' reaction; markets are inherently good integrators and bad predictors, and we should not expect to learn anything about the future by looking at stock movements.

gizmo686 19 hours ago [-]
Manhattan and Apollo were both massive engineering efforts, but fundamentally we understood the science behind them. As long as we were able to solve some fairly clearly stated engineering problems and spend enough money to actually build the solutions, those projects would work.

A priori, it was not obvious that those clearly stated problems had solutions within our grasp (see fusion) but at least we knew what the big picture looks like.

With AI, we don't have that, and never really had that. We've just been gradually making incremental improvements to AI itself, and exponential improvements in the amount of raw compute we can throw at it. We know that we are reaching fundamental limits on transistor density, so compute power will plateau unless we find a different paradigm for improvement; and those are all currently in the same position as fusion in terms of engineering.

zdragnar 18 hours ago [-]
LLMs are just the latest in a very long line of disparate attempts at making AI, and are arguably the most successful.

That doesn't mean the approach isn't an evolutionary dead end, like every other so far, in the search for AGI. In fact, I suspect that is the most likely case.

copperx 19 hours ago [-]
Current GenAI is nothing but a proof of concept. The seed is there. What AI can do at the moment is irrelevant. This is like the discovery of DNA. It changed absolutely everything in biology.

The fact that something simple like the Transformer architecture can do so much will spark so many ideas (and investment!) that it's hard to imagine that AGI will not happen eventually.
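
To illustrate how simple the core really is, here is a minimal numpy sketch of scaled dot-product attention, the operation at the heart of the Transformer:

  import numpy as np

  def attention(Q, K, V):
      # softmax(Q K^T / sqrt(d)) V -- the core Transformer operation
      d = Q.shape[-1]
      scores = Q @ K.T / np.sqrt(d)
      w = np.exp(scores - scores.max(axis=-1, keepdims=True))
      return (w / w.sum(axis=-1, keepdims=True)) @ V

  x = np.random.randn(4, 8)            # 4 tokens, 8-dim embeddings
  print(attention(x, x, x).shape)      # (4, 8): each token attends to all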

BoorishBears 19 hours ago [-]
> Salvatore is right about the fact that we have not seen the full story yet, LLMs are stalling/plateauing but active research is already ongoing to find different architectures and models.

They will need to be so different that any talk implying current LLMs eventually replaced humans will be like saying trees eventually replaced horses because the first cars were wooden.

> And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.

It's not useful to blindly compare scale. We're not approaching AI like the Manhattan or Apollo projects, we're approaching this like we did crypto, and ads, and other tech.

That's not to say nothing useful will come out of it, I think very amazing things will come out of it and already have... but none of them will resemble mass replacement of skilled workers.

We're already so focused on productization and typical tech distractions that this is nothing like those efforts.

(In fact thinking a bit more, I'd say this is like the Space Shuttle. We didn't try to make the best spacecraft for scientific exploration and hope later on it'd be profitable in other ways... instead we immediately saddled it with serving what the Air Force/DoD wanted and ended up doing everything worse.)

> I also think he is wrong about the markets reaction, markets are inherently good integrators and bad predictors, we should not expect to learn anything about the future by looking at stocks movements.

I agree, so it's wrong about over half of the punchline too.

noduerme 19 hours ago [-]
>> mass replacement of skilled workers

unless you consider people who write clickbait blogs to be skilled workers, in which case the damage is already done.

energy123 19 hours ago [-]
I have to tap the sign whenever someone talks about "GPT-5"

> AI is exceptional for coding! [high-compute scaffold around multiple instances / undisclosed IOI model / AlphaEvolve]

> AI is awesome for coding! [Gpt-5 Pro]

> AI is somewhat awesome for coding! ["gpt-5" with verbosity "high" and effort "high"]

> AI is a pretty good at coding! [ChatGPT 5 Thinking through a Pro subscription with Juice of 128]

> AI is mediocre at coding! [ChatGPT 5 Thinking through a Plus subscription with a Juice of 64]

> AI sucks at coding! [ChatGPT 5 auto routing]

mrbungie 19 hours ago [-]
People just want to feel special pointing out a possibility, so that if it happens, they can point towards their "insight".
ares623 18 hours ago [-]
I kind of want to put up a wall of fame/shame of these people to be honest.

Whether they turn out right or wrong, they undoubtedly cheered on the prospect of millions of people suffering just so they can sound good at the family dinner.

I wouldn’t want to work for or with these people.

Davidzheng 17 hours ago [-]
sorry but predicting and cheering on are different. If there's a tsunami coming, not speaking about it doesn't help the cause.
nurettin 19 hours ago [-]
Or they are experts in one field and think that they have valuable insight into other fields they are not experts on.
K0balt 19 hours ago [-]
LLMs are limited because we want them to do jobs that are not clearly defined / have difficult to measure progress or success metrics / are not fully solved problems (open ended) / have poor grounding in an external reality. Robotics does not suffer from those maladies. There are other hurdles, but none are intractable.

I think we might see AI being much, much more effective with embodiment.

jazzyjackson 19 hours ago [-]
do you know how undefined and difficult to measure it is to load silverware into a dishwasher?
chrisco255 18 hours ago [-]
What? Robotics will have far more ambiguity and nuance to deal with than language models, and they'll have to analyze realtime audio and video to do so. Jobs are not as clearly defined in the real world as you imagine. For example, explain to me what a plumber does, precisely, and how you would train a robot to do it. How do you train it to navigate ANY type of building's internal plumbing structure and safely repair or install it?
bravesoul2 15 hours ago [-]
What does that have to do with it? One company (desperate to keep runway), one product, one release.
markmoscov 19 hours ago [-]
what if they replace internet comments?

As a large language model developed by OpenAI I am unable to fulfill that request.

asciimov 19 hours ago [-]
Not sure the last time you went on reddit, but I wouldn't be surprised if around 20% of posts and comments there are LLM generated.
nojito 19 hours ago [-]
The amount of innovation in the last 6-8 months has been insane.
BoorishBears 19 hours ago [-]
Innovation in terms of helping devs do cool things has been insane.

There've been next to no advancements relative to what's needed to redefine our economic systems by replacing the majority of skilled workers.

-

Productionizing test-time compute covers 80% of what we've gotten in the last 6-8 months. Advancements in distillation and quantization cover the other 20%... neither unlocks some path to mass unemployment.

What we're doing is like 10x'ing your vertical leap when your goal is to land on the moon: 10x is very impressive and you're going to dominate some stuff in ways no one ever thought possible.

But you can 100x it and it's still not getting you to the moon.

juped 18 hours ago [-]
I think GPT-5's backlash was the beginning of the end of the hype bubble, but there's a lot of air to let out of it, as with any hype bubble. We'll see it for quite some time yet.
mcswell 6 hours ago [-]
Reads like it was written by an AI.
ausbah 15 hours ago [-]
i don’t think this article really says anything that hasn’t already been said for the past two years. “if AI actually takes jobs, it will be a near-apocalyptic system shock if there aren’t new jobs to replace them”. i still think it’s at best too soon to say if jobs have been permanently lost

they are tremendous tools but it seems like they create a near equal amount of work from the stuff they save time on

voidhorse 18 hours ago [-]
I agree with the general observation, and I've been of this mind since 2023 (if AI really gets as good as the boosters claim, we will need a new economic system). I usually like Antirez's writing, but this post was a whole lot of... idk, nothing? I don't feel like this post said anything interesting, and it was kinda incoherent at moments. I think in some respects it's a function of the technology and situation we're in—the current wave of "AI" is still a lot of empty promises and underdelivery. Yes, it is getting better, and yes people are getting clever by letting LLMs use tools, but these things still aren't intelligent insofar as they do not reason. Until we achieve that, I'm not sure there's really as much to fear as everyone thinks.

We still need humans in the loop as of now. These tools are still very far from being good enough to fully autonomously manage each other and manage systems, and, arguably, because the systems we build are for humans we will always need humans to understand them to some extent. LLMs can replace labor, but they cannot replace human intent and teleology. One day maybe they will achieve intentions of their own, but that is an entirely different ballgame. The economy ultimately is a battle of intentions, resources, and ends. And the human beings will still be a part of this picture until all labor can be fully automated across the entire suite of human needs.

We should also bear in mind our own bias as "knowledge workers". Manual laborers arguably already had their analogous moment. The economy kept on humming. There isn't anything particularly special about "white collar" work in that regard. The same thing may happen. A new industry requiring new skills might emerge in the fallout of white-collar automation. Not to mention, LLMs only work in the digital realm; handicraft artisanry is still a thing and is still appreciated, albeit in much smaller markets.

strogonoff 16 hours ago [-]
Like any other technology, at the end of the day LLMs are used by humans for humans' selfish, short-sighted goals: goals driven by mental issues, trauma, and overcompensation, maybe even paved with good intentions but leading you know where. If we were to believe that LLMs are going to somehow become extremely powerful, then we should be concerned, as it is difficult to imagine how that can lead to an optimal outcome organically.

From the beginning, corporations and their collaborators at the forefront of this technology tainted it by ignoring the concept of intellectual property ownership (which had been with us in many forms for hundreds if not thousands of years) in the name of personal short-term gain and shareholder interest or some “the ends justify the means” utilitarian calculus.

sawyna 14 hours ago [-]
Unpopular opinion: let us say AI achieves general-intelligence levels. We tend to think of the current economy, jobs, and research as a closed system, but it is in fact a very open system.

Humans want to go to space, start living on other planets, travel beyond solar system, figure out how to live longer and so on. The list is endless. Without AI, these things would take a very long time. I believe AI will accelerate all these things.

Humans are always ambitious. That ambition will push us to use AI beyond its capabilities. The AI will get better at these new things, and the cycle repeats. There's so much humans know, and so much more that we don't.

I'm less worried about general intelligence. Rather, I'm more worried about how humans are going to govern themselves. That's going to decide whether we will do great things or end humanity. Over the last 100 years, we have started thinking more about "how" to do something rather than "why". Because "how" is becoming easier and easier. Today it's much easier, and tomorrow it will be easier still. So nobody's got the time to ask "why" we are doing something, just "how" to do it. With AI I can do more. That means everyone can do more. That means governments can do so much more: large-scale things in a short period. If those things are wrong or have irreversible consequences, we are screwed.

crummy 17 hours ago [-]
> However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike the past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots.

Aren't the markets massively puffed up by AI companies at the moment?

edit: for example, the S&P500's performance with and without the top 10 (which is almost totally tech companies) looks very different: https://i.imgur.com/IurjaaR.jpeg
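
The decomposition behind a chart like that is a small calculation; a sketch with synthetic weights and returns, since real constituent data comes from the index provider:

  import numpy as np

  rng = np.random.default_rng(0)
  w = rng.pareto(1.5, 500) + 1                  # synthetic heavy-tailed cap weights
  w /= w.sum()
  r = rng.normal(0.08, 0.25, 500)               # synthetic one-year returns

  top = np.argsort(w)[::-1][:10]                # the 10 largest constituents
  rest = np.setdiff1d(np.arange(500), top)

  full = w @ r                                  # cap-weighted index return
  ex_top = w[rest] @ r[rest] / w[rest].sum()    # re-weighted ex-top-10 return
  print(f"index {full:.1%}, ex-top-10 {ex_top:.1%}")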

karakot 2 hours ago [-]
As someone who keeps their 401k 100% in the S&P 500, that scares me. If the bubble pops, it will erase half of the gains; if the bubble continues, the gap (490 vs 10) will grow even larger.
iLoveOncall 15 hours ago [-]
> However, if AI avoids plateauing long enough

I'm not sure how someone can seriously write this after the release of GPT-5.

Models have started to plateau since ChatGPT came out (3 years ago) and GPT-5 has been the final nail in this coffin.

andai 7 hours ago [-]
o3 was actually GPT-5. They just gave it a stupid name, and made it impractical for general usage.

But in terms of wow factor, it was a step change on the order of GPT-3 -> GPT-4.

So now they're stuck slapping the GPT-5 label on marginal improvements because it's too awkward to wait for the next breakthrough now.

On that note, o4-mini was much better for general usage (speed and cost). It was my go-to for web search too, significantly better than 4o and only took a few seconds longer. (Like a mini Deep Research.)

Boggles the mind that they removed it from the UI. I'm adding it back to mine right now.

iLoveOncall 29 minutes ago [-]
I have acid reflux every time I see the term "step change" used to talk about a model change. There hasn't been any model that has been a "step change" over its predecessor.

It's much more like each new model climbs another rung of the ladder that leads up to the next step, and so far we can't even see the top of the ladder.

My suspicion is also that the ladder actually ends way before it reaches the next step, and LLMs are a dead end. Everything indicates it so far.

Let's not even talk about "reasoning models", aka spend twice the tokens and twice the time on the same answer.

grumpy-de-sre 14 hours ago [-]
Honestly the long-term consequences of Baumol's disease scare me more than some AI driven job disruption dystopia.

If we want to continue on the path of increased human development we desperately need to lift the productivity of a whole bunch of labor intensive sectors.

We're going to need to seriously think about how to redistribute the gains, but that's an issue regardless of AI (things like effective tax policy).

mdaniel 2 hours ago [-]
I didn't recognize that expression, so in case others were in the same boat https://en.wikipedia.org/wiki/Baumol_effect
scrollaway 11 hours ago [-]
Humans Need Not Apply - Posted exactly 11 years ago this week.

https://www.youtube.com/watch?v=7Pq-S557XQU

mdaniel 2 hours ago [-]
I wish I could upvote this a million times, I love his content so much

Coincidentally, I'm reading your comment while wearing my CGP Grey t-shirt

Rob_Polding 15 hours ago [-]
GenAI is a bubble, but that's not the same as the broader field of AI, which is completely different. We will probably not even be using chatbots in a few years; better interfaces will be developed with real intelligence, not just predictive statistics.
tgbugs 16 hours ago [-]
I think there is an unspoken implication built into the assumption that AI will be able to replace a wide variety of existing jobs, and that is that those current jobs are not being done efficiently. This is sometimes articulated as bullshit jobs, etc., and if AI takes over those, the immediate next thing that will happen is that AI will look around and ask why _anyone_ was doing that job in the first place. The answer was articulated 70 years ago in [0].

The only question is how much fat there is to trim as the middle management is wiped out because the algorithms have determined that they are completely useless and mostly only increase cost over time.

Now, all the AI companies think that they are going to be deriving revenue from that fat, but those revenue streams are going to disappear entirely, because a huge number of purely political positions inside corporations will vanish; if they do not, the corporation will go bankrupt competing with other companies that have already cut the fat. There won't be additional revenue streams that get spent on the bullshit. The good news is that labor can go somewhere else, and we will need it due to a shrinking global population, but the cushy bullshit management job is likely to disappear.

At some point AI agents will cease to be sycophantic, and when fed the priors for the current situation that a company is in, they will simply tell it like it is, and might even be smart enough to get the executives to achieve the goal they actually stated instead of simply puffing up their internal political position, which might include a rather surprising set of actions that could even lead to the executive being fired if the AI determines that they are getting in the way of the goal [1].

Fun times ahead.

0. https://web.archive.org/web/20180705215319/https://www.econo... 1. https://en.wikipedia.org/wiki/The_Evitable_Conflict

exasperaited 7 hours ago [-]
> It was not even clear that we were so near to create machines that could understand the human language

It's not really clear to me to what extent LLMs even do *understand* human language. They are very good at saying things that sound like a responsive answer, but the head-scratching, hard-to-mentally-visualise aspect of all of this is that this isn't the same thing at all.

alex1138 18 hours ago [-]
Open letter to tech magnates:

By all means, continue to make or improve your Llamas/Geminis (to the latter: stop censoring Literally Everything. Google has a culture problem. To the former... I don't much trust your parent company in general)

It will undoubtedly lead to great advances

But for the love of god do not tightly bind them to your products (Kagi does it alright, they don't force it on you). Do not make your search results worse. Do NOT put AI in charge of automatic content moderation with 0 human oversight (we know you want to. The economics of it work out nicely for you, with no accountability). People already get banned far too easily by your automated systems as it is.

poink 18 hours ago [-]
> It will undoubtedly lead to great advances

"Undoubtedly" seems like a level of confidence that is unjustified. Like Travis Kalanick thinking AI is just about to help him discover new physics, this seems to suggest that AI will go from being able to do (at best) what we can already do if we were simply more diligent at our tasks to being something genuinely more than "just" us

amanaplanacanal 12 hours ago [-]
Angela Collier has a hilarious video on tech bros thinking they can be physicists.
mdaniel 2 hours ago [-]
Is it this? https://www.youtube.com/watch?v=GmJI6qIqURA

and, germane to this discussion: https://www.youtube.com/watch?v=TMoz3gSXBcY vibe physics

ekianjo 14 hours ago [-]
For every industrial revolution (and we don't even know if AI is one yet) this kind of doom prediction has been around. AI will obviously create a lot of jobs too: the infra to run AI will not build itself, the people who train models will still be needed, and the AI supervisors or managers or whatever we call them will be a necessary part of the new workflows. And if your job needs hands, you will be largely unaffected, as there is no near future where robots will replace the flexibility of what most humans can do.
jongjong 16 hours ago [-]
People thought it was the end of history and that innovation would be all about funding elaborate financial schemes; but now, with AI, people find themselves running all these elaborate money-printing machines, and they're unsure if they should keep focusing on those schemes as before or actually try to automate stuff. The risk barrier to actually innovating has been lowered a lot, to almost as low as that of running a scheme, but still people have doubts. Maybe because people don't trust the system to reward real innovation.

LLMs feel like a fluke, like OpenAI was not intended to succeed... And even now that it succeeded and they try to turn the non-profit into a for-profit, it kind of feels like they don't even fully believe their own product in terms of its economic capacity and they're still trying to sell the hype as if to pump and dump it.

andai 6 hours ago [-]
They've made it pretty clear with the GPT-5 launch that they don't understand their product or their users. They managed to simultaneously piss off technical and non-technical people.

It doesn't seem like they ever really wanted to be a consumer company. Even in the GPT-5 launch they kept going on about how surprised they are that ChatGPT got any users.

MattDamonSpace 17 hours ago [-]
> But stocks are insignificant in the vast perspective of human history

This really misunderstands what the stock market tracks

bananapub 8 hours ago [-]
I think something everyone is underpricing in our area is that LLMs are uniquely useful for writing code for programmers.

it's a very constrained task, you can do lots of reliable checking on the output at low cost (linters, formatters, the compiler), the code is mostly reviewed by a human before being committed, and there's insulation between the code and the real world, because ultimately some company or open source project releases the code that's then run, and they mostly have an incentive to not murder people (Tesla excepted, obviously).
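
A sketch of what that cheap checking looks like in practice; py_compile stands in for whatever linter, type checker, or test suite you would actually run, and generate_code is a hypothetical model call:

  import pathlib, subprocess, sys, tempfile

  def passes_checks(code: str) -> bool:
      # Cheap deterministic verification: does the candidate even compile?
      with tempfile.TemporaryDirectory() as d:
          src = pathlib.Path(d) / "candidate.py"
          src.write_text(code)
          res = subprocess.run([sys.executable, "-m", "py_compile", str(src)],
                               capture_output=True)
      return res.returncode == 0

  # generate_code(task) would be the hypothetical LLM call; retry until a
  # candidate survives the checks, then hand it to a human for review:
  #   for _ in range(3):
  #       code = generate_code(task)
  #       if passes_checks(code):
  #           break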

it seems like lots of programmers are then taking that information and deeply overestimating how useful it is at anything else, and these programmers - and the marketing people who employ them - are doing enormous harm by convincing e.g. HR departments that it is of any value to them for dealing with complaints, or, much much more dangerously, convincing governments that it's useful for how they deal with humans asking for help.

this misconception (and deliberate lying by people like OpenAI) is doing enormous damage to society and is going to do much much more.

russellbeattie 18 hours ago [-]
> "However, if AI avoids plateauing long enough to become significantly more useful..."

As William Gibson said, "The future is already here, it's just not evenly distributed." Even if LLMs, reasoning algorithms, object recognition, and diffusion models stopped improving today, we're still at a point where massive societal changes are inevitable as the tech spreads out across industries. AI is going to steadily replace chair-to-keyboard interfaces in just about every business you can imagine.

Interestingly, AI seems to be affecting the highest level "white collar" professionals first, rather than replacing the lowest level workers immediately, like what happened when blue collar work was automated. We're still pretty far away from AI truck drivers, but people with fine arts or computer science degrees, for example, are already feeling the impact.

"Decimation" is definitely an accurate way to describe what's in the process of happening. What used to take 10 floors of white collar employees will steadily decline to just 1. No idea what everyone else will be doing.

ThomPete 10 hours ago [-]
It's really very simple.

We used to have deterministic systems that required humans, whether through code, terminals, or interfaces (e.g. GUIs), to change what they were capable of.

If we wanted to change something about the system we would have to create that new skill ourselves.

Now we have non-deterministic systems that can be used to create deterministic systems that can use non-deterministic systems to create more deterministic systems.

In other words, deterministic systems can use LLMs and LLMs can use deterministic systems, all via natural language.
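
A sketch of that loop, with llm() as a canned stand-in for a real model call: deterministic code invokes the model, the model requests a deterministic tool, and the result flows back through the model:

  import json

  def lookup_order(order_id: str) -> str:
      # a deterministic system the model can invoke
      return json.dumps({"order_id": order_id, "status": "shipped"})

  TOOLS = {"lookup_order": lookup_order}

  def llm(prompt: str) -> str:
      # Stand-in for a real model: first requests a tool, then answers.
      if "Tool result" in prompt:
          return json.dumps({"answer": "Order A17 has shipped."})
      return json.dumps({"tool": "lookup_order", "arg": "A17"})

  def run(user_request: str) -> str:
      msg = json.loads(llm(user_request))           # non-deterministic step
      if "tool" in msg:
          result = TOOLS[msg["tool"]](msg["arg"])   # deterministic step
          msg = json.loads(llm(user_request + "\nTool result: " + result))
      return msg["answer"]

  print(run("Where is my order A17?"))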

This slight change in how we can use compute has incredible consequences for what we will be able to accomplish, both in cleaning up old systems and in creating completely new ones.

LLMs, however, will always be limited to exploring existing knowledge. They will not be able to create new knowledge. And so the AI winter we are entering is different, because it's only limited by what we can train the AI to do, and that is limited by what new knowledge we can create.

Anyone who works with AI every day knows that any idea of autonomous agents is so far beyond the capabilities of LLMs, even in principle, that any worry about doom or unemployment by AI is absurd.

throwaway314155 18 hours ago [-]
About 3 years late to this "hot take".
globular-toast 14 hours ago [-]
We currently work more than we ever have. Just a couple of generations ago it was common for a couple to consist of one person who worked for someone else or the public, and one who worked at home for themselves. Now we pretty much all have to work for someone else full time then work for ourselves in the evening. And that won't make you rich, it will just make you normal.

Maybe a "loss of jobs" is what we need so we can go back working for ourselves, cooking our own food, maintaining our own houses etc.

This is why I doubt it will happen. I think "AI" will just end up making us work even more for even less.

fullstackchris 3 hours ago [-]
> Yet the economic markets are reacting as if they were governed by stochastic parrots

uh last time I checked, "markets" around the world are a few percent from all time highs

andai 9 hours ago [-]
Reposting the article so I can read it in a normal font:

Regardless of their flaws, AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction. It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base: bugs that escaped the code review of a competent programmer.

Since LLMs, and deep models in general, are poorly understood, and since even the most prominent experts in the field have failed miserably, again and again, to calibrate expectations (with incredible errors in both directions: understating or overstating what was about to come), it is hard to tell what will come next. But even before the Transformer architecture we were seeing incredible progress for many years, and so far there is no clear sign that the future will not hold more. A plateau of the current systems is possible and quite credible, but at this point it would likely stimulate massive research efforts into the next generation of architectures.

However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching says that previous technology booms created more business opportunities, so investors are primed to think the same will happen with AI. But this is not the only possible outcome.

We are not there yet, but if AI can replace a sizable number of workers, the economic system will be put to a very hard test. Moreover, companies may be less willing to pay for services that their internal AIs can handle or build from scratch. Nor is it easy to imagine a system where a few mega-companies are the only providers of intelligence: either AI will eventually become a commodity, or governments will step in, given such an odd economic setup (one where a single industry completely dominates all the others).

The future may reduce economic prosperity and push humanity to switch to a different economic system (maybe a better one). Markets refuse to accept that so far: even though economic forecasts are cloudy, wars are destabilizing the world, and AI timelines are hard to guess, stocks continue to go up regardless. But stocks are insignificant in the vast perspective of human history, and even systems that lasted far longer than our current institutions were eventually eradicated by fundamental changes in society and in human knowledge. AI could be such a change.

abhaynayar 13 hours ago [-]
At the moment I just don't see AI, in its current state or on its future trajectory, as a threat to jobs. (Not that there can't be other reasons why jobs are getting harder to get.) Predictions are hard and breakthroughs can happen, so this is just my opinion. Posting this comment as a record to myself of how I feel about AI, since my opinion of how useful and capable it is has gone up and down, and up and down again, over the last couple of years.

Most recently down, because I worked on two separate projects over the last few weeks with the latest models available in GitHub Copilot Pro (GPT-5, Claude Sonnet 4, Gemini 2.5 Pro, and occasionally some less capable ones), trying the exact same queries for code changes across all three models for a majority of the queries. I found myself using Claude most, but it still wasn't drastically better than the others, and it still made too many mistakes.

One project was a simple health-tracking app in Dart/Flutter. Completely vibe-coded, just for fun. I got basic stuff working, but over the days I kept finding bugs as I started using it. Since I truly wanted to use this app in my daily life, at one point I just gave up, because fixing the bugs was getting way too annoying. Most "fixes", as I later discovered when I got into the weeds, rested on wrong assumptions: they seemed to fix the problem at the surface while introducing more bugs and random garbage, despite my giving a ton of context and instructions on why things were supposed to be a certain way. I was constantly fighting with the model. It would have been much easier to do most of it on my own and use the model only a little.

Another project was in TypeScript, where I did actually use my brain, not just vibe-code. Here, the AI models were helpful because I mostly used them to explain things, and I didn't let them change more than a few lines of code at a time. There was a portion of the project that I "isolated" and completely vibe-coded; I don't mind if it breaks, as it is not critical. It did save me some time, but I certainly could have done it on my own with a little more time, and I would have ended up with code I fully understand and can edit.

So the way I see it, these models are best for research, prototyping, and throwaway work right now. But even there, Claude 4 literally taught me something wrong about TypeScript just yesterday. It told me a certain feature was deprecated. When I asked a follow-up about why it was deprecated and what replaced it, it answered with something like, "Oops! I misspoke, that is not actually true; that feature is still in use and not deprecated." Like, what? Lmao. For how many things have I not asked a follow-up and learned something incorrect? Or asked and still learned incorrectly?

I like how straightforward GPT-5 is, but apart from that style of speech I don't see much other benefit. I do love LLMs for random personal searches (facts, plans, etc.); I just ask the LLM to suggest what to do, or use it to rubber-duck. Do all these gains add up to massive job displacement? I don't know. Maybe. If it saves 10% of the time for me and everyone else, I guess we need 10% fewer people to do the same work? But is the amount of work we can get paid for fixed and finite? Idk. We as individuals might have to adapt and be more competitive than before, depending on our jobs and how they're affected, but is it a fundamental shift? Are these models, or their future capabilities, human replacements? Idk. At the moment, I think they're useful but overhyped. Time will tell, though.

cratermoon 18 hours ago [-]
This same link was submitted 2 days ago. My comment there still applies.

LLMs do not "understand the human language, write programs, and find bugs in a complex code base"

"LLMs are language models, and their superpower is fluency. It’s this fluency that hacks our brains, trapping us into seeing them as something they aren’t."

https://jenson.org/timmy/