[flagged] Ask HN: Why are so many people in tech in denial about AI?
7 points by arisAlexis on Nov 1, 2024 | 50 comments
I have regular conversations with fellow developers, and their mindset goes like this: X can't be done today, so AI won't be able to do X tomorrow either, for some biased reason Y.

Meanwhile, the most important people in tech including Nvidia CEO, Sam, Hinton, Musk and many others believe that AI will very soon be able to do everything a coder can do today. Why does it matter if it's in 2 years or 5 years? It's much earlier than your retirement date. Nobody is planning for this.

I believe this is a case of "normalcy bias" where crowds refuse to see reality because it's too disturbing.



> the most important people in tech including Nvidia CEO, Sam, Hinton, Musk and many others believe that AI will very soon be able to do everything a coder can do today.

They all have a serious financial interest in that being the case, so that's entirely unsurprising and not terribly persuasive.

> I believe this is a case of "normalcy bias" where crowds refuse to see reality because it's too disturbing.

I believe that nobody can actually know what all this will be right now, so what you're seeing isn't one group denying reality and another group accepting it. You're seeing two groups speculating about the future and expressing different opinions.


> > the most important people in tech including Nvidia CEO, Sam, Hinton, Musk and many others believe that AI will very soon be able to do everything a coder can do today.

> They all have a serious financial interest in that being the case, so that's entirely unsurprising and not terribly persuasive.

Perhaps more importantly, there have been constant hype cycles of tech CEOs saying the same thing, and plenty of submarine advertising efforts to hype them in the same way OP is.

This skepticism isn't coming from nowhere; there's plenty of history of CEOs lying through their teeth to the world.


Level-headed take. There are no secrets to AGI that Altman/Musk/etc. know that you or I don't. The cutting-edge research is out of the bag and in the open. They see AGI as an inevitable result of funding continued research in the space. There is most certainly some hubris there as well, in that they think they are uniquely positioned to be the ones to bring it across the line.

Either way, it’s all speculation about the future. I think if we’re being truly honest and objective, transformer models are interesting and occasionally useful, but they are still a very far cry from AGI.

I also think AGI will become a marketing term. We're going to see it lose any meaning as the capitalists try to upsell us and convince us "mission accomplished, we now have the world's first AGI" at the nearest convenient moment, if it helps them juice short-term profits.


Not unique to capitalism, cf. 20th-century fascist states, communist states, and totalitarian states in general, which have all upsold nonsense for short-term benefits. Capitalism tends to have fewer directly murderous/genocidal side effects, and the emperor is told he is naked more quickly, directly, and effectively.


From my perspective, there are a few things hitting at the same time.

One, the FAANGs have been captured by MBA types, not computer scientists, so they do not have the background to carefully gauge what is and is not technically possible. Given that the C-suite is invested, are you, as a middle manager, going to speak up? When people, even Musk, claim that X will be possible in Y years, I discount it. Even Hinton isn't close to the tools here.

Two, there are areas where these tools seem to have genuine use. Boilerplate email generators, musicians, graphic designers, and visual effects people should watch their backs. Professors who merely assign problems from a textbook other than the assigned one are likely also in trouble. Maybe things like logic programming or unit tests, not sure, but those seem harder to mess up.

Three, we are seeing what happens when a statistical engine, not a logic engine, runs amok. If you add irrelevant information or change the order of elements, AFAIK these tools cannot incorporate the change in information. So I think that their usefulness as a teaching tool is also overstated (see the sketch at the end of this comment). If we ever get an engine that can explain its choices, well, that would be a different story.

Lastly, tech is really looking for a genuinely transformational technology. They arguably haven't had a real hit since cloud computing, maybe since the iPhone. Their last few attempts have run into the difficulty that the universe may be harder to model inside a silicon box than is worth it (self-driving cars, cryptocurrency, video game streaming), and if they have to go from never-ending growth companies to large S&P 500 companies that have to compete... well... things will be different. Especially compensation for medium-talent software engineers.

However: DeepMind seems like they are 50 years in the future on all of this, so if someone there says I'm wrong about any of this, listen to them and not me.
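A minimal sketch of how to probe the irrelevant-information point yourself; ask_llm is a hypothetical placeholder for whatever chat API you use, and the irrelevant-clause trick follows the style of published robustness tests such as GSM-Symbolic:

    # Probe whether irrelevant added information changes the answer.
    # ask_llm() is a hypothetical placeholder -- wire it to your own
    # model client before running.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model client here")

    BASE = ("Ali picks 44 apples on Friday and 58 apples on Saturday. "
            "How many apples does Ali have? Reply with only the number.")

    # Same problem plus a clause that changes nothing mathematically:
    PERTURBED = BASE.replace(
        "How many",
        "Five of the apples are slightly smaller than average. How many")

    for prompt in (BASE, PERTURBED):
        print(ask_llm(prompt))  # a robust solver prints 102 both times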


Some of us have loooooong experience with AI (I've shipped product with AI features, and also have a 30Y-old AI degree) but also of marketing hype and outright tech-based fraud (eg NFTs). There is some useful stuff going on, but outrageous self-serving wishful thinking by people with a track record of difficulty with reality makes it very hard to believe any of their promises and predictions.


[flagged]


Gates has plenty of problems with reality (e.g., he spent years trying to deny/suppress the Internet as we now know it, along with many extremely questionable business practices and an utter disregard for user privacy and security, IMHO), and a Nobel is very evidently no guarantee that all of a winner's thoughts are correct. They may be right about aspects of AI or not.

It baffles me that some people seem to succumb to the "argument from authority" fallacy.


So it baffles you that people trust their own experience and developed intuitions instead of just throwing them out because some big name people contradict them?


Yes exactly. What I hear is physics teachers saying they know better than Bohr and Einstein.


Hinton (who's the only person on the list in your post who could really qualify as an expert here) has already made some glaringly wrong predictions about AI progress, such as the prediction that radiologists would be obsolete by now.

No-one can predict the future. Experts have a slightly better track record than tea leaves – but only slightly. There are experts in $DISCIPLINE, but there are no experts on "the future of $DISCIPLINE".


Of course AI is better than radiologists today, but it will take some time to replace them. I give up, though; I just wanted to see the reactions.


"Of course AI is better"? In all cases for all types of analysis and treatment? That is again trivially falsifiable. To be clear, I'm researching physically next to the AI people on campus and some of them are doing very smart stuff (including in medicine) but I don't think that any of the researchers or profs would repeat the casual claim that you just made.


[flagged]


[flagged]


[flagged]


Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly, in this thread and others. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


If you're not going to listen to the answers, why did you ask the question?


Bohr and Einstein would be aghast at the idea of someone giving them credibility for who they are rather than questioning the merits of their ideas.

It should perhaps give you pause that you're implicitly conflating a group of arrogant capitalists with historical greats of science. Altman and Musk are not building or researching AI; they are marketing and selling it, and they will continue to do so regardless of its current or future state or capabilities.

I would be careful not to confuse the loudest in the room with the smartest.


Bohr and Einstein did the math and had the proofs, and the math held up.

Copilot has yet to give me a consistently correct "exploited in the wild" list when I give it a bunch of CVEs. ChatGPT gave me last year's tax data for municipalities in Canada.
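For the CVE case specifically, the deterministic version of that lookup is trivial. A minimal sketch in Python against CISA's public Known Exploited Vulnerabilities feed, assuming its published JSON layout (a top-level "vulnerabilities" list with "cveID" fields):

    # Check CVEs against CISA's Known Exploited Vulnerabilities (KEV)
    # catalog: a deterministic lookup rather than an LLM's best guess.
    import json
    import urllib.request

    KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
               "known_exploited_vulnerabilities.json")

    def exploited_in_the_wild(cves):
        with urllib.request.urlopen(KEV_URL) as resp:
            catalog = json.load(resp)
        known = {v["cveID"] for v in catalog["vulnerabilities"]}
        return {cve: cve in known for cve in cves}

    # e.g. Log4Shell and Heartbleed, both long since in the catalog:
    print(exploited_in_the_wild(["CVE-2021-44228", "CVE-2014-0160"]))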


Which actual teachers, when, with what words?

There are idiotic people in all professions, and many of us make idiotic statements from time to time, but that in no way supports your main argument anyway.


> Hinton is a Nobel prize winner.

Oh, right, well, then, that makes him an expert on everything, then. Excuse me, I've just got to take a very large dose of Vitamin C; Linus Pauling assures me that it prevents/treats virtually everything.

> Gates has no problems with reality either.

I mean, Gates-era Microsoft was absolutely prone to flights of fancy.

Like, for every hundred things that the industry seizes on as "this will change everything", maybe five ultimately become a thing at all, and maybe one actually does change everything.


To be fair, he is a Nobel Prize winner for his AI research. But that doesn't mean anything since we are all speculating about the proximity of something that clearly doesn't exist -- AGI.


Because we've seen the hype cycle run before, with AI, with VR: unrealistic promises are made and widely believed for no real reason, and no one cares when they're never realized.

"AI can do X" for all X is an unrealistic hope.


AI passes the Turing test and the Mensa admission test, and aces lawyer, coding, and medical tests. That's my point: traumatized opinions are biased.


All opinions are biased; it is a tautology. I guess my biased question is to do with cost. I've not been able to run any models of the caliber that could pass a medical test on my local machine. What kind of hardware do I need to own, or to lease in the cloud, to write code that doesn't arbitrarily include comments without comment-initializing tokens, and how much will the whole pipeline run me? Is that already cheaper than my salary?

Some 25% of code at Google is now written by AI, but all of that code is still reviewed by humans. Is it worth it if you still have to pay for both? Quite possibly, for some of the huge companies, but most devs are working in smaller shops: is it still a reasonable expense for them?
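On the hardware question, a crude back-of-envelope for weights-only memory is params × bytes-per-param. A minimal sketch; the ~20% overhead factor for KV-cache and activations is my own rough assumption:

    # Rough GPU memory needed just to hold a model's weights,
    # plus an assumed ~20% overhead for KV-cache and activations.
    def vram_gb(params_billion, bits_per_param, overhead=1.2):
        weight_bytes = params_billion * 1e9 * bits_per_param / 8
        return weight_bytes * overhead / 1e9

    for params in (7, 70):
        for bits in (16, 4):
            print(f"{params}B @ {bits}-bit: ~{vram_gb(params, bits):.0f} GB")

By that crude rule, even a 4-bit 70B model wants roughly 40+ GB of memory, which is beyond a single consumer GPU, and that gap is a big part of the cost question.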


Is passing a formulaic test a high bar? Lots of people pass these tests with training. Taking the lawyer example, can a model that passed the test also correctly write all the documentation for a case from start to end, without hallucinating any non-existent cases or references?


Umm, yes, actually; just google the latest news on this. They are better than paralegals too.


For legal stuff, I have heard of lawyers being excoriated in court for presenting AI-hallucinated case law; not sure when it started passing this bar.


Is it possible the people administering these tests are less than honest about the outcome? Perhaps they have a vested interest in the results they've shown?


It's also not the case that all (or even most) AIs ace all (or even most) meaningful tests that aren't pattern- or memory-based. Adding uncommon multi-digit integers is too hard for many LLMs, for example (a quick probe is sketched below).

My fellow uni students could pass tests on one of my courses by regurgitating the (wrong) answers that had been published and reused for years. I was not popular with faculty for pointing that out, and had most of a year's papers cancelled after the fact...
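That failure is cheap to verify at home. A minimal sketch, with ask_llm again a hypothetical placeholder for whatever chat API you use:

    # Measure accuracy on random multi-digit addition, where the
    # operands are unlikely to have appeared in training data.
    # ask_llm() is a hypothetical placeholder for your model client.
    import random

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model client here")

    def addition_accuracy(trials=50, digits=9):
        correct = 0
        for _ in range(trials):
            a = random.randint(10**(digits - 1), 10**digits - 1)
            b = random.randint(10**(digits - 1), 10**digits - 1)
            reply = ask_llm(f"What is {a} + {b}? Reply with only the number.")
            correct += reply.strip().replace(",", "") == str(a + b)
        return correct / trials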


once it hits the "For Dogs" phase you know it's done


Hinton chooses his language extremely carefully. He avoids using words which imply more than the present. It's maths. Remarkable maths, but still maths.

What's absent is anything remotely like inductive reasoning and thought.

The others I cannot speak to. Many of them are charlatans and exploit what they do not actually understand for the value of speculation.


How is the CEO of Nvidia, the biggest company in the world, a charlatan when he says there will be no more programmers in 5 years? Can you explain? That's why I made this post: to hear opinions incomprehensible to me.


It's the same as Musk saying a Tesla will be able to drive itself from NY to LA in 2 years [2016].

https://x.com/elonmusk/status/686279251293777920


It's a bubble. He's grabbing as much cash as he can until it pops.

The real bias here is from you, who apparently can't see reality or practice basic cynicism.


People said this when COBOL was invented. Programmers change as languages change. Nvidia will make money by selling chips, and fanciful quotes from their leadership about the future are entertainment and PR, not plans or pronouncements.


That claim as you have stated it (I haven't seen the original words) is clearly, trivially absurd and void: e.g., I will be continuing to develop and maintain my own code well beyond then, at all sorts of levels from data gathering and system scripts up to my research modelling. He is someone who clearly benefits from maintaining hype. If he made the claim as you state it, then jail time may await him. Else perhaps the message was more nuanced and you are not reasoning carefully enough. Elsewhere on this story you seem to be making black-and-white statements when things are just not like that in practice.


He made exactly that claim, and so have all the prominent people in tech. You can find it online, along with all their messages. But as I said in the post, no amount of facts or evidence can swing opinion.


You really do seem to be very loose with facts and claims.

Eg you make claims about "all" which are obviously and trivially false.

It's not a good look, and not good for your future IMHO.


> so have all the prominent people in tech

This is not at all true. There is no consensus here.


Counter-question: why are so many people in tech blindly guzzling marketing fluff from professional bullshit-artists who have a financial incentive to lie?


people in tech hold a lot of tech stocks too mate

nVidia's big spikes paid off my car. run that hype train bro, take it to the moon


Most people in tech aren't in AI, not even adjacently. They're building/maintaining web apps and legacy systems or corporate IT infrastructure or Salesforce instances or Wordpress sites. These fields lag behind the cool kids like OpenAI/Anthropic. They may be sold some AI solution, but they don't see it as a threat, as they've been sold off-the-shelf software for decades. It'll take time.


It won't take time. The directors will replace them with $20/month subscriptions, and it will happen suddenly, much like the textbook cases of normalcy bias (see Wikipedia).


I have gone through more than one cycle of automation already, and I'm not scared by the prospect of another. Most of the work programmers used to do back when I first entered the industry has long since been taken over by software, but there are far more programmers than ever.

If a significant share of the work today's programmers are doing is routine enough that new automation can handle it, so much the better! Human programmers will simply move on up to the next level of abstraction, as we have done before, and get on with the more interesting work of managing the robots.

I think my attitude is less "denial" than "cautious skepticism". I have heard many, many, many announcements of radical world-changing new technologies over the years, but as bright and shiny as the prospects appear to be, the realities invariably feature shortcomings and inconveniences and new problems that people then have to work around and deal with. I see no fundamental reason to expect that new LLM-driven automations will be any different.


> In the meanwhile the most important people in tech including Nvidia CEO, Sam, Hinton, Musk

So, two people with a major vested interest, someone who has a past record of making ludicrously over-optimistic claims about this stuff (yes, Geoffrey, we still need radiologists), and, well, do I really need to address Musk?

I mean, if you’re going to do an argument to authority, you can probably do better than this.

People have been claiming that we won’t need programmers anymore any day now for, at this point, about 65 years (entertainingly, this started with COBOL, the theory being that managers could just use COBOL to tell the computer what to do).


[flagged]


I mean, it's evidently disturbing _you_, but really I'm not sure why. Today's LLM-based programming tools are obviously not going to replace human programmers. So really all you've got is forward-looking claims from some people who largely have a history of making incorrect forward-looking claims, and/or are making claims about something far outside their area of expertise.


I mean, I really don't understand how 1) you can think that today's LLMs are going to be the same tomorrow, when scaling is exponential (it's nonsensical thinking to me), and 2) there can be expert consensus on it and people still not get it. That's why I made the thread. It doesn't bother me at all; I'm not in the field anymore. I made the thread to see how far detached the tech crowd is from what is happening.


> you can think that today's LLMs are going to be the same tomorrow, when scaling is exponential

Irrelevant (even if true, which is far from established); they're still not going to suddenly become capable of reasoning. They will still 'hallucinate' (terrible euphemism, that, but we are where we are).

> there can be expert consensus on it and people still not get it.

Only one of the people you mentioned could be considered an expert (Hinton). And he's an expert with a history of very wrong predictions about this sort of thing. There is _by no means_ expert consensus on this.


I'm much more disturbed by the amount of faith purported NIs (natural intelligences) are willing to put in the predictions of other purported NIs about the capabilities of hypothetical so-called AIs, when actually extant so-called AIs evince nothing like those capabilities, than I am by the growth curve of so-called AIs.

In other words, consider your own authority bias before worrying too much about the normalcy bias of others, especially when there have been so-called AI winters before and may well be again.


Hilarious postscript after my post got flagged:

https://www.reddit.com/r/cscareerquestions/s/dOLxoZCgjl

Denial


AI has to be trained - if you’re asking it to grind leetcode or take a standardized test or make the millionth version of a shop webpage, it will rock it.

If you’re asking it to do something novel, good luck.

The kinds of things AI can do were already offshored long ago. So nope, not worried.


"It is difficult to get a man to understand something when his salary depends on his not understanding it." -Upton Sinclair (https://www.oxfordreference.com/display/10.1093/acref/978019...)



