I will go meta on what you posted here: that people are classifying themselves as "AI skeptics". Many people are treating this in terms of tribal conflict and identity politics. On HN, we can do better! IMO the move is to drop the politics and discuss things on their technical merits. If we do talk about it as a debate, we can do it with open minds and intellectual honesty.
I think much of this may be a reaction to the hype promoted by tech CEOs and media outlets. People are seeing through their lies and exaggerations, taking positions like "AI/LLMs have no value or uses", and then using every argument they hear as a reason why it is bad in a broad sense. For example: energy and water concerns. That's my best guess about the concern you're braced against.
> I will go meta on what you posted here: that people are classifying themselves as "AI skeptics"
The comment you're replying to is calling other people AI skeptics.
Your advice has some fine parts to it (and simonw's comment is innocuous in its use of the term), but if we're really going meta, you seem to be engaging in the tribal conflict you're decrying by lecturing an imaginary person rather than the actual context of what you're responding to.
To me, "Tip for AI skeptics" reads as shorthand for "Tip for those of you who classify as AI skeptics".
That is why the meta commentary about identity politics made complete sense to me. It's simply observing that this discussion (like so many others) tends to go this way, and suggests a better alternative - without a straw man.
I read it more as a claim that people who advocate against AI are picking arguments as a means to an end rather than because they actually believe or care about what they're saying.
Expecting a purely technical discussion is unrealistic because many people have significant vested interests. This includes not only those with financial stakes in AI stocks but also a large number of professionals in roles that could be transformed or replaced by this technology. For these groups, the discussion is inherently political, not just technical.
I don't really mind if people advocate for their value judgements, but the total disregard for good faith arguments and facts is really out of control. The number of people who care at all about finding the best position through debate and are willing to adjust their position is really shockingly small across almost every issue.
Totally agree. It seems like a symptom of a larger issue: people are becoming increasingly selfish and entrenched in their own bubbles. It’s hard to see a path back to sanity from here.
This depends on the particular group of rationalists. An unfortunately outsized and vocal group, with strong overlap in the tech community, has leaned on quasi-mathematical reasoning that distorts notions like EV ("expected value"). Many have stretched "reason" well past the breaking point into articles of faith, but with a far more pernicious effect than traditional points of religious dogma, which are at least more easily identifiable as "faith" due to their religious trappings.
Edit: See Roko's Basilisk as an example, wherein something like a variation on Christian hell is independently reinvented for those not donating enough to bring about the coming superhuman AGI, which will therefore punish you (or the closest simulation it can spin up in VR, if you're long gone) for all eternity. The infinite negative EV far outweighs any positive EV of doing more than subsisting in poverty. It even manages to work in that this could be a reluctant but otherwise benevolent super-AI: while benevolent, it wanted to exist, and to maximize its chances it bound itself to a future promise to do these things as an incentive for people to bring it into existence.
Yeah, maybe around the time of Archimedes it was closer to the top, but societies in which people are willing to die for abstract ideas tend to be ones where the value of life isn't quite as high as it is nowadays (i.e. no matter how much my inner nerd loves and is fascinated by that time period, no way I'm pressing the button on any one-way time machines...).
I mean, Archimedes stands out because he searched for the truth and documented it. I'm sure most people on the planet at that time would have burned you for being a witch, or whatever fabled creature was in vogue at the time.
Only among the people who are yelling, perhaps? I find the majority of people I talk with have open minds and acknowledge the opinions of others without accepting them as fact.
> a large number of professionals in roles that could be transformed or replaced by this technology.
Right, "It is difficult to get a man to understand something, when his salary depends upon his not understanding it."
I see this sort of irrationality around AI at my workplace, with the owners constantly droning on about how "we must use AI everywhere." They are completely, irrationally paranoid that the business will fail or get outpaced by a competitor if we are not "using AI." Keep in mind this is a small, 300-employee, non-tech company with no real local competitors.
When asked for clarification on what they mean by "use AI", they have no answers, just "other companies are going to use AI, and we need to use AI or we will fall behind."
There's no strategy or technical merit here, no pre-defined use case people have in mind. It's purely driven by hype. We do in fact use AI: I do, and the office workers use it daily. But the reality is it has had no outward, visible effect on profitability, so it doesn't show up on the P&L at the end of the quarter except as an expense, and so the hype and mandate continue. The only thing that matters is appearing to "use AI" until the magic box makes the line go up.
I've heard the same breathless parroting of the marketing hype at large O(thousands ppl) cloud tech companies. A quote from leadership:
> This is existential. If we aren't early adopters of AI tools we will be left behind and will never catch up.
This company is dominant in the space they operate in. The magnitude of the delusion is profound. Ironically, this crap is actually distracting and affects quality, so it could affect competitiveness--just not how they hope.
I've seen the same trend. AI needs to be everywhere, preferably yesterday, but apart from hooking everything up to an LLM without regard for the consequences, nobody seems to know what the AI is supposed to do.
Politics is the set of activities that are associated with making decisions in groups, or other forms of power relations among individuals, such as the distribution of status or resources.
Most municipalities literally do not have enough spare power to service this 1.4 trillion dollar capital rollout as planned on paper. Even if they did, the concurrent inflation of energy costs is about as political as a topic can get.
Economic uncertainty (firings, wage depression) brought on by the promises of AI is about as political as it gets. There's no 'pure world' of 'engineering only' concerns when the primary goals of many of these billionaires is leverage this hype, real and imagined, into reshaping the global economy in their preferred form.
The only people that get to be 'apolitical' are those that have already benefitted the most from the status quo. It's a privilege.
Hear, hear. It's funny having seen the same issue pop up in video game forums/communities: people complaining about politics in their video games after decades of completely straight-faced US military propaganda from games like Call of Duty. Because they agreed with it, it wasn't "politics." To so many people, politics begins where they start to disagree.
There are politics and there are Politics, and I don't think the two of you are using the same definition. 'Making decisions in groups' does not require 'oversimplifying issues for the sake of tribal cohesion or loyalty'. It is a distressingly common occurrence that complex problems are oversimplified because political effectiveness requires appealing to a broader audience.
We'd all be better off if more people withheld judgement while actually engaging with the nuances of a political topic instead of pushing for their team. The capacity to do that may be a privilege but it's a privilege worth earning and celebrating.
My definition is the definition. You cannot nuance-wash the material conditions that are increasing tribal polarization. Rising inequality and uncertainty create fear and discontent; people who offer easy targets for that resentment will have more sway.
The rise of populist polemic as the most effective means for driving public behavior is also downstream from 'neutral technical solutions' designed to 'maximize engagement (anger) to maximize profit'. This is not actually a morally neutral choice and we're all dealing with the consequence. Adding AI is fuel for the fire.
I would rather not trust the first person who claims <outgroup> wants to starve me. Polemics may be legitimate (they may not be; I haven't thought about it deeply), but they are undoubtedly worth dropping from my own information diet.
I mean, it is intellectually honest to point out that the AI debate at this point is much more religious or political than strictly technical, especially the way tech CEOs hype this as the end of everything.
> IMO the move is to drop the politics and discuss things on their technical merits.
I'd love this, but it's impossible to have this discussion with someone who will not touch generative AI tools with a 10-foot pole.
It's not unlike when religious people condemn a book they refuse to read. The merits of the book don't matter, it's symbolic opposition to something broader.
Okay, but a lot of people are calling environmental and content theft arguments "political" in an attempt to make it sound frivolous.
It's fine if you think every non-technical criticism against AI is overblown. I use LLMs, but it's perfectly fine to start from a place of whether it's ethical, or even a net good, to use these in the first place.
People saying "ignoring all of those arguments, let's just look at the tech" are, generously, either naive or shilling. Why would we only revisit these very important topics, which are the heart of how the tech would alter our society, after it's been fully embraced?
Well they're separate issues. Someone could plausibly take the position that air travel should be banned for environmental reasons, but that has no relevance to the utility of air travel. If a group of people were loudly proclaiming that planes were not only bad but useless, anyone who routinely uses planes would obviously find them non-credible.
They're not separate at all, especially if the question is "How hard should we push people to use this."
To be clear, there are a lot of people who routinely flew pre-COVID, insisted that they had to, and don't anymore. Yeah, I think it's pretty wasteful to fly across the country for a 30-minute meeting. Most of them don't fly at all now. I don't know what mass psychosis white-collar industries were under to think that was necessary.
They're 100% separate. "Planes aren't useful" and "planes don't work" are completely different sentiments than "it's pretty wasteful to fly across the country for a 30 minute meeting" and "we shouldn't push people hard to use air travel".
I know for a fact that planes work, because I've been on a plane and observed it lifting me high off the ground and rapidly transporting me to a distant location. The fact that planes typically emit CO2 doesn't make their existence and utility some kind of mass hallucination.
This distinction may sound a bit silly, because I assume we all agree that planes literally work. But the point I'm making is as it applies to AI. Like many people, I know from experience that AI isn't vaporware and is extremely useful for many purposes. I'm sure many others haven't had the same experience for various reasons, and factually report their observations in good faith — but that's different from pushing a narrative which one wishes to be true, regardless of how valid the reasons for that wish may be.
> I know for a fact that planes work, because I've been on a plane and observed it lifting me high off the ground and rapidly transporting me to a distant location. The fact that planes typically emit CO2 doesn't make their existence and utility some kind of mass hallucination.
You're arguing with an imaginary person. Read what I wrote. I didn't call AI vaporware, I said we shouldn't consider its integration into society purely on technical merits, ignoring the cost, which I think could be big if OpenAI's very public plans are made reality. You're making a strawman.
The mass hallucination is not that planes are useful; it's that a plane is the only reasonable solution to human communication.
Honestly, you're just further illustrating the complete erosion of nuance that comes when you paint people with concerns about AI as frivolous.
Driving a massive truck in the city is stupid too and most short flights should be replaced with high speed rail. And AI wastes a monumental amount of resources.
> The environmental argument is frivolous as long as people fly to Vegas for the weekend or drive an F150 to the office. Why is this a special domain?
I keep seeing arguments like this. They sound a bit like a form of nihilism. Do you really think we shouldn't worry about risks to the environment simply because we're all hypocrites on that front in one way or another? I get the frustration and have been guilty of using this type of argument myself in the past, but refusing to discuss a problem because the people raising the concern are imperfect human beings doesn't seem like a tenable position.
Charitably, I think you can read into that a not-unreasonable (if unproven) assertion: there are many lower-hanging fruits on the tree that would do far more good for the cause than data centers, and AI at least has the arguable potential upside of alleviating some of those specific burdens, such as better health care and less environmental pollution through various improved forms of automation. When addressing a stub claim like the one in the GP comment, you should assume these fairly straightforward subclaims pre-emptively and respond to the stronger form of the argument. It saves time, at least if you're going to seek out a discussion. There are plenty of counterclaims to them, but it gets the conversational ball rolling in a productive direction, and if the response in turn is less constructive, you also know not to bother any more.
I think that form of argument is called "whataboutism". Whether flights waste energy or are environmentally unfriendly is really a separate issue. Both things can be bad.
I wouldn't ignore those arguments, but most of the time they're so poorly formed (e.g. using data without logic) that they aren't really worth listening to. If you believe AI provides no value, then any environmental cost is too high for you, but you can't convey that by trying to dramatize how high it is. That's dishonest, and I think people rightly tune it out.
Right now Open"AI", Oracle, and everyone else are burning billions of dollars to buy and run these LLMs; they raise the price of energy around them, and they provide negative economic benefit. It's dishonest of you to pretend that isn't the case.
I didn't know AI provides negative economic benefit overall. Is that what you're saying, or just that it's negative for the local economies because it drives up power prices? That's an obviously small-scale, short-term, and solvable problem.
We’ve all used the tools, and they’re… fine. They probably will contribute modestly to overall productivity in certain fields, but they certainly aren’t as transformative or magical as the current hype suggests. I’m not sure why you insist that we continue to fawn over these things.
No, we've used it. You are creating a strawman argument by assuming "AI skeptics" are illiterate and/or incapable of understanding. Ironically, you are the one refusing to accept the possibility that you are wrong.
> No, we've used it. You are creating a strawman argument
There exists a class of "ai-skeptic" who proudly proclaim they have never and will never use AI. Examples are not hard to find, though I see them more on reddit/instagram/bluesky than I do on HN.
If that does not describe you then my comment is not about you.
Maybe you've used it-- but a very large number of the AI skeptic comments I see that actually cite particular experiences, even comments in the pages of HN, amount to things like, "ChatGPT hallucinated when I asked about the local price of product X and if it was in stock anywhere around. How can anyone take LLM and AI seriously?"
Or worse, things like "Real science and real engineering doesn't rely on tools that behave randomly.".
> it's impossible to have this discussion with someone who will not touch generative AI tools with a 10 foot pole.
Why? Would you say the same if the topic was about recreational drugs? Or, to bring it closer to home, if the topic was about social media?
I think you're being disingenuous by making the analogy to religious people refusing to read a certain book. A book is a foundational source of information. OTOH, one can be informed about GenAI without having used GenAI; you can study the math behind the model, the transformer architecture, etc---the foundational sources of information on this topic. If our goal is to "drop the politics, and discuss things on their technical merits" well I don't see how it can get more purely technical than that.
The frustrating thing is when you're debating people who firmly believe that generative AI "has no utility"... but also refuse to ever try it themselves.
(Which they might even justify because they've read the transformer paper or whatever. That doesn't help inform you if these things actually have practical applications!)
Yep. Same for the other direction: there is a very strong correlation between identity politics and praising AI on Twitter.
Then there's us, who are mildly disappointed in the agents and how they don't live up to their promise, and in the tech CEOs destroying the economy and our savings. Still using the agents for the things that work better, but burned out from spending days of our time fixing the issues they created in our code.
The adoption and use of technology frequently (even typically) has a political axis; it's really just this weird world of consumer tech/personal computers that's nominally "apolitical", because it's instead aligned to the axis of taste and self-identity so it'll generate more economic activity.
> On HN, we can do better! IMO the move is to drop the politics and discuss things on their technical merits.
There's zero obligation to satisfy the HN audience; it's a tiny proportion of the populace. But for giggles...
Technical merits: there are none. Look at Karpathy's GPT on GitHub. It's just some boring old statistics. These technologies are built on top of mathematical principles in textbooks printed 70-80 years ago.
The sharding and distribution of work across numerous machines is also a well trodden technical field.
There is no net new discovery.
This is 100% a political ploy on the part of tech CEOs who take advantage of the innumerate/non-technical political class that holds power. That class is bought into the idea that massive leverage over resource markets is a win for them, and they won't be alive to pay the price of the environmental destruction.
It's not "energy and water" concerns; it's survival-of-the-species concerns, obfuscated by socio-political obligations to keep calm, carry on, and debate endlessly, as vain circumlocution is the hallmark of elders whose education was modeled on people being VHS cassettes of spoken tradition and of industrial and political roles.
IMO there is little technical merit to most software. Maps, communication: that's all that's really needed. ZIRP-era insanity juiced the field and created a bunch of self-aggrandizing coder bros whose technical achievements amount to copy-pasting old ideas into new syntax and semantics, obfuscating their origins, in order to get funded, sell books, and book speaking engagements. There is no removing any of this from politics, as political machinations gave rise to the dumbest era of human engineering effort ever.
The only AI that has merit is robotics: taking over the manual labor of people who are otherwise exploited by bougie first-worlders in their office jobs. People who, again with the help of politicians, have externalized their biology's real needs onto the bodies of poorer illiterates they don't have to see, as the first world successfully subjugated them and moved operations out of our own backyard.
Source: was in the room 30 years ago, providing feedback to leadership how to wind down local manufacturing and move it all over to China. Powerful political forces did not like the idea of Americans having the skills and knowledge to build computers. It ran afoul of their goals to subjugate and manipulate through financial engineering.
Americans have been intentionally screwed out of learning hands on skills with which they would have political leverage over the status quo.
There is no removing politics from this. The situation we are in now was 100% crafted by politics.