Hacker News | JCM9's comments

Amusing that the bits the “manufacturer asked to be redacted” in the images appear to be the identifiers for common off-the-shelf electronic components, including a standard memory card. Is that really super secret IP?


It is if you are a camera manufacturer. Another example https://www.cined.com/whats-inside-a-red-mini-mag-the-contro...


Could be a PR / brand identity management thing. They don't want their slogan to become "The official Storage Medium of Deadly Disasters".


At this point if you don’t think AI is a bubble then I don’t know what to tell you.


Tell me why quantum mechanics stops at SU(3) in SU(3) x SU(2) x U(1) symmetry group.


This is all a bit hyperbolic. Stopping minting pennies made sense and has precedent. There used to be half penny coins.

Also, pennies are still legal tender. Folks can take them to a bank or other venue and cash them in. They’re not “trash.”


> Folks can take them to a bank

FWIW my bank refuses to accept unrolled coins, long before this month's retirement of the penny.


One of the reasons why I changed banks. My new bank has a coin counting machine in the lobby, you throw your coins in, it consumes them, and gives you a slip that you take to the teller.

As I understand it, coins are considered a government service. Banks and retailers pay to deal with them. Buying them from the public for face value actually saves them money.


It's so easy to use coins, pennies included, in day-to-day transactions that I never accumulate any. Accumulating pennies or other coins is a concept I don't understand. You can spend up to 4 pennies in any purchase you make, and if you don't, you can never receive more than 4 back. For nickels, dimes, and quarters, the maximum is smaller.


If a person has good basic arithmetic skills and it is a priority for them, then yes they can use coins easily. However, a lot of people either can't do the math or are unwilling to use change correctly.

For myself, it's such a priority that I'll get upset with myself if I have more than 4 pennies.

Japan has more coins (in regular use) than the USA, so giving the correct amount is even more important or you'll end up carrying a lot of coins. 1000 yen is the smallest bill, so for example 999 yen is one 500, four 100s, one 50, four 10s, one 5, and four 1 yen coins: 15 coins total.
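A quick greedy sketch (largest coin first, which is optimal for yen denominations) reproduces that breakdown; `greedy_change` is just an illustrative helper:

```python
def greedy_change(amount, denominations):
    """Break an amount into coins, largest denomination first.

    Greedy is optimal for 'canonical' coin systems such as
    the yen, US, and UK denominations.
    """
    counts = {}
    for coin in sorted(denominations, reverse=True):
        n, amount = divmod(amount, coin)
        if n:
            counts[coin] = n
    return counts

# 999 yen with the coins in regular circulation:
coins = greedy_change(999, [500, 100, 50, 10, 5, 1])
print(coins)                # {500: 1, 100: 4, 50: 1, 10: 4, 5: 1, 1: 4}
print(sum(coins.values()))  # 15
```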


When I used cash I did this all the time: I would nearly always overpay to minimise the number of coins in my pocket. For example, if I had a bill of £1.63 and was paying with two £1 coins, I would get 37 pence in change, which is a minimum of four coins (20p, 10p, 5p, 2p). So I would pay £2.13 to get a single 50p back, or £2.03 to get two 20p coins.

95% of the time the person serving me would clock on to what I was doing, but the other 5% it'd take some persuasion, and occasionally they would insist on giving me back my overpayment before ringing it up.
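The trade-off above is easy to check: count the coins in the change for each candidate tender with a greedy pass over UK denominations (a sketch; `change_coins` is a made-up helper):

```python
def change_coins(pence):
    """Coins in change, greedy over UK denominations (£2 and £1 included)."""
    coins = 0
    for denom in (200, 100, 50, 20, 10, 5, 2, 1):
        n, pence = divmod(pence, denom)
        coins += n
    return coins

bill = 163  # the £1.63 bill from the example
for tendered in (200, 203, 213):
    back = tendered - bill
    print(f"pay {tendered}p -> {back}p change, {change_coins(back)} coins")
# pay 200p -> 37p change, 4 coins  (20+10+5+2)
# pay 203p -> 40p change, 2 coins  (20+20)
# pay 213p -> 50p change, 1 coins  (50)
```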


Most of our supermarkets have at least one self service machine that accepts change. Once a week I pour any loose change in then settle the rest with a card.


I guess this might depend where you are.

When I lived in Scotland there was a "loose change" machine at the local Tesco. You pour in your coins and it would give you a receipt you could take to a cashier to get cash back - but the downside was that it charged you something like 10% of the total as a fee. Which I wouldn't pay.

Edit: I just searched and the Tesco documentation says "There is a 25p transaction fee and an 11.5% processing fee on the total amount of coins you put in the Coinstar centre. For charity donations, this processing fee is reduced to 8.9%." (wow, how generous!)


I meant self service machines where you pay for your shopping. There are usually one or two that accept change.


Ahh, somehow I misunderstood. Thanks for the clarification!


Same here in the US.

Back in the day, I'd sift through my jar of change and keep the quarters, which were good for parking meters and laundry. The rest went into the Coinstar machine. The fee for counting dimes, nickels, and pennies seemed OK.

The machine always had some weird foreign coins or subway tokens left over by the previous customer in the reject bin, which was potentially interesting.


FWIW in the US many of those machines offer to skip the fee if you take the money in the form of a gift card for Amazon or Walmart or similar.


That actually never occurred to me, I assumed they only took bills. Welp, problem solved and I'm no longer out of milk.


Obligatory Dr. Strangelove reference:

"You don't think I'd go into combat with loose change in my pocket, do you?"

But I must admit that I never formed the habit of bringing change with me when I go somewhere. So it piled up at home. The quarters were easy: They got saved for parking, laundry, etc. But I ended up with a sack of pennies that I finally cashed in at the bank.


It's odd how banks have largely stopped operating change counting machines.

In my childhood we'd hoard loose change then make a trip to the local po-dunk bank serving my neighborhood surrounded by corn fields, and even there they'd take our bucket of loose change and dump it into a counting machine for free.

It was a game to try guess the amount we'd get in paper cash...

Now you have to pay for this service at a grocery store using a cumbersome machine operated by Coinstar.


COVID happened. However, all three of the banks I visit regularly (one branch of a national bank, two branches of a local credit union) have coin counting machines in the lobby, though it took a while for them to be added back to the branches that took theirs out.


No doubt COVID kicked skimpflation into high gear, but this was already a pattern I noticed long before 2019.

It seemed to generally coincide with the demise of retail in general, and of course the elimination of bank-teller interactions and the rise of ATMs. All of these things are a blurry mess from my past...


Fun fact: modern dimes, quarters, and half-dollars all have the same value by weight -- about $20 per pound.


This is true by design, not coincidence: silver coins had weights chosen to match the value of their silver. It's also why nickels and pennies don't match.
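A back-of-envelope check of that fun fact, using the US Mint's published coin weights (assumed here: dime 2.268 g, quarter 5.670 g, half dollar 11.340 g):

```python
GRAMS_PER_POUND = 453.592  # avoirdupois pound

def value_per_pound(face_value_dollars, grams_per_coin):
    """Face value in dollars per pound of a given coin."""
    return face_value_dollars * GRAMS_PER_POUND / grams_per_coin

print(round(value_per_pound(0.10, 2.268), 2))   # dime        -> 20.0
print(round(value_per_pound(0.25, 5.670), 2))   # quarter     -> 20.0
print(round(value_per_pound(0.50, 11.340), 2))  # half dollar -> 20.0
# ...while nickels and pennies don't line up:
print(round(value_per_pound(0.05, 5.000), 2))   # nickel -> 4.54
print(round(value_per_pound(0.01, 2.500), 2))   # penny  -> 1.81
```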


A lot of banks just have one of those coin counting machine things (like Coinstar but not Coinstar).

Coinstar also often has zero commission options like gift cards that are an easy way to cash in extra change without paying fees.


The average gift card has an 8 to 20% discount built in. It looks like Coinstar is currently charging 12.9%, so a gift card could actually be more profitable for them.


If you're feeding it pennies you can also just give it one penny at a time to avoid the fees.


A credit union local to me waives the fee if you are a member.


Mine had the machines, then ripped them out, over the cost to them the regional bank they deal with imposed and other excuses. Coinstar (some) gift card is the only no-fee I've found in my area, but then you're stuck with a gift card instead of cash.


I have talked to my bank and was told not to roll them; they just throw them in a machine to count them and deposit the money in my account. It is not uncommon to see people bring in a box of coins and the bank takes care of them.


> FWIW my bank refuses to accept unrolled coins, long before this month's retirement of the penny.

My (edit: old) bank refused to accept unrolled coins back in the early 2000s.


So roll them?


I have like 2 dollars in coins, not even a roll of pennies. I just thought I'd try depositing it while running a different bank errand and they were like "naw, go to coinstar with your poor person money"


Coin counting machines have existed for decades (and I hope are still being produced), so why don't all banks have them?


In Canada, I've only ever seen these in grocery stores, operating for a fee (and they don't accept commissions), and at a single credit union branch (because they serve the underbanked at that particular location).


My bank used to have one, but it merged with another bank and the machine got taken away "to serve a larger branch"


Is that legal?


It seems more reasonable than the outright refusal of many businesses to accept cash at all, and besides, this transaction isn't even a "debt" to which the penny would be legal tender.


As I understand it, more than X dollars worth of coins is not legal tender. I learned this due to an absurd case in Detroit, where someone stole bags of coins from an armored car, got caught, and claimed their crime was not a felony because it was below the dollar limit for a felony. Of course the judge treated their request with the disdain that it deserved.


There are multiple laws that could have been broken to make it a felony, but if the only reason it would have been a felony was the dollar amount, I'm actually less inclined to side with the judge.

This is all third-hand, through a game of internet telephone, and my money (if you'll pardon the pun) is on there being additional factors, though.


I think when the half penny was discontinued it had the same buying power as the dime does now or something like that.

So this is long overdue.


> They’re not “trash.”

I live in the Eurozone. We had 1 and 2 cent coins for a while. Where I live these were quickly phased out, and I think in most other Eurozone countries too by now.

I have thrown these coins straight in the bin as soon as somebody gave me them. Too much hassle, and they require too big a wallet to drag along, for literally pennies.

By the time I realized the hassle of dealing with coins was inversely proportional to their denomination, I had thrown out less than a euro's worth.

I do not understand anyone who doesn't throw out their pennies.


Throwing out cent coins doesn’t seem like an environmental waste to you, like throwing out aluminum cans?

Yes they’re impractical to carry and use but does anyone actually do that? Why not do the standard practice of accumulate them in a jar instead of throwing them in the trash like waste?

It's easy to take them home and throw them in a jar until suddenly the jar is a kilogram of metal that can be fed to whatever Coinstar-like machine is around.


Metals are separated here, but compared to all the other waste I generate, I'd say it's... pennies on the dollar. Storing and collecting things is by itself an expense too: space, energy (you probably store them in a controlled environment), and so on.


At least leave them on the counter, drop them in a charity box, or leave them somewhere else where someone will pick them up.


I refuse to accept them. I get them anyway. I've never seen a charity box, and the homeless person doesn't want them either.


I stored some 1 and 2 cent coins in 2005 betting they will become collectible in a few decades.


There are many things that will become collectibles. I don't want to spend the energy and time storing various items on the chance they might become valuable.


*precedent


Thanks and fixed. Darn autocorrect.


Perplexity is one small iteration away from just a classic AI wrapper.

It was amazing early on in demonstrating what search could be, but frankly there’s not much reason for it to exist much longer.

The big players can, and are, just replicating its core functionality. The moat is gone.

I’d have to agree that they’re probably near the top of the list of companies about to get wiped out by a bubble deflation. Possible they get acquired by some sucker looking to establish AI creds but the market for that has probably passed as Wall Street is becoming super skeptical of all things AI at the moment.


Perplexity had a big advantage over the competition when model hallucination was bad. That gap has narrowed enough for now.

Perplexity beats Google, ChatGPT, and Claude if you want an answer with citations and want it fast. Claude deep research is more thorough, but that's going to be a wait. ChatGPT web search is slower, uses few citations, and looks like more of the answer is coming from the model than the results. It's also possible that the quality and speed of Perplexity would vanish with scale and the only reason they look so good right now is because they have so many fewer users.

The mid to long term problems I see are:

#1 Google could cut them off from YouTube, and a big chunk of their value is gone without recourse.

#2 over time the open web is going to shrivel and switch to paid or even die as ad revenue drops and bot traffic increases (already at this point probably, just countermeasures haven't been fully adopted.)

#3 goes with the previous point, more content is going to be AI generated and not fact checked which will dramatically drop the value of the output. This of course is a problem for all LLMs. Google may be the one who has a big advantage here given their advanced AI research and that they already have an index of the pre-LLM internet.


The author isn’t wrong here.

With the Wall Street wagons circling on the AI bubble expect more and more puff PR attempts to portray “no guys really, I know it looks like we have no business model but this stuff really is valuable! We just need a bit more time and money!”


It’s not good, and is a sign the market is getting increasingly bearish on the future of AI from a business standpoint. That doesn’t mean the tech is bad, but these are signs Wall Street is saying the math doesn’t add up here and thus there’s storms building on the horizon.


CoreWeave has taken on a ton of debt to pay for everything they're building. Investors can make money by lending CoreWeave money and charging interest (i.e., a bond).

Separately, investors can buy a derivative product that is a bet that CoreWeave won't be able to pay this money back. This is called a "credit default swap." If CoreWeave starts missing payments or can't pay back the loan, this instrument pays out.

The price of the instrument is linked to the likelihood that CoreWeave won't be able to repay the money. Given growing questions around their financial business model, the price of these derivatives has been rocketing up over the last few months. In plain speak, this means the market increasingly thinks CoreWeave won't be able to repay these loans.
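As a rough illustration of that linkage (the standard "credit triangle" approximation, with entirely hypothetical numbers, ignoring discounting and accrued premium):

```python
def implied_default_prob(spread_bps, recovery_rate):
    """Annualized default probability implied by a CDS spread.

    'Credit triangle' approximation: spread ~= (1 - recovery) * default rate.
    Ignores discounting, accrued premium, and term structure.
    """
    spread = spread_bps / 10_000  # basis points -> fraction
    return spread / (1 - recovery_rate)

# Hypothetical numbers: a spread widening from 300 to 500 bps,
# assuming 40% recovery on the debt in a default:
print(round(implied_default_prob(300, 0.40), 3))  # 0.05  -> ~5% per year
print(round(implied_default_prob(500, 0.40), 3))  # 0.083 -> ~8.3% per year
```

So a widening spread translates directly into a higher market-implied chance of default.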

That's mirroring broader Wall Street sentiment these last few months that the math isn't adding up on AI, and that all the spend committed doesn't map onto the money likely to be available to pay for it. Investors are increasingly making plays for the AI bubble popping, and the price of these credit default swaps shooting up is one metric indicative of that downturn positioning.

The data on this is available in various financial data platforms and has been written about by financial news outlets.


Yes, the price of CoreWeave default swaps has jumped 53% since October. In the eyes of the bond markets they're basically toast… a ticking debt bomb waiting to implode.


I’m bullish on AI as tech but folks are starting to sniff out that the financials of everything going on at the moment aren’t sustainable for much longer.

I hope we have more of a “reality correction” than full blown bubble bursting, but the data is increasingly looking like we’re about to have a massive implosion that wipes out a generation of startups and sets the VC ecosystem back a decade.


The tech is way underpriced right now. It's basically a subsidized market, with the money flowing in coming from the private sector.

The problem here is that it remains to be seen who is willing to pay for the service once it's priced at cost or even with a margin. And based on valuations of AI companies one would expect a huge margin.


It's hard for me to imagine paying real money for something that gives me a maybe-hallucinated answer that I need to check every single time. A flaky test is worse than a failing test.


Plus I can run a reasonable LLM on my own hardware, so I don't even need to pay anyone else. And what I can run locally is only going to get better and better.


This is true, but this is also true for on-premise hosting vs cloud. And cloud has been booming for at least a decade before LLMs appeared. I suspect AI will follow a similar trajectory, i.e. companies don't move their AI deployments on-prem until they hit a certain scale.


This is very true, but I think the other point is that AI doesn't have much "moat". If a competitor can take a pre-trained Chinese LLM, fine tune it a bit, fiddle with the prompt, and ship a product which is not as good but way cheaper, then you've (or Oracle's) got a problem.


Actually, in that scenario the AI labs (OpenAI, Anthropic, etc) have a problem. The cloud providers (including Oracle!) will do with the models what they've been doing with open source software: just take it and run it on their infra and charge money for providing it as-a-service.

This is why you're seeing the AI labs now try to build their own data centers.


_sigh_

Yes, LLMs hallucinate; no, it's no longer 2022, when ChatGPT (gpt-3.5) was the pinnacle of LLM tech. Modern LLMs in an agentic loop can self-correct. You still need to be on guard, but used correctly (yes, yes, holding it wrong, etc.) they can do many, many tasks that do not suffer from "need to check every single time".


I must be holding it wrong then, because in my ChatGPT history I've abandoned 2/3rds of my conversations recently because it wasn't coming up with anything useful.

Granted, most of that was debugging some rather complicated typescript types in a custom JSX namespace, which would probably be considered hard even for most humans as well as there being comparatively few resources on it to be found online, but the issue is that overall it wasted more of my time than it saved with its confidently wrong answers.

When I look at my history I don't see anything that would be worth twenty bucks - what I see makes me think that I should be the one getting paid.


I think the reason people talk past each other on this is that some of them are using LLMs for every little question they have, and others are using them only for questions that they can't trivially answer some other way. Sure, if all your questions have straightforward, uncontroversial answers then the LLMs will often find them on the first try, but on the other hand you'd also find them on the first try on Wikipedia, or the man page, or a Google search. You'll only think ChatGPT is useful if you've forgotten how to use the web.

If you're only asking genuinely difficult questions, then you need to check every single time. And it's worse, because for genuinely difficult questions, it's often just as hard to check whether it's giving garbage as it would have been to learn enough to answer the question in the first place.


If a coworker is wrong 40% or 60% of the time I’ll ignore their suggestion either way


As you should, but an LLM is not a human, nor is it categorically 40-60% wrong, so I'm not sure what your point is.


> Modern LLMs in an agentic loop can self correct

If the problem as stated is "Performing an LLM query at newly inflated cost $X is an iffy value proposition because I'm not sure if it will give me a correct answer" then I don't see how "use a tool that keeps generating queries until it gets it right" (which seems like it is basically what you are advocating for) is the solution.

I mean, yeah, the result will be more correct answers than if you just made one-off queries to the LLM, but the costs spiral out of control even faster because the agent is going to be generating more costly queries to reach that answer.


Apologies that you're taking it on the chin here. Generally, I'll just skip fantastical HN threads with a critical mass of BS like this, with pity, rather than attempt to share (for more on that cf. https://news.ycombinator.com/item?id=45929335)

Been on HN 16 years and never seen anything like the pack of people who will come out to tell you it doesn't work and they'll never pay for it and it's wrong 50% of the time, etc.

Was at dinner with an MD a few nights back and we were riffing on this; came to the conclusion it was really fun for CS people when the idea was AI would replace radiologists, but when the first to be mowed down are the keyboard monkeys, well, it's personal, and you get people who are years into a cognitive dissonance thing now.


I just totally disagree.

I want AI to be as strong as possible. I want AGI, I especially want super intelligence. I will figure out a new and better job if you give me super intelligence.

The problem is not cognitive dissonance, the problem is we don't have what we are pretending we have.

We have the dot com bubble but with a bunch of Gopher servers and the web browser as this theoretical idea yet to be invented and that is the bull case. The bear case is we have the dot com bubble but still haven't figured out how to build the actual internet. Massive investment in rotary phone capacity because everyone in the future is going to be using so much phone dial up bandwidth when we finally figure out how to build the internet.


Yeah, it really pulled the veil away, didn't it? So much dismissiveness and uninformed takes, from a crowd that had been driving automation forward for years and years and you'd think they'd get more familiar with these new class of tools, warts and all.


I just can't understand how anyone who actually uses the tools all the time can say this.


Say what exactly? Driving automation of all kind with Claude Code level tools has been incredibly fruitful. And once you spent sufficient time with them you know when and where they fall on their faces and when they provide real tangible reproducible benefits. I could not care less for the AI hype or bubble or whatever, I just use what I see works as I'm staring these tools down for 10h+/day.

The problem is that these conversations are increasingly drifting apart as everyone has different priors and experiences with this stuff. Some are stuck in 2023, some have such specialized tasks that it's more work whipping the agent into line than it saves, others have found a ton of automation cases where this stuff provides clear net benefits.

Don't care for AGI, AI girlfriends or LLM slop, but strap 'em in a loop and build a cage for them to operate in without lobotomizing themselves and there's absolutely something to be gained there (for me, at least).


really? >>many tasks that do not suffer from "need to check every single time"

like which tasks?

How do you decide whether you need to check or not?

If you're asking it to complete 100 sequences, and if the error rate is 5%, which 5% of the sequences do you think it messed up or _thought_ otherwise? if the 5% is in the middle, would the next 50 sequences be okay?


> really? >>many tasks that do not suffer from "need to check every single time"

> like which tasks?

Making slop.


If I ask an LLM to guess what number I’m thinking of and it’s wrong 99.9% of the time, the error is not in the LLM.


I wonder if a price correction would be a boon for open source, with the economics of smaller / self hosted models making a lot more sense when API prices have to surge.


It's not actually subsidized and the economics of smaller/self-hosted models are a much, much, worse nightmare (source: guy who spent last 2 years maintaining llama.cpp && any provider you can think of) (why is it bad? same reason why 20 cars vs. 1 bus is bad. same reason why only being able to use transportation if you own a car would be bad)


> It's not actually subsidized

Source?


Source on it being subsidized? :) (there isn't one, other than an aggro subset of people lying to each other that somehow literally everyone is losing money, while posting record profit margins) (https://en.wikipedia.org/wiki/Hitchens%27s_razor)


If it's not profitable, it's running on capital. Subsidized.


And really, the reason it would be like that is that the models don't learn, per se, within their lifetimes.

I'm told that each model is cashflow positive over its lifetime, which suggests that if the companies could just stop training new models the money would come raining down.

If they have to keep training new models to keep pace with the changes in the world, though, then token costs would be only maybe 30% electricity and 70% model depreciation, i.e. the cost of training the next generation of model so that users don't become stranded 10 years in the past.
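A back-of-envelope sketch of that split, with entirely made-up numbers (`cost_per_million_tokens` and every figure below are hypothetical):

```python
def cost_per_million_tokens(electricity_per_m, training_cost, lifetime_tokens_m):
    """Serving cost per million tokens, with the training run amortized
    over the (millions of) tokens the model serves in its lifetime."""
    amortized_training = training_cost / lifetime_tokens_m
    return electricity_per_m + amortized_training

# Made-up model: $0.30/M tokens in electricity, a $70M training run,
# and 100 trillion tokens (100 million "millions") served over its life:
total = cost_per_million_tokens(0.30, 70_000_000, 100_000_000)
print(total)  # 1.0 -> 30% electricity, 70% model depreciation
```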


> it remains to be seen who is forced to pay

via govt relationships, long term irreplaceable services, debt or convictions.. Also don't forget the surveillance budgets and the best spigots there, win.


It's not subsidized, lol.

Generally, I worry HN is in a dark place with this stuff - look how this thread goes; e.g. a descendant of yours is at "Why would I ever pay for this when it hallucinates." I don't understand how you can be a software engineer and afford to have opinions like that. I'm worried for those who do, genuinely; I hope transitions out there are slow enough, due to obstinacy, that they're not cast out suddenly without the skills to get something else.


> It's not subsidized, lol.

It's subsidised by VC funding. At some point the gravy train stops and they have to pivot to profit so that the VCs deliver return-on-investment. Look at Facebook shoving in adverts, Uber jacking up the price, etc.

> I don't understand how you can be a software engineer and afford to have opinions like that

I don't know how you can afford not to realise that there's a fixed value prop here for the current behaviour and that it's potentially not as high as it needs to be for OpenAI to turn a profit.

OpenAI's ridiculous investment ability is based on a future potential it probably will never hit. Assuming it does not, the whole stack of cards falls down real quick.

(You can Ctrl-C/Ctrl-V OpenAI for all the big AI providers)


This is all about OpenAI, not about AI being subsidized...with some sort of directive to copy/paste "OpenAI" for all the big AI providers? (presumably you meant s/OpenAI/$PROVIDER?)

If that's what you meant: Google. Boom.

Also, perhaps you're a bit new to the industry, but that's how these things go. They burn a lot of capital building it out b/c they can always fire everyone and just serve at cost -- i.e. subsidizing business development is different from subsidizing inference, unless you're just sort of confused and angry at the whole situation and it all collapses into "everyone's losing money and no one will admit it."


You're replying to a story about a hyperscaler worrying investors about how much they're leveraging themselves for a small number of companies.

From the article: > OpenAI faces questions about how it plans to meet its commitments to spend $1.4tn on AI infrastructure over the next eight years.

Someone needs to pay for that 1.4 trillion, that's 2/3 of what Microsoft makes this year. If you think they'll make that from revenue, that's fine. I don't. And that's just the infra.


I'm a big fan and user of AI but I don't see how you can say it's not subsidized. You can't just ignore the costs of training or staff or marketing or non-model software dev. The price charged for inference has to ultimately cover all those things + margin.

Also, the leaked numbers being sent to Ed Zitron suggest that even inferencing is underwater on a cost basis, at least for OpenAI. I know Anthropic claims otherwise for themselves.


A huge margin or a huge market at a moderate margin. But yes, the net profit has to be huge.


You're saying the unit economics are bad?


> I’m bullish on AI as tech

I'm not bullish in the stock market sense.

Which isn't the same as saying LLMs and related technology aren't useful... they are.

But as you mentioned, the financials don't make sense today, and even worse, I'm not sure how they could get the financials to make sense, because no player in the space on the software side has a real moat to speak of, and I don't believe it's possible to make one.

People have preferences over which LLM does better at job $XYZ, but I don't think the differences would stand up to large price changes. LLM A might feel like its a bit better of a coding model than LLM B, but if LLM A suddenly cost 2x-3x, most people are going to jump to LLM B.

If they manage to price fix and all jump in price, I think the amount of people using them would drop off a cliff.

And I see the ultimate end result years from now (when the corporate LLM providers might, in a normal market, finally start benefiting from a cross section of economies of scale and their own optimizations) being that most people will be able to get by using local models for "free" (sans some relatively small buy-in cost, and whatever electricity they use).


I think this is the rational take that everyone seems to be ignoring.


The most sobering statistic I've seen is that the entire combined amount of consumer spending on AI products is currently less than the revenue of Genshin Impact.


Indeed, bad for consumer AI. But I would expect B2B spending on AI dwarfs consumer spending, I wonder what that comparable B2B revenue would be.


It certainly does but B2B revenue can also be much more "fake", in a sense. i.e. if Microsoft spends $500 million on OpenAI, which makes OpenAI spends $500 million on Azure... where does the profit come from? There have been a few interesting articles (which I unfortunately can't look up right now) recently describing how incestuous a lot of the B2B AI spend is, which is reminiscent of the dot-com bubble.


Well, Genshin Impact is at the forefront of predatory B2C business practice. It is a gacha game, engineered to extract as much money from its prey as possible. On the other hand, most AI companies can afford to be generous with their users/consumers right now because they are being bankrolled by magic money.

The real test will be when they have to start the enshittification. Will the product still be enough to convince consumers to spend an amount of money guaranteeing a huge margin for the service provider? Will they have to rely on whales desperately needing to talk to their AI girlfriend? Or on companies and people who went deep into the whole vibe coding thing and can't work without an agent?

I think it is hard to say right now. But considering the price of the hardware and of running it, I don't think they will have to price the service insanely to at least be profitable. To be as profitable as the market seems to believe, that's another story.


Regardless of your feelings on Genshin/gacha (which I agree is predatory), the point is that a single game developed by a few hundred people is currently making more money than an entire industry which is "worth" trillions of dollars according to the stock market, and is, according to Sam Altman, so fundamentally important to the US economy that the US government is an insurer of last resort who will bail out AI companies if their stock price falls too much.


Isn't AI just as bad if not worse here? I'd bet there are far more people who have been duped by ChatGPT (and others) to think it's their friend, lover, or therapist than people who are addicted to Genshin Impact.


I would be curious to see how it compares to the combined revenue of gay furry gacha games and VNs. Are we talking parity, multiples, or orders of magnitude? Anything other than the latter would be a bucket of cold water.


The money is in business licenses. Why only look at consumer? Consumers are mostly still using the free version which exists to convince employers to pay.


AI's consumer monetization will be ad-based or as a feature for a product users want to pay for. Businesses will be the primary customer for AI.


wow, that is alarming


At this point I'm just hoping we can continue to postpone reality until after Christmas.


I don't know about a decade... the dotcom bubble bursting was pretty close to normal within 5 years or so. Still a long time, and from personal experience the 50% pay cut from before and a year later was anything but fun.


The market as a whole always recovers. But individual companies, or even entire industries can vanish without a trace. So betting on the entire market is a fairly safe bet, long-term. Betting on OpenAI is much more risky.


This would basically start to turn cloud providers into CoLo facilities that just host these servers.

Makes sense longer term for NVidia to build this but adds to the bear case for AWS et al long term on AI infrastructure.

