OP here. I've been experimenting with using LLMs not just for code, but to 'patch' my own passive personality traits in my marriage. Ended up going down a rabbit hole trying to visualize the topology of that relationship. We couldn't find a name for the shape, so we're calling it a 'Recursive Lemniscate.' Curious if anyone recognizes this structure from topology literature.
In my recent explorations [1] I noticed it got really stuck on the first thing I said in the chat, obsessively returning to it as a lens through which every new message had to be interpreted. Starting new sessions was very useful to get a fresh perspective. Like a human, an AI that works on a writing piece with you gets too close to the work to see its flaws.
Interesting. I’ve noticed the same behavior with Gemini 3.0 but not with Claude, and Gemini 2.5 did not have this behavior. I wonder what the tuning is optimising for here.
The lede says Perl died because it was "reactionary" and "culturally conservative", but the content says Perl died because it had a bad culture, the culture of angry, socially corrosive anonymous internet commenters.
If Perl had had a good culture, then conserving it would have been good!
That was effectively the culture of the Internet in general at that time. It was the "wild west" for years, because, well, it _was_ a modern incarnation of the same phenomenon.
There are other ways to look at it. Back in the old days when computers required lots of planning to program, the technical problem of having a buggy program was also a people problem of not planning carefully enough. But now we have fast computers and cheap storage and version control and autosave and so many other things, such that perfectly conscientious human planning of the activity of programming is no longer necessary. In many cases you can just bang stuff out by trial and error.
My point is, we have often discovered technical solutions for things that used to be regarded as people problems.
So maybe a lot of things are just problems, which may be solvable through either technical or people means.
A decade ago, IBM was spending enormous amounts of money to tell me stuff like "cognitive finance is here" in big screen-hogging ads on nytimes.com. They were advertising Watson, vaporware which no one talks about today. Are they bitter that someone else has actually made the AI hype take off?
I don't know that I'd trust IBM when they are pitching their own stuff. But if anybody has experience with the difficulty of making money off of cutting-edge technology, it's IBM. They were early to AI, early to cloud computing, etc. And yet they failed to capture market share and grow revenues sufficiently in those areas. Cool tech demos (like Watson on Jeopardy) mimic some AI demos today (6-second videos). Yeah, it's cool tech, but what's the product that people will actually pay money for?
I attended a presentation in the early 2000s where an IBM executive was trying to explain to us how big software-as-a-service was going to be and how IBM was investing hundreds of millions into it. IBM was right, but it just wasn't IBM's software that people ended up buying.
Xerox was also famously early with a lot of things but failed to create proper products out of them.
Google falls somewhere in the middle. They have great R&D but just can’t make products. It took OpenAI to show them how to do it, and they managed to catch up fast.
"They have great R&D but just can’t make products"
Is this just something you repeat without thinking? It seems to be a popular sentiment here on Hacker News, but really makes no sense if you think about it.
Products: Search, Gmail, Chrome, Android, Maps, Youtube, Workspace (Drive, Docs, Sheets, Calendar, Meet), Photos, Play Store, Chromebook, Pixel ... not to mention Cloud, Waymo, and Gemini ...
So many widely adopted products. How many other companies can say the same?
I don't think Google is bad at building products. They definitely are excellent at scaling products.
But I reckon part of the sentiment stems from many of the more famous Google products being acquisitions originally (Android, YouTube, Maps, Docs, Sheets, DeepMind) or originally built by individual contributors internally (Gmail).
Then there were also several times where Google came out with multiple different products with similar names replacing each other. Like when they had I don't know how many variants of chat and meeting apps replacing each other in a short period of time. And now the same thing with all the different confusing Gemini offerings. Which leads to the impression that they don't know what they are doing product-wise.
Starting with an acquisition is a cheap way of accelerating once your company reaches a certain size.
Look at Microsoft - PowerPoint was an acquisition. They bought most of the team that designed and built Windows NT from DEC. FrontPage was an acquisition, Azure came after AWS and was led by a series of people brought in through acquisitions (Ray Ozzie, Mark Russinovich, etc.). It's how things happen when you're that big.
Because those were "free time" projects. The company didn't direct them; somebody at the company, with their flex time, just thought it was a good idea and did it. Googlers don't get this benefit any more for some reason.
Leadership's direction at the time was to use 20% of your time on unstructured exploration and cool ideas like that, though the other poster makes a good point that that is no longer a policy.
Those are all free products, and some of them are pretty good. But free is the best business strategy to get a product to the top of the market. Are others better, are you willing to spend money to find out? Clearly, most people are not interested. The fact that they can destroy the market for many different types of software by giving it away and still stay profitable is amazing. But that's all they are doing. If they started charging for everything there would be better competition and innovation. You could move a whole lot of okay-but-not-great cars, top every market segment you want, if you gave them away for free. Only enthusiasts would remain to pay for slightly more interesting and specific features. Literally no business model can survive when its primary product is competing with good-enough free products.
They come up with tons and tons of products like Google Glass and Google+ and so on and immediately abandon them. It is easy to see that there is no real vision. They make money off AdSense and their cloud services. That's about it.
Google does abandon a lot of stuff, but their core technologies usually make their way into other, more profitable things (collaborative editing from Wave into Docs; loads of stuff from Google+; tagging and categorizing in Photos from Picasa (I'm guessing); etc)
It annoyed me recently that they dropped support for some Nest/Google Home thermostats. Of course, they politely offered to let me buy a replacement for $150.
> Products: Search, Gmail, Chrome, Android, Maps, Youtube, Workspace (Drive, Docs, Sheets, Calendar, Meet), Photos, Play Store, Chromebook, Pixel ... not to mention Cloud, Waymo, and Gemini ...
Many of those are acquisitions. In-house developed ones tend to be the most marginal on that list, and many of their most visibly high-effort in-house products have been dramatic failures (e.g. Google+, Glass, Fiber).
I was extremely surprised that Google+ didn't catch on. The week before Google+ launched, my friends and I all agreed that Facebook was toast, Google would do the same thing but better, and everyone already had a Gmail account so there would be basically zero barrier to entry. Obviously, we were wrong; Google+ managed to snatch defeat out of the jaws of victory, it never got significant traction, and Facebook managed to keep growing and now they're yet another Big Evil Tech Corporation.
Honestly, I still don't really know how Google managed to mess that up.
I got early access to Google+ because of where I worked at the time. The invite-only thing had worked great for GMail but unfortunately a social network is useless if no-one else is on it. Then the real names thing and the resulting drumbeat of horror stories like "Google doxxed me to my violent ex-husband" killed what little momentum they had stone dead. I still don't know why they went so hard on that, honestly.
I think the sentiment is usually paired with discussion about those products as long-lasting, revenue-generating things. Many of those ended up feeding back into Search and Ads. As an exercise, out of the list you described, how many of those are meaningfully-revenue-generating, without ads?
A phrasing I've heard is "Google regularly kills billion-dollar businesses because that doesn't move the needle compared to an extra 1% of revenue on ads."
And, to be super pedantic about it, Android and YouTube were not products that Google built but acquired.
They bought YouTube but you have to give Google a hell of a lot of credit for turning it into what it is today. Taking ownership of YouTube at the time was seen by many as taking ownership of an endless string of copyright lawsuits that would sue them into oblivion.
YouTube maintains a campus independent from the Google/Alphabet mothership. I'm curious how much direction they get, as (outwardly, at least) they appear to run semi-autonomously.
Before Google touched Android it was a cool concept but not what we think of today. Apparently it didn't even run on Linux. That concept came after the acquisition.
Notably all other than Gemini are from a decade or more ago. They used to know how to make products, but then they apparently took an arrow in the knee.
Search was the only mostly original product. With the exception of YouTube (which was a purchase), Android, and ChromeOS, all the other products were initially clones.
Google had less incentive. Their incentive was to keep AI bottled up and brewing as long as possible so that their existing moats in Search and YouTube could extend into other areas. With OpenAI they are forced to compete or perish.
Even with Gemini in the lead, it's only until they extinguish ChatGPT or make it unviable as a business for OpenAI. OpenAI may lose the talent war and cease to be the leader in this domain against Google (or Facebook), but in the longer term their incentive to break fresh ground aligns with average user requirements. With Chinese AI just behind, maybe Google/Microsoft have no choice either.
Google was especially well positioned to catch up because they have a lot of the hardware and expertise and they have a captive audience in gsuite and at google.com.
The original statistical machine translation models of the 90's, which were still used well into the 2010's, were famously called the "IBM models" https://en.wikipedia.org/wiki/IBM_alignment_models These were not just cool tech demos, they were the state of the art for decades. (They just didn't make IBM any money.)
Neither cloud computing nor AI are good long term businesses. Yes, there's money to be made in the short term but only because there's more demand than there is supply for high-end chips and bleeding edge AI models. Once supply chains catch up and the open models get good enough to do everything we need them for, everyone will be able to afford to compute on prem. It could be well over a decade before that happens but it won't be forever.
This is my thinking too. Local is going to be huge when it happens.
Once we have sufficient VRAM and speed, we're going to fly - not run - to a whole new class of applications. Things that just don't work in the cloud for one reason or another.
- The true power of a "World Model" like Genie 2 will never be realized with network latency in the loop. That will have to run locally. We want local AI game engines [1] we can step into like holodecks.
- Nobody is going to want to call OpenAI or Grok for personal matters. People want a local AI "girlfriend" or whatever. That shit needs to stay private for people.
- Image and video gen is a never ending cycle of "Our Content Filters Have Detected Harmful Prompts". You can't make totally safe for work images or videos of kids, men in atypical roles (men with their children = abuse!), women in atypical roles (woman in danger = abuse!), LGBT relationships, world leaders, celebs, popular IPs, etc. Everyone I interact with constantly brings these issues up.
- Robots will have to be local. You can't solve 6+DOF, dance routines, cutting food, etc. with 500ms latency.
- The RIAA is going door to door taking down each major music AI service. Suno just recently had two Billboard chart-topping songs? Congrats - now the RIAA lawyers have sued them and reached a settlement. Suno now won't let you download the music you create. They're going to remove the existing models and replace them with "officially licensed" musicians like Katy Perry® and Travis Scott™. You won't retain rights to anything you mix. This totally sucks and music models need to be 100% local and outside of their reach.
It is very misleading or outright perverse to write "they were selling software as a service in the IBM 360 days" when there was no public network that could be used to deliver the service. (There were wide-area networks, but each one was used by a single organization and possibly a few of its most important customers and suppliers, hence the qualifier "public" above.)
But anyways, my question to you is, was there any software that IBM charged money for as opposed to providing the software at no additional cost with the purchase or rental of a computer?
I do know that no one sold software (i.e., commercial off-the-shelf software) in the 1960s: the legal framework that allowed software owners to bring lawsuits for copyright violations appeared in the early 1980s.
There was an organization named SHARE composed of customers of IBM whereby one customer could obtain software written by other customers (much like the open-source ecosystem), but I don't recall money ever changing hands for any of this software except a very minimal fee (orders of magnitude lower than the rental or purchase price of a System/360, which started at about $660,000 in 2025 dollars).
Also, IIUC most owners or renters of a System/360 had to employ programmers to adapt the software IBM provided. There is software with that quality these days, too (e.g., ERP software for large enterprises), but no one calls that software as a service.
> but it just wasn't IBM's software that people ended up buying.
Well, I mean, WebSphere was pretty big at the time; and IBM VisualAge became Eclipse.
And I know there were a bunch of LoB applications built on AS/400 (now called "System i") that had "real" web-frontends (though in practice, they were only suitable for LAN and VPN access, not public web; and were absolutely horrible on the inside, e.g. Progress OpenEdge).
...had IBM kept up the pretense of investment, and offered a real migration path to Java instead of a rewrite, then perhaps today might be slightly different?
I still have PTSD from how much Watson was being pushed by external consultants to C levels despite it being absolutely useless and incredibly expensive. A/B testing? Watson. Search engine? Watson. Analytics? Watson. No code? Watson.
I spent days, weeks arguing against it and ended up having to dedicate resources, which could have been used elsewhere, to build a PoC just to show it didn’t work.
If anything, the fact they built such tooling might be why they're so sure it won't work. Don't get me wrong, I am incredibly not a fan of their entire product portfolio or business model (only Oracle really beats them out for "most hated enterprise technology company" for me), but these guys have tentacles just as deep into enterprises as Oracle and are coming up dry on the AI front. Their perspective shouldn't be ignored, though it should be considered in the wider context of their position in the marketplace.
Apples and Oranges from an enterprise perspective, with the additional wrinkle that consumer tech is generally ad-supported (ugh) while Enterprise stuff is super-high margin and paid for in actual currency.
If you assume the napkin math is correct on the $800bn yearly needed to service the interest on these CAPEX loans, then you’d need the major players (OpenAI, Google, Anthropic, etc.) to collectively pull in as much revenue in a year as Apple, Alphabet, and Samsung combined.
Let’s assume OpenAI is responsible for much of this bill, say, $400bn. They’d need a very generous conversion rate of 24% of their monthly users (700m) to the Pro plan, for an entire year, to cover that bill. That’s a conversion rate better than anyone else in the XaaS world who markets to consumers and enterprises alike, and it paints a picture of just how huge the spend from enterprises would need to be to subsidize free consumer usage.
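For reference, a minimal sketch of that napkin math, assuming the $200/month Pro price (that price is my assumption; the figures above only give the 700m users and the 24% rate):

    users = 700e6              # monthly users cited above
    conversion = 0.24          # hypothetical conversion rate to Pro
    pro_per_year = 200 * 12    # assumes the $200/month Pro tier
    revenue = users * conversion * pro_per_year
    print(f"${revenue / 1e9:.0f}bn per year")  # ~$403bn, roughly the $400bn share above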
And all of this is just for existing infrastructure. As a number of CEBros have pointed out recently (and us detractors have screamed about from the beginning), the current CAPEX on hardware is really only good for three to five years before it has to be replaced with newer kit at a larger cost. Never mind the realities of shifting datacenter designs to capitalize on better power and cooling technologies that increase density, which would require substantial facility refurbishment in a potential future.
The math just doesn’t make sense if you’re the least bit skeptical.
It makes it suspect when combined with the obvious incentive to make the fact that IBM is basically non-existent in the AI space look like an intentional, sagacious choice to investors. It may very well be, but CEOs are fantastically unreliable narrators.
No, I don’t trust a word Sundar or Satya say about AI either. CEOs should be hyping anything they’re invested in, it’s literally their job. But convincing investors that every thing they don’t invest in heavily is worthless garbage is effectively part of their job too.
What is more convincing is when someone invests heavily (and is involved heavily) and then decides to stop sending good money after bad (in their estimation). Not that they’re automatically right, but it's at least worth paying attention to their rationales. You learn very little about the real world by listening to the most motivated reasoner’s nearly fact-free bloviation.
> What is more convincing is when someone invests heavily (and is involved heavily) and then decides to stop sending good money after bad (in their estimation).
Commercially, Watson was a joke from beginning to end. If their argument is that Watson’s failure indicates a machine that can at the very least convincingly lie to you will definitely fail to make money, that’s an insipid argument.
Yeah I was going to say the same thing ha. I get what they’re (the commenter) saying, but one could also argue IBM is putting their money where their mouth is by not investing.
I suspect the reality is that they missed the boat, as they have missed tens of other boats since the mainframe market dried up. I guess you could argue they came to the boat too early with their pants on backwards (i.e. Watson), and then left before it showed up. But it’s hard to tell from the outside.
Maybe that will turn out to be a good decision and Microsoft/Google/etc. will be crushed under the weight of hundreds of billions of dollars in write-offs in a few years. But that doesn’t mean they did it intentionally, or for the right reasons.
IBM has been "quietly" churning out their Granite models, the latest of which perform quite well against LLaMa and DeepSeek. So not Anthropic-level hype, but not sitting it out completely either. They also provide IP indemnification for their models, which is interesting (Google Cloud does the same).
I see Watson stuff at work. It’s not a direct to consumer product, like ChatGPT, but I see it being used in the enterprise, at least where I’m at. IBM gave up on consumer products a long time ago.
Just did some brief Wikipedia browsing and I'm assuming it's WatsonX and not Watson? It seems Watson has been pretty much discontinued and WatsonX is LLM based. If it is the old Watson, I'm curious what your impressions of it are. It was pretty cool and ahead of its time, but what it could actually do was way over promised and overhyped.
I’m not close enough to it to make any meaningful comments. I just see the name pop up fairly regularly. It is possible that some of it is WatsonX and everyone just says Watson for brevity.
One big one used heavily is Watson AIOps. I think we started moving to it before the big LLM boom. My usage is very tangential, to the point where I don’t even know what the AI features are.
It's good we are building all this excess capacity, which will be used for applications in other fields, for research, or to open up new fields.
I think the dilemma I see with building so many data centers so fast is exactly like deciding whether I should buy the latest iPhone now or wait a few years until the specs or form factor improve. The thing is, we have proven tech with current AI models, so waiting for better tech to develop at small scale before scaling up is a bad strategy.
IBM did a lot of pretty fragmented and often PR-adjacent work, and got into some industry-specific (e.g. healthcare) things that didn't really work out. But my understanding is that it's better standardized and embedded in products these days.
Not to be rude, but that didn't answer my question.
Taking a look at IBM's Watson page, https://www.ibm.com/watson, it appears to me that they basically started over with "watsonx" in 2023 (after ChatGPT was released) and what's there now is basically just a hat tip to their previous branding.
I think that's essentially accurate even if some work from IBM Research in particular did carry over. As I recall my timelines, yes, IBM rebooted and reorganized Watson to a significant degree while continuing to use a derivation of the original branding (and took advantage of Red Hat platforms/products).
Yep. When a brand has tarnished itself enough, it makes sense for the brand to step back. Nowadays, we interact with their more popular properties, such as Red Hat.
My limited understanding (please take with a big grain of salt) is that they 1.) sell mainframes, 2.) sell mainframe compute time, 3.) sell mainframe support contracts, 4.) sell Red Hat and Red Hat support contracts, and 5.) buy out a lot of smaller software and hardware companies in a manner similar to private equity.
Mainframe for sure, but IBM has TONS of products in their portfolio that get bought. They also have IBM Cloud which is popular. Then there is the Quantum stuff they've been sinking money into for the last 20 years or so.
I can think of nothing more peak HN than criticizing a company worth $282 Billion with $6 billion in profit (for startup kids that means they have infinite runway and then some) that has existed for over 100 years with "I'm not even sure what they do these days". I mean the problem could be with IBM... what a loser company!
:) As much as I love ragging on ridiculous HN comments, I think this one is rooted in some sensibility.
IBM doesn’t majorly market themselves to consumers. The overwhelming majority of devs just aren’t part of the demographic IBM intends to capture.
It’s no surprise people don’t know what they do. To be honest it does surprise me that they’re such a successful company, given how little I’ve knowingly encountered them over my career.
There's no fix for this problem in hiring upfront. Anyone can cram and fake if they expect a gravy train on the other end. If you want people to work after they're hired, you have to be able to give direct negative feedback, and if that doesn't work, fire quickly and easily.
>Anyone can cram and fake if they expect a gravy train on the other end.
If you're still asking trivia, yes. Maybe it's time to shift from the old filter and update the process?
If you can see in the job that a 30-minute PR is the problem, then maybe replace that 3rd leetcode round with 30 minutes of pair programming. Hard to use ChatGPT in real time without arousing suspicion.
That approach to interviewing will cause a lot of false negatives. Many developers, especially juniors, get anxious when thrown into a pair programming task with someone they don't know and will perform badly regardless of their actual skills.
I understand that and had some hard anxiety myself back then. Even these days I may be a bit shaky when live coding in an interview setting.
But is the false negative for a nervous pair programmer worse than a false positive for a leetcode question? Ideally a good interviewer would be able to separate the anxiety from the actual thinking and see that this person can actually think, but that's another undervalued skill in the industry.
I don’t know why people are so hesitant to just fire bad people. It’s pretty obvious when someone starts actually working whether they’re going to be a net positive. On the order of weeks, not months.
Given how much these orgs pay, both directly to head hunters and indirectly in interview time, might as well probationally hire whoever passes the initial sniff test.
That also lets you evaluate longer term habits like punctuality, irritability, and overall not-being-a-jerkness.
Not so fast. I "saved" guys from being fired by asking for more patience with them. The last one was not in my team, as I had moved out to lead another team. Turned out the guy did not please an influential team member, who then complained about him.
What I saw instead was a young, silent guy, given boring work and longing for more interesting work. A tad later he took ownership of a neglected project, completed it, and made a name for himself.
It takes considerably more effort and skill to treat colleagues as humans rather than "outputs" or ticket processing nodes.
Most (middle) management is an exercise in ass-covering, rather than creating healthy teams. They get easily scared when "Jira isn't green", and look for someone else to blame rather than doing the managing part correctly.
Sunk cost. You've spent... 20 to 100 hours on interviews. Maybe more. Doing it again is another expense.
Onboarding. Even with good employees, it can take a few months to get the flow of the organization, understand the code base, and understand the domain. Maybe a bit of technology shift too. Firing a person who doesn't appear to be performing in the first week or two or three would be churning through that too fast.
Provisional hiring with "maybe we'll hire you after you move here and work for us for a month" is a non-starter for many candidates.
At my current job and the previous one it took two or three weeks to get things fully set up: equipment, provisioning permissions, accounts, training. The retail company I worked at from '10 to '14 sent every new hire out to a retail store to learn how the store runs (to get a better idea of how to build things for them and support their processes).
... and not every company pays Big Tech compensation. Sometimes it's "this is the only person who didn't say «I've got an offer with someone else that pays 50% more»". Sometimes a warm body that you can delegate QA testing and pager duty to (rather than software development tasks) is still a warm body.
It's really not obvious how to calculate the output of any employee even with years of data, and it's way harder for a software engineer or any other job with that many facets. If you've found a proven and reliable way to evaluate someone in the first 2 weeks, you just solved one of the biggest HR problems ever.
What if, and hear me out, we asked the people a new employee has been onboarding with? I know, trusting people to make a fair judgment lacks the ass-covering desired by most legal departments but actually listening to the people who have to work with a new hire is an idea so crazy it might just work.
> I don’t know why people are so hesitant to just fire bad people.
"Bad" is vague, subjective moralist judgement. It's also easily manipulated and distorted to justify firing competent people who did no wrong.
> It’s pretty obvious when someone starts actually working whether they’re going to be a net positive. On the order of weeks, not months.
I feel your opinion is rather simplistic and ungrounded. Only the most egregious cases are rendered apparent in a few weeks' worth of work. In software engineering positions, you don't have the chance to let your talents shine through in the span of a few weeks. The cases where incompetence is rendered obvious in the span of a few weeks actually spell gross failures in the whole hiring process, which failed to verify that the candidate even met the hiring bar.
> (...) might as well probationally hire whoever passes the initial sniff test.
This is a colossal mistake, and one which disrupts a company's operations and the candidates' lives. Moreover, it has a chilling effect on the whole workforce, because no one wants to work for a company run by sociopaths who toy with people's lives and livelihoods as if it were nothing.
The bar for “junior” has quietly turned into “mid-level with 3 years of production experience, a couple of open-source contributions, and perfect LeetCode” while still paying junior money. Companies list “0-2 years” but then grill candidates on system design, distributed tracing, and k8s internals like they’re hiring for staff roles. No wonder the pipeline looks broken.
I’ve interviewed dozens of actual juniors in the last six months. Most can ship features, write clean code, and learn fast, but they get rejected for not knowing the exact failure modes of Raft or how to tune JVM garbage collection on day one. The same companies then complain they “can’t find talent” and keep raising the bar instead of actually training people.
Real junior hiring used to mean taking someone raw, pairing them heavily for six months, and turning them into a solid mid. Now the default is “we’ll only hire someone who needs zero ramp-up” and then wonder why the market feels empty.
I used to agree with this, but nowadays it seems that the factors constricting housing supply are in many places related to zoning and regulation, not the ability for developers to make a profit, so rent control might have much less downside than economists have conventionally assumed.
EDIT: I am seeing a nephew comment that says rent control could make NIMBY politics even worse because it makes renters' interests more like homeowners'. Hadn't thought of that.
Everything risks aggravating NIMBYism. It's hard to see how housing costs can come down in a lot of cities, simply because housing is seen as an investment and people won't stand idly by if the value decreases because of policies.
I cringe every time some youngster suggests we go down the socialist/communist path, because none of them have any real-life experience with how bad Eastern Europe was! Here's a hint: it was fucking horrible beyond belief--everyone was poor, there was nothing to buy in the stores, and people spent most of their free time drinking themselves to death.
On the other hand, I live in Norway. Bits like healthcare and a good safety net make things nicer. I'm from the US originally. This nice stuff could be adapted for the US if folks would put their energy into helping others rather than spite.
Mismanagement of resources is bad no matter what system is used. Just because some countries under one sort of ideology and corrupt leaders failed doesn't mean that folks can't take the bits that were good, adapt and improve them, and see good results.
Norway is a petro-state that managed to amass a national wealth fund. Cherish what you have. Understand that it doesn't necessarily translate to every community.
Americans pay more for worse outcomes, so this is clearly a political/priorities issue, not an issue with wealth.
Other counterexamples are the other European countries with the same safety net which are not petro states (they do have colonial wealth though).
A lot of this was possible because of high corporate taxes and high marginal taxes on high incomes, so in theory this model could apply in most places.
Not all European countries have colonial wealth. There is universal healthcare in Croatia, and that nation started essentially from scratch 30 years ago and isn't really a very strong economy today either.
If this is your take, you've missed the point. I said there is no reason the good bits can't be adapted to one's society. It isn't that one system will work everywhere or that it'll even look the same. Some things are unique to Norway, but other things definitely are pretty widespread.
You see this with healthcare in different places: details change and sometimes it is lacking, but lots of places offer healthcare to their citizens that is low-cost to free when you need it. There is a lot of variation in what countries can do. Some places are poor but still manage to a point. Some places just refuse, like the US - heck, the US has oil and could have funded things for its citizens, and it keeps bragging about being rich, but it isn't gonna use that wealth for the immediate welfare of its citizens.
Doesn't it seem disingenuous how everything good is socialism and everything bad is communism? Also, if socialism can't compete with capitalism then it's doomed. Socialism must make capitalism illegal in order to succeed, and I don't want to be in a place where capitalism is illegal. And "market socialism" is not socialism either.