The whole point of Markdown is that it's easily readable and editable, and the structure is evident without being rendered. Its utility is that it doesn't strictly need to be rendered in any context.
> China's technocratic rule…seems a lot better at creating a coherent strategy for economic growth and international soft power.
This requires that those in/with the power actually have altruistic, or at least not solely selfish, concerns. How rampant is government/bureaucratic corruption in China?
I elided the population starving part in order to not distract from the possibility of truly selfless governance strategy. It may very well be the case that millions starving is considered "acceptable losses" ("the needs of the billions outweigh the needs of the millions") in executing on that strategy. Which, make no mistake, would be truly tragic and should be undesirable. But that not everyone sees it that way is really what we're fighting against.
"I have a machine that feeds everyone, no one shall go hungry."
"But mah profits!"
"You only need profits so you yourself can eat, but that's now a solved problem"
"But mah profits. How will we know who's winning?"
Corruption definitely happens in China but even as a US person I can think of at least one major case where there were very real consequences for that. How many US govt officials have been executed for corruption? https://en.wikipedia.org/wiki/Li_Zaiyong
Millions starving during the Great Leap Forward was very much NOT part of the plan; it was the result of some very misguided agricultural practices.
My point is that in the same period, China has gone from "oops we accidentally caused the 2nd largest mass starvation event in history" to "we have the largest high speed rail network and manufacturing base in the world and nobody is even close."
While the US went from "what's a postwar superpower to do? How about some megaprojects?" to "I'm drowning in entitlements, and houses now cost the same as the average lifetime GDP per capita".
China is so technocratic and efficient that it has been faking growth and population statistics for the last decade, hides youth-unemployment numbers, and raids due-diligence firms that might provide external investors more realistic data about the economy or local companies.
Also, China has its own real estate bubble, so it is not immune to those issues. At least in the US people have some recourse at the individual level.
DeWALT, Craftsman, Stanley, etc. are carpentry/mechanic power-tool brands that make a wide variety of all manner of tools and tooling. The equivalents in computing (at least UNIXy computing) are coreutils (fileutils, shellutils, and textutils), netpbm, sed, awk, and the contents of /usr/bin, plus all their alternative, updated brands like fd, the silver searcher, and ripgrep; or the progression of increasing sharpness in revision-control tools from rcs and sccs through svn to mercurial and git; or telnet to ssh, rcp to rsync, netcat to socat. Even perl and python qualify as multi-tool versions of separate power tools. I'd even include language compilers and interpreters in general as extremely sharp and powerful multi-tools: the machine shop that lets you create more power tools. When you use these, you're working with your hands.
GenAI is none of that; it's not a power tool, even though it can use power tools or generate output like those power tools do. GenAI is hiring someone else to build a birdhouse or a spice rack and then saying you had a hand in the results. It's asking the replicator for "tea, Earl Grey, hot". It's like how we elevate CEOs just because they're the face of the company, as if they actually did the work and were solely responsible for the output. There's skill in organization and direction, and not every CEO gets undeserved recognition, but it's the rare CEO who gets their hands dirty creating something or some process, power tools or not. GenAI lets you, and everyone, be the CEO.
Why else do you think I go to work every day? Because I have a "passion" for sitting at a computer for 40 hours a week to enrich a private company's bottom line, a SaaS product, or a LOB implementation? It's not astroturfing; it's realistic.
Would you be happier if I said I love writing assembly language code by hand like I did in 1986?
Joanne probably had to field some "sorry, this can't be expensed" situations, and/or those were reduced because people knew another human was doing the work and they'd get called out; trying to game/abuse the system was rarer or just naturally discouraged. That was high trust, both by the employee and by Joanne.
With employees needing to use Concur directly, there's a tendency, since each employee handles the specifics differently, to "save money" by denying reimbursements for any random violation, making sure all the i's are dotted and t's crossed. The automated system itself encourages this because it's so low-effort to deny and send the expense form back, potentially wearing down the employee until they just give up. Joanne could avoid all that at scale because there was little/no diversity in how expenses were handled. If an i needed dotting, she could handle it, and she knew all the i's that needed to be dotted across all expense reports.
I currently have someone to handle my expense reports who sits in front of Concur for me! And that person routinely asks me for specific detail without me having to mess with Concur at all, things like "who was at this dinner you gave me a receipt for" or "I can't find the receipt for this company card charge".
I work at an organization that uses Concur. My team works at the other end of the process: they take Concur outputs and pay the claimants back. We find that somehow makes us the support department, adding users, training them, and, worse still, teaching them how to get around rejections. It is a bit less work for us than the paper forms I started my career with. It does rather push the overhead of claiming an expense onto the claimants, though, many of whom are those whose time is most expensive. I'm not sure it works out.
It's not about the consumption of raw materials or repurposing of the raw materials used for chips.
peterlk said:
> How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?
It's about using the money to build things that we actually need and that have more long-term utility. No one expects someone with a $100M signing bonus at Meta to lay bricks, but that $100M could buy a lot of bricks and pay a lot of bricklayers to build hospitals.
Again, people confuse paper wealth with material assets. If you took half the money of the 0.001%, you might imagine a material change in the world of atoms, but that's not true. You can't take an $8M Richard Mille watch and build an apartment building. We are mostly resource-constrained; there are no material assets to convert all the paper wealth into. Tesla's physical assets are something like 5% of Tesla's market cap; the rest is cultish belief in Elon. You can't convert that into a hospital. It's trivial to observe on the AI side: there's a seemingly unlimited amount of dollars available, and yet companies are supply-constrained on the atoms side, from gas turbines having 3-4 year lead times to ASML running 24/7 production cycles and still being unable to meet demand.
You can tax wealth, assets, and paper wealth as well. Some countries, like Switzerland, do it. The annual tax is 0.05-0.3%, and that's what billionaires should pay to society.
You can, and Pollock paintings will go for $80M instead of $110M, and luxury assets will drop in price but will still be owned by the same people. Switzerland is tiny, so not very constrained. There is some elasticity in converting paper wealth into physical things, but it is minuscule. I think COVID should've been a pretty strong lesson there.
I think it's a mistake to believe that this money would exist if it were to be spent on these things. The existence of money is largely derived from society-scale intention, excitement, or urgency. These hospitals, machine shops, etc., could not manifest the same amount of money unless packaged as an exciting society-scale project by a charismatic and credible character. But AI, as an aggregate, has this pull, and there are a few clear investment channels into which to pour this money. The money didn't need to exist yesterday; it can be created by pulling a loan from (ultimately) the Fed.
I mean, you're just talking about spending money. Google isn't trying to build data centers for fun. These massive outlays are only there because the folks making them think they will make much more money than they spend.
Maybe it comes down to the definition of "toil". Some people find typing to be toiling, so they latch on to not having to type as much when using LLMs. Other people see "chores" as toiling, and so dream of household robots to take on the burden of that toil. Some people hate driving and consider that to be needless toil, so self-driving cars answer that—and the ads for Waymo latch onto this.
Personally, I am not stymied by typing nor chores nor driving. For me, typing is like playing a musical instrument: at some point you stop needing to think about how to play and you just play. The interaction and control of the instrument just comes out of your body. At some point in my life, all the "need to do things around the house" just became the things I do, and I'm not bothered by doing them, such that I barely notice doing them. But it's complex: the concept of "chores" is front and center when you're trying to get a teenager to be responsible for taking care of themselves (like having clean clothes, or how the bathroom is safer if it's not a complete mess) and participating in family/household responsibilities (like learning that if you don't make a mess, there's nothing to clean up). Can you really be effective at directing someone/something else without knowing how to do it yourself? Probably for some things, but not all.
> Maybe it comes down to the definition of "toil".
For sure.
I idealize a future where people can spend more time doing things they want to do, whatever those avocations might be. Freedom from servitude. I guess some kind of Star Trek / The Culture hybrid dream.
The world we have is so far from that imaginary ideal. Implicit in that ideal would be elimination of inequality, and I'm certain there are massive forces that would oppose that elimination.
And not just the definition, but the assumption that a specific toil is necessarily universal. I've had more than one conversation that started with someone else saying "using the LLM saves me soooo much time typing, think of how much time typing you'd save by using an LLM". But when I examine my processes and where I'm spending my time, typing isn't even on my list, so this claim talks right past me and I can't see it at all. Even when I was a hunt-and-peck typist on the C64, I didn't consider typing a/the major factor in how long something took to program, so much so that I continued with two-finger typing until I was forced to take a touch-typing class in high school (back when that was still a thing, and we split the exercises between typewriters and computers).
"I'm able to put my shirt on so much faster with this shirt-buttoning machine, and I don't spend time tediously buttoning shirts and maybe having to rebutton when I misalign the buttons and buttonholes. You should get one to button your shirts, you're wasting time by not using a buttoning machine".
"I wear t-shirts."
(Obviously a contrived and simplistic example for fun)
At first I thought you were referring to the debates over using vim or using emacs, but I think you mean to refer to the discussions about learning to use/switching to powerful editors like vim or emacs. If you learn and use a sharp, powerful editor and learn to type fast, the "burden" of editing and typing goes away.
I tend to believe that, in all things, the quality of the output and how it is received is what matters and not the process that leads to producing the output.
Whether the ends justify the means is a well-worn disagreement/debate, and I think the only solid conclusion we've come to as a society is that it depends.
That's a moral debate, not suitable for this discussion.
The discussion at hand is about purity and efficiency. Some people are process oriented, perfectionists, purists that take great pride in how they made something. Even if the thing they made isn't useful at all to anyone except to stroke their own ego.
Others are more practical and see a tool as a tool, not every hammer you make needs to be beautiful and made from the best materials money can buy.
Depending on the context either approach can be correct. For some things being a detail oriented perfectionist is good. Things like a web framework or a programming language or an OS. But for most things, just being practical and finding a cheap and clever way to get to where you want to go will outperform most over engineering.
It sure is myopic to claim the debate over whether the ends justify the means is solely a moral consideration, and then literally list cases where weighing the value of the means against the ends is a judgment call that results in "it depends".
No, deterministic means that given the same inputs—source code, target architecture, optimization level, memory and runtime limits (because if the optimizer has more space/time it might find better optimizations), etc.—a compiler will produce the same exact output. This is what reproducible builds are about: tightly controlling the inputs so the same output is produced.
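A minimal sketch of that same-inputs-same-output property, using Python's bytecode compiler as a stand-in for a full compiler (the source string and scheme here are illustrative, not any particular build system): compiling identical input twice yields a byte-identical artifact, which is exactly what reproducible builds pin down.

```python
import hashlib
import marshal

def build(source: str) -> str:
    # "Compile" the source to a code object and hash the serialized artifact.
    code = compile(source, "<input>", "exec")
    return hashlib.sha256(marshal.dumps(code)).hexdigest()

src = "def clear(x):\n    return x ^ x\n"

# Same inputs -> same output, every time.
print(build(src) == build(src))  # True
```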
That a compiler might pick among different specific implementations in the same equivalence class is exactly what you want a multi-architecture optimizing compiler to do. You don't want it choosing randomly among optimization choices within an optimization level; that would be non-deterministic at compile time and largely useless, assuming there is at most one most-optimized equivalent. I always want the compiler to choose to xor a register with itself to clear it, rather than explicitly setting it to zero, if that's the fastest option given the inputs/constraints.
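To sketch the xor-to-zero point (the candidate instructions and cycle counts below are invented for illustration, not real timings for any architecture): a deterministic optimizer picks within an equivalence class via a cost model plus a fixed tie-break, never a coin flip, so the same inputs always produce the same choice.

```python
# Hypothetical cost table for equivalent ways to zero a register.
# Both entries compute the same result; the costs are made up.
CANDIDATES = [
    ("mov r0, 0", 5),   # explicit move of an immediate zero
    ("xor r0, r0", 2),  # xor a register with itself
]

def pick_zeroing_insn(candidates):
    # Deterministic selection: lowest cost wins, ties broken by
    # instruction text, so repeated runs always agree.
    return min(candidates, key=lambda c: (c[1], c[0]))[0]

print(pick_zeroing_insn(CANDIDATES))  # xor r0, r0
```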
Determinism may be required for some compiler use cases, such as reproducible builds, and several replies have pointed that out. My point isn't that determinism is unimportant, but that it isn't intrinsic to compilation itself.
There are legitimate compiler use cases, e.g. search-based optimization, superoptimization, diversification, etc., where reproducibility is not the main constraint. It's worth leaving conceptual space for those use cases rather than treating deterministic output as a defining property of all compilers.
Given the same inputs, search-based optimization, superoptimization, or diversification should still be predictable and deterministic, even if it produces something initially unanticipated. It makes no sense that a given superoptimization search would produce different output—would determine some other method is now more optimal than another—if the initial input and state are exactly the same. It is either the most optimal given the inputs and the state, or it is not.
You are attempting to hedge and leave room for a non-deterministic compiler, presumably to argue that something like vibe-compilation is valuable. However, you've offered no real use cases for a non-deterministic compiler, and I assert that such a tool would be largely useless in the real world. There is already a huge gap between requirements gathering, the expression of those requirements, and their conversion into software. Adding even more randomness at the layer that translates high-level programming languages into low-level machine code would be a gross regression.
i don't think there's anything that makes it essential that llms are non-deterministic though
if you rewrote the math to be all fixed-point precision on big ints, i think you would still get the useful LLM results?
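That intuition checks out at the arithmetic level: floating-point addition isn't associative, so summation order (e.g. across parallel reductions) changes results, while fixed-point arithmetic on arbitrary-precision ints is exact and order-independent. A quick sketch (the 10**18 scale factor is an arbitrary choice):

```python
# Float addition is not associative: grouping changes the result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False

# Fixed-point on big ints: exact, so grouping never matters.
SCALE = 10**18
ai, bi, ci = SCALE // 10, 2 * SCALE // 10, 3 * SCALE // 10
print((ai + bi) + ci == ai + (bi + ci))  # True
```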
if somebody really wanted to make a compiler in an LLM, i don't think that nondeterminism is the problem
i'd really imagine an llm compiler being a set of specs, dependency versions, and test definitions to use, though, and you'd introduce essential nondeterminism by changing a version number, even if the only change was the version name from "experimental" to "lts"