
There's a pretty famous website that tracks Iraqi civilian deaths:

   Documented civilian deaths from violence

   187,499 – 211,046

   Further analysis of the WikiLeaks' Iraq War Logs
   may add 10,000 civilian deaths.
(as of today)

https://www.iraqbodycount.org/database/


monopoly? this is from DeepSeek, ymmv

Here is a list of major ARM licensees, categorized by the type of license they typically hold.

1. Architectural Licensees (Most Flexible)

These companies hold an Architectural License, which allows them to design their own CPU cores (and often GPUs/NPUs) that are compatible with the ARM instruction set. This is the highest level of partnership and requires significant engineering resources.

    Apple: The most famous example. They design the "A-series" and "M-series" chips (e.g., A17 Pro, M4) for iPhones, iPads, and Macs. Their cores are often industry-leading in single-core performance.

    Qualcomm: Historically used ARM's core designs but has increasingly moved to its own custom "Kryo" CPU cores (which are still ARM-compatible) for its Snapdragon processors. Their recent "Oryon" cores (in the Snapdragon X Elite) are a fully custom design for PCs.

    NVIDIA: Designs its own "Denver" and "Grace" CPU cores for its superchips focused on AI and data centers. They also hold a license for the full ARM architecture for their future roadmap.

    Samsung: Uses a mixed strategy. For its Exynos processors, some generations use semi-custom "M" series cores alongside ARM's stock cores.

    Amazon (Annapurna Labs): Designs the "Graviton" series of processors for its AWS cloud services, offering high performance and cost efficiency for cloud workloads.

    Google: Has developed its own custom ARM-based CPU cores, expected to power future Pixel devices and Google data centers.

    Microsoft: Reported to be designing its own ARM-based server and consumer chips, following the trend of major cloud providers.
2. "Cores & IP" Licensees (The Common Path)

These companies license pre-designed CPU cores, GPU designs, and other system IP from ARM. They then integrate these components into their own System-on-a-Chip (SoC) designs. This is the most common licensing model.

    MediaTek: A massive player in smartphones (especially mid-range and entry-level), smart TVs, and other consumer devices.

    Broadcom: Uses ARM cores in its networking chips, set-top box SoCs, and data center solutions.

    Texas Instruments (TI): Uses ARM cores extensively in its popular Sitara line of microprocessors for industrial and embedded applications.

    NXP Semiconductors: A leader in automotive, industrial, and IoT microcontrollers and processors, almost exclusively using ARM cores.

    STMicroelectronics (STM): A major force in microcontrollers (STM32 family) and automotive, heavily reliant on ARM Cortex-M and Cortex-A cores.

    Renesas: A key supplier in the automotive and industrial sectors, using ARM cores in its R-Car and RA microcontroller families.

    AMD: Uses ARM cores in some of its adaptive SoCs (Xilinx) and for security processors (e.g., the Platform Security Processor or PSP in Ryzen CPUs).

    Intel: While primarily an x86 company, its foundry business (IFS) is an ARM licensee to enable chip manufacturing for others, and it has used ARM cores in some products like the now-discontinued Intel XScale.


>monopoly? Here is a list of major ARM licensees...

None of these companies is able to license cores to third parties.

Only ARM can do that. ARM holds a monopoly.

>this is from DeepSeek, ymmv

DeepSeek would have told you this much, given the right prompt. Confirmation bias is unfortunately one hell of a bias.


Sure, but I suspect that for basically all of us (maybe Elon is surfing HN today) that literally means nothing. Few of us have the hundreds of millions required to design and fab a competitive SoC, and for those that do, the ARM licenses are easier to acquire than the knowledge of how to build a competitive system (see RISC-V). You might as well complain about TSMC not publishing the information on how to fab 2nm parts or the code used to generate the mask sets.

For the rest of us, what matters is whether we can open Digikey/Newegg/whatever and buy a few machines, whether they are open enough for us to achieve our goals, and what they cost. So that list of vendors is more appropriate, because they _CAN_ sell the resulting products to us. The problem is how much of their mostly off-the-shelf IP they refuse to document, resulting in extra difficulties getting basic things working.


ARM holds a monopoly over ARM licences? Wow. Truly you are a genius unappreciated in your own time. /s


“Huawei has 208,000 employees and operates in over 170 countries and regions, serving more than three billion people around the world.”

https://www.huawei.com/en/media-center/company-facts

“The company's commitment to innovation is highlighted by its substantial investment of 179.7 billion yuan ($24.77 billion) in research and development (R&D), accounting for 20.8 percent of its annual revenue. Its total R&D investment over the past decade has reached 1.249 trillion yuan ($172.21 billion).”

https://news.cgtn.com/news/2025-03-31/Huawei-reports-solid-2...

They have the incentive, the government backing, and a mature tech ecosystem rivalled only by the US's… If any corp can do it, Huawei can.


I've gotten Claude Code to port Ruby 3.4.7 to Cosmopolitan: https://github.com/jart/cosmopolitan

I kid you not. Took between a week and ten days. Cost about €10. After that I became a firm convert.

I'm still getting my head around how incredible that is. I tell friends and family and they're like "ok, so?"


It seems like AIs work how non-programmers already thought computers worked.


That's apt.

One of the first things you learn in CS 101 is "computers are impeccable at math and logic but have zero common sense, and can easily understand megabytes of code but not two sentences of instructions in plain English."

LLMs break that old fundamental assumption. How people can claim that it's not a ground-shattering breakthrough is beyond me.


Then build an LLM shell and make it your login shell, and you'll see how well the computer understands English.
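
A minimal sketch of what such an LLM shell could look like, assuming an OpenAI-compatible endpoint (e.g. llama-server) on localhost:8080 and the reqwest (with the "blocking" and "json" features) and serde_json crates; the prompt wording, URL, and "llmsh" name are all made up for illustration:

    use std::io::{self, Write};
    use std::process::Command;

    use reqwest::blocking::Client;
    use serde_json::{json, Value};

    // Ask a local OpenAI-compatible endpoint (e.g. llama-server) to turn a
    // plain-English request into a single shell command.
    fn english_to_command(client: &Client, request: &str) -> Option<String> {
        let body = json!({
            "messages": [
                { "role": "system",
                  "content": "Translate the user's request into one POSIX shell command. Reply with the command only." },
                { "role": "user", "content": request }
            ]
        });
        let resp: Value = client
            .post("http://localhost:8080/v1/chat/completions")
            .json(&body)
            .send().ok()?
            .json().ok()?;
        resp["choices"][0]["message"]["content"]
            .as_str()
            .map(|s| s.trim().to_string())
    }

    fn main() {
        let client = Client::new();
        loop {
            print!("llmsh> ");
            io::stdout().flush().ok();
            let mut line = String::new();
            if io::stdin().read_line(&mut line).unwrap_or(0) == 0 {
                break; // EOF ends the "shell"
            }
            if let Some(cmd) = english_to_command(&client, line.trim()) {
                println!("$ {cmd}"); // show what the model decided to run
                let _ = Command::new("sh").arg("-c").arg(&cmd).status();
            }
        }
    }

Actually installing it with chsh is left as an exercise; blindly executing whatever the model returns via `sh -c` is, of course, exactly the risk being pointed at.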


I love this, thank you


"Why didn't you do that earlier?"


I am incredibly curious how you did that. Did you just tell it "port Ruby to Cosmopolitan" and let it crank away for a week? Or what did you do?

I'll use these tools, and at times they give good results. But I would not trust it to work that much on a problem by itself.


Unzipped Ruby 3.4.7 into the appropriate place (third-party) in the repo and explained what I wanted (it used the Lua and Python ports for reference).

First it built the Cosmo Make tooling integration, and then we (ha, "we"!) started iterating and iterating, compiling Ruby with the Cosmo compiler… every time we hit some snag, Claude Code would figure it out.

I would have completed it sooner but I kept hitting the 5-hour session token limits on my Pro account.

https://github.com/igravious/cosmoruby



[flagged]


How does denial of reality help you?


Calling people out is extremely satisfying.


You wouldn't know anything about it considering you've been wrong in all your accusations and predictions. Glad to see no-one takes you seriously anymore.


:eyes: Go back to the lesswrong comment section.



This seems cool! Can you share the link to the repository?


here you go, still early days, rough round the edges :)

https://github.com/igravious/cosmoruby


I've no idea Jeremy ;-)


That is clearly not Jeremy Evans. It might have been a funny joke if that innocent commenter had not been downvoted to hell.


I surely cannot be the only person who has zero interest in having these sorts of conversations with LLMs? (Even out of curiosity.) I guess I do care if alignment degrades performance and intelligence, but it's not like the humans I interact with every day are magically free from bias. Bias is the norm.


Agreed, though I think the issue is more that these systems, deployed at scale, may result in widespread, consistent unexpected behavior in higher-stakes environments.

An earlier commenter mentioned a self-driving car perhaps refusing to use a road with a slur in its name (perhaps it is graffiti'd on the sign, perhaps it is a historical name which meant something different at the time). Perhaps models will refuse to talk about products with names they find offensive if "over-aligned", which is problematic as AI is eating search traffic. Perhaps a model will strongly prefer to say the US Civil War was fought over states' rights so it doesn't have to present the perspective of those justifying slavery (or perhaps it will stick to talking about the heroic white race of abolitionists and not mention the enemy).

Bias when talking to a wide variety of people is fine and good; you get a lot of inputs, you can sort through them and have thoughts which wouldn't have occurred to you otherwise. It's much less fine when you only talk to one model which has specific "pain topics", or when one model is deciding everything; or even multiple models, if there is a consensus/single way to train models for brand/whatever safety.


I dunno? Wouldn't the hard part of building a nuclear weapon be acquiring nuclear material? Same with nasty biological material. I think the danger is overblown. Besides, I've always chafed at the idea of a nanny state :( https://en.wikipedia.org/wiki/Nanny_state (or a nanny corp, for that matter)


Biological weapons don't necessarily require particularly nasty material.


Some people take censorship as something that only governments can do, which makes sense because unless a private corp has a monopoly (or a bunch of private corps have a cartel) on your area of interest, you can vote with your wallet, yes?

But this is what the ACLU says “Censorship, the suppression of words, images, or ideas that are "offensive," happens whenever some people succeed in imposing their personal political or moral values on others. Censorship can be carried out by the government as well as private pressure groups. Censorship by the government is unconstitutional.” https://www.aclu.org/documents/what-censorship

So I don't know where many of us (my hand is raised too) got the idea that it's not censorship if private corps do it, but apparently that's not the case.

I will say that, because of the power governments tend to have, censorship by them is clearly much more pernicious – depending on a person's moral code and how it aligns with establishment views, of course – so maybe that's where the feeling comes from?


(Pedantic nit-pick, sorry.) Kilkenny is only a city by charter; it has a population as of the last census of 27,184 – it is in no sense of the word an actual city.

http://kilkennycity.ie/Your_Council/The_History_of_Kilkenny_...


> You can prompt for that though, include something like "Include all the sources you came across, and explain why you think it was irrelevant" and unsurprisingly, it'll include those. I've also added a "verify_claim" tool which it is instructed to use for any claims before sharing a final response, checks things inside a brand new context, one call per claim. So far it works great for me with GPT-OSS-120b as a local agent, with access to search tools.

Feel like this should be built in?

Explain your setup in more detail please?


> Feel like this should be built in?

Not everyone uses LLMs the same way, which the announcement this submission is about makes extra clear. I don't want conversational LLMs, but it seems that perspective isn't shared by absolutely everyone, and that makes sense; how you like to be talked/written to is a subjective thing.

> Explain your setup in more detail please?

I don't know what else to tell you that I haven't said already :P Not trying to be obtuse, I just don't know what sort of details you're looking for. In more specific terms: I'm using llama.cpp (llama-server) as the "runner", and then I have a Rust program that acts as the CLI for my "queries" and makes HTTP requests to llama-server. The requests to llama-server include "tools", one of which is a "web_search" tool hooked up to a local YaCy instance; another is "verify_claim", which basically starts a brand-new, separate conversation inside the same process, with access to a subset of the tools. Is that helpful at all?
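
If it helps, here is a rough Rust sketch of the "verify_claim" idea – not the actual code, and the endpoint, prompt wording, and crate choices (reqwest with "blocking"/"json" features, serde_json) are assumptions – each claim is checked in a fresh context via llama-server's OpenAI-compatible API, one request per claim:

    use reqwest::blocking::Client;
    use serde_json::{json, Value};

    // Check a single claim in a brand-new conversation: no prior chat history
    // is sent, only the claim itself. (Tool access is omitted here for brevity.)
    fn verify_claim(client: &Client, claim: &str) -> Result<String, Box<dyn std::error::Error>> {
        let body = json!({
            "messages": [
                { "role": "system",
                  "content": "You are a fact checker. Answer SUPPORTED, UNSUPPORTED, or UNSURE, with one sentence of justification." },
                { "role": "user", "content": claim }
            ],
            "temperature": 0.0
        });

        let resp: Value = client
            .post("http://localhost:8080/v1/chat/completions")
            .json(&body)
            .send()?
            .json()?;

        // Pull the assistant's reply out of the usual OpenAI-style response shape.
        let verdict = resp["choices"][0]["message"]["content"]
            .as_str()
            .unwrap_or("")
            .to_string();
        Ok(verdict)
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let client = Client::new();
        // One call per claim, as described above.
        let verdict = verify_claim(&client, "Ruby 3.4 ships with a JIT compiler.")?;
        println!("{verdict}");
        Ok(())
    }

In a setup like the one described, the verdict would then be fed back into the main conversation as the tool-call result before the model writes its final response.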

