Hacker News | Sevii's comments

Nope, it's totally dead.

The problem is that health insurance companies squander immense amounts of money on adjudicating claims. Huge amounts of GDP are spent on fights between insurers and providers over what is covered.

You can deduce that this cannot be true from medical loss ratios, which measure the money flowing out to healthcare providers. At roughly 85%, that leaves only 15% for the entirety of the rest of the business, including adjudication.
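The arithmetic behind that argument can be sketched in a few lines (the figures below are illustrative round numbers, not taken from the KFF data):

```python
# Illustrative medical-loss-ratio arithmetic; the dollar amounts are made up.
premiums = 100_000_000          # total premium revenue collected
paid_to_providers = 85_000_000  # claims actually paid out to healthcare providers

mlr = paid_to_providers / premiums  # medical loss ratio
overhead = 1 - mlr                  # everything else: admin, adjudication, profit

print(f"MLR: {mlr:.0%}, left for all other operations: {overhead:.0%}")
```

With an 85% MLR, adjudication fights can consume at most 15 cents of every premium dollar, which bounds how much of GDP those fights can possibly burn.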

https://www.kff.org/private-insurance/medical-loss-ratio-reb...

https://www.oliverwyman.com/our-expertise/insights/2023/mar/...

That is not to say the adjudication process is done well. In fact, it is hugely wasteful, either intentionally or unintentionally, and the problem is that the government does not audit the insurance companies often enough, nor does it levy penalties sufficient to incentivize proper and efficient adjudication.

The government should be running constant random audits of claims to verify that they were processed and adjudicated in a timely, efficient manner with a sufficiently low error rate, and it is doing basically none of that.


A lot of it seems to be porting open source projects to rust for other open source projects to consume.

AI providers can only charge what the market can bear. AI isn't worth $20k/month for 'PhD'-level work. But people are willing to pay for several $200/month subscriptions.

But fundamentally, AI compute is a commodity. GPUs are made in factories at scale. Assuming AI quality eventually tapers off, supply will catch up to demand.

Finally, open-weights models are good enough that the leading labs cannot charge high margins.


Works for me with a Pro sub at https://gemini.google.com/app


Are agents actually capable of answering why they did things? An LLM can review the previous context, add your question about why it did something, and then use next token prediction to generate an answer. But is that answer actually why the agent did what it did?


It depends. If you have an LLM that uses reasoning, the explanation for why a decision was made can often be found in the reasoning token output. So if the agent later has access to that context, it can see why the decision was made.


Reasoning, in the majority of cases, is pruned at each conversation turn.
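A minimal sketch of why that matters, assuming a hypothetical agent loop that drops reasoning blocks when rebuilding context for each new turn:

```python
# Hypothetical transcript: the reasoning block explains the decision,
# but context assembly prunes it, so a later "why?" question never sees it.
history = [
    {"role": "assistant", "type": "reasoning", "text": "File A imports B, so edit B first."},
    {"role": "assistant", "type": "message",   "text": "I'll edit B first."},
    {"role": "user",      "type": "message",   "text": "Why did you start with B?"},
]

def assemble_context(turns):
    # Keep only plain messages; drop reasoning blocks to save tokens.
    return [t for t in turns if t["type"] != "reasoning"]

context = assemble_context(history)
assert all(t["type"] == "message" for t in context)
# The model answering "why?" must now reconstruct a rationale it can no longer see.
```

Under this scheme any answer to "why did you do that?" is a fresh generation conditioned on the pruned context, not a readout of the original reasoning.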


The cursor-mirror skill and cursor_mirror.py script let you search through and inschpekt all of your chat histories, all of the thinking bubbles and prompts, all of the context assembly, all of the tool and MCP calls and parameters, and analyze what the agent did, even after Cursor has summarized and pruned and "forgotten" it -- it's all still there in the chat logs and sqlite databases.

cursor-mirror skill and reverse engineered cursor schemas:

https://github.com/SimHacker/moollm/tree/main/skills/cursor-...

cursor_mirror.py:

https://github.com/SimHacker/moollm/blob/main/skills/cursor-...
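In spirit, that kind of inspection is just walking Cursor's on-disk sqlite databases. A minimal sketch of the idea follows; the database path and the `ItemTable` key/value schema here are assumptions for illustration, and the real reverse-engineered schemas are documented in the cursor-mirror repo:

```python
import sqlite3
from pathlib import Path

# Hypothetical location, for illustration only; Cursor's actual state
# databases live elsewhere and vary by platform.
DB_PATH = Path.home() / ".cursor" / "chat_history.sqlite"

def search_chat_log(db_path, needle):
    """Yield (key, value) records whose stored value mentions `needle`."""
    con = sqlite3.connect(db_path)
    try:
        # Assumed schema: a simple key/value table, as in many editor state stores.
        for key, value in con.execute("SELECT key, value FROM ItemTable"):
            if needle in str(value):
                yield key, value
    finally:
        con.close()

# Example usage (uncomment once DB_PATH points at a real database):
# for key, value in search_chat_log(DB_PATH, "thinking"):
#     print(key)
```

Because the store is plain sqlite, even content the UI has summarized away remains queryable this way.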

  The German Toilet of AI

  "The structure of the toilet reflects how a culture examines itself." — Slavoj Zizek

  German toilets have a shelf. You can inspect what you've produced before flushing. French toilets rush everything away immediately. American toilets sit ambivalently between.

  cursor-mirror is the German toilet of AI.

  Most AI systems are French toilets — thoughts disappear instantly, no inspection possible. cursor-mirror provides hermeneutic self-examination: the ability to interpret and understand your own outputs.

  What context was assembled?
  What reasoning happened in thinking blocks?
  What tools were called and why?
  What files were read, written, modified?

  This matters for:

  Debugging — Why did it do that?
  Learning — What patterns work?
  Trust — Is this skill behaving as declared?
  Optimization — What's eating my tokens?

  See: Skill Ecosystem for how cursor-mirror enables skill curation.
----

https://news.ycombinator.com/item?id=23452607

According to Slavoj Žižek, Germans love Hermeneutic stool diagnostics:

https://www.youtube.com/watch?v=rzXPyCY7jbs

>Žižek on toilets. Slavoj Žižek during an architecture congress in Pamplona, Spain.

>The German toilets, the old kind -- now they are disappearing, but you still find them. It's the opposite. The hole is in front, so that when you produce excrement, they are displayed in the back, they don't disappear in water. This is the German ritual, you know? Use it every morning. Sniff, inspect your shits for traces of illness. It's high Hermeneutic. I think the original meaning of Hermeneutic may be this.

https://en.wikipedia.org/wiki/Hermeneutics

>Hermeneutics (/ˌhɜːrməˈnjuːtɪks/)[1] is the theory and methodology of interpretation, especially the interpretation of biblical texts, wisdom literature, and philosophical texts. Hermeneutics is more than interpretive principles or methods we resort to when immediate comprehension fails. Rather, hermeneutics is the art of understanding and of making oneself understood.

----

Here's an example cursor-mirror analysis of an experiment: 23 runs, with four agents playing several turns of Fluxx per run (1 run = 1 completion call), 1045+ events, 731 tool calls, 24 files created, 32 images generated, and 24 custom Fluxx cards created:

Cursor Mirror Analysis: Amsterdam Fluxx Championship -- Deep comprehensive scan of the entire FAFO tournament development:

amsterdam-flux CURSOR-MIRROR-ANALYSIS.md:

https://github.com/SimHacker/moollm/blob/main/skills/experim...

amsterdam-flux simulation runs:

https://github.com/SimHacker/moollm/tree/main/skills/experim...


Just an update re German toilets: no toilet installed in the last 30 years (that I know of) uses a shelf anymore. This reduces water usage per flush by about 50%.


But then what do you have to talk about all day??!


LLMs often already "know" the answer from the first output token and then emulate "reasoning" so that it appears as if they came to the conclusion through logic. There are a bunch of papers on this topic. At least this used to be the case a few months ago; I'm not sure about the current SOTA models.


Wait, that's not right, let me think through this more carefully...


Of course not, but it can often give a plausible answer, and it's possible that answer will happen to be correct: not because it did, or is capable of, any introspection, but because its token outputs in response to the question might semi-coincidentally be token inputs that change the future outputs in the same way.


Well, the entire field of explainable AI has mostly thrown in the towel.


LLMs are continuously improving. So something that didn't work a year ago became possible in November. If you tried to build Openclaw in 2024 it wouldn't have worked. Openclaw isn't groundbreaking, but it is extremely on the edge of the LLM capability curve.


The industrial revolution was extremely hard on individual craftspeople. Jobs became lower paying and lower skilled. People were forced to move into cities. Conditions didn't improve for decades. If AI is anything comparable, it's not going to get better in 5-10 years. It will be decades before the new 'jobs' come into place.


Seriously, it took roughly 150 years before people actually benefited from the industrial revolution. Saying we should condemn two lifetimes' worth of suffering to benefit literally a few thousand people out of billions is absolutely ludicrous.


But think about corporate aristocracy and their children!


This is basically not true. It's hard to debate this when we don't start from a position of truth.


It pretty much is, unless you think it's totally cool to work highly dangerous jobs that paid poorly while being treated like chattel slaves. There is a reason the 1800s saw the most violent labor actions in US history; it wasn't because workers were treated "well."

Completely disingenuous, learn your labor history.


People didn't feel the benefits for 150 years? Just absolute nonsense.


I think the AI sales orgs are just immature. It's hard to say this, but Google's Gemini sales team might be more professional.


What do you like about Gemini sales team?


AI isn’t good enough to do consulting yet.

