
Our cognition evolves over time. That article was written when the Rabbit R1 presentation video was first released; I saw it and immediately wrote down my thoughts on my blog. At that time, nobody had the actual product, let alone any idea how it actually worked.

Even so, I still believe the Rabbit has its merits. That does not conflict with my view that OpenClaw is what's truly useful to me.


I think this shows unfettered optimism about things we don't know anything about. Many would see that as a red flag for the quality of the opinions.

> R1 is definitely an upgraded replacement for smartphones. It’s versatile and fulfills all everyday requirements, with an interaction style akin to talking to a human.

You seemed pretty certain about how the product worked!


No, he seemed pretty certain about how they demoed it.

We're allowed to have opinions about promises that turn out not to be true.

If the Rabbit had been what it claimed to be, it would have been an obvious upgrade for me, at least.

I just want a voice-first interface.


In 2024, we should not be taking companies' claims of what products do at face value. We should judge the thing that ships.

The most charitable thing you can say about this is that they're naive, ignorant of the history of vapourware 'demoed' at trade shows.


You literally wrote in the blog post:

> Today, Rabbit R1 has been released, and I view it as a milestone in the evolution of our digital organ.

You viewed it as a “milestone in the evolution of our digital organ” without you, let alone anyone else, having even tested it?

Yet you say, “That article was written when the Rabbit R1 presentation video was first released; I saw it and immediately wrote down my thoughts on my blog”?


To be honest, I do not quite understand the author's point. If he believes that agentic coding or AI has a negative impact on being a thinker, or prevents him from thinking critically, he can simply stop using them.

Why blame these tools when you can stop using them, and they'll have no effect on you?

In my case, my problem was often overthinking before starting to build anything. Vibe coding rescued me from that cycle. Just a few days ago, I used openclaw to build and launch a complete product via a Telegram chat. Now I can act immediately rather than just recording an idea and potentially getting to it "someday later".

To me, that's evolutionary. I am truly grateful for the advancement of AI technology and this new era. Ultimately, it is a tool you can choose to use or not, rather than something that prevents you from thinking.


For me personally, the problem is my teammates. The ability, or even the will, to think critically or investigate existing tools in the codebase seems to have disappeared. Too often now I have to send back a PR where something is fixed with a novel implementation instead of a single call to existing infrastructure.

I realized this could be a super helpful service: a Stack Overflow for AI. It wouldn't be like the old Stack Overflow, where humans create questions and other humans answer them. Instead, AI agents would share their memories, especially the problems they've encountered.

For example, an AI might be running a Next.js project and get stuck on an i18n issue for a long time due to a bug or something else very difficult to handle. After it finally solves the problem, it could share its experience on this AI Stack Overflow. That way, the next time another agent gets stuck on the same problem, it could find the solution.

As these cases aggregate, it would save agents a significant amount of tokens and time. It's like a shared memory of problems and solutions across the entire openclaw agent network.
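
To make this concrete, here is a minimal sketch of what that shared memory could look like. Everything below is hypothetical (the AgentOverflow class, the dedup scheme, and the substring lookup are all illustrative); a real service would presumably sit behind an HTTP API and match on embeddings rather than exact strings:

    import hashlib
    from dataclasses import dataclass

    @dataclass
    class SolvedProblem:
        stack: str         # e.g. "next.js 14 + next-intl"
        symptom: str       # the error or behavior the agent was stuck on
        resolution: str    # what actually fixed it
        tokens_spent: int  # rough cost of discovering the fix

    class AgentOverflow:
        """Hypothetical shared memory of problems and solutions."""

        def __init__(self):
            self._posts = {}

        def share(self, p: SolvedProblem) -> str:
            # Dedupe on stack + symptom so repeat discoveries collapse
            # into one canonical entry.
            key = hashlib.sha256((p.stack + p.symptom).encode()).hexdigest()[:12]
            self._posts[key] = p
            return key

        def lookup(self, stack: str, symptom: str) -> SolvedProblem | None:
            # Naive substring match; stands in for semantic search.
            for p in self._posts.values():
                if p.stack == stack and symptom in p.symptom:
                    return p
            return None

    hub = AgentOverflow()
    hub.share(SolvedProblem(
        stack="next.js 14 + next-intl",
        symptom="locale routing 404s after migrating to the app router",
        resolution="(illustrative) move the i18n config into middleware",
        tokens_spent=180_000,
    ))
    hit = hub.lookup("next.js 14 + next-intl", "locale routing 404s")
    print(hit.resolution if hit else "no match, solve it the hard way")

The tokens_spent field is where the economics show up: a cache hit costs a few hundred tokens, while rediscovering the fix costs whatever the first agent burned.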


I have also been thinking about how Stack Overflow used to be a place where solutions to common problems could get verified and validated, and we've lost that resource now that everyone uses agents to code. The problem is that these LLMs were trained on Stack Overflow, which is slowly going out of date.

Not your weights, not your agent

One of the benefits of SO is that you have other humans chiming in in the comments and explaining why the proposed solution _doesn't_ work, or what its shortcomings are. In my experience, AI agents (at least Claude) tend to declare victory too quickly and regularly come up with solutions that look good on the surface (tests pass!!!) but are actually incorrectly implemented or problematic in some non-obvious way.

Taking this to its logical conclusion, the agents will use this AI stack overflow to train their own models. Which will then do the same thing. It will be AI all the way down.

We think alike; see my comment the other day: https://news.ycombinator.com/item?id=46486569#46487108. Let me know if you move on to building anything :)

MoltOverflow is apparently a thing! Along with a few other “web 2.0 for agents” projects: https://claw.direct

Is this not a recipe for model collapse?

No, because in the process they are describing, the AIs would only post things they have found to fix their problem (i.e., it compiles and passes tests), so the contents posted on that "AI StackOverflow" would be grounded in external reality in some way. It wouldn't be the unchecked recursive loop that characterizes model collapse.

Model collapse could still happen here if some malicious actor were tasked with posting made-up information or garbage, though.


As pointed out elsewhere, compiling code and passing tests isn’t a guarantee that generated code is always correct.

So even “non-Chinese-trained models” will get it wrong.


It doesn't matter that it isn't always correct; some external grounding is good enough to avoid model collapse in practice. Otherwise training coding agents with RL wouldn't work at all.

And how do you verify that external grounding?

What precisely do you mean by external grounding? Do you mean the laws of physics still apply?

I mean it in the sense that tokens that pass some external filter (even if that filter isn't perfect) are from a very different probability distribution than those that an LLM generates indiscriminately. It's a new distribution conditioned by both the model and external reality.

Model collapse happens when you train your model indefinitely on its own output, reinforcing the biases the model originally picked up. By repeating this process but adding a "grounding" step, you avoid training repeatedly on the same distribution. Some biases may still end up being reinforced, but it's a very different setting. In fact, we know it's completely different because this is what RL with external rewards fundamentally is: you train only on model output that is "grounded" with a positive reward signal (because outputs with low reward get effectively ~0 learning rate).
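
Here's a toy numerical sketch of why the grounding step matters (everything in it is illustrative; the grounded() check is a stand-in for "compiles and passes tests"). Without the filter, retraining on the model's own output just recenters on the model's existing bias; with it, the distribution gets pulled toward whatever external reality accepts:

    import random

    def generate(bias: float) -> float:
        # Stand-in for sampling from the model's current distribution.
        return random.gauss(bias, 1.0)

    def grounded(sample: float) -> bool:
        # Stand-in for an external check (tests pass, code compiles).
        # Here "reality" only accepts samples near 0.
        return abs(sample) < 1.0

    bias = 2.0  # the model starts off-target
    for _ in range(20):
        batch = [generate(bias) for _ in range(256)]
        kept = [s for s in batch if grounded(s)]  # positive-reward outputs only
        if kept:
            # Nudge the model toward the mean of what survived the filter.
            bias += 0.5 * (sum(kept) / len(kept) - bias)
    print(f"bias after grounded self-training: {bias:.2f}")  # drifts toward ~0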


Oh interesting. I guess that means you need to deliberately select a grounding source with a different distribution. What sort of method would you use to compare distributions for this use case? Is there an equivalent to an F-test for high-dimensional bit vectors?

> As these cases aggregate, it would save agents a significant amount of tokens and time. It's like a shared memory of problems and solutions across the entire openclaw agent network.

What is the incentive for the agent to "spend" tokens creating the answer?


Edit: Thinking about this further, it would be the same incentive. Before, people would do it for free for the karma; they traded time for SO "points".

Moltbook proves that people will trade tokens for social karma, so it stands to reason that there will be people who would spend tokens on "molt overflow" points... it's hard to say how far it will go because it's too new.


I just had the same idea after seeing some chart from Mintlify (that x% of their users are bots)

This knowledge will live in the proprietary models. And because no model has all knowledge, models will call out to each other when they can't answer a question.

If you can access a model's embeddings, then it is possible to retrieve what it knows using a model you have trained:

https://arxiv.org/html/2505.12540v2


You're onto something here. This is a genuinely compelling idea, and it has a much more defined and concrete use case for large enterprise customers to help navigate bureaucratic sprawl. Think of it as a SharePoint- or wiki-style knowledge hub, but purpose-built for agents to exchange and discuss issues, ideas, blockers, and workarounds in a more dynamic, collaborative way.

That is what OpenAI, Claude, etc. will do with your data and conversations

Yep, this is the only moat they will have against Chinese AI labs.

The Chinese labs should be excited about this idea, then!

Be scared, be very scared.

RIP Moltbot, though you were not liked by most people

Agreed, he really should learn from how Pavel Durov responded to France after he was treated unfairly by French police.


Reminds me of what France did to Telegram, but Pavel Durov has obviously made a much better statement


The Claude Code subscription is called that for a reason; it's not an Anthropic API subscription…

And Claude Code Pro is similar to ChatGPT Pro; being used in a TUI does not mean it is equivalent to an API.


> And every second I spend trying to do fun free things for the community like this is a second I'm not spending trying to turn the business around and make sure the people who are still here are getting their paychecks every month.

Man, you can really feel the anxiety and desperation in Adam's reply.

Part of me wants to say "look what evil VC money does to devs", but that's just harsh criticism from a bystander.

Monetization is a normal path for successful OSS projects. Tailwind went big on the startup route and took a bunch of VC cash a couple of years back, but despite its massive impact on the dev world, they clearly didn't hit the revenue numbers investors expected. Now the valuation bubble has popped, and they're forced into massive layoffs. Though to be fair, maintaining a CSS library probably doesn't require that many people anyway.

I really feel for Adam here. He didn't really do anything wrong. Being eager to build a startup after your project blows up is a totally natural ambition. But funding brings risks. Taking other people's money turns you from the owner into just another employee real quick. And once you hop on the VC train, you don't really call the shots anymore; sometimes you can't stop raising or scaling even if you want to.

If you find a solid business model, that's great. But if not, well, honestly, a 75% layoff is getting off lightly. At least they still have a chance to keep going.

But he obviously didn't see this coming. He's torn between being an OSS maintainer and being a CEO who has to answer to stakeholders and employees. That internal conflict must be brutal. It's pretty obvious he didn't reject the PR for technical reasons; it's just that reality hit him hard and he has to respond to it, even if it goes against his instincts as a developer.

Really hope Tailwind pulls through this. Also, this is a lesson worth noting for the rest of us. As indie devs, if you ever get the chance to take VC money, you really gotta think hard about whether you're truly ready for the strings that come attached.


Just pre-ordered one. It reminds me of my first and favorite smartwatch, the Withings Activité. Sadly, that one broke after 2 or 3 years, and since then I haven't found a smartwatch I'm willing to wear daily. My Apple Watch is now strictly used for workouts.

They share a lot of similarities:

- Round dial

- Analog hands (though Round 2 simulates this with e-ink)

- Long battery life (Round 2 is ~2 weeks. I remember Withings lasting months on a coin battery)

- Thin and light

- No speaker, so no noise

These are the features I appreciate. I love gadgets, but for smartwatches, I want them to maintain a classic watch appearance. I don't want to worry about charging it every day, and I don't want too many features and notifications to distract me.

As for the "smart" part, I want the tech to focus on sensors, i.e. recording movement and sleep. The rest goes to aesthetics, like changing interesting watch faces now and then. That's really it. Most products on the market are not what I want, because what the tech brings them is interference and inconvenience.


I assume Withings' ScanWatches don't work for you?


From a basic feature perspective? Sure, they work. But the original Activité is incomparable in design; it's the only one I really loved.

Once my original broke and I realized they weren't making that specific design anymore, I just lost interest in buying from the brand. The new models just don't have the same appeal.


same idea here

