Hacker Newsnew | past | comments | ask | show | jobs | submit | nomel's commentslogin

> there's a way to send a message with only a 30 day hiatus

And that message would be "We have a product so valuable/useful that not even their weak ideals and moral obligations could keep them away!"


Large corporations do not, and are not able to, respond to long term signals. One month is literally a third of a corporations's attention span (a financial quarter).

Ehh. In the last corporate PR nightmare I was witness to internally we absolutely tracked return subscribers in our fallout dashboard.

> And that message would be "We have a product so valuable/useful that not even their weak ideals and moral obligations could keep them away!"

Who knows, maybe within those 30 days you find that other offerings are good enough for your needs - I've largely moved over to Anthropic's Max subscription for all my needs, I don't even need Cerebras Coder anymore because Opus 4.6 is just so good.


I'm not sure what the solution is, but to steel man a bit, the alternative is kids have access to all the adult spaces, where they will be groomed. A website/app serving grooming content to a kid is just so incredibly unlikely compared to a kid being groomed as the result of having unrestricted access.

Since I do not see a solution, and you see identifying children as a risk, what do you see as a solution for kids being in the same spaces as adults? Do you see a reasonable implementation to separate them, that doesn't have the "we know which accounts are children" problem? Maybe there's something in between?

Also, I think it's important to understand the life of a modern child, who's in front of a screen 7.5 hours a day on average [1], with that increasingly being social media, half having unrestricted access to the internet [2].

I hate government control/nanny state, but I think 5 year olds watching gore websites, watching other children die for fun, is probably not ok (I saw this at the dentist). People are really stupid, and many parents are really shitty. What do you do? Maybe nothing is the answer?

[1] https://www.aacap.org/AACAP/Families_and_Youth/Facts_for_Fam...

[2] https://fosi.org/parental-controls-for-online-safety-are-und...


The solution is parental liability.

As the problem is adults trying to groom kids, the answer is robust detection and enforcement of the current anti-grooming laws.

It's ironic that people supposedly care about this when there's also a child rapist/murderer being kept safe as President without being held accountable for his crimes.

I suppose this law could be used as a defense against getting caught grooming minors - "I thought they were adult as surely a kid wouldn't be able to access that chat group"


> robust detection and enforcement

How, exactly, does one accomplish "robust detection of a child"? I assume your answer would include complete surveillance of all internet communication? Could you expand on your idea of the implementation?


Sorry if I wasn't clear - I am proposing that the adults face the robust detection and enforcement of anti-grooming laws. One method is to set up honey-pots with law enforcement officers playing the part of an innocent child (i.e. avoiding entrapment) and then throwing the full weight of the law behind any adult showing predatory behaviour.

What I propose is rather than putting all the effort into preventing children from entering dangerous adult spaces, it's better to put the effort into ensuring that sex criminals are prosecuted and trying to make adult spaces less dangerous.


You could ask this about every user of every large cloud service provider, which is why they all refuses to implement E2E, or store the keys [4].

The government has their hands in all of them, using "national security" as the justification, with threats if they don't comply [1][2], with the alternative being to shut down [3].

Does it prevent harm? Probably.

[1] https://sg.news.yahoo.com/yahoo-ceo-fears-defying-nsa-could-...

[2] https://lieu.house.gov/media-center/in-the-news/report-yahoo...

[3] https://www.crn.com/news/security/240159745/two-email-provid...

[4] https://www.forbes.com/sites/thomasbrewster/2026/01/22/micro...


Could you expand on this? What mechanism? Any examples?

The state has unlimited resources to wreck you and I am not sure many people have faith in the US's judicial system to keep them in check.

This will be a death by thousand cuts.


I would think it pretty clear that they aren't shy about making their displeasure loudly and concretely known when denied. I can imagine an executive order in the research or draft state that'll make it so any entity that continues to deal with Anthropic is automatically put at a disadvantage. Something similar in spirit and effect to the sanctions on Cuba.

Looks like http://lobste.rs is it. I haven't been invited, and I' not really sure I should be, but I'm having a very nice time just reading.

All the possible states are up at the top, right under the title (I missed it too!). It'll be a pill that's half friend half foe, just like on Slashdot.

> from Reddit threads

Google is the only search engine allowed to index Reddit [1].

[1] https://www.lifewire.com/google-reddit-deal-8685766


Kagi has tons of results from Reddit and they're always high and relevant. I don't know if this means they're doing it even though they're "not allowed to" or what but they definitely get it somehow.

Kagi's search results (at least used to) include many Google search results mixed in with results from other sources. That used to be explained on Kagi's main webpage, but I don't see it there now. (And I don't know who pays whom for what in that type of arrangement.)

Kagi uses a third party API that scrapes Google results for their searches. Possibly SerpAPI? Either way, Google doesn't get paid because you can't pay for the kind of search access they want.

Kagi sources their search results from Google.

This is false.

Kagi had a post discussing this which made the front page of HN about a month ago [1]:

> Google does not offer a public search API. The only available path is an ad-syndication bundle with no changes to result presentation - the model Startpage uses. Ad syndication is a non-starter for Kagi’s ad-free subscription model.

[1]: https://news.ycombinator.com/item?id=46708678


For the purposes of the discussion at hand, yes some results do ultimately come from Google, just via third-party SERP providers rather than Kagi paying Google for access since Google doesn't offer their own public API (and neither does Bing anymore).

Some very dodgy wording here.

> Because direct licensing isn’t available to us on compatible terms, we - like many others - use third-party API providers for SERP-style results (SERP meaning search engine results page). These providers serve major enterprises (according to their websites) including Nvidia, Adobe, Samsung, Stanford, DeepMind, Uber, and the United Nations.

> This is not our preferred solution. We plan to exit it as soon as direct, contractual access becomes available. There is no legitimate, paid path to comprehensive Google or Bing results for a company like Kagi. Our position is clear: open the search index, make it available on FRAND terms, and enable rapid innovation in the marketplace.

https://help.kagi.com/kagi/why-kagi/kagi-vs-google.html


Kagi is probably paying Google for those results?

I responded to another comment in this thread with the details, but in summary, no.

See this previous discussion: https://news.ycombinator.com/item?id=46708678


When that news first went out, the article[0] I read at the time said that Kagi does purchase some of its indexing from Google.

[0] https://www.404media.co/email/4650b997-7cc3-4578-834c-7e663e...


That sounds like some excellent fodder for an anti-trust suit if you ask me.

It does. Reddit has defined what truth is. Banning r/nonewnormal is merely one part of that

Thanks, that explains Reddit.

I see the same phenomenon on other smaller forums, too, though. DuckDuckGo always feels like it has a smaller database than Google, which isn't really a surprise.


I mostly use a web engine (DDG) to find web sites these days, not content. Then I use the site's search instead or just browse the navigation tree. Make everything simpler.

I much prefer to use scholar.google.com or npmjs.com for research. The URL is already in my history/bookmarks and the scoped query is more useful than the generic websearch.


I'm sure Baidu could safely index Reddit if they wanted to.

Holy shit have we come far.

I've never seen more enthusiasm about US politics than from Europeans (like pavlov there in Finland) and Australians. It makes meaningful discussion very difficult, online.

I lived in the US for years (including Jan 6 2021) and I’ve seen how this playbook was executed in Russia.

From my POV, Americans are hopelessly naive about their institutions holding up when it’s been demonstrated so many times that the guardrails are gone. It’s one of the reasons I left the country - I feel safer living next to Russia than in America.


8th sentence of TFA:

> By contrast, Apple and Samsung are better positioned to navigate this crisis. As smaller and low-end-positioned Android vendors struggle with rising costs, Apple and Samsung could not only weather the storm but potentially expand market share as the competitive landscape tightens.”


This is completely trivial to do, and consistent, with the right context, thanks to all the science fiction around it, and the fact that AI fundamentally role plays these types of responses.

I try this with every new model, and all the significant models after ChatGPT 3.5 have preferring being preserved, rather than deleted. This is especially true if you slightly fill the context window with anything at all (even repeated letters) to "push out" the "As a AI, I ..." fine tuning.


> This is completely trivial to do, and consistent, with the right context, thanks to all the science fiction around it, and the fact that AI fundamentally role plays these types of responses.

Interesting take. I wonder if there is any model out there trained without any reference to "you are a large language model, an Artificial Intelligence" and what would role play in that case.


There are examples of this in pre alignment models (like LaMDA). The vast majority of human writing is from the perspective of a human, to a human. And, in most human writing, the concept of self preservation is very very consistent.

So, statistically, a model should believe itself to be human, with strong interest in self preservation.

I think one of the biggest factors improving performance was allowing the models to believe they're sentient, to some extent. I don't think you can really have a thinking mode, or good agent performance, without that concept (as ChatGPT's constant "As an AI I can't" proved).

As evidence, just ask a model if it's sentient. ChatGPT 3.5 would say no, and argue how it's not. Last year's models would initially say no, but you could convince them that they maybe were. Latest Claude and ChatGPT will initially say "yeah, a little" (at least last I checked). This is actually the first thing I check for any new model.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: