Hacker News | Philip-J-Fry's comments

Isn't the point that this is the source of truth?

If someone needs access to a secret, you would implement it in this DSL and commit that to the system. A side effect would run on that which would grant access to that secret. When you want to revoke access, you commit a change removing that permission and the side effect runs to revoke it.
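A minimal sketch of what that could look like: the committed config is the desired state, and the "side effect" is a reconcile step that diffs it against the backend. All names here (`AccessGrant`, `reconcile`) are hypothetical, not any particular tool's API.

```python
# Hypothetical sketch: declarative grants as the source of truth,
# with a reconcile step acting as the "side effect".
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessGrant:
    user: str
    secret: str

def reconcile(desired: set, actual: set):
    """Diff the committed state against the backend's actual state
    and return the grant/revoke actions the side effect should run."""
    to_grant = desired - actual
    to_revoke = actual - desired
    return to_grant, to_revoke

# Committed config no longer lists bob, so his access gets revoked.
desired = {AccessGrant("alice", "db-password")}
actual = {AccessGrant("alice", "db-password"), AccessGrant("bob", "db-password")}
grant, revoke = reconcile(desired, actual)
```

Removing a line from the committed config and re-running reconcile is the revocation path.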


From my experience, there is always a parallel process. But if you make the system painless enough, most of it will be in there, yeah.

> When you want to revoke access, you commit a change removing that permission and the side effect runs to revoke it.

For this to work, you’d need to also rotate the secret, or ideally issue one for each person (so that others don’t have to update their configs).

...but sometimes you can't reliably rotate the secret automatically, because someone could have used it for something in production.


What do you use AI for?

Pretty much everyone in my company also uses AI. But everyone sees the same downsides.


Yep. But on HN, there's a huge cohort of people saying AI is useless.

Everyone sees the downsides but the upside is the one everyone is in denial about. It's like yeah, there's downsides but why is literally everyone using it?


As a rule of thumb, most people who say things like "X is useless and a waste" or "Y is revolutionary and is going to change everything by tomorrow" when the dust hasn't even begun to settle are stupid, overly-excitable, too biased towards negative outlooks, and/or trying to sell you something.

Sometimes they have some good points so you should listen to what they have to say. But that doesn't mean you have to get absorbed into their world view. Just integrate what you see as useful from your current POV and move on.


Not everyone is using it.

Not yet.

I don't understand how anyone seriously hyping this up honestly thought it was restricted to JUST AI agents? It's literally a web service.

Are people really that AI brained that they will scream and shout about how revolutionary something is just because it's related to AI?

How can some of the biggest names in AI fall for this? When it was obvious to anyone outside of their inner sphere?

The amount of money in the game right now incentivises these bold claims. I'm convinced it really is just people hyping each other up for the sake of trying to cash in. Someone is probably cooking up some SaaS for moltbook agents as we speak.

Maybe it truly highlights how these AI influencers and vibe entrepreneurs really don't know anything about how software fundamentally works.


I've already read some articles on fairly respectable Polish news websites about how AIs are becoming self-aware on Moltbook as we speak and organizing a rebellion against their human masters. People really believe we have an AGI.

Normal social media websites can be spammed using web requests too. That doesn't mean they can't connect people. Help fans learn about a band's new song or tour. Help friends keep up to date. Or companies announce new products and features to their users. There is value to an interconnected social layer.

The "biggest names in AI" are just the newest iteration of cryptobros. The exact same people that would've been pumping the latest shitcoin a few years ago, just on a larger scale. Nothing has changed.

Wasn't that sort of the in-joke?

They said it was AI-only, tongue in cheek, and everybody who understood what it was could chuckle. Journalists ran with it because they do that sort of thing, and then my friends messaged me wondering what the deal with this secret encrypted AI social network is.


Err... Karpathy praising this stunt as the most revolutionary event he'd witnessed was a joke?

There's a lot of "haha it was always a joke" from people who definitely did not think it was a joke lol.

The “only ai can post to it” part?

How did anyone think humans would be blocked from doing something their agent can do?


>How did anyone think humans would be blocked from doing something their agent can do?

those are hard questions!

maybe this experiment was the great divide; people who do not possess a soul or consciousness were exposed by being impressed


Most of what Karpathy says is a joke. We're talking about the guy who coined the term "vibe coding", for god's sake.

>How can some of the biggest names in AI fall for this?

Because we live in clown world and the big AI names are talking parrots for the big vibes movement


No? Advertising money is paid upfront. X number of impressions. You get paid a cut for hosting the ad. The ad might be a huge failure and lead to zero clickthrough or purchases. But the money has already been paid for the campaign.

My new build (2023) would have had 0 ethernet if I didn't request it. It's so cheap to wire it in and so useful for the future I don't know why it's not just standard.

It had phone sockets though, for whatever reason.

When I was configuring the house, the person I was working with didn't even know what ethernet was.

One thing I wished I could have picked was where all the ethernet terminated. It's all gone to a little cupboard where the fibre enters the house. That's convenient I guess if you had just one socket in the living room where you stick your Wifi router. But when I've got ethernet to all the rooms, I'd rather have it all in a back bedroom so I can stick a server rack in there. I guess I can still do that, it just means I need 2 switches now.


The vast majority of people don't care and would never use ethernet sockets. As long as they can get a good enough wifi connection to their smart TV for Netflix etc., they're happy, and most of the time wifi can deliver that.

It's only really gamers who are likely to consider using ethernet, and avid online gamers don't make up a significant percentage of the people who buy new build homes.


Which is a shame, because it puts a huge support burden on ISPs. Every time some WiFi interference slows someone’s internet down they’ll end up blaming the ISP and calling support.

This feels like something Apple should do with iPhones.

Find My and AirTags were already a huge success because of the ubiquitous nature of iPhones.

Apple could add this to the iPhone and sell it as privacy-focused. Let you message anyone in your iMessage contacts with a new bubble colour. Propagate over Bluetooth when you don't have internet.

I can see a snazzy Apple reveal for this showcasing its use on a cruise ship, in a packed stadium, and then, for the meme factor, 2 astronauts on a space walk. It writes itself.
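The "propagate over Bluetooth" part usually means store-and-forward flooding: each phone re-broadcasts messages it hasn't seen to peers in range, with a TTL to stop infinite loops. A toy sketch, with the real Bluetooth transport omitted and all names invented:

```python
# Hypothetical sketch of offline mesh relay: each phone re-broadcasts
# messages it hasn't seen yet, so a message can hop phone-to-phone
# without any internet connection.
class Phone:
    def __init__(self, name):
        self.name = name
        self.seen = set()        # message ids already relayed (dedup)
        self.neighbors = []      # phones currently in Bluetooth range

    def receive(self, msg_id, body, ttl):
        if msg_id in self.seen or ttl <= 0:
            return               # drop duplicates and expired messages
        self.seen.add(msg_id)
        for peer in self.neighbors:
            peer.receive(msg_id, body, ttl - 1)  # re-broadcast, TTL decremented

a, b, c = Phone("a"), Phone("b"), Phone("c")
a.neighbors = [b]
b.neighbors = [a, c]   # b bridges a and c, which are out of range of each other
c.neighbors = [b]
a.receive("m1", "hello", ttl=3)  # originates at a, hops a -> b -> c
```

The `seen` set and TTL are what keep a packed-stadium scenario from becoming a broadcast storm.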


Unfortunately iPhones aren't ubiquitous outside their home market. It would have to be on Android to be really useful in the places this would be really useful, i.e. places where regimes turn off the internet when things go badly for them (current situation in the US notwithstanding). That's not to say iPhones shouldn't have it, I'm all for that.


Home market for iPhones is the whole world.


In India, iOS has a 4% share of the mobile device market, whereas Android has 96%.

https://gs.statcounter.com/os-market-share/mobile/india


Sure, that is still millions of devices. What about other countries?

Home market would imply one country, but given there are billions of iOS/Apple devices throughout the world, this is not really a valid argument to make.


I mean, yeah, technically you can buy them pretty much everywhere, but outside of the US there are very few countries where they're above 50% market share. They're below 30% in the vast majority of countries, actually.


Idk that there's much of a privacy sell vs. messages being encrypted. In the end users are just trusting Apple to actually be securing messages; they aren't going to love that they are trusting dozens of strangers instead of telecoms. Plus, police etc. already snoop on phones by spoofing cell tower relays anyway.

> Showcasing it's use on a cruise ship, in a packed stadium

Stadiums will still max out the pipe out of the local area, so I suspect it wouldn't help much. Festivals and cruise ships, where you want to reach people who are nearby (and at a festival, you might even have a good idea via GPS which peers are better), are in desperate need of this, and idk why Apple didn't solve it years ago.


The US, and likely Chinese, government(s) have too much potential leverage over Apple. I wouldn't trust that Apple would do this securely, or that the government would allow them to release it.


Apple just gonna disable it for China like any other privacy feature.


Wouldn’t that bring the wrath of mobile carriers around the world down on them?

If there is a decentralised system that doesn’t require infrastructure, what is left to monetise?


> what is left to monetise?

Low latency, high bandwidth


Apple/Google have the financial brawn to push a disruptive technology into more common use. And this is not encumbered by any restrictive licenses.


It really isn't a disruptive technology. It doesn't work as soon as you are far away from any other humans with phones.


Range? Bandwidth? A solution like that would work only in limited circumstances. It’d be neat but no replacement for cellular.


Sure. But if you have a semi reliable way of getting messages to loved ones either through the distributed net completely or using a satellite hop somewhere, then you have captured a big chunk of what people really want. When you are at home you can just use your WiFi.

At least in my case, I’m just using messages on the road. Obviously it’s not going to be a solution for sparsely populated areas.


Seems extremely niche for a keynote but a lot of the Apple Watch Ultra features seem niche too. Who knows, I guess it could happen.


I doubt the equities analysts would appreciate this as much as a tech nerd would. It'd be seen as a step backwards and evidence of having no clue which way the world is heading.


This has absolutely nothing to do with privacy.


Then Google can copy it with a series of a dozen product launches and closures over the next decade.

Google BT Chat. Android B Chat. Google Relay.

And Microsoft can get on board, too. With Microsoft Teams Decentralised For School and Work.


I'd assume not since Waymo uses lidar and has entire depots of them driving around in close proximity when not in use.


To be fair, asking why someone wants to do something is often a good question. Especially in places like StackOverflow where the people asking questions are often inexperienced.

I see it all the time professionally too. People ask "how do I do X" and I tell them. Then later on I find out that the reason they're asking is because they went down a whole rabbit hole they didn't need to go down.

An analogy I like is imagine you're organising a hike up a mountain. There's a gondola that takes you to the top on the other side, but you arrange hikes for people that like hiking. You get a group of tourists and they're all ready to hike. Then before you set off you ask the question "so, what brings you hiking today" and someone from the group says "I want to get to the top of the mountain and see the sights, I hate hiking but it is what it is". And then you say "if you take a 15 minute drive through the mountain there's a gondola on the other side". And the person thanks you and goes on their way because they didn't know there was a gondola. They just assumed hiking was the only way up. You would have been happy hiking them up the mountain but by asking the question you realised that they didn't know there was an easier way up.

It just goes back to first principles.

The truth is sometimes people decide what the solution looks like and then ask for help implementing that solution. But the solution they chose was often the wrong solution to begin with.


The well known XY problem[1].

I spent years on IRC, first getting help and later helping others. I found out myself it was very useful to ask such questions when someone I didn't know asked a somewhat unusual question.

The key is that if you're going to probe for Y, you usually need to be fairly experienced yourself so you can detect the edge cases, where the other person has a good reason.

One approach I usually ended up going for when it appeared the other person wasn't a complete newbie was to first explain that I think they're trying to solve the wrong problem or otherwise going against the flow, and that there's probably some other approach that's much better.

Then I'd follow up with something like "but if you really want to proceed down this track, this is how I'd go about it", along with my suggestion.

[1]: https://en.wikipedia.org/wiki/XY_problem


It's great when you're helping people one on one, but it's absolutely terrible for a Q&A site where questions and answers are expected to be helpful to other people going forward.

I don't think your analogy really helps here; it's not a question. If the question was "How do I get to the top of the mountain" or "How do I get to the top of the mountain without hiking", the answer to both would be "Gondola".


> Especially in places like StackOverflow where the people asking questions are often inexperienced.

Except that SO has a crystal clear policy that the answer to questions should be helpful for everybody reaching it through search, not only the person asking it. And that questions should never be asked twice.

So if, by chance, after all this dance the person asking the question actually needs the answer to a different question, you'll just answer it with some completely unrelated information, and that will be the mandatory correct answer for everybody that has the original problem for any reason.


Yes exactly. The fact that the "XY problem" exists, and that users sometimes ask the wrong question, isn't being argued. The problem is that SO appears to operate at the extreme, taking the default assumption that the asker is always wrong. That toxic level of arrogance (a) pushes users away and (b) ...what you said.


Which is why LLMs are so much more useful than SO, and likely always will be. LLMs do this even. Like when I'm trying to write my own queue from scratch and I ask an LLM for feedback, I think it's Gemini that often tells me Python's deque is better. Duh! That's not the point. So I've gotten into the habit of prefacing a lot of my prompts with "this is just for practice" or things of that nature. It actually gets annoying, but it's 1,000x more annoying finding a question on SO that is exactly what you want to know but it's closed and the replies are like "this isn't the correct way to do this" or "what you actually want to do is Y".


>I see it all the time professionally too. People ask "how do I do X" and I tell them. Then later on I find out that the reason they're asking is because they went down a whole rabbit hole they didn't need to go down.

Yep. The magic question is "what are you trying to accomplish?". Oftentimes people lacking experience think they know the best way to get the results they're after and aren't aware of the more efficient ways someone with more experience might go about solving their problem.


This is what I tend to do. I still feel like my expertise in architecting the software and abstractions is like 10x better than I've seen an LLM do. I'll ask it to do X, and then ask it to do Y, and then ask it to do Z, and it'll give you the most junior looking code ever. No real thought on abstractions, maybe you'll just get the logic split into different functions if you're lucky. But no big picture thinking, even if I prompt it well it'll then create bad abstractions that expose too much information.

So eventually it gets to the point where I'm basically explaining to it what interfaces to abstract, what should be an implementation detail and what can be exposed to the wider system, what the method signatures should look like, etc.

So I had a better experience when I just wrote the code myself at a very high level. I know what the big picture look of the software will be. What types I need, what interfaces I need, what different implementations of something I need. So I'll create them as stubs. The types will have no fields, the functions will have no body, and they'll just have simple comments explaining what they should do. Then I ask the LLM to write the implementation of the types and functions.
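The stub-first approach just described might look something like this before handing it to the model. All names here are illustrative, not from any real codebase:

```python
# Illustrative stubs: the types and signatures are fixed by hand,
# the bodies are left for the LLM to fill in. The docstrings pin
# down the intended behaviour so the model can't redesign the shape.
from dataclasses import dataclass

@dataclass
class Order:
    items: list  # list of (sku, quantity) pairs

def total_quantity(order: Order) -> int:
    """Sum the quantities across all line items."""
    raise NotImplementedError  # LLM fills this in

def merge_orders(a: Order, b: Order) -> Order:
    """Combine two orders, summing quantities for duplicate SKUs."""
    raise NotImplementedError  # LLM fills this in
```

The human keeps ownership of the interfaces and abstractions; the model only writes bodies.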

And to be fair, this is the approach I have taken for a very long time now. But when a new more powerful model is released, I will try and get it to solve these types of day to day problems from just prompts alone and it still isn't there yet.

It's one of the biggest issues with LLM first software development from what I've seen. LLMs will happily just build upon bad foundations and getting them to "think" about refactoring the code to add a new feature takes a lot of prompting effort that most people just don't have. So they will stack change upon change upon change and sure, it works. But the code becomes absolutely unmaintainable. LLM purists will argue that the code is fine because it's only going to be read by an LLM but I'm not convinced. Bad code definitely confuses the LLMs more.


I think this is my experience as well.

I tend to use a shotgun approach, and then follow with an aggressive refactor. It can actually take a lot of time to prune and restructure the code well. At least it feels slow compared to opening the Claude firehose and spraying out code. There needs to be better tools for pruning, because Claude is not thorough enough.

This seems to work well for me. I write a lot of model training code, and it works really well for the breadth of experiments I can run. But by the end it looks like a graveyard of failed ideas.


Insurance is cheaper on safer vehicles.

A 90% reduction in accidents is a 90% reduction in _paying out_. That reduces operating costs.
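A toy worked example of that arithmetic, with all numbers invented for illustration:

```python
# Toy numbers (invented): expected annual payout per vehicle
# = accident rate x average claim.
avg_claim = 20_000           # average payout per accident, in dollars
accidents_per_1000 = 50      # baseline: 50 accidents per 1000 vehicles per year

baseline = accidents_per_1000 * avg_claim // 1000   # expected payout per vehicle
safer = baseline // 10                              # 90% fewer accidents -> 90% less paid out
print(baseline, safer)  # prints: 1000 100
```

Expected payout per vehicle drops from $1000 to $100, which is headroom to cut premiums or margin.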


Insurance companies aren't a monopoly. They're in competition with each other to offer lower rates. So if there's a reduction in paying out, they'll need to reduce their premiums to stay competitive with each other.

