
E2EE meaning what? I'm assuming for compliance reasons any company needs to be able to access all of their internal chats?

Not the OP, but I'm assuming they meant end-to-end encryption.

The company (customer) would be able to see their chats, but the provider (Dock) would not. I don’t think you’d need to have the encryption on a per-user level, but you could. The main point being that the customer’s chats would only be visible to them, not Dock. It would make some features more difficult though, namely search.

I’m not sure it’s entirely required, but I’d expect it as an option in the non-free tiers.
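
To make the split concrete, here is a minimal sketch (purely hypothetical, not tied to Dock's actual product) of per-customer client-side encryption with a symmetric key the customer keeps. The provider only ever handles ciphertext, which is also why features like server-side search get harder.

```python
# Hypothetical sketch: the customer holds the key, the provider stores only ciphertext.
from cryptography.fernet import Fernet

customer_key = Fernet.generate_key()   # generated and kept on the customer's side only
box = Fernet(customer_key)

ciphertext = box.encrypt(b"chat message: quarterly pricing discussion")

# The provider stores `ciphertext`. Without customer_key it cannot read it,
# so it also cannot index the plaintext for full-text search.
plaintext = box.decrypt(ciphertext)
```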


Not coding agents, but we do a lot of work trying to find the best tools, and the result is always that the simplest possible general tool that can get the job done beats a suite of complicated tools and rules on how to use them.

Well, jump to definition isn't exactly complicated?

And you can use whatever interface the language servers already expose for that functionality to e.g. VS Code?
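
For reference, "jump to definition" is already a standard request in the Language Server Protocol. A rough sketch of what an editor (or an agent) would send over JSON-RPC; the file path and position here are made up:

```python
import json

# Shape of an LSP textDocument/definition request (hypothetical file and position).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/definition",
    "params": {
        "textDocument": {"uri": "file:///repo/src/billing.py"},
        "position": {"line": 41, "character": 17},  # cursor on the symbol of interest
    },
}

# LSP frames each message with a Content-Length header before the JSON body.
body = json.dumps(request)
message = f"Content-Length: {len(body.encode('utf-8'))}\r\n\r\n{body}"
```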


It can be: What definition to jump to if there are multiple (e.g. multiple Translation Units)? What if the function is overloaded and none of the types match?

With grep it's easy: it always shows everything that matches.


Sure, there might be multiple definitions to jump to.

With grep you get lots of false positives, and for some languages you need a lot of extra rules to know what to grep for. (E.g. in Python you might read `+` at the call site, but you actually need to grep for `__add__` to find the definition.)
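
A toy example of that Python case (class and values invented for illustration): the call site reads as `+`, but the definition lives under `__add__`, so that's the name you'd have to grep for.

```python
# The call site uses `+`, but the definition you want is named __add__.
class Money:
    def __init__(self, cents: int):
        self.cents = cents

    def __add__(self, other: "Money") -> "Money":  # <- this is what you must grep for
        return Money(self.cents + other.cents)

total = Money(150) + Money(250)  # reads as `+`, resolves to Money.__add__
print(total.cents)               # 400
```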


I literally have this monitor already and these pixels are humongous, even at 3 feet away. Also the viewing angle degradation is too much, so much so that it's irritating to look at the edge of the screen from the center. A very poor monitor indeed.

Part of humanity will be, sure. But what about the rest?

Climate change will not kill humanity off, but it's likely to cause suffering we haven't seen since WWII (or worse).

Unfortunately I've seen a glimpse of how the bottom 50% of people (not in a developed country but globally) get by today. If one doesn't care for their suffering and their lives, it's easy to confirm that society on average (not at the median) will be more prosperous. But that will more likely manifest as a few hundred trillionaires living in space and a few tens of thousands of billionaires who serve them with their services.

The bottom billions will likely just starve, move around desperately due to war, famine, fire, and flood, be turned away at closed borders, and face who knows what new types of cruelty that will bubble up in the future. What do we tell them?


I don't know of any serious climate scientist who believes that climate change will cause billions of deaths. Can you point to one?

As another comment pointed out, if you care for the bottom 50%, you should be extremely happy about progress over the past 50 (or 100 or 200) years.


The bottom 50% has indeed gotten better (mostly, though not everywhere), but technological progress is not guaranteeing it will save everyone _anymore_. Fewer nations care about climate change and about taking care of the poorer parts of the world (I hope you don't expect me to show why, too).

The billion is my number. Almost all scientists and officials underestimate death tolls, deliberately and naively. I have a rule of thumb that whatever number the New York Times publishes for an earthquake on day one, you should multiply by 100 for the eventual total count. No one estimated a million-plus deaths from COVID in the US. The moment I read the first comprehensive report from China, I assumed 1% of the population would die. Unfortunately, in places like India that actually was true. So yeah, I do think at minimum a billion people will die due to climate change in the next 50 years, and billions more will suffer in unthinkable ways. That's _my_ estimate.


If you look at how the lower 50% globally have done over the last 50 years, they are way better off materially on the whole. Asia has gone from, typically, shacks and bicycles to aircon and SUVs.

You want a spray that converts dog poo into dog diarrhea, and you think that's better?

The purpose of these reviewers is to flag the bug to you. You still need to read the surrounding code and see if it's valid, serious, and worth a fix. Why does it matter if it then says the opposite after the fix? Did it even happen often, or is this just an anecdote of a one-time thing?

It’s like a linter with conflicting rules (can’t use tabs, rewrite to spaces; can’t use spaces, rewrite to tabs). Something that runs itself in circles and can also block a change unless the comment is resolved simply adds noise, and a bot that contradicts itself does not add confidence to a change.

Do you have actual experience with Bugbot? It's live in our org and is actually pretty good: almost none of its comments are frivolous or wrong, and it finds genuine bugs most reviewers miss. This is unlike Graphite and Copilot, so no one's glazing AI for AI's sake.

Bugbot is now a valuable part of our SD process. If you have genuine examples showing that we are just being delusional or haven't hit a roadblock yet, I would love to know.


I assume that this is the same as when Cursor spontaneously decides to show code review comments in the IDE as part of some upsell? In that case yes I’m familiar and they were all subtly wrong.



You can't ask people about their personal experience and then deny them the right to answer.

Wait, so Cursor has multiple code review products? I dunno man, if they market the bad one at me and don’t tell me about the good one then I don’t think that’s my fault.

As the other person said, Deep Research is invaluable, but it's not as good at generating hypotheses at the true bleeding edge of research. The OG ChatGPT 4.0, with no guardrails, briefly generated outrageously amazing hypotheses that actually made sense. After that they have all been neutered beyond use in this direction.

Should Google have gotten involved, though? Calico, Verily, Isomorphic, etc. seem like they're destined not to succeed.

At the time I first got involved, Google Health was still a thing, but it was clear it was not going to be successful. I felt that Google's ML (even early on, they had tons of ML, just most of it wasn't known externally) was going to be very useful for genomics and drug discovery.

Verily was its own thing that was unrelated to my push in Research. I think Larry Page knew Andy Conrad and told him he could do what he wanted (which led to Verily focusing on medical devices, which is a terrible industry to be in). They've pivoted a few times without much real success. My hope is that Alphabet sheds Verily (they've been trying) or just admit it's a failure and shut it down. It was just never run with the right philosophy.

Calico... that came out of Larry and Art Levinson. I guess Larry thought Art knew the secret to living forever, and that by giving him billions Art would come up with the solution to immortality and Larry would have first access to it. But they were ultra-secretive and tried to have the best of both worlds: full access to Google3 and Borg, but without Googlers having any access to Calico. That, combined with a number of other things, has led Calico to just be a quiet and not very interesting research group. I expect it to disband at some point.

Isomorphic is more recent than any of the stuff I was involved in, and is DeepMind's (specifically Demis's) attempt to commercialize their work with AlphaFold. However, everybody in the field knows the strategy of 1. solve protein structure prediction 2. ??? 3. design profitable drugs and get them approved... is not a great one, because protein structure determination has never been the rate-limiting step in identifying targets and developing leads. I agree I don't really see a future for it, but Demis has at least 10-20 years of runway before he has to take off or bail.

All of my suggestions were just for Google to do research with the community and publish it (especially the model code and weights, but also pipelines to prep data for learning) and to turn a few of the ideas into products in Google Cloud (that's how Google Genomics was born... I was talking to Jeff, and he said "if we compress the genome enough, we can store it all on Flash, which would make search fast but cheap, and we'd have a useful product for genomics analysis companies"). IMHO Jeff's team substantially achieved their goals before the DeepMind stuff: DeepVariant was well-respected, but almost every person who worked on it and related systems got burned out and moved on.

What is success, anyway, in biotech? Is it making a drug that makes a lot of money? What if you do that, but it costs so much that people go bankrupt taking it? Or is the goal to make substantial improvements to the technology, potentially discovering key biological details that truly improve people's lives? Many would say that becoming a successful real estate ownership company is the real destination of any successful pharma/biotech.


Whoa. Finally, someone I relate to! Thanks for such amazing intel!

In my opinion, forays into biology by moonshot hopefuls fail for one of two reasons: either they completely ignore all the current wisdom from academia and industry, or they recruit the very academics who are culturally responsible for the science rot we have at this time. Calico (and CZI, and I'm starting to fear, Arc) fell prey to the latter. Once you recruit one tenured professor, IMO you're done. The level of tenure-track trauma and academic rot they bring in can burn even a trillion dollars into dead-end initiatives.

IMO (after decades of daydreaming about this scenario), the only plausible way to recreate a Bell Labs for biology is to start something behind a single radical person and recruit the smartest undergrads into that place directly. Ensure that they never become experts at just one thing, so they have no allegiance to a method or field. Then let that horde loose on a single problem and see what comes out. For better or worse, Neuralink seems to be doing that right. I just wish they didn't abuse the monkeys that much!

To me, success in biotechnology is measurably helping make a drug that lets a person smile and breathe easy who otherwise would not have. Surprisingly easy and hard at the same time.


If someone can write something to convert between JSON and XML, I'll bless your offspring.

JSON to XML should be easy, but XML to JSON can't really be done one-to-one because of attributes. There is no clear way to map them. You can establish a mapping for yourself, but it will just be a convention on top of JSON.
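
For example, one common convention (roughly what the xmltodict Python library does by default, shown here as a sketch) is to prefix attribute names with `@` and put element text under `#text`:

```python
import json
import xmltodict

xml = '<book id="42" lang="en">Dune</book>'

# Attributes become "@"-prefixed keys, element text goes under "#text".
as_dict = xmltodict.parse(xml)
print(json.dumps(as_dict, indent=2))
# {
#   "book": {
#     "@id": "42",
#     "@lang": "en",
#     "#text": "Dune"
#   }
# }

round_tripped = xmltodict.unparse(as_dict)  # back to XML under the same convention
```

The round trip only works if both sides agree on that convention, which is exactly the point: it's a convention layered on top of JSON, not a canonical mapping.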


Are LLMs too expensive, not reliable enough to avoid mistakes, or just something you haven't considered?

It's not something I generally need to do, so I haven't been keeping up with how good LLMs are at this sort of conversion. But seeing your question I was curious, so I took a couple of examples from https://www.json.org/example.html and gave them to the default model in the ChatGPT app (GPT 5.2, at least that's the default for my ChatGPT Plus account), and it seemed to get each of them right on the first attempt.

