rmwaite's comments | Hacker News

I think AI can add a lot of functionality, but on the margins, making things “work better”. I think treating AI as the focal point, where AI itself is The Feature, is a mistake for most things. But making code completion work better or suggestions more accurate? Those are things that are largely invisible UI-wise.


I’ve always seen the hubris as an essential component of doing things “you didn’t know you couldn’t do.” A lot of great ideas are discounted as impossible and it takes hubris to fly in the face of that perceived impossibility. I reckon most of the time it doesn’t work out and the pessimism was warranted—but those times it does work out make up for it.


To be honest, if you’re using a tool that stores things as trees and blobs and almost every part of its functionality is influenced by that fact, then you just need to understand trees and blobs. This is like trying to teach someone how to interact with the file system and they are like “whoa whoa whoa, directories? Files? I don’t have time to understand this, I just want to organize my documents.” Actually I take that back, it isn’t /like/ that, it is /exactly/ that.


I see your point but … trees and blobs are an implementation detail that I shouldn’t need to know. This is different from files and directories (at least directories) in your example. What I need to know is that I have a graph and am moving references around - I don’t need to know how it’s stored.

The git mental model is more complex than CVS, but strangely enough the docs almost invariably refer to the internal implementation details, which shouldn’t be needed to work with it.

I remember when git appeared: the internet was full of guides called ‘git finally explained’, and they all started by explaining the plumbing and the implementation. I think this approach has stuck, and it does not make things easy to understand.

Please note I say all this having used git for close to 20 years, being familiar with the git codebase, and understanding it very well.

I just think the documentation and UI work very hard at making it difficult to understand.
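
For what it’s worth, here is a minimal sketch of what those trees and blobs actually are on disk: each loose object is just a zlib-compressed file whose name is its hash, with a small "<type> <size>" header in front. This assumes a loose (unpacked) object, and the hash below is only a placeholder.

    import zlib
    from pathlib import Path

    # Placeholder hash -- substitute one from `git rev-parse HEAD`. Objects that
    # have already been packed into a packfile won't be found this way.
    sha = "0123456789abcdef0123456789abcdef01234567"
    path = Path(".git") / "objects" / sha[:2] / sha[2:]

    # Loose objects are zlib-compressed, with a "<type> <size>\0" header
    # in front of the raw content.
    raw = zlib.decompress(path.read_bytes())
    header, _, body = raw.partition(b"\0")
    obj_type, size = header.split()  # b"commit", b"tree", or b"blob"
    print(obj_type.decode(), int(size), len(body))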


Thanks for this, I can’t believe it never occurred to me to try this.


If you read carefully you will see that they never said AI has a theory of mind.


Then what do we do? lol.


We understand the meaning that we wish to convey and then intelligently choose the best method that we have at our disposal to communicate that.

LLMs find the most likely next word based on their billions of previously scanned word combinations and contexts. It's an entirely different process.
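
A minimal sketch of that next-word step, with a made-up four-word vocabulary and made-up scores standing in for what a real model computes over tens of thousands of tokens:

    import math

    # Toy next-token step: the model assigns a score (logit) to every word in
    # its vocabulary given the context; softmax turns scores into probabilities
    # and the highest-probability word is emitted. These numbers are invented
    # for illustration only.
    logits = {"chocolate": 2.1, "broccoli": -0.3, "more": 3.4, "sleep": 0.7}
    z = sum(math.exp(v) for v in logits.values())
    probs = {w: math.exp(v) / z for w, v in logits.items()}
    next_word = max(probs, key=probs.get)
    print(next_word, round(probs[next_word], 2))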


How does this intelligence work? Can you explain how 'meaning' is expressed in neurons, or whatever it is that makes up consciousness?

I don't think we know. Or if we have theories, the error bars are massive.

>LLMs find the most likely next word based on their billions of previously scanned word combinations and contexts. It's an entirely different process.

How is that different from using one's learned vocabulary?


How do you know we understand and LLMs don't? To an outsider they look the same. Indeed, that is the point of solipsism.


Because unlike a human brain, we can actually read the whitepaper on how the process works.

They do not "think", they "language", i.e. large language model.


What is thinking, and why do you think an LLM ingesting content is not also reading? Clearly they're absorbing some sort of information from text content, aka reading.


I think you don't understand how LLMs work. They run on math; the only parallel between an LLM and a human is the output.


Are you saying we don't run on math? How much do you know of how the brain functions?

This sort of Socratic questioning shows that no one can truly answer these questions, because no one actually knows how the human mind works, or how to distinguish or even define intelligence.


So do neurons.


Personally, I think you need to chill. He wasn’t “attacking” it; he was just commenting and included his interpretation that it was from AI. Why don’t YOU just focus on the primary point of his comments instead of latching onto the AI part, or is it okay when you do it?


Users such as him routinely use the AI excuse to try to discredit valid posts without evidence. It is an altogether evil act, attempting to suppress valid discourse, and it merits recognition. I was only mocking this action. Even if something truly were written by AI, that is not a valid reason to try to discredit it. It's not okay for anyone to do it. Why ruin an otherwise excellent comment for no reason?


Exactly. I mean, if you had asked people 20 years ago how probable the current LLMs (warts and all) would be, I think there would have been similar cynicism.


“You can’t tell people anything.” I read this in a blog article a long time ago, and I’m constantly reminded of it. This gives me that same feeling, where someone is attempting to give some insight into their POV (precisely /because/ it’s alien to so many) and the responses (well, some of them) miss the point entirely.


I dunno, usually when I eat chocolate it makes me want to eat more chocolate. What is the mechanism for this?

