I think AI can add a lot of functionality, but on the margins, making things “work better”. I think AI as a focal point, where it is The Feature, is a mistake for most things. But making code completion work better or suggestions more accurate? Those are the kinds of improvements that are largely invisible UI-wise.
I’ve always seen the hubris as an essential component of doing things “you didn’t know you couldn’t do.” A lot of great ideas are discounted as impossible and it takes hubris to fly in the face of that perceived impossibility. I reckon most of the time it doesn’t work out and the pessimism was warranted—but those times it does work out make up for it.
To be honest, if you’re using a tool that stores things as trees and blobs and almost every part of its functionality is influenced by that fact, then you just need to understand trees and blobs. This is like trying to teach someone how to interact with the file system and they are like “whoa whoa whoa, directories? Files? I don’t have time to understand this, I just want to organize my documents.” Actually I take that back, it isn’t /like/ that, it is /exactly/ that.
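For what it’s worth, the trees-and-blobs model is small enough to fit in a few lines. Here is a rough Python sketch (a toy illustration of the idea, not git’s actual code) of how git names a blob, using only the standard library:

```python
# Toy sketch: git stores a file as a "blob" object whose name is the
# SHA-1 of a tiny header plus the file's contents. Trees then map
# names to blob (or tree) ids, exactly like directories map names
# to files (or subdirectories).
import hashlib

def blob_id(content: bytes) -> str:
    header = b"blob %d\x00" % len(content)  # "blob <size>\0"
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo hello | git hash-object --stdin`
print(blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```

The point being: the “implementation detail” is two data structures you already understand from the file system.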
I see your point, but… trees and blobs are an implementation detail that I shouldn’t need to know. This is different from files and directories (at least directories) in your example. What I want to know is that I have a graph and am moving references around; I don’t need to know how it’s stored.
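To be fair, that graph-and-pointers model is also small enough to sketch. Something like this toy Python (a hypothetical structure for illustration, nothing to do with how git actually stores things on disk) is the whole mental model being described:

```python
# Toy model: commits form a DAG via parent links, and a branch is
# nothing but a movable pointer to one node in that graph.
commits = {
    "a1": {"parent": None, "msg": "initial commit"},
    "b2": {"parent": "a1", "msg": "add feature"},
    "c3": {"parent": "b2", "msg": "fix bug"},
}
refs = {"main": "c3", "topic": "b2"}

# "Moving references around": repointing main is the model for
# something like `git reset --hard b2`.
refs["main"] = "b2"

# History is just a walk up the parent links from wherever a ref points.
node = refs["main"]
while node is not None:
    print(node, commits[node]["msg"])
    node = commits[node]["parent"]
```

Under that model you never need to know whether a commit is stored as trees and blobs, a packfile, or anything else.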
The git mental model is more complex than CVS’s, but strangely enough the docs almost invariably refer to the internal implementation details, which shouldn’t be needed to work with it.
I remember when git appeared: the internet was full of guides called “git finally explained”, and they all started by explaining the plumbing and the implementation. I think this has stuck, and it does not make things easy to understand.
Please note I say all this having used git for close to 20 years, being familiar with the git codebase, and understanding it very well.
I just think the documentation and UI work very hard to make it difficult to understand.
What is thinking, and why do you think that LLMs ingesting content are not also reading? Clearly they’re absorbing some sort of information from text content, aka reading.
Are you saying we don’t run on math? How much do you know about how the brain functions?
This sort of Socratic questioning shows that no one can truly answer these questions, because no one actually understands the human mind, or how to distinguish or even define intelligence.
Personally, I think you need to chill. He wasn’t “attacking” it; he was just commenting and included his interpretation that it was written by AI. Why don’t YOU just focus on the primary point of his comments instead of latching onto the AI part? Or is it okay when you do it?
Users such as him routinely use the AI excuse to try to discredit valid posts without evidence. It is an altogether evil act, an attempt to suppress valid discourse, and it deserves to be called out. I was only mocking that behavior. Even if something truly were written by AI, that is not a valid reason to try to discredit it. It’s not okay for anyone to do it. Why ruin an otherwise excellent comment for no reason?
Exactly. I mean, if you had asked people 20 years ago how probable today’s LLMs (warts and all) were, I think there would have been similar cynicism.
“You can’t tell people anything.” I read this in a blog article a long time ago, and I’m constantly reminded of it. This gives me that same feeling: someone is attempting to give some insight into their POV (precisely /because/ it’s alien to so many), and the responses (well, some of them) miss the point entirely.