zeknife's comments | Hacker News

Ruby has a similarly intuitive `3.times do ... end` syntax


go also has

    for range 5 { ... }
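For reference, a minimal runnable sketch of that loop; the range-over-integer form assumes Go 1.22 or newer:

    package main

    import "fmt"

    func main() {
        // Ranging over an int counts from 0 up to n-1 (Go 1.22+).
        for i := range 5 {
            fmt.Println(i) // prints 0 through 4
        }
    }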


A human being informed of a mistake will usually be able to resolve it and learn something in the process, whereas an LLM is more likely to spiral into nonsense


You must know people without egos. Humans are better at correcting their mistakes, but far worse at admitting them.

But yes, as edge-case handlers, humans still have an edge.


LLMs by contrast love to admit their mistakes and self-flagellate, and then go on to not correct them. Seems like a worse tradeoff.


It's true that the big public-facing chatbots love to admit to mistakes.

It's not obvious to me that they're better at admitting their mistakes. Part of being good at admitting mistakes is recognizing when you haven't made one. That humans tend to lean too far in that direction shouldn't suggest that the right amount of that behavior is... less than zero.


Not when your goal is to create ASI: Artificial Sycophant Intelligence


and this is why LLMs are getting cooked

they fed internet data into that shit and then basically "told" the LLM to behave, because surprise surprise, humans can sometimes be nastier


You must know better humans than I do.


At least until they spend some time with it


It also doesn't need to be good for anything to turn the world upside down, but it would be nice if it were


I see about 40 paragraphs?


I assume you're not very interested in the subject if you think synthesizers aren't real instruments


I wouldn't trust a human to drive a car if they had perfect vision but were otherwise deaf, had no proprioception and were unable to walk out of their car to observe and interact with the world.


And yet deaf people regularly drive cars, as do blind-in-one-eye people, and I've never seen somebody leave their vehicle during active driving.


I didn't mean that a human driver needs to leave their vehicle to drive safely; I meant that we understand the world because we live in it. No amount of machine learning can give autonomous vehicles a complete enough world model to deal with novel situations, because you need to actually leave the road and interact with the world directly in order to understand it at that level.


> I've never seen somebody leave their vehicle during active driving.

Wake me up when the tech reaches Level 6: Ghost Ride the Whip [0].

[0] https://en.wikipedia.org/wiki/Ghost_riding


How many images do you need? What are the use-cases that need a bunch of artificial yet photoreal images produced or altered without human supervision?


I think people still expect a lot of trial and error before getting a usable image. At 2 cents per pull of the slot machine lever, it would still take a while, though.


Thinking is subconscious when working on complex problems. Thinking is symbolic or spatial when working in relevant domains. And in my own experience, I often know what is going to come next in my internal monologues, without having to actually put words to the thoughts. That is, the thinking has already happened and the words are just narration.


I too am never surprised by my brain's narration, but maybe the brain tricks you into never being surprised and into acting like your thoughts follow a perfectly sensible sequence.

It would be incredibly tedious to be surprised every 5 seconds.


By what definition of the Turing test? LLMs are by no means capable of passing for human in a direct comparison and under scrutiny; they don't even have enough perception to succeed in theory.

