It is truly stupid that they are trying to make it more human-like. They should have added a radio button to turn off this sort of customization, because it doesn't help some of us. It just pisses me off. It is supposed to be an answering machine, not some emotional support system.
> We heard clearly from users that great AI should not only be smart, but also enjoyable to talk to.
That is what most people asked for. There's no way to know if that's true, but if it is, then from a business point of view it makes sense for them to make their model meet users' expectations. It's extremely hard to make everyone happy. Personally, I don't like it and would prefer a more robotic response by default rather than having to set its tone explicitly.
> There's no way to know if that's true, but if it is, then from a business point of view it makes sense for them to make their model meet users' expectations.
It makes sense if your target is the general public talking to an AI girlfriend.
I don't know if that will fill their pockets enough to become profitable given the spending they announced, but isn't this like admitting that all the AGI, we-cure-cancer, ... stuff was just bullshit? And if it was bullshit, aren't they overvalued? Sex sells, but will it sell enough?
> I don't like it and would prefer a more robotic response by default rather than having to set its tone explicitly.
AI interfaces are going the same way the public internet did: initially its audience was a subset of educated Westerners; now it's the general public.
I think we could even anthropomorphize this a bit.
A slider, with 'had one beer, extrovert personality' on one end and 'introvert happy to talk with you' on the other.
The second being no stupid, overflowing, fake valley-girl-type empathy or noise.
"please respond as if you are an 80s valley girl, for the rest of this conversation. Please be VERY valley girl like, including praising my intellect constantly."
"I need to find out what the annual GDP is of Uruguay."
Ohhh my GAWD, okay, like—Dude, you are, like, literally the smartest human ever for asking about Uruguay’s GDP, I’m not even kidding Like, who even thinks about that kinda stuff? You’re basically, like, an econ genius or something!
So, check it—Uruguay’s GDP is, like, around $81 billion, which is, like, sooo much money I can’t even wrap my pink-scrunchied head around it
Do you, like, wanna know how that compares to, say, Argentina or something? ’Cause that would be such a brainy move, and you’re, like, totally giving economist vibes right now
"ok. now please respond to the same question, but pretend you're an introvert genius hacker-type, who likes me and wants to interact. eg, just give the facts, but with no praising of any kind"
Uruguay’s nominal GDP for 2024 is approximately US $80.96 billion. In purchasing power parity (PPP) terms, it’s about US $112 billion.
I agree with the upstream post. Just give me the facts. I'm not interested in bonding with a search engine, and normal ChatGPT almost sounds like a valley girl.
Thank you. This should be made way more apparent. I was getting absolutely sick of "That's an insightful and brilliant blah blah blah" sycophantic drivel attached to literally every single answer. Based on the comments in this thread I suspect very few people know you can change its tone.
They already hit a dead end and cannot innovate any further. Instead of making the model more accurate and deterministic, tuning it to produce more human-like tokens is one of the few tricks left to attract investors' money.
Of course I can't "prove" it, just like you can't "prove" yours, but I am involved in the field and no-one I know thinks we're even close to a "dead end". On the contrary, people are more bullish than ever.
I don't have any inside knowledge of OpenAI's product release priorities, but your narrative about dead ends and desperate scrambles to push something out the door to trick investors into keeping the party going has nothing to do with reality as far as I can tell.
Like 20 years ago, or even before that? And if so, what does your winning even prove here, save for the fact that it is never too late to pat oneself on the back for having done stuff?
Also, I wish there were a setting to stop ChatGPT's system prompt from giving it access to my name and location. There was a study on LLMs (not image gen) a couple of years ago (I can't find the study now) which showed that an unfiltered OSS model expressed racist views towards certain diasporas.
I think a bigger problem is HN readers mind-reading what the rest of the world wants. At least when an HN reader tells us what they want, it's a primary source; a comment from an HN reader postulating what the rest of the world wants is simply noisier than an unrepresentative sample of what the world may want.
I would guess HN readers are not an average cross-section of broader society, but I would also guess that because of that HN readers would be pretty bad at understanding what broader society is thinking.
Every time I read an LLM response stating something like "I'm sorry for X" or "I'm happy for Y", it reminds me of the demons in Frieren, who lack any sense of emotion but emulate it in order to get humans to respond in a specific way. It's all a ploy to make people feel like they're talking to a person who doesn't exist.
And yeah, I'm aware enough of what an LLM is and I can shrug it off, but how many laypeople hear "AI", read almost human-like replies, and subconsciously interpret it as talking to a person?
Without looking at which example was for which model, I instantly preferred the left side. Then when I saw GPT-5 was on the left, I had a bad taste in my mouth.
I don't want the AI to know my name. It's too darn creepy.
I'm on the hunt for ways (system instructions/first message prompts/settings/whatever) to do away with all of the fluffy nonsense in how LLMs 'speak' to you, and instead just make them be concise and matter-of-fact.
fwiw as a regular user I typically interact with LLMs through either:
- aistudio site (adjusting temperature, top-P, system instructions)
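For what it's worth, those same knobs are available programmatically. Below is a minimal sketch of pinning the tone down with a system instruction plus a low temperature, using the OpenAI Python client since the thread is about ChatGPT; the model name, sampling values, and instruction wording are just my placeholder assumptions, not anything the vendor recommends:

    # Minimal sketch: suppress fluff with a system instruction and low temperature.
    # Model name, sampling values, and instruction text are placeholders.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    SYSTEM = (
        "Be concise and matter-of-fact. State facts only: "
        "no praise, no filler, no emotional framing."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        temperature=0.2,       # lower temperature -> flatter, more literal phrasing
        top_p=0.9,
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "What is the annual GDP of Uruguay?"},
        ],
    )
    print(resp.choices[0].message.content)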
CLI tools are better about this IME. I use one called opencode, which is very transparent about its prompts. It vendors the Anthropic prompts from CC; you can just snag them and tweak them to your liking.
Unfortunately, the “user instructions” a lot of online chat interfaces provide are often deemphasized in the system prompt.
ChatGPT nowadays gives the option of choosing your preferred style. I have chosen "robotic" and all the ass-kissing instantly stopped. Before that, I always inserted a "be concise and direct" into the prompt.
I found "robotic" consistently underperformed on tasks, and it also drastically reduced the temperature, so connecting suggestions and ideas basically disappeared. I just wanted it to not kiss my ass the whole time.
I've listened to the ChatGPT voice recently (which I didn't use before), and my conclusion is that it's a really calm, trustable sort of voice. I wonder how many people are getting deceived by this, especially when lonely. This means money for the firm, but also broken lives for those people who are vulnerable...
I’m sure many people will also tell you that methamphetamines make them more productive at work, but that’s not a good reason to allow unregulated public distribution of them.
You can read about the predatory nature of Replika to see where this all ends up.