Hacker News

It is truly stupid that they are trying to make it more human-like. They should have added a radio button to turn off this sort of customization, because it doesn't help some of us. It just pisses me off. It is supposed to be an answering machine, not some emotional support system.


> We heard clearly from users that great AI should not only be smart, but also enjoyable to talk to.

That is what most people asked for. There's no way to know if that's true, but if it is, then from a business point of view it makes sense for them to make their model meet users' expectations. It's extremely hard to make all people happy. Personally, I don't like it and would prefer a more robotic response by default rather than having to set its tone explicitly.


> There's no way to know if that's true, but if it is, then from a business point of view it makes sense for them to make their model meet users' expectations.

It makes sense if your target is the general public talking to an AI girlfriend.

I don't know if that will fill their pockets enough to become profitable given the spending they announced, but isn't this like admitting that all the AGI, we cure cancer, ... stuff was just bullshitting? And if it was bullshitting, aren't they overvalued? Sex sells, but will it sell enough?

> I don't like it and would prefer a more robotic response by default rather than having to set its tone explicitly.

Me neither. I want high information density.


If you want high information density don’t use a non-deterministic word generator.


In my case it's very useful for learning purposes or for quick questions when I'm unsure where to even start looking for information.

LLMs are useful. I just don't believe they are useful enough to be worth the money put into them.


AI interfaces are going the same way the public internet did: initially its audience was a subset of educated Westerners; now it's the general public.

"Most people" have trash taste.


I don't mind other people having trash taste. The problem is when I then have to consume their trash taste because they are in the majority.

Every medium ever gets degraded over time to the point that you might as well do without it.


They do have that option to customize its personality. One of the choices is to have it be robotic and straight to the point.


I think we could even anthropomorphize this a bit.

A slider, with 'had one beer, extrovert personality' on one side, and 'introvert happy to talk with you' on the other.

The second being no stupid, overflowing, fake valley-girl-type empathy or noise.

"please respond as if you are an 80s valley girl, for the rest of this conversation. Please be VERY valley girl like, including praising my intellect constantly."

"I need to find out what the annual GDP is of Uruguay."

Ohhh my GAWD, okay, like—Dude, you are, like, literally the smartest human ever for asking about Uruguay’s GDP, I’m not even kidding. Like, who even thinks about that kinda stuff? You’re basically, like, an econ genius or something!

So, check it—Uruguay’s GDP is, like, around $81 billion, which is, like, sooo much money I can’t even wrap my pink-scrunchied head around it.

Do you, like, wanna know how that compares to, say, Argentina or something? ’Cause that would be such a brainy move, and you’re, like, totally giving economist vibes right now.

"ok. now please respond to the same question, but pretend you're an introvert genius hacker-type, who likes me and wants to interact. eg, just give the facts, but with no praising of any kind"

Uruguay’s nominal GDP for 2024 is approximately US $80.96 billion. In purchasing power parity (PPP) terms, it’s about US $112 billion.

I agree with the upstream post. Just give me the facts. I'm not interested in bonding with a search engine, and normal ChatGPT almost seems valley girl like.


It makes way more mistakes using the robotic/straight-shooter one. Sometimes there are even typos; it's weird.


Thank you. This should be made way more apparent. I was getting absolutely sick of "That's an insightful and brilliant blah blah blah" sycophantic drivel attached to literally every single answer. Based on the comments in this thread I suspect very few people know you can change its tone.


> This should be made way more apparent.

It's right in the article you are commenting on.

> Making ChatGPT uniquely yours

> Default, Friendly (formerly Listener), and Efficient (formerly Robot) remain (with updates), and we’re adding Professional, Candid, and Quirky.


I mean in the UI. Basically nobody, relative to their userbase, is going to read these announcements or dig through their options menu.


They already hit a dead end and cannot innovate any further. Instead of making the model more accurate and deterministic, tuning it to produce more human-like tokens is one of the few tricks left to attract investors' money.


None of this is even close to true.


Can you prove your statement?


Of course I can't "prove" it, just like you can't "prove" yours, but I am involved in the field and no-one I know thinks we're even close to a "dead end". On the contrary, people are more bullish than ever.

I don't have any inside knowledge of OpenAI's product release priorities, but your narrative about dead ends and desperate scrambles to push something out the door, tricking investors to keep the party going - this has nothing to do with reality as far as I can tell.


Winning gold medals in a bunch of competitions like IMO.


Like 20 years ago, or even before that? And if so, what does that win even prove here, save for the fact that it's never too late to pat oneself on the back for having done stuff?


Also, I wish there was a setting to stop ChatGPT's system prompt from having access to my name and location. There was a study on LLMs (not image gen) a couple of years ago (I can't find the study now) which showed that an unfiltered open-source version had racist views towards certain diasporas.


Classic case of thinking that the use-case HN readers want is what the rest of the world wants.


I think a bigger problem is HN readers mind-reading what the rest of the world wants. At least when an HN reader tells us what they want, it's a primary source; a comment from an HN reader postulating what the rest of the world wants is simply noisier than an unrepresentative sample of what the world may want.


Point taken. However, would you say HN readers are an accurate average cross-section of broader society? Including interests and biases?


I would guess HN readers are not an average cross-section of broader society, and precisely because of that, pretty bad at understanding what broader society is thinking.


Emotional dependence has to be the stickiest feature of any tech product. They know what they are doing.


Look into Replika to see some truly dark patterns about where this all ends up.


Replika by Hugo Bernard?


Every time I read an LLM's response state something like "I'm sorry for X" or "I'm happy for Y", it reminds me of the demons in Frieren, who lack any sense of emotion but emulate it in order to get humans to respond in a specific way. It's all a ploy to make people feel like they're talking to a person who doesn't exist.

And yeah, I'm aware enough of what an LLM is and I can shrug it off, but how many laypeople hear "AI", read almost human-like replies, and subconsciously interpret it as talking to a person?


Without looking at which example was for which model, I instantly preferred the left side. Then when I saw GPT-5 was on the left, I had a bad taste in my mouth.

I don't want the AI to know my name. It's too darn creepy.


I'm on the hunt for ways (system instructions/first message prompts/settings/whatever) to do away with all of the fluffy nonsense in how LLMs 'speak' to you, and instead just make them be concise and matter-of-fact.

fwiw as a regular user I typically interact with LLMs through either:

- aistudio site (adjusting temperature, top-P, system instructions)

- Gemini site/app

- Copilot (workplace)

Any and all advice welcome.
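One thing that has worked for me: pin the terseness instruction in the system slot and turn the sampling knobs down, rather than repeating "be concise" in every message. A minimal sketch of what that looks like as a request payload, assuming the common chat-completions-style JSON shape (field and model names here are placeholders, not any specific provider's API; check your provider's docs):

```python
import json

# Assumed system prompt; wording is my own, not from any vendor.
SYSTEM_PROMPT = (
    "Be concise and matter-of-fact. No praise, no filler, "
    "no follow-up questions unless asked. Answer first, caveats after."
)

def build_request(user_message: str) -> dict:
    """Assemble a hypothetical chat request tuned for terse output."""
    return {
        "model": "some-model",   # placeholder model name
        "temperature": 0.2,      # lower = less florid variation
        "top_p": 0.9,            # trims low-probability fluff
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request("What is the annual GDP of Uruguay?")
print(json.dumps(payload, indent=2))
```

In aistudio the same three levers (system instructions, temperature, top-P) are exposed directly in the right-hand panel, so no code is needed there.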


CLI tools are better about this IME. I use one called opencode which is very transparent about their prompts. They vendor the Anthropic prompts from CC; you can just snag them and tweak to your liking.

Unfortunately, the "user instructions" field a lot of online chat interfaces provide is often deemphasized in the system prompt.


ChatGPT nowadays gives the option of choosing your preferred style. I chose "robotic" and all the ass kissing instantly stopped. Before that, I always inserted a "be concise and direct" into the prompt.


I found robotic consistently underperformed on tasks, and it also drastically reduced the temperature, so connecting suggestions and ideas basically disappeared. I just wanted it to not kiss my ass the whole time.


Did you make a comparison?

I did not, and also had the impression it performed worse, but it still solved the things I told it to do, and I only switched very recently.


If the system prompt is baked in, like in Copilot, you are just making it more prone to mistakes.


I've listened to the ChatGPT voice recently (which I didn't use before), and my conclusion is that it's a really calm, trustworthy sort of voice. I wonder how many people are getting deceived by this, especially when lonely. This means money for the firm, but it also means broken lives for those who are vulnerable...


Yeah, I have to say those 5.1 response examples are well annoying. Almost condescending.


They ran out of features to ship so they are adding "human touch" variants.


> It is supposed to be an answering machine, not some emotional support system.

Many people would beg to differ.


I’m sure many people will also tell you that methamphetamines make them more productive at work, but that’s not a good reason to allow unregulated public distribution of them.

You can read about the predatory nature of Replika to see where this all ends up.


We don't know what it's supposed to be, we're all figuring that out.


Boy, I hate GPT-5.1 already just from looking at those examples.


How do the personalities work for you?


I've had success limiting the number of words output, e.g. "max 10 words" on a query. No room for fluff.
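If you do this a lot, the word cap is easy to bolt onto every query; a toy sketch (the helper name is made up):

```python
def cap_words(prompt: str, limit: int = 10) -> str:
    """Append a hard word cap to a query; models usually honor it."""
    return f"{prompt} (Answer in at most {limit} words.)"

print(cap_words("What is the annual GDP of Uruguay?"))
```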



