Is there a technical limitation that prevents chat histories from being stored locally on the user's computer instead of being stored on someone else's computer(s)?
Why do chat histories need to be accessible by OpenAI, its service partners, and anyone with the authority to request them from OpenAI?
If users want this design, as HN commenters suggest, i.e., if users want their chat histories to be accessible to OpenAI, its service providers, and anyone with authority to request them from OpenAI, then wouldn't it also be true that these users are not much concerned with "privacy"?
If so, then why would OpenAI proclaim they are "fighting the New York Times' invasion of user privacy", knowing that the NYT is prohibited from making the logs public and users generally do not care much about "privacy" anyway?
The restrictions on plaintiff NYT's use of the logs are greater than the restrictions, if any,^1 on OpenAI's use of them.
1. If any such restrictions existed, for example if OpenAI stated "We don't do X" in a "privacy policy" and people interpreted this as a legally enforceable restriction,^2 how would a user verify that the statement was true, i.e., that OpenAI has not violated the "restriction"? Silicon Valley companies like OpenAI are highly secretive.
2. As opposed to a statement by OpenAI of what OpenAI allegedly does not do. Compare with a potentially legally enforceable promise such as "OpenAI will not do X". Also consider that OpenAI may do Y, Z, etc., and make no mention of it to anyone. As it happens, Silicon Valley companies generally have a reputation for dishonesty.
Presumably for cross-device interactivity. If I interact with ChatGPT on my phone and then open it on my desktop, I might be a bit frustrated that I can't get to the chat I was having on my phone.
OpenAI could store the chat conversation in an encrypted format that only you, the user, can decrypt, with the client side deciding how many previous messages to include for additional context (a rough sketch follows below). But there's plenty of user overhead involved in an undertaking like that: a separate decryption password would likely be needed to ensure fully user-exclusive access, etc.
I'd appreciate and use a feature like that, but I doubt most "average" users would care.
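For illustration, here is a minimal sketch of that client-side scheme, assuming Python's cryptography package; the passphrase, salt handling, and message format are all invented for the example. The key is derived from a passphrase only the user knows, so the server would only ever hold opaque ciphertext:

    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(passphrase: bytes, salt: bytes) -> bytes:
        # Derive a 32-byte Fernet key from the user's passphrase.
        # The salt is stored alongside the ciphertext; it is not secret.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase))

    salt = os.urandom(16)
    f = Fernet(derive_key(b"user passphrase here", salt))

    # What the server would store: bytes it cannot read.
    blob = f.encrypt(b'{"role": "user", "content": "hello"}')

    # What only the client, holding the passphrase, can do.
    assert f.decrypt(blob) == b'{"role": "user", "content": "hello"}'

The overhead shows up immediately: lose the passphrase and the history is unrecoverable, and the server can no longer offer any feature (search, conversation titles, abuse review) that requires reading the text.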
Syncthing could do that, if the software were designed to store chats locally.
Ever since I put the effort into running Syncthing across all my devices (paired with restic on one of them for backup), I can't help but see how cross-device functionality and cloud sync are the Sysco hash potatoes that balloon Big Corp services' profit margins.
Not saying it's easy to set up. But when you get there it's so liberating, and you wish all software were bring-your-own-network.
Syncthing syncs only when both clients are running at the same time. Nobody who edits a document on a website expects that they'll need to leave that browser window open in order to see the document in a different browser.
Am I missing something? Is this seriously a heated HN debate over "why does this website need to store the text it sends to people who view the website"?
We're not talking about collaborative tooling, just a record of what you've asked an AI assistant. If it doesn't sync right away, it's not the end of the world. I find that's true with most things.
And the clients don't need to be running at the same time if you have a third device that's always on and receiving the changes from either (like a backup system). Eventually everything arrives. It's not as robust as what Google or iCloud gives you, but it's good enough for me.
Chatgpt.com is essentially a CRUD app. What you're saying amounts to a claim that it could conceivably have been designed to work dramatically differently from all other CRUD apps. And obviously that's true, but why would it be?
It's a website! You submit text that you'll view or edit later, so the server stores it. How is that controversial to an HN audience?
Also:
> the clients don't need to be running at the same time if you have a third device that's always on
An always-on device that stores data in order to sync it to clients is a server.
TBH it sounds like you're just imagining a very different service than the one OpenAI operates. You're imagining something where you send an input, the server returns an output - and after that they're out of the equation, and storing the output somewhere is a separate concern that could be left up to the user.
But the service they actually operate is functionally a collaborative document editor - the chat histories are basically rich text docs that you can view, edit, archive, share with others, and which are integrated with various server-side tools. And the document very obviously needs to be stored on the server to do all those things.
It's great that you'd enjoy a significantly worse product that requires you to also be familiar with a completely unrelated product.
For some reason, consumers have decided that they prefer a significantly better product that doesn't require any additional applications or technical expertise ¯\_(ツ)_/¯
Facebook Messenger tries to marry end-to-end encryption with multi-device access, and it's a horrible mess, with some messages not being delivered to some devices for hours, days, or ever.
I absolutely want OpenAI to keep all of my chats, and I absolutely don't want them to share them (voluntarily or by force) with any private agent.
I have exactly the same expectation of any document or communication platform. It's long been established as an accepted compromise between security and convenience.
> Is there a technical limitation that prevents chat histories from being stored locally on the user's computer
People access ChatGPT through different interfaces: the web, the desktop app, their phones, their tablets.
Therefore the conversations are stored on the servers. It's really not some hidden plot against users to steal their data. It's just how most users expect their apps to work.
Nonsense. It's easy to design an app where the server stores all information in encrypted form. If OpenAI "cared about privacy" as this PR piece claims, they would do this. They don't, because they (obviously) don't care and they (obviously) want the data for their own purposes.
"Easy" does not mean "lowest cost" or "easiest". It's far far far easier to stor conversations as plain text and return them as is, instead of having to encrypt, rotate keys, etc. etc.
That's a tricky system to get right and maintain
(Please do not interpret this as a defense of OpenAI! I just think that we shouldn't trivialize the task of encrypting user data so that it's not visible to the provider).
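To make the "rotate keys" point concrete: even with a library that handles rotation for you, the service still has to touch every stored blob. A small sketch, assuming Python's cryptography package (the keys and message are invented; a real deployment would pull keys from a KMS):

    from cryptography.fernet import Fernet, MultiFernet

    old = Fernet(Fernet.generate_key())
    new = Fernet(Fernet.generate_key())

    token = old.encrypt(b"a stored chat message")

    # During rotation the service must accept both keys at once:
    # the first key encrypts, and every listed key can decrypt.
    mf = MultiFernet([new, old])

    # ...then walk every stored blob and re-encrypt it under the new key.
    rotated = mf.rotate(token)
    assert mf.decrypt(rotated) == b"a stored chat message"

Multiply that rotate() call by every message of every user, and it's clear why plaintext plus access controls is the path of least resistance.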
If I am sending HTTP POST requests to some website, e.g., an OpenAI server, using my own choice of software via the command line, then I can save those requests on local storage. I can keep a record of what I have done. This history does not need to be saved by OpenAI and consequently end up included in a document production when (not if) OpenAI is sued. But I cannot control what OpenAI does; that's their decision.
For example, I save all the POST request bodies I send over the internet in my local forward proxy's log. I add the logs to tarballs and compress them with an algorithm that allows searching the logs in the tarballs without decompressing them. (A sketch of this kind of proxy logging follows below.)
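The commenter doesn't name their proxy, but as one hypothetical stand-in, a few lines of mitmproxy scripting can capture outgoing POST bodies to a local file (the filename and log format here are made up):

    # log_posts.py -- run with: mitmproxy -s log_posts.py
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # Append every outgoing POST body to a local log file.
        if flow.request.method == "POST":
            with open("post_bodies.log", "ab") as out:
                out.write(flow.request.pretty_url.encode() + b"\n")
                out.write(flow.request.content or b"")
                out.write(b"\n---\n")

Pointing your client (or curl) at the proxy then gives you a complete local record of what you sent, regardless of what OpenAI retains server-side.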
It does not matter what "reason" or "excuse" or "explanation" anyone presents, technical or otherwise, for why OpenAI does what it does.