
I agree, but this entire conversation misses my point: "alignment" originally just meant making the LLM behave the way you want it to.

A GPT that hasn't been aligned does not work the way we expect: you give it a prompt, and it just keeps autocompleting until it reaches an end state.

Even making the GPT answer the question in the prompt, rather than autocompleting it into nonsense, is an example of alignment.
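
To make that concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the base gpt2 checkpoint (which has not been instruction-tuned):

    # A base model treats the prompt as text to continue, not a question to answer.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # Typical output rambles onward from the prompt instead of replying "Paris."
    out = generator("What is the capital of France?", max_new_tokens=40)
    print(out[0]["generated_text"])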

It took a lot of fine-tuning and data curation to get ChatGPT to its current chat-like interface.
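
Roughly, that chat behaviour comes from supervised fine-tuning on curated prompt/response pairs. A hedged sketch of just the data-formatting step, with a made-up template and example pairs (not the format any particular model actually uses):

    # Hypothetical instruction/response pairs for illustration only.
    pairs = [
        {"prompt": "What is the capital of France?",
         "response": "The capital of France is Paris."},
        {"prompt": "Summarise photosynthesis in one sentence.",
         "response": "Plants turn light, water and CO2 into sugars and oxygen."},
    ]

    # Each pair becomes one training string; the model is then trained with the
    # ordinary next-token objective on these strings, which is what nudges it
    # toward answering rather than free-running completion.
    def to_training_text(pair):
        return f"### Question:\n{pair['prompt']}\n### Answer:\n{pair['response']}"

    training_texts = [to_training_text(p) for p in pairs]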

But this is not the only alignment you can do. The original Transformer paper was about machine translation: the model turned the input into the translated text, and once it finished, it was done.

We could choose to have the model do something else instead, say, translate the prompt into 5 languages at once rather than one, just as an example. That would be another alignment decision.
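
Mechanically that is just a different choice of training targets: the same next-token training loop, pointed at different data. A hypothetical example record for such a multi-language objective (languages, field names and template all invented for illustration):

    # One hypothetical training example for a "translate into 5 languages" model.
    example = {
        "source": "The cat sat on the mat.",
        "targets": {
            "fr": "Le chat s'est assis sur le tapis.",
            "de": "Die Katze saß auf der Matte.",
            "es": "El gato se sentó en la alfombra.",
            "it": "Il gatto si è seduto sul tappeto.",
            "pt": "O gato sentou-se no tapete.",
        },
    }

    # Concatenate source and all targets into one training string.
    target_text = "\n".join(f"[{lang}] {text}" for lang, text in example["targets"].items())
    training_text = f"[src] {example['source']}\n{target_text}"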

There is nothing political, no selection bias, nothing like that inherent to the original definition; it's only recently that "alignment" has morphed into this "align with human morals" concept.

Even in Andrej Karpathy's build-your-own-GPT YouTube video, which gets talked about a lot around here, he uses the phrase this way. At the end of the video you are left with a GPT, but not a question-and-response model, and he says it would need to be aligned to answer questions like ChatGPT does.


