This article is an absolutely fantastic introduction to GPT models - I think the clearest I've seen anywhere, at least for the first section that talks about generating text and sampling.
Then it got to the training section, which starts "We train a GPT like any other neural network, using gradient descent with respect to some loss function".
It's still good from that point on, but it's not as valuable as a beginner's introduction.
I think FastAI lesson 3 in "Practical Deep Learning for Coders" has one of the most intuitive buildups of gradient descent and loss that I've seen: Lecture [1], Book Chapter [2].
It doesn't go into the math but I don't think that's a bad thing for beginners.
If you want mathematical, 3blue1brown has a great series of videos [3] on the topic.
For those curious about writing a "gradient descent with respect to some loss function" starting from an empty .py file (and a numpy import, sure), can't recommend enough Harrison "sentdex" Kinsley's videos/book Neural Networks from Scratch in Python [1].
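To give a flavor of where that ends up (this is not code from the book, just a made-up toy in the same spirit): a tiny two-layer network learning XOR, trained by gradient descent on a mean-squared-error loss, with nothing but that numpy import.

    import numpy as np

    rng = np.random.default_rng(0)

    # XOR inputs and targets
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # parameters of a 2-8-1 network
    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(5000):
        # forward pass
        h = sigmoid(X @ W1 + b1)      # hidden activations
        p = sigmoid(h @ W2 + b2)      # predictions in (0, 1)
        loss = np.mean((p - y) ** 2)  # mean squared error

        # backward pass (chain rule written out by hand)
        dp = 2 * (p - y) / len(X)
        dz2 = dp * p * (1 - p)
        dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
        dz1 = (dz2 @ W2.T) * h * (1 - h)
        dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

        # gradient descent: step against the gradient of the loss
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print(p.round(3))  # should end up near [[0], [1], [1], [0]]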
The beginning of Andrew Ng's machine learning course on Coursera does that too; it touches on the math a bit and explains how to imagine gradient descent in 3D space.
I didn't do the full course, but after the first few chapters I was able to write a very basic implementation in raw Python (emphasis here on "very basic").
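For anyone curious what that "very basic" raw-Python version can look like (my own throwaway sketch, not anything from the course): fit y = w*x + b by gradient descent on a mean-squared-error loss, with the gradients worked out by hand. With two parameters plus the loss you can picture the whole thing as a 3D surface and watch each step walk downhill.

    # data generated by y = 2x + 1
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [3.0, 5.0, 7.0, 9.0]

    w, b = 0.0, 0.0
    lr = 0.01

    for step in range(5000):
        # gradients of the mean squared error, derived by hand
        dw = db = 0.0
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            dw += 2 * err * x / len(xs)
            db += 2 * err / len(xs)
        # step downhill on the loss surface
        w -= lr * dw
        b -= lr * db

    print(w, b)  # heads towards 2 and 1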
There is so much material on deep learning basics these days that I think we can finally skip reintroducing gradient descent in every tutorial, can't we?
The idea of "find the direction in which the function decreases most quickly and go that way" is really deep, and its implementation via this cutting-edge mathematical concept of a "gradient" deserves a whole section as well.
On one hand, you can explain it to a 5-year-old: Go in the direction which improves things.
On the other hand, we have more than a half-century of research on sophisticated mathematical methods for doing it well.
The latter isn't really helpful for beginners, and the former is easy to explain. Beginners can't use the sophisticated algorithms in either case, so you can go with something as dumb as "tweak the parameters in every direction and step wherever the loss improves most" -- see the sketch below. It will work fine for toy examples.
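A throwaway sketch of that "tweak in all directions" idea, estimating the gradient numerically by nudging each parameter and checking what the loss does (the two-parameter bowl here is just a made-up stand-in for a real loss):

    import numpy as np

    def loss(params):
        # any differentiable function of the parameters will do
        x, y = params
        return (x - 3.0) ** 2 + (y + 1.0) ** 2

    params = np.array([0.0, 0.0])
    eps, lr = 1e-5, 0.1

    for step in range(200):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            tweaked = params.copy()
            tweaked[i] += eps
            # how much does the loss change if we nudge just this parameter?
            grad[i] = (loss(tweaked) - loss(params)) / eps
        params -= lr * grad  # go in the direction that decreases the loss most

    print(params)  # converges towards [3, -1]

Real training replaces the numerical tweaking with backpropagation, which gives the same direction exactly and far more cheaply.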
This one doesn't use any frameworks. The next book by the author (on GANs) uses PyTorch. The math is relatively easy to follow I think.
Andrew Ng's courses on Coursera can be viewed for free and have slightly more rigorous math, but it's still okay.
You don't have to understand every mathematical detail, same as you don't need every mathematical detail for 3d graphics. But knowing the basics should be good I think!
That concept is not the easiest to describe succinctly inside a file like this (or, while we're at it, in a Hacker News post like this!), I think, especially as there are various levels of 'beginner' to take into account here. This is considered a very entry-level concept (not as an insult -- simply from an information categorization/tagging perspective :D :)), and I think there might be others who would consider it noise if it were spelled out in the code or in the comments/blogpost.
After all, there was a disclaimer up front in the blogpost that you might have missed! "This post assumes familiarity with Python, NumPy, and some basic experience training neural networks." So it is in there! But in the firehose of info we all get, it is easy to miss.
However, I'm here to help! Thankfully the concept is not too terribly difficult, I believe.
Effectively, the loss function compresses the task we've described with the labels in our training dataset into our neural network. Ideally, that includes 'all' the information the network needs to perform the task well, at least according to the data we have. If you'd like to know more about the specifics, I'd refer you to the original Shannon-Weaver work on information theory -- Weaver's introduction to the topic is in plain English and accessible to nearly anyone off the street with enough time and energy to think through the concepts. Very good stuff! An initial read-through should take no more than half an hour to an hour, and should change the way you think about the world if you've not been introduced to the topic before. You can read a university-hosted scan here: https://raley.english.ucsb.edu/wp-content/Engl800/Shannon-We...
Using some of the concepts of Shannon's theory, we can see that anything minimizing an information-theoretic loss function should also learn the prerequisites of the task at hand (features that identify xyz, features that move information about xyz from place A to place B in the network, and so on). And in this case, even though it appears we have no labels -- we certainly do! We are training on predicting the _next words_ in a sequence, so humans have already created a very, _very_ richly labeled dataset for free. That makes getting the data much easier and the bar to entry for high performance very low -- especially if we want to pivot and 'fine-tune' to other tasks. To learn the task of predicting the next word, the network has to learn tons of other sub-tasks that overlap with the tasks we actually want to perform. And because of the nature of spoken/written language, to perform incredibly well it sometimes has to learn those alternative tasks well enough that little-to-no fine-tuning on human-labeled data for the 'secondary' task (for example, question answering) is required! Very cool stuff.
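A rough sketch of the "labels for free" point (the token ids and the model outputs here are stand-ins I made up, not anything from the post): the targets are just the same sequence shifted by one, and the loss is the cross-entropy of the probabilities the model assigns to each true next token.

    import numpy as np

    tokens = np.array([5, 11, 7, 2, 9])        # some encoded sentence
    inputs, targets = tokens[:-1], tokens[1:]  # the shift creates the "labels"

    vocab_size = 16
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(len(inputs), vocab_size))  # pretend model output

    # softmax, then negative log-probability of each true next token
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    loss = -np.mean(np.log(probs[np.arange(len(targets)), targets]))
    print(loss)  # training is gradient descent on this number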
This is a very rough introduction; I have not condensed it as much as it could be, and certainly some of it is wordier than it needs to be. But it's an internet comment, so this is probably the most I should put into it for now. I hope this helps set you forward a bit on your journey of neural network explanation! :D :D <3 <3 :)))))))))) :fireworks:
For reference, I'm very interested in what I refer to as Kolmogorov-minimal explanations (see Wikipedia's 'Kolmogorov complexity' once you chew through some of that paper, if you're interested! I am still very much a student of it, but it is a fun area). In fact (though this repo performs several functions), I made https://github.com/tysam-code/hlb-CIFAR10 as beginner-friendly as possible. One does have to make some decisions to keep verbosity down, and I assume a very basic understanding of what's happening in neural networks there too.
I have yet to find a good go-to conceptual intro to neural networks (I started with Hinton -- love the man, but his material is extremely mathematically technical for a foundation! D:). Karpathy might have a really good one; I think I saw a zero-to-hero course from him a little while back that seemed really good.
Andrej (practically) got me into deep learning via some of his earlier work, and I really love basically everything I've seen the man put out. I skimmed the first video from this series and it seems pretty darn good; I trust his content. You should take a look! (Github and first video: https://github.com/karpathy/nn-zero-to-hero, https://youtu.be/VMj-3S1tku0)
For reference, he's the person who's made a lot of cool things recently, including his own minimal GPT (https://github.com/karpathy/minGPT) and the much smaller version of it (https://github.com/karpathy/nanoGPT). But of course, since we are in this blog post, I would refer you to this 60-line numpy GPT first (A. to keep us on track, B. because I skimmed it and it seemed very helpful!). I'd recommend taking a look at outside sources if you're feeling particularly voracious in expanding your knowledge here.
I hope this helps give you a solid introduction to the basics of this concept, and/or for anyone else reading this, feel free to let me know if you have any technically (or-otherwise) appropriate questions here, many thanks and much love! <3 <3 <3 <3 :DDDDDDDD :)))))))) :)))) :))))
Here is an introduction to gradient descent with back propagation, for Ruby, based on Andrej Karpathy's micrograd: https://github.com/rickhull/backprop
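For anyone who wants the flavor without opening the repo, the micrograd pattern looks roughly like this in Python (my own sketch of the idea, not the Ruby port's API): every value remembers how it was computed, so backward() can run the chain rule over the whole expression.

    class Value:
        def __init__(self, data, children=()):
            self.data = data
            self.grad = 0.0
            self._children = children
            self._backward = lambda: None

        def __add__(self, other):
            out = Value(self.data + other.data, (self, other))
            def _backward():
                self.grad += out.grad
                other.grad += out.grad
            out._backward = _backward
            return out

        def __mul__(self, other):
            out = Value(self.data * other.data, (self, other))
            def _backward():
                self.grad += other.data * out.grad
                other.grad += self.data * out.grad
            out._backward = _backward
            return out

        def backward(self):
            # visit the graph in topological order, then apply the chain rule
            order, seen = [], set()
            def visit(v):
                if v not in seen:
                    seen.add(v)
                    for c in v._children:
                        visit(c)
                    order.append(v)
            visit(self)
            self.grad = 1.0
            for v in reversed(order):
                v._backward()

    # d(loss)/dw for loss = w*x + b should be x, i.e. 2.0
    w, x, b = Value(3.0), Value(2.0), Value(1.0)
    loss = w * x + b
    loss.backward()
    print(w.grad)  # 2.0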