I don't quite have the background to read this article as-is, could anyone recommend an introduction to STARKs? My google search results are full of cryptocurrency blogspam.
That would be Serge Lang? Haven't seen his or the Shilov linear algebra texts.
Later, after Nering and Halmos, which I read on my own, carefully, one word at a time, chewing on all the proofs, working nearly all the exercises, trying to understand the subject more broadly, I got pushed into an "advanced" course in linear algebra: Before the class started, to brush up a little, I wrote out 80 pages of the subject. During the course I blew away all the other students and did it again on the linear algebra part of the qualifying exams. The course was a waste, but maybe I "proved myself". There was a sense that the prof resented me, that is, wanted to see me struggle, expected that I would struggle and suffer like the other students. Silly nonsense.
But, beyond that silly course, I ran off in some other directions:
(1) Took a course in optimization, which covered some graph algorithms that ran in n log(n) instead of n^2 -- there was some linear algebra there.
(2) Then there was a lot of linear programming -- mostly linear algebra.
(3) Then there was nonlinear programming, e.g., the Kuhn-Tucker conditions, where linear algebra was the local linearization of the nonlinear problems.
(4) For more, there was some multi-objective linear programming.
(5) Some chemists had some questions about group theory and quantum mechanics, so I wrote a paper on group representations, nearly all linear algebra.
(6) And got to use some of Nering's treatment of finite fields and some abstract algebra with finite fields, groups, etc. in a course in coding theory.
(7) Outside of school, did some work with numerical linear algebra and iterative solution methods -- e.g., covered the M. Newman work applying some number theory and the Chinese remainder theorem to finite-precision arithmetic for numerically exact matrix inversion. That exact matrix inversion is cute, and maybe has some really important applications, but the motivation at the time was ill-conditioned linear problems for the normal equations of regression analysis where the variables were nearly dependent, and there the exactness is polishing a ...: The better response is to return to the statistics problem and see what can be done there, or to accept and work with the ill-conditioned matrix.
(8) I fell into the big pool of the FFT (fast Fourier transform) when it was a hot topic. The FFT can be regarded as mostly just more linear algebra.
(9) There are the connections with Markov processes, and once I took a WWII paper on search and encounter rates at sea and used it with some Markov and Poisson math and some linear algebra to do a Monte Carlo simulation of war at sea, in particular to see how long some submarines would last.
But, all that aside, after as much as Halmos and/or Nering, for more pure math, just take linear algebra as baby functional analysis, e.g., Banach space and, especially, Hilbert space. That's what Halmos wrote his book for: a finite-dimensional version of von Neumann's work -- Halmos was a post-doc under von Neumann at the time. E.g., you get to address the issue of local compactness, i.e., whether the usual closed and bounded is enough for compactness, where every infinite subset has a limit point.
Point: Once you have worked carefully through Halmos, Nering, or anything else competitive, MOVE ON to applications, deeper theory, etc.
Warning: If you have a "job" and are working for someone, some boss or manager, know more math than they do, and use it for some work that is clearly really good for the work of the manager's group, WATCH OUT! The manager may get really, seriously torqued and work to get you fired. Similarly, a resume that shows some good work in pure/applied math, say, along with a lot in computing, can be career poison: E.g., (1) -- (9) above ended my career. With an honest resume, I couldn't get hired even to sweep floors, mow grass, etc., LITERALLY. The costs were high. The math background was, I would guess, worse than a felony conviction. So, I'm doing a startup, a Web site. Buried deeply, there is some original pure/applied math, crucial, but it won't be visible to the users or advertisers. The math might be quite valuable, but basically I just have to keep it and its applications a secret.
Yes, I meant Serge Lang's books. I think his linear algebra texts are pretty damn good and elegant. Shilov as well, in his own Soviet math style. Check them out. I miss this simplicity and elegance in modern texts.
Halmos is fantastic, but the typesetting is too antique and cramped. I would pay top dollar for a newer version written in TeX, mimicking the style of the mid-90s Springer Undergraduate Mathematics Series.
> But this also means that the coordinate must be sampled from a set large enough that the attacker cannot guess it by random chance. If the modulus is near 2^256, this is clearly the case. But with a modulus of 2^64 - 2^32 + 1, we're not quite there, and if we drop to 2^31 - 1, it's definitely not the case. Trying to fake a proof two billion times until one gets lucky is absolutely within the range of an attacker's capabilities.
> To stop this, we sample r from an extension field. For example, you can define y where y^3 = 5, and take combinations of 1, y and y^2.
This reads like trying to increase entropy without adding entropy. Given the analogy of brute-forcing a low-entropy preimage of a hash, concatenating the secret preimage with itself, or adding capitalization on the second occurrence, etc., does not increase entropy; it's just a constant factor in computational complexity which both attacker and defender suffer.
I am probably misunderstanding what's written, but I suspect it's due to the unclear exposition...
My understanding is that you do your math over some field F.
But then when you choose a random point to test your polynomial, you randomly select from G = F[5^(1/3)], an extension of the original field. And test your polynomial using arithmetic in that larger field.
The increased entropy happens when you select at random from the extended field: there are more elements in G than in F, so an attacker has a lower chance of guessing your random value.
Exactly - we sample uniformly from an extension field, so entropy is proportional to the extension field size. The base field is almost irrelevant from a security perspective, since things like the Schwartz-Zippel lemma just care about the size of the field we sample from, even if the polynomial in question is (also) a polynomial over some smaller subfield.
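To make that concrete, here is a rough Python sketch (mine, not from the article) of the cubic extension mentioned above, F_p[y]/(y^3 - 5). The choice of p = 2^31 - 1 and the assumption that y^3 - 5 is irreducible over that base field are just for illustration; the article's actual parameters may differ.

import random

p = 2**31 - 1   # example base field modulus; assume 5 is not a cube mod p, so y^3 - 5 is irreducible

def ext_mul(a, b):
    # Multiply a = (a0, a1, a2) and b = (b0, b1, b2), each representing
    # a0 + a1*y + a2*y^2, reducing with the rule y^3 = 5 (so y^4 = 5*y).
    a0, a1, a2 = a
    b0, b1, b2 = b
    c0 = (a0 * b0 + 5 * (a1 * b2 + a2 * b1)) % p
    c1 = (a0 * b1 + a1 * b0 + 5 * (a2 * b2)) % p
    c2 = (a0 * b2 + a1 * b1 + a2 * b0) % p
    return (c0, c1, c2)

# The random challenge r is drawn from all p^3 extension elements, so a cheating
# prover must guess among roughly (2^31)^3 = 2^93 values instead of ~2^31.
r = tuple(random.randrange(p) for _ in range(3))

So the base field only provides a constant-factor speedup for arithmetic; the soundness comes from the size of the set the challenge is sampled from.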
That's a square root in real arithmetic. In finite fields you don't operate over the real numbers, but over the numbers within the field.
An example of a finite field is the numbers modulo a prime, like p = 2^127 - 1.
Now, please find the square root of x = 113338949109682836687814795709948365013 mod 2^127 - 1. That is, find some number y such that y * y mod (2^127 - 1) = x.
>>> x = 113338949109682836687814795709948365013
>>> n = 1
>>> y = 80490928931346377909947075573303248723 + 170141183460469231731687303715884105727 * n
>>> y**2 % (2**127 - 1) == x
True
EDIT: Looks like we got lucky here since 2^127 - 1 happens to be prime.
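For anyone curious how such a y can be found in the first place, here is a small sketch (my method, not necessarily how the parent computed it): since p = 2^127 - 1 satisfies p % 4 == 3, a square root of any quadratic residue x is given by x^((p+1)/4) mod p.

p = 2**127 - 1          # p % 4 == 3, so the exponentiation shortcut below applies
x = 113338949109682836687814795709948365013

y = pow(x, (p + 1) // 4, p)   # candidate square root
# This check passes provided x really is a square mod p, which the parent's
# example indicates; for a non-residue, pow(...) returns a value that is not a root.
assert y * y % p == x

For a general prime modulus (p % 4 == 1) you'd need something like Tonelli-Shanks instead, but either way square roots in a prime field cost only a handful of modular exponentiations.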
Expensive in finite field arithmetic means much more expensive than addition and multiplication. Anyway, the sqrt here is not in terms of a specific operation, but the asymptotic cost of a particular algorithm (as in, algorithm X takes sqrt N steps)
> Expensive in finite field arithmetic means much more expensive than addition and multiplication.
Thanks!
> Anyway, the sqrt here is not in terms of a specific operation, but the asymptotic cost of a particular algorithm (as in, algorithm X takes sqrt N steps)
Yes, I am aware. I wanted to understand the side-tracked question anyway.
Right - in case it's not clear to all, the square root metric Vitalik mentions is about measuring both proof size and verifier complexity. Neither the prover nor the verifier is doing any square root computations.