Well, of course, because that statement is deeply incorrect—the described mistake would cause the most recent reading to have more weight.
If you have a set of readings, say, [0.1, 0.02, 0.3, 0.05, 0.08], normally when you average them you would get 0.11, the mean of the set (the sum is 0.55).
Calculating the average by "averaging the new reading with the previous average" would mean (new + old) / 2 every time. That means that for each reading after the first, your "averages" would be: [0.06, 0.18, 0.115, 0.0975].
If we add a new reading of 0.01 to each of these, in the first case we would get an average of about 0.093, and in the second case about 0.054. As you can see, the second case biases the result much further in favor of the new reading (which, in this case, is very low compared to the existing readings).
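To make the bias concrete, here is a minimal Python sketch (variable names are mine, not from the thread) comparing the true mean against the buggy "average the new reading with the previous average" scheme:

```python
# Compare a true mean with the buggy running-average scheme,
# where each new reading is averaged with the previous "average".

readings = [0.1, 0.02, 0.3, 0.05, 0.08, 0.01]

# True mean over all readings seen so far.
true_mean = sum(readings) / len(readings)  # ≈ 0.093

# Buggy scheme: the newest reading always carries weight 1/2,
# so old readings decay geometrically instead of counting equally.
buggy = readings[0]
for r in readings[1:]:
    buggy = (buggy + r) / 2  # ends up ≈ 0.054

print(true_mean, buggy)
```

The buggy value lands much closer to the final reading (0.01) than the true mean does, which is the skew described above.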
yeah, the language is just ambiguous enough that you can't know for certain. first vs. newest.
the issue is that, taken as a whole, the quantization of the samples into 8 bins is a much bigger problem, along with the lack of a hardware watchdog or hardware malfunction alarms, and the poor testing methodology.
I find the following asymmetry interesting: if you pick a random spot and start zooming out, you will pass through interiors of both dead and living cells in some ratio. But if you start zooming in, pretty much all of the cells you pass through will be dead.
Yes, this is inconsistent. When you zoom in 10 levels and then zoom out 10 levels again, pretty much all cells you see will be dead. But if you continue to zoom out further than 10 levels, you will see some mix of live and dead cells again.
Zooming in is easily defined: when you zoom into an on-state OTCA metapixel, you can check which pixels that make up the metapixel are on and which are off [1]. Same deal for zooming into an off-state OTCA metapixel. The reason you are seeing mostly dead pixels is that most pixels inside an OTCA metapixel are off, so the chance of hitting a pixel that is on is slim.
Zooming out is more tricky: when zooming out of an on-state pixel, that pixel could be at many possible positions inside an on-state or off-state OTCA metapixel. In their blog post the author says that they randomly select from the possible options in a way that adds "diversity" [2]. I think this could be improved to align with the statistics of zooming in.
Except that even in dead space, when you zoom in you find the borders of all cells, and each of those borders is just as alive as any other live cell. I kept feeling like, if I zoomed into either an always-on cell or an always-off cell, I would enter something noticeably more static, but actually each one was just as alive as the rest of the universe around it.
For low dimensions, it might be useful to look at the convex properties of the Pareto front. If a point P is on the Pareto front, it is not in the interior of the convex hull of the undominated points. In two dimensions, one can compute the convex hull of N points in O(N log N) time. This typically allows for faster Pareto front computation, though not in the worst case.
Yeah I'm not sure this works. As far as I can tell the Pareto front doesn't have to be convex and can therefore contain points that are in the interior of the convex hull.
Of course, if you accept convex combinations of options, then the Pareto front is part of the convex hull.
I bet they're referring to the objective space, not the input space.
It's easy to come up with pathological examples where that strategy doesn't help. E.g., if your objectives lie on the boundary of the unit square, then the Pareto front will have at most two points in it (near a corner -- which corner depends on whether you're looking at maxima or minima in each coordinate), but the entire set will be in the convex hull.
For a uniform sample it's more beneficial. Most points aren't in the hull (as the cardinality increases the hull density drops to 0), and in 2D the Pareto Front will be around 25% of the hull.
I'm not sure you can compute the hull any more quickly than this library in the first place, at least asymptotically, but there already exist some highly optimized solvers, so that might tip the scales a bit.
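For reference, the direct O(N log N) baseline the thread is comparing against, computing a 2D Pareto front by sorting alone, can be sketched like this (assuming maximization in both coordinates; the function name and sample data are illustrative, not from any library mentioned above):

```python
# Standard 2D Pareto front via a single sort + sweep, O(N log N).
# Maximizing both coordinates: a point is kept iff no already-kept
# point (with larger-or-equal x) has a larger-or-equal y.

def pareto_front_2d(points):
    best_y = float("-inf")
    front = []
    # Sort by x descending; break x-ties by y descending so the
    # dominated duplicate-x points are skipped by the sweep.
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:  # not dominated by anything seen so far
            front.append((x, y))
            best_y = y
    return front

pts = [(1, 5), (2, 4), (3, 1), (2, 2), (4, 0.5)]
print(pareto_front_2d(pts))  # [(4, 0.5), (3, 1), (2, 4), (1, 5)]
```

Since this is already O(N log N), a convex-hull prefilter can only change constants, not the asymptotics, which is the point made above.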
The author claims the same thing is true of software, but not with software itself.
hncynic 1 minute ago
This is great article. When I read "Taught in 30 Minutes" I was amazed at how the author seems to think that it was a "fantastic story" that was written in an unrealistic way - that the author has been able to write his book on his own (yet untrue). The author doesn't seem to be aware of the actual circumstances behind the creation of a book.
hncynic 1 minute ago
Great post. This is really the kind of article I don't think the author talks about in his blog post.
TL;DR - the world is everywhere.
I was very disappointed with the title, but here we are.
* web ui: https://openprocessing.org/sketch/126042/
* Numberphile video: https://youtu.be/lFQGSGsXbXE