Nice, but it's too late for me: I needed a different API for future use in my custom sprintf, so I made mulle-dtostr (https://github.com/mulle-core/mulle-dtostr). In a quick benchmark on my machine (AMD) it even came out quite a bit faster, but I was only checking that it didn't regress too badly and didn't look at it more closely.
Can anyone comment on the "You can exceed the amperage specs" part? I made myself a little case study using 4x1W panels, running them in series and in parallel. I got the distinct impression that running panels in parallel is better in not-so-bright conditions than a single panel, whereas a series configuration made things worse. Since the sun doesn't shine that much here, running in parallel seems preferable, but it would slightly exceed the amperage spec of a few converters I sampled.
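For rough numbers: assuming a 1W panel runs at about 5V and 0.2A at its maximum power point (made-up but plausible figures), the two configurations look like this:

```
4 in series:    20V @ 0.2A   (voltages add, current stays that of one panel)
4 in parallel:   5V @ 0.8A   (voltage stays, currents add)
```

So it's the parallel arrangement whose combined current can exceed a converter's input rating.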
It would be worth looking up the I-V curves of solar modules on a datasheet. A key factor is that the maximum power point of a solar module (for a given set of environmental conditions) depends strongly on the voltage it is running at, whereas the current is fairly constant for a given light level, up to a certain voltage. So to get the maximum power out, the resistance of the load needs to be matched to hit that maximum power voltage (V_MP).
This is what MPPT (maximum power point tracking) controllers do, since this maximum power setpoint shifts as environmental conditions change.
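To make that concrete, here is a minimal sketch of the perturb-and-observe strategy many MPPT controllers use; read_panel_voltage, read_panel_current and adjust_duty are hypothetical hardware hooks, not any particular controller's API:

```c
/* Hypothetical hardware hooks: stand-ins for whatever the
   converter/ADC actually exposes. */
extern double read_panel_voltage(void);   /* volts */
extern double read_panel_current(void);   /* amps  */
extern void   adjust_duty(double delta);  /* nudge the converter duty cycle */

/* One perturb-and-observe step: keep nudging the operating point in
   whichever direction increased power, reverse when power drops.
   The operating voltage thus oscillates around V_MP. */
void mppt_step(void)
{
    static double step = 0.01;
    static double last_power = 0.0;
    double power;

    power = read_panel_voltage() * read_panel_current();
    if (power < last_power)
        step = -step;            /* overshot V_MP, reverse direction */
    adjust_duty(step);
    last_power = power;
}
```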
The problem is usually shading, not so much the capacity of the wiring. That said, if you try hard enough you can exceed the rating of the final stretch of wiring to the inverter (or of the little pigtail in the inverter, if you connect all of the strings at max voltage/amperage). Shading can really upset the current flow in a set of parallel/series connected panels and can cause local hotspots due to overcurrent. Usually inverters are pretty smart about this: they'll detect that you are pushing things further than is responsible and simply switch off. I've purposefully triggered such conditions to ensure my installation is safe, and I was pretty impressed with how utterly painless fault detection, isolation and recovery are in modern inverters. I also had an older one, and there it definitely wasn't all that friendly, to the point that the whole thing had to be hard-disconnected from the grid before it would work again.
People should get on the bike and start cycling for real (like putting out 200W for an hour at a time). Their perception of glucose will flip 100%. Suddenly glucose is the fuel your body can't cram enough of in. Which means white bread in the morning, croissants (and Coca-Cola with sugar) will be your friends. Quite frankly, a much better lifestyle.
I have been using https://github.com/marty1885/landlock-unveil on Linux for about two years now, on my stock Ubuntu kernel. I am not sure why this hasn't become more popular. It's also rootless sandboxing (and it does `unveil` like OpenBSD, I guess). I use it to confine builds of third-party software, with success.
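For anyone unfamiliar with the OpenBSD side of this: a minimal sketch of the unveil(2) API that landlock-unveil emulates on Linux (these are OpenBSD's C calls, not landlock-unveil's own interface):

```c
#include <unistd.h>
#include <err.h>

int main(void)
{
    /* Restrict the filesystem view: only these paths stay visible,
       with the given permissions (r = read, w = write, c = create). */
    if (unveil("/usr/include", "r") == -1)
        err(1, "unveil");
    if (unveil("/tmp/build", "rwc") == -1)
        err(1, "unveil");

    /* Lock it down: no further unveil() calls are allowed. */
    if (unveil(NULL, NULL) == -1)
        err(1, "unveil");

    /* ... run the untrusted build here ... */
    return 0;
}
```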
I am using fnv-1a to hash Objective-C method selectors, which are generally just identifier characters plus zero or more ':'. At the time of my research, fnv-1a had the fewest collisions over my set of "real life" selectors. I think it could be worthwhile, at some point, to try out other constants for possibly even fewer collisions. Is your list of good primes available? (And maybe also those that are not quite perfect?)
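For reference, the 32-bit FNV-1a core is tiny, which is part of its appeal here; the constants below are the standard published ones, and the selector in the comment is just an illustrative example:

```c
#include <stdint.h>

/* 32-bit FNV-1a with the standard offset basis and prime. */
static uint32_t fnv1a_32(const char *s)
{
    uint32_t hash = 2166136261u;        /* FNV offset basis */

    while (*s)
    {
        hash ^= (unsigned char) *s++;
        hash *= 16777619u;              /* FNV 32-bit prime */
    }
    return hash;
}

/* e.g. fnv1a_32("initWithObjects:count:") */
```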
Everything is in the source code. I highly doubt any of the good hash functions listed in smhasher3 (i.e. all tests passed) would collide over identifiers.
So they should all have zero collisions, meaning there's no 'least' among the good-quality ones: they're all equally collisionless (they differ in other tests).
Sounds like an interesting project. What’s its purpose?
Cool. I forgot to mention that I am truncating the hash down to 32 bits, hoping to generate tighter CPU instructions. At that few bits, collisions are still rare enough, but they are a concern.
Now, my understanding of the choice of prime is that you are "weighting" the input bits and the computed bits that will form the hash. In the case of identifiers, it's very likely that bit 7 of the input is always 0, and maybe bit 4 is statistically more likely to be 1 by some margin; the other input bits would have some entropy as well. I would expect that certain (imperfect) primes would then help me make better use of the 32-bit space, and therefore lower the risk of a collision, for my Objective-C runtime.
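One way to test that would be a brute-force sweep: hash the real selector corpus with a few candidate multipliers and count the 32-bit collisions. A sketch (fnv1a_32_with is a hypothetical variant of fnv-1a parameterized on the multiplier):

```c
#include <stdint.h>
#include <stdlib.h>

/* FNV-1a variant parameterized on the multiplier, so candidate
   primes can be compared over the same corpus. */
static uint32_t fnv1a_32_with(const char *s, uint32_t prime)
{
    uint32_t hash = 2166136261u;

    while (*s)
    {
        hash ^= (unsigned char) *s++;
        hash *= prime;
    }
    return hash;
}

static int cmp_u32(const void *a, const void *b)
{
    uint32_t x = *(const uint32_t *) a;
    uint32_t y = *(const uint32_t *) b;

    return (x > y) - (x < y);
}

/* Sort the hashes, then count adjacent duplicates. */
static size_t count_collisions(const char **selectors, size_t n, uint32_t prime)
{
    uint32_t *hashes = malloc(n * sizeof(*hashes));
    size_t i;
    size_t collisions = 0;

    if (!hashes)
        return 0;   /* sketch: no real error handling */
    for (i = 0; i < n; i++)
        hashes[i] = fnv1a_32_with(selectors[i], prime);
    qsort(hashes, n, sizeof(*hashes), cmp_u32);
    for (i = 1; i < n; i++)
        collisions += (hashes[i] == hashes[i - 1]);
    free(hashes);
    return collisions;
}
```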
Ah, that's interesting. At 32 bits, yes, you would get some collisions even from good hashes, just statistically.
Also, now I understand your constraints. Very interesting: you are designing a custom hash function for this specific domain of keys with specific probabilistic properties, and you are thinking there might be some prime you could multiply by that would ideally fan those keys out evenly over the space?
mulle-objc looks fascinating: a fast, portable Objective-C runtime written 100% in C11. I encourage you to post a Show HN; I'm sure people here would like it.
Hahaha! :) Good on you. Nothing to stop you posting again :)
Truly, I have much experience with posting Show HNs. There's very little quality difference between something you post that gets 3 points and something that gets 300 points. A lot of it depends on timing, the audience that happens to be around, and the other posts at the time.
Repost to get a better idea of the true interest in your work. I encourage you to do it again!!! :)
Spooky also has some good results on common identifiers.
But fnv-1a is in a completely different league: it's only recommended for hash tables that are protected by security measures other than the hash function itself. It's a typical hybrid, but not universal. umash would be the perfect hybrid: insecure, pretty fast, passes all tests, and universal.
I think, for sport, I could wrap all the various mulle-sde and mulle-bashfunction files back into one and make it > 100K lines. It wouldn't even be cheating, because over time it naturally fractalized from a monolithic script into multiple sub-projects with sub-components.
I tried it on two of my git repositories, just to see if it could produce a decent commit summary. I was very pleasantly surprised with how good the result was.
I was unpleasantly surprised that this already cost me 175 credits. If I extrapolate that over my ~100 repositories (that's roughly 87.5 credits per repository), it would already put me at 8750 credits, just to have it write a commit message on release day. That is way outside the free tier and would basically eat up most of the $99 I would have to spend on top as well. My subscription price for Cody is $8 a month. The pricing seems just way off.
If you pull the HTML file off the above link, you will get a directory listing of all of the POSIX.2 utilities (tar, sed, awk, etc.). The options that you find in the specifications are maximally portable.