The issue with auto-translated Reddit pages unfortunately also happens with Kagi. I am not sure whether this is because Kagi uses Google's search index or because Reddit publishes the translated title as metadata.
I think at least for Google there are some browser extensions that can remove these results.
The Reddit issue is also something that really annoys me, and I wish Kagi would find some way to counter it. Whenever I search for administrative matters, I do so in one of three languages (German, French, or English), depending on the context in which the issue arises. And I would really prefer to only get answers that are relevant to that country. It's simply not useful for me to find answers about social security issues in the US when I'm searching for them in French.
Dotted notation would not work because the keys in a dict can also contain dots. I am not terribly familiar with them, but there is a concept called `lenses` from functional programming that should allow you to access nested structures, and I am pretty sure there is at least one Python library that implements it.
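To illustrate why an explicit key path avoids the dots-in-keys ambiguity, here is a minimal sketch (not the API of any actual lens library; `get_in` is a made-up helper name):

```python
from functools import reduce

def get_in(data, path, default=None):
    """Walk a nested dict using an explicit list of keys.

    Because the path is a list rather than a dotted string,
    keys that themselves contain dots stay unambiguous.
    """
    try:
        return reduce(lambda acc, key: acc[key], path, data)
    except (KeyError, TypeError):
        return default

# A key containing a dot would break "config.server.name.port"-style access:
config = {"server.name": {"host": "localhost", "port": 8080}}
print(get_in(config, ["server.name", "port"]))  # 8080
```

A real lens library additionally composes these accessors and supports immutable updates, but the key-path idea is the same.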
My last systems programming class was a few years ago and I am a bit rusty, so I have some questions:
1. Looking at the code in https://github.com/elast0ny/raw_sync-rs/blob/master/src/even... it looks like we are using a userspace spinlock. Aren't these really bad because they mess with the process scheduler and might unnecessarily trigger the scaling governor to increase the CPU frequency? I think at least on Linux one could use a semaphore to inform the consumer that new data has been produced.
2. What kind of memory-ordering guarantees do we have on modern architectures such as x86-64 and ARM? If the producer does two writes (I imagine first the data, then the release of the lock), is it guaranteed that when the consumer reads the second value, the first value is also visible?
I'm not sure I fully understand what you mean. Do you assume we implemented the same approach to shared-memory communication as described in the blog post?
If that’s the case, I want to reassure you that we don’t use locks. Quite the contrary: we use lock-free[1] algorithms to implement the queues. We cannot use locks for the reason you mentioned, and also because an application might die while holding the lock. That would result in a deadlock, which is unacceptable in a safety-critical environment. Btw, there are already cars out there using a predecessor of iceoryx to distribute camera data in an ECU.
For hard real-time systems we have a wait-free queue, which gives even stronger guarantees. Lock-free algorithms often have a CAS (compare-and-swap) loop, which in theory can lead to starvation, but that is practically unlikely as long as your system does not run at 100% CPU utilization all the time. As a young company, we cannot open-source everything immediately, so the wait-free queue will be part of a commercial support package, together with more sophisticated tooling, as teased in the blog post.
Regarding memory guarantees: they are essentially the same as when sharing an Arc&lt;T&gt; via a Rust channel. After publishing, the producer releases ownership to the subscribers, who have read-only access for as long as they hold the sample. Once the sample has been dropped by all subscribers, it is released back to the shared-memory allocator.
Btw, we also have an event-signalling mechanism so you don't have to poll the queue but can instead wait until the producer signals that new data is available. This requires a context switch, though, so it is up to the user to decide whether that trade-off is desired.
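The general wait-instead-of-poll idea can be sketched with Python threads (this is only an illustration of the pattern, not iceoryx's actual API, which signals across processes over shared memory):

```python
import queue
import threading

q = queue.Queue()
data_ready = threading.Event()
results = []

def consumer():
    # Block until the producer signals, instead of busy-polling the queue.
    # The wakeup costs a context switch, but burns no CPU while waiting.
    data_ready.wait()
    while not q.empty():
        results.append(q.get())

t = threading.Thread(target=consumer)
t.start()

q.put("sample-1")
q.put("sample-2")
data_ready.set()   # wake the consumer
t.join()
print(results)     # ['sample-1', 'sample-2']
```

A busy-polling consumer would instead loop on `q.empty()`, reacting with lower latency at the cost of a fully occupied core.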
Yep, these intrinsics are what I was referring to, and yes the software versions won’t use the hardware trig unit, they’ll be written using an approximating spline and/or Newton’s method, I would assume, probably mostly using adds and multiplies. Note the loss of precision with these fast-math intrinsics isn’t very much, it’s usually like 1 or 2 bits at most.
I’m not totally sure, but I think fast math usually comes with loss of support for denormals, which is a slight reduction in range. Note that even if they had denormals, the absolute error listed in the chart is much bigger than the biggest denorm. So you don’t lose range at the large end, but you might for very small numbers. That shouldn’t be a problem for sin/cos, since the result is never large, but it could be an issue for other ops.
Just for your information: when calculating trig functions, you first reduce the argument modulo 2π (this is called range reduction). Then you evaluate the function, usually as a polynomial approximation, possibly piecewise.
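A toy sketch of the two steps in Python (real libm implementations use carefully derived minimax polynomials and much more careful reduction; a plain Taylor polynomial is used here just to show the structure):

```python
import math

def sin_approx(x):
    # Step 1, range reduction: fold x into [-pi, pi]
    # by subtracting the nearest multiple of 2*pi.
    x = math.remainder(x, 2 * math.pi)
    # Step 2, polynomial approximation: degree-9 Taylor series
    # x - x^3/3! + x^5/5! - x^7/7! + x^9/9!, in Horner form.
    x2 = x * x
    return x * (1 - x2/6 * (1 - x2/20 * (1 - x2/42 * (1 - x2/72))))

# Without the reduction step, the polynomial would diverge wildly
# for an input like 12345.678.
print(sin_approx(12345.678), math.sin(12345.678))
```

Production implementations typically reduce further (e.g. to [-π/4, π/4] with quadrant bookkeeping) so a lower-degree polynomial suffices.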
But if it supports larger inputs, it must be doing range reduction, which is impressive for low-cycle ops. It must be done in hardware.
The denorm behavior doesn't surprise me. Denormals are really nice numerically but always disabled when looking for performance!
Oh that range reduction. :) I’m aware of the technique, but thanks I did misunderstand what you were referring to. I don’t know what Nvidia hardware does exactly. For __sinf(), the CUDA guide says: “For x in [-pi,pi], the maximum absolute error is 2^(-21.41), and larger otherwise.” That totally doesn’t answer your question, it could still go either way, but it does kinda tend to imply that it’s best to keep the inputs in-range.
Do you have any sources on that? The only thing I could find was that some parties want to ban digital billboards in the city of Zurich (not the whole canton).
Sorry, I should have clarified that the final decision has not yet been made. The city still approves digital billboards [0].
In Geneva, the debate has been going on for some time. There, digital billboards were supposed to be banned (since 2021, I think), but in the latest votes there is a counter-movement to keep them [1].
A few years ago I tried to port that code to Linux. I made some decent progress, but in the end I got stuck trying to convert the resources (images and audio files) to a more modern format. For example, the PICT format (https://en.wikipedia.org/wiki/PICT) is not just a pixel encoding but also contains QuickDraw commands. There are some open-source converters, but they seem to support only a subset. And then I had to concentrate on finishing my degree, so I abandoned the project.
Other things I remember:
- Data is mapped directly from files to C structs; this posed some challenges, as I had to convert big-endian data to little-endian.
- Classic MacOS handled memory allocation quite differently: if you wanted to access a dynamically allocated buffer, you first had to lock that part of memory, as otherwise the operating system was allowed to move the data to some other address.
My memory on these details is quite a bit fuzzy though, so I can't guarantee that what I wrote here is 100% correct.
I have been in a similar rut lately, trying to recover the artwork from Glider 4.0 (the Glider before Glider Pro). I have the amazing Mini vMac set up with Glider 4.0 installed, as well as one of the last ResEdit apps. I am literally opening the PICT resources with ResEdit in the emulator and taking screenshots of the emulator on macOS Monterey.
Yeah, then I get to tediously re-crop the artwork in the screenshots for correctness.
Also, the screenshot is pixel-doubled (Retina display, I suppose), but a little GraphicsMagick fixes it:
gm convert doubled_image.png -filter point -resize 50% new_image.png
It can open PICT resources and save to TIFF/GIF/PNG etc., and it has a batch mode.
edit: Interesting. I just tried the current version of GraphicConverter 11 that I have installed on Ventura, and it can actually still open PICTs from resource forks! Or... it tries, but the multi-page GUI is messed up, so it crops the images and doesn't display them correctly. But somewhere in there it is still reading resource-fork PICTs on macOS Ventura on Apple Silicon...
You may want to try rsrcdump for this task (https://github.com/jorio/rsrcdump). It takes a resource fork as input and spits out modern file formats for a selection of resource types (PICT -> .png, snd -> .aiff, etc.)
I just ran rsrcdump on Glider 4.09 and the resulting PNG files appear to match their PICT counterparts as displayed by ResEdit.
I initially wrote this tool to assist in porting some old games by Pangea Software. If you have some resource fork that fails to convert properly with rsrcdump, just let me know and I'd be happy to try to fix it.
When one talks about the average income, does that average only consider people who work? Otherwise, in countries with a high rate of unemployment, I would imagine that a single salary might have to feed a whole extended family, so that one needs a salary much higher than the average income to be sustainable.
It is indeed O(N). There is also a more efficient algorithm (in case the output range is small) that uses binary search to find the first index. Such an algorithm has time complexity O(log N + H), where H is the number of elements in the output.
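Assuming the task is extracting all elements of a sorted list that fall within a range, the idea can be sketched with Python's `bisect` module:

```python
from bisect import bisect_left, bisect_right

def values_in_range(sorted_vals, lo, hi):
    """Return all elements in [lo, hi] from a sorted list.

    Two binary searches locate the boundaries in O(log N);
    copying the H matching elements costs O(H), so the total
    is O(log N + H) instead of a full O(N) scan.
    """
    start = bisect_left(sorted_vals, lo)
    end = bisect_right(sorted_vals, hi)
    return sorted_vals[start:end]

data = [1, 3, 3, 5, 8, 13, 21]
print(values_in_range(data, 3, 8))  # [3, 3, 5, 8]
```

When H is close to N, the linear scan is just as good, so the binary-search variant only pays off for narrow ranges.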