Partly as a follow-on from the Stack Overflow thread: when I was younger I found various "how to ask questions" articles really useful (I particularly recommend Jon Skeet's https://codeblog.jonskeet.uk/2010/08/29/writing-the-perfect-...). But he focuses on asking questions of "the Internet", where there should be no expectation that a random netizen will put effort into answering.
That's not necessarily the case at work with colleagues, so I wrote this article to fill a perceived gap.
You may not have realised that legitimate email has been lost (and it's possible none has been), but my experience suggests that's unlikely. I only have a handful of users, but when I was greylisting I'd get reports of missing mail at least once a year.
Which isn't to say it's not worth it, although nowadays I'd argue that https://www.postfix.org/POSTSCREEN_README.html pre-greet checks are just as good at stopping spam and better at not blocking legitimate mail.
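For anyone wanting to try it, a sketch of the relevant main.cf settings (these are real parameters from POSTSCREEN_README, but check the doc for the full setup, including the master.cf change that puts postscreen in front of smtpd on port 25):

```
# main.cf — postscreen's pre-greet test
postscreen_greet_wait = 6s        # how long postscreen stays silent
postscreen_greet_action = enforce # reject clients that talk before the banner
```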
From the linked articles, I understand "greytrapping" to be adding clients that attempt delivery to an invalid address and don't retry when greylisted to a deny list.
Honestly, greylisting is a hack. There are better options available nowadays, even though I was almost certainly using greylisting myself when the author wrote the article.
The key insight behind the idea is that common junk-mailing software doesn't implement standard SMTP very well. Greylisting answers the first delivery attempt with a temporary failure, telling the client to try again in a few minutes, and most legitimate mailers will do just that. Not all of them, though.
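The mechanism is little more than a map keyed on who's trying to deliver what to whom. A minimal sketch (hypothetical code, not any particular MTA's implementation):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Greylisting sketch: defer the first delivery attempt for a given
/// (client IP, sender, recipient) triplet; accept a retry after `min_delay`.
struct Greylist {
    first_seen: HashMap<(String, String, String), Instant>,
    min_delay: Duration,
}

impl Greylist {
    fn new(min_delay: Duration) -> Self {
        Greylist { first_seen: HashMap::new(), min_delay }
    }

    /// Returns the SMTP reply code for this delivery attempt.
    fn check(&mut self, ip: &str, from: &str, to: &str, now: Instant) -> u16 {
        let key = (ip.to_string(), from.to_string(), to.to_string());
        match self.first_seen.get(&key) {
            // First time we've seen this triplet: temporary failure.
            // A legitimate MTA will queue the mail and retry later.
            None => {
                self.first_seen.insert(key, now);
                450
            }
            // Retried after the minimum delay: accept.
            Some(&t) if now.duration_since(t) >= self.min_delay => 250,
            // Retried too eagerly: keep deferring.
            Some(_) => 450,
        }
    }
}

fn main() {
    let mut gl = Greylist::new(Duration::from_secs(300));
    let t0 = Instant::now();
    let first = gl.check("203.0.113.7", "a@example.org", "b@example.net", t0);
    let retry = gl.check("203.0.113.7", "a@example.org", "b@example.net",
                         t0 + Duration::from_secs(600));
    println!("first attempt: {first}, retry: {retry}");
}
```

Junk mailers that fire-and-forget never come back for the 250, which is the entire trick.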
A key observation here is that there's more than one way to ask a client to wait: an SMTP transaction opens with the server sending a greeting, and the client isn't supposed to say anything until it has received it. Pre-greet checks exploit exactly that, and (at least in my experience) they have better anti-spam specificity. So I turned greylisting off $mumble years ago.
Pre-greet checks are still a hack: there's nothing stopping a competent spammer from implementing the protocol properly, except that "competent spammer" is an oxymoron.
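For the curious, the mechanism fits in a few lines. This is an illustrative sketch only (the real thing to use is postscreen): hold the banner back, and treat any client that talks first as a protocol violator.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;
use std::time::Duration;

/// Pre-greet sketch: withhold the SMTP banner for `wait`; a client that
/// sends anything before seeing it isn't speaking SMTP properly.
/// Returns true if the client passed the check.
fn pregreet_check(mut conn: TcpStream, wait: Duration) -> bool {
    conn.set_read_timeout(Some(wait)).unwrap();
    let mut buf = [0u8; 1];
    match conn.peek(&mut buf) {
        // Bytes arrived during the banner delay: fail the client.
        Ok(n) if n > 0 => {
            let _ = conn.write_all(b"554 5.5.1 protocol error\r\n");
            false
        }
        // Silence until the timeout: send the banner and carry on.
        _ => {
            let _ = conn.write_all(b"220 mail.example.invalid ESMTP\r\n");
            true
        }
    }
}

/// Run one connection against the check; the client sends a command without
/// waiting iff `impatient`. Returns (check passed, what the client saw).
fn run_demo(impatient: bool) -> (bool, String) {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let client = thread::spawn(move || {
        let mut c = TcpStream::connect(addr).unwrap();
        if impatient {
            c.write_all(b"EHLO spam.example\r\n").unwrap();
        }
        let mut reply = String::new();
        let _ = c.read_to_string(&mut reply);
        reply
    });
    let (conn, _) = listener.accept().unwrap();
    let passed = pregreet_check(conn, Duration::from_millis(300));
    (passed, client.join().unwrap())
}

fn main() {
    let (passed, reply) = run_demo(true); // impatient "spammer"
    println!("spammer passed: {passed}, saw: {}", reply.trim());
    let (passed, reply) = run_demo(false); // patient client
    println!("patient passed: {passed}, saw: {}", reply.trim());
}
```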
In a correct program, the borrow checker has no effect.
Languages like C compile code on the understanding that if the compiler can't prove the code is incorrect, it will assume it's correct. Rust works the other way around: unless the compiler can prove the code correct (according to the language's rules), it won't compile it. In C, all programs that only perform defined behaviour are valid, but the compiler will also happily accept many programs which exhibit undefined behaviour. In safe Rust, all programs which would exhibit undefined behaviour are rejected; the trade-off is that many programs which would actually execute perfectly well are rejected too.
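A toy illustration of that last trade-off:

```rust
fn demo() -> (i32, usize) {
    let mut v = vec![1, 2, 3];

    // Rejected by the borrow checker, even though a C-style compiler would
    // accept the equivalent and it may well have run fine:
    //
    //     let first = &v[0];
    //     v.push(4);            // error[E0502]: cannot borrow `v` as
    //     println!("{first}");  // mutable while `first` borrows it
    //
    // (`push` may reallocate and leave `first` dangling, so Rust refuses to
    // take the chance.) The accepted version copies the value out first:
    let first = v[0];
    v.push(4);
    (first, v.len())
}

fn main() {
    let (first, len) = demo();
    println!("first = {first}, len = {len}"); // first = 1, len = 4
}
```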
In both cases, once you're past the layers that do the checking, later stages assume that whatever reaches them has already been shown to be OK; they generally don't have enough information to re-check it at compile time. At runtime it might blow up, or it might not.
Unsafe isn't so unsafe that it disables the borrow checker!
The two main things the compiler allows in an unsafe block but not elsewhere are calling other code marked "unsafe" and dereferencing raw pointers. The net result is that safe code running in a system that isn't already exhibiting undefined behaviour is guaranteed not to introduce undefined behaviour, whereas the compiler can't in general prove that an unsafe block won't trigger it.
You can side-step the borrow checker by using pointers instead of references, but using that power to construct an invalid reference is undefined behaviour.
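Concretely (a sketch of both points):

```rust
fn bump_twice() -> i32 {
    let mut x = 42;

    // `unsafe` does not switch the borrow checker off: two overlapping
    // mutable references are rejected whether or not they sit inside an
    // unsafe block.
    //
    //     let a = &mut x;
    //     let b = &mut x;   // error[E0499]: second mutable borrow
    //     *a += *b;

    // Raw pointers are not borrow-checked, though. Creating and copying
    // them is safe; only the dereference needs `unsafe`, and keeping them
    // valid is entirely on you.
    let p: *mut i32 = &mut x;
    let q = p; // a second, aliasing raw pointer: fine
    unsafe {
        *p += 1;
        *q += 1;
    }
    x
}

fn main() {
    println!("x = {}", bump_twice()); // x = 44
}
```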
Cloudflare needs to worry about their sandbox, because they are running your code and you might be malicious. You have less reason to worry: if you want to do something malicious to the box your worker code is running on, you already have access (because you're self-hosting) and don't need a sandbox escape.
Automatically running LLM-written code (where the LLM might be naively picking a malicious library to use, is poisoned by malicious context from the internet, or wrongly thinks it should reconfigure the host system it's executing code on) is an increasingly popular use-case where sandboxing is important.
That scenario is harder to distinguish from the adversarial case that public hosts like Cloudflare have to handle. I don't think it's unreasonable to say that a project like OpenWorkers can be useful without meeting the needs of that particular use-case.
Honestly, for my own stuff I only need one PoP to be close to my users. And I've avoided using Cloudflare because they're too far away.
More seriously, I think there's a distinction between "edge-style" and actual edge that's important here. Most of the services I've been involved in wouldn't benefit from any kind of edge placement: that's not the lowest hanging fruit for performance improvements. But that doesn't mean that the "workers" model wouldn't fit, and indeed I suspect that using a workers model would help folk architect their stuff in a form that is not only more performant, but also more amenable to edge placement.