While it’s interesting that Dom0 avoids Spectre-style branch prediction attacks, it’s not clear from TFA exactly why that is so. How does the architecture of the hypervisor avoid an attack that seems to operate at the hardware level? From my limited understanding of Spectre and Meltdown, swapping from a monolithic kernel to a microkernel wouldn’t mitigate the attack. The mitigations discussed in the VMScape paper [0] are hardware mitigations, as I read it. And I don’t see Xen mentioned anywhere in the paper, for that matter.
I guess it’s sort of off topic, but I was enjoying reading this until I got to the “That’s not just elegant — it’s a big deal for security” line that smelled like LLM-generated content.
Maybe that reaction is hypocritical. I like LLMs; I use them every day for coding and writing. I just can’t shake the feeling that I’ve somehow been swindled if the author didn’t care enough to edit out the “obvious” LLM tells.
I think the author actually meant "Yes, VMScape can leak information on Xen, but only from a miniature Dom0 process." They didn't seem to consider leaking from such a small pool to be a security issue.
Agreed on the point about hw-level mitigation. The leakage still exists. Containing it in a watertight box is quick and effective, and it does avoid extra overhead. But it doesn't patch the hole.
It might be as simple as more rigid context transfers flushing caches. There are a lot of guesses on here now. It'd be great if people stopped using "may" or "might" and looked at the code. Everyone's hopping on the lack of context and adding guesses. That's not helpful.
Please see my other comment where I share more details about VMScape and why Xen is not affected. In short, it is because branch predictor state is flushed when transitioning to Dom0. Indeed, it has nothing to do with the type of kernel...
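To make that concrete: on x86, the usual way to flush indirect branch predictor state across a privilege or domain transition is an IBPB (Indirect Branch Prediction Barrier), issued by writing to the IA32_PRED_CMD MSR. The sketch below is not Xen's actual code; the function name and where it would be hooked are assumptions on my part, but the MSR mechanism itself is the architectural one.

    /* Minimal sketch (not Xen's actual code) of flushing indirect branch
     * predictor state before handing control to Dom0. Assumes x86 hardware
     * that advertises IBPB support. */

    #include <stdint.h>

    #define MSR_IA32_PRED_CMD  0x49u        /* prediction command MSR  */
    #define PRED_CMD_IBPB      (1u << 0)    /* bit 0: issue an IBPB    */

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        /* ecx = MSR index, edx:eax = value to write */
        __asm__ __volatile__("wrmsr"
                             :
                             : "c"(msr), "a"((uint32_t)val),
                               "d"((uint32_t)(val >> 32))
                             : "memory");
    }

    /* Hypothetical hook on the guest-to-Dom0 return path. The IBPB ensures
     * that branch targets trained by the guest cannot steer indirect
     * branches executed by Dom0 afterwards. */
    static void flush_branch_predictor_before_dom0(void)
    {
        wrmsr(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
    }

If Xen only issues this barrier on the guest-to-Dom0 path, that would also explain the "avoids extra overhead" point above: guests don't pay for it on every switch.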
And yes, LLMs were at work. The "quote" in the article is not an actual quote...
It's not the em dash, but the negative parallelism ("not X, but Y"). This is a pattern which some LLMs really like using. I've seen some LLM-generated texts which used it in literally every sentence.
(The irony of opening with this pattern is not lost on me.)
As an aside, Wikipedia has a fascinating document identifying common "tells" for LLM-generated content:
I'm also on the spectrum and like using various kinds of parallel construction, including antithesis.
I also tend to use a lot of em dashes. If I posted something I wrote in, say, 2010, I'd likely get a lot of comments about my writing absolutely, 100% being AI-written. I have posted old writing snippets in the past year and gotten this exact reaction.
I originally (two decades ago) started using em dashes, I think, because I also tend to go off on frequent tangents or want to add additional context, and at the beginning of the tangent, I'm not entirely sure how I'll phrase it. So, instead of figuring out the best punctuation at that moment (be that a parenthesis, a comma, or a semicolon for a list), I'll just type an em dash (easy on a Mac).
Then I don't go back and fix it afterward because I have too many thoughts and not enough time to express them. There are popular quotes about exactly this issue.
It's a kind of laziness in my form of expression, meant to free up mental capacity to focus on the content. Alt 0151 and Alt 0150 are still burned into my memory from typing em dashes and en dashes so often on Windows.
I suppose I'll have to consider this my own punctuation mode collapse that RLHF is now forcing me to correct.
I've started deliberately using em-dashes and “smart” quotes (made easy by configuring a compose key) — mostly because they look nice, but also out of spite for any software that's somehow not properly Unicode-aware in 20-fucking-25.
Does using Grammarly count as AI-assisted writing?
I use Grammarly because it helps fix speech recognition errors. One of the challenges of using speech recognition is that it's a bit difficult at times to construct grammatically correct sentences in your head, speak them, and then proofread them before starting the next bit of writing.
[0]: https://comsec-files.ethz.ch/papers/vmscape_sp26.pdf