Is this post AI-written? The repeated lists with highlighted key points, the "it's not just [x], but [y]" and "no [a] just [b]" scream LLM to me. It would be good to know how much of this post and this project was human-built.
I was on the fence about such an identification. The first "list with highlighted key points" seemed quite awkward to me and definitely raised suspicion (the overall list doesn't have quite the coherence I'd expect from someone who makes the conscious choice; and the formatting exactly matches the stereotype).
But if this is LLM content then it does seem like the LLMs are still improving. (I suppose the AI flavour could be from Grammarly's new features or something.)
It's interesting... Different LLM models each seem to have a few sentence structures that they vastly overprefer. GPT seems to love "It's not just X, it's Y", Claude loves "The key insight is..." and Gemini, for me, in every second response, uses the phrase "X is the smoking gun". I hear the smoking gun phrase around 5 times a day at this point.
It's hated by everyone, why would people imitate it? You're inventing a rationale that either doesn't exist or would be stupider than the alternative. The obvious answer here is that they just used an LLM.
I think that the style itself is very clear and has its advantages, it's hated only because it's from LLMs, which are not liked when used without judgement (which is often the case).
So, someone who falls on the side of not completely hating LLMs for everything (which is most people) could easily copy the style by accident.
I love the style it was written in. I felt a bit like I was reading a detective novel, exploring all the terrible things that happened and waiting for a plot twist and a hero coming in and saving the day.
Real humans write like that though. And LLMs are trained on text not speech. Maybe they should get trained on movie subtitles, but then movie characters also don't speak like real humans.
"LinkedIn Standard English" is just the overly-enthusiastic marketing speak that all the wannabe CEOs/VCs used to spout. LLMs had to learn it somewhere
No, they do it because they're mode-collapsed, use similar training algorithms (or even distillation on each other's outputs) and have a feedback loop based on scraping the web polluted with the outputs of previous gen models. This makes annoying patterns come and go in waves. It's pretty likely that in the next generation of models the "it's not just X, it's Y" pattern will disappear entirely, but another will annoy everyone.
This is purely an artifact of training and has nothing to do with real human writing, which has much better variety.
I came back around 2017*, expecting the same nice experience I had with VB3 to 6.
What a punch in the face it was...
I honestly cannot fathom anyone developing natively for windows (or even OSX) in this day and age.
Anything will be a webapp or a multi-platform rust+egui app developed on linux, or nothing. The amount of self-hate required for android/ios is already enough.
* not sure of the exact date. It was right in the middle of the WPF crap being forced as "the new default".
"What if you can't tell the difference?" Yeah, what if it becomes impossible to spot who's a lazy faker who outsourced their thinking? Doesn't that sound great?!
What's exhausting is getting through a ten-paragraph article and realising there were only two paragraphs of actual content, then having to wade back through it to figure out which parts came from the prompt and which parts were entirely made up by the automated sawdust injector.
That's not an AI problem, it's a general blog post problem. Humans inject their own sawdust all the time. AI, however, can write concisely if you just tell it to. Perhaps you should call this stuff "slop" without the AI, and then it doesn't matter who/what wrote it because it's still slop regardless.
I completely agree with your parent that it's tedious seeing this "fake and gay" problem everywhere and wonder what an unwinnable struggle it must be for the people who feel they have to work out if everything they read was AI written or not.
It used to require some real elbow grease to write blogspam, now it's much easier.
I hardly ever go through a post fisking it for AI tells, they leap out at me now whether I want them to or not. As the density of them increases my odds of closing the tab approach one.
It's not a pleasant time to read Show HNs but it just seems to be what's happening now.
It never used to be a general blog post problem. It was a problem with the kinds of blogs I'd never read to begin with, but "look, I made a thing!" was generally worth reading. Now, I can't even rely on "look, I made a thing!" blog posts to accurately describe the author's understanding of the thing they made.
I see your point. You need to recalibrate how you decide what to read, since the proxy has changed its meaning. I found a similar issue when monetized YouTubers started making things. It used to be amazing to see some hobby project that was a little bit sophisticated, but now big stars have lots of money and their full-time job is doing incredible projects just to make videos of them. It's not AI, but it's something that didn't use to exist. I'm thinking of channels like "Stuff Made Here" and "I Did A Thing" that sound humble but do difficult, expensive projects that have no purpose except to look good on a video.
I analyzed the text using Pangram, which is apparently reliable; it says "Fully human written" without ambiguity.[1]
I personally like the content and the style of the article. I never managed to accept going through the pain of installing and using Visual Studio and all these absurd procedures they impose on their users.
These days I'm always wondering whether what I'm reading is LLM-slop or the actual writing of a person who contracted AI-isms by spending hours a day talking to them.
I do feel there's far too much of a focus on instantaneous response in today's world, both at work and in personal life. If something I can give you is truly preventing you from moving forward then that's fair enough, but otherwise send emails, don't rush the replies, and let people plan their own time.
Even many non-tech people have begun to associate Internet-wide outages with "AWS must be down", so I imagine many of them searching "is aws down" and for Downdetector. For Downdetector, a hit counts as a down report, so it will report AWS impacts even when the culprit is Cloudflare, as in this case.
interesting, maybe "AWS is down" will become the new "the server is down" that some non-tech people throw around when anything unexpected happens on their computer?
I must have used nano for years at this point, and I'm shocked to find out how customisable nano actually is!
I tend to use micro[0] on most of my systems now just because it comes with really lovely defaults and keybindings that are a bit more familiar, but this might make me take a second look at nano in future.
It's not just customisable, it's also insanely scriptable. Any action that you can do in nano itself corresponds to a command, and you can create "string macros" that you can bind to key combinations. Additionally it can execute external commands on any nano buffer and return the result. Combining the two is very powerful.
E.g. I have a configuration which allows me to use nano while editing pdf side-by-side, and be able to click on the pdf and land in the correct line in nano, and vice-versa. (and obviously compiling the latex document itself happens via a custom keystroke).
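For anyone curious what this looks like in practice, here's a minimal sketch of the kinds of bindings being described, in ~/.nanorc syntax. This is not the commenter's actual configuration, just an illustration; function names like `savefile` and `execute` are from recent nano versions and may differ in older ones:

```nanorc
## ~/.nanorc sketch -- illustrative only, not the poster's real config

## Bind a key combo directly to a built-in function ("main" = the main editing menu).
bind ^S savefile main

## A "string macro": pressing Alt+B inserts the quoted text verbatim.
bind M-B "\begin{}" main

## External commands can be run on a buffer via the execute-command prompt
## (^T in recent nano); a key can be bound to open that prompt directly.
bind ^J execute main
```

Combining string macros with the execute-command prompt is roughly how keystroke-driven workflows like the LaTeX compile shortcut above can be wired up.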
Simple. I don't know emacs that well :)
I didn't even know emacs had a terminal mode until I looked this up; my main experience with emacs was when I was writing prolog and the IDE was emacs based. I didn't find it as nice to use back then so I never gave it a serious shot.
By comparison nano is everywhere and was super-simple to configure and spruce-up with custom functions, so it just stuck with me.
As for other competitors, when comparing to vim, I find it much simpler to use, and to the surprise of most vim users I speak to, equally powerful (at least for my needs).
Before I used micro & ne I used nano, and configured the keybindings to work in the CUA style. I still have the dot files, didn't delete them, but they rarely get used anymore.
I think they recently added Ctrl+S to save by default, even if unconfigured, woohoo.
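CUA-style rebinding in nano looks something like the following in ~/.nanorc. This is a sketch from memory, not the commenter's dotfiles; exact function names (`copy`, `paste`, `cut`, etc.) vary across nano versions, and rebinding these keys shadows their defaults (e.g. ^X normally exits):

```nanorc
## CUA-ish bindings sketch -- hypothetical, check `man nanorc` for your version
bind ^S savefile main   ## Ctrl+S save (the default in recent nano anyway)
bind ^C copy main       ## Ctrl+C copy (shadows "report cursor position")
bind ^X cut main        ## Ctrl+X cut (shadows the default exit binding)
bind ^V paste main      ## Ctrl+V paste (shadows "next page")
bind ^Z undo main       ## Ctrl+Z undo (shadows "suspend")
bind ^F whereis main    ## Ctrl+F find
```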
Promising but very barebones. Hard to do without syntax highlighting these days, for example. But I think it would be useful on a tiny machine with openwrt, as micro is huge there. ;-)
Same same. Never even occurred to me to look. That's the risk of a (successful) low-friction product though: you use it in quick bursts where the tool is necessary but largely invisible, and you never invest in learning more about it because it works so well with the defaults. There's probably a profound strategic insight buried in there somewhere.
Yes, their units come with an HDMI out, and you can connect them up and install onto them like any other server. But if you ever want the (admittedly very, very good) factory software back on them, I'd recommend imaging the internal storage first, as I couldn't find a way to get their OS installed back afterwards.
embedding i can understand due to lost revenue, but i really don't understand how linking to articles can possibly be anything but a boon for news outlets. how do news sites think people discover their content in the first place?! i don't know anyone of my generation who still subscribes to a single news outlet for their news.