These image gen models are getting so advanced and lifelike that the general public is increasingly being duped into believing AI images are real (e.g. Facebook food images or fake OF models). Don't get me wrong, I will enjoy the benefits of using this model to express myself better than ever before, but I can't help feeling there's something very insidious about these models too.
It's more likely than not that every single person who uses the internet has viewed an AI image and taken it as real by now.
The obvious ones stand out, but so many are indiscernible without spending a lot of time digging into them. Even then, there are some where you can at best guess that they're maybe AI generated.
People will continue to retreat into walled, trusted networks where they can have more confidence in the content they see. I can’t even be sure I’m responding to a real person right now.
Just the other day, I saw a comment on HN accuse another comment of being AI for no good reason. I personally thought the comment was fine.
I know it's an unpopular opinion, but I don't really read too deeply into whether text is AI generated or not. On social platforms like HN I tend to just skim many comments anyway so it's not like the concept of "they spent no time writing so you shouldn't spend time reading" really applies.
I know some people use apps like Grammarly to improve their language and stuff, which I can respect. But at what point do we draw the line between AI assisted text and AI generated text?
I sometimes use AI to research the nuance of some topics to help me formulate a response and synthesize ideas, but if I ever get to the point where I'd be asking AI to generate a response to the comment, then I find it better to just not respond at all.
We're at the point now where basically any photo that isn't shared by someone I trust or a reputable news organisation is essentially unverifiable as real or not.
The positive aspect of this advance is that I've basically stopped using social media because of the creeping sense that everything is slop.
Maybe not an actual argument for anything, but even before these image models everyone that used the internet had seen a doctored image they believed to be real. There was a reason that 'i can tell by the pixels' was a meme.
people only notice when they are prompted to look for AI or scrutinize AI
a lot of these accounts mix old clips with new AI clips
or they tag onto something emotional, like a fake Epstein file image with your favorite politician, and pointing out it's AI has people thinking you're deflecting because you support the politician
Meanwhile the engagement farmer is completely exempt from scrutiny
It's fascinating how fast and in what unexpected directions this goes
I actually think this was a good thing. Manipulating images incredibly convincingly was already possible but the cost was high (many hours of highly skilled work). So many people assumed that most images they were seeing were "authentic" without much consideration. By making these fake images ubiquitous we are forcing people to quickly learn that they can't believe what they see on the internet and tracking down sources and deciding who you trust is critically important. People have always said that you can't believe what you see on the internet, but unfortunately many people have managed without major issue ignoring this advice. This wave will force them to take that advice to heart by default.
I remember telling my parents at a young age that I couldn't be sure Ronald Reagan was real, because I'd only ever seen him on TV and never in real life, and I knew things on TV could be fake.
That was the beginning of my journey into understanding what proper verification/vetting of a source is. It's been going on for a long time and there are always new things to learn. This should be taught to every child, starting early on.
I agree. Too many adults are fooled by fake news and propaganda and false contexts. And CNN and Fox are more than happy to take advantage of this.
My personal rule of thumb is if it generates outrage, it's probably fake, or at least a fake interpretation. I know that outrageous stuff actually happens pretty often, so I'll dig into things I find interesting. But most of the time it's all just garbage for clicks.
I used to also have this optimistic take, but over time I think the reality is that most people will instead just distrust unknown online sources and fall into the mental shortcuts of confirmation bias and social proof. Net effect will be even more polarization and groupthink.
> By making these fake images ubiquitous we are forcing people to quickly learn
That's quite a high opinion of the self-improvement ability of your average Joe. This kind of behavior only comes with a previously learned awareness and an alertness of mind. You need the population at large to be able to do this. How, if not by, say, teaching this in schools and waiting for the next generation to reach adulthood, would you expect this to happen?
I agree that improvement for the average Joe will be very hard. I also think that paying more attention to teaching the younger generation is vitally important. But mostly I don't see an alternative. I don't think we can protect people from fake information without giving up our freedom, and that isn't a viable alternative in my mind. So what is left but trying our hardest to teach people to think critically?
Our institutions have been trying to get our kids to think critically for a while. At least when I was in school, we didn't focus a lot on memorization (sometimes we did, like memorizing the times tables or periodic table). My teachers tried to instill in us an understanding of the concepts, something I took for granted. Many of my classmates have gone on to become lawyers, doctors, other prestigious careers.
But I feel like we live in a different time now. I hear teachers tell stories about school admin siding with parents instead of teachers, and the kids aren't learning anything. Anecdotally of course.
I think our teachers really want the kids to think critically. But parents and schools don't seem to value that anymore.
> By making these fake images ubiquitous we are forcing people to quickly learn that they can't believe what they see on the internet and tracking down sources and deciding who you trust is critically important.
Has this thought process ever worked in real life? I know plenty of seniors who still believe everything that comes out of Facebook, be AI or not, and before that it was the TV, radio, newspapers, etc.
Most people choose to believe, which is why they have a hard time confronting facts.
And not just seniors. I see people of all ages who are perfectly happy to accept artificially generated images and video so long as it plays to their existing biases. My impression is that the majority of humanity is not very skeptical by default, and unwilling to learn.
Yes. People willingly accept made up text (stories) if it fits their world view, and for words we always knew that they could be untrue. Why should it be different for images/audio/video?
When it comes to graphic content on the internet, I usually consume it for entertainment purposes. I didn't care where it came from before and I don't care today either. Low-quality content exists in both categories, and it's a bit easier to spot when AI generated, so it's actually a bonus.
I feel like there are one or two generations of people who are tech-savvy and not 100% gullible when it comes to online things. Older and younger generations are both completely lost imho; in a blind test you wouldn't discern a monkey from a human scrolling TikTok & co.
Boomers used to tell us to never trust anything online and now they send their life savings to "Brad Pitt"
New generations get unlimited brain rot delivered through infinite scroll, don't know what a folder is, think everything is "an app", and keep falling for "technology will free us from work and cure cancer"
There was a sweet spot during which you could grow alongside the internet at a pace that was still manageable, when companies and scammers weren't trying so hard to rob you of your time, money, and attention
Your post seems a little naive to me. A lot of people are just not interested in putting in the work or confronting their own confirmation bias, and there's an oversupply of bad actors who will deliberately generate fake imagery for either deception or exhaustion. Many people are just not on a quest for truth and are more interested in the activation potential of images or allegations than in their factual reliability.
In reality: millions of boomers are scrolling FB this very minute reacting to the most obviously fake rage/surprise/love bait AI slop you've ever seen.
People already have access to every form of niche pornography they could dare to imagine (for absolutely free!), I really doubt that 'personal taste' is the part that makes OF models their money. They'll be fine.
I think you're under-estimating how much personal taste applies in that industry. Yes, there's a lot of free content but it's often low quality and/or difficult to find for a particular niche. The OF pages, and other paid sites, are curated collections of high quality stuff that can satisfy particular cravings repeatedly with minimal effort.
A big part of it is also the feeling of "connection" with the creator via messages and whatnot, but that too can be replicated (arguably better) by AI. In fact, a lot of those messages are already being generated haha.
I was mostly hinting towards the 'connection' part of it, yes - I think that's really where the money is made more than anything else. That's the part that'll start killing the industry once some company tunes it in.
This is the dystopia of that pacified moon from "Mold of Yancy" by PKD but taken to the next level.
What's astonishing about the present is that even PKD did not foresee the possibility of an artificial being not only constructed from whole cloth but actually tailored to each individual.
Even ignoring the model censorship making high quality sexual imagery/videos not possible, this is a crazy take. You think OF models are making money because it's the only way to see a nude man/woman with particular characteristics on the internet?
You're completely misunderstanding what the product being sold is.
Their point is that the point of OF is that there is (supposed to be) a real human. It's a (para)social relationship that no image generator model is going to give you.
If you can make X money running one client at a time, you can make X × N money if you work with N clients at a time. You just have to give enough human touch to keep 'em hooked.
Sex work shouldn't be shunned, but it's not a normal profession either. Mental health problems, addiction, and abuse are just as prevalent online and in countries where prostitution is legal and normalized.
You can’t really because these powerful models are censored.
You can create lewd pictures with open models but they aren’t nearly as good or easy to use.
I’ve seen some very high quality NSFW AI video in the last few months. Those models are not far behind, and the search and training space for porn is smaller than being able to generate anything.
> I’ve seen some very high quality NSFW AI video in the last few months. Those models are not far behind and the search and training space for porn is smaller than being able to generate anything.
Agreed. In my opinion, the primary limitation of the porn models is actually poor labeling of the training set. The company that manages to produce a well-labeled, porn-tuned AI image model is going to absolutely clean up.
The extractive dark patterns that will emerge from a parasocial chat "AI relationship" that can generate porn images relevant to the chat on the fly will be staggering. Once that proceeds to being able to generate relevant video, all holy hell is going to break loose.
> The company that manages to produce a well-labeled, porn-tuned AI image model is going to absolutely clean up.
For anime/non-photographic content that essentially exists (Pony, then Illustrious, then probably some new-fangled thing by now that I don't even know about), thanks to the meticulously tagged booru image corpus. However, as strong as these models are on matters of anatomy and kinks, they're limited in other ways due to the hugely biased dataset and dependence on tag soup prompts rather than natural language (many find the latter a plus, not a minus, though).
I haven't heard of any proprietary/cloud-based NSFW model that would be massively better than what's available for free. There are many NSFW-friendly services, but by and large they're just frontends to models trained by other people.
Because models can be used to alter existing images, you can use open and commercial models together in content creation workflows (and the available fine-tunes of open models, and the ability to further tune them for very specific uses, are quite powerful on their own), so the censorship of the commercial models has a lot less effect on what motivated people can produce than you might think.
I still think, even with that, that like most predictions of AI taking over any content industries, the short-term predictions are overblown.
Doesn't Grok allow users to create lewd content or did they roll that back?
Also, I suspect that we'll soon see the same pattern of open weights models following several months behind frontier in every modality not just text.
It's just too easy for other labs to produce synthetic training data from the frontier models and then mimic their behavior. They'll never be as good, but they will certainly be good enough.
I don't think so. Talking to people in this space, I've found there are a few broad camps. There are probably more:
-They simply aren't into real women/men (so you couldn't even pay a model to do what they're looking for).
-They want to play out fantasies that would be hard to coordinate even if you could pay models (I guess this is more on the video side of things, but a string of photos can be put together into a comic)
-They want to generate imagery that would be illegal
Based on this, I would guess fetish artists (as in illustrators) are more at risk than OF models. However, AI isn't free. Depending on what you're looking for, commissions might be cheaper still for quite a while...
It stands for "OnlyFans" a website originally for creators to engage directly with their audiences but quickly became a website where women sold explicit pictures of themselves to subscribers.
They still run ads trying to push the narrative that it's for comedians and musicians.
But at this point, OnlyFans is so synonymous with egirls that suggesting someone has an account is used as a way to insinuate they sell pictures of themselves.
Jaded, but if I knew there was a possibility of a bunch of incriminating footage of me (images, video, etc.) out there in the pre-AI days, I would do my absolute best to flood the internet with as many related deepfakes (including of myself) as possible.
Oh, we’ve seen nothing yet of the chaos that generative AI will unleash on the world. Looking at Meta platforms, it’s already a multi-million-dollar industry of selling something or someone that doesn’t exist. And that’s just the benign stuff.
This has been true for a while with digital art, photoshop, etc. Over time, people's BS detectors get tuned. I mean, scrolling by quickly in a feed, yeah, you might miss if an image is "real" or not, but if you see a series of photos side by side of the same subject (like an OF model), you'll figure it out.
Also, using AI will not allow you to better express yourself. To use an analogy, it will not put your self-expression into any better focus, but just apply one of the stock IG filters to it.
> a series of photos side by side of the same subject
Cameras are now "enhancing" photos with AI automatically. The contents of a 'real' photo are increasingly generated. The line is blurring and it's only going to get worse.
It's shitty, but I think it's almost as bad that people are calling everything AI. And I can't even blame them, despite how infuriating it is. It's just as insidious that even mundane things literally ARE AI now. I've seen at least twice now (that I'm aware of) where some cute, harmless, otherwise non-outrageous animal video was hiding a Sora watermark. So the crazy shit is AI. The mundane shit is AI. You wonder why everyone is calling everything AI now. :P
It seems like a low level paranoia - now I find myself double checking that the youtube video I'm watching isn't some AI slop. All the creators use Getty b-rolls and increasingly AI generated stuff so much that it's not a far stretch to have the voice and script all be auto generated too.
I suppose if the AI were able to tell me a true and compelling story, I might not even mind so much. I just don't want to be spoon-fed drivel for 15 minutes only to find out it was all completely made-up BS.
Still some tweaks to the final result, but I am guessing with the ARC-AGI benchmark jumping so much, the model's visual abilities are allowing it to do this well.
Animated SVGs are one of the examples in the press release. Which is fine, I just think the weird SVG benchmark is now dead. Gemini has beaten the benchmark, and the differences now just come down to taste.
I don't know if it got these abilities through generalization or if google gave it a dedicated animated SVG RL suite that got it to improve so much between models.
Regardless, we need a new vibe-check benchmark à la the pelican on a bicycle.
What benchmark, though? There is very clearly a lot of room for improvement in its SVG making capabilities. The fact that it can now, finally, make a pelican on a bike that isn’t completely wrong is not an indicator that SVG generation is now a solved problem.
I'm thinking now that as models get better and better at generating SVGs, there could be a point where we can use them to just make arbitrary UIs and interactive media with raw SVGs in realtime (like flash games).
You’re not going to believe me when I tell you this, but generating a webpage with HTML is far simpler than generating arbitrary graphics (that look good) with SVGs.
That's one dimension, before another long-term milestone: realtime generation of 3D mesh content during gameplay.
Which is the "left brain" approach vs the "right brain" approach of coming at dynamic videogames from the diffusion model direction which the Gemini Genie thing seems to be about.
Unfortunately it still fails my personal SVG benchmark (an educational 2D cross-section of the human heart), even after multiple iterations and screenshot feedback. Oh well, back to the (human) drawing board.
On the other hand, creation of other vector image formats (eg. "create a postscript file showing a walrus brushing its teeth") hasn't improved nearly so much.
Perhaps they're deliberately optimising for SVG generation.
To show newbies how to use vim. Currently it's not complete and has major issues. So if you want to try it, give it a go, but please hold your judgement, as not all shortcuts have been added.
I have found GPT 5.3-Codex to do exceedingly well when working with graphics rendering pipelines. They must have better training data or RL approaches than Anthropic, as I have given the same prompt and config to Opus 4.6 and it added unwanted rendering artifacts. This may just be an issue specific to my use case, but I wonder, since OpenAI is partnered with MSFT, which makes lots of games, whether this may be an area they heavily invested in.
While I think the use of the term “terrorist” is unwarranted, I do think deflock is seeking political change. The decision to use flock is a government policy choice, right?
>> “I was stunned to learn late yesterday that after convening a task force of local and national experts, Mayor Johnston has been negotiating secretly with the discredited CEO of Flock Safety and signing another unilateral extension of this mass surveillance contract with no public process and no vote from the City Council or input from his own task force,” Councilmember Sarah Parady told The Denver Gazette.
What is the point of this comment? Are you saying that deflock are not terrorists but are terrorist adjacent? Why respond to someone defining terrorism by pointing out that 2 words at the end of the definition also apply to deflock? Do those not apply to basically everyone who participates in their country's society, including literally everyone who votes and all politicians?
I am very curious whether this app is making money, or whether users are just using the two generators and then leaving. Either way, I am very impressed with your wrapper around the image gen models.
This could be the future of film. Instead of prompting where you don't know what the model will produce, you could use fine-grained motion controls to get the shot you are looking for. If you want to adjust the shot after, you could just checkpoint the model there, by taking a screenshot, and rerun. Crazy.