Hacker News: nickandbro's comments

These image gen models are getting so advanced and lifelike that, increasingly, the general public is being duped into believing AI images are real (e.g. Facebook food images or fake OF models). Don't get me wrong, I will enjoy the benefits of using this model to express myself better than ever before, but I can't help feeling there's also something very insidious about these models.

It's more likely than not that every single person who uses the internet has viewed an AI image and taken it as real by now.

The obvious ones stand out, but there are so many that are indiscernible without spending a lot of time digging through them. Even then, there are some where the best you can do is guess that they're maybe AI-generated.


People will continue to retreat into walled, trusted networks where they can have more confidence in the content they see. I can’t even be sure I’m responding to a real person right now.

As long as the Hacker News community keeps the quality of the conversation high (with or without AI), I don’t think many of us will question this too much.

Just the other day, I saw a comment on HN accuse another comment of being AI for no good reason. I personally thought the comment was fine.

I know it's an unpopular opinion, but I don't really read too deeply into whether text is AI generated or not. On social platforms like HN I tend to just skim many comments anyway so it's not like the concept of "they spent no time writing so you shouldn't spend time reading" really applies.

I know some people use apps like Grammarly to improve their language and stuff, which I can respect. But at what point do we draw the line between AI assisted text and AI generated text?

I sometimes use AI to research the nuance of some topics to help me formulate a response and synthesize ideas, but if I ever get to the point where I'd be asking AI to generate a response to the comment, then I find it better to just not respond at all.


We're at the point now where basically any photo that isn't shared by someone I trust or a reputable news organisation is essentially unverifiable as real or not.

The positive aspect of this advance is that I've basically stopped using social media because of the creeping sense that everything is slop


Maybe not an actual argument for anything, but even before these image models, everyone who used the internet had seen a doctored image they believed to be real. There was a reason "I can tell by the pixels" was a meme.

At least some of the comments here are likely AI-generated

people only notice when they are prompted to look for AI or scrutinize AI

a lot of these accounts mix old clips with new AI clips

or tag onto something emotional like a fake Epstein file image with your favorite politician, and pointing out it's AI has people thinking you're deflecting because you support the politician

Meanwhile the engagement farmer is completely exempt from scrutiny

It's fascinating how fast and unexpectedly the direction shifts


I actually think this was a good thing. Manipulating images incredibly convincingly was already possible but the cost was high (many hours of highly skilled work). So many people assumed that most images they were seeing were "authentic" without much consideration. By making these fake images ubiquitous we are forcing people to quickly learn that they can't believe what they see on the internet and tracking down sources and deciding who you trust is critically important. People have always said that you can't believe what you see on the internet, but unfortunately many people have managed without major issue ignoring this advice. This wave will force them to take that advice to heart by default.

I remember telling my parents at a young age that I couldn't be sure Ronald Reagan was real, because I'd only ever seen him on TV and never in real life, and I knew things on TV could be fake.

That was the beginning of my journey into understanding what proper verification/vetting of a source is. It's been going on for a long time and there are always new things to learn. This should be taught to every child, starting early on.


I agree. Too many adults are fooled by fake news and propaganda and false contexts. And CNN and Fox are more than happy to take advantage of this.

My personal rule of thumb is if it generates outrage, it's probably fake, or at least a fake interpretation. I know that outrageous stuff actually happens pretty often, so I'll dig into things I find interesting. But most of the time it's all just garbage for clicks.


I used to also have this optimistic take, but over time I think the reality is that most people will instead just distrust unknown online sources and fall into the mental shortcuts of confirmation bias and social proof. Net effect will be even more polarization and groupthink.

> By making these fake images ubiquitous we are forcing people to quickly learn

That's quite a high opinion of the self-improvement ability of your average Joe. This kind of behavior only comes with previously learned awareness and alertness of mind. You need the population at large to be able to do this. How, if not by, say, teaching this in schools and waiting for the next generation to reach adulthood, would you expect it to happen?


I agree that improvement for the average Joe will be very hard. I also think that paying more attention to teaching the younger generation is vitally important. But mostly, I don't see an alternative. I don't think we can protect people from fake information without giving up our freedom, and that isn't a viable alternative in my mind. So what is left but trying our hardest to teach people to think critically?

Our institutions have been trying to get our kids to think critically for a while. At least when I was in school, we didn't focus a lot on memorization (sometimes we did, like memorizing the times tables or periodic table). My teachers tried to instill in us an understanding of the concepts, something I took for granted. Many of my classmates have gone on to become lawyers, doctors, other prestigious careers.

But I feel like we live in a different time now. I hear teachers tell stories about school admin siding with parents instead of teachers, and the kids aren't learning anything. Anecdotally of course.

I think our teachers really want the kids to think critically. But parents and schools don't seem to value that anymore.


> By making these fake images ubiquitous we are forcing people to quickly learn that they can't believe what they see on the internet and tracking down sources and deciding who you trust is critically important.

Has this thought process ever worked in real life? I know plenty of seniors who still believe everything that comes out of Facebook, be it AI or not, and before that it was the TV, radio, newspapers, etc.

Most people choose to believe, which is why they have a hard time confronting facts.


> I know plenty of seniors

And not just seniors. I see people of all ages who are perfectly happy to accept artificially generated images and video so long as it plays to their existing biases. My impression is that the majority of humanity is not very skeptical by default, and unwilling to learn.


Yes. People willingly accept made up text (stories) if it fits their world view, and for words we always knew that they could be untrue. Why should it be different for images/audio/video?

As they say, people have accepted made up religions for thousands of years.

When it comes to graphic content on the internet, what I consume is usually for entertainment purposes. I didn't care where it came from before and don't care today either. Low-quality content exists in both categories, and it's a bit easier to spot when AI-generated, so it's actually a bonus.

I feel like there are one or two generations of people who are tech-savvy and not 100% gullible when it comes to online things. Older and younger generations are both completely lost imho; in a blind test you couldn't tell a monkey from a human scrolling TikTok & co.

How so? This "tech-savvy and not 100% gullible" generation gave birth to a political landscape dominated by online ragebait.

Boomers used to tell us to never trust anything online and now they send their life savings to "Brad Pitt"

Newer generations get unlimited brain rot delivered through infinite scroll, don't know what a folder is, think everything is "an app", and keep falling for "technology will free us from work and cure cancer"

There was a sweet spot during which you could grow alongside the internet at a pace that was still manageable, when companies and scammers weren't trying so hard to rob you of your time, money, and attention


And if they don't?

Your post seems a little naive to me. A lot of people are just not interested in putting in the work or confronting their own confirmation bias, and there's an oversupply of bad actors who will deliberately generate fake imagery for either deception or exhaustion. Many people are just not on a quest for truth and are more interested in the activation potential of images or allegations than in their factual reliability.


In reality: millions of boomers are scrolling FB this very minute reacting to the most obviously fake rage/surprise/love bait AI slop you've ever seen.

They were scrolling through fake bait long before generative AI

but now it is even harder to distinguish

>fake OF models

Soon many real OF models will be out of a job, once everyone is able to produce content to their personal taste from a few prompts.


People already have access to every form of niche pornography they could dare to imagine (for absolutely free!), I really doubt that 'personal taste' is the part that makes OF models their money. They'll be fine.

I think you're underestimating how much personal taste applies in that industry. Yes, there's a lot of free content, but it's often low quality and/or difficult to find for a particular niche. The OF pages, and other paid sites, are curated collections of high-quality stuff that can satisfy particular cravings repeatedly with minimal effort.

A big part of it is also the feeling of "connection" with the creator via messages and whatnot, but that too can be replicated (arguably better) by AI. In fact, a lot of those messages are already being generated haha.


I was mostly hinting towards the 'connection' part of it, yes - I think that's really where the money is made more than anything else. That's the part that'll start killing the industry once some company tunes it in.

This is the dystopia of that pacified moon from "Mold of Yancy" by PKD but taken to the next level.

What's astonishing about the present is that even PKD did not foresee the possibility of an artificial being not only constructed from whole cloth but actually tailored to each individual.


We looked forward to the future, but it turns out the future smashed into our blind spot from the side.

For a podcast on this topic (niche pornography and how it was affected by the advent of pornhub and the likes) check out "the butterfly effect"

Even ignoring the model censorship that makes high-quality sexual imagery/video impossible, this is a crazy take. You think OF models are making money because it's the only way to see a nude man/woman with particular characteristics on the internet?

You're completely misunderstanding what the product being sold is.


If you don't think that OF models are using AI to reply to incoming chats from users, well I've got a bridge to sell ya.

No, I don't think OF models aren't using AI to respond to chat. Where did I say I thought that?

Then please explain what you're talking about.

Their point is that the point of OF is that there is (supposed to be) a real human. It's a (para)social relationship that no image generator model is going to give you.

If you can make X money running one client at a time, you can make (X × N) money if you work with N clients at a time. You have to give just enough human to keep 'em hooked.

Yes, and my point is that the (supposedly) real human is also AI. You're chatting with a bot.

They often contract that work out; I wouldn't be surprised if some of that is already AI. Cheaper than hiring, if you get it right.

> Soon many real OF models will be out of job when everyone will be able to produce content to their personal taste from a few prompts.

net positive to society


In what way? Certainly not for the models, who lose their income/job. Probably not better for the consumer, either.

or the taxpayer

The high end probably pays the same sort of tax as professional footballers.


Sex work shouldn't be shunned, but it's not a normal profession either. Mental health, addiction, and abuse are just as much of a problem online and in countries where prostitution is legal and normalized.

lose the income, but likely they will live a more fulfilling life.

More fulfilling life starving on the streets with beginner programmers looking for a job?

And this can't come soon enough.

Coming soon... YOU!

You can’t really because these powerful models are censored. You can create lewd pictures with open models but they aren’t nearly as good or easy to use.

I’ve seen some very high quality NSFW AI video in the last few months. Those models are not far behind and the search and training space for porn is smaller than being able to generate anything

> I’ve seen some very high quality NSFW AI video in the last few months. Those models are not far behind and the search and training space for porn is smaller than being able to generate anything.

Agreed. In my opinion, the primary limitation of the porn models is actually poor labeling of the training set. The company that manages to produce a well-labeled, porn-tuned AI image model is going to absolutely clean up.

The extractive dark patterns that will emerge from a parasocial chat "AI relationship" that can generate porn images relevant to the chat on the fly will be staggering. Once that proceeds to being able to generate relevant video, all holy hell is going to break loose.


> The company that manages to produce a well-labeled, porn-tuned AI image model is going to absolutely clean up.

For anime/non-photographic content that essentially exists (Pony, then Illustrious, then probably some new-fangled thing by now that I don't even know about), thanks to the meticulously tagged booru image corpus. However, as strong as these models are on matters of anatomy and kinks, they're limited in other ways due to the hugely biased dataset and dependence on tag soup prompts rather than natural language (many find the latter a plus, not a minus, though).

I haven't heard of any proprietary/cloud-based NSFW model that would be massively better than what's available for free. There are many NSFW-friendly services, but by and large they're just frontends to models trained by other people.


Because models can be used to alter existing images, you can use open and commercial models together in content-creation workflows (and the available finetunes of open models, and the ability to further tune them for very specific uses, are quite powerful on their own), so the censorship of the commercial models has a lot less effect on what motivated people can produce than you might think.

I still think, even with that, that like most predictions of AI taking over any content industries, the short-term predictions are overblown.


Doesn't Grok allow users to create lewd content or did they roll that back?

Also, I suspect that we'll soon see the same pattern of open weights models following several months behind frontier in every modality not just text.

It's just too easy for other labs to produce synthetic training data from the frontier models and then mimic their behavior. They'll never be as good, but they will certainly be good enough.


Just a matter of time before open models get there. Not once have we seen a moat across the model spectrum.

I don't think so. Talking to people in this space, I've found a few broad camps. There are probably more:

- They simply aren't into real women/men (so you couldn't even pay a model to do what they're looking for).

- They want to play out fantasies that would be hard to coordinate even if you could pay models (I guess this is more on the video side of things, but a string of photos can be put together into a comic).

- They want to generate imagery that would be illegal.

Based on this, I would guess fetish artists (as in illustrators) are more at risk than OF models. However, AI isn't free. Depending on what you're looking for, commissions might be cheaper still for quite a while...


Lily Allen Says Her OnlyFans Feet Pictures Make More Money Than Spotify Streams: ‘Don’t Hate the Player, Hate the Game’ : https://variety.com/2024/music/news/lily-allen-onlyfans-feet...

And they might have to gasp! get an honest job!

I don't know much about that side of things, but I presume that's hard work! Maybe not always so honest though.

That's a pretty wide brush you are painting with there

That OF is not a honest job?

That's the narrowest of brushes. Anybody thinking otherwise is the one painting with an overly broad mind, and not in a good way.


Don’t think the demand for real OF is going anywhere

How do you know they’re real right now?

A lot of escorts have OF profiles.

> Facebook food images or fake OF models

What in the world is a fake OF model?

Does "OF" stand for "of food"?


It stands for "OnlyFans", a website originally for creators to engage directly with their audiences that quickly became a website where women sold explicit pictures of themselves to subscribers.

TIL it wasn't created to be a porn site

They still run ads trying to push the narrative that it's for comedians and musicians.

But at this point, OnlyFans is so synonymous with egirls that suggesting someone has an account is used as a way to insinuate they sell pictures of themselves.


Jaded, but if I knew there was a possibility of a bunch of incriminating footage of me (images, video, etc.) out there in the pre-AI days, I would do my absolute best to flood the internet with as many related deepfakes (including of myself) as possible.

Surely this is a problem that we will never be able to solve.

Oh, we've seen nothing yet of the chaos that generative AI will unleash on the world. Looking at Meta platforms, it's already a multi-million-dollar industry of selling something or someone that doesn't exist. And that's just the benign stuff.

This has been true for a while with digital art, photoshop, etc. Over time, people's BS detectors get tuned. I mean, scrolling by quickly in a feed, yeah, you might miss if an image is "real" or not, but if you see a series of photos side by side of the same subject (like an OF model), you'll figure it out.

Also, using AI will not allow you to better express yourself. To use an analogy, it will not put your self-expression into any better focus, but just apply one of the stock IG filters to it.


> a series of photos side by side of the same subject

Cameras are now "enhancing" photos with AI automatically. The contents of a 'real' photo are increasingly generated. The line is blurring and it's only going to get worse.


It's shitty, but I think it's almost as bad that people are calling everything AI. And I can't even blame them, despite how infuriating it is. It's just as insidious that even mundane things literally ARE AI now. I've seen at least twice now (that I'm aware of) where some cute, harmless, otherwise non-outrageous animal video was hiding a Sora watermark. So the crazy shit is AI. The mundane shit is AI. You wonder why everyone is calling everything AI now. :P

It seems like a low-level paranoia: now I find myself double-checking that the YouTube video I'm watching isn't some AI slop. Creators use Getty B-roll and increasingly AI-generated stuff so much that it's not a far stretch for the voice and script to be auto-generated too.

I suppose if the AI was able to tell me a true and compelling story, I might not even mind so much. I just don't want to be spoon fed drivel for 15 minutes to find it was all complete made up BS.


Hetzner had the best prices out of any cloud I’ve used. Sad to see that they are raising prices, but was due to happen.

Does well on SVGs outside of "pelican riding on a bicycle" test. Like this prompt:

"create a svg of a unicorn playing xbox"

https://www.svgviewer.dev/s/NeKACuHj

Still needs some tweaks to the final result, but I am guessing that with the ARC-AGI benchmark jumping so much, the model's visual abilities are what allow it to do this well.
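As a quick first-pass sanity check on outputs like this, you can at least verify the model emitted well-formed SVG before bothering to open it in a viewer. A minimal sketch (the helper name is my own; this only checks XML validity and the root tag, not whether the drawing looks right):

```python
import xml.etree.ElementTree as ET

def looks_like_valid_svg(text: str) -> bool:
    """Cheap sanity check for model-generated SVG: well-formed XML
    with an <svg> root element. Says nothing about visual quality."""
    try:
        root = ET.fromstring(text)
    except ET.ParseError:
        return False
    # SVG roots are usually namespace-qualified, e.g.
    # '{http://www.w3.org/2000/svg}svg'; strip the namespace if present.
    tag = root.tag.rsplit("}", 1)[-1]
    return tag == "svg"

sample = '<svg xmlns="http://www.w3.org/2000/svg"><circle cx="5" cy="5" r="4"/></svg>'
print(looks_like_valid_svg(sample))         # True
print(looks_like_valid_svg("<svg><oops>"))  # False: unclosed tags
```

Anything past that (does the unicorn actually have a horn and an Xbox controller?) still comes down to eyeballing it.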


Interesting how it went a bit more 3D with the style of that one compared to the pelican I got.

Animated SVGs are one of the examples in the press release. Which is fine; I just think the weird SVG benchmark is now dead. Gemini has beaten the benchmark, and now the differences just come down to taste.

I don't know if it got these abilities through generalization or if Google gave it a dedicated animated-SVG RL suite that made it improve so much between models.

Regardless, we need a new vibe-check benchmark à la the pelican on a bicycle.


What benchmark, though? There is very clearly a lot of room for improvement in its SVG making capabilities. The fact that it can now, finally, make a pelican on a bike that isn’t completely wrong is not an indicator that SVG generation is now a solved problem.

I'm thinking now that as models get better and better at generating SVGs, there could be a point where we can use them to just make arbitrary UIs and interactive media with raw SVGs in realtime (like flash games).

> there could be a point where we can use them to just make arbitrary UIs and interactive media with raw SVGs

So render ui elements using xml-like code in a web browser? You’re not going to believe me when I tell you this…


You’re not going to believe me when I tell you this, but generating a webpage with HTML is far simpler than generating arbitrary graphics (that look good) with SVGs.

Or quite literally a game where SVG assets are generated on the fly using this model

That's one dimension before another long-term milestone: realtime generation of 3D mesh content during gameplay.

Which is the "left brain" approach vs the "right brain" approach of coming at dynamic videogames from the diffusion model direction which the Gemini Genie thing seems to be about.


Unfortunately it still fails my personal SVG benchmark (educational 2d cross section of the human heart), even after multiple iterations and screenshots feedback. Oh well, back to the (human) drawing board.

Still not usable in production, not even near. But I'm happy to see any progress in this area.

On the other hand, creation of other vector image formats (eg. "create a postscript file showing a walrus brushing its teeth") hasn't improved nearly so much.

Perhaps they're deliberately optimising for SVG generation.


can we move on from SVG to 3D models at some point?

Image to model is already a thing, and it's pretty good.

Currently working on:

https://vimgolf.ai

To show newbies how to use vim. It's not complete yet and has major issues, so if you want to try it, give it a go, but please hold your judgement as not all shortcuts have been added.


I have found GPT 5.3-Codex to do exceedingly well when working with graphics rendering pipelines. They must have better training data or RL approaches than Anthropic, as I have given the same prompt and config to Opus 4.6 and it seems to have added unwanted rendering artifacts. This may be an issue specific to my use case, but I wonder, since OpenAI is partnered with Microsoft, which makes lots of games, whether this is an area they heavily invested in.


That's insane. Deflock is a map of Flock cameras.

The definition of terrorism is:

"the unlawful use of violence and intimidation, especially against civilians, in the pursuit of political aims."

Deflock couldn't be further from that.


While I think the use of the term “terrorist” is unwarranted, I do think deflock is seeking political change. The decision to use flock is a government policy choice, right?


Just the people’s choice, right? They voted for this government policy, right???!? https://www.coloradopolitics.com/2025/10/22/denver-mayor-ext...

>> “I was stunned to learn late yesterday that after convening a task force of local and national experts, Mayor Johnston has been negotiating secretly with the discredited CEO of Flock Safety and signing another unilateral extension of this mass surveillance contract with no public process and no vote from the City Council or input from his own task force,” Councilmember Sarah Parady told The Denver Gazette.


What is the point of this comment? Are you saying that deflock are not terrorists but are terrorist adjacent? Why respond to someone defining terrorism by pointing out that 2 words at the end of the definition also apply to deflock? Do those not apply to basically everyone who participates in their country's society, including literally everyone who votes and all politicians?


Political parties seek political change too, but that doesn't make them terrorists. Deflock isn't trying to intimidate or cause violence to citizens.


If corporations can be people, cameras can be people too! Think of the cameras! /s


Please don't give them ideas like that, even in jest.


I am very curious whether this app is making money, or whether users just use the two generators and then leave. Either way, I am very impressed with your wrapper around the image-gen models.


I can imagine the reverse model could be very profitable with every real estate agent using it to make dreary photos look great.


Reverse model aimed at estate agents already posted in this thread by someone: https://news.ycombinator.com/item?id=46829566


This landing page is a lead-gen tool for the architect at the bottom.


Ahh, I see that. Thanks


This could be the future of film. Instead of prompting where you don't know what the model will produce, you could use fine-grained motion controls to get the shot you are looking for. If you want to adjust the shot after, you could just checkpoint the model there, by taking a screenshot, and rerun. Crazy.


I feel like people are already currently doing this. Essentially storyboarding first.

This guy a month ago for example: https://youtu.be/SGJC4Hnz3m0


Great work! Really respect AI2. They open-source everything: the model, the weights, the training pipeline, inference stack, and corpus.


Interesting. I use Cloudflare Containers and it takes roughly 6-7 seconds to boot up using a very lightweight image.

