Not long ago, a dermatologist with this idea would have had to find a willing and able partner to do a bunch of work -- meaning that most likely it would just remain an idea.
This isn't just for non-tech people either -- I have a decades-long list of ideas I'd like to work on but simply do not have time for. So now I'm cranking up the ol' AI agents and seeing what I can do about it.
I still wish a better name had been coined/had stuck.
It’s hard to take the name “vibe coding” seriously, and maybe that was the whole point, but I feel like AI coding is a bit more serious than the name “vibe coding” implies.
Anyone who disagrees that it should be taken more seriously can surely at least agree that it's likely to cross that threshold in the not-too-distant future, yet we're still going to be stuck with the silly name.
It is the perfect name for an industry that considers "enshittification" a serious term of art.
And I say that knowing it will absolutely rule everything in the future - I'd bet at least half of all Show HNs are vibe-coded apps now. Not long ago tech was seriously talking about monkey JPEGs being the future of global commerce and finance. We've been living in unserious times for a while.
I'd feel better about vibe coding and AI in general if I thought it would lead to more people learning how to do what it enables for themselves, and actually exercise control over their devices and creativity. But as useful as it can be - and I have to concede that much at this point - it requires depending on centralized AI services and isn't much better than proprietary code in terms of defending end user rights. I fear AI driven everything will lead to more closed systems and more corporate commoditization of our data and our lives. Unfortunately from what I've seen not only do many vibe coders not care, they don't want to care and they think anyone who does care is a slope-headed neanderthal.
So yeah, call it what it is. OP's app would have just been a simple web app ten years ago, it's just a quiz, doesn't require any deep coding magic. But no one cares about anything but the vibe anymore.
This has often been tried. SQL, for instance, was specifically designed to feel like natural language and be usable by people with minimal technical background. But it always runs into the same problem: as you expand the capabilities of these scripting languages and get into the nitty-gritty reality of what programming genuinely involves, they always end up being really verbose and awkward languages that are otherwise like any other programming language.
Even worse is the tendency of scripting languages to try to be robust against errors, so you end up with programs that are filled with extremely subtle nuance in things like their syntax parsing, which in many ways makes them substantially more complex than languages with extremely strict syntactic enforcement.
The users are already feeling it, but may have trouble understanding why! The reason strongly typed languages with rigid syntax are easier is because it's much more difficult to accidentally do things like check if 3 is greater than true.
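As a minimal illustration of that last point (a toy example, not from any particular scripting language): a dynamically typed language will happily run a comparison between a number and a boolean, silently coercing the boolean to a number instead of flagging the likely bug.

```python
def passed(score):
    # Intended: compare score against a numeric threshold.
    # A slip that compares against a boolean still runs without error,
    # because Python's bool is a subclass of int (True behaves as 1).
    return score > True

print(3 > True)     # True -- no type error, True is coerced to 1
print(passed(2))    # True -- silently wrong if the threshold was meant to be 3
```

A statically typed language with strict rules rejects the same comparison at compile time (e.g. `3 > true` is a type mismatch in Rust), which is exactly the kind of accident-proofing the comment describes.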
Apple has always been pretty good at this. AppleScript, Automator, Shortcuts. I did all kinds of cool stuff in OSX 10.4 back before I wrote any traditional code.
Before that was HyperCard. It was always amazing to me the types of applications that could be written with HyperCard.
In a similar way, VBA was amazing in MS Office back in the day. If you ever saw someone who was good at Visual Basic in Excel, it’s impressive the amount of work that could get done in Excel by a motivated user who would have been hesitant to call themselves a programmer.
I wrote and sold my first piece of software in HyperCard. It was a pretty lame Choose Your Own Adventure-style game, where you clicked on buttons after reading the text. Seven-year-old me was pretty chuffed to buy some baseball cards with the earnings from his hobby. I really, really miss that world.
Workers are over-specialized, and our business domain models are rigid. We want to streamline and standardize, which often means that code is written in few places.
It would be nice if we could have our cake and eat it too here. With LLMs there are certainly opportunities, if we can find ways to allow both custom scripting and large-scale, domain-constrained logic.
What is the end goal of software? The vast majority of engineers seem to believe the goal is for the software to be perfect, when actually it's to do things like catch cancer early or make money. Do you think a person whose life was saved by software with footguns cares?
They are free to use them for themselves. But using these apps on others can in some cases be life-threatening. And if not, it's still unethical to sell such software when they are literally unable to describe what it can and cannot do.
I believe this captures it well. There are many people that would have previously needed to hire dev shops to get their ideas out and now they can just get them done faster. I believe the impact will be larger in non-tech sectors.
Right. And what a lot of folks here miss is that the prototype was always bad. This process only speeds up the MVP, and gives the idea person a faster way to validate an idea.
Focusing on "but security lol" is a bad take, IMO. Every early attempt is bad at something. Be it security, or scale, or any number of problems. Validating early is good. Giving non-tech people a chance is good. If an idea is worth pursuing, you can always redo it with "experts". But you can't afford experts (hell, you can't even afford amateurs) for every idea you want put into an MVP.
There's a big difference between a "prototype" (or a POC, or a spike, or whatever your company calls it) and an "MVP" (minimum viable product). An insecure product is not viable. A product which cannot be extended or maintained without being almost completely rewritten is not viable.
MVP means just enough engineered code to solve a problem, rough around the edges and lacking features sure, but not built by someone who has literally no idea what they were doing.
Prototypes of physical products are never put into production and sold to consumers. Unfortunately software prototypes "run", and are sold at that point. Then they begin to scale, and the inherent flaws in their design are amplified. The same thing used to happen with MS Access apps; the same thing still happens with "low code" solutions.
The engineers cost just as much after the prototype phase, but if you don't hire them to build your MVP then you never have one.
Yeah, no. Every MVP I've ever seen has been riddled with problems. Hell, even publicly launched projects are a mess most of the time. How many social networks have we had in the past 5 years that were pwned right after launch? I remember at least 4 or 5 very public failures (Firebase tokens, client-side APIs and so on). Those are just the most public ones.
Everyone wants to pretend that the software used to be better, but the reality is that MVPs and sometimes even public launches were always a house of cards.
You are pointing to the same low code/no code prototypes that I am, but you keep calling them MVPs for some reason. There's no "used to be better" here, there is good and bad software full stop.
Why don’t they deserve to see the light of day? Maybe the market gets to decide what “sucks” or doesn’t. More ideas in the marketplace gives users more choice.
Same, I've had ideas rattling around in my brain for years which I've just never executed on, because I'm 'pretty sure' they won't work and it's not been worth the effort
I've been coding professionally for ~20 years now, so it's not that I don't know what to do, it's just a time sink
Now I'm blasting through them with AI and getting them out there just in case
They're a bit crap, but better than not existing at all, you never know
I'm a big fan of barriers to entry and using effort as a filter for good work. This derma app could be so much better if it actually taught laypeople to identify the difference between carcinomas, melanomas and non-cancerous moles instead of just being a fixed loop quiz.
IMO it is better to keep the barriers to entry as low as possible for prototyping. Letting domain experts build what they have in mind themselves, on a shoestring, is a powerful ability.
Most such prototypes get tossed because of a flaw in the idea, not because they lacked professional software help. If something clicks the prototype can get rebuilt properly. Raising the barriers to entry means significantly fewer things get tried. My 2c.
> IMO it is better to keep the barriers to entry as low as possible for prototyping
Not in an industry where prototypes very often get thrown into production because decision makers don't know anything about the value of good tech, security, etc
It's perfectly fine for most MVPs to go into production. Most SaaS software is solved. Prototypes are outsourcing the hard parts around security. The hard part is making a sale and finding the right fit. Spending 4x the cost on a product that never makes a sale is bad economics. This app isn't remotely harmful, so do you care to make an argument for why it shouldn't exist?
Should decision makers be more informed? Yes, of course, but that's not an argument for gatekeeping. We shouldn't be gatekeeping software or the web. Not through licensure or some arbitrary meaning of "effort". That will do nothing but stifle job growth and I'd very much like to keep developers employed.
>They're a bit crap, but better than not existing at all, you never know
I don't agree. I think that by vibe coding my random ideas with LLMs I've actually wasted more time than if I'd done them manually. The vibe code, as you said, is often crap, and often, after I've spent a lot of time on it, I realize there are countless subtle errors that mean it's not actually doing what I intended at all. I've learned nothing and made a pointless app that doesn't do anything but looks like it does.
That's the big allure that has kept the "AI" hype floating. It always seems so dang close to being a magic wand. Then, after time spent reviewing with a critical eye, you realize it has been tricking you, like a janitor just sweeping dirt under the rug.
At this point I've relegated LLMs to advanced find-and-replace and formatted data structuring (take this list, make it into JSON), and that's about it. There are already tools that do everything else LLMs do, and do it better.
I can't count how many times "AI" has taken some logic I want, produced a bunch of complex-looking stuff that takes forever to review, and then I find out it fudged the logic to simply always be true/false when it's not even a boolean problem.
Brother, no one cares. If LLMs made something exist that did not exist previously, they worked. It doesn't matter if you could have done it faster by hand, if doing so would have resulted in the program not existing.
To anyone wondering if there are paid LLM shills on HN, here is proof: a less-than-30-day-old account whose only comment is nonsense praise of LLMs against legitimate criticism.
user: anthonypasq96
created: 22 days ago
karma: 2
Well yeah, actually it is better for them not to exist at all, if they're crap and you're OK with that. Those just serve to pad out your resume for non-technical people. It's not like you're actually learning much if you couldn't be bothered to even remove the crap parts.
My resume has plenty of padding already, and it's not about learning; it's about "maybe this random idea might actually work" and proving out that concept.
Yes I agree - I could probably have worked out how to do it myself but it would have taken weeks and realistically I would never have had the time to finish it.
I've done enough image classification stuff that, nah. If all you care about is high level confirmation with high error rates, sure. But more complex tasks like, "Are these two documents the same?" are much, much harder and the failure modes are subtle.
> I think most experts wouldn't approach this problem as an image classification problem ...
Indeed. It is first and foremost a statistics and net patient outcomes problem.
The image classification bit - to the best of the current algorithms' abilities - is essentially a solved problem (even if it isn't quite that simple), and when better models become available you plug those in instead. There is no innovation there.
The hard part is the rest of it. And without a good grounding in medical ethics and statistics that's going to be very difficult to get right.
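To make the statistics point concrete, here is a hedged sketch of why base rates dominate screening problems. The numbers below are illustrative only, not real dermatology figures: even a classifier with 95% sensitivity and 90% specificity, applied to a population where only 1% of lesions are malignant, produces positives that are wrong most of the time.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule:
    P(disease | positive) = true positives / all positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only: a "good-looking" classifier at 1% prevalence.
print(round(ppv(0.95, 0.90, 0.01), 3))  # ~0.088: over 90% of positives are false alarms
```

That gap between per-image accuracy and net patient outcomes (unnecessary biopsies, anxiety, missed cases) is exactly the part that needs the grounding in statistics and medical ethics.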
I am a "noncoder" because of a number of reasons. My best friend is a "coder" and still starts instructions with "It's easy! Just open the terminal...".
Unfortunately, I do advanced knowledge work, and the tools I need technically often exist...if you're a coder.
Coding is not that accessible. The intermediary mental models and path to experience required to understand a coding task are not available to the average person.
This is not a healthcare app, it’s a health education app. This app will never have PII, or be used for treatment/diagnosis. If it goes down tomorrow it will have zero impact on anyone’s healthcare.
Why? I know tons of coding MDs: a pathologist hacking the original Prince and adding mods, in assembly no less; molecular pathologists organizing their own pipelines and ETLs.
Lots of people like computers but earn a living doing something else
He wasn't saying no coding MDs existed. Just that, generally speaking, most MDs would have had to partner with a technical person, which is true. And is now less true than it was before.