My main issue with low-code/no-code is that it attempts to solve the complexity problem, without understanding what "complexity" is. Code is perceived to be complex, but when you look deeply into what people mean when they say that, it's almost entirely a social perception. They see these weird characters and it looks like gibberish and they assume that the people who understand this gibberish are somehow on another level of intelligence.
Code is really just a formalized expression of what you want. It happens to be used for very complex problems, because it's very good at solving complex problems. This in and of itself does not make code inherently complex.
100%. I worked at a company that went in hard on Agilent VEE for hardware testing in the early 2000s. Absolutely a case where a manager saw a "Hello World"-like demo and was so impressed that they went for 100% full buy-in.
Besides the obvious UI issues (like the fact that you couldn't really zoom out, you could just pan around your code), we had a bunch of engineers that still needed to do things like "get the three largest values from this array" and it just turned into the most ridiculous bubble-sort implementation you've ever seen.
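For contrast, here's a sketch of that same task in ordinary text code (function name invented for illustration): a couple of lines rather than a canvas full of wired-up boxes.

```javascript
// "Get the three largest values from this array" in plain text code,
// rather than a hand-wired visual bubble sort.
function threeLargest(values) {
  // Copy first so the caller's array isn't mutated, sort descending, take three.
  return [...values].sort((a, b) => b - a).slice(0, 3);
}

console.log(threeLargest([4, 17, 9, 2, 23, 11])); // [23, 17, 11]
```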
It's pretty hilarious seeing some of the old screenshots now [1].
Anyways, I think it will always be really easy to sell some simple demos on low-code/no-code, but the second you need something slightly outside the ecosystem it just turns into a substandard mess that doesn't work well with source control (or diffs) and in general is just harder than the code-full solution.
- Logic tends to take up a lot more screen space than real code
- There is no defined way to read the code. In real code you start top left and read left to right and down, but in visual code if there are lots of "paths" then your eyes end up darting around everywhere.
In automotive, development of control algorithms for the engine is entirely “visual”, through Simulink/MATLAB. You design the model in Simulink, then it generates the C code. I don’t have direct experience outside the automotive sector, but I believe that this approach is used in other sectors like aerospace. Maybe designing models is better than writing low level code for control algorithms?
Yes, IIRC some aerospace companies also use Simulink. It's popular for safety-critical stuff.
> Maybe designing models is better than writing low level code for control algorithms?
If you need both the code and the model, then generating the code from the model is also much easier than proving that your handwritten C code behaves exactly like the model used for verification. Having visual control flow for complex control systems might be less error prone than manually writing C (or Rust or Zig or $flavour_of_the_day) code. You probably know that, but other folks shouldn't forget that automotive and aerospace control software often means "programming errors might result in people getting injured or worse".
In power generation control, when electrical engineers are discussing or analyzing the behavior of a governor or exciter, block diagrams representing transfer functions are used. I would program turbine controls primarily in function block diagrams, since it's a great visual representation of the algorithms that makes it easy for someone to understand and observe how they work. I started with computer programming and it was quite an adjustment in my way of thinking to implement algorithms this way, but no question it's the right tool for the job in controls.
I don't have any experience in the automotive sector. But when you say that the development of control algorithms for the engine are visual, I immediately thought finite state machine. My question: Are they finite state machines or is there more to it than that?
Depending on how complex, it would be a hybrid system. You have finite discrete states, but also continuous dynamics that need to be controlled (like a PID)
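A minimal sketch of such a hybrid system, with the discrete states as a tiny state machine and the continuous part as a standard PID update (class name, gains, and the transition rule are all invented for illustration):

```javascript
// Hybrid controller sketch: discrete modes plus continuous PID dynamics.
class HybridController {
  constructor() {
    this.mode = "IDLE"; // discrete state
    this.integral = 0;  // continuous PID state
    this.prevError = 0;
    this.kp = 1.2; this.ki = 0.1; this.kd = 0.05; // illustrative gains
  }
  step(setpoint, measured, dt) {
    // Discrete transition: start regulating once a nonzero target arrives.
    if (this.mode === "IDLE" && setpoint !== 0) this.mode = "REGULATING";
    if (this.mode === "IDLE") return 0;
    // Continuous dynamics: the usual PID terms.
    const error = setpoint - measured;
    this.integral += error * dt;
    const derivative = (error - this.prevError) / dt;
    this.prevError = error;
    return this.kp * error + this.ki * this.integral + this.kd * derivative;
  }
}
```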
Control systems have always been designed in a connected-box fashion, since before GUI software existed. Visual coding is just an extension of what was being done on a blackboard.
The single most important point keeping visual programming from ever being taken seriously in corporate settings is the fact that there is currently no version control system compatible with it (for good reason). You can't collaborate without version control.
I work out of a co-working space. A few months ago I walked past some people fidgeting with Zapier. Some (former) employee had built this crazy complex system of integrations, with connections going back and forth everywhere. It looked very much like this diagram, and I couldn't help but laugh, because it's the same kind of problem that coders face on a daily basis.
Yeah, I think Zapier is one of the few examples, if not the only one, I've come across in my software dev career of a low-code-type tool that devs will reach for before doing stuff by hand.
Great example of where low-code is a better alternative for a very narrow use case of development.
I think this is very right. We have only scratched the surface in terms of creatively experimenting with the IDE UI to give the developer more ways to quickly explore and refactor code. Text in/text out is a serious loss in fidelity from the overall input/output capabilities of a computer.
I think any discussion about no-code/low-code and if it's effective has to take into account Unreal Blueprints. Game development is full of inherently complex problems, but some people are able to pull it off with Blueprints. What's important about that though, is that Blueprints live inside a complex codebase focused on game development, so the graphical/node based toolbox you have is extremely powerful because developers have coded those blocks and tools for you. The majority of the engine is abstracted away.
So it might be that node based programming is a good way to allow non-devs to interact with parts of an otherwise code based solution, inside niches. Examples that come to mind are eCommerce shopping rules. They can be complex, and really arbitrary, and in many ways having a developer implement them is really inefficient. Giving a node based pipeline editor for shopping cart rules would be a boon for power-users and take time off the dev's hands to focus on implementing the surrounding ecommerce solution, something you definitely don't want to be writing in a graphical editor.
I almost put in my original comment that no-code/low-code becomes more useful as it gets more specific to a domain. Game dev is a perfect example - building complex shaders becomes much easier with a visual editing tool. I think of Google Sheets as a highly useful tool in this space. Same for Google Data Studio.
Isn't the domain of most no-code/low-code "business process"? It's some glue between systems to collect or input data. It allows the expert on the business process to directly translate the manual process without involving others.
I think not, since the condition for each business-process state is usually complex, which no-code/low-code won't help much with, and it's usually worse on performance compared to a native implementation.
Low-/no-code environments are really great for exposing domain-specific landscapes to non-coders, but that significantly reduces the hype value of the lingo, which aims to convince buyers that they can replace those pesky non-specific domain problem spaces and coders with capex and a smidge of opex. The value proposition of these products is the same as before because this is a repackaging of fourth-generation languages, and the economics have not changed.
The narrower the domain, the easier it gets to figure out a low-/no-code environment that will work for the users. But the smaller the base of purchasers, too. And distilling the complexity to just the right level still takes some really bright domain experts with extremely good intuition for where the most profitable use cases lie (the hardest role to fill) and highly empathetic coders (also hard to fill) who work well together (extremely rare), or a single person who embodies both (unicorn).
If energy and money were no object, low-/no-code is absolutely A Thing. I believe there is some kind of emergent Shannon–Hartley theorem behavior here, where complexity down a communications network with certain noise/bandwidth/etc. properties has some suspected hard physics-as-we-know-it limits.
Yes, and I have never struggled to execute something complex as I have with no-code tools.
Nocode usually makes the HelloWorld trivial and anything meaningful more challenging than it would be to do in a general purpose language.
This isn't just true of NoCode, it's endemic to young tools. These products are over-optimized toward low friction on-ramps because the only thing that matters is growth in DAUs.
NoCode is just attractive because it propagates the myth that you can do things without programmers, and programmers are expensive. So you've got low friction + perceived lower cost == more users signing up.
Java is somewhat legendary for being rather overly wordy. But a new language that starts off its list of 'why should you use this' by comparing how to write to the console (Java: `System.out.println`, other languages: a simple `puts` or `printf`) always strikes me as silly and turns me off of the language.
Who cares? When I'm writing toy command-line apps, complexity is never going to be the problem, certainly not in the 'what do I print to the terminal' part of the code. And if I'm writing actual software where complexity is actually an issue, the odds that this code has any business writing directly to standard out in the first place are infinitesimal. Your language is optimized for stuff I'm never going to care about. How silly.
However, perhaps it's not _just_ about the short-sightedness of inexperienced devs who get tricked into thinking a language is all that because of optimisations to the 'write some toy stuff' process flow.
Think about it: how would you write a tutorial or 'sales pitch' for a dev environment/programming language _without_ focussing on how easy it is to write Hello World and other toy projects?
How would you highlight that the namespacing system seems like overkill for an app whose entire codebase is half a page of text in a large font, but is actually really good at making an easily navigable codebase once we hit the 5 person-years level? I really have no idea how you would show it. You could talk about it and pray that the reader is familiar enough with the challenges of codebases that have grown beyond 'one person tinkering around for a weekend', but almost by definition I don't think you can boil that down into a tutorial or pitch you can consume in 5 hours, let alone 10 minutes.
Still, languages could do more. I think most languages are just kinda bad at natively supporting the idea of DAG-based modular isolation: As code bases grow larger you want to be able to strictly enforce the ability to take a much smaller chunk of that base, draw a circle around it, and say: This can be understood on its own, it is dependent only on these things, and only these parts are 'public API', 'public' here meaning: accessible to other circled-off parts of the entire code base.
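A plain-JavaScript sketch of that circle-drawing idea (module and names invented): the closure keeps the internals unreachable, and the returned object is the only 'public API' other code can depend on. Language-level enforcement of the same boundary is what things like Java's module system or Rust's `pub(crate)` provide.

```javascript
// A "circled-off" module: internals are private, only the returned
// object is the public API.
const inventory = (() => {
  // Internal state and helpers: not reachable from outside the circle.
  const stock = new Map();
  function assertKnown(sku) {
    if (!stock.has(sku)) throw new Error(`unknown SKU: ${sku}`);
  }
  // Public API: the only names other modules may use.
  return {
    add(sku, qty) { stock.set(sku, (stock.get(sku) ?? 0) + qty); },
    count(sku) { assertKnown(sku); return stock.get(sku); },
  };
})();
```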
Maybe because it demos badly. And that's a real shame, because it's many orders of magnitude more important than 'You can just write `puts` to echo to the terminal, and there is no need to declare a method for the main entrypoint!'
If I made a tool that was good for large codebases, I'd demo it by making a video ostensibly demonstrating someone unfamiliar with a large codebase reading a simple JIRA ticket for the first time and then using the IDE/language/toolset to explore the codebase and figure out what actually needs to be done, the whole time narrating and pointing out each feature of the language, tool, IDE, etc that makes this experience quicker, easier, better.
It could even be a relatively fake example and I think it would still land if your audience has had experience exploring large codebases
> It could even be a relatively fake example and I think it would still land if your audience has had experience exploring large codebases
The problem is this can get perverted quickly.
In most extreme cases, the 'example' is literally only useful for hello-world style things, and anything more complex requires digging deep into docs to understand (I can think of at least one .NET library where this 'blogger-friendly' API structure exists...) And then as alluded to, every example shows this simple case that doesn't help anyone know how to configure things in ways that are testable/etc.
This has been my universal experience with graphical code generation. Simple tasks are super simple, and easy to change, all good there, but the second you try to take that and grow it to do something you actually care about, it actually increases the complexity, as well as lengthening your development loop.
I don't think low-code as a "generalized" solution will ever gain traction with developers, but has a future as small, focused tools to solve specific dev problems.
To expand on this, you will have to house the complexity somewhere. For low/no code tools, it's swept under the rug, or rather, under layers of abstraction.
I don't think it's a good idea (in a disruptive number of cases) to deviate from the simplicity of straightforward code.
We are growing closer and eventually will realize as a society that programming doesn't have to be hard or scary. I'm sure that the futuristic lay-person will have a better basal understanding of technology and how it's made.
> I'm sure that the futuristic lay-person will have a better basal understanding of technology and how it's made.
I feel like we actively discourage people from having an understanding of technology for fear that they will understand it and complain about it and/or be in a better position to judge the quality of alternatives to our products. As a result, it seems like more people understood basic computer operation and architecture 30 years ago than they do today, but maybe I'm in a weird bubble. I'd think with the opportunities for the far more opaque operations and algorithms that deep learning has given us, people will understand even less about computers in the future.
University professors in recent years have been finding themselves shocked that students don't understand how file systems work[1]. Additionally I recall hearing a teacher say that they were trying to teach something in a computer lab, and none of the students knew how to work a windowing system or multitask, and they only operated by having every program maximized and using one at a time, likely because it's the computing paradigm they grew up on, using smartphones and tablets.
I think the idiot-proofing of modern computing is only making people less aware of how computers work. Think of how many 20-something programmers got into programming from learning how to make Minecraft mods. Or even just learning to install a mod in the old Java version of Minecraft or any other PC game, where you had to dig into weirdly-named folders, copy-paste files into various directories, maybe modify a data file here and there. Kids growing up on the iPhone version of Minecraft are never being exposed to the basics of how their device works.
I say it like I'm sure, but honestly I worry about this. It's possible we achieved such a technological peak that future generations won't be able to understand the basics; only the simplifications, leading to the premise of Idiocracy.
That being said, we've always built knowledge on top of the backs of others and never really had to deal with supporting its weight without necessarily knowing the basics. It would really make for an interesting case study if it led to the collapse of society.
> It's possible we achieved such a technological peak that future generations won't be able to understand the basics
I don't think it's this, and I don't think programming is so hard (it's the business logic that is hard when you have to specify it exactingly, to reference the thread.) I think that the manufacturers of the various computers we use make it unbelievably difficult and scary to touch anything, and cast quite a bit of suspicion on you for even wanting to change anything.
Exactly! I was going to comment on the same phenomenon.
We now have a "digital native" generation coming of age for whom the only thing they have known is a ubiquitous internet. We might expect that they would be more intimately familiar with the workings of the technology that has surrounded them than we could ever be.
Yet the opposite seems true, and I think you've put your finger on a primary reason — they've only interacted with devices that are basically appliances — you turn it on and use the app, but it is very difficult to get inside either the hardware or software. So, it just becomes another box that either works or doesn't.
It is not dissimilar to the generations of people who grew up with automobiles. The more "user friendly" the technology gets, the fewer people even understand what goes on 'under the hood'. If you don't need to understand how to change a tire, change the oil, or check the valve lash, because these are rare events, few people even have a clue how to do them, unless someone took a special effort to teach them and they took a special effort to learn.
You saw the same with cars. Back in, say, 1950, most car owners knew how to do basic maintenance and repairs, both because cars were mechanically simpler and because they were less reliable. With higher reliability and more complex cars, that is no longer the case. Now the same thing is happening with computers.
Totally agree. I've been working on a programming language for UI designers, but it's been years and I doubt I'll ever release it at this point. The idea was to offer a syntax that conforms to their vocabulary and mental models. So if you were designing a button:
    elements
      shape Wrapper
      text Label

    style Wrapper
      fill: blue

    style Label
      content: Click Me
      fill: white
edit: no harm in sharing the demo page that I never released. Warning - it's very broken. But it kinda works: https://matry.design/demo
Yeah, this is the reality. I'm not going to name the tool but I went to a training session for one of these things recently and it pretty quickly devolved into programming: looping, branching, conditionals, etc. Except rather than raw text this was rendered as (basically) a set of bubbles containing icons and text, and with arrows between them. It was even vertically oriented the way code is.
I actually pointed out in the training that this is still coding - at least at a scripting level - with all the pitfalls that involves. Nobody there really understood, and some actively tried to deny this point of view (I thought that was a bit weird and culty, but there you go).
One problem is that if you, as J Random HR Employee, implement something with a bug in it (which you will do at some point), you may not be well equipped to diagnose and fix the issue. It's not that hard, not at this level, but the training sort of glossed over that.
And then I asked a question about the fact that all these services that were being integrated together were distributed, so what happens when something fails part way through a flow? What happens with consistency across the different systems you're integrating? Well... it depends on the adapter for each specific integration, but in most cases you're supposed to implement failure handling yourself.
In fairness they showed us an example with failure handling and enforcement of consistency for something that integrated perhaps half a dozen systems together... and it looked pretty complex, just as you'd expect it to.
And that's where I think these things fall down. You can build a happy path flow pretty quickly - in fact I suspect a lot of non-programmers could do that - but it's what happens when things go wrong where it gets really gnarly.
Basically, if something goes wrong either IT or DevOps are getting a call, or somebody in a business, HR, or finance function is manually logging into a bunch of systems to update them and bring them into a consistent state.
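What "implement failure handling yourself" tends to mean in practice is something like the saga pattern: run the integration steps in order, and if one fails, compensate the ones that already succeeded. A hedged sketch (function and step shapes invented):

```javascript
// Run integration steps in order; on failure, undo the completed ones in
// reverse to bring the integrated systems back to a consistent state.
async function runFlow(steps) {
  const completed = [];
  try {
    for (const step of steps) {
      await step.run();
      completed.push(step);
    }
    return { ok: true };
  } catch (err) {
    for (const step of completed.reverse()) {
      await step.compensate(); // e.g. cancel the CRM record just created
    }
    return { ok: false, error: err.message };
  }
}
```

Even this toy version glosses over the hard parts (what if a compensation itself fails?), which is exactly why the real examples look so complex.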
I don't think this is terrible necessarily, but I do think the capabilities of these low/no code solutions - particularly with respect to their use by non-techies - are way oversold.
Automobile engines are also very complex, but no one talks about no-mechanic cars. In high school where I grew up, the bottom academic 50% of boys ended up becoming mechanics of one type or another. At one point in time, being an auto mechanic was an elite, rare profession, and that complexity was eventually encapsulated into something that low-performing academic students could bank on for a career.
I see the exact same thing with code: the complexity is eventually abstracted away until it becomes a vocation that doesn't require a college degree (think mechanical engineer vs mechanic, computer scientist vs programmer).
I see the low-code stuff as an opportunity to let the business folks handle the use cases where complexity is low and rapid iteration with deep domain knowledge is more valuable.
Also, they might get a better understanding of why the code stuff might make sense when stuff is actually getting complicated :)
I can imagine a few instances where giving a node editor to business would have been a far better solution than having us re-implement things in code continuously. That said, that was always in the context of a larger, domain-specific piece of software. Where I think low-code falls flat is in trying to be a general-purpose programming tool: for someone to do their domain-specific tasks, learning how to make a general-purpose program from scratch is a big ask. I think low-code editors should focus on that portion of the user's journey. Getting data in, data storage, manipulation, image processing, reaching out to APIs: all that stuff needs to be as easy as possible.
Low code isn't about tinkering under the hood. It's about going places.
Most cars have automatic transmissions now. Cars don't have manual chokes anymore. They're reliable enough now that most people don't need to know anything about repair, engine or otherwise, whereas that used to be widespread knowledge.
Yes, I agree. I thought that was obvious in my response. Modern cars are "low code", so mechanics don't need advanced degrees to fix them. Also: "the complexity is eventually abstracted away", which is exactly what you are saying.
Complexity is definitely the problem. I think the main problem with most LC/NC solutions is that they offer TOO MUCH customizability. I think there will be more inroads when they become a little more opinionated. People always think they want all these features and weird use cases, but they forget about any kind of 80/20 or 90/10 rule.
The hard part of my job is rarely the programming itself. The hardest part is figuring out what to build, in enough detail for the blinking computer to understand it.
Fred Brooks's classic essay "No Silver Bullet" (collected in later editions of The Mythical Man-Month) makes a crucial distinction between "accidental" and "essential" complexity. Accidental complexity is stuff that is in your solution but is not necessary. For example, you added a layer of abstraction that ultimately isn't needed and whose benefits don't outweigh its costs. Essential complexity is intrinsic to the problem itself. For example, if you write software to help you file your taxes, there is a certain fundamental amount of complexity that no amount of clever code can get you around, because the problem itself is complicated.
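A toy illustration of the distinction (an invented example, not from the book): the tax rule itself is essential; the indirection wrapped around it is accidental.

```javascript
// Accidental complexity: a factory and strategy class that buy nothing here.
class TaxStrategyFactory {
  static create() {
    return new FlatTaxStrategy();
  }
}
class FlatTaxStrategy {
  compute(income) {
    return income * 0.2;
  }
}

// Essential complexity: the rule the problem actually demands, stated directly.
// (In real tax software, it's *this* part that balloons, and no UI removes it.)
const flatTax = (income) => income * 0.2;

console.log(TaxStrategyFactory.create().compute(100)); // 20
console.log(flatTax(100)); // 20
```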
One of the core errors of the most breathless low-/no-code advocacy is that they clearly see that code as it stands today is very complicated, but they think it's all accidental complexity. If we just did the right things... if we just made it easy enough... if we just had the right visual user interface... all this complexity could go away! And then anyone could code! And lo, Utopia would emerge.
There is a grain of truth in this. Code does have accidental and unnecessary complexity. Some traditions and codebases have more, some have less. The low-/no-code approaches can help with this... though I have to qualify it a bit because at times they add their own accidental complexity to problems that traditional approaches don't. But they can be easier sometimes, certainly.
The problem is that even a hypothetically perfect low-/no-code solution with absolutely zero accidental complexity still wouldn't do anything to eliminate the essential complexity of the problems people want solved. I don't care how amazing your no-code UI is. I don't care how visual it is, and how you can grab any 12-year-old off the street and show them your UI and they can grasp everything that's going on. I don't care how much work you put into it. If someone tries to use it to create a tax preparation service, they are going to ram face first into the brick wall of the essential complexity of the problem.
No matter what you do, there are going to be people in the future who have developed a skill set in dealing with that sort of thing, and it isn't going to be a skillset everyone develops to the same degree. Even if you stipulate a domain expert who knows literally everything about the tax code, it will still be a distinct skill to learn how to explain that correctly to a computer, even a zero-accidental-complexity computer.
It is further inevitable that the people dedicated to that will have their own toolset that tunes the expertise and power level to the fact that they are able to pour more time into developing skills that have a longer-term payoff than people who are "programming" for six hours a year.
It is literally impossible for low-/no-code to take over. Even if I pushed a magic button that replaced everything in the world with low-/no-code solutions right now, the world would still bifurcate into the people who further develop their skills with the system, learn how to use it efficiently and effectively, and people who do other things like build houses, provide clean water, etc. The only way to make it so there is no such thing as a "programmer" is to forcibly cap the capability of the universal low-/no-code system so low that there is no way to become better at it, which is just inconceivable.
Everything you said would happen, I saw happen. I literally witnessed this.
I started my career at a company that sold a graphical programming product. It's exactly the typical "no code" thing described elsewhere. Nodes and edges to implement Ifs, Loops, actions, subroutine calls, etc.
We sold this to customers but also had an in house professional services type department that used it. Those folks were indeed at a whole nother level with that tool and knew all kinds of tricks and had developed special scripts to transform the XML formatted files that the "programs" were saved in, etc. They had developed a long slew of best practices to try to tame some of the problems of the tool.
Then someone added a "execute arbitrary javascript" action and it was open season on everything...
I think when I left someone was working on a linter!
I venture the opinion that the overwhelming majority of the complexity in most current software is accidental. It's the kind of share you measure in 9s, not in raw percentage.
Some stack that avoids all that complexity would be technically "low-code". But the format of the low-code stacks people push around today is incompatible with general development, and doesn't actually make most of the complexity go away.
Perfect way to say it, yes. It's a mirage in the desert that we'll be chasing forever, because to the business stakeholder's eyes, the water is just right there over that hill.
This is the issue. I've worked a lot with low and no code tools, and what happens is the complexity becomes absolutely ridiculous to do very simple operations.
A perfect analogy is doing math exercises where, instead of operators, you use written English prose to describe what you are doing.
Much less intimidating, but mind-blowingly verbose.
I basically agree that most complexity lies in correctly formulating ideas -- though depending on language, there is a great deal of time and effort wasted on the overhead of expressing those ideas.
For example, Javascript's stdlib has long lacked tools for simple data transformations (group array items by a function, transform values of an object by a function, etc). These operations can still be accomplished, but only by writing bug-prone manual transformations or using third-party packages like Lodash. There is no standard idiom for expressing these ideas, which causes significant overhead.
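The kind of boilerplate being described, sketched as a hand-rolled `groupBy` (the helper Lodash ships as `_.groupBy`; a native `Object.groupBy` has only recently been standardized):

```javascript
// Grouping array items by a key function: a common transformation with no
// long-standing stdlib idiom, so everyone rewrites this reduce by hand.
function groupBy(items, keyFn) {
  return items.reduce((groups, item) => {
    const key = keyFn(item);
    (groups[key] ??= []).push(item); // create the bucket on first sight
    return groups;
  }, {});
}

console.log(groupBy([1, 2, 3, 4, 5], (n) => (n % 2 ? "odd" : "even")));
// { odd: [ 1, 3, 5 ], even: [ 2, 4 ] }
```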
No-code/low-code, in contrast to general purpose languages, is savagely optimized for idiomatic expression of a small set of ideas, at the expense of the ability to express any arbitrary idea.
The problem is the difficulty of developing the business requirement. I have to plead with product owners to give me more than a few sentences about what it is they want built. It makes no difference what tool we use to express requirements if we cannot get a firm grasp of the requirements themselves.
This is actually the strongest case in favor of "no code" tools IMO. Get the PM to spend time actually modeling what they want, as close as they can get, in no-code tools. They get to bash their head against the wall for a while until they give up and bring in engineering. But now (a) the PM actually has some intuition around how complex things are, and (b) their no-code solution is probably a better approximation of a spec than they would have written.
Much the same experience using Excel - people describe a problem in English with a lot of glossing over of details, but asking them to mock something up in a spreadsheet brings home some of the complexities.
If you work for a company where software is not a core function the explanation you posted will sound like an alarming number of dollars to your executives.
The problem these days is everyone wants great software (big tech impostor syndrome) without splurging on software talent like big tech. That’s the sweet spot low code companies target when marketing.
Graphical coding is bad in general. It’s not just paid products. I’ve had the misfortune of using Apache NiFi - which is “low code data movement”.
> They see these weird characters and it looks like gibberish and they assume that the people who understand this gibberish are somehow on another level of intelligence.
Honestly: if you come to such a (dubious) conclusion, the conclusion is actually likely true (with respect to you), because if you were smarter, you would sooner or later spot the mistake in this line of reasoning.
I myself had that perception before I learned how to code. I even still have that perception when it comes to math. Every single time I hear mathematicians talk on HN I feel like I hear a whole new set of words I've never heard before, and looking at math diagrams makes my brain hurt. It makes you feel stupid, but it's just an illusion.
If you want to become smarter in math: plan to understand the work of some Fields Medalists of your choice. If available, get a modern textbook treatment of their work (these are, in most cases, more understandable than the original papers). Of course, you won't understand much at the beginning, so you'll discover which areas of mathematics you have deep knowledge deficits in. Then get some good textbooks on those topics to fill the deficits. Iterate.
-
If you want to become smarter in physics (the following is advice from a good friend of mine whom I really trust on this): start with the 10 books of the "Course of Theoretical Physics" by Lev Landau and Evgeny Lifshitz.
Having read these books should (according to him) give you at least some very basic foundation of physics on which you can then, as a next step, build by reading much more advanced textbooks.
> My main issue with low-code/no-code is that it attempts to solve the complexity problem, without understanding what "complexity" is.
I get your point. But without low/no-code tools, a lot of simple workflows would have to be implemented in code. These use cases, where the technology side is simple, are a good fit for low/no-code platforms IMO.
The magic will always be in deciding where to stop using low/no-code and start using actual code. Domain-specific and opinionated low/no-code tools with clear boundaries on what the tool can and can't do would be a good thing, but the markets would be small, and that's a bad thing.
I've seen teams spend a lot of time in low/no-code tools and either it grows more complex than actual code or they resort to the escape hatch (node that executes user defined code) and the visual tool basically becomes a container for actual code.
Also, the dream usually is to put the effectiveness of software developers into the hands of people who are not software developers. However, it never seems to fail that the low/no-code work ends up back in the lap of software developers because of the typical product/project delivery lifecycle.
> I've seen teams spend a lot of time in low/no-code tools and either it grows more complex than actual code or they resort to the escape hatch (node that executes user defined code) and the visual tool basically becomes a container for actual code.
The ideal state alluded to is that dev teams write modules that cleanly 'plug in' to the flowchart mess, but the reality is that a dev team that can write code at that level is rare (in a modern world, it implies at least an abstract/subconscious understanding of mid-to-advanced FP-ish concepts, including merging procedural things like calling the bizarre webservice you're probably integrating with, while striving for idempotency based on the provided args/context).
At that point, said devs likely are of a skillset they could build a framework for lower TCO, or at the very least are productive enough that they aren't the problem.
Agreed. The cynic in me hears business execs saying “we want non-programmers to be as effective as programmers”, and what I really hear is “we don’t pay these people nearly as much, can we get them to do it instead?”
Reminds me (once again) of the famous Salesforce marketing campaign, "The End of Software".
What they meant was "the end of on-prem software and its installation headaches" because the software was in the cloud - which was innovative at the time.
What appeals to people in "The End of Code" is the end of being forced to use a superficially illegible formal language.
IMHO there's no such thing as "no code" or "low code". It's "bad code". Or less catchy programming languages that usually lack the tools and features that widely used programming languages do.
It just comes down to abstraction and associated tradeoffs. Low-code/no-code fundamentally isn't all that different from the difference between using Python vs a lower-level language.
At the same time, on a deeper level there are things that can be hard to deal with like asynchrony, performance, etc. These are not social but inherent in computing architecture etc.
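A minimal sketch of that inherent (not social) complexity, using nothing beyond Python's standard library: two cooperative async tasks incrementing shared state, where the interleaving is decided by the event loop rather than the order the code is written in. The names and counts here are illustrative, not from any particular tool.

```python
import asyncio

# Shared state updated by two concurrent tasks. The read/yield/write
# pattern below deliberately exposes a "lost update": a task may
# overwrite an increment made by the other while it was suspended.
counter = 0

async def bump(n: int) -> None:
    global counter
    for _ in range(n):
        current = counter
        await asyncio.sleep(0)   # yield control: the other task may run here
        counter = current + 1    # write back a possibly-stale value

async def main() -> None:
    # Run both tasks concurrently on the same event loop.
    await asyncio.gather(bump(100), bump(100))

asyncio.run(main())
print(counter)  # typically well under 200 because of lost updates
```

No visual node graph makes this hazard go away; asynchrony has to be reasoned about regardless of how the program is drawn.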
Formalized expression is detail, and while that detail may not be complex for a programmer, it can be completely meaningless to a user. I agree with you in part, but I think it's worth recognizing that if we all had this attitude, then we wouldn't be building business applications / consumer apps for non-developers.
Whether you think it's complex or not, we're hiding information about procedural tasks from users in the products we build, because that information IS complex to someone who is not a programmer.
As someone who got their start in tech with a low-code environment (ServiceNow reporting) I have found the true value of low code is the ability for business/ops teams to create tools that serve their needs without waiting on a team of "real" developers to make time for them.
One of the biggest benefits is the sense of excitement this creates for these users as they are able to add the logic of programming into a process that was formerly a manual one. When they do reach their "edge" around low-code, they can then engage development teams with better knowledge about the system and a clearer vision for what they need.
As other comments have said, low-code will always have trouble solving special cases due to its very nature of being simple and interchangeable. However, empowering others to solve these low-hanging-fruit problems liberates the developers from a backlog full of basic functionality and allows them to focus on the big problems that will require more robust tooling and design.
> the true value of low code is the ability for business/ops teams to create tools that serve their needs without waiting on a team of "real" developers to make time for them.
I agree with this.
There is a hidden market that exists between "Developers" and "End Users".
With Pure Data, I found this in the games and interactive audio industry, where much of the procedural sound we made was done by "artists" essentially. Rapid prototyping using visual languages gave "good enough" code that could be exported to C/C++ later for embedding.
Again I saw that with LabView in industrial modelling, where engineers who are not at all "expert" coders could work within their domain of expertise and spit out code (also through tools like Matlab/Octave and NetLogo) as basically a very advanced (demonstrably working) "requirements spec" to any developer who wanted to take it further.
The win comes when you realise the lifetime of many rapid prototypes is good enough, and real developers are expensive enough, that the no-code "mock-up" is the actual product.
It happened with a job I did for the British Foreign and Commonwealth Office, when I showed the POC that took a few hours to knock together and they said, that's good enough, just put that demo into production as is. The scope was only a 7 day campaign.
I see no-code as a peoples' config language that should basically replace the interface of devices like Android. Kids as young as 6 can understand stick-and-box dataflow diagrams. When "the diagram is the program", configuring things like privacy settings or app preferences might better be done in this domain.
That doesn't make developers who use real languages obsolete.
> As someone who got their start in tech with a low-code environment (ServiceNow reporting) I have found the true value of low code is the ability for business/ops teams to create tools that serve their needs without waiting on a team of "real" developers to make time for them.
This is because a lot of so-called "real developers" nowadays think their job is to keep up with the latest fads and find ways of chopping up the business needs so they fit better into the popular framework of the day.
Businesses quickly grow tired of hearing "that's not possible" when what the developer actually means is "what you want to do is against the architecture of the framework I've decided we must use". If the business insists, the developer spends a lot of time fighting the framework.
Now, from a job market perspective, it is entirely understandable that the developer prefers to do RDD - resume driven development.
But it also makes it fully understandable that the business people want to find ways of solving their needs that do not involve waiting on the "real developers". Hence the Excel and Access monstrosities present in every org.
I 100% agree on the savings of real developer time when end users can self service.
At the same time, I've also seen users do incredibly crazy things because either they didn't know any better or just didn't bother to "read the code". Good example of this is people just keep creating Statuses in JIRA until you end up with "Which of these 264 'Completed' statuses is the one I want?". It's similar to the Ops person "Hey, I wrote a Perl script that does what I want. Yay!" that turns out to be a spaghetti ball of copy/pasta.
This is maybe less of a point about programming and more about governance. Either way, you will still need specialized people who make sure that everything is being done in some kind of guardrails.
> At the same time, I've also seen users do incredibly crazy things because either they didn't know any better or just didn't bother to "read the code". Good example of this is people just keep creating Statuses in JIRA until you end up with "Which of these 264 'Completed' statuses is the one I want?".
Why is this a problem? Programs are a means to an end. The 264 Jira statuses are messy and could be done away with, but would a clean solution actually change anything for the better? In a significant, "it was worth spending the money on a real developer" way?
> Why is this a problem? Programs are a means to an end. The 264 Jira statuses are messy and could be done away with, but would a clean solution actually change anything for the better? In a significant, "it was worth spending the money on a real developer" way?
"Let's not waste money on a programmer so instead everyone in the company wastes millions of man-hours every month slowly filling things the wrong way!"
There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough.
I think no code/low code solutions certainly have a place, but the way they're currently designed is not necessarily helpful (I've worked on replacing integromat and automate.io solutions in the past). Those solutions are fine, for the most part, but their main flaw is that in order to get anything done you have to integrate them with other tools/software, which means that you're limited by what parts of the API the developers exposed to the no-code layer. That, and the abstractions they produce on the UI end can be confusing and difficult to navigate. That's less their fault and more an outcome of being a shiny new tool trying to do everything, but it results in a solution that doesn't really work for the people who would go for a low-code solution in the first place.
A good model in my mind is Excel. The amazing part of Excel is that it can be used as a no-code tool, a low-code tool, and extended with code macros. Even self-described non-coders who would otherwise be intimidated often use it to extreme productivity. It's extremely visual, straightforward, and self-contained for the most part. The fact that it's so well documented is also hugely important. The no-code solutions I've seen and worked with are usually fairly lacking in this regard, but Excel has fairly clear documentation, the famous F1 key, and a support line[^1], which makes it a more default choice. The only problem is that integrating it with your other tools relies on either 3rd-party plugins, a coder, or integrating specifically with other MS software.
[^1]: I had a professor state in a comparison between MATLAB and Python that the reason to choose MATLAB was that they had a customer support line. This wasn't a CS professor, so you shouldn't underestimate how much value is placed on talking to a human rather than RTFM by non-coders.
Can't believe I had to scroll this far down to see the first mention of Excel. I'm an Excel wiz turned junior dev. My biggest limitation usually was integrating with other software/systems, but power query actually covered a lot of use cases. Excel dominates that liminal space between software engineers and end users
Excel wizards who consider themselves non-programmers, end up with what I would consider programming skills.
And this is the rub with low/no-code solutions. To use them effectively, you have to have some programming skills already.
They are perfect for lazy programmers, but less so for "Citizen Developers". At least, I haven't come across any I was really impressed with as a "Citizen Developer" platform. I would happily take references to any I should check out that might succeed at enabling non-coders to build production solutions.
Honestly, I might be wrong, but in my experience the target consumers for no-code/low-code aren't the people completely intimidated by computers or navigating software in the first place. However, someone who wants to build a tool for themselves or someone else is not necessarily experienced enough to build it using code, and these seem to me to be the ones who are more likely to turn to a low-code solution. Using software someone else wrote, that has a UI and operates in a familiar way, is a lot easier for many people than writing code from scratch.
If you were trying to write a simple Python program, you'd need to know how to run it in the first place. This means you'd need to install Python (if you're on Windows then you need to add it to PATH, which is an entirely new issue). Then to edit the program you need an editor (thankfully Jupyter has made this a lot more trivial). If you want to do anything outside the standard library, you need to learn to use pip (hell, you need to know what is considered standard and what's outside that scope in the first place). Coding, to me, seems to be more than writing logic; it's also experience with programming environments. Low-code solutions seem to be just a lower on-ramp. You still have the logic, but everything is self-contained so you don't need the environmental experience. You're therefore limited by whatever is provided in that self-contained environment.
So while I may not have met enough people to really say, a lot more people can "code" than can write programs in the sense that they fundamentally can abstract the necessary logic, but are intimidated by the unfamiliar work flow. Citizen developers can be born from this demographic of people, but the problem is that the tools available are often too limited to stick with and don't allow for production solutions.
I'm not sure we're talking about different things.
> in my experience the target consumers for no-code/low-code aren't the people completely intimidated by computers or navigating software in the first place.
These tools are definitely not for them, I agree.
> Low-code solutions seem to be just a lower on-ramp.
Agree.
> a lot more people can "code" than can write programs in the sense that they fundamentally can abstract the necessary logic, but are intimidated by the unfamiliar work flow.
Not quite sure what you're getting at here. But - the way people talk about "Citizen Developers" is, it's the people who know the business logic really well, they know their tools really well also. Spreadsheets, web forms, even some databases perhaps. But not "coders" per se.
So you start with a person like this. They want to automate something using a no-code solution. Sooner or later, they're going to have to parse some input file, hopefully something structured like JSON or XML. They have to learn about parsing such files, which is by any measure a programming skill, even if they're using a no-code tool for it. They need to learn the principles of it to make it work for them. And thus:
> Citizen developers can be born from this demographic of people
At that point you're training a developer.
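The parsing step mentioned above is a good illustration: even with a no-code tool driving it, the person has to think in records, fields, and filters, which are programming ideas. A minimal Python sketch, with entirely made-up data and field names:

```python
import json

# Hypothetical input a "citizen developer" might need to process:
# a JSON export of orders, from which they want the open-order total.
orders_json = """
[
  {"id": 1, "status": "open",   "total": 40.0},
  {"id": 2, "status": "closed", "total": 15.5},
  {"id": 3, "status": "open",   "total":  9.5}
]
"""

# Parse the structured text into Python objects, then filter and sum.
orders = json.loads(orders_json)
open_total = sum(o["total"] for o in orders if o["status"] == "open")
print(open_total)  # → 49.5
```

A visual tool would render the filter and the sum as boxes instead of syntax, but the user still has to understand exactly these concepts to wire them up correctly.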
> the problem is that the tools available are often too limited to stick with and don't allow for production solutions.
This is also my experience. But if we imagine evolving all those tools out there to be production reliable, and flexible enough for whatever task... You still have to take on the journey of training a non-programmer to be a programmer of some stripe. You don't end up with the dream of non-programmers intuitively building production apps out of thin air.
Visual Basic I believe was born from the same idea, of making more intuitive tools so people could focus on the business logic, and less the fiddly bits of coding. But make no mistake, Visual Basic produced programmers, and not business people who built apps.
I think we're basically saying the same thing, but I suppose the confusion is mostly semantics.
>At that point you're training a developer.
I don't think this is necessarily the case, or even if it is, I see this as not a problem and even a perfectly acceptable situation. The goal overall is simply to take someone who wasn't originally hired to code, or someone w/ institutional knowledge but no traditional background in programming and empower/enable them to build production applications or tools. Whether in doing so they learn to write code or not is not really an issue one way or the other.
>They are perfect for lazy programmers, but less so for "Citizen Developers".
This was in your original comment and to what I was mostly responding. It seems to me that low-code solutions like Excel do indeed enable people who don't consider themselves programmers to build tools. To a certain extent no-code/low-code solutions like Squarespace do a similar thing, where someone without ever knowing what HTML or CSS is can still build a website, but obviously someone with additional experience would be able to make even better use of the tool. Basically my issue is that I think low-code solutions can actually be pretty decent and useful, but there are a lot of bad solutions out there and I obviously don't see them displacing coders. However, I wouldn't go as far to say that you need to train someone to program in order to make use of low-code solutions.
100%. The key thing is finding the right abstractions, where the tool does enough to provide real value but slots neatly into your existing workflow without requiring a bunch more work to actually integrate.
But we will continue to see domain-specific tooling that require less "coding". There are plenty of "no/low code" CRUD app builders, tools that manipulate components on a canvas to create web pages, even tools to integrate different systems together. But the second you need to go off the trail, you've now got a big problem: you're constrained to doing things that don't break the no/low code environment, and that's a lot harder than rolling your own components and customizations.
Anything interesting enough to "disrupt" tech development will, almost by definition, not be possible in a no/low code environment.
It's robust and perfectly fine, you can even build full games without any coding at all.
Is it useful? Yes, but it would never replace "real" coding. Yet I'm excited to see any advancement on the field because it still feels like "magic" to some extent.
Unreal Blueprints isn't low code, it's just code, and it doesn't need to replace "real" coding because it is real coding.
It does raise the question where the line is. Something that I'd consider a low code / no code platform is webflow. My non programmer CEO used it to create our marketing website, and it solved something for us that normally is a very highly technical problem. He had a couple small visual glitches, he asked me to help and I discovered that it actually was a pretty thin abstraction layer over some good practice CSS, and so the glitches were easily fixed by me thanks to that.
Blueprint makes a programming language easy enough to use that a non experienced programmer might have a go at it, but in the end it's really just the outstanding library that makes it so powerful, you still need programming skills to solve programming problems with it.
They are already. I consult for a lot of hyper-early-stage YC startups as they formulate their plans and I've had two separate clients in the past 6 months go the no-code route to build their MVP. At first I was against it, but having seen these clients go through it I would say it is much cheaper and less risky and gives developers an extremely good blueprint for when the founders inevitably decide to go beyond the MVP stage and build their own codebase. No-code platforms are a great way for non-technical (and even technical!) founders to do that hard product work of figuring out what they actually want their app to do and how each screen should work. Things like figma let you do this too, but I find founders will often make something in figma with logic holes or missing flows if they aren't thinking like an engineer. The no-code platforms force you to make these decisions because you'll notice the holes the moment you go to use the app instead of noticing them when your developers go to implement something that doesn't make sense. Much cheaper way to do early iteration.
It depends on the application. In this case both clients were using a no-code system used by big fintech players like Goldman Sachs that costs $150k for a one year license.
I do a fair amount of pre-seed and angel investing and I've seen a _massive_ increase in the number of very early businesses that have a "product" that they've been able to build with no/low code tools. It gives non-tech founders a set of options they've never had before, in my experience.
That obviously isn't viable for all early businesses and even the ones it is viable for eventually need to hire engineering teams to build their products, but I love how much more accessible these tools have made shipping something basic.
I'm currently contracting for a client that uses a no-code tool to build their product, and this is exactly my opinion. If you're building a software startup, the only scenario where it makes sense to use a no-code tool (e.g., Bubble), is if you don't know to code. And even then, you will need to replace everything with proper code once you start growing.
Of course, this is a bit different for low-code tools that are used internally etc.
Basically, this space promises to deliver multiples of what an engineering team would bring, at a steep discount. It's not really a sustainable model, because customers constantly view it as a cost center even with the productivity of no-code tools. So the moment some open-source version is released, or they see a cheaper solution, they will flock to it, especially if you allow easy migration. I've had requests from companies who were doing well with the no-code SaaS but felt they were being held hostage, or realized they want an internal tool they own because their requirements always evolve past what the no-code tool can do.
This might not be an issue for SaaS vendors who already publish their source code, but the vast majority of no-code/low-code seems to cater to small-to-medium enterprises who are constantly looking to reduce their cost centers, even after achieving an optimal setup.
I am a non-tech founder building an app on Bubble, basically very CRM/data-centric for a specific industry vertical I work in professionally.
I am pushing Bubble to its limits where I have access to approx 150m records with all sorts of other database relationships to many millions of records.
To build this app in a custom software solution, I have been quoted $100k to $500k depending on the backend architecture.
Instead, I am spending around $20-30k in development costs to get my app off the ground to be able to pick up the first paying users... at least that's the goal and I'm a few weeks away from launch.
I would also say that my "MVP" is not like Airbnb's MVP... my hope (and it seems) that it may be functional enough to nearly (70%) replace my existing CRM that I use day to day. Of course, I am planning to transition to a custom software solution once I validate, so I don't see bubble as a long term solution.
I've self taught myself a bit of python and tried teaching myself JS as well off and on, so I am able to generally talk through with software developers what I'm trying to achieve and generally understand the technical things that might be discussed. So I'm not entirely a total non-technical noob.
But the issue with Bubble is there is still a high barrier to actually learning how to use the platform for the average non tech person. It's a black box of sorts and they don't have the best education (unlike webflow). I mean come on, you have to pay $800 to take a bubble sponsored course? Lame. Nonetheless, you learn programming concepts by learning how to build on bubble.
I'll end by replying to the top comment about code being perceived as complex.
Spanish isn't complex. Nor is French. Maybe we can consider Japanese to be more complex. But even then, millions of people speak it just fine. Conjugations in Spanish are complex for a 40-year-old English speaker new to Spanish, but not for an 8-year-old native speaker.
But after a certain age, life takes hold, you begin working, and you lose the time and opportunity to spend 100's or 1,000's of hours learning another language. Trying to learn how to code is like this. I sometimes need to carve out 3-6 hours of a day to context-switch away from my busy (non-technical) professional & social life to get back into programming mode.
Low code tools abstract away hours of that complexity you would have to learn, which allows you to start building something functional quicker than you otherwise would have. I know software devs look at low code and say "what's the matter with this crap, I can just spin up a X to do Y in 1 week, this is worthless!". But you are the native Spanish speaker in my metaphor, not the folks learning Spanish way past the days they had time to learn Spanish in college, trying to build the next greatest Spanish hit song to tell their story (i.e. software app!)
> But you are the native Spanish speaker in my metaphor, not the folks learning Spanish way past the days they had time to learn Spanish in college, trying to build the next greatest Spanish hit song to tell their story (i.e. software app!)
To probably strain the metaphor, native Spanish speakers see this like you're struggling with Spanish so you give up and instead decide to learn Esperanto because it's easier and a few people have sold it as being a good alternative to communicate across cultures. You run off and spend 4 months on Esperanto and have a song written and a catchy tune that is well received, but when the time comes to publish an album for the Spanish speaking world, maybe you've got some notoriety built but you've still got to start over from scratch and learn Spanish. For most people, they aren't going to make a hit song on their first try, they're going to fail and have to try again, so learning those Spanish fundamentals instead of a shortcut might have been a better use of time.
As someone without a lot of budget to hire devs, I currently use no code in my company extensively, our Airtable has 50 tables, some of them with > 10k records.
There is a certain excitement to doing it all in no-code now, but the ceiling is truly there, as the article says. The rule of thumb is that if something is operational and not core to the business, we do it in low code; if it is business-critical (we lose money if it goes down), then it gets coded.
What happens if our business goes down and it is <low code tool>'s fault? We can't just say "welp" to our customers' complaints; recently we saw that not even Atlassian is immune to long outages.
Also, what happens when the market needs you to do "X" and the tool can't? Now you are stuck in a low-code environment with a 3-month migration while the competitors pass you by.
We also hit a lot of bugs with these tools on things that were supposed to work, and it slowed us down significantly.
From what I see, the best use case of low code is kickstarting software projects. If done with the right tool (one that allows for easy exporting), it lets you get the data in order for future scaling with code.
Since changing application logic in a living project (and even the entire language/framework) happens all the time, but the data generally does not, this seems like a best-case scenario for a new feature or product that still needs to validate its market fit.
No. Not general purpose application development. Not ever.
But niche applications or parts of applications might be. We see lots of “apps” that are spreadsheets or old access databases. In games/vfx and sound you see “shader graphs” and “audio pipelines” take over the job of code for parts of a codebase.
Learning tools with puzzle pieces for code statements aren’t “low code” they are just an accessible way of writing regular code. The amount of code created is typically much larger and the complexity higher than with a regular language.
Regular program code is the low complexity answer to general purpose software development.
Low/no-code tools as replacements for devs is never going to happen.
But as an "aid" to empower developers... absolutely.
The lowest hanging fruit for a low-code solution would be in the Front-End space since you're dealing with a visual medium anyway, and because Front-End work isn't "hard" as much as it's super tedious which is generally a good target for disruptive automation.
Of course one of the big challenges will be that devs are most comfortable coding in text-heavy non-GUI environments like the IDE or terminal, and any low-code tooling that leans on a visual interface is going to struggle.
This was actually a huge problem for me at https://rapidream.com (apologies for the shameless plug). I wanted a Figma-to-React dev-tool that I could actually use on my real "day-job" projects, but designing an interface and user flow for users who don't like low-codey tooling was almost a bigger challenge than the actual tech.
> The lowest hanging fruit for a low-code solution would be in the Front-End space since you're dealing with a visual medium anyway, and because Front-End work isn't "hard" as much as it's super tedious.
I think the take that frontend isn't hard is extremely outdated. Responsiveness and a11y are nuanced problems with huge surface areas. As designs get more complex, keeping all these things in check requires tooling that needs to be learned. People on HN constantly bemoan the complexity of frontend development and how hard it has become. There's a reason drag and drop isn't the default way of creating a web page, despite tools for this being around for 30 years.
It's not Front-End dev as a whole, just the "pushing-pixels" stuff.
There's a ton of difficult FE work (esp. state). It's just that when it comes to a lot of the "basic" styling and responsiveness there's a lot of tedious work that should be automate-able (just like any framework or library, the goal is to reduce unnecessary work, not to suggest that the work is trivial).
I disagree with this opinion
"Anyone marketing a low or no code tool to developers is targeting the wrong audience"
Switched-on developers are a perfect market for good no-code/low-code tools. If a tool is 100 times more productive, why on earth would a smart developer not use it to deliver value to their customers??
I wrote a complete ERP and CRM system using our No-code platform - this would have been impossible for a single person using traditional tools such as Java.
I've written a couple of blog posts explaining why the rise of No-Code/Low code is inevitable.
The problem we should be solving is not the accessibility of code itself, but the accessibility of toolchains.
The most confusing thing about learning to make software isn't writing a Hello World program or a Fibonacci series loop. It's figuring out what to do with that code once it's written.
People don't interact with terminals and shells anymore, but guess what nearly every Hello World program is written for? A shell.
We have a lot of tools that try to hide the toolchain, and even the shell itself, from the uninitiated developer.
Unfortunately, hiding the complexity of the very system you are writing code for causes more problems than it solves.
How can you learn to get input from the user if you don't even know how to use a shell? How can you start using libraries if you don't know where they are, or how they are distributed to the user? How do you configure the correct environment variables if you don't even know what shell you are running or how/where it was initialized?
We should instead be doing the inverse: do everything we can to show new programmers the system they are using. Show them all the pieces of the puzzle, and how those pieces fit together, because our new programmer's primary goal is to make new puzzle pieces.
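As a sketch of "showing the pieces": even a slightly extended Hello World touches the shell on both sides — arguments and environment come in, output goes back out. A minimal Python example (the function and its behavior are illustrative, not from any curriculum):

```python
import os
import sys

def greet(argv: list[str], env: dict[str, str]) -> str:
    # The shell supplies both the argument vector and the environment;
    # the program only becomes fully understandable once the learner
    # can see where each of these comes from.
    name = argv[1] if len(argv) > 1 else env.get("USER", "world")
    return f"Hello, {name}!"

# stdout is the other half of the contract with the shell.
print(greet(sys.argv, dict(os.environ)))
```

Hiding the shell makes this program look like magic; showing it makes every line explicable.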
I've recently been building a business directory using Bubble.io.
It took me about half a day to do tutorials and another half day playing around just to learn the platform.
After that, I was able to build this business directory, including Stripe payment integration, some reasonably advanced Google Maps and search / categorisation functionality.
Building the same thing from scratch would have taken me two weeks or so.
I am saying this as someone whose main job is providing a CMS, so I am familiar with having to create a simple enough interface for non-technical users to use my CMS.
Hats off to Bubble.io for achieving such a usable interface for being able to knock up an app this quickly.
At the same time it is clear that some actions which would be painfully simple to perform in code take a lot of clever thinking and hacks to convince bubble.io to do it.
It also won't scale and if it breaks for no reason, it'll be nearly impossible to fix it.
If it doesn't provide a certain bit of functionality, you have to add your own CSS / JS and that's where it becomes clear that my extensive knowledge in web development contributed to my ability to use this no-code tool a lot.
Ultimately, I think a no-code tool can be a great way to enable programmers to build things quickly, but I think the ability to think like a programmer is still worth learning.
I'd be surprised if these tools could replace anyone building complex applications in the near future.
For everyone saying no, this is exactly what Shopify, and arguably Wordpress are. These things have already disrupted tech development. I think the real question is how much complexity can they take on, and I think they’re largely near the end of that curve.
I think Wordpress is a great example. I recently tried making a simple website using Wordpress's visual layout editor, and it was absolute hell. The process consists of fighting with unlabeled icons, convoluted UI state that is controlled by mouse movements apparently, shit jumping around all over the place when you add/remove stuff, and a ton of other terrible UX decisions. It felt like I was trying to perform a satanic ritual with my mouse.
I'm not a front end guy, but I was able to write exactly what I wanted directly in HTML/CSS in around 2-3 hours. That's for a static site with responsive layout and like 3 pages.
Could someone who is actually an expert in Wordpress's editor do this in less than 3 hours? No doubt. But at the end of the day, all it cost me to build a vastly superior solution was a few hours (and again, I'm not an expert at frontend stuff at all, so it was not as fast as it could be done). Instead of having to host a bloated Wordpress site, I just need to serve a few static HTML/CSS files, which can be done for free nowadays with a lot of providers.
So the no code solution is more expensive to operate, and less efficient/performant. But at the same time, it doesn't require an expensive developer (even if an actual front-end dev would only need 1 billable hour at most)
Good point. Not too different from the application surface of AI, where no-code is similar to general AI: instead of aiming for that, we're now learning to scope down to expert systems to see real gains.
The difference is that Shopify and Wordpress are for well-defined solutions (shopping, blogs, landing pages), but are NOT viable for custom software applications.
They already have and will continue to. Most programmers don’t even think about their machine’s internal circuitry, much less know how a circuit works. They put their attention on other things.
The ability to assemble 60%, 80%, even 100% of a simple application by snapping together building blocks has led to a massive democratization of programming. Yes, also to buggy crap and security holes, but it's easier to fix some of them by improving the Lego bricks. This is a triumph of abstraction.
==
The first “no code” system I ever saw was called, IIRC, “eve” at some PC expo back around 1981. There was a bit of buzz in the press and then that was the end of it. Then again, that was the promise of FORTRAN (and then COBOL).
BTW the article itself was meandering and mushy. I gave up on it.
Instead of hiring developers to do a 2-month project to build us a suite of admin tools, I built them myself in Retool over the course of a couple of days.
For a non-developer, maybe we're still far away from disruption. Excel seems to be the programming platform of choice for most people anyway.
But for developers, these things turn you into RoboCop. It's amazing how much I can get done now that I don't have to worry about UI, build pipelines, etc.
Thank you for expressing exactly what I’ve felt. In my experience something like Retool lets you spin up an incredible amount of functionality very quickly, and it also does a great job of helping you find the boundary where you should do something elsewhere.
Honestly the hardest part of my work with Retool has been convincing other devs that they should use it for internal use cases. While they’re still waiting on designs to be done in Figma I’ve got multiple users trying out a solution in Retool that I can update in near real-time to see what works best.
I've used Retool as UI and Airtable as DB for so many tiny custom apps, it's great. Retool ends up being used to create "constraints" on top of Airtable in a way that enforces how people add / view records.
I agree with the sentiments regarding the use of low-code/no-code tools for general purpose programming.
I'd like to present a specific use case as an example of how domain-specific no-code tools can help.
I'm developing a tool for automated browser testing. This fits well into the description of being no-code and is being developed as an alternative to writing C#- or Java-based browser automation tests in Selenium.
The "code" that you write is more akin to configuration that defines what page elements you want to interact with, how you want to interact with them and what you expect to happen. There's more to it than that, but this is not a sales pitch.
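The "configuration, not code" idea can be sketched in a few lines (the step vocabulary here is invented for illustration, not the actual product's format): the test is just data, and a small runner interprets it. A real tool would drive a browser; below, a plain dict stands in for the page.

```python
# A toy interpreter for declarative browser-test steps. Each step is a
# dict describing an interaction or an expectation; the "page" is a dict
# standing in for a real browser session.

def run_test(page, steps):
    for step in steps:
        if step["do"] == "type":
            page[step["into"]] = step["text"]       # fill a field
        elif step["do"] == "click":
            page.setdefault("clicked", []).append(step["target"])
        elif step["do"] == "expect":
            if page.get(step["element"]) != step["equals"]:
                return False                        # expectation failed
    return True
```

The point is that a tester only ever edits the step list, never the runner — which is roughly the division of labor such tools aim for.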
My project is in direct response to the experiences my partner encountered when providing browser automation training to manual testers within businesses.
Browser automation testing requires, in broad terms, a small subset of what is offered by C# or Java, however a significant understanding of and familiarity with matters such as objects, variables, sane naming and debugging is required to even begin reaching competence.
Many manual web testers, who were very capable, were just not able to grasp coding matters sufficiently. Many were, but plenty were not. For those that barely could, I feel for the people who have to maintain what would then have been created in their businesses.
Programming that requires only a subset of a general-purpose programming language has the capacity of being implemented in a no-code tool if the scope of the programming needs are narrow.
At my workplace, one of our common sayings is “when you’re doing badly, users complain about bugs; when you’re doing well, they’re complaining about missing features.”
From 1998 to 2004 I worked as the IT manager at a plastics manufacturer whose entire system was backed by SQL Server, and fronted by MS Access databases that people ran locally for the forms and reports (no data was stored in them) or Excel (same thing). This is low-code in a nutshell and it worked well back then: it was CRUD before websites were CRUD. And we were constantly developing it and building out more complex scenarios for industrial production control and accounting and reporting.
Why? Because it worked, and it allowed users to conceive of the next thing they could use, which inevitably led out of low-code scenarios. At one point we seriously investigated trying to implement bin-packing algorithms in VBA so that warehouse pickers had better direction in order assembly.
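For a sense of scale: a heuristic like that is only a few lines in a general-purpose language, which is exactly what pushes you out of the low-code box. A rough sketch of first-fit-decreasing (one common bin-packing heuristic; the approach actually evaluated back then may have differed) in Python:

```python
def first_fit_decreasing(items, bin_capacity):
    """Pack item sizes into bins using the first-fit-decreasing heuristic.

    Sort items largest-first, then drop each into the first bin with room,
    opening a new bin when none fits. Returns a list of bins (lists of sizes).
    """
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= bin_capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no existing bin fits; open a new one
    return bins
```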
Low code systems either don’t work, or they do and create demand for “high code” solutions.
They already have, in many ways. Look at the "modern data stack" -- the idea that you can replace a ton of bespoke ETL platform and implementation with managed, low-code tools. It's been really successful.
But this and other examples don't feel like disruption though because there's still an ever expanding universe of work that requires code.
I use them to great advantage as an engineering manager. If you have programming skills, you can use these tools to whip stuff up significantly faster than people without programming skills. It’s an organizational super power.
I just replaced an internal web app used for planning with a data visualization tool dashboard that can spit out a spreadsheet for further analysis. Took a week to build something that solves the business problem better than several months of effort put into the web app did. I didn’t have to deal with auth, security and privacy reviews, building a custom UI that people would have to learn, learning data source API’s, or build/deployment.
I’m increasingly of the opinion that if it doesn’t face customers it should be done using something like Retool. Especially considering that you can just plug it into your internal APIs so if you need to do anything particularly hairy you can still use your backend.
My experience is that the people for whom learning a bit of SQL is a bridge too far will also not invest time into learning some complex automation software. And these no/low-code tools all start to look like complicated electrical diagrams once you are building real-life workflows.
The category of people who are ready to invest time in understanding those tools are the people who might as well spend the same time or less learning SQL and maybe a bit of python or VBA.
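To make the comparison concrete, the "bit of SQL" in question is often no more than a grouped aggregate — three readable lines that replace a tangle of visual workflow nodes. A minimal sketch using Python's built-in sqlite3 (the table and data are made up for illustration):

```python
import sqlite3

# A throwaway in-memory database with a hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "EU", 120.0), (2, "US", 80.0), (3, "EU", 40.0)])

# The kind of query a "bit of SQL" buys you: totals per region.
rows = conn.execute(
    "SELECT region, SUM(total) FROM orders GROUP BY region ORDER BY region"
).fetchall()
```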
As for business users doing code, I wish there were more, and I think younger users tend to be a bit more technical, but it's a light breeze, not a big change.
I think it's useful to flip the script a bit and ask this question: why are software developers so often stuck writing software with their only abstraction available being text? Isn't that working with one hand tied behind their back?
I do a lot of work in geometric tools, and the fact that the best thing that we have for unit tests is to painstakingly construct objects by specifying their coordinate space hurts. Our team gained a lot of velocity on building and maintaining unit tests by simply writing a tiny graphical tool to convert back and forth between a text representation of a coordinate space and a visual representation, because visual presentation is far better for the average human than a text description for a bunch of circles and rectangles. The signal gets lost in the noise.
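The tool described above boils down to a round-trip between a readable text format and shape objects. A minimal sketch of the idea (the line format here is invented, not the team's actual one):

```python
def parse_scene(text):
    """Parse a tiny line-based scene format into shape tuples.

    Each line is "rect x y w h" or "circle x y r" -- a made-up format,
    just to illustrate round-tripping geometry fixtures through text
    that a human (or a small visual editor) can read and write.
    """
    shapes = []
    for line in text.strip().splitlines():
        kind, *nums = line.split()
        shapes.append((kind, *map(float, nums)))
    return shapes


def dump_scene(shapes):
    """Serialize shape tuples back to the same text format."""
    return "\n".join(
        " ".join([kind, *(f"{n:g}" for n in nums)])
        for kind, *nums in shapes
    )
```

Because the two directions are inverses, a graphical editor can sit on one side and the unit-test files on the other.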
I also still think there's meat on the bones of tooling that makes it harder or impossible to craft invalid programs. When we're editing text files, a modern IDE will throw some red underlines under the text when we've written something that it knows won't compile, which is a massive step in the right direction. But contrast that with humble Scratch, which makes it structurally impossible to write an invalid program as you go (incomplete, yes, but an incomplete program is obvious because it will have unfilled slots). I wonder sometimes if tooling that treated the syntactic components of a language as things, not strings, would actually allow developers to move faster with some training.
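The "things, not strings" idea can be made concrete with a toy AST that has explicit holes: incompleteness becomes a checkable property of the structure rather than a parse error, which is roughly what Scratch's unfilled slots give you. (The node types are invented for illustration.)

```python
from dataclasses import dataclass

@dataclass
class Hole:
    """An unfilled slot -- the program is structurally valid but incomplete."""

@dataclass
class Num:
    value: float

@dataclass
class Add:
    left: object
    right: object

def is_complete(node):
    """True when the tree has no remaining holes."""
    if isinstance(node, Hole):
        return False
    if isinstance(node, Add):
        return is_complete(node.left) and is_complete(node.right)
    return True
```

An editor working over such nodes can only ever produce trees like these, so "syntax error" ceases to exist as a category; only "not finished yet" remains.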
Structural editors are not a new idea. They have well-known drawbacks wrt. supporting incremental editing of program text, which as a rule requires "passing through" invalid representations. This actually makes them quite unintuitive for such tasks.
OTOH, modern editing tools, such as "language servers", provide most of the advantages without the aforementioned issues.
No code is for mundane human processes and commodity development you want to get done cheaply, whether internal or outsourced. That’s already been disrupted by major platforms. These smaller platforms can only ever dream of becoming the next service now, appian, Salesforce, pega, etc.
Anyone who stays near to, or on, the bleeding edge of technology will know that no-code isn’t a real thing for them. Add some extra automation and boilerplate capabilities by all means, but software will always eat software.
Unreal Engine Blueprints have been a game-changer for me personally, no pun intended. The rapid prototyping, API discoverability, and total immunity to 99% of dumb syntax errors which drag-and-drop visual programming provides has been a great help to learning Unreal. At the end of the day my ambitions are bigger than what I can efficiently build with Blueprints, but they're a great starting point for demos and small one-off applications.
Well... for no/low code to take over everything, we'd need 10x the developers to create it all and then probably 20x the developers, in perpetuity, to handle the integrations.
"Code" isn't an end to itself. Many developers spend their hours, days and careers trying to think of the simplest, clearest way to specify solutions to the problems they work on. Code (and data) is the best general thing we've been able to come up with.
There can certainly be complexity arising from the code itself (more generally -- are you spending your time solving problems in the solution space or problem space?) but you need some way to deal with the complexity inherent to the problem. If there's a better way to do it than code, I'd be more than happy to jump on it.
There are some good no/low-code systems -- like spreadsheets and some of the forms systems -- where they've found a powerful, flexible, yet simple-enough abstraction. But these tend to form silos, and you need a ready escape hatch for when the complexity of the underlying problem exceeds the capability of the system (which is very common for anything useful).
It's still programming, just trying to be nice. Here and there it makes programming worse, e.g. by sometimes not being very object-oriented, but overall it always tries to make everything as high-level as possible.
High level programming languages on top of low level ones have been a hit ever since. The day I can plug together APIs and basic user management will be great: Make default user UI with login, profile, picture, make a twitter-style feed where everyone can subscribe to others. This I imagine possible within a day.
Ideally you also wrap some APIs for me so I can e.g. take my existing twitter feed and do stuff with it. We are not too far from all that, but if you want to do one custom step you are back at jumping in to a lower level and need to build components yourself and it's just even more annoying :).
It also has a way better chance with narrow use cases and is very successful there: chat bot builders, configuration of headless CMS admin UIs, analytics tools, KNIME even try machine learning and it looks wonderful.
Well, no-code and low-code solutions are, in practice, more code. You use a tool that is supposed to solve a trivial or simple problem, and it works great until the first change requests come. By the second or third change request, the complexity is already so big that no-code is not a solution any more. But then you have sunk costs, and no approval to rewrite. So you keep adding a workaround around a workaround and another layer of patching and complexity, until a few years later you finally get approval to ditch the no-code tool and rewrite using actual code.
In the end, the cost and effort are much higher than if it had been implemented properly in code from the beginning. You write up a lesson learned, discuss it with the team and management, and get an agreement to do it right the next time.
Then, a week later, a new project starts, you want to do it in code, but get overridden because it's a perfect use case for this great low code or no code tool you are using.
Source: every integration / middleware / ESB developer in every enterprise company.
First: I feel like it already has, and did so way back in the day. Excel and Access (modern version: Airtable) are no/low-code tools that drastically expanded the set of things business users/domain experts could do without a "programmer", and they have likely supplanted a number of apps that would have been built by "real developers" if they didn't exist.
So the question is how much of that power can you push to those end users vs what needs to be held by "proper developers". I could see tools like advanced versions of co-pilot expanding the scope of what people can do to the point where only very high scale, very universal things are built by "proper developers".
What I really like is visual scripting which compiles to code, that you can then edit and use for your application.
There's a plug-in for blender called Armory which does this. It's only for game development, but I think something like that could really revolutionize software development. Let people drag and drop, and customize.
To a degree... but the problem with a lot of "low/no-code" is the people using it, lol. I'm not a musician, so even with better/different tools, I can't compose a good song. Same goes for a lot of low-code solutions I've come across. The design/architecture causes a lot of problems. The big ones I've seen are ppl designing without thinking about side effects and responsibility of "modules."
I'm probably biased though, as I do contract work that is 99% replacing low-code solutions with something proprietary. I think it works out pretty well though. They usually have some sort of system in place and "working." Makes my job a lot easier, as you can see the "shape" of their intent.
There's always going to be one or two things that bother a PM to the point of switching to some proprietary solution ;)
Essentially asking "can newer tech become easier to use and more accessible?" The answer is yes. I've seen juniors struggle with Django but make quick progress with Hasura. I know plenty of developers who still don't use snippets. As far as I'm concerned, to say that low-code or no-code couldn't disrupt tech is to say that software can't become easier to make or more accessible to people out of college, which I don't believe. A big problem these days is that tech has become more complex because we make it more complex. It doesn't always have to be that way. Enforcing good taste will be an interesting challenge, but just like easy-to-use guns made archery redundant for war, easy-to-use low-code/no-code tools will help take care of the urgent issues that aren't important. That'll free up dev time for other things.
> “As it stands now, there seems to be a tradeoff between ease-of-use and control, and until someone figures out how to remove that tradeoff, there will always be a need for engineers who can fully manipulate software to meet the full range of use cases businesses (and individuals) need.”
we're constantly making this type of tradeoff as developers. It's not just a decision we make at the beginning of a project where we look at the requirements and say hey no-code might be the way to go here.
have you written a function for a library to hide information / details of how something works? that's a tradeoff between ease and control for the client.
now expose that through a GUI with some params and now you have "no-code".
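That step can be shown in miniature (the function and its parameters are invented for illustration, not a real imaging API): a library function hides the mechanics behind a couple of parameters, and exposing those same parameters through a form instead of a call site is, in effect, the "no-code" move.

```python
def resize(image, width, height):
    """Nearest-neighbour resize of an image stored as a 2D list of pixels.

    The caller controls only `width` and `height`; how the sampling works
    is hidden -- exactly the ease-vs-control tradeoff described above.
    """
    src_h, src_w = len(image), len(image[0])
    return [
        [image[y * src_h // height][x * src_w // width]
         for x in range(width)]
        for y in range(height)
    ]
```

Put a form with two number fields in front of this and a non-programmer can use it; the tradeoff was made when the signature was designed, not when the GUI was added.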
My hope is that no code / low code disrupts the waiting for developers to have time to do stupid tedious stuff that changes randomly. My perception is the tech industry recruits a ton of people into it that aren’t particularly interested or suited to try to triage the amount of simplistic logical manipulation of data and processes that doesn’t require going through a full development cycle. By offloading some portion of that back to end users you can give them an inflection point to work without danger while keeping engineers focused on the complex parts of the system - and hopefully hire less people into tech teams who aren’t engineers.
The amount of effort we put into people who refuse to learn to script to allow them to perform complex tasks feels like such a waste.
I've seen in my industry, we used to spend ages making the most ridiculously complicated front-ends for complex tasks. Now finally we're starting to accept that these orgs will have people who can smack out a bit of Python and suddenly we get to stop wasting our time and focus on supporting those who accept they have to code.
No code/low code will always remain toy and brittle outside of demo-esqe use-cases and those who are willing to open a file in notepad and tinker will retain a significant advantage over the rest.
I'm going to bang my "software is a form of literacy" drum again.
No-code is a "solution" to illiteracy. It is (snarky example alert) like taking all the panels in all the Marvel comics, cutting them out one by one and arranging them in order (punch, horror, fly) and saying "now you too can write a story".
Yeah kind of.
But what we need is not No-Code. what we need are two things - more people who learn to code, and companies that make their data and processes accessible to code.
But it's like trying to start a marketplace - you need to attract the buyers and the sellers at the same time
What happens to the software built with these tools when the no-code tooling company goes under? Do you have 30 days to port everything you did to something else before they turn it all off?
I've seen a lot more interest for this kind of tool in enterprise environments recently. Never seen this kind of interest before. I'm curious on how this is going to pan out.
It has been a perennial dream for six decades. The creators of Cobol and Fortran thought that was what they were producing! (Not literally no-code, but the notion that computing by non-specialists would result from putting assembler and other hardware-oriented knowledge under the covers.) After that, there were several waves of hype and disillusionment, though that seems to have dampened down in the 21st century.
Spreadsheet programs are arguably the closest we got up to now, and that was a long time ago.
I think the potential value is in niche line-of-business apps. The kinds of things that used to be driven by a spreadsheet can now be driven by a spreadsheet-backed mobile app with a better-defined workflow. I don't think this will displace much, if any, real development work. It will open up a lot of smaller tasks to automation - the kinds of things that weren't valuable enough to warrant an expensive development project.
I was involved with a project to bring these low-code/no-code tools into a financial company. I think it's almost all driven by the idea that software developers are expensive and development takes a long time. The "dream" is that the business analyst or financial analyst or whomever automates their own job away, or writes the business process themselves (while being paid half as much!). Of course, the issue is that they can't easily do this while also doing their own job, mistakes are made and they need "real" software developers to come in, it takes just as long or longer, etc.
Yes. As technology progresses, in general, we tend to create and move up into new layers of abstraction. This is one of them.
At the same time, "traditional" software development isn't going anywhere soon. In the short to medium term it may even become more valuable and sought after because an increasing number of technically minded people will enter the market at this new level of abstraction and never learn traditional software development at all.
I can't seem to find people in this thread talking about AI/ML, which surprised me.
For "enterprise software", I imagine a future of domain experts training/instructing AI minions to do things like process sale orders and returns - like the office clerks of the past, but silicon clerks. And if that, why not consumer apps etc. too?
The next "everything is bloated electron apps" is "everything is bloated AI models" ?-)
Most developers I've interacted with have extreme limitations outside of their immediate expertise, and some (especially those who seem to be getting culled lately) are barely functional. With that, I'd suspect one of two possible futures:
1. Insufficient skill to make such automation work
2. A large enough number of people are capable of making this happen, and after that, everyone else('s career) dies.
I have twice now built in-house low code interfaces for companies. A recurring problem is when the same product manager who requested the low code interfaces turns around and created a bunch of stories that go beyond the limits of said interface. It's like they ask me to build a high performance racing car and then on race day they bring me to an off road dirt trail.
Having used low-code tools successfully to build ERP systems for the past few years, I feel low-code tools can only really disrupt development once they solve the problem of requirements gathering from customers/end users, and formally describe change management in low-code as well. As long as there is ambiguity in requirements, code or low-code makes no difference.
One thing I’ve found is that if you put something together in thirty minutes you’ll get much more detailed requirements because end users now have a tangible thing to organize their thoughts around.
Did Visual Basic ever disrupt tech back in the '90s?
Not that much. It certainly allowed a lot of people with little dev skill/experience to jump into development. It was an inclusive tool, but not a revolutionary one.
Tools like GitHub Copilot give us a glimpse of what the tool that will one day revolutionize tech development might look like, IMHO.
Maybe this is inexperience on my part, but the LCNC tools I've used are very clunky, with UIs that are plain at best, and performance that is cumbersome. While there is certainly a place for them in the business world, I can't see them replacing a really snappy, custom product with beautiful and intuitive UX/UI.
I'm not worried, if anything I can see the latest crop of tools creating additional opportunities for traditional developers. I know of freelancers who get a lot of work lifting and shifting previously successful lc/nc solutions developed in Excel or Access.
Sometimes you just need some IFTTT-like actions triggered on some kind of event. In Azure, stuff like Logic Apps / Power Automate is particularly geared to running actions based on events or recurrence schedules with minimal code.
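The trigger/action pattern behind those IFTTT-style tools is small enough to sketch directly (names and the event vocabulary are invented for illustration): handlers are registered per event name and fired when the event occurs.

```python
# A minimal event -> action dispatcher, the core of "when X happens, do Y".

handlers = {}

def on(event):
    """Decorator registering a handler for a named event."""
    def register(fn):
        handlers.setdefault(event, []).append(fn)
        return fn
    return register

def fire(event, payload):
    """Run every handler registered for `event`; return their results."""
    return [fn(payload) for fn in handlers.get(event, [])]

@on("new_row")
def notify(payload):
    return f"row {payload['id']} added"
```

What a Logic Apps-style product adds on top is persistence, retries, connectors, and a visual surface for the same wiring.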
No-code only works for MVPs, landing pages, and generally projects that are being bootstrapped and are not tech-specific. You can get a no-code solution at roughly 20% of the price of hiring a developer to build it from scratch.
ASP.NET Web Forms and WinForms are much, much better rapid development tools than the likes of Bubble/Retool etc. The issue is that a lot of very young people have very limited knowledge of the choices available to them.
That's a lot of hopium here. The problem (around here) is that the sort of people who write these blogs and/or frequent Hacker News mostly would be harmed by this kind of development.
I call it the one page problem, which is how you represent something that's bigger than one page. Code and math notation attack this problem by design. All other tools try to do it as an afterthought, or not at all.
Eventually, yes. But we will be approaching AGIs at that point.
On a shorter term, the disruption they can provide is similar to the one involving cryptocurrencies and blockchain: a lot of hype, grift, and few real applications.
Sure. Lots of people who can't code can put together a Squarespace site. I know a decent number of barbers and other small business owners who have nice-looking websites they tossed together in a few hours.
Totally agreed. Really, the question for any programming approach whatsoever is how easily it lets you model the relevant logic. Low-code/no-code is not some magic bullet; it's just another way of doing it. The question has always been how well you can abstract certain well-known logic. Rails makes it so you don't have to interact with networking primitives... until you try something that doesn't match that mold. But there are a whole mess of CRUD apps in the world. The real thing I think low-code will hit up against is scale. Everyone in the world can write Twitter in a CS 101 class, but scale means a closer connection between the business logic and the technical implementation, and I think you'll run into the nine-foot-ladder, ten-foot-wall problem. Low-code/no-code will be increasingly useful for building larger and better apps, but the problem spaces and scale will continue to grow.
Most comments here are in agreement that a ton of programming is doing complicated things, and the complicated part is not "which letters do I press on this keyboard to make code loop a few times". Hence, low-code tools don't solve the right complexity problem. So let's move beyond that.
We've all been there: from time to time you have a database table or whatnot that "makes sense" (its columns closely match what users of the software expect to see), and you really just need to expose CRUD operations, add half a page's worth of validation and action buttons, and then the app is done. (CRUD = the standard DB operations: Create/INSERT, Read/SELECT, Update, Delete.)
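The generic CRUD layer that low-code tools keep reinventing really is this small — one class, four operations, driven entirely by the record's fields. A minimal in-memory sketch (a real tool would back this with a database and generate forms from the field names):

```python
import itertools

class Table:
    """A toy CRUD table: the four standard operations over keyed records."""

    def __init__(self):
        self._rows = {}
        self._ids = itertools.count(1)  # auto-incrementing primary key

    def create(self, **fields):
        rid = next(self._ids)
        self._rows[rid] = dict(fields)
        return rid

    def read(self, rid):
        return self._rows.get(rid)  # None if the row doesn't exist

    def update(self, rid, **fields):
        self._rows[rid].update(fields)

    def delete(self, rid):
        del self._rows[rid]
```

Everything interesting in a real app — validation, permissions, the "special scripted actions" mentioned below — lives outside these four methods, which is exactly where the low-code lego bricks start to strain.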
Why does the 'low-code automation' of that part keep wavering in and out of popularity? Low-code solutions _can_ capture away the right kind of complexity here, no? And it's been tried. In many forms. Many times.
In the 90s, 'database-oriented software development' was very popular. FoxPro, MS Access, xbase, that sort of thing. You got the CRUD stuff for free, and all you were really doing was designing forms to lay out the various DB columns and maybe adding special scripted actions to certain buttons. That was pretty much it, already quite low code and trying to, I dunno, turn those scripts into more lego-brick-style low-code solutions seems feasible at that point, too.
The development environment was all in on this. The basic interface was a form designer. Then you'd click on a button in the form and 'add an action listener'. The other main view is your columns and tables view where you can click on things to 'add change listeners'. You blessed some form view as the 'main view' which loaded on app start, and that's how you build an app.
But FoxPro, MSAccess, that sort of stuff mostly died out. The vast majority of software, both consumer and business oriented, is written in java, python, C#, javascript - those sorts of languages. General languages where database support isn't even baked in - you need to add a dependency no less.
The concept was then reinvented: Rails (Ruby-on-Rails) with the notion of 'skeleton' generation, which didn't just generate the barest of bones ("Here is the source file containing the entry point, here is a project definition and all you need to do is edit the names") - but did a lot more than that, giving you a basic but functionally styled web interface for CRUD ops. The development 'model' of rails is then to just add new features and endpoints, and perhaps even replace these CRUD pages one day, until you're happy with your app.
But rails is far less popular today than it used to be, and whilst various web frameworks still offer really easy ways to toss up CRUD operations, it seems to me like it's less of a key feature, and various frameworks that don't have it or whose CRUD support seems like an afterthought are still quite popular.
Direct rails clones in other languages, such as Grails (for java), have pretty much died out.
Why?
It's the fact that "the low code experiment has been tried a ton of times and it has failed every time" that teaches me that low-code is doomed to fail. Unfortunately, it doesn't explain _why_ it fails. Just that it is likely to.
Lol, this whole discussion is like a bunch of architects talking about how folks shouldn't do DIY projects on their own, and yet... DIY is a huge thing because most people are actually reasonably intelligent and capable of doing things on their own.
TL;DR: I spoke with a lot of people in the industry and thus came to the conclusion that X will not be disruptive...
The good part of the bet is that most potential disruptions end up not happening... But exactly the same median consensus is also reached about the disruptions that do end up happening...
Disrupt, yes. Replace, no. Programming is moving to ever higher levels of abstraction. It's just a continuation of the trend. You'll still have the lower levels when you need them. It reminds me of WordPress. It will get the job done for most people but at the end of the day it's a piece of crap.