Hacker News
Am I still relevant as a programmer now that there's AI? (andrewhuth.substack.com)
33 points by ahuth on March 14, 2023 | 86 comments


For the many programmers asking questions like this… if your programming skills can’t compete with a supercharged auto-complete (call it “AI” if you prefer), then you won’t stay relevant or employed for long. Until SkyNet nukes us all I will continue doing what I do. None of my customers imagine replacing me or anyone else developing software with a really good chatbot.


Who knows, maybe we'll see new programming jobs emerge for cleaning up all the shitty LLM code finding its way into systems as we speak. Lazy programmers have provided me a lot of job security in the past.


No doubt. There’s more money in maintenance programming than green field development. But there's no startup stock option lottery ticket in maintenance work, just a steady income.


As a developer I feel that understanding business requirements, translating them into code, liaising with product owners and designers, managing deadlines and expectations, making decisions on when to push back, having a mental map of the codebase, managing infrastructure, knowing which dependencies to use and when - these are the most difficult parts of the job.

Writing the actual code once it gets to it is usually not the hard part, unless you work on particularly difficult problems. LLMs help but don't replace us just yet.


I’ve said a thousand times that learning a programming language and writing code are just a small part of successful software development. Necessary but not sufficient.

Programmers who focus all their efforts on skills that can get automated will eventually get automated away. When a chatbot can replace me in terms of understanding customers and business domain expertise, and translate requirements into code, I’ll retire.


Nice framing - need to be better than supercharged auto-complete.

What kind of stuff do you work on?


Web application development and system admin, which mostly means cloud infrastructure. A lot of that is already automated in some sense, but not enough for my customers to tell Alexa to set up a VPC and write some useful code for them.


Sounds like the best stuff to work on is supercharged auto-complete.


I think this person is concerned with AI as it will evolve, not as it exists today. This person may be young and will need to stay in the market, dealing with that evolution, longer. Everybody starts as a junior before they become an elite programmer, but if the bottom is removed, people won't be able to skill up.


You sound a bit like an old man not adapting to technological progress. As an example: if I were making a logo today, where before I would perhaps have used Fiverr or a similar service, now I will use an AI and then edit the result in some photo editor.

It has become much easier to do stuff like that even if you suck at photo editing.


I’m an old man with 40 years in software development. I’m still working, and not on COBOL. Tell me how I have not adapted. My career predates Unix, relational databases, the internet.

In my career I’ve heard about one thing after another that was going to make programmers obsolete. It may happen but I doubt it, because coding isn’t the hard part of software development.


True, most likely I agree that programmers will still be needed, for the time being at least. We will most likely just use the tool to make ourselves more productive, and the easier kinds of programming may disappear.

Perhaps people who nowadays make a living making small websites for mom-and-pop shops will replace themselves with an AI and mostly care about design instead. They will have more time for other stuff, raising quality all over.

That said, it's hard to tell either way. The future is hard to predict; just one year ago I wouldn't have guessed we could generate what we now can. It's hard to imagine many years of progress and what that could potentially lead to.


People who make web sites for mom and pop businesses got obsoleted a long time ago by WordPress and Shopify and numerous other out-of-the-box solutions.

Why does anyone doing that kind of work still get jobs? Ignorance plays some part, of course. In my experience a lot of business owners believe they are unique and special and need something “custom.” It’s a status thing as best I can tell.

Back in the ‘90s I started refusing to write custom accounting systems for customers because QuickBooks could do everything they needed until they graduated to the big leagues, where enterprise software solutions dominate. They never listened, always sure they were too different to use something as pedestrian as accounting software in a box.

Programmers aren’t immune. Look at all of the functionally equivalent languages and tools we argue about. Notice programmers writing that they can’t use an IDE or web site that doesn’t have dark mode. How many more “to do list” apps will we endure? Freud called it “the narcissism of small differences.”

The same human tendency explains why we have many more cars available than Toyota Camrys and Ford F-150s.


Certainly an applicable analogy - all the people whose standards are low enough to use a doctored-up JPEG as a logo now have a free solution. The rest of us who'd prefer a vector graphic, or maybe a .ai or .psd, will continue paying for quality output and the ability to have a rapport about aesthetics.

When you pay a person, you're paying for more than the output. Maybe they'll use a GPT/LLM model behind the scenes, but I also use bucket-loads of coffee behind the scenes. That doesn't mean a bag of Stumptown can build your analytics architecture.


Well, if you already have an image you're satisfied with, it's really easy to turn it into vector graphics. Either you can trace it yourself or just use a tool for it.

As an example of this, I generated an image of a dog with an AI and then put it into vectormagic.com's tool. The result is very good, considering the image is not in a logo style but a detailed, complex image.

https://vectormagic.com/images/kvzrgcxyjp45m/edit/p24qfcqegj...


Yes, many image editors support converting a raster image to a vector, even high-fidelity images like the one you've shown. But creating a usable vector is more than just dropping an image into a vectorizer and cleaning up the results. Every angle and point in a vector is handled like a unique object, and thus the image you've shown would be a nightmare to work with. Oftentimes there will be numerous duplicate points and hidden angles that can be impossible to sniff out. That's why Illustrator is distinct from Photoshop: creating vectors from scratch is not the same process as converting a raster, and the result is typically far cleaner.

So yeah, it still stands. If you can stomach messy, unpredictable output, use an automated solution. If you want something lean and usable, use a human being.


That means logo designers are out of job. Not programmers.


Well in time the same could perhaps be said about programmers. ChatGPT can already write a lot of advanced code tbh.


Honest question: Is ChatGPT better than IntelliJ already was?


That did occur to me while using github copilot recently.

With the correct plugin, yes.


I think this equally applies to all knowledge workers. LLMs are not going to cut the human out of the loop just yet, but building them into the workflow is going to end up being so powerful that anyone not using an LLM will be uncompetitive.

I see this being at least a few years out. Performance is still scaling up and cost is still scaling down. It's going to require collecting a lot more specialized training data to produce fine-tuned models. Businesses will need to adapt and people will need to learn how to accelerate themselves. But eventually, I see this going the way of the introduction of the computer to the office. No productive person will work without it.

As a thought exercise, I'm going to assume you've used some GPT model to produce code before and been impressed in a narrow kind of way. Then project that forward a few years, through GPT-4 and GPT-5. Add some better training data and a better understanding of how to fit the LLM output into day-to-day work. Maybe add the ability to fine-tune on the existing code base and design documents. The trend line is at least going to hit junior-dev level, so the job will probably turn into a senior dev supervising a "team" of LLMs.


I'm personally fine with that as long as integrating a "junior dev level" AI coder doesn't create more debugging for me. The problem I see right now, as this creeps from writing narrowly defined functions to spewing out whole architectures, is that it's turning a small footgun into a giant one that gets much, much harder to debug and understand. If you don't even know parts of your own program, how are you supposed to know why something is going wrong? Ask the AI? And then ask the AI to fix it? The danger here is that people start generating code they can't even read or debug; it may work in most cases, but when it fails, it will fail spectacularly.


I think the "junior dev" analogy is spot on - just as many junior devs are talented and productive enough to be really dangerous without some supervision to ensure that the result fits in a reasonable architecture that won't result in burnout-driven suicides for its maintainers, you have to apply the same level of senior guidance and supervision to LLM-generated code.

Of course, there will be companies who'll get the one bright-eyed young person whom they can exploit the most, and get them to apply ChatGPT to build all of their systems, cheerfully reimplementing all the most popular architecture flaws and (e.g.) security issues that are seen in the many repositories of old code used in training the model.


People will still use Excel.


Yes, Excel supercharged by LLM trained on all the world's spreadsheets.


Can't wait to short the first bank to try that one.


I think this is a great point. Why do people use Excel? I think it's because it's not old, obsolete software but actually one of the best ways for non-technical people to create simple applications. You can basically turn out a custom inventory-tracking system in a few hours for a small business using this ubiquitous software that almost everyone has some skill in.

I think that same accessibility is going to be a major driver for these natural-language models. I've seen great examples where people can basically "program" in English by having a conversation back and forth with the LLM and refining its output of code.


Nothing will ever beat the horse.


Unless it's dead, which Excel isn't.


What's scary is Bill Gates may have finally found a new horse to flog...


I have been coding for 22 years and never thought of myself as a programmer; programming for me is like literacy. It's like calling yourself a 'reader' because you can read.

Programming is first-order thinking, a way to deconstruct and construct things. Sometimes you have to use it on a computer, sometimes while baking bread, or while woodworking.

AI will help me so much. I am super excited, and now I use it every day. I plugged whisper+chatgpt[1] into my emacs, and now it is more and more like a partner. It is of course still stupid, but it will get better, and soon (with the recent llama work people are doing) I hope we will be able to fine-tune it.

We have been doing lists and tables and data transformations and input validation for 60 years now; we have built a gazillion specialised spreadsheet clones. I think now we can start building some new things, some really "soft" software, where the users can program it with language.

This is the first time I feel the computer is helping me do something, and it's not me "fighting" with it.

[1] https://github.com/jackdoe/emacs-chatgpt-jarvis


Look, most good programming is not the simple data transformations, read/write, CRUD, etc., even though that is most of the day-to-day busywork we have to deal with. Good programming is envisioning a whole set of behaviours, and the architecture to support them, before a line of code is ever written.

If an AI can bang out the boilerplate for you once you know what you want, that's great. But like any tool, relying on it for everything will lead to crossroads where you take the way the tool suggests instead of continuing along your intention to build something the way you conceived of it. And the tool may seem to "know" - but it doesn't, and it shouldn't guide you (any more than, say, taking the path of least resistance with React should guide your decisions about app architecture).

Lists and tables aren't an unsolved problem anymore. Good programming is about solving the problems that haven't been solved yet. But an AI is trained on existing art, and can only follow paths that have already been trodden.


i am sad to say that lists and tables are totally unsolved problems, look how many issues there are in react's virtualized list[1] (just an example, not picking on react)

but i think we are talking about the same thing :) programming is creative, and for me it is art, i love it.

https://github.com/facebook/react-native/issues?q=is%3Aissue...


Yeah, I think we're talking about the same thing. But taking your example, virtualized list problems recur every few years. They were a problem in Swing, in JSwing, in Flex, Flash, on and on. I mostly use DataTables, which is its own endless hall of mirrors. They're among the glitchiest UI elements because so many things come into play. They try to do everything and be everything for every application and for all browsers, right? But more importantly, there are a lot of different ways of approaching the problem of virtual lists. You could do offscreen rendering in the DOM, or render on the fly, or reuse existing elements, or go nuts and draw the whole thing on a Canvas. So this is like the last thing you would want to rely on an AI to actually write. You could definitely let it embed some existing solution, but then you wouldn't know why it had some code that shifted everything down 3px for Safari.
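
For the curious, the windowing arithmetic all of those approaches share fits in a few lines. Here's a rough sketch in Python (my own illustration, not code from any of the libraries mentioned; fixed row height is assumed, and variable heights are where the real pain starts):

    # Compute which rows to materialize, whether they end up as recycled
    # DOM nodes, freshly rendered elements, or canvas draws.
    def visible_window(scroll_top, viewport_h, row_h, total_rows, overscan=3):
        first = max(0, scroll_top // row_h - overscan)
        last = min(total_rows, (scroll_top + viewport_h) // row_h + 1 + overscan)
        return first, last  # render rows [first, last); offset them by first * row_h

    # 10,000 rows of 24px, a 600px viewport, scrolled to 4800px:
    print(visible_window(4800, 600, 24, 10_000))  # -> (197, 229)

Every approach then differs in what it does with that window, which is exactly why the glitches differ too.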


The failure of no-code will be the same failure of AI programming...

People misunderstand (and most businesses fail to realize) the potential of IT in general, which is not typing functions in the latest programming language...

It is problem solving. I can't count the number of times I have sat down with a business user to go over a problem they are having, only to have to unpack the Rube Goldberg machine of illogical processes, datasets, and dependencies they have created to arrive at the "solution" they are now asking for my assistance with...

I don't look to AI, at least not ChatGPT-style AI, to solve that. Most IT problems are XY problems: ChatGPT can solve for Y, but understanding X is the real trick.


The fear of language model AI takes me back to my original fear when I joined a data science team at a huge bank with just a bachelor's degree and five years of experience. I was embedded in the mess of databases created by hundreds of large bank mergers.

I was working with really, really smart PhDs and was doubting my hiring onto the team. I felt like a lot of these guys had forgotten more than I'd ever known.

During my time in that job, I realized that my experience in navigating the complex databases from working with the business and operational teams gave me an advantage. I took my domain knowledge for granted and was able to work on the problems at hand much faster than the highly intelligent colleagues who relied on IT/DBAs to write their queries. They often had to go back and forth for a couple of days to clarify their requests, leading to misunderstandings and delays.

In the end, I fit in just fine, held my own, and was aware of my individual talents. I enjoyed learning from the PhDs, and they were happy to teach from their backgrounds.

A lot of the experience from the big bank has 0 transferability to any other company. What made me stand out was that I knew which tables in the gigantic data environment were the best to use, I had hundreds of already-built queries for many different problems, and I was dependable. It got me really far before I went off on my own.


> A lot of the experience from the big bank has 0 transferability to any other company.

That's one of the beauties of IT work - for almost all of us, unless you shoot your own foot continuously, maybe 95% of our skillsets are easily transferrable not only to another employer, but also to another domain.

Of course, if you willingly position yourself in some tiny niche that makes you unfireable, or just follow your passions unchecked, you may end up in a similar situation... A word of advice: for an easy life, don't do that. Rather, accept potentially slightly lower pay and enjoy life more; it's damn too short anyway.


I mean, take the next ticket on your board, feed it into an AI and call it a day. You could even take another job (or why not a dozen) and just outsource yourself to an AI. Good luck!

In case it's not obvious, this is sarcasm. It doesn't work like that, nor is it likely to anytime soon. And even if that day comes in our lifetime (which I doubt), your boss or PM still couldn't just delegate tickets to the AI. The AI would need people with advanced knowledge of both the AI and the domain who understand how to supervise it, and that would most likely be you and me.


“Programming” isn’t writing code, it’s understanding computers on a technical level well enough to make them do exactly what you want them to do given a certain set of constraints.

The tools we use today are incredibly powerful and time-saving without AI. I can whip up a reasonably interesting mobile app with a data backend I could scale to millions of users in less than a week and basically for free. And yet there are more programmers than ever. Which indicates that maybe these tools empower human engineers instead of cutting them out of the loop.

Although just as software engineering is very different today than it was 20 or 30 years ago, in 20 or 30 years it will be very different…


>Although just as software engineering is very different today than it was 20 or 30 years ago, in 20 or 30 years it will be very different…

It's not actually very different than 20 or 30 years ago. The platforms are rather different, the end product is different too in some cases, but the development work itself is surprisingly similar. That doesn't mean there cannot ever be a real leap forward of course.


I deal with a bit of anxiety for a number of reasons, which I personally work on. I have a background in control theory; we even used early neural nets in my research. I'm somewhat familiar with how LLMs work. With all that said? I get mini panic attacks about what my job will look like in 10 years. To pay the bills I've been in mobile native development for a few years now. I genuinely panic when I see people say AGI is coming soon, or Sam Altman say that AI is coming for knowledge-based work first. Is my whole world about to crumble? Will I be OK? What can I do to prepare?

Then other days, I use chatgpt or bing and I see some silly response, or I remember my control theory background and I feel at ease again.

It’s a really horrible rollercoaster ride mentally.


My wife has an uncle who has been programming a long time, and he tells of a time when he believed that languages like COBOL and FORTRAN IV would make programmers obsolete, since now anybody could just tell the computer what they wanted it to do, without actually having to know anything about computers or programming.

Turns out he was kind of right and kind of wrong. On the one hand, most programmers today know virtually nothing about computers and programming in the sense that a programmer in the early 60s knew computers and programming. Yet programmers today are more prolific and in demand than ever before. I suspect we'll see something similar play out.


- difficult-to-automate or low-value dev work is already being offshored to cheaper locations

- a lot of dev work is just plumbing: config files, packages, APIs, and tedious, non-challenging stuff

- demanding roles have high barriers: leetcode, certifications, open source, GitHub, networking and so on - and it will only get higher, imo

- coding is treated as a basic skill taught in schools, like English or maths - not itself a differentiator

AI still does not have access to most businesses' internal context, terminology and so on - that is where we still have leverage.


Existential questions aside, this post is a non-article that adds nothing to this conversation. It's the kind of noise that I'd expect LLMs to replace.


There has been a shift in how we discuss this question.

For a long time, the answer was, "No jobs are at risk, AI can't compete in any scenarios. At best, it's a tool."

Now the answer is, "Only a few jobs are at risk, AI can only compete in a small range of tasks."

It's possible we're at the beginning of a hockey stick graph.

So what would it look like for AI to make the leap to mid level developer? It would have to understand:

1.) The codebase

2.) The technical requirements (amount of traffic served, latency target)

3.) The parameters (must have code coverage, this team doesn't integration test, must provide a QA plan, all new infrastructure must be in Terraform)

4.) The end goal of some task (e.g. integrate with snail mail provider to send a customer snail mail on checkout attempt if it was denied for credit reasons)

It would then have to make a design based as much as possible on the existing code style and library choices and follow it.

This is all probably possible now, although perhaps not for a general AI or LLM. But someone could build a program leveraging an LLM to provide a decent stab at this for a given language ecosystem.
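
As a sketch of what such a wrapper program might look like (a hypothetical outline only; call_llm() stands in for whichever completion API you use, and points 1-4 become sections of the assembled context):

    from dataclasses import dataclass

    @dataclass
    class TaskContext:
        codebase_summary: str   # 1.) condensed map of the repo
        requirements: str       # 2.) traffic served, latency targets
        parameters: str         # 3.) coverage rules, QA plan, Terraform-only infra
        goal: str               # 4.) the ticket itself, in plain language

    def call_llm(prompt: str) -> str:
        """Placeholder: swap in a real completion API here."""
        raise NotImplementedError

    def propose_change(ctx: TaskContext) -> str:
        prompt = (
            "You are a mid-level developer on this team.\n\n"
            f"Codebase:\n{ctx.codebase_summary}\n\n"
            f"Technical requirements:\n{ctx.requirements}\n\n"
            f"Team parameters:\n{ctx.parameters}\n\n"
            f"Task:\n{ctx.goal}\n\n"
            "Propose a design matching the existing style, then the code."
        )
        return call_llm(prompt)

Assembling that context well is the whole game, which is what the hard parts below are about.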

The hard parts:

Point 2 requires an understanding of performance which is a quantifiable thing, and LLMs up until now have been bad at making math-based inferences.

Point 3 requires the bot to either provide opinions for you (inflexible) or to be very configurable for your team's needs (takes longer to develop).

Point 4 requires a _current_ understanding of libraries, or the ability to search for them and make decisions as to the best ones for the job.

-----

What about extending the above for a senior role? Now the bot has to understand business context, technical debt (does technical debt even exist in a world where bots are doing the programming?), and other "situational factors" and synthesize them into a plan of action, then kick off as many "mid level bot" processes as necessary to execute the plan of action.

The hard parts:

Current LLMs are pretty uninspired when suggesting ideas.

Business context + feature decisions often involve math, which again LLMs aren't great at.


I would suggest a 5th: AI tends to be very poor at models and at security/error detection. AI is pretty good when there are no errors, no security problems, and no semi-malicious-ish users, but pretty bad under real-world conditions.

What exceptions should I be catching off this database connection, not just listing them but knowing even the concept that I should consider those error conditions? What could possibly go wrong with simple string concatenation when putting together a database query? Is there anything wrong with trusting the user to enter a quantity at a self checkout stand, and is there any input bounds checking to be done? (Like not permitting negative number quantities of produce, or not permitting the user to enter thirty different items in a row claiming them all to be cheap Idaho baking potatoes)
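
To make that string-concatenation point concrete, here's a hedged illustration (sqlite3 chosen purely for brevity) of the query an AI happily emits on the happy path, next to what a reviewer has to insist on:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (sku TEXT, qty INTEGER)")

    def add_item_unsafe(sku, qty):
        # Works in the demo; falls over for a malicious sku or qty = -30.
        conn.execute(f"INSERT INTO items VALUES ('{sku}', {qty})")

    def add_item_safe(sku, qty):
        if not 0 < qty <= 100:                 # bounds-check the quantity
            raise ValueError(f"implausible quantity: {qty}")
        conn.execute("INSERT INTO items VALUES (?, ?)",  # parameterized query
                     (sku, qty))

The unsafe version is precisely the "no errors, no semi-malicious users" code; knowing to write the second version is the part the AI doesn't volunteer.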

"Plz write me a hello_world.py" is pretty easy for an AI, but actually writing usable code seems quite challenging.

I am so old that I predate the internet, going back to 1981, and I remember coding without Google and Stack Overflow and similar resources. Life was different back then: if you couldn't google how to invert a binary tree, either you figured it out yourself (like a bad interview), or you researched paper textbooks, or you didn't do it. Likewise, AI will be a similar step; nobody will ever again type in by hand something like, for example, connecting to a MSSQL DB in C#. But much as being able to google the algo to invert a binary tree didn't put all programmers out of work, it seems unlikely that not having to hand-type simple stuff will put many people out of work.

I see the AI situation as very similar to the claim that photocopiers and NOLO publications will put all the lawyers out of business. Sure, some very small businessmen will get away with hand filing their own homemade incorporation papers without lawyers, and there will be a small amount of wage pressure, but overall they're not going to get wiped out.


The 7th point, which is unfortunately yet another lawyer analogy: ChatGPT for software development is kind of like a magic Nolo publishing, where you can ask for any very short programming textbook and ChatGPT will push it out.

However, the problem with being your own lawyer (or software developer) is that the AI will never exceed its previous accomplishments, but your goals and desires and requirements certainly will. So it's a rising-tide-lifts-all-boats situation. The game will be upped; you're not getting ahead by using AI, you're just not going out of business, and the jobs are going to be even harder and higher paying.

The cousin of this problem: if you have no idea what you're doing and randomly file legal papers on a best-effort basis, blindly trusting a liability-free book that may already be out of date, you'll eventually paint yourself into a corner trying to set up your own LLC, C-corp, trust, will, etc. And as per the above, the AI was already operating at its limit, so it won't be able to help; and you're past your limit, which is why you used the AI in the first place, so you won't be able to help yourself or sometimes even understand the problem. Which is why human lawyers still practice...

In summary, point 7 is that I expect a lot of money will be made by humans cleaning up after AI "accidents". The purpose of the tech is to "do stuff" beyond your skill level, which always turns into a disaster sooner or later.

A pretty good analogy from the pre- vs post-internet era: being able to blindly cut and paste code, perhaps an algo or an API, doesn't mean you can actually apply it correctly, use it, understand it, or troubleshoot it. And so it will be with AI; it's just "nicer looking" code to cut and paste. But everyone who has ever taught themselves an algo or an API knows they didn't really know anything when they cut and pasted; the real learning came after.


And that's just context. It also has to be capable of debugging its own code, and looking for resources when it doesn't have the knowledge to come up with a solution.


That's a good point, and I guess the meta point is that there are a thousand and one things I haven't mentioned here that it would have to do as well. To some extent perhaps we could solve this the same way a human would: by iteratively plugging the errors generated by the output back into the bot and taking some next step based on the suggestions generated.

But then we'd have to coerce the bot into generating structured responses that can act as next steps.
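
Roughly, the loop might look like this (a sketch only; call_llm() is a stand-in for whatever completion API is available, and "it ran" is of course a much lower bar than "it's correct"):

    import subprocess, sys, tempfile

    def call_llm(prompt):
        """Placeholder: swap in a real completion API here."""
        raise NotImplementedError

    def generate_until_it_runs(task, max_rounds=5):
        code = call_llm(f"Write a Python script that does: {task}")
        for _ in range(max_rounds):
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
            result = subprocess.run([sys.executable, f.name],
                                    capture_output=True, text=True, timeout=30)
            if result.returncode == 0:
                return code
            # Paste the traceback back in, exactly as a human would.
            code = call_llm(f"This script:\n{code}\n"
                            f"failed with:\n{result.stderr}\n"
                            "Return only the corrected script.")
        raise RuntimeError("still failing after max_rounds attempts")

And the fragile step is exactly the "return only the corrected script" instruction: the loop depends on the model actually answering in that structure.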


It doesn't have to do any of that - if an AI assistant enables a team of 3 developers to do what just last year needed 6 programmers, then it has automated away half of the jobs, even if those 3 people are the only ones who can debug the code and look for resources when needed.


I'll toss out a 6th point that AI as I've seen it so far is beyond useless at answering "I" questions, but OK at answering "groupthink" questions.

Ask AI if "I" should use AWS, GCP, Azure, private cloud, or ? Instead of an answer explaining what "I" should do you can only get a generic groupthink comparison of "well, Azure is pricey for DB compared to AWS" and no interpretation.


There is no AI. ChatGPT is a glorified and hyped Markov chain generator with zero understanding of what it is doing.


I asked chatGPT for a response:

"There's no such thing as human intelligence. We're just a bunch of organic machines following pre-programmed genetic codes and learned responses, with zero understanding of the true nature of our existence."


I'm probably a bit older than you. Even so, I plan to work as a programmer for roughly the next 25 years at least. I'm not worried about competition from AI becoming a meaningful factor in that timeframe, any more than actors in the past few centuries have been worried about being replaced by trained parrots.


Imagining that AIs would truly be better at writing code than humans one day, I wonder who would write the instructions telling the AI what code to write. It may be a whole new kind of job, kind of like programming instructions that the AI has to follow. Maybe we could think of a name for that. Programmers maybe?


Here's a scenario that might be possible in the future:

Person: I need a website to sell shoes. Here's my brand and logo. Here's a selection of what the product looks like.

5 seconds later.. AI: I've generated 5 prototype sites and deployed them to URL1, URL2 .. URL5. Please review and select one.

Person: URL3 looks great. Let's get that deployed and add all the shoes in this CSV. Monitor the site and make any changes to increase engagement and sales.


That kind of cookie-cutter site generation is already kind of possible without AI. The AI might dream up the design (which is cool, and certainly a threat to designers). But all the hard parts of actual logic that would be unique to this project still needs to be specified, implemented, verified.

Now you might say: but what's so special about a shoe store? Payment, inventory, shopping carts, databases - it can all be the same as every other shoe store!

But that's the key takeaway here. If you work on something where the next deployment has much in common with the last one, then your job is at risk. But those jobs have been eaten by SaaS for a long time; an "online store as a service" was a thing long before AI.

Could AI help take over this type of software development even more? Sure. But that's like machines taking over dull industrial jobs. We shouldn't worry about being out of a job, we should be happy that we can do more interesting things than what the machines can do, just like we could stop weaving fabric.


Why would I need or want an AI to do that? I'd rather pay 40 bucks to Shopify than have to deal with deploying and maintaining code generated by an AI.


Programming the programmers


I think it depends. As a programmer, you really should carefully watch the development, and adapt to it. It might be that the way programmers work in 10 or 20 years will really be quite different than how we work now. Maybe we just talk to some AI and give it high-level instructions on how to design some system, speak about bugs, etc, and the actual coding, debugging, etc is delegated to the AI. So your role transitions more into management.

Or maybe that's not realistic, and it will look somehow different. In any case, I think it will change, and you should be prepared to adapt to it.

If you ignore this development entirely, your job might indeed be at risk, or simply end up worth much less.


I think there are two questions.

The first is whether the current generation of AI is capable of writing finished software based on an end user's natural language specification. I think the answer to that is no. There are too many loose ends and these AIs are not good enough at logical inference and root cause analysis, especially when the logic is extralingual.

The second question is whether these AIs can make developers more productive so that fewer developers are going to find employment. I'm not sure. Yes they will make us more productive, but the demand for developers may well outstrip the added supply. If something gets cheaper, people tend to use more of it.


Yes. For the time being you are still relevant.

But AI is slowly taking over these tasks, pretty much the same way the conveyor belt revolutionized our industry.

It is difficult to predict how fast this will happen, but somewhere within a 5-10 year period high level programming will definitively be different from what we are used to today.

As the AI technology improves, programming will be a matter of dictating what you want, from what sources and how it should be displayed.

Code will become better, more stable, and accessible to all, making us all high-level programmers.

There will still be programmers, however they will most likely be much more specialized than what they are today.


I'm naive in the AI domain.

LLMs are trained on existing content, which is presumably original so far. This means an LLM cannot think beyond what it has learnt from past resources.

- Would it be safe to assume an LLM cannot produce new, original work?

- I worry the internet will soon be flooded with LLM-generated content, which will again be fed into LLMs to train them further. IMO such feedback will hardly help; in fact, it may make LLMs more rigid in their answers.

I know there is a parallel effort going on to detect machine-generated content so that it doesn't rank on top, but it is still in a nascent phase.


Much of coding today is doing the same thing all over again in a new environment or new language or tool. LLMs will have plenty of examples of this and will be able to generate those products faster and better.

Original code will be harder to duplicate. We can assume that those coders will use an LLM as a tool to speed up their development, but won't be able to trust it for mission-critical items. Although, writing this, I realize that an LLM may become an excellent way to bug-test.


I think that "mass-produced" code; the type that is fairly typical, for many companies, these days, using systems like React Native, Xamarin, Flutter, etc., are almost certainly going to be replaced by auto-generated stuff.

I guarantee that the C-Suite people are already salivating over the thought of firing all their programmers.

For people like me, probably not - but then, my skills don't seem to be in demand so much.


Does the AI know your product?

Does the AI know your database?

Does the AI know your design, device, legal and other constraints?

No, and won't for any foreseeable future.


The foreseeable future is quite short though, and getting shorter. You didn't predict the success of LLMs even a year before they hit the mainstream. Right now you can't predict the next big AI thing, what it'll do and when. To me the future stops being "foreseeable" and becomes very foggy (which jobs still exist and so on) a few years from now at the longest.


> The foreseeable future is quite short though, and getting shorter.

Not really. The answers for my questions are as far away now as they were 15 years ago.


The junior programmer we would hire doesn't know any of that either; they have to get all of it from someone else on the team. So perhaps that job can be replaced by giving all of that info to some cheap computer instead of a relatively expensive person, as even junior programmers cost quite a lot.


Today's generation of AI, yes.

I worry about an AI that will LEARN your product and LEARN your database and LEARN your tribal knowledge. As a simile: I fear we are like the horse in the era of Henry Ford.

On the other hand, someone still needs to interact with ChatGPT or the like. Someone needs to keep an eye on quality control. Someone needs to ASK ChatGPT to write the code. It's like programming at yet an even higher level. I just worry that, with that increased efficiency, there won't be troves of teams writing code like today.

On one foot, ChatGPT doesn't write perfect code and can introduce complex bugs that only the best of the best will be able to decipher, so there is that.

On the other foot, when you look at what Joomla, WordPress, and Squarespace did to website design, it is a very similar trajectory. There will always be niches.


If an AI can "rationalize" your product, know your database, know your design, legal, and other constraints, at that point, what employment role is not at risk?

You wouldn't need project or product managers or designers. Heck, at that point, you don't need accountants either. Maybe you have a team that supervises the AI - but, really, you could have an AI do that. So you have the business owner(s) and zero employees. No more tech companies as everything can be done by AI, including CEO work.

What I actually expect: better tooling. We were not replaced when we went from punch cards to C - it just increased demand for programmers. AI may be the next tooling, but I only see more opportunity: solutions happening faster.


> Today's generation of AI, yes.

No. They have no idea what my codebase is, how to translate concepts like "audio is limited to users on the free tier" into code, etc.

> I worry about an AI that will LEARN your product and LEARN your database and LEARN your tribal knowledge.

It won't, not for any foreseeable future. It fails consistently even at the simplest of tasks. Are you afraid you will be made irrelevant because we will be able to teach it those simple tasks?

> when you look at what Joomla, WordPress, and Squarespace did to website design, it is a very similar trajectory.

What did they do? Did they make you obsolete?


To approximate the validity of this statement, you can ask yourself "What was the earliest time, when I expected AI to be where it is today?"


You're assuming that every single year you're going to have major leaps in this tech. And that in any foreseeable future the leap will be such that an AI will be useful enough to answer those questions.


I am assuming neither, and while interpreting the present is hard, correctly projecting into the future is a lot harder.


Look around and see how many programmers' jobs have been replaced by AI at your office.

None?

Search the net to find how many companies are cutting down their programmers' workforce and replacing it with AI.

Still none?

Search for anecdotal evidence on forums to find any programmer's job been replaced by AI.

All talks point to promises in the "future" only?

Rest assured. It happens when it happens.


Maybe there will be a time when programming in high-level languages is seen the way punch cards are seen today.

Even if that happens, maybe the ones who know how programming languages work will be the good ones, just as today a good programmer knows about OSes, architecture, or compilers.


How many programmers are there for large language models? It seems like there are very few AI researchers compared to the number of programmers.

There will be more people using AI to program computers, but the AI interface may be so easy that it's no longer a skill.


What kind of maths is required? I am not a practitioner, but I try to stay up to date. I read ML papers, and it seems like they only require pretty basic maths (as in high-school level): linear algebra, calculus, and stochastic calculus. It seems to me you can go a long way with just understanding the chain rule and dot products.

I am trying to build my own Language Model using Pytorch and it is much easier than a lot of the stuff I've had to do as a traditional SWE over the years. My understanding is that the difficulty comes from training massive models that cannot fit on a single GPU efficiently, so more of a software problem than a maths problem.
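
For what it's worth, the core training loop really is small. Here's a toy next-character predictor (my own sketch, on random stand-in data) where the dot products live in the layers and the chain rule is the backward() call:

    import torch
    import torch.nn as nn

    vocab, dim = 128, 32
    model = nn.Sequential(nn.Embedding(vocab, dim),  # lookup: index -> vector
                          nn.Linear(dim, vocab))     # dot products -> logits
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

    text = torch.randint(0, vocab, (1000,))          # stand-in for real data
    for step in range(100):
        x, y = text[:-1], text[1:]                   # predict the next character
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()                              # the chain rule, applied for you
        opt.step()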

Am I suffering from the Dunning-Kruger effect? Or is the genius of the field in the combination of this limited set of techniques?


I have to deal with POs, PMs, helpdesk people, clients changing requirements every week... Programming is only a tiny part of my job; understanding what people want now, and what they will want in 6 months, is the real trick.


Turns out Tom[0] gets the last laugh after all. The goddamn people skills will be what counts in the end.

[0] https://www.youtube.com/watch?v=hNuu9CpdjIo


Getting a strong John Henry vibe from programming (and law degrees). Things are going to get interesting... Tinkering with calculus and ML models feels like the eventual future of logic tasks.


Surely if there’s one task that’s AGI-complete, it’s programming. So if the AI makes programming obsolete, you probably have bigger things to worry about than job security.


It's like a carpenter asking if they're still relevant now that there are intelligent hammers.


Yes, I am.


For fun, I asked ChatGPT what it thought:

https://zmichaelgehlke.com/autocontent/essay-sde.html



