[dupe] It's probably time to stop recommending Clean Code (qntm.org)
120 points by azth on Nov 12, 2021 | 88 comments


Good article.

But it was discussed, with 800+ comments, five months ago:

https://news.ycombinator.com/item?id=27276706


> Martin asserts that an ideal function is two to four lines of code long

This is the kind of assertion that's going to need a very compelling, realistic example in order to avoid being seen as anything other than some sort of purely theoretical fantasy.


I’ll take a 100 line commented function over one that delegates to a dozen other methods that aren’t used anywhere else. The more functions you add to the mix, the less clear the logic becomes and the easier it gets to break things.

Abstractions have their place, but the evolution of my 15-year career has led me to use them as a last resort, when I find myself needing to copy code more than twice.


To me, that's a procedure, not a function.

It's a pity that so few of our programming languages distinguish between these two quite different concepts.


What is the difference between a function and a procedure to you? I assume it's not just about whether something returns a value or not.

The only language I use that differentiates between functions and procedures is PostgreSQL, and to be honest I find the distinction pretty annoying in practice. One can return values, the other can control transactions. So when refactoring code you sometimes need to make a function a procedure or vice versa and it's a pain. (As a consequence, I try to always use functions to make things easier)


Fortran distinguishes between the two as well.


I just don't understand how 100 lines comprising 10 or 20 functions in the same file (as in the cited example) is any different than 100 lines of a single function that does the same thing. Is it just that the former has one fewer tab per line?

This is the sort of advice that can quickly become a thought terminator because the writer has so much authority.


The difference is that one big function naturally encodes the ordering of and dependencies between the things that happen inside it, because it's linear and has nested braces.

10 or 20 small functions with no nesting still have all the complexity, but none of the structure that helps the programmer understand it.

Functions can be called in any order, any number of times, and you have to trace the full call stack in your mind and keep it there to get the understanding you'd have had just by looking at the linear code.
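
A minimal sketch of this in Java (all names hypothetical): the inlined version encodes step order in line order, while the extracted version makes you trace the call chain to recover the same facts.

  class Checkout {
    // Inlined: ordering and data flow are explicit in the source text.
    static double checkoutInline(double price, int qty) {
      double subtotal = price * qty;           // step 1
      double taxed = subtotal * 1.08;          // step 2 visibly uses step 1
      return Math.round(taxed * 100) / 100.0;  // step 3 visibly uses step 2
    }

    // The same logic split three ways: each piece is trivial, but to
    // learn that tax is applied before rounding, you now have to trace
    // the call chain and keep it in your head.
    static double checkout(double price, int qty) { return rounded(taxed(price * qty)); }
    static double taxed(double subtotal) { return subtotal * 1.08; }
    static double rounded(double amount) { return Math.round(amount * 100) / 100.0; }
  }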


The thing about "functions should be X lines long" is that an appropriate value for X differs substantially from industry to industry, language to language, and domain to domain. My game code (mostly C#/unity stuff) often has longer functions with more commenting, whereas my server code (Go stuff) has virtually none of this. I definitely write 100-line long functions with lots of commenting in C#. I never ever, ever do that in Go. Maybe I would if I was writing a game in Go, but for Go I'm writing servers, and I never ever wind up with code that looks that way in server code.


It seems fundamentally nonsensical. It’s like saying a sentence is ideally 7-14 words long.

A sentence should be as long as required to concisely get across one idea.

“A function should be as long as required to concisely do one thing” might be a better sentiment. But even then it will have exceptions.


It's interesting because it contradicts the more nuanced opinions of renowned developers like John Carmack.

http://number-none.com/blow/john_carmack_on_inlined_code.htm...

Setting aside the lesson, I find Carmack does a wonderful job articulating his argument, whereas Uncle Bob puts a lot of weight on appeals to authority.


It's worth noting that the book opens by explaining that none of the rules within its pages should be treated as iron-clad law. Rather, they should be treated as guidelines. They should be followed more often than not, but a skilled designer/developer should know when it's appropriate to break them. Keeping functions short is usually good advice, as it helps prevent developers from breaking more important rules such as "one function should do one and only one thing". When writing a function that's becoming too long, memory of this advice should raise a flag in a developer's mind to reconsider restructuring their code. Reconsider. Not act as a robot and restructure to hit some arbitrary limit. The book is very clear on this.

It's been a while since I read Clean Code, but I seem to recall the author giving numerous examples of bad code where it was obvious the developer attempted to condense their code into as few lines as possible at the expense of clarity. This immediately preceded the advice to keep functions short. The author does this as a warning not to follow the advice blindly. Commenters on this thread are cherry-picking advice from the book and ridiculing it by presenting edge cases where the advice would be bad. To which I imagine the author facepalming and saying "yes, that was the entire point".


Functions hundreds to thousands of lines long, pure spaghetti code, used to be the norm early in my career.

I was certainly guilty of it at times.

The clean code movement has drastically improved the quality of most projects I’ve seen.

I see plenty of garbage but nothing like 20 years ago.


Abstract spaghetti is still spaghetti. I prefer having long messy functions for linear reading rather than having to read down call stacks of messy but smaller functions. I get lost way sooner trying to keep the call stack in my head. Easier to debug too: you can place the breakpoint exactly where you want, and don't need to guess which function call order is going to take you somewhere.


Good point. Super abstract async and messing with global state. Not so fun times


There's a spectrum from "pure spaghetti" to what I'd call "clean code enthusiast".

The sweet spot is somewhere in the middle.

I've definitely refactored code which was done by someone who was clearly a clean code enthusiast where they couldn't generate enough single line methods.

I took a lot of those single-line methods, which were often used in only one place, or whose names were nearly as complicated as the expressions they replaced, and in-lined them. That did not produce spaghetti code, and it eliminated the cognitive load of the extra methods.

You have to account for the cost of extracting a method, which is always additional cognitive load, plus the possible cost of a reader scanning through the source code to look for the definition of the method (even if it's still in the same file).

When you are cleaning up clear spaghetti code the cost of the cognitive load is generally going to be much less than the benefit of removing the impenetrable spaghetti.

But at some limit you no longer have spaghetti code yet you can still continue to pull out methods which are mostly useless abstractions which give the reader more names and more overhead. At that point you're just inventing a complicated domain specific language composed of your methods which are adding pure cognitive overhead to the reader and offering nothing in terms of actual code cleanup.

Also not every way of breaking up spaghetti code is equivalent. If you use complicated abstractions in an attempt to avoid code duplication you can wind up with code that is deduplicated and abstracted and not spaghetti but completely impenetrable. I've had success with removing abstractions completely producing simple, clean but duplicated code, and then approaching the de-duplication problem by producing simple instead of complicated abstractions.

And when you extract a method you've invented it so you're instantly familiar with what it does and your rationale for naming that way, and it seems perfect and good. When someone else approaches your source code, your subjective rationale for why something was abstracted into a method and what it was named and what it does may not be anywhere near as intuitively obvious. You can generate an affinity for your own abstractions due to the familiarity of inventing them that blinds you to how difficult it is to read your own code for someone with no experience with it.

The best code isn't measured by line count of methods or anything like that, but by the lowest cognitive overhead on the part of the reader. All the objective measures that are offered are really trying to get at that subjective measurement. And every single objective rule about writing clean code has edge conditions where you need to violate it from time to time to produce actually easy-to-read code. All rules about clean coding are going to be terrible if they're followed 100% of the time. Some rules are better and should be followed almost all the time; other rules are closer to 80/20 and should be fairly commonly rejected. When you reject a rule, you should understand why the rule is there, what it costs when you reject it, and what benefit you're getting by rejecting it.


I've seen a lot of Go code where things are chopped up or abstracted on purpose just to make it look nice on "paper", but where it is actually really hard to follow what the code does.


A tip from someone who has programmed professionally in probably 10 different languages by now: this happens in every single language where you can create abstractions, not just in Go. It may be easier or harder in some languages, but crap programmers exist everywhere.


Golang Channels are like this, in my experience. Like little tubular black boxes where data goes in this Wonka-esque tunnel of wonder.


I thought it was me being new to Go that made some code bases hard to figure out.


…until somebody makes it a lint rule, and you are tasked with making the examples. :-)


The easiest way to write unreadable code is by splitting your functionality across 20 clean small functions.


It depends. Have you ever read a parser made using combinators? It would be composed of hundreds of small, compact functions called "combinators" that all take the same input (a character or token stream) and produce the same output (a partial parse tree).

They are readable because you can look at any one part and understand what is going on. Referential transparency is important here, as you can be assured each combinator has no strange side effects. Composability is also important, as it gives you an idea as to what each function call does without having to look it up.

If your functions have side effects and 100 different interfaces, then yes, that's a mess and hard to understand. If your functions have predictable interfaces and are referentially transparent, then that's a completely different scenario.

I mean, pretty much every Haskell program ever is split between a number of clean small functions. But for someone who understands Haskell it's hardly unreadable. Quite the opposite.
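
For a flavor of the style outside Haskell, here's a hedged, minimal combinator sketch in Java (all names hypothetical): each piece is tiny, referentially transparent, and composes.

  import java.util.Optional;
  import java.util.function.Function;

  // A parser consumes input from a position and either fails or
  // yields a value plus the next position; no side effects anywhere.
  interface Parser<T> {
    Optional<Result<T>> parse(String input, int pos);

    record Result<R>(R value, int next) {}

    // Combinator: match a single literal character.
    static Parser<Character> ch(char c) {
      return (input, pos) ->
          pos < input.length() && input.charAt(pos) == c
              ? Optional.of(new Result<>(c, pos + 1))
              : Optional.empty();
    }

    // Combinator: try this parser; fall back to another on failure.
    default Parser<T> or(Parser<T> other) {
      return (input, pos) -> {
        Optional<Result<T>> r = parse(input, pos);
        return r.isPresent() ? r : other.parse(input, pos);
      };
    }

    // Combinator: transform the parsed value.
    default <U> Parser<U> map(Function<T, U> f) {
      return (input, pos) -> parse(input, pos)
          .map(r -> new Result<>(f.apply(r.value()), r.next()));
    }
  }

  // Usage: Parser<Character> aOrB = Parser.ch('a').or(Parser.ch('b'));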


As someone who regularly inherits functions that are 300 lines long, I'll take 20 clean small functions over that any day.


I am reading this comment section in disbelief. If I were into conspiracy theories, I’d think it’s a spontaneous collaboration to misguide developers and ensure continuous job security for those of us whom desperate businesses would have to hire to clean up and maintain Copilot-generated single-function spaghetti.

Modularizing code into focused units is a big part of how you end up with sustainable software (another important part is understanding the requirements).


One of the easiest ways to make code clearer is to have well-named single-purpose functions that are used in many places.

One of the easiest ways to obfuscate code is to have control flow obscured by hiding it in functions scattered around the codebase.

Getting the former without the latter can actually be difficult, particularly with large teams.


Only if said functions are illogically split along non-obvious boundaries. Very frequently, though, I find I can break concepts down to two or three statements in a way that makes things very concise. But it's not a hard rule either, because there are cases where long and drawn out works as well.


Even logical splits impede readability, because they require tracking and navigation during reading. Every new API has a cost.


I agree. If a function is only ever likely to be used in the one place it was pulled out of, you're probably better off just using a comment instead.


> Even logical splits impede readability, because they require tracking and navigation during reading.

IME, most reading of code has a specific focussed goal for which logical splitting allows not reading lots of irrelevant code you'd have to read if it was a non-abstracted mass.


I take this approach for internal divisions within the same file. For external APIs, you're operating under a completely different set of constraints. As I recall, similar advice was in the original Clean Code: context guides which principles you apply.


That advice is absolute garbage. That would be like writing a book where you can only have four sentences per page. Maybe fine for a kids' book, but any real author trying this would be laughed at, rightfully.


These little functions seem like writing a book with just [4]

4* footnotes


My main problem with Clean Code is Uncle Bob's technique for refactoring big functions into classes.

When you have a long pure function that has several local variables and changes them in several places, you can't trivially split it into smaller pure functions (because you would need to return several values from each of them).

You could return Pair<X,Y> or similar, but that's clumsy in Java. Sometimes these values form a single consistent type; then it's simple: make a class out of them and return that. But often they don't, they are just conceptually independent parts of the algorithm. For me that just means the function is cohesive: either change the algorithm or leave it alone.

Uncle Bob's advice is to make a new class out of that function, change the local variables into private fields, and split the code into methods of that class, with side effects modifying these variables all over the place.

This results in exactly the kind of code this article complains about, and I agree it's awful. We managed to change code that was explicit about what it does, with no side effects and no easy way to break it accidentally, into a mess of small interlocking surprises. Whole classes of programming errors that were basically impossible previously are now easy to make.

Can this state even happen when I'm in this function? Should I check for it? Can I call this function after I called that other function? Can I change this variable in this function after requirements changed? The answer after the refactoring is "Dunno - solve this intricate puzzle to find out."
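
A hedged illustration of the two styles, with made-up names. The pure version returns its handful of results in a small record; the refactored version follows the transformation described above, with locals promoted to mutable fields.

  import java.util.List;

  // Pure version: locals stay local; the results travel together in a
  // small record even though they're conceptually independent values.
  record Stats(double mean, long outliers) {}

  class PureStyle {
    static Stats analyze(List<Double> xs) {
      double sum = 0;
      for (double x : xs) sum += x;
      double mean = sum / xs.size();

      long outliers = xs.stream().filter(x -> Math.abs(x - mean) > 2.0).count();
      return new Stats(mean, outliers);
    }
  }

  // The refactoring objected to: locals become mutable private fields,
  // steps become void methods, and the ordering constraint
  // (computeMean before countOutliers) becomes invisible at call sites.
  class Analyzer {
    private List<Double> xs;
    private double mean;
    private long outliers;

    Stats analyze(List<Double> xs) {
      this.xs = xs;
      computeMean();    // must run first
      countOutliers();  // silently wrong if ever called before computeMean()
      return new Stats(mean, outliers);
    }

    private void computeMean() {
      double sum = 0;
      for (double x : xs) sum += x;
      mean = sum / xs.size();
    }

    private void countOutliers() {
      outliers = xs.stream().filter(x -> Math.abs(x - mean) > 2.0).count();
    }
  }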


The problem with almost all of Martin's advice, and the advice of all those other full-time conference speakers who apparently never actually write any code, is that it's entirely unsupported. It's random opinions thrown out there.

Where is the data or evidence for his ideas?

Nowhere.

Even a few that call themselves 'scientists' still don't offer any evidence. I don't know why we tolerate it. Why doesn't someone shout out 'where's your data' at their talks?


90% of the "good" advice from Uncle Bob is unattributed restatements from Kernighan, Plauger, Ritchie, Yourdon, Myers. Big revelations like "DRY" and "SOLID" are buzzwordy rehashes from The Elements of Programming Style (1974) or Myers' books on composite software design.

I've wondered what experience and successes "Uncle Bob" has on his resume that makes him an authority. Whenever I read his articles I get the feeling he's a pompous fraud.


Because it's not practical. There are few people in a position to do research on entire software teams.

Also, even when it's possible to research something, external validity (whether it applies in another situation) is often a matter of opinion anyway.

We still need ways to share what we've learned from experience. But I think case studies, telling stories about what we learned in particular situations, are about the best we can do.


Can Martin point to _any_ major successful project that he's influenced with his ideas?

If not, then that's clearly an astronomically enormous red flag. If his ideas are that good he should be shipping successful stuff left, right, and centre. I don't think he is.

I'd listen to someone who's shipped great things. I don't think he's shipped anything substantial at all, let alone anything great.


I don't know what he's been doing either. I guess if anyone is curious, they could check it out?


The problem I have with Clean Code and other design philosophies is that they are so test-driven. TDD adherents will tell you that lots of unit tests will make the code clearer, but I'm not convinced. Given all the hoops people have to jump through with dependency injection and test-code overhead, I'm no longer convinced that all this clean code is better than the stuff we wrote 20 years ago that was just tested at the final-product level.


My own experience is that companies that have some sort of formal automated testing have MUCH higher quality code. A testable codebase has to be modular; people spend less time chasing bugs and can refactor more confidently. That being said, there are some over-the-top testing philosophies that are actively harmful rather than helpful.


Sure, but do you think the formal automated testing could be integration tests, avoiding unit tests altogether? Especially for microservices I think this could be the best way to go. So much simpler, and it confirms you're getting what's in the spec.


I think that "good coverage" with integration tests certainly is possible. But to get that coverage /confidence you typically will have to make other test enabling code that while not being DI and stuff that affects the production code still is code. I was at a place where it was practical to drive most of the testing this way and we had to produce fragile and complicated code to set up the database in the proper way. And of course the tests were too slow so we made lots of effort to optimize. And in the end the coverage was too low but we had enough confidence to release often. That confidence was partly due to our customers being so deep in our product without a proper competitor so the bugs the customer found was not a big deal! :)


Integration tests can become heavy for complex systems, and unit testing then gives you a faster test-modify loop when developing code.


I think they could be any kind of test that makes sense, so yes. I think the important bit is the automated part.


In my experience, TDD adherents tend to work in languages with no type systems (JS, Ruby, Python) or languages with types of insufficient expressivity to enforce invariants in the code without significant effort (Java). Unit tests are ultimately failings of your type system; a sufficiently advanced type system with dependent types would make them entirely redundant, IMO. Integration tests are still useful, as a compiler can hardly be expected to check the validity of cross-application-boundary concerns (unless?).

That said, I think TDD is a cargo cult. I think a better approach is to determine on a case-by-case basis if a test is useful for whatever you're working on and developing a sense for that is part of becoming a better software developer. Things like coverage metrics completely obliterate this nuance and lead to some of the most obvious, ridiculous tests I've ever seen.
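
To make the types-over-tests claim concrete, here's a small hedged sketch in plain Java (hypothetical names), well short of dependent types: encode "non-empty" in a type, and the empty-input unit test becomes unnecessary by construction.

  import java.util.Comparator;
  import java.util.List;

  // Hypothetical sketch: a list that cannot be empty by construction,
  // because the head element is mandatory.
  record NonEmptyList<T>(T head, List<T> tail) {

    @SafeVarargs
    static <T> NonEmptyList<T> of(T head, T... tail) {
      return new NonEmptyList<>(head, List.of(tail));
    }

    // No empty-list failure mode exists, so there is nothing to unit
    // test for it; the compiler rejects the bad state outright.
    T max(Comparator<T> cmp) {
      T best = head;
      for (T t : tail) {
        if (cmp.compare(t, best) > 0) best = t;
      }
      return best;
    }
  }

  // Usage: NonEmptyList.of(3, 1, 4).max(Integer::compare) yields 4.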


> I think a better approach is to determine on a case-by-case basis if a test is useful for whatever you're working on and developing a sense for that is part of becoming a better software developer.

Do you have any hard evidence that this is actually a skill that some people have and others don't? How do you even measure it to verify?

I'm genuinely curious -- I've been thinking in these terms myself recently, too, but my literature search came up empty.


Yes, I hate the mentality that unit tests are the holy grail. I work in data science. A lot of data scientists are very mediocre programmers. The solution was for software engineers to assert that data scientists at our org should work more like them. Test coverage is pushed as a critical metric. But it's really a terrible fit for what we're doing. Data science QA needs to track the state of the data at different steps in the pipeline. Unit tests, imo, are better suited to use cases that deal with user input, where you can concretely cover a good spectrum of expected states.

Not all unit tests are bad of course, but many in my context, imo, are unhelpful.


I've grown to rely much more heavily on integration-level testing for that exact reason. One still has to mock certain dependencies, such as 3rd-party APIs, but figuring out how to do that once is far more reusable, imo, than figuring out how to mock every single class you're interacting with.
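
For what it's worth, a hedged sketch of that reuse in Java (all names hypothetical): one interface at the third-party boundary and one hand-rolled in-memory fake shared across integration tests, instead of a mock per collaborator class.

  import java.util.HashMap;
  import java.util.Map;

  // One boundary interface for the external dependency.
  interface PaymentGateway {
    boolean charge(String customerId, long cents);
  }

  // One reusable in-memory fake; integration tests exercise the real
  // application code and assert against what was recorded here.
  class FakePaymentGateway implements PaymentGateway {
    final Map<String, Long> charged = new HashMap<>();

    @Override
    public boolean charge(String customerId, long cents) {
      charged.merge(customerId, cents, Long::sum);  // record for assertions
      return true;
    }
  }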


Now I'm feeling dumb but... why would you ever need to mock every single class you're interacting with?

I'm often test-driving stuff with unit level tests and the only times I have to mock collaborators are when the collaborators are

- third party services,

- badly designed, or

- not built yet.

This has nothing to do with unit/integration level testing.


I was thinking of access to files, databases, HTTP endpoints, etc. that need to be mocked if the unit you're testing has dependencies that do those things.

With integration tests, my expectation is that we're using real dependencies with just fake data, which is often easier to set up.


If anywhere close to 100% of your classes access files, databases, HTTP endpoints, etc., I still think you have badly designed code. (And if that code had been test-driven, maybe it would have had fewer classes with external dependencies.)


I write concise procedural PHP with 0% test coverage for all my personal projects. They handle easily 2-3x the traffic of $dayjob and don't fall over on a regular basis.

We need to go back to writing code like we are on resource constrained systems, which forced you to be explicit in what you did and think about why you were doing it.


Perhaps, if you release something once for one customer and never look at the code again. If you actually want to maintain something and release it multiple times, this is very untrue. Let us say that a feature takes twice as long to implement the TDD way: without tests it takes N, and with tests it takes 2N. Sounds like a clear win for omitting the tests, right? No, very wrong. How do you even know any of your previous features still work? Testing? Not automatically, if I get the gist of what you are saying. Manually? Oh dear. You do realize that it is very easy for code to have unintended consequences, right? So with every release, any feature could be broken, and you have to test them all manually. If there are m releases, that's m*N of manual testing, and if m is some proportion of N (say you release every 10 features), development time suddenly becomes on the order of N^2. Ow. The TDD development time is still on the order of N. Could it be that this statement is actually incredibly short-sighted?

Then there is the code quality issue. The stuff that was written 20 years ago was hard to refactor because no one dared touch it: who knows what would break? So basically every code base would degrade into a mess and become harder and harder to maintain over time. A code base that is maintained with automated tests can be improved as time goes on, with relatively low risk.


mocking and dependency injection (what a stilted phrase!) seem to have been championed by "Uncle Bob" (definitely not my uncle).

perhaps the pertinent take-away is "be sure you test thoroughly" rather than "do things my way and you will mess your code up egregiously, likely increasing the cognitive load to work with it beyond sensibility, and maybe test thoroughly as a side-effect thereof".


Whether and how to use mocks is a different decision than how much to test. There are people who write a lot of tests and also dislike mocks.


I kind of agree on some level. The amount of work required doesn't necessarily justify the benefits. Especially for non-critical features.


With enough time and effort, anyone can write code that adheres to some coding guideline. Meanwhile, a disproportionate number of developers struggle to provide practical, concise documentation of their code. I don't want to dive into Mr Clean's legacy code to find out how to call the stuff needed to develop some feature because he's too smart to write it down somewhere. I'd rather have a more informal codebase where the developers have taken the time to explain what each relevant thing does and the quirks of each non-trivial function.


Sandi Metz, a Ruby lecturer, also advocates five-line functions. I'm keen to have functions limited to 24 lines, because as someone with significant vision challenges who uses large fonts, that's as many lines as I can get on a screen, and I want to see the whole function at once.

But I also think it's worth asking "what language are we in"? When I'm writing in C, I'm going to be maxing out that 24 lines, because it's sufficiently low level that seems appropriate. With powerful languages like Python and Ruby, you can accomplish so much with 3 lines of code, maxing at 5 doesn't seem terrible. With Java (yuck, not a fan), one of the problems is there's so much boilerplate for every function that it just seems like the screen is filled with non-informative blah-blah-blah.


Most of the discussion is about the rather secondary topic of function length, but the actual article is much more in-depth. I love the dissection of the render method (which is, in fact, only 4 lines long):

> So... imagine that someone enters a kitchen, because they want to show you how to make a cup of coffee. As you watch carefully, they flick a switch on the wall. The switch looks like a light switch, but none of the lights in the kitchen turn on or off. Next, they open a cabinet and take down a mug, set it on the worktop, and then tap it twice with a teaspoon. They wait for thirty seconds, and finally they reach behind the refrigerator, where you can't see, and pull out a different mug, this one full of fresh coffee.

Worth a read for the instructive value of the critique - whether you agree with either point of view, much is to be learned in the "space" between different viewpoints.


I love succinct analogies like this. Anybody could understand that.


In the context it was written, it was pretty good advice. Maybe a bit too far on some points, but it was a solid step in a good direction. This caused the book to be praised. But we tend to elevate praised books well beyond what's reasonable, dogmatically follow their advice even when they cause problems, and recommend them even though they're responses to a discussion that nobody has been having for 15 years. Same with Gang of Four and a bunch of other books.


"provided that we aren't too dogmatic about how we define 'one thing'"

And herein lies the problem. If I define a function that calls two other functions, does the caller do one thing, or is it two? The main() function ultimately does everything. Does that mean that main() is the worst function in an application?

I studied programming in the 80's, and I seem to recall that evidence suggested that large functions were not necessarily more unreadable nor unmaintainable than small functions.

So in the end, we are forced back into a circular argument: good programs are written by good programmers, and bad programs are written by bad programmers. How do we know that they're good programmers? Because they write good code!

Parenthetically, I've always been suspicious of his "Uncle" moniker.

"Functions should have no side effects." So, Haskell it is, then. No printf for you, monads is where it's at.

"output arguments are to be avoided in favour of return values." So what about functions that affect multiple values? Are we supposed to use structs everywhere? What about error conditions? Are we supposed to use hybrid variables, which are often regarded as a source of problems?

No silver bullet.


I'd say it's long past time to stop recommending "clean" anything. Every time I hear someone talk about "clean" writing, "clean" design, "clean" code, or anything else, I want to vomit, because it's meaningless noise. You challenge that same person to define clean and they spew out nonsense. Production of any creative work, whether it's architecture, code, writing, design, etc., is hard and messy. Compromises are made. I can't help but think of Christopher Alexander, i.e. The Timeless Way: yes, there is a something there, and it tends to be minimalistic, but is that "clean"? I just want to have conversations without pithy language.


Clean Code is specifically a book about how to write code well. This isn't criticizing writing clean code in general, but specifically what that book recommends programmers do.

I wouldn't say the mission of trying to create "clean" code is a bad one. Certainly there is terrible code to read and maintain, which implies there are good ways to write code.


I completely understand that it's about a specific book. My beef is with the fluffy, imprecise language that gets used when discussing quality standards.


“Modern” is the same. It means basically fuck all.


Modern normally means either that it's an unattractive building/artwork/sculpture or that it's implemented in JavaScript.


Amen brother / sister! "Premium" is another that chaps my hide.


This is why I never understood or liked NPM. Apparently it's the DRY methodology that I don't like, one thing per function.

This is one step closer to understanding a Bash coder: just get things done, because you don't have the time to worry about how it was done. Maybe a bit of optimization, but optimizing functions or operations that take long periods of time is much more important than optimizing string or numeric loops that already run in a split second. Readability and the ability to debug are also important.

Stop worrying about the Purity Of Essence of your code, you'll end up turning on your own kind while muttering something about the fluoridation of the code supply.


I've had problems with the ideas in Clean Code for a while, but I couldn't really effectively communicate them, until I watched this video by Brian Will. I think it is excellent: https://www.youtube.com/watch?v=QM1iUe6IofM


I used to make a function only if a block of code was used in more than one place. But now, if performance isn't an issue (for example, a block of code that isn't in a tight loop), I'll take a chunk that does one thing and make it a self-descriptive function for readability purposes.

Instead of:

  function foo() {
    // this does abc
    big block of code

    // this does def
    big block of code

    // ... etc.
  }

and having a function I have to scroll pages through, I now do this:

  function foo() {
    DoABC()
    DoDEF()
  }


Yeah, what we really need is The Elements of Programming Style written in C, Python, or Go.


if i could distill clean code into one word it would be “small”. small functions, classes, files, changes, and iterations. i don’t see anything wrong with this but understand there are differing points of view. what i do think is important is that developers get placed on a team that is aligned with their taste. how you feel about uncle bob can be a revealing interview question.


The only real metric worth using when discussing such issues is WTFs per LOC.


Does your code work? Can your interested peers tell what's going on?

If yes, great, your code is fine. Clean code is a meme foisted upon us by bloggers and the equivalent of dev influencers.

Edit: Can we have one thread without the political topics overtaking the discussion? It's Friday, come on.


Can your code be easily changed, or will a change in one place cause a domino effect all over your code base? Does a simple change in requirements require a change in one (or a few) places or is it a "shotgun surgery"? Is the difficulty of changing your codebase negatively affecting the viability of your business?

This type of attitude to what's "fine" is immature and in my opinion represents a clear and present danger to businesses. I have seen several companies brought to their knees because their codebases were so difficult to change.

And that's not even touching on issues like security, performance, reliability, etc.

Code quality matters.


Goes a bit deeper. Can you replace a source of data with another in a few lines (besides the new implementation) and not break a whole chain of dependencies and associated unit tests? (Apart from "does someone know what's going on?") I.e., code at scale.


My criterion is: how long does code last through product iteration and spec changes? If it's immediately thrown away every time requirements change, the code probably ain't that great. If it is relatively easy to evolve, something has gone well. However, if you can't change the code because it's such a complex web of interdependencies that you must instead modify the requirements, the code is very bad.


Pleasantly surprised to see that this is a technical discussion instead of a critique against Bob's political views. I went into it fully expecting the latter.


Unfortunately I think that the motivation for that is the same.


Regardless of what you think the motivation is, these things should be debated on their own merits; otherwise you commit the other ad hominem fallacy, where you reject criticism of an idea because of where the criticism came from.


Judging from reddit, I think there's an appetite in the tech community to 'cancel' Bob Martin for his political beliefs. If it also turns out that his technical ideas are wrong, then that will make it much easier.

It's like our little mini version of the Moscow show trials.


I think this is correct. Here is my somewhat simplified, but as far as I know not too far from the truth, version of what happened, based on following from the sidelines on Twitter and HN. Please correct me if I'm wrong (feel free to adjust my score as well; I don't care about that, but tell me what I got wrong):

(Oh yes, as I mentioned to someone else earlier today, I'm so utterly fed up with how we treat certain topics. I actually respect actual climate scientists, but I don't to the same degree respect people who are out to punish climate heretics.)

As far as I can see, Bob was everyone's uncle, generally popular, not infallible but widely respected, until he committed the grave sin of not paying enough respect to climate scientists, or something to that effect. In fact there is a chance he just asked questions, but that is a sin so grave it got its own derogatory term, JAQ-ing. As everyone should know by now, asking questions isn't good scientific behavior, and everyone who ever does it only ever does it to derail constructive discussions.

After that, he became pretty clueless and his books became dumb and useless. All he knew about programming was wrong, and it turns out he never helped any project.


If this is supposed to be related to the article, you've completely missed the mark. The article presents a very nuanced view of the book, including its historical context and the wealth of good advice, but ultimately also the impractical or antithetical parts.

I hadn't heard of Uncle Bob being cancelled before this thread, but I have voiced my reservations about Clean Code to colleagues multiple times before.

You're not doing anyone a favor by painting all criticism of his work like that. It's meaningless polarizing nonsense.


Just speaking for myself - I agree the article is a good technical criticism. It's the timing of posting the article again that's suspect.

But maybe I should give the benefit of the doubt.


> If this is supposed to be related to the article, you've completely missed the mark.

It was not related to the article but to the comment.

Sorry for the confusion.


Had a quick look - wow, what a low bar for controversy. Can we all just delete Twitter and stfu.



