
It can be your problem if the company goes under and you lose your job: you might not be able to pay your mortgage or bills.

If you believe your manager is asking for unreasonable things in your area of expertise despite you raising these concerns, and it's not clear their manager is in on it, please raise it to their manager!

"I am willing to continue working this way, but I just want to make sure the consequences it could have on the business are clear to everyone here."


Obviously, unleash an LLM code reviewer with the strictest possible prompt on the change :)

Then innocently say "The LLM believes this is bad architecture and should be recreated from scratch."


Two decades ago, so well before any LLMs, our CEO did that with a couple of huge code changes: he hacked a few things together and threw them over the wall to us (10K lines). I was happy I did not get assigned to deal with that mess, but getting it into production-quality code took more than a month!

"But I did it in a few days, how can it take so long for you guys?" was not received well by the team.

Sure, every case is different, and maybe here it made sense if the fix was small and testing it was simple. Personally (also in a director-level role today), I'd rather lead by example and do the full story, including testing, and especially writing automated tests (with an LLM's help or not), particularly if the change is small. I actually did that ~12 months ago to fix misuse of mutexes in one of our platform libraries, when everybody else was stuck because our multi-threaded code behaved like single-threaded code.

Even so, I prefer to sit with them and ask out loud the questions I'd be asking myself on the path to a fix: letting them learn how I get to a solution is even more valuable, IMO.


Pairing people together on a single task makes that task get done faster, with higher quality. However, when paired together, the people still pick up some of the same biases and hold the same assumptions and context, so a pair reviewing its own work is really worse than having a single author + an independent reviewer.

So:

  single author, no review < pair programming, no review < single author + review < pair programming + review

Again, if you rotate pairs daily this doesn't happen. You could argue that creates a "team bias" but I call that cohesion.

Can you elaborate?

Is this pairing on small tasks that get done in (less than) a day? Or do you also have longer-running implementations (a few days, a week, or maybe even more?) where you rotate the people who are on it (so one task might have had 5 or 10 people on it during the implementation)?

If the latter, that sounds very curious and I'd love to learn more about how that is working, how people feel about it, what challenges you've seen, and what the successes are. It'd be great to see a write-up if you've had this running for a longer time!

If it's only the former (smaller tasks), then this is where biases and assumptions stay (within the scope of that task): I still believe this is improved with an independent reviewer.


I do have a blog post about this sitting around along with all my other mostly completed posts. Finishing this one would be great as it's a topic I'm very passionate about!

Our stories were broken up to take 5 days worst case; the ideal was 1-3, of course. Many would be multi-day stories, so yes, if something ended up taking 5 days it was possible the whole team had touched it (our team was only 6 devs, so never more than 6).

In short, there are the usual hurdles with pairing, the first being that it takes some time to hit your stride, but once you do, things start flying. It is of course not perfect: you run into scenarios where something takes longer because each new person joining the ticket wants to tweak the implementation, and it can get pulled in different directions the longer it drags on. But this also gets better the longer you keep at it.

There are lots of pros AFAIC, but a couple I like: you develop shared ownership of the codebase (as well as a shared style, which I think is neat), i.e., every part of the app had at least a few people with lots of context, so there was never any "Oh, Alice knows the most about that part of the app, so let's wait until she's back to fix it" kind of thing. There is no "junior work." When a junior joins the team they are always pairing, so they ramp up really quickly (ideally they always drive for the first several months so they actually learn).

Another downside would be that it's bad for resume-driven development. You don't "own" as many projects, so you have to get creative when you're asked about big projects you've led recently, especially if you're happily in an IC role.

And also yes, if there is a super small task that is too small to pair on, we'd have a reviewer, but often these were like literal one-line config changes.

We'd also have an independent reviewer whenever migrations were involved, or for any risky task that needed to be run.


Once you do publish the write-up (here's some encouragement!), hopefully it catches the attention of others so I don't miss it either :)

With time and experience, reading code becomes much easier.

And well-written code is usually easy to read and understand too!

The purpose of a code review is, apart from ensuring correctness, to ensure that the code that gets merged is easy to understand! And to be honest, if it's easy to understand, it's easy to ensure correctness too!

The biggest challenge I had was distinguishing between explanations needed to understand the change and explanations needed to understand the code after it was merged in. And making it clear in my code review questions that whatever question I have, I need code and comments in the code to answer it, not the author explaining it to me (I've frequently already figured out the why, it just took me longer than it should have): it's not that I did not get it, it's that it should be clearer. Finding the right balance between asking explicitly, offering a suggestion, or posing it as a question to prompt some thinking is non-trivial too.


Doing code review as described (actually diving deep, testing, etc.) for 10 engineers producing code is likely not going to be feasible unless they are really slow.

In general, back in the 2000s, a team I was on employed a simple rule to ensure reviews happen in a timely manner: once you ask for a review, you have an obligation to do 2 reviews yourself (as we required 2 approvals on every change).

The biggest problem was when there wasn't enough stuff to review, so you carried "debt" over, and some never repaid it. But with a team of 15-30 people, it worked surprisingly well: no interruptions, quick response times.

It did require writing good change descriptions along with testing instructions. We also introduced diff size limits to encourage iterative development and keep the context small when reviewing (as obviously not all 15-30 people had the same deep knowledge of all the areas).


Wow, that's a very arbitrary practice: do you remember roughly when that was?

I was on a team in 2006 where we did the regular 2-approvals-per-change-proposal code reviews (along with fully integrated CI/CD, some of it through signed email, though not full diffs like Linux patchsets, only "commands" for what branch to merge where).


Around that time frame. We had CI and if you broke the build or tests failed it was your job to drop anything else you were doing and fix it. Nothing reached the review stage unless it could build and pass unit tests.

Right, we already had both: pre-review build & test runs, and pre-merge CI (this actually ran on a temporary merged branch).

This was still the practice at $BIG_FINANCE in the couple of years just before covid, although by that point such team reviews were declining in importance and prominence.

I believe they meant you could create an executable that is accessible outside the container (maybe even a setuid-root one), and depending on the PATH settings, it might be possible to get the user to run it on the host.

Imagine naming this executable "ls" or "echo" and someone having "." in their PATH (which is why you shouldn't): as soon as you run "ls" in this directory, you've run compromised code.
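
A minimal sketch of how that could play out (the bind-mount path and directory names are hypothetical):

  # inside the container, running as root, with the host's ~/project bind-mounted at /mnt/project
  cat > /mnt/project/ls <<'EOF'
  #!/bin/sh
  # ...whatever the attacker wants, now running as the host user...
  exec /bin/ls "$@"   # then behave like the real ls to avoid suspicion
  EOF
  chmod 755 /mnt/project/ls

  # later, on the host, a user with "." in their PATH:
  cd ~/project && ls   # runs ./ls instead of /bin/ls (if "." comes before /bin in PATH)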

There are obviously other ways to get that executable to run on the host; this is just a simple example.


Another example: they could enumerate your directories, find the names of common scripts, and then overwrite your script. Or, to be even sneakier, they could append their malicious code to an existing script in your filesystem. Now each time you run your script, their code piggybacks along.

OTOH, if I had written such a script for Linux, I'd be looking to grab the contents of $(hist), $(env), $(cat /etc/{group,passwd})... then enumerate /usr/bin/, /usr/local/bin/, and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually there.

The $HOME/.{aws,docker,claude,ssh} directories, too.

Basically, the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root-access container.


If your chosen development environment supports it, look into distroless or empty base containers, and run with --read-only if you can.

Go and Rust tend to lend themselves to these more restrictive environments a bit better than other options.
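
For example, a minimal sketch with a static Go binary on a distroless base, then run with a read-only root filesystem (the image name, tags, and paths here are just illustrative assumptions):

  # Dockerfile
  FROM golang:1.22 AS build
  WORKDIR /src
  COPY . .
  RUN CGO_ENABLED=0 go build -o /app .

  FROM gcr.io/distroless/static-debian12:nonroot
  COPY --from=build /app /app
  ENTRYPOINT ["/app"]

  # run read-only, dropping capabilities, with a writable tmpfs only where needed
  docker run --read-only --tmpfs /tmp --cap-drop=ALL myimage

With nothing but the binary in the image, there's no shell or package manager for a compromised process to drop scripts with, and --read-only blocks writes everywhere except the tmpfs.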


You'd still need at least one real measurement: this might get the proportions right if the background can be clearly separated, but the absolute size of an object can be worlds apart (a full-size chair and a dollhouse chair could produce identical-looking models).

That's true. And there's lens correction and all that, but it would be nice to accelerate the CAD modeling.

I'd suggest a different verb like "detach" or "unlink".

isolate from the background?

Even better, agreed!
