
Honestly having an interview process that is highly standardized and teachable + learnable is a good thing, so I'm not sure why people complain. You know exactly what sort of questions you will be asked from company to company, and can spend nights and weekends over a couple weeks studying for it.

Most jobs are not like that, and then you get extreme variance in expectation with little communication on how to prepare, which is a waste of time for both the interviewer and interviewee if the interviewee has no clue what will be asked or expected ahead of time.



The problem is twofold:

1. False positives: you get folks who do really well at DS&A yet who are really bad developers. I mean really bad. I wouldn't have believed it if I had not seen their interviews and then subsequent performance. I'd wildly guess that it's about 20-30%.

2. False negatives: you get folks who are really good developers, and yet for whatever reason perform badly on DS&A questions despite practicing. I think this number is higher than the false-positive rate, probably around 50% or more.

If you're a FAANG company, you can afford to play these odds. If you're not FAANG, then you're killing yourself by requiring DS&A interviews, almost inevitably.

Both are contributing to destroying the profession for huge numbers of people, IMHO. I mean, that's good for me, because I'm almost certainly sticking around and generally do OK on DS&A interviews with adequate practice (which is a waste of time, since anything beyond a broad knowledge of the performance characteristics of various DS&As is totally unneeded for 99% of us). But it's not right, and I really don't like it.


Where did you get all of your numbers? My experience dictates otherwise: plenty of people don't know DS&A, are bad coders, and still call themselves senior.


Is it really true though? FAANG seem to be doing just fine and innovating new products year after year. Obviously the developers are performing amazingly.


"If you're a FAANG company, you can afford to play these odds"


Except programming and engineering is about overcoming the unexpected. Adapt and improvise. Not everything is textbook and, at least this is my opinion, the better engineer is the one that can solve unexpected problems. You can really only judge that utilizing past experience. Canned, standardized questions similar to Mensa intelligence questions or any type of "brain-twister" puzzle are pretty crappy. Once you know the tricks they're applying to the question, they're easy to solve. But that's not the same as actually "figuring out" a real world problem.


Ok, but why is that a bad thing for engineers that are interviewing?

If the interview process is largely memorizing 200 or so commonly asked algorithm questions and that is the gateway to a $200k+ job then it's a good thing for applicants, not "dystopian" at all.

Again, it would be much, much more painful and time consuming for the interviewee if they were asked to code up some fullstack project for every interview. That is far more time consuming, and in my opinion more "dystopian" to expect those interviewing to do dozens of hours of work specific to one interview for free.


I think you missed the entire point of these articles. The algorithms you're forced to memorize are, 99% of the time, useless. Instead of hiring someone by their track record, managers are choosing to hire those capable of memorizing trivia.

Part 2: paying someone based on trivia instead of capability is not sustainable. The company ends up suffering in the long term. Other engineers who are actually capable have to pick up the slack. Longer hours, less family time, higher burnout risk. Then comes the firing period, because the company is losing revenue due to rampant incompetence, putting even more pressure on the capable engineers. There are plenty of articles where trendy startups have brutal layoffs even though 12-18 months earlier they had massive funding rounds and went into "extreme" hiring phases. The chickens come home to roost, no matter the sparkling bling of big paychecks.


I think you missed the point of this interview style. This style of interview is scalable, predictable, efficient and effective. It is useful in that way.


It's scalable all right, but the other points, especially the latter two, are being debated in this thread.


Right, which misses the point of having this interview style in the first place.


> the gateway to a $200k+ job then it's a good thing for applicants,

For most of us outside the Bay Area and NYC, it's quite unlikely that we'll secure a $200k/year job unless we go into management.


What a misconception. I know plenty of people in Seattle, Austin, Los Angeles, and even Pittsburgh, pulling in $200k/year in tech.


According to the Bureau of Labor Statistics, the median salary for software developers is $103,000. The top 10% earned $166,960 or higher. Take out the cities I mentioned and those numbers are even lower.

If you know plenty of people outside the major tech cities (and I should have included Austin and Seattle in that, for sure) making over $200,000 a year, then you have a statistically unusual sample of friends.

All the data back up what I'm saying. Go look at the numbers. Look at salaries on monster.com or your favorite job site. Perhaps you and your friends don't realize how unusual it is (granted, you also have higher cost-of-living in those cities, sometimes by enough to eat up the additional salary).


Base salary is only one portion (usually less than half) of total compensation. Also, these engineers that I am referring to do not have the entry-level title -- they are often "senior software engineer" or "technical lead".

Maybe I should revise my claim to "top software engineers make over $200,000 in Austin, Pittsburgh, Seattle, etc.".


Interesting factoid for perspective - if you take the US and exclude every metro area at least as big as the Austin* MSA...you still have a majority of Americans.

*I think Austin is the smallest mentioned.


Damn, you're right. I did the spreadsheet math quickly based on 2018 estimates on the top 314 cities by population in the USA. Apparently 100k minimum population is the marker of "city" according to this. Anyways, ~94m city dwellers compared to ~327m (2018) for the USA. "Big" cities are a minority.

At #284, Boulder, CO is considered a "city". Don't get me wrong, it's a nice town, but you can't compare it to Austin (#11), Portland, OR (#25) or NYC (#1). The top 100 cities come to ~64.5m (Spokane, WA coming in at #100 with 219k).

I just... wow... I don't know why, but I truly thought "city slicker" America made up like 50%+ of the population. At best it's 29%. Not insignificant. But... not a wide majority.


I think it's best to compare metro areas and not cities per se. I live in a metro area much smaller than Austin, but it's still maybe a million people. If a city proper of say 100,000 people was out in the middle of nowhere, that would be very different.


I don't think so, speaking out of my personal experience. Colorado Springs Metro, for example, can gobble up a town called Palmer Lake (59k population). The Springs alone is 464k. Even though the Springs is kind of country (meh, not really, lots of defense and tech moved in the past decade), Palmer Lake IS country. Even though they border each other, they don't really vote the same or have similar concerns. Let's put it this way, your stereotypical hipster can live a good life in the Springs. Not so in Palmer Lake... at all. Then take Manitou Springs on the west side of CS. It's a tourist town/trap. Not at odds, but not similar either. CS is very locked in with the needs of Fort Carson, the Air Academy and 2 other air bases, along with defense and a few major tech firms dropped big offices in CS even though Denver is ~65 miles away.

Then take Wilsonville, OR. It's oddly considered a metro area for Portland. It's a good 30min away and independent AF from Portland. Plus the people there are different. More old money or straight up white trash.

Metro is a really loose/gray term. To say surrounding towns are exactly the same as the city is a bad idea. Take the purchasing decision of a home in those areas: the "value" of the location and its price are at odds. One person is willing to pay a premium for location, the other is not. Those are two different mindsets as to what a person is willing to deal with in life.


I think it's important to include the general area because people commonly live and commute anywhere within a few miles. Most Americans live in what are technically suburbs.

And the overall lifestyle of a place depends on the population within a reasonable radius of travel, not what is within an arbitrary boundary.

I live in an area of NY currently that has towns similar in size to where I went to school in AZ, but the surrounding area, say a ten mile radius, has maybe three times the population. It's a very noticeable difference, both in general lifestyle and in employment opportunities, education and so on.


> Not insignificant. But... not a wide majority.

Yes. Maybe you can understand some of the frustration coming out of "middle America"? The new American economy is booming in large metro areas, while the rest of the country stagnates. I'm not a Trump voter, but I certainly understand the bitterness at being economically and culturally dominated by a segment of the country that is at the very most, 50% of the population.


The remainder of the country is still relatively blue and urban/suburban. Just because they live outside of the large metro areas doesn't mean they are rural. For instance, the Austin area was around 30th largest on a list I was looking at, whereas I live in an MSA that is more like 60th+. But there are still lots of people here, universities, etc. - it's not the middle of Wyoming or Alaska. It's just that the actual cities and towns aren't that large. I don't live in New Jersey, but have you ever been there? You have flat suburbs that go on and on - individually, I imagine that you can have a large area with a large population, but the towns aren't so big.


One reason to complain is that these interview questions usually bear so little relevance to the actual job.


So what? It's a logic test. And you're still proving that you have enough experience to code well in a general-purpose language, e.g. Java, Python, JavaScript, Ruby, etc.

It would be way more time intensive for the interviewee to be asked to code up some fullstack project for every interview.

Having to memorize some basic algorithm questions that are maybe 20 lines of code each is way better.
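For a sense of scale, here's a hypothetical question of roughly that size — reversing a singly linked list, a common warm-up. The code is my own sketch, not something cited in the thread:

```python
# A canonical ~15-line interview answer: reverse a singly linked list in place.
class Node:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def reverse(head):
    prev = None
    while head:
        # Tuple assignment: RHS is evaluated before any name is rebound.
        head.next, prev, head = prev, head, head.next
    return prev

# Build 1 -> 2 -> 3, reverse it, and collect the values.
head = Node(1, Node(2, Node(3)))
node, out = reverse(head), []
while node:
    out.append(node.val)
    node = node.next
print(out)  # [3, 2, 1]
```

Whether memorizing a couple hundred of these is a reasonable gate is exactly what's being debated, but each individual question really is this small.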


> It would be way more time intensive for the interviewee to be asked to code up some fullstack project for every interview.

Assuming the role is a fullstack developer, memorizing basic algorithms doesn’t show one is fit for the purpose.

Providing a simple skeleton of a fullstack project—in the chosen tech stack of either the candidate or the company—and then verifying a candidate can add a simple feature, or something similarly fullstack, would accomplish that far faster than algorithm answers.

Edit: I realize this risks sounding like stupid take-home interview homework. I personally oppose that crap. However, I recognize why some companies take that route, as I don’t think I’d feel confident that a candidate could work in my company’s stack by asking silly algorithm questions. I’d probably feel more confident watching the candidate do a remote screen share, git clone a starter app, and do some simple to moderately complex fullstack tasks. Of course, the tasks should fit the role, I think—e.g., I wouldn’t ask a candidate who’s being hired to tune DB queries a bunch of fullstack questions. And if I was hiring a backend dev to build out APIs, I wouldn’t bother with a bunch of frontend tasks and questions. The hiring processes I’ve seen and managed always had better results when more time was invested in prepping specific, job-focused interview processes, rather than offloading that time onto candidates because recruiting teams can’t actually do more than ask shallow questions or follow checklists.


>Honestly having an interview process that is highly standardized and teachable + learnable is a good thing

Yes, I agree, but the current process is not even close to that.

He touches on that issue in the article. So you spend a lot of time and effort learning about binary trees; great, you pass with flying colors at company A. Then you interview with company B, and they ask you about tries... fuck.
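For anyone unfamiliar with the jump the parent describes: a trie isn't a binary tree at all — each node branches on a character rather than on a two-way comparison. A minimal sketch (my own illustration, not from the article), using nested dicts:

```python
# Minimal trie: each node is a dict mapping a character to a child node.
class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker

    def search(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

t = Trie()
t.insert("tree")
print(t.search("tree"), t.search("tri"))  # True False ("tri" is only a prefix)
```

The "$" sentinel is what distinguishes a stored word from a mere prefix — exactly the kind of detail that's obvious once you've seen it and easy to fumble live if you haven't.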


The closest to a standard (and only for FAANG) is Cracking the Coding Interview, but even that is a large pool of questions to keep in one's head far above what you're likely to do on the job.


It's pretty standardized. 99% of the questions you could possibly be asked are on leetcode for most large companies.


The article to which this is a follow-up answered your questions fairly comprehensively. Among other concerns: no, it's actually not that easy to predict what questions you'll get.


Doesn't Leetcode pretty much cover most of the questions?


There are thousands of questions on Leetcode.


Yes, which is not impossible to tackle. There is a structured and systematic way to attack it. There are tons of study guides and materials on the net to help you. Not to mention tons of people successfully get hired.


As mentioned elsewhere, this is discriminatory against people with families and commitments that prevent them from spending hundreds of hours in prep. Not to mention it affirms Goodhart's Law: a single metric, the ability to answer DS&A questions, keeps otherwise qualified applicants from being hired.

Not to mention such interview styles can be gamed. Suppose a Flatiron bootcamp for DS&A questions becomes big in response. What then? An arms race for more and more difficult weeder questions?

Such questions aren’t necessarily bad, but focusing on them to the exclusion of all other skills is becoming an anti-pattern.


I think a decent amount would trade "no technical interviews" for credential hiring (degree required for job). The degree would be a well-known, long-standing target vs. the vague moving target of technical interview competence.

This isn't unlike other professions, but you would lose self-taught developers who don't have the time/money to afford the credential. That is currently something special about development compared to many other professions.

This assumes you prevent people who can't do something like FizzBuzz from graduating with a CS degree.


> this is discriminatory against people with families and and commitments that prevent them from spending hundreds of hours in prep

Maybe, but the purpose isn't to select for people with families or commitments. You chose to have a family or commitments; you have to deal with the trade-off.

>An arms race for more and more difficult weeder questions?

It's always been an arms race. Why expect otherwise?


Because hiring doesn’t have to be this adversarial process. And work doesn’t have to be this dehumanizing race to the bottom that excludes qualified people who are being excluded by bad metrics.


Bad metrics? Maybe according to you, but I doubt it according to the people who do the hiring.

The people who do the hiring get to decide what the metrics are and what they consider good/qualified.


Not according to me, according to many in this thread, in dozens of articles posted on this site, and many more across the industry. There are all sorts of management principles and truisms people take for granted, and this is one of them that’s being called into question.


>Not according to me, according to many in this thread, in dozens of articles posted on this site, and many more across the industry.

Yes, that what said, but I doubt according to the people who do the hiring.

It doesn't matter if you think you are right candidate according to you or other people. Ultimately its the people who going to hire you who is going to judge you according to his/her subjective criteria.


Right, but this entire discussion is predicated upon questioning if perhaps the hirers are wrong. Are their interviewing practices serving their organization? A lot of the employers themselves have expressed difficulty at hiring and dissatisfaction with the process. Are they screening for the optimal qualities? There's no shortage of engineering projects that have gone bad. Could that be because the candidates being hired with current practices are suboptimal?

Hiring is still an on-going debate. Google SVP of People Ops Laszlo Bock has admitted previously that internal studies revealed that the interviewing practices at the time didn't really correlate to employee success:

https://www.nytimes.com/2013/06/20/business/in-head-hunting-...

https://news.ycombinator.com/item?id=17196974

We're having a debate here. You can claim the current practices are best, that's fine. But you can't just state it and take it for granted without providing evidence.


That article is misleading; the actual statement is this:

> Four meticulously orchestrated Google interviews could identify successful hires with 86 percent confidence, and nobody at the company—no matter how long they had been at the company or how many candidates they had interviewed—could do any better than the aggregated wisdom of four interviewers.

So it isn't that interviews have zero correlation; it's that they found no case of a single interviewer being better than the aggregate of four algorithm interviews. And as you see, four algorithm interviews combined have a very high success rate.

https://www.theatlantic.com/business/archive/2016/04/the-sci...


At a certain scale, intent stops mattering. You need to take responsibility for the incentives in the systems you create. To do otherwise is just negligence.

Families are kind of important. They're usually a large part of the reason people have a job to begin with. To callously disregard the impact of a system on families is exceptionally appalling. You should not do that.


I didn't say families are not important. To some people they are important, but you can't use that as an excuse.



