I know web developers who just put together WordPress sites and that’s how it’s been for decades.
Words matter but you need a thousand of them. Someone could say they’re “into rock climbing” but that could mean they climb once a month or they’re obsessed and do it every day. That’s why we go through interviews or have dates: everything has a certain hard-to-put-into-words nuance.
And that’s why I don’t really care how you use “web developer” as long as you get the general idea.
I legitimately tried my best to like XSLT on the web back in the day.
The idea behind XSLT is nice — creating a stylesheet to transform raw data into presentation. The practice of using it was terrible. It was ugly, it was verbose, it was painful, it had gotchas, it made it easier for scrapers, it bound your data to your presentation more, and so on.
Most of the time I needed to generate XML to later apply an XSLT stylesheet, the resulting XML document was mostly a one-off with no associated spec and not a serious transport document. It raised the question of why I was doing this extra work.
Making your data easy to scrape is part of the point (or, more generally, easy to work with). If you're building your web presence, you want people to easily be able to find the data on your site (unless your goal is platform lock-in).
The entire point of XSLT is to separate your data from its presentation. That's why it made it easy to scrape. You could return your data in a more natural domain model and transform it via a stylesheet to its presentation.
And in doing so it is incredibly concise (mostly because XPath is so powerful).
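For anyone who never used it, here's a minimal sketch of that data/presentation split, using Python's third-party lxml library to apply a tiny inline stylesheet (the XML document and its fields are invented for illustration):

    from lxml import etree

    # Raw domain data, free of any presentation markup.
    data = etree.XML(b"""<posts>
      <post><title>Hello</title><author>alice</author></post>
      <post><title>World</title><author>bob</author></post>
    </posts>""")

    # The stylesheet owns the presentation: it turns each <post> into an <li>.
    stylesheet = etree.XML(b"""<xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/posts">
        <ul><xsl:apply-templates select="post"/></ul>
      </xsl:template>
      <xsl:template match="post">
        <li><xsl:value-of select="title"/> by <xsl:value-of select="author"/></li>
      </xsl:template>
    </xsl:stylesheet>""")

    transform = etree.XSLT(stylesheet)
    print(str(transform(data)))  # renders a <ul> with one <li> per post

The <posts> document stays a natural domain model; only the stylesheet decides it becomes a list.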
I find RT scores very accurate, just not the raw number on its own.
What I mean is that a 70% score is meaningless to me. I need to know the movie genre, the audience score, the age of the movie and then I basically do a “lookup table” in my head. And I have that lookup table because I’ve looked up every movie I’ve watched on RT for 15 years so I know how the scores correlate to my own personal opinions.
As an example: the author said that critic scores should align with audience scores but no that’s not true at all. Critics tend to care more about plot continuity, plot depth and details while the audience tends to care about enjoyability. Both are important to me so I always look at both scores. That’s why a lot of very funny comedies have a 60-69% critic score but a 90%-100% audience score — because it’s hilarious but the plot makes no fucking sense and has a million holes. And if you see a comedy with 95% critic but 70% audience, it will be thought-provoking and well done but don’t expect more than occasional chuckles.
Plex shows you both critic and audience scores from RT (IMDB also), and they indeed diverge consistently along the lines you suggest. In general I trust the audience scores a lot more because I'm trying to have fun watching movies rather than analyze their plot/pacing/cinematography/etc.
The audience can be trusted to know how to have fun. The discrepancy between critic and audience scores is also a valuable signal for judging how fun campy/schlocky/B-movie horror films are, particularly from the 80s and 90s.
On Rotten Tomatoes there's always That Guy who's being contrary for the sake of being different.
Like how Paddington and Paddington 2 had 100% review scores for a long time, until some "reviewers" disliked them on purpose, bringing Paddington 2 down to 99%.
Using multiple sites as an aggregate works. On IMDB you need to check the vote distribution graph, mentally take out all the 1's and 10's, and see where the average/median lies after that.
And it's important to find actual reviewers whose taste aligns with yours and use them as more directed guidance.
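That mental trick is just a trimmed average; a rough sketch in Python (the vote histogram is made up):

    # Recompute an IMDB-style average after dropping the 1s and 10s,
    # given the vote-distribution histogram (made-up numbers).
    votes = {1: 900, 2: 150, 3: 200, 4: 400, 5: 800,
             6: 1500, 7: 2600, 8: 2100, 9: 900, 10: 1200}

    raw = sum(s * n for s, n in votes.items()) / sum(votes.values())
    kept = {s: n for s, n in votes.items() if s not in (1, 10)}
    trimmed = sum(s * n for s, n in kept.items()) / sum(kept.values())
    print(f"raw {raw:.2f} vs trimmed {trimmed:.2f}")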
I often enjoy movies that are unexpected and don't fit neatly into one established genre, but I think these tend to get lower audience ratings, while films that deliver to expectations do better, even if most of a randomly selected audience would dislike them.
If a movie is a comedy, with a poster with big red letters and a white background, people know it's a certain kind of movie, and mostly those who enjoy those movies will go see it.
Likewise with documentaries about some niche interest - those who watch it mostly sought it out because they're into that.
My immediate thought after seeing the first chart was that it is inversely related to my own experience with movies in the last 20 years. Maybe there's an idea in there for a 'score normalizing' browser extension.
One important factor is that the critics score is binary in a sense: if all critics agree that the movie was "passable but not great" then Rotten Tomatoes still gives it a 100% critics score.
The website explains it clearly enough I would say.
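A toy illustration of that binarization (the 6/10 "fresh" cutoff and the review scores here are my assumptions, not RT's published mechanics):

    # Each review is binarized to fresh/rotten; the Tomatometer is the fresh share.
    reviews = [6.0, 6.5, 6.0, 6.5, 7.0]   # every critic: "passable but not great"
    FRESH_CUTOFF = 6.0                     # assumed positive-review threshold

    tomatometer = 100 * sum(r >= FRESH_CUTOFF for r in reviews) / len(reviews)
    average = sum(reviews) / len(reviews)
    print(f"Tomatometer {tomatometer:.0f}%, but mean review only {average:.1f}/10")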
I didn't like the idea that my money had paid for such a disservice to my favourite book, so it pushed me to cancel my Prime subscription that had been ongoing for years. I don't buy nearly as much on Amazon these days as a consequence.
I am annoyed by Rings of Power, but at least we got some fairly passable (if still very flawed) adaptations from Peter Jackson. I'm more salty about Wheel of Time, because that trashed the source material just as hard, and because it bombed, it's unlikely we will ever see someone try again with an actually good adaptation.
I rarely get angry about bad content, but RoP felt like a personal affront. I love Tolkien's world, and the people who put RoP together did so with not just ignorance and incompetence but some kind of malice. They intentionally butchered Tolkien's writing and world. This stands in stark contrast with Peter Jackson's position that he had no right to inject his personal values and narcissistic hubris into the movies. He chose to honour the material as best he could while adapting it. It is, without any shadow of a doubt, the better approach.
> This stands in stark contrast with Peter Jackson's position that he had no right to inject his personal values and narcissistic hubris into the movies. He chose to honour the material as best he could while adapting it.
That's funny, because that's very much not what happened with those movies. Remember the character assassination of Faramir? I recall Jackson (or perhaps Fran Walsh) saying in an interview that they deliberately broke from Tolkien's story with that one, because the way Tolkien wrote it didn't fit the story they were trying to tell. They felt that having someone set the One Ring aside when tempted undermined the idea of building up the Ring as a threat in the minds of the audience. In other words, they chose to go with the story they wanted to tell rather than honoring the story Tolkien told.
Certainly the LOTR movies weren't as flagrant as Rings of Power with the liberties they took. And some of the changes were indeed due to the constraints of adapting to the medium of film, rather than a book. But even so, they chose to disrespect the source material pretty blatantly at times.
It's fair to point out the difference re Faramir, but I feel it is rather small and inconsequential. He ultimately made the same decision in both the book and the movie. Again, I am not contending that no changes were made. A movie adaptation requires changes. I'm claiming that the changes were in service to the material, lore, world-building, themes, and messaging. The RoP writers thumbed their noses at all of that.
To me that feels like sacrificing a detail to serve the larger story, which when you're trying to fit three whole books into just three movies might be necessary. In RoP they made many changes willy-nilly, missing most of what made the source material great.
Critics often score based on the first few episodes released and never revisit the score. And if it's shiny/expensive (and RoP was both) and seems like it might lead somewhere, they risk making themselves look ridiculous by being too critical.
Critics have a political agenda: they overrate movies with “a message”, the message always leaning Californian. The movie industry is a massive sector with lobbies, and paid critics are no stranger to that.
And as the sibling says, the audience pays to see a movie. The audience, the people, are more politically balanced. There is no bias or selection: it’s the democratic component, including people that the “in” lobbies don’t like.
And there's no evidence for "the idea". Also the "audience" reviewers are self-selecting, and in my experience tilt towards shallowness and bigotry. My own preferences are generally better aligned with the top critics.
If I weren't already well familiar with the diverse critic reviews on RT, claims that the critics are "woke" (or equivalently, have a "Californian" "political agenda" that "overrates" movies with a message) would be reason for me to value their views over self-selecting "audience" reviews, which I find to be mostly shallow and uninformed, and with a good dose of provincial bigotry as part of the "political balance". I personally am not looking for "political balance", certainly not as that currently manifests itself in the U.S.
And if paid critics are no stranger to lobbies (or the movie industry as a massive sector with lobbies ... it's a bit hard to parse), I see no particular reason to expect them to have a political agenda that overrates movies with a message--I don't think those are the ones that make big bucks for the massive sector. (I'm more interested in indie fare, or at least stuff with more character and depth and less CGI and juvenile superheroes vs. supervillains.) Much more likely is that this spew reflects a political agenda.
I thought "Californian bias" was a great term precisely because it isn't quite the same (or as shallow) as "woke". How could the movie industry not have a Californian bias? So much of it is made in that very peculiar culture, peculiar even by American standards.
And yet if you hated that sort of thing, why (or how?) would you become a movie critic? Can you imagine being a classical music critic and intensely disliking Vienna? (Another damn peculiar, damn influential culture, by the way).
I agree. It is clear and self-evident that movie critics have a California bias. I cite Emilia Perez (https://www.rottentomatoes.com/m/emilia_perez), with 71% of critics recommending the movie. This movie won *91 awards* this season. This is, by any objective and subjective metric, an atrocious film. Audiences gave it 17% on Rotten and 5.4 on IMDB. Why did this movie win so many awards and positive reviews from critics? Because it has a trans person as the lead. That's it. The bias is on full display with this movie.
But that is more about "woke" than California. My point was that California is peculiar in far more complicated ways than merely being more trans-positive. Arnold Schwarzenegger doesn't seem especially "woke" to me, but he seems very California. Scientology is hardly "woke", but it's very California. Steve Jobs, same. It would be an utterly weird culture if we hadn't been so extremely exposed to it. We think of so many things as normal, even though they're not normal at all in our actual lives where we are; they're normal in California. (Well, more normal.) That was Vienna too. It goes way beyond a simple culture war dichotomy.
I completely agree. Feels a bit ironic that not only do anti-woke conservatives seem to implicitly believe California = woke, but this unnuanced read is apparently held by a plurality of liberals as well.
A California bias is not the same as being woke. Six-time Oscar winner and 14-time nominee La La Land wasn't especially woke, but it was damn near manufactured in a lab to appeal to the exact sort of sensibilities held by Academy members.
The focus on careerism and social climbing, the nostalgia for an era of media since gone by, the melancholic reality of how damn near impossible it is to succeed as an actor or musician, these aren't woke ideas, but they do reflect the general sentimentality of the people in the greater LA area.
Something I thought you might get into would be series, whether movie sequels or later seasons of TV shows. Sometimes they get devoted fans who love the whole series, and those sequels or later seasons have great scores, but you might not enjoy them whatsoever.
My example would be a TV show, A Discovery of Witches, which is overall well-received but which I couldn't enjoy at all. Perhaps if you read the books you'll like the show, but for me it was such an empty show, devoid of excitement or intrigue or entertainment value.
Additionally, there are movies that just have something unique to them that a niche audience may love, but both critics and the general audience treat them more harshly.
The truth is that other people's opinions may or may not be a good proxy for your own taste in movies, even if they were uncorrupted and independent.
I guess because such a tool starts with you having to input a ton of data before it becomes useful. Either people don't do that, or, if they are willing, the platform ends up with lots of valuable data and wants to keep you feeding it to grow its trove before selling off to Amazon or Google.
https://www.criticker.com is the best I’ve seen for this. You rate movies relatively, and then they match your ratings to people who have similar tastes and recommend based on that. So if you have period westerns all rated highly, they’ll see what other movies were rated highly by people who rate period westerns highly. It’s actually pretty genius.
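A loose sketch of that matching idea (this is generic user-user collaborative filtering, not Criticker's actual algorithm; the usernames and ratings are invented):

    import statistics

    my_ratings = {"Unforgiven": 9, "Tombstone": 8, "The Proposition": 9}
    others = {
        "user_a": {"Unforgiven": 9, "Tombstone": 8, "The Proposition": 8, "Slow West": 9},
        "user_b": {"Unforgiven": 4, "Tombstone": 5, "The Proposition": 3, "Cats": 9},
    }

    def similarity(a, b):
        # Higher is more similar: negated mean absolute rating gap on shared titles.
        shared = set(a) & set(b)
        return -statistics.mean(abs(a[t] - b[t]) for t in shared)

    match = max(others, key=lambda u: similarity(my_ratings, others[u]))
    picks = [t for t, r in others[match].items() if r >= 8 and t not in my_ratings]
    print(match, picks)  # user_a ['Slow West']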
A Netflix recommendations guy explained this quite well: people lie in their reviews, so those mostly don't matter. What matters is watching habits, which clearly show what you really like, instead of the imaginary person that rates movies they'd never watch.
So the actual market for something that recommends like that is quite small.
I would disagree and say this is important if you are trying to maximize screen time, not if you are trying to maximize enjoyment. When I want to half pay attention or I am mentally tired, I put on something easy, usually something I have seen before. I watch it because it is easy and familiar, but I have no desire to watch something “similar”. If you keep pushing it, and auto playing, you could probably get me to watch similar things, but I doubt I will like them.
When I’m actually watching something, I want it to be complex, thoughtful, interesting and challenging. I don’t spend as much time watching but those are the things I watch with all of my attention.
I am actively trying to reduce my intake of low effort tv because it wastes my time, doesn’t actually give me much enjoyment, and takes attention away from things I do enjoy. It’s like comparing the books you read to doomscrolling Facebook. Reading time vs ratings are going to be very different.
There is Tastedive, which has given both great suggestions as well as recommended utter garbage to me in the past. Very hit or miss, but when it hits it hits.
Here is a long-ass post with some more, but they come with a huge disclaimer — they are VERY personal opinions:
If you care a lot about plot and hate holes, go for critic >70%. 60-69% is passable but only if you like the subject/genre of the movie.
Very personal opinion — I find any movie with critic <50% completely unwatchable. I literally want to walk out of the theater. This includes nearly every modern horror movie because characters in horror movies always do dumb things. I know that’s the appeal but I hate it.
The extremely rare horror movie with >85% critic probably won’t be scary but these are personally the only horrors I enjoy (e.g. The Cabin in the Woods).
Movies with audience scores below 60% are hard watches.
>90% critic movies are really well done as in they did their homework. Left no stone unturned. It doesn’t mean that it’s an objectively good or memorable movie (use IMDB scores for that).
If you like experimental movies and/or you’re into filmmaking, go for >90% critic and 65-85% audience for gems. If you’re not, you will HATE these movies.
But watch out — sometimes if you come across a movie with high critic and low audience, it’s a movie really for people in the movie industry. You have to read the synopsis to figure out which case it is. See Once Upon a Time in Hollywood.
Superhero movies and fandom movies (e.g. LOTR) need extra consideration. If they didn’t follow the source material, audience scores seem to be even lower than if it were original content. But it can also cut the other way.
If you’re a deep cut kind of person, check to make sure that the movie has a high enough rating count. Scores for less well-known movies are less accurate.
Old movies, especially those older than the 70s-80s, are harder to judge on RT. There seems to be a self-selection bias of people who are only rating those movies because they remembered liking them. But at the same time, they were also more revolutionary for their time (to be fair to the movie).
All of these tips are for movies. I watch few TV shows and don’t have insight into that side of RT.
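Collapsing those thresholds into one playful lookup, since they read like a decision procedure (again, these are the personal heuristics above, nothing official, and the sample scores are hypothetical):

    def verdict(critic, audience):
        # Personal rules of thumb from the tips above, not objective truth.
        if critic < 50:
            return "unwatchable (for me)"
        if critic > 90 and 65 <= audience <= 85:
            return "possible experimental gem -- read the synopsis first"
        if audience < 60:
            return "hard watch"
        if critic > 70:
            return "solid plot, few holes"
        if 60 <= critic <= 69:
            return "passable if you like the genre"
        return "check both scores and the rating count"

    print(verdict(94, 72))  # hypothetical scores -> experimental-gem territory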
>If you like experimental movies and/or you’re into filmmaking, go for >90% critic and 65-85% audience for gems. If you’re not, you will HATE these movies.
Just checked some random “weird arthouse” movies I liked, mostly horror genre. It holds up pretty well. And I wouldn't in a million years recommend these movies to anyone that I didn't already know liked "these types" of movies.
The Blackcoat's Daughter: Critic 77%, Audience 52%
Possessor: Critic 94%, Audience 59%
The Lighthouse: Critic 90%, Audience 72%
The Killing of a Sacred Deer: Critic 79%, Audience 63%
The last comedy that I saw that matches your description is American Fiction. It didn’t feature too many laugh out loud moments, but it was thought provoking and well done. And yet, 93% from critics and 95% from audiences.
I wonder if audiences can appreciate these movies more than you give them credit for?
Let’s try a few more
- Death of Stalin (94%, 79%) has the pattern you’ve predicted.
- O Brother Where Art Thou? (78%, 89%) has the opposite of the pattern.
- Grand Budapest Hotel (92%, 87%) was appreciated by both, like American Fiction.
I’m just not seeing a pattern here. Looking at comedies that fit your description the critics and audience scores don’t follow a predictable 95%, 70% pattern.
- Ratings are very personal. I find some movies funny that others don’t.
- There are more factors involved, but there’s no point mentioning them because the movies I like are not the movies you might like. Everyone has to find their own multi-dimensional, multi-axis criteria.
- And lastly, to repeat what someone else said — I see RT scores as a tool, not a verdict. It just has to be accurate enough where I consistently can pick movies I will enjoy.
The classification of "comedy" seems to be a bit ambiguous.
Funny People with Adam Sandler is considered a comedy, and has a trailer to match. But the actual content of the movie is that of a drama / tragedy. (69% critics, 48% audience.)
The Bear (TV show) is called a comedy but everything I've read paints it as... drama.
American Fiction, for me, was a thoughtful drama with dark humor. And I think that's what the audience expected so the scores match. I never thought it was a comedy.
Maybe this is a me problem where I don't consider things comedy when others do.
I mean, Wes Anderson movies aren't exactly comedy either. They are whimsical and silly, and can elicit laughter, but the stories are dramatic.
I generally find IMDB user scores far more reliable and granular for movies. There is a noticeable jump in a movie's quality when it gets a 6.x rating (okay), versus a 7.x (great) versus an 8.x (a Top 500 of all time).
Metacritic is the next most useful, while Rotten Tomatoes is easily the least useful. High critic and user RT scores often do not provide a good barometer of how good the film actually is. Over the last ten years I went from being a loyal RT user to completely ignoring their scores altogether.
I have to disagree with your take on "High critic score / low audience score". There is a swathe of more challenging, experimental, or art-house movies that fall into this category. These reviews fill the void where an audience-only place like IMDB falls short.
Maybe we need to define "bad," because I would argue an enjoyable movie is good. Movies don't need to be avant garde to be good. They just need to be entertaining.
I wish this were true, but the critics as a polity just don't have a sophisticated palate. Many individual critics much like many general audience members do, I'm sure.
This sounds appealingly highbrow, but it's not particularly accurate in my experience. I think the critics often get into their own bizarre headspace that nobody else cares about.
I feel pretty confident that Captain Marvel, Emilia Perez, Spy Kids, Sausage Party, The Last Jedi, and Ghostbusters 2016 aren't at much risk of going over the heads of audiences, but you're entitled to your opinion if you think they count as avant garde cinema
I’m not goal oriented and follow my instincts. There’s no way you could get me to write a 5 year plan for myself.
But I will never pick the fork in the road where I will probably be worse off in 5 years. I won’t take a job where I make good money but sit in a corner doing little, for example. I will regret it.
+1 for TickTick. It gets the job done and I love that I can set reminders for my tasks that I can snooze from my Apple Watch or computer desktops instead of having to rely on Apple Reminders.
Sometimes you can link the bad years of a generally reliable vendor to a new part, e.g. the first year they introduced a 10-speed transmission.
These first years are scary.
Some vendors don’t seem to change major parts as often, which helps their reliability.
There were economies of scale back then with OS-level UI components.
If Microsoft spent money on UX research that improved its UI controls, it would benefit a lot of people. Essentially the cost of that research was borne once on behalf of all application developers.
The problem now? Every company is designing their own UI components. Every company has to bear the cost of UX research individually. It’s a lot of wheel re-inventing. UX easily takes a backseat.
I work on design systems for an enterprise software company. I was talking with one of the engineers on the team about how great it would be if there were better built in browser-based solutions for things like autocomplete, select and multi-select.
Maximizing productivity comes from maximizing efficiency. Efficiency is about sitting down, analyzing your tasks holistically, and min-maxing to ensure every process achieves its greatest result.
Now ask yourself this: how often do people do this at all? Pretty much never. Most of us only do it when we have to: because we aren’t making enough money, because our application is slow, because we can barely meet budget, or because we are trying to land on the moon and failing costs too much.
The other hard lesson learned during the Covid shutdowns (actually, it is learned and promptly forgotten at every major crisis):
maximizing efficiency also maximizes the number of single points of failure in your system. Anything that goes wrong breaks a system optimized for efficiency.
You need to have resilience and redundancy to deal with variability, but those cost money.
Much better from the management perspective to ignore those risks, cash in on the cheap profits, blame "unexpected events", and get a bailout when things go wrong. They pocket the easy money and face no downside consequences.
I'm wondering how many people read it to the bottom? It makes the case that productivity analysis tools already exist, maximally productive systems have some slack in them and 100% efficiency isn't just impossible, it's counterproductive (for the reason you state).
But the problem with AI is that it adds a random element which makes any kind of modelling much harder. Sometimes you get good results fast; sometimes it wastes a lot of time and holds up the project.
You never know what you're going to get. So any kind of project planning becomes even harder.
But that's not even the main point. The subtext - which isn't stated - is that the C-suite has persuaded itself that AI is a system that is more controllable and predictable than human employees.
When in fact - as anyone working at the coal face knows - it's the opposite.
And that's a problem, for all kinds of reasons. The obvious ones, like the loss of expertise through career progression, have already been talked about.
The less obvious one being discussed here is that the more AI is used, the less predictable all kinds of projects become - both in time and quality.
And if the economy is now being designed on the assumption the opposite is true, that's not going to end well.
In previous phases of industrial revolution consistency was the bedrock benefit.
Trying to create a revolution out of inconsistency is a very risky click.
> You need to have resilience and redundancy to deal with variability, but those cost money.
Reiterating for truth, and also to expand upon the point:
These are things that cost money all the time, but only pay off visibly in a crisis. And it has to be a crisis of the right kind (if your headquarters burns down in a wildfire, it won't help you to have 225% coverage on every role). So this makes it very difficult to justify to people who think only numbers—and only specific kinds of numbers—matter.
But redundancy and resiliency, at least in most cases, also make the lives of everyone working there better. They mean, among other things, that if one person needs to get surgery, or take their child to the doctor, or just go on vacation, there are still enough people to keep the work flowing smoothly. The people still there won't be hopelessly overloaded, the work will get done, and the one person who's out won't have a mountain of catch-up work to do when they get back.
The only drawback is that it means you're paying people to work at a rate that means they regularly have downtime and aren't "fully utilized" constantly. (By definition.)
Onerous tasks can also be shared between redundant positions so that no one person has to do them all the time. I’ve left places that were spiraling specifically to avoid being the last person who knows a terrible task. Sorry, other guy.
You also spend a lot more time on communication. If you have one person who is the resource for X, they can spend their time doing X and don't have to spend time on coordination. When the procedures for doing X change, only one person needs to figure it out, etc.
That doesn't mean having a single person is the right decision, the benefits of having multiple people are important; just want to be clear about the drawbacks of multiple people doing the work.
The lemonade to be made, however, is that if you don’t talk about your work you don’t reflect on it, and it’s more difficult to improve if you don’t examine your work and the work of others performing the same task.
That does make you more disposable, but also more useful if you can embrace it.
...But, on the flip side, that extra communication, and especially making sure that procedures are documented somewhere so that everyone can reference it, rather than just having it all live in one person's head, are vital for institutional stability and continuity.
Queuing theory. You can only fill a pipeline around 60% full before you start seeing measurable delays, and 80% is where the wheels begin to come off. The line goes very vertical shortly after 80%.
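For concreteness, here's the standard M/M/1 back-of-envelope behind that knee (a textbook model, not anything from the article): mean time in the system is 1/(mu - lambda), which blows up as utilization rho = lambda/mu approaches 1.

    # M/M/1 queue: mean time in system W = 1 / (mu - lambda).
    mu = 1.0  # service rate: jobs per unit time

    for rho in (0.5, 0.6, 0.7, 0.8, 0.9, 0.95):
        lam = rho * mu                 # arrival rate at this utilization
        wait = 1 / (mu - lam)          # in multiples of the bare service time
        print(f"{rho:.0%} utilized -> {wait:.1f}x service time in system")

Running it: 60% utilization already costs 2.5x the bare service time, 80% costs 5x, and 95% costs 20x, which is exactly the "very vertical" region.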
> Efficiency is about sitting down and analyzing your tasks holistically and doing min-maxing...
Accurate analysis depends on accurate data. Which is why some people I know are required to account for their work activities in 7-minute increments, have frequent and detailed meetings to account for progress, project planning, etc.
Actually, doing that analysis has a cost that must be discounted from any perceived gains of efficiency, to be truly efficient. One learns to tolerate some waste to avoid the ultimate time-suck.
To say nothing of maintenance and sustainability -- if you're always sprinting, you're doomed to fall.
For factory optimization they didn't ask people to track their hours in 7-minute intervals. They put a camera on them, analyzed what they did over an entire shift, and then redesigned the factory line to optimize things. Having people manually track time has far more overhead and is less accurate in general. In practice I've never seen a software org take estimates or time tracking seriously beyond "Track your hours in 15-minute increments against tickets".
Boeing does 6 minutes (a tenth of an hour), which means it’s impossible to log a 45-minute meeting. My coworkers decided that if the meeting ran over it was .8 hours, and if we cleared out on time or earlier it was .7 hours.
I don’t know why you would do 7. That doesn’t divide into 60 at all. Nor 480.
Maximizing efficiency is how you get your lunch eaten by smaller companies: they maximize effectiveness instead. But that’s hard to measure except in customer satisfaction and market share. So it takes a lot of storytelling to get that far, and it can be broken quickly by the wrong execs being hired or the right ones leaving.
I disagree with the toplevel statement. In high-school science terms, productivity is like accuracy whereas efficiency is like precision. You can be very "productive" at a high-level without being efficient. Similarly you can be quite efficient at your tasks while not managing to achieve the toplevel goal.
You may argue that "analyzing your tasks holistically" will make you more efficient at achieving your goals. But min-maxing is for tactics, and is antithetical to strategy. Very often, taking a nap or a bath or walk will yield an insight that changes the entire game, obviating weeks or months of efficient and diligent work. We've all experienced this, so why do we insist on "maximizing efficiency"?
> Now ask yourself this: how often do people do this at all? Pretty much never. Most of us only do it when you have to, because you aren’t making enough money, because your application is slow, because you can barely meet budget, or because you are trying to land on the moon and failing costs too much.
I do it most of the time at work because I find joy in finding the best possible solution.