
IMO, this doesn't seem like a coordinated strategic move (if it were, they would've done it much better than this) but more of an effort to save computational resources. You'd be surprised by how much the web has grown and how much its signal-to-noise ratio has degraded. My gut says more than 99% of new web pages are auto-generated crap. The problem is there's no economical way yet to figure out which pages are garbage and which are genuine content. They should develop better, scalable technology for that (and it's fair to say they should've focused more on this), but LLMs are still too expensive to run and vulnerable to lots of attack vectors.


The real problem, imho, is that the design of Google and other search engines is fundamentally flawed and can't be fixed.

They want a single box that can do everything, because they don't think a user can be given even a minimal amount of training to use a search engine, and they likewise want all results ranked in a single, linear list.

It's becoming increasingly clear that when you do that, all long tail results turn into garbage.

Google has become essentially a phonebook. If you type the name of a thing, Google can quickly find its official website. Anything more complex than that and it quickly derails.

A good example is reviews. Google literally can't find reviews if you type "reviews" in the query, because it can only find webpages that strictly contain the terms you typed. So for a review of something to be found, the writer would have to literally write "reviews" somewhere on their page, or Google would have to interpret the query non-literally and heuristically categorize webpages as reviews. That second approach is error-prone: you end up with lots of webpages that aren't reviews showing up when you search for reviews, while the word "reviews" you typed is apparently ignored completely.
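Here's a toy sketch of that failure mode (the pages and names are made up, and this is strict term matching in the abstract, not anything about Google's real index):

  # Strict term matching: only return pages that literally contain every
  # query token. A genuine review that never uses the word "reviews" is
  # invisible, while a store page and an SEO page both match.
  pages = {
      "acme-widget-review-blog": "I tested the Acme Widget for a month. Verdict: buy it.",
      "acme-widget-store": "Acme Widget official store. Read customer reviews here!",
      "random-seo-spam": "best reviews reviews reviews acme widget cheap deals",
  }

  def literal_search(query, pages):
      terms = query.lower().split()
      return [name for name, text in pages.items()
              if all(t in text.lower() for t in terms)]

  print(literal_search("acme widget reviews", pages))
  # -> ['acme-widget-store', 'random-seo-spam']
  # The actual review is missed; the two non-review pages are returned.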

The whole authority concept also feels fundamentally flawed.

If the most brilliant researcher in a field has a blog that is a gold mine of information, that blog is never going to appear in any search result, because this person is too busy doing research to waste their valuable time building the backlinks needed to rank.
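As a sketch of why (this is a generic link-based authority score in the spirit of PageRank, not Google's actual formula, and the page names are invented): the score depends only on who links to a page, never on what the page says.

  # Toy link graph: three pages that link to each other, and one
  # excellent blog that nobody links to.
  links = {
      "seo-farm-a": ["seo-farm-b", "content-mill"],
      "seo-farm-b": ["seo-farm-a", "content-mill"],
      "content-mill": ["seo-farm-a", "seo-farm-b"],
      "brilliant-blog": [],  # no backlinks, no outlinks
  }

  def link_authority(links, damping=0.85, iters=50):
      pages = list(links)
      rank = {p: 1.0 / len(pages) for p in pages}
      for _ in range(iters):
          new = {p: (1 - damping) / len(pages) for p in pages}
          for p, outs in links.items():
              for q in outs:
                  new[q] += damping * rank[p] / len(outs)
          rank = new
      return rank

  for page, score in sorted(link_authority(links).items(), key=lambda kv: -kv[1]):
      print(f"{page:15} {score:.3f}")
  # The link-swapping pages come out on top; the unlinked blog comes out
  # last, no matter how good its content is.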

There are too many cases where "ranking" goes wrong. It's wrong to assume Google can deliver the best result as the first result, since it has no way to make value judgements about the content of articles, yet this assumption apparently drives Google to implement policies that hurt its own results. They seem to care only about the first result. They don't want users to browse a whole list of results. I believe "browsing" is the key word here. If it's not possible to browse, it's not possible to find things yourself, and you depend entirely on a black-box machine to do the finding for you. When the machine fails, there is no recourse.


> If you type the name of a thing, Google can quickly find its official website.

with five ads before it pretending to be its official website



