It's a peer review platform built on atproto tech (aiui, that's the vision), not meant to be social media, though I wouldn't be surprised if it has elements of that.
Peer review goes beyond the formal process, out into the court of IRL. Social media is one place people talk about new research and share their evaluations and insights, and good work gets used and cited more.
arXiv has been invaluable in starting to change the process, but we need more.
Has a bit of a leg up in that if it's only academics commenting, it would probably be way more usable than typical social media, maybe even outright good.
Calling it peer review suggests gatekeeping. I suggest no gatekeeping: just let any academic post a review, add upvotes/downvotes, and let crowdsourcing handle the rest.
While I appreciate no gatekeeping, the other side of the coin is gatekeeping via bots (vote manipulation).
Something like Rotten Tomatoes could be useful: have a list of "verified" users (critic score) in a separate voting column from anonymous users (audience score), something like the sketch below.
This would often prove useful in highly controversial situations for parsing out the common narratives.
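A rough sketch of how those two columns could be tallied (the names and the simple upvote/downvote model here are hypothetical, just to illustrate the idea):

```python
# Hypothetical sketch: tally "critic" (verified) and "audience"
# (anonymous) scores separately, Rotten Tomatoes style.
from dataclasses import dataclass

@dataclass
class Review:
    reviewer_id: str
    verified: bool   # e.g. confirmed academic affiliation
    positive: bool   # upvote (True) or downvote (False)

def dual_scores(reviews: list[Review]) -> dict[str, float | None]:
    """Percent-positive for verified and anonymous reviewers, kept apart."""
    def pct(group: list[Review]) -> float | None:
        return 100 * sum(r.positive for r in group) / len(group) if group else None
    return {
        "critic_score": pct([r for r in reviews if r.verified]),
        "audience_score": pct([r for r in reviews if not r.verified]),
    }
```

Keeping the two pools separate means bot-driven vote manipulation in the anonymous column can't contaminate the verified column.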
I'm not sure anonymous users should be able to join. arXiv's system of only allowing academic users seems fine for this, although exceptions could be made for industry researchers.
I feel vindicated when I say that the superintelligence control problem is a total farce, we won't get to superintelligence, it's tantamount to a religious belief. The real problem is the billionaire control problem. The human-race-on-earth control problem.
I don’t believe the article makes any claims on the infeasibility of a future ASI. It just explores likely failure modes.
It is fine to be worried about both alignment risks and economic inequality. The world is complex, there are many problems all at once, we don’t have to promote one at the cost of the other.
Yeah, article aside, looking back on all the AGI stuff from the last year or so really puts our current moment in perspective.
This whole paradigm of AI research is cool and all, but it's ultimately a simple machine that probabilistically forms text. It's really good at making stuff that sounds smart, but like an AI picture, it falls apart the harder you look at it. It's good at producing stuff that looks like code and often kinda works, but based on the other comments in this thread, I don't think people really grasp how these models work.
- Hiring managers can actually focus on the resumes in their inbox and assume that people are genuinely interested in the role.
- The whole system works more efficiently.
From day one I thought the whole notion of AI in the hiring process (on both sides: candidates submitting AI resumes, HR filtering resumes with AI) was positively absurd. I hope more people catch on.
Shareholders are the users that big businesses now focus on, not the actual end users anymore. They care more about the stock than the quality of the product.
I told somebody that Palantir is building the maid services and rat poison for a post-lower/middle class society. They didn’t believe me. Seeing this is vindicating.
The irony is Tyson's is an absolutely horrendous organization and ruins food left and right. Not to mention the absurd living conditions for the animals they feed us.
I’m not sure this is such a reality check. I remember figuring this out maybe a month or so after November 2022, when ChatGippity first dropped. Like, if it’s a “do anything platform,” won’t the first anything be to cannibalize the low-hanging anythings, followed by progressively higher-hanging anythings, until there’s no work left?
Like play out AI, it sucks for everybody except the ones holding the steering wheel, unless we hold them accountable for the changing landscape of stake-in-civilization distribution. Spoiler: haha, we sure fucking aren’t in the US.
> Like play out AI, it sucks for everybody except the ones holding the steering wheel
Not true. Models don't make owners money sitting there doing nothing - they only get paid when people find value in what AI is producing for them. The business model of AI companies is actually almost uniquely honest compared to the rest of the software industry: they rent you a tool that produces value for you. No enshittification, no dark patterns, no taking your data hostage, no turning what should've been a product into a service. Just a straightforward exchange of money for value.
So no, it doesn't suck for everyone except them. It only sucks for existing businesses that find themselves in competition with LLMs. Which, true, is most of the software industry, but that's just something that happens when a major technological breakthrough arrives. Electricity, the Internet, and internal combustion engines did the same thing to many past industries, too.
> they only get paid when people find value in what AI is producing for them
The people "finding value in them" are other people with money to throw at businesses: investors, capital firms, boards & c suites. I'm not sure anybody who has been laid off because their job got automated away is "finding value" in an LLM. There's a handful of scrappy people trying to pump out claude-driven startups but if one person can solo it, obviously a giant tech company can compete.
*blank stare* They're not taking my data hostage, but they're sure as shit taking my data.
I think we just fundamentally disagree on all of this. You may be right, and I hope you are. I go back and forth on whether it's going to be a gentle transition or a miserable one. My money is on the latter.
In the sense that a dark pattern is anything designed to trick people into doing something they didn't necessarily consciously want to do, the entire AI industry is an oligarch's wet dream of a dark pattern: every day we're teeing them up latent information on human-level patterns of control that I promise you LLM providers are foaming at the mouth to replicate. If you've got an effective "doing" system and an effective "orchestrating" system, that's AGI. Deployed at scale, at competitive cost, and with even a 1.1x improvement over a regular workforce, that's game over for anybody but billionaires. There will be a slow, dynamic deplatforming of regular people, followed by an extermination. Palantir is building the rat poison and maid service.
> The people "finding value in them" are other people with money to throw at businesses: investors, capital firms, boards & c suites. I'm not sure anybody who has been laid off because their job got automated away is "finding value" in an LLM.
And the millions with ChatGPT (and other LLM) subscriptions, using it for anything from for-profit and non-profit work to hobby projects and all kinds of matters of personal life.
Contrary to a very popular belief in tech circles, AI is not only about investors. It's a real technology affecting real people in the real world.
In fact, I personally don't give a damn about investors here, and I laugh at the "AI bubble" complaints. Yes, it's a bubble, but that's totally irrelevant to the technology being useful. Investors may go bankrupt, but the technology will stay. See, e.g., the history of rail in the United States: everyone who fronted capital to lay down rail lines lost their shirt, but the hardware remained, and people (including subsequent generations of businesses) put it to good use.
Yeah I'm hopeful that they spend the next software update atoning for their UI sins.
I remember being really excited for Liquid Glass, because it felt like a return to the good old days of Skeuomorphism, at least in some spirit. In reality, it was a botched delivery, I suspect for two reasons:
1. Trying to unify all of their design (in one year no less) against one style -- developed primarily on Apple Watch & the now defunct Vision Pro -- was a colossal undertaking.
2. There's so much goddamn software packed into each OS that you're going to inevitably be stuck with bloated menus. Imagine Apple releasing OS 27 this year and saying "we're stripping you down to the bare bones. It's going to feel like Snow Leopard, but we're going to give you customization menus to alter that experience." I would lose my mind with joy. I'd be so excited to be able to operate my fucking phone again.
No, Liquid Glass was stunted from its conception. Any UI designer worth his salt could have pointed out the legibility issues immediately.
The fact that no one (in power) saw a problem with Liquid Glass shows that Jobs was right: letting the MBAs take power never works out. And he was wrong to appoint Cook. Remember that Jobs made MacBooks "expensive" (no more expensive than, and sometimes cheaper than, a Vaio or Portege) because he wanted to make great devices with a great UX and UI, which required a certain level of investment. Jobs loved his users. Cook only loves his shareholders.
But how would they do that without scrapping the whole version?
Their marketing for this year relies heavily on Liquid Glass, but if they remove the shiny stuff it's not very pretty, just functional. Functional is what people with work to do appreciate; marketing people will want the shiny back now that it's been introduced.
I really don't think the shininess is the issue at hand. It's interface clutter. My iPhone is so cluttered. It's packed full of software I'll never use. I wade through menu options I'll never use.
I like the look of Liquid Glass and I'm generally for it. It just needs to be organized better.
> just functional
This is ultimately what I disagree with. I think iOS/macOS have become entirely dysfunctional. Software is broken, webpages are broken simply because they're running from OS 26. Alarms and calendar events either run randomly or not at all. The system preferences menu is hardly navigable. I could go on. Maybe I'm just getting old and crusty, and yearn for the days when Steve Jobs was running the ship.
They just pack needless software in and do nothing to keep it organized/usable.
> But how would they do that without scrapping the whole version?
There is a way for them to fix this while saving face. You see, Liquid Glass™ was just the first of their incredible new Material Design paradigm. Now introducing Apple Stone™, Apple Paper™, Apple Linen™ and Apple Brushed Metal™. All just as realistic as Liquid Glass™.
> Yeah I'm hopeful that they spend the next software update atoning for their UI sins.
I have heard Liquid Glass was in development for two years, so I see no hope of them spending all that money over again. Never mind all the developers who have redesigned their apps for iOS 26.
They could just re-release iOS 18, but that would piss me off as a developer.
This is why I left the Apple ecosystem last month; I see no hope.
We have a monorepo, and we use automated code generation (openapi-generator) for API clients for each service, derived from an OpenAPI.json generated by the server framework. Service client changes cascade instantly. We have a custom CI job that trawls git and figures out which projects changed (including dependencies) to compute which services need to be rebuilt/redeployed. We may just not be at scale—thank God. We're a small team.
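A rough sketch of what that kind of CI step could look like (the monorepo layout, service names, and dependency map here are hypothetical, not our actual setup):

```python
# Hypothetical sketch: decide which services need a rebuild by diffing
# against the main branch. Assumes a layout like services/<name>/ plus
# shared packages under libs/; the DEPENDENCIES map is illustrative.
import subprocess

DEPENDENCIES = {
    "billing": {"libs/api-client", "libs/auth"},  # service -> internal deps
    "reports": {"libs/api-client"},
}

def changed_paths(base: str = "origin/main") -> set[str]:
    """Paths touched between the merge base with `base` and HEAD."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(out.splitlines())

def services_to_rebuild(paths: set[str]) -> set[str]:
    """A service rebuilds if its own tree or any of its deps changed."""
    rebuild = set()
    for service, deps in DEPENDENCIES.items():
        roots = {f"services/{service}"} | deps
        if any(p.startswith(tuple(r + "/" for r in roots)) for p in paths):
            rebuild.add(service)
    return rebuild

if __name__ == "__main__":
    print(sorted(services_to_rebuild(changed_paths())))
```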
Monorepo vs multiple repos isn't really relevant here, though. It's all about how many independently deployed artifacts you have. e.g. a very simple modern SaaS app has a database, backend servers and some kind of frontend that calls the backend servers via API. These three things are all deployed independently in different physical places, which means when you deploy version N, there will be some amount of time they are interacting with version N-1 of the other components. So you either have to have a way of managing compatibility, or you accept potential downtime. It's just a physical reality of distributed systems.
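One common way of managing that compatibility across the N / N-1 overlap window is the tolerant-reader pattern: each component accepts both the old and new shape of its neighbor's data. A minimal sketch (the payload fields are hypothetical):

```python
# Hypothetical sketch of a tolerant reader: the frontend survives the
# N / N-1 overlap window by defaulting fields an older backend doesn't
# send yet, and by never reading fields it doesn't know about.
def parse_user(payload: dict) -> dict:
    return {
        "id": payload["id"],                      # present in every version
        "name": payload.get("name", ""),          # default if backend is older
        "avatar_url": payload.get("avatar_url"),  # new in version N; may be absent
    }
    # Extra keys a newer backend adds are simply ignored, so they
    # can't break this client.
```

The same idea applies to the database layer: add columns as nullable first, deploy code that handles both shapes, then tighten.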
> We may just not be at scale—thank God. We're a small team.
It's perfectly acceptable for newer companies and small teams to not solve these problems. If you don't have customers who care that your website might go down for a few minutes during a deploy, take advantage of that while you can. I'm not saying that out of arrogance or belittlement or anything; zero-downtime deployments and maintaining backwards compatibility have an engineering cost, and if you don't have to pay that cost, then don't! But you should at least be cognizant that it's an engineering decision you're explicitly making.