
Is this the start of more frequent code migrations out of GitHub?

For years, the best argument for centralizing on GitHub was that it was where the developers were. It's where you could have pull requests managed quickly and easily between developers and teams that otherwise weren't related. Getting random PRs from the community had very little friction. Most of the other features were `git` specific (branches, merges, post-commit hooks, etc), but pull requests, code review, and CI actions were very much GitHub specific.

However, with Copilot et al. getting pushed harder through GitHub (and the now-reverted Actions pricing changes), having so much code in one place might not be enough of a benefit anymore. There is nothing about Git repositories that inherently requires GitHub, so it will be interesting to see how Gentoo fares.

I don't know if it's a one-off or not. Gentoo has always been happy to do their own thing, so it might just be them, but it's a trend I'm hearing talked about more frequently.


I'm really looking forward to some form of federated forking and federated pull requests, so that it doesn't matter as much where your repository is.

For those curious, the federation roadmap is here: https://codeberg.org/forgejo-contrib/federation/src/branch/m...

I'm watching this pretty closely. I've been mirroring my GitHub repos to my own Forgejo instance for a few weeks, but am waiting for more federation before I reverse the mirrors.

Also will plug this tool for configuring mirrors: https://github.com/PatNei/GITHUB2FORGEJO

Note that Forgejo's API has a bug right now and you need to manually re-configure the mirror credentials for the mirrors to continue to receive updates.


I use GitHub because that's where PRs go, but I've never liked their PR model. I much prefer the Phabricator/Gerrit ability to consider each commit independently (that is, have a personal branch 5 commits ahead of HEAD, and be able to send PRs for each without having them squashed).

I wonder if federation will also bring more diversity into the actual process. Maybe there will be hosts that let you use that Phabricator model.

I also wonder how this all gets paid for. Does it take pockets as deep as Microsoft's to keep npm/GitHub afloat? Will there be a free, open-source commons on other forges?


Unless I misunderstood your workflow, the Forgejo AGit approach mentioned in the OP might already cover that.

You can push any ref, not necessarily HEAD. So as long as you send commits in order from a rebase on main, it should be OK, unless I got something wrong from the docs?

https://forgejo.org/docs/latest/user/agit-support/


Personally, I'd like to go the other way: not just that PRs are the unit of contribution, but that rebased PRs are a first-class concept and versioning of the changes between entire PRs is a critical thing to track.

> and be able to send PRs for each without having them squashed

Can't you branch off from their head and cherry-pick your commits?


That's effectively what I do. I have my dev branch, and then I make separate branches for each PR with just the commit in it. Works well enough so long as the commits are independent, but it's still a pain in the ass to manage.

GitLab has been talking about federation at least between instances of itself for 8+ years: https://gitlab.com/groups/gitlab-org/-/epics/16514

Once the protocols are in place, one hopes that other forges could participate as well, though the history of the internet is littered with instances where federation APIs just became spam firehoses (see especially pingback/trackback on blog platforms).


That's kind of the way Tangled works, right? Although it's Yet Another Platform so it's still a little bit locked in...

I just want a forge to be able to let me push up commits without making a fork. Do the smart thing for me, I don't need a fork of a project to send in my patch!

This is supported on Codeberg (and Forgejo instances in general) via the "AGit workflow", see https://forgejo.org/docs/latest/user/agit-support/

Agreed. I assume there are reasons for this design choice though?

I would love the git-bug project[1] to succeed in achieving that. That way Git forges are just nice web porcelain on top of data that is very easy to migrate.

[1] https://github.com/git-bug/git-bug


So... git's original design

No. Git is not a web-based GUI capable of managing users and permissions, facilitating the creation and management of repositories, handling pull requests, handling comments and communication, doing CI, or a variety of other tasks that sites like Codeberg and Forgejo and GitLab and GitHub do. If you don't want those things, that's fine, but that isn't an argument that git subsumes them.

Git was published with out-of-the-box compatibility with a federated system supporting almost all of that: email.

Sure, the world has pretty much decided it hates the protocol. However, people _were_ doing all of that.


People were doing that by using additional tools on top of git, not via git alone. I intentionally only listed things that git doesn't do.

There's not much point in observing "but you could have done those things with email!". We could have done them with tarballs before git existed, too, if we built sufficient additional tooling atop them. That doesn't mean we have the functionality of current forges in a federated model, yet.


`git send-email` and `git am` are built into Git, not additional tools.

That doesn't cover tracking pull requests, discussing them, closing them, making suggestions on them...

Those exist (badly and not integrated) as part of additional tools such as email, or as tasks done manually, or as part of forge software.

I don't think there's much point in splitting this hair further. I stand by the original statement that I'd love to see federated pull requests between forges, with all the capabilities people expect of a modern forge.


I think people (especially those who joined the internet after the .com bubble) underestimate the level of decentralization and federation that came with the old-school protocols (email, Usenet, and maybe even IRC), from before the web-centric, mainframe-like thin-client mentality took hold.

Give me the “email” PR process anytime. Can review on a flight. Offline. Distraction free. On my federated email server, and have it work with your federated email server.

And the clients were pretty decent at running locally. And it still works great for established projects like the Linux kernel, etc.

It’s just a pain to set up for a new project, compared to pushing to some forge. But not impossible. Return to the intentionality of email, with powerful clients doing threading, sorting, syncing, etc., locally.


I'm older than the web. I worked on projects using CVS, SVN, mercurial, git-and-email, git-with-shared-repository, and git-with-forges. I'll take forges every time, and it isn't even close. It's not a matter of not having done it the old way, it's a matter of not wanting to do it again.

What I like about git-and-email-patches is the barrier to entry.

I think it's dwm that explicitly advertises a small and elitist userbase as a feature/design goal. I feel like mailing lists as a workflow serve a similar purpose, even if unintentionally.

With the advent of AI slop as pull request I think I'm gravitating to platforms with a higher barrier to entry, not lower.


What is a forge? What is a modern forge? What is a pull request?

There is code or a repository, and there is a diff or patch. Everything else you're labeling as a pull request is unknown, not part of the original design, debatable.


Sorry to hear that you don't see the value in it. Many others do.

It's not what I meant.

The GitHub-style pull request is not part of the original design. Which aspects and features do you want to keep, and what exactly do you say many others are interested in?

We don't even know what a forge is. Let alone a modern one.


And the forks network display.

Find a project, find out if it's the original or a fork, and either way, find all the other possibly more relevant forks. Maybe the original is actually derelict but 2 others are current. Or just forks with significant different features, etc. Find all the oddball individual small fixes or hacks, so even if you don't want to use someone's fork you may still like to pluck the one change they made to theirs.

I was going to also mention search, but that can probably be had about as well with regular Google, at least for searching project names and docs to find that a project exists. But maybe code search is still only within GitHub.


I really like @mitchellh's perspective on this topic of moving off GitHub.

---

> If you're a code forge competing with GitHub and you look anything like GitHub then you've already lost. GitHub was the best solution for 2010. [0]

> Using GitHub as an example, but all forges are similar, so not singling them out here: This page is mostly useless. [1]

> The default source view ... should be something like this: https://haskellforall.com/2026/02/browse-code-by-meaning [2]

[0] https://x.com/mitchellh/status/2023502586440282256#m

[1] https://x.com/mitchellh/status/2023499685764456455#m

[2] https://x.com/mitchellh/status/2023497187288907916#m


The stuff he says in [1] completely does not match my usage. I absolutely do use fork and star. I use release. I use the homepage link, and read the short description.

I'm also quite used to the GitHub layout and so have a very easy time using Codeberg and such.

I am definitely willing to believe that there are better ways to do this stuff, but it'll be hard to attract detractors if it causes friction, and unfamiliarity causes friction.


I really don't get this... like you're a code checkout away from just asking Claude locally. I get that it's a bit of extra friction, but "you should have an agent prompt on your forge's page" is a _huge_, costly ask!

I say this as someone who does browse the web view for repos a lot, so I get the niceness of browsing online... but even then sometimes I'm just checking out a repo cuz ripgrep locally works better.


Person who pays for AI: We should make everything revolve around the thing I pay for

The amount of inference required for semantic grouping is small enough to run locally. It can even be zero if semantic tagging is done manually by authors, reviewers, and just readers.

This looks like a confusing mess to me.

for [1] he's right for his specific use case

when he's working on his own project, obviously he never uses the about section or releases

but if you're exploring projects, you do

(though I agree the tree view is bad for everyone)


I also check the license when I'm looking at a project for the first time. I usually only look at that information once, but it should be easily viewed.

I also look for releases if it's a program I want to install... much easier to download a processed artifact than pull the project and build it myself.

But, I think I'm coming around to the idea that we might need to rethink what the point of the repository is for outside users. There's a big difference in the needs of internal and external users, and perhaps it's time for some new ideas.

(I mean, it's been 18 years since Github was founded, we're due for a shakeup)


Hrm. Mitchell has been very level-headed about AI tools, but this seems like a rare overstep into hype territory.

"This new thing that hasn't been shipped, tested, proven, in a public capacity on real projects should be the default experience going forwards" is a bit much.

I for one wouldn't prefer a pre-chewed machine analysis. That sounds like an interesting feature to explore, but why does it need to be forced into the spotlight?



Oh FFS. Twitter really brings out the worst in people. Prefer the more deeply insightful and measured blog posting persona.

Aren't they literally moving off GitHub _because_ of LLMs and the enshittification that optimising for them causes? This line of thinking and these features seem to push people _off_ your platform, not onto it.

Coincidentally, my most-used project is on Codeberg, & is a filter list (for blockers such as uBlock Origin) for hiding a lot of Microsoft GitHub’s social features, upsells, Copilot pushes, & so on to try to make it tolerable until more projects migrate away <https://codeberg.org/toastal/github-less-social>.

Arch Linux have used their own gitlab instance for a long time (though with mirrors to GitHub). Debian and Fedora have both run their own infra for git for a long time. Not sure about other distros. I was surprised Gentoo used GitHub at all.

Pretty sure several of these distros started doing this with cvs or svn way back before git became popular even.


Both GitHub and now Codeberg are mirrors of Gentoo's self-hosted cgit repository.

I mean, gitlab is only from ~2019.

The first hit I could find of a git repository hosted on `archlinux.org` is from 2007; https://web.archive.org/web/20070512063341/http://projects.a...


Gitlab started in 2011. Which, granted, is still after 2007.

https://www.ycombinator.com/companies/gitlab


I would say it started with Zig.

For us Europeans it has more to do with being local than reliability or Copilot.


I hope so. Ever since Trump and the US corporations declared software-war against Europeans, I want to reduce all dependencies on US corporations as much as possible. Ideally to zero. Also hardware-wise. This will take a long time, but Canadians understood the problem domain here. European politicians still need to understand that Trump and his cronies changed things permanently.

It might also be a reflection of GitHub's frequent outages under Microsoft recently, and the GitHub Copilot push.

I moved one of my projects from Github to codeberg because Github can't deal with sha256 repositories, but codeberg can.

It's been going on for a while. The recent AI craze just accelerates it.

> code migrations out of GitHub

I hope so. When Microsoft embraced GitHub there was a sizeable migration away from it. A lot of it went to Gitlab which, if I recall correctly, tanked due to the volume.

But it didn't stick. And it always irked me, having Microsoft in control of the "default" Git service, given their history of hostility towards Free software.


I was thinking of something similar — instead of just two passes, couldn’t you also store different quantized values? If you have thousands of documents, you could narrow them down to a handful with a few bit-wise Hamming comparisons before doing a full cosine similarity on just those survivors. If you had more than one bitmap stored, you’d have fewer comparisons at each step too.

Would this work?
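Something like this is what I have in mind, as a rough Python/NumPy sketch (the names, the 1-bit quantization scheme, and the shortlist size are all made-up assumptions, just to illustrate the two stages):

```python
import numpy as np

def binarize(vecs: np.ndarray) -> np.ndarray:
    """1-bit quantization: keep only the sign of each dimension, packed into bytes."""
    return np.packbits(vecs > 0, axis=1)

def hamming(query_bits: np.ndarray, doc_bits: np.ndarray) -> np.ndarray:
    """Hamming distance from one packed query row to every packed doc row."""
    return np.unpackbits(query_bits ^ doc_bits, axis=1).sum(axis=1)

def search(query: np.ndarray, docs: np.ndarray, doc_bits: np.ndarray, shortlist: int = 50):
    # Stage 1: cheap bit-wise comparisons narrow thousands of docs down to a handful.
    candidates = np.argsort(hamming(binarize(query[None, :]), doc_bits))[:shortlist]
    # Stage 2: full cosine similarity only on the survivors.
    cand = docs[candidates]
    sims = cand @ query / (np.linalg.norm(cand, axis=1) * np.linalg.norm(query))
    return candidates[np.argsort(-sims)]

# docs: (N, d) float embeddings; precompute doc_bits = binarize(docs) once.
```

Storing additional quantization levels would just add more narrowing stages of the same shape before the final cosine pass.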


I came across this a week ago when I was looking at some LLM-generated code for a ToUpper() function. At some point I “knew” this relationship, but I didn’t really “grok” it until I read a function that converted lowercase ASCII to uppercase by using a bitwise XOR with 0x20.

It makes sense, but it didn’t really hit me until recently. Now, I’m wondering what other hidden cleverness is there that used to be common knowledge, but is now lost in the abstractions.


A similar bit-flipping trick was used to swap between numeric row + symbol keys on the keyboard, and the shifted symbols on the same keys. These bit-flips made it easier to construct the circuits for keyboards that output ASCII.

I believe the layout of the shifted symbols on the numeric row was based on an early IBM Selectric typewriter for the US market. Then IBM went and changed it, and the latter is the origin of the ANSI keyboard layout we have now.
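A tiny illustration of that bit-pairing (Python, purely illustrative; it matches the old bit-paired layout, not the modern ANSI one, where e.g. Shift+2 is '@'):

```python
# On bit-paired keyboards, Shift effectively flipped bit 0x10, so each digit's
# shifted symbol sat exactly 0x10 lower in the ASCII table.
for digit in "123456789":
    print(f"Shift+{digit} -> {chr(ord(digit) ^ 0x10)}")
# Prints ! " # $ % & ' ( )  -- the old shifted number row.
```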


xor should toggle?

That's correct, XOR toggles the bit; an unconditional toUpper would clear it (AND with ~0x20), while OR with 0x20 is toLower.

I left out that the line before there was a check to make sure the input byte was between ‘a’ and ‘z’. This ensures that if the char is already upper case, you don’t flip it back to lower case. And at that point, XOR, AND with ~0x20, or even subtracting 0x20 would all work. For some reason the LLM thought the XOR was faster.

I honestly wouldn’t have thought anything of it if I hadn’t seen it written as `b ^ 0x20`.
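For reference, a minimal sketch of the pattern being discussed (Python; the range check mirrors what the generated code reportedly did, but the function name and details here are illustrative, not the original code):

```python
# ASCII lowercase letters have bit 0x20 set ('a' == 0x61); uppercase letters
# have it clear ('A' == 0x41), so clearing/toggling that bit changes the case.
def to_upper_byte(b: int) -> int:
    if ord('a') <= b <= ord('z'):   # only touch lowercase ASCII letters
        return b ^ 0x20             # the bit is known to be set, so XOR clears it
    return b                        # everything else passes through unchanged

assert chr(to_upper_byte(ord('g'))) == 'G'
assert chr(to_upper_byte(ord('G'))) == 'G'   # already upper: unchanged
assert chr(to_upper_byte(ord('!'))) == '!'   # non-letters untouched
```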


The question isn’t whether the demand is real (supplies are low, so demand must exist). The question is whether the demand curve has permanently shifted, or whether this is a short-term issue. No one builds new capacity in response to short-term changes, because you’ll have difficulty recouping the capital expense.

If AI permanently increases hard drive demand above the current growth curve, then WD et al. will build new capacity, increasing supply (and reducing costs). But that really isn’t known at this point.


My post argues that the demand has permanently shifted.

By the way, plenty of people on HN and Reddit ask if the demand is real or not. They all think there's some collusion to keep the AI bubble going by all the companies. They don't believe AI is that useful today.


Usefulness and overvaluation are not mutually exclusive. AI is useful, but it is not nearly as useful as these companies' spending rates would have one believe.

If it is, then the world is going to lose pretty much all white collar jobs. That's not really the bright future they're selling either.


> My post argues that the demand has permanently shifted

The time horizon for this is murky at best. This is something you think, but can’t know. But, you’re putting money behind it, so if you’re right, you’ll make a good profit!

But for the larger companies (like WD), overbuilding capacity can be a big problem. They can’t plan factory expansion based on what might be a short-term bubble. That’s how companies go out of business. There is plenty to suggest that you’re right, that AI will cause permanently increased demand for computing/storage resources, because it is useful and does consume and produce a lot of new data and media.

But I’m still skeptical.

The massive increase in spending can’t be sustainable. We can’t continue to feed the AI beast at this rate and still have other devices. Silicon wafer fabs can’t be built on demand and take time. SSD/HD factories take time. I think we are seeing an expansion to determine who the big players will be in the next 3-5 years. Once that order has been established, I think we will fall back to more sustainable rates of demand. This isn’t collusion, it’s just market dynamics at play in a common market. Sadly, we are all part of the same pool, and so everything is expensive for all of us. At some point, though, the AI money will dry up or get more expensive. Then I think we’ll see a reversion back to “normal” demand, maybe slightly elevated, but not the crazy jump we’ve seen for the past two years.


Us being in the same pool as AI is one of the potential risks pointed out by AI safety experts.

To use an analogy, imagine you're a small fluffy mammal that lives in fertile soils in open plains. Suddenly a bunch of humans show up with plows and till you and your environment under to grow crops.

Maybe the humans suddenly won't need crops any longer and you'll get your territory back. But if that doesn't happen and a paradigm change occurred you're in trouble.


AI can be useful today, while also being insanely overvalued, and a bubble.

There will be a bubble. It's inevitable.

The most important question is whether we are in the 1994 or the 2000 of the bubble, for investors and suppliers like Samsung, WD, SK Hynix, and TSMC.

What about 10 years from now? 15 years? Will AI provide more value in 2040 than in 2026? The internet ultimately provided far more value than even peak dotcom bubble thought.


> The internet ultimately provided far more value than even peak dotcom bubble thought.

Yeah, but not to the early investors. The early investors lost their shirts. The internet provided a lot of value after the bubble popped and everyone lost money.


Static or pre-rendered static? I rarely come across the latter, but the former is pretty common.

There might be an institutional block in Google due to the way that Google Wave was received. Google has tried (a few times) to get chat to work. It's never quite lived up to expectations (or hype in the case of Wave). Knowing their history, I can see why they'd want to avoid trying to take on that market again. It's difficult to get enough traction with users to make it a successful product.

Not impossible, but it's not like they haven't tried before.


Wave's core ideas are at the heart of modern collaborative tools. It's just the UX that was poor. If they had stuck with it and refined it, they could have been the leader of this segment. That's something I could say about a lot of what Google does: they quit too fast and, maybe more importantly, they don't use the knowledge they got from their failures to improve.

It's the same with Inbox, which remains the best email client I ever used, but weirdly Gmail never got the core UX ideas which made it work so well. I would like to say Google doesn't get UX, but clearly they have great UX designers on board. It's just that they probably never get final say and are not first-class citizens.

For me, it's an issue of discipline. A lot of Google products seem to be built like R&D projects, with the mindset that goes with that. They don't have the discipline to do the boring refining work that great UX requires.


It’s not just that the UX in Wave was poor. They didn’t have one compelling use case that made sense to people, and they botched the launch.

1. They did the same “invite” thing they had done with gmail so you couldn’t get an account (even if you had a gmail account). They repeated this mistake with google+ also (a social network for people who work at google).

2. They basically had a working CRDT and said “you can use this for all sorts of things” (which is true), and a thin UX on it that implemented a sort of bizarro threaded chat with document sharing, and said “this will replace email” (which is blatantly untrue), and everyone was just confused.


> Google has tried (a few times) to get chat to work

The original gmail-integrated gchat/google-talk first released in 2005 was fabulous. If they had just kept developing it instead of repeatedly creating a new one, they would easily be the undisputed leaders in this space.


Sometimes you have to question whether the product organisation actually hampers effective delivery of products, as PMs chase career-winning moves.

Google leadership failed in chat because they forgot the most important thing: Metcalfe's law. The value of a network scales with the square of the number of users.

When they wanted to create new chat apps, they had a choice: do we force all of our users to move to the new app, or do we figure out a way to bridge the apps? They chose to force users to move.

The problem is, when you force people to move, you also give them the chance to leave and try new things. Instead of figuring out how to make the new chat app more valuable to the users it was meant to appeal to (by giving them access to Google's entire chat userbase without forcing anything on those users), they killed their existing user base in the hope of forcing them to move to the new app. They didn't, and now Google is an afterthought in the chat space.

They did the same thing with Google+ in general. They had a community of committed users sharing data with each other and commenting on stories on Google Reader. Instead of figuring out how to leverage that user base to contribute "content" to Google+ and to the users who would prefer this new interface, and thereby make the new interface more valuable, they killed Google Reader in an attempt to force those users to migrate to Google+. They didn't, and went elsewhere.

Google has repeatedly made the mistake of forcing their users to migrate from what they were used to, and every time they do they open the gates for those users to migrate outside of google.

Facebook has learned this lesson relatively well. They don't force users to migrate to Instagram/facebook or whatsapp/messenger. In the Instagram / facebook case they seem to be improving the ability of users to use their Instagram account to add content to facebook (though not in the reverse). While in the whatsapp/messenger case, they haven't forced anyone to migrate, but they also haven't had any interoperability. One would think the apps would have even more value if they could communicate with each other.


You could also look at it as: in order for a Slack competitor to compete with Slack's network effects, the new program will need to offer an easy way to extend chat workspaces with external collaborators. It's not impossible, but it does make Slack's moat explicitly clear.

The problem here is that companies artificially limit integrations, so it's impossible to exchange messages between different providers like how email works.

I’ve always thought that the proper competitor to Slack/Twitter/etc… was a protocol, not another service. Protocols enable competition; services just shift the market to another service.

We really need some legal requirements similar to "right to repair" for machinery. We need "right to integrate" for software. I don't know how you'd pull it off, or how much support for integrations would be "enough" but it would allow competitors to cross these network effect lock-in moats that large players are able to build.

This looks like a nice project!

I always have a love-hate relationship with bookmarks. I tend to treat bookmarks as a write-once, read-never datastore. I have a set of 2-3 bookmarklets that I use often, but almost never use other bookmarks. I do keep an archive of pages or links I find interesting, but I store those in a separate archive (self-hosted Karakeep).

So, I’m legitimately curious — for the author or others — how do you use bookmarks? What is your personal usage pattern? Do you have many pages you need to keep track of? Is there much churn or adding of new bookmarks? I’d like to make better use of my stored links, but right now it is really a write-only archive.


I use bookmark tags a lot, and rely on them to quickly find things in future.

I bookmark all sorts of things. Projects or articles that I think I'll likely need in future, issues which I report and might need to reference in future, etc.

I'm sure over 50% of my bookmarks were written and never read, but I definitely query all sorts of old bookmarks nearly every day.


Thank you! I have similar issues with bookmark managers overall. When they are too far from where I use them, they turn into a list of links I never read.

In Arc, I'd organize links in dedicated workspaces for each project (personal or work). So whenever I work on a specific project, I'd open that workspace and have all the necessary links right there. For example, I tend to check Product Hunt often, and I have a dedicated workspace where I'd store products organized by my personal use cases. So next time I'm looking for a tool for something, I'd just open that workspace and search


I built a simple bookmarks app for myself and others, which sends you a weekly recap of what you saved and finds things from your archive you might want to revisit. Would love your feedback: https://apps.apple.com/us/app/eyeball-bookmarks-assistant/id...

I use Obsidian (other note-taking apps and editor modes are available) and generally write at least a sentence about each bookmark. Subject areas get their own notes/bookmarks and I use the available linking and tagging options to try to make the resource more useful and easier to refer to in the future.

Or many other sources. If you're writing about space, you kinda need to cover SpaceX. If you're openly critical of everything the owner says, pretty soon you won't have any sources at SpaceX to give you the insights you need to do your job. I get the impression that the space field is pretty small, so you might not want to burn too many bridges.

Also, mission lengths can cover decades. In this case, it might be best to have a short memory when the story has a long time horizon.


This is even more true when politics has a rather short time horizon. Musk decided to jump into public politics at a time when the nation is substantially more divided and radicalized than it's been in living memory for most of us, to say nothing of being fueled by a media that's descended into nothing but endless hyper partisan yellow journalism. It's not really a surprise that things didn't work out great. But as the 'affected' move on to new people and new controversies, perspectives will moderate and normalize over time.

And, with any luck, Elon can get back to what he does well and we can get men back on the Moon and then on Mars in the not so distant future.


All of Musk's political nonsense, social media theatrics, etc., aside, if SpaceX performs over the next decade or two the way one would hope, he'll be remembered for centuries because of that. Tesla, X, his political dalliances, will fade to obscurity compared to that.

Elon just needs a wild party to blow off steam.

