
I guess it was only a matter of time...

Part of this is fair since there is a cost to operating the control plane.

One way around this is to go back to using check runs. I imagine a third party could handle the webhooks, parse .github/workflows/example.yml, execute the workflow via https://github.com/nektos/act (or similar), and post the result back.
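
A rough sketch of that flow, assuming a local Docker daemon for act; OWNER/REPO and $SHA are placeholders, and real check runs need a GitHub App token, so this falls back to commit statuses:

  # Run the workflow locally with act
  act push -W .github/workflows/example.yml

  # Report the result back via the commit status API.
  # (Creating proper check runs requires a GitHub App installation token.)
  gh api repos/OWNER/REPO/statuses/$SHA \
    -f state=success \
    -f context=third-party-ci \
    -f description="Ran via act"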


inb4 "our webhooks are now 2c per call"

I've been using this for a ~year now and it works very well. Thanks!



Working on decently cool things with relatively limited bureaucracy.


You can do that at established companies. If the cool thing comes to an end, you'll often have a boring job you can stay in while you find something else.


I've been patiently waiting to convert my ZFS array to bcachefs. I'm very excited about better SSD promotion logic. But I'm not willing to spend any time on an experimental filesystem on my main systems.

> But you can expect to get flamed about running Debian, or perhaps more accurately, not being willing to spearhead Kent's crusade against Debian's Rust packaging policies.

It is quite unfortunate that Kent couldn't have just said "Debian isn't supported, we will revisit this when bcachefs is more stable" and stopped talking after that. Debian and experimental software just don't work well together.


Oh, the author's completely misrepresenting what happened here.

We had a major snafu with Debian, where a maintainer volunteered to package bcachefs-tools (this was not something I ever asked for!), and I explained that Debian policy on Rust dependencies would cause problems, and asked him not to do that.

But he did debundle, and then down the road he broke the build (by debundling bindgen and ignoring the minimum version we'd specified), and then _sat on it for months_, without even reporting the issue.

So Debian users weren't getting updates, and that meant they didn't get a critical bugfix that fixed passing of mount options.

Then a bunch of Debian users weren't able to mount in degraded mode when a drive died. And guess who was fielding bug reports?

That was when I insisted that if bcachefs-tools is packaged for Debian, dependencies don't get debundled.

If you're going to create a mess and drop it in my lap, and it results in users not able to access their filesystem, don't go around bitching about being asked to not repeat that.


Yeah just typical Debian stuff. jwz has been ranting about this for years. It's not worth spending any time on it.

Some suggestions:

- Only "supporting" the latest mainline kernel and latest tools. I prefer to point to CI system configurations to show exactly what it "supported"

- Make this clear via your website and a pinned issue on Github.

- Force users to report the versions they use via an issue template: https://docs.github.com/en/communities/using-templates-to-en.... Immediately close any issues not meeting your version/system requirements without further discussion or thought.
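
Something along these lines, assuming GitHub's issue forms syntax; the field names and wording are placeholders:

  # .github/ISSUE_TEMPLATE/bug.yml -- hypothetical issue form
  name: Bug report
  description: Bugs are only accepted against the latest mainline kernel and latest tools
  body:
    - type: input
      id: kernel-version
      attributes:
        label: Kernel version (uname -r)
      validations:
        required: true
    - type: input
      id: tools-version
      attributes:
        label: Tools version
      validations:
        required: true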


That last one’s great advice. I don’t remember if you can use checkboxes there and I’m too lazy to look at the moment, but I could imagine the first question being:

  [ ] I am using Debian packages
and auto-closing if set.
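
Issue forms do support checkboxes, and a small workflow could handle the auto-close. A sketch, assuming the checkbox wording above (workflow name, comment text, etc. are placeholders):

  # .github/workflows/close-debian-issues.yml -- hypothetical sketch
  name: Auto-close Debian package reports
  on:
    issues:
      types: [opened]
  permissions:
    issues: write
  jobs:
    close:
      runs-on: ubuntu-latest
      # Checkbox text must match the issue template exactly ([x] when checked).
      if: contains(github.event.issue.body, '[x] I am using Debian packages')
      steps:
        - run: gh issue close "$NUMBER" --comment "Debian packages are not supported."
          env:
            GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
            GH_REPO: ${{ github.repository }}
            NUMBER: ${{ github.event.issue.number }}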


Do you ever admit you're wrong?


I think I did once back in 2002.


I seem to recall a previous fs creator with ego problems was tried and convicted of murder, and then his work unceremoniously disappeared into an oubliette.

I’m 99% sure you’re joking but as an outsider I have… concerns.


It does help to have a sense of humor :)


That was a good one! Keep up your humor. It's a tough environment out there.


Genuinely curious: it seems like you're making a remark about his character, right? But why? Just fed up? Or did he actually state something wrong in the parent comment?


I've been running bcachefs on my spare dedicated SteamOS gaming machine for fun. Especially for the SSD promotion logic. It's a spare computer with an old 128GB SSD and 3TB HDD that I've got as a single filesystem. I love not having to manage games between the SSD/HDD. Too bad it's a mITX build with no space for more old drives I could stick in.
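
For the curious, this is roughly how such a two-device filesystem gets created; the device paths are placeholders and the option names are taken from the bcachefs docs, so double-check against your version:

  # One filesystem across an SSD and an HDD, with the SSD acting as the
  # foreground/promote (cache) device. Device paths are placeholders.
  bcachefs format \
    --label=ssd.ssd1 /dev/nvme0n1 \
    --label=hdd.hdd1 /dev/sda \
    --foreground_target=ssd \
    --promote_target=ssd \
    --background_target=hdd

  # Mount by listing all member devices, separated by colons.
  mount -t bcachefs /dev/nvme0n1:/dev/sda /mnt/games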


Here's a quick example I put together on how to use these runners to accelerate docker builds: https://github.com/gartnera/actions-arm64-native-example
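
The linked repo has the real workflow; as a rough sketch of the idea (the runner label and action versions here are assumptions), building natively on an arm64 runner skips the usual QEMU emulation step:

  # Hypothetical sketch -- see the linked repo for the actual setup.
  name: Build arm64 image natively
  on: [push]
  jobs:
    build:
      # GitHub-hosted arm64 runner label; adjust to what your plan offers.
      runs-on: ubuntu-24.04-arm
      steps:
        - uses: actions/checkout@v4
        - uses: docker/setup-buildx-action@v3
        - uses: docker/build-push-action@v6
          with:
            platforms: linux/arm64
            push: false
            tags: example/app:arm64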


amazing, exactly what I was looking for. thank you


There are attestations that the binaries were built via CI:

https://userdocs.github.io/qbittorrent-nox-static/artifact-a...

Here's a verification of the latest build:

  gh attestation verify x86_64-qbittorrent-nox -o userdocs
  Loaded digest sha256:af9ceace3fea1418ebd98cd7a8a7b6e06c3d8d768f4d9c9ce0d3e9a3256f40d9 for file://x86_64-qbittorrent-nox
  Loaded 1 attestation from GitHub API
   Verification succeeded!

  sha256:af9ceace3fea1418ebd98cd7a8a7b6e06c3d8d768f4d9c9ce0d3e9a3256f40d9 was attested by:
  REPO                             PREDICATE_TYPE                  WORKFLOW
  userdocs/qbittorrent-nox-static  https://slsa.dev/provenance/v1  .github/workflows/matrix_multi_build_and_release_qbt_workflow_files.yml@refs/heads/master


Spending all their security time/budget on box checking rather than actual security.

I'd rather see open ended red team pentest reports.


This is my complaint with "cyber insurance". Companies spending money on insurance premiums and checklists for the insurance company rather than spending money on security.


Yep. My experience as well. Once a place starts doing useless box checking stuff like SOC2 it’s time to find a new job or switch vendors.

Positive indicators would be talking to employees and getting a sense of the organization's clue level. There are no shortcuts here that I've ever found beyond doing this sort of old-fashioned "know your vendor" work.


It seems most if not all Google domains are HSTS preloaded, so no, you can't: https://hstspreload.org/?domain=script.google.com


Yep, I'm getting 503s. But the status page says nothing is wrong: https://www.dockerstatus.com/

Update: I configured the GCP Docker Hub mirror and can pull images again: https://cloud.google.com/artifact-registry/docs/pull-cached-...
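
For anyone else hitting this, the change is a one-liner in the daemon config (paths assume a standard Linux Docker install):

  # /etc/docker/daemon.json -- merge with any existing settings
  {
    "registry-mirrors": ["https://mirror.gcr.io"]
  }

  # then restart the daemon so it picks up the mirror
  sudo systemctl restart docker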


Same here. 503 when we try to pull an image during our deployment pipeline.


"Full Service Disruption" for Docker Hub Registry acknowledged at the time of writing


Same here - checked from multiple locations.


as of 22:45 UTC (~30 minutes ago) they marked the status green with "Issue is mitigated, monitoring the situation"

...but I'm still getting 503 responses on my dev box as well as when I run `curl https://registry.docker.io/` on 2 CI boxes in 2 different datacenters.

