I store my ssh key in 1Password and use the 1Password ssh agent. The agent asks for access to the key(s) with Touch ID, either for each access or once per session, etc. One can also whitelist programs, but I think all of that reduces the security.
There is the FIDO feature, which means you don’t need to hassle with gpg at all. You can even use an ssh key as a signing key to add another layer of security on the GitHub side by only allowing signed commits.
This story reminds me of a similar issue people love to solve with the same idea: software builds. The “can’t we have a simple makefile, or worse, just a shell script to build?” line of thinking.
And just like described in the post, it starts the same. A simple script wrapper. No tasks, no task dependencies. Then over time you need to build a library containing the core part of the software, shared between different projects. You need to publish to different platforms. Shell scripts suddenly become harder to use on Windows. You need to build for different architectures and have to integrate platform-specific libraries.
You can build your simple make/shell file around all that. But it ain’t so simple anymore.
For 80% of use cases, you have homogeneous build commands that are the same across projects (such as a makefile with build, clean, test, etc.). These call the real (complex) build system underneath to actually perform the action. You shouldn't need to type more than 15 keys to make it do common things (and you CERTAINLY shouldn't need to use ANY command line switches).
Then for the other 20% of (complex) use cases, you call the underlying build system directly, and have a document describing how the build system works and how to set up the dev environment (preferably with "make dev-env"). Maybe for self-bootstrapping ecosystems like Rust or Go this isn't such a big deal, but for C/C++, Python, Node, Java, or Mono it quickly becomes too bespoke and fiddly.
Then you include tests for those makefile-level commands to make sure they actually work.
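A minimal sketch of that wrapper idea, assuming a hypothetical Gradle project underneath (the delegated commands and the setup script path are illustrative); the target names are the stable interface, and what runs behind them can change freely:

```make
.PHONY: build test clean dev-env

# Short, memorable interface; delegates to the real build system.
build:
	./gradlew assemble

test:
	./gradlew check

clean:
	./gradlew clean

# One command to set up a working dev environment (tool versions, hooks, ...).
dev-env:
	./scripts/setup-dev-env.sh
```

The point is that `make build` and `make test` mean the same thing in every repo, even when one repo uses Gradle and the next uses Cargo or CMake.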
There's nothing worse than having to figure out (or remember) the magical incantation necessary to build/run some project among the 500 repos in 15 languages at a company, waiting for the repo owner to get back to you on why "./gradlew compileAndRun" and "./gradlew buildAndRun" and "./gradlew devbuild" don't work - only to have them say "Oh, you just use ./gradlew -Pjava.version=11 -Dconfig.file=config/dev-use-this-one-instead.conf -Dskipdeploy buildAndDeploy - oh and make sure ImageMagick and Pandoc are installed. They're only used by the reports generator, but buildAndDeploy will error out without them". Wastes a ton of time.
Yes. In the case of Gradle, I map all project specifics onto the well-known lifecycle tasks: check, assemble, and in some cases publish.
Some projects are more complicated, specifically when you can’t really apply the rule of one project, one artifact. See Android with APK vs. bundle.
Here you may need more specific tasks. But I try to bind CI (be it Jenkins or GitHub Actions) to only the basic interface.
But I meant specifically the belief that build systems and the tooling around them are too complicated and unnecessary.
Ah yes. Unfortunately the complexity is necessary in modern codebases. There are usually ways to simplify, but only to a point - after that all you're doing is smearing the complexity around rather than containing it.
I think this is true for nearly all compiled languages. I had the same fun with Rust, OpenSSL, and glibc. OP didn’t mention the fun with glibc when compiling on a fairly recent distro and trying to run the binary on an older one. There is the manylinux project, which provides Docker images with a minimum glibc version installed, so binaries built there stay compatible with newer systems.
The switch to a newer OpenSSL version on Debian/Ubuntu created some issues for my tool. I replaced it with rustls to remove the dynamically linked library. I’d prefer completely statically linked binaries, though. But that is really hard to do and damn near impossible on Apple systems.
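The rustls swap is often just a feature-flag change in Cargo.toml. A sketch assuming a crate that uses reqwest (crate choice and version are illustrative, not from the comment above):

```toml
[dependencies]
# Disable the default native-tls (OpenSSL) backend and use rustls instead,
# removing the dynamic dependency on the system's libssl.
reqwest = { version = "0.12", default-features = false, features = ["rustls-tls"] }
```

For a fully static Linux binary you’d additionally build against musl, e.g. `cargo build --release --target x86_64-unknown-linux-musl`, which also sidesteps the glibc versioning problem.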
I think HDMI connectors popped up around the same time screens switched from 16:10 (VESA-compatible at the time) to 16:9, which was more cost-effective for the manufacturers. But I’m not sure why. I looked at graphics cards and wondered why HDMI suddenly gained traction in the PC space even after the release of DisplayPort. I think this should never have happened.
Paired with a long-lived GitHub access token that had more access than needed for this operation. GitHub Actions has features for short-lived tokens that are not stored in static action secrets. I’m not quite sure why a bot user was actually needed here. Then there is the simple fact that lots of developers over-provision their environments. Every session hosts hundreds of env variables for all kinds of things, from Docker to GitHub tokens, etc.
We started to OIDC all the things in Jenkins and GitHub Actions, guarding secrets so they are accessible only by certain repos and branches within them. But the more you lock that down, the more flexibility you lose. Or you need even more automation to help with access management.
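On the GitHub Actions side, the OIDC flow needs little more than a permissions block and a credentials action; a minimal sketch assuming AWS as the target cloud (the role ARN and region are placeholders):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # allow the job to request an OIDC token
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          # IAM role whose trust policy restricts assumption to a
          # specific repo/branch via the token's "sub" claim.
          role-to-assume: arn:aws:iam::123456789012:role/example-deploy-role
          aws-region: eu-central-1
```

The repo/branch restriction lives in the IAM role’s trust policy, not in the workflow, so no long-lived secret ever sits in the repo.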
I agree partly. I love cargo and can’t understand why certain things like package namespaces and proof of ownership aren’t added at a minimum. I was mega annoyed when I had to move all our Java packages from JCenter, which was a mega easy set-up-and-forget affair, to Maven Central. There I suddenly needed to register a group name (a namespace, mostly a reverse domain) and prove ownership with a DNS entry. Then all packages have to be signed, etc. In the end it was, for its time, way ahead. I know that these measures won’t help in all cases. But the fact that at least on npm it was possible that someone else could grab a package ID after an author pulled their packages is kind of alarming. Dependency confusion attacks are still possible on cargo because the whole - vs _ delimiter question wasn’t settled in the beginning.
But I don’t want to go away from package managers or easy to use/sharable packages either.
> But the fact that at least on npm it was possible that someone else could grab a package ID after an author pulled their packages is kind of alarming.
Since your comment starts with commentary on crates.io, I'll note that this has never been possible on crates.io.
> Dependency confusion attacks are still possible on cargo because the whole - vs _ delimiter question wasn’t settled in the beginning.
I don't think this has ever been true. AFAIK crates.io has always prevented registering two different crates whose names differ only in the use of dashes vs underscores.
Didn’t know this term. After reading up on it, I wonder why short-lived tokens get this moniker. But yeah, I prefer OIDC over token-based access as well. The only small downside I see is the setup needed for a custom OIDC provider. I don’t know the right terms off the top of my head, but we had quite the fun registering our internal Jenkins as a provider that can create valid OIDC tokens for AWS. GitHub and GitHub Actions come with batteries included. I mean the downside that a huge vendor can easily provide this while a custom-rolled CI needs extra steps/infrastructure.
OK, I wished for this kind of feature for years. I started using a Yubikey with an ssh key via the gpg ssh-agent in 2018 or 2019. When resident ssh keys came around, I switched over to FIDO2-based keys on my Yubikey. The main issue with both was that the default ssh setup wasn’t working anymore. One needs extra configs and more commands to get to the public key, etc. Yubikeys are great but block a USB port. And then there is the age-old question for me: one ssh key per user for all services? One key per machine for all services? Or one key per service?
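For reference, the resident-key flow mentioned above looks roughly like this with a recent OpenSSH and a FIDO2 security key plugged in (the filename is just an example; both commands prompt for the key's PIN and a touch, so they only work interactively with the hardware present):

```
# create a discoverable ("resident") key stored on the security key itself
ssh-keygen -t ed25519-sk -O resident -O verify-required -f ~/.ssh/id_ed25519_sk

# later, on a new machine: recover the key handles and public keys
# directly from the token, no dotfiles sync needed
ssh-keygen -K
```

The second command is what removes the "extra commands to get to the public key" pain when moving between machines.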
This year I started to play around with the 1Password ssh-agent feature (Bitwarden has it as well, as far as I know).
If you're OK with all your keys being listed in the agent, this works pretty much out of the box.
I never liked that the default recommended way to use ssh is an agent that just holds multiple keys, which can be tried one after another and which, in most cases, stay unlocked after first use for the rest of the session.
I configured things so that I explicitly use one key for one specific service. But that is sadly extra configuration, etc.
I believe that in 1Password you can now define/preselect a key per host. So you can pin key -> host. Some hosts have firewall rules that will block after X attempts, where X might be low.
However the agent still has access to all your keys, obviously.
Ah cool. I worked around it by storing the public keys in my dotfiles repo and using the IdentityFile ssh config option for each host. Great if I don’t have to do this anymore.
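That workaround looks roughly like this in ~/.ssh/config (host and path are examples); with an agent, pointing IdentityFile at the public key is enough for ssh to select the matching agent key, and IdentitiesOnly stops it from offering every other key first:

```
Host example.com
    IdentityFile ~/.ssh/example_com.pub
    IdentitiesOnly yes
```

This also avoids tripping the "block after X failed attempts" firewall rules mentioned above, since only the one intended key is ever tried.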
Next level config madness: Use different ssh keys per GitHub org ;).
It is interesting to me that something like this can have such a high value. It speaks to our shared global cultural connection when it comes to items like these. For what purpose other than saying “I have a …” would you buy this? Or is it the belief that the price only goes up, so it gets bought as an investment? I mean specifically this item at this high price.
I ask because I think the price is only this high if the item in question is still culturally relevant. So I assume you buy it and start shadow-producing new Superman projects :)
> I ask because I think the price is only this high if the item in question is still culturally relevant.
Les Poseuses, Ensemble by Georges Seurat sold for $149M. Very few people have heard of it, care about it, or even like it, considering it's pointillism, of which no one buys modern versions. The world of art and collectables is entirely rich people speculating that the price (not value) will go up in the future.
Ah damn. I forgot to factor in the whole world of art collecting, which of course this item belongs to as well. It still baffles me how we humans can put such high prices on some items.
Dunno why I can't reply to your other comment explaining what you mean, but hot damn. False valuation of a cheap painting to save on taxes? That's mental.
Different tax loopholes depending on region etc, but basically like this:
I’m a billionaire earning $100M this year.
I owe $40M as taxes for that. (Too much!)
I find a dumb banana painting by a starving artist.
I buy it from him for $1000.
I wait 6 months.
I go to a museum to get it appraised by “professionals”.
I pay the professional appraiser’s wife $50K as a gift.
The appraiser says the painting is now worth $30M!
Wow that’s awesome, I have such a keen eye for art.
You know what, I’m gonna donate this painting to a museum instead because I’m such a patron of art and culture.
Oh, look at that, I get a tax rebate for the value of my donated painting ($30M)
Now I only have to pay $40M - $30M = $10M in taxes on my $100M income.
There’s more nuance to it in practice, but that’s the gist of it.
-----
Edit: For some reason I can't reply to the comments below so I'm gonna do it here.
> That wouldn't explain the price here, since in your scam the whole idea is to buy cheap and donate dear. not buy for 139M
Now we're getting into the details, but it's very suspicious for an appraiser to value a work of art from an unknown artist at millions. It's not that suspicious if they take Van Gogh's Starry Night, previously appraised at $500M, and now value it at $1B. This way the deca-billionaire still gets to save on his taxes while the appraiser avoids suspicion.
> As far as I know, that's not how taxes work. You can't get a rebate for the amount of taxes you would have paid, you can get a deduction for the amount of money you made.
There are a lot of loopholes in the complicated tax system for the ultra-wealthy, not for us. This video (still a simple explanation in an animated way) covers a few more of them: https://www.youtube.com/watch?v=dHy07B-UHkE
As far as I know, that's not how taxes work. You can't get a rebate for the amount of taxes you would have paid, you can get a deduction for the amount of money you made.
So:
You made $100M owe $40M in taxes.
Your painting is worth $30M! You have such a keen eye for art.
Now you made $130M and owe $50M in taxes.
You donate the painting, you're back at having made $100M and owing $40M.
Otherwise we'd all choose not to pay tax and donate our tax money to charitable institutions instead.
I’m pretty sure he’s right about how taxes work. There’s no moment where the value of the painting is realized, but you are allowed to deduct the FMV if you make enough and if the donation goes to the charity’s exempt use (which it will if it’s a museum or whatever).
So if you buy a painting for a dollar and wait a year, then next year you make $3M and the painting is now worth $1M; if you donate it, your AGI is reduced to $3M − min($1M, 30% of income) = $3M − $900K = $2.1M.
You don’t count the appreciation of the painting as income. You don’t even count it as LTCG if you don’t sell it.
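The arithmetic in that example can be sketched as follows (a simplification that assumes the 30%-of-AGI limit for appreciated property applies; real US charitable-deduction rules have more cases):

```python
def agi_after_donation(income: float, painting_fmv: float) -> float:
    """AGI after donating appreciated property, with the deduction
    capped at 30% of income (fair-market-value donation rule)."""
    deduction = min(painting_fmv, 0.30 * income)
    return income - deduction

# $3M income, painting appraised at $1M: the cap limits the deduction to $900K
print(agi_after_donation(3_000_000, 1_000_000))  # 2100000.0
```

Note the appreciation itself ($1 purchase price to $1M appraisal) never shows up as income anywhere in the calculation, which is the whole trick.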
I think it also applies to stock option awards. When the startup I was at was acquired some people were talking about it.
Yes, there are lots of “loopholes” available if you are willing to commit tax fraud! But that’s something anyone can do, it’s not particularly harder to lie about the value of charitable donations if you’re not ultra-wealthy.
Correct. Fraud is fraud, loopholes are loopholes. One is legal, the other is not.
Or put another way - a loophole in law/regulations is found, then the law/regulation gets changed to close the loophole. If it were not legal this change would not be necessary - you would just prosecute.
The price is determined by the depths of pockets of buyers. High price for such items means only that too many stupid people have too much money in our time.
"only for thing people would legitimately like to have."
Whilst that may be true for the most part, much of the art dealt nowadays is never displayed, just stored somewhere incredibly tax-efficient until its value has gone up enough to warrant selling.
In that respect I suspect it is much the same as Bored Apes. The price can go up while there are people with funds to put into things they don't care about. When the time comes that they have less money than the cost of things important to them, the 'value' can swiftly evaporate.