mbakke's comments | Hacker News

For those who resonate with "why might this be useful", here are "plain git" alternatives to this tool:

> searching for a function I deleted

    git log -G someFunc
> quickly looking at a file on another branch to copy a line from it

I use `git worktree` to "mount" long-running branches, to much the same effect as Julia's tool. To quickly look at a file from a non-mounted branch/commit, I use:

    git show $REF:$FILENAME
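The worktree and `git show` tricks above can be sketched end to end in a throwaway repo (the branch and file names here are made up for the demo):

```shell
# Throwaway-repo sketch of "mounting" a branch with git worktree and of
# reading a file straight from a ref (branch/file names are made up).
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
echo 'main version' > notes.txt
git add notes.txt && git commit -qm 'initial commit'
git branch feature

# Mount 'feature' as a second working directory next to this one:
git worktree add -q ../feature-checkout feature
cat ../feature-checkout/notes.txt

# Or skip the mount entirely and read the file from the ref directly:
git show feature:notes.txt
```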
> searching every branch for a function

    git log --all -G someFunc
Note that -G can be replaced with -F for a speedup if the pattern you are searching for is a fixed string.
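The deleted-function search can likewise be reproduced in a throwaway repo (file and function names are invented for the demo):

```shell
# Throwaway-repo sketch of the pickaxe search above.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'def someFunc():\n    pass\n' > code.py
git add code.py && git commit -qm 'add someFunc'
: > code.py
git commit -qam 'delete someFunc'

# Every commit whose diff touches a line matching someFunc -- including
# the one that deleted it:
git log --oneline -G someFunc

# The same search across every branch, not just the current one:
git log --all --oneline -G someFunc
```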


> searching for a function I deleted

> git log -G someFunc

This will look for all changes mentioning someFunc throughout the history of the project.

Usually -S is more valuable, as it will look for changes in occurrence counts. So if you moved a call in a commit -G will flag it, but -S will ignore it (+1-1 = 0).

-S also defaults to fixed string, so no need for -F. Instead you need --pickaxe-regex to switch it to regex search.
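The -G vs -S distinction can be demonstrated in a throwaway repo (names invented): a commit that merely moves a call shows up under -G but not under -S, because the occurrence count is unchanged.

```shell
# Throwaway-repo sketch: -G matches any diff line containing the pattern;
# -S only triggers when the number of occurrences changes.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'someFunc()\nother()\n' > code.py
git add code.py && git commit -qm 'add someFunc call'

# Move the call without adding or removing occurrences (+1-1 = 0):
printf 'other()\nsomeFunc()\n' > code.py
git commit -qam 'move someFunc call'

git log --oneline -G someFunc   # lists both commits
git log --oneline -S someFunc   # lists only the commit that added the call
```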


Magit has a really easy to use way to "step" through previous versions of files. It's usually bound to something like "C-f p". You get a read only buffer of the previous version open in the best text editor (emacs). You can then press n and p to step through next and previous versions of that file. Can be pretty useful!

It's kind of funny, I think, how most git users don't seem to know how to access any version other than the current one. So many people think of it simply as the annoying tool you have to use to make code changes but don't really know what version control is.


That’s a pretty cool feature of Magit.

I was inspired to look for something similar for the next best text editor (vim) and came across this: https://salferrarello.com/using-vim-view-git-commits/

    git log | vim -R -
Placing your cursor over a commit hash and entering K displays git show for that commit.


I've been told by my elders that when a vim user encounters an emacs supremacist, they must fight back. You can't just call it "the next best text editor".

Jokes aside, as a vim user of 6 years, I did learn just enough emacs for magit (TM) and have also been making quick bucks on the side teaching it to my friends, so I guess I can't help with the "fight back" part :-)


If you're not using neovim you're really missing out right now IMO. It's a renaissance for a hackable text editor because it uses a sensible modern programming language (Lua) rather than vimscript (yikes) or elisp (eh).


Having worked with Lua and Elisp extensively, they have very different pros/cons profiles. In my experience, Lua is great as a scripting language - tiny, speedy, and completely dynamic. For "programming in the large(r)," Lua is just a little better than early JavaScript (i.e., tragic). The purity of the design - tables, metatables, closures, coroutines, and that's it - necessitates reinvention of tens of wheels (either in Lua or in the host app) when your codebase grows and complexity increases. Elisp provides two orders of magnitude more "bells and whistles" than Lua out of the box. Additionally, while Lua's primitives are extremely powerful, they are all strictly run-time constructs. Elisp has macros, so many abstractions can be (and are) shifted to compile time.

I use Awesome WM. It's essentially an "Emacs of Window Managers," and the codebase is very well written, with a small C core and everything else implemented in Lua. It's even very well documented. Yet, writing a nontrivial program (call it an "applet" or something) for Awesome is a nightmare compared to doing the same in Emacs.

LuaJIT is an excellent runtime, and Lua is a great IR, but writing it by hand for anything that's not strictly scripting within a previously established framework is challenging. It's to the point where I'm using Haxe to produce Lua for my Awesome scripts. I know a few people who use Haxe to script NeoVim, too. Really, having to reinvent inheritance and method resolution order every time you start writing Lua in a new project gets old fast.

I genuinely like Lua as a language - the same way I like Tcl, Scheme, and Io. They are all beautiful and powerful and perform very well in some scenarios. Elisp is ugly in comparison, but it's way more practical for medium-sized codebases. Being tied to Emacs is a considerable downside which limits its applicability, but focusing on language features alone, larger codebases are more practical to write in Elisp than in Lua. Plus, there's an escape hatch - Common Lisp or Clojure, pick your poison - for cases where Elisp actually doesn't cut it. There's no such easy way out for Lua.


> There's no such easy way out for Lua.

I think there are some Lisp/Clojure-inspired languages developed for Neovim that compile to Lua.

https://github.com/Olical/conjure

https://github.com/Olical/aniseed

https://github.com/Olical/nfnl


You may be missing the point of Elisp. Elisp isn't an "extension language". It's the language Emacs is built with. When you run Emacs you're actually running a Lisp interpreter with a load of text editing features pre-loaded. When you eval some Lisp you're modifying the runtime, essentially live patching your editor in real time. So really any comparisons with Elisp are irrelevant unless you can do what it can do. Common Lisp and Scheme (Guile) are real contenders but the challenge is not giving up the enormous amount of useful code that is already written in Elisp.


Does the elisp interpreter that runs emacs have a JIT?


Only with v28, a year or two ago.


That's not a JIT. It uses `libgccjit` (IIRC the name), but the native code is produced ahead-of-time. JITs compile using info available on runtime, and native-comp doesn't do that. LuaJIT, by contrast, is a "real" JIT. Still, native-comp does speed things up considerably.


I'm a big lisp fan. I know about all of this, I used emacs for maybe a decade, and I still don't like elisp. I love the hackability of Emacs, but it's OK to dislike the language itself. Disliking semantic choices of the language doesn't mean I'm missing the point either!

And on "it's not just an extensibility language": in my experience this doesn't matter. I get that "well the editor itself is half written in elisp" and so vaguely that is superior, but it is only so in an academic sense.

Expose the primitives for the editor in some API in _any_ langauge and you can basically achieve the same thing anyway, so pick a language that doesn't make me want to poke my eyeballs out with a hot skewer.

Sorry, rant over.


I think if you're just talking about writing extensions/packages (like magit) then the difference is not as big. But for using Emacs it makes a big difference. I can just start hacking on package code by redefining functions etc. and using/testing them straight away. The power of Emacs is not about being able to write extensions (most editors can do that), it's about being able to write tiny little bits of code to change your editing experience as you go. There are specific things in Elisp that make it good for this, like dynamically scoped variables. Writing extensions for other editors is always a "thing", a project. Writing Elisp to change how Emacs works is just using Emacs.


I will concede that you are probably a very different kind of Emacs user than I, since I pretty much exclusively used Elisp to set up and tune my editor to my liking as a totally independent act from actually using my editor to write programs.


For me, even when working on something besides my editor configuration, having access to the “editor primitives” makes for a lot of powerful one-off editing tools. It was relatively easy, for example, for me to use lsp features to get a list of undefined JS variables in the current scope and add them to the function argument list. And, since lsp and the other bits I put together are all in elisp, I could use jump-to-definition to quickly find the internals I need to make the change.


What you're describing is a feature of the system as a whole. You can have the same workflow with any dynamic, reflective environment. All Smalltalks give you the same ability to "jump to definition" of anything, turtles all the way down, and fiddle with those definitions. You could have the same ability in a system written in Lua - you just generally don't, because it requires designing the system as a whole specifically to allow it.


Sure, but I’m not particularly defending emacs lisp (would prefer Common Lisp). It’s not exactly true that any language can have it, though: the language has to be designed to handle redefinition correctly.


> the language has to be designed to handle redefinition correctly.

I believe it's more a question of how easy the redefinition is to implement. You can live-update a running Java or C program; it's just less convenient/harder to pull off than with Forth, Smalltalk, Lisp, Prolog, and the like. So I think that yes, in principle, every language could have it - it's just that you'd need a huge pile of hacks for some and a few simple instructions for others to get it.


I think it’s basically impossible to safely live-patch part of a compilation unit in most programming languages: you’d have to account for inlining and other optimizations to do this correctly. You _can_ patch at linkage seams and other places, but this is a fraction of the sorts of redefinitions that you get easily in systems designed for it. (And I’ve spent a lot of time trying to make various programming languages more Lispy so I can get stuff done: you always discover there are static presumptions that make it impossible to get the full experience)


> I think it’s basically impossible to safely live-patch part of a compilation unit in most programming languages

There's no argument there; you're right. That's why V and Nim, for example, put reloadable things in a separate compilation unit and handle some things (global state at least) specially upon reload (if I understand what they do correctly.)

My point was that you can get quite close (sometimes with a massive pile of hacks and/or developer inconvenience), not that you can get the full experience (as in Smalltalk or Lisp) everywhere. Especially since the reloading being convenient is a large part of the experience, I think.


Or I could use magit inside the best implementation of the vi standard.


My bread and butter is jumping via blame, though I don’t really like emacs’ blame view so I generally use intellij or git gui.

e.g. see something odd / interesting, activate blame view, and “show diff” / “annotate previous revision” to see how the hunk evolved. Often this provides fast-tracking through the file’s history as the hunk goes through inconsequential changes (e.g. reformatting, minor updates) without changing the gist of what sparked your interest.


vc-annotate in emacs is my favorite blame view I’ve seen, if you haven’t tried it.


Similarly with fugitive in vim, which is fantastic. Diffing, resolving conflicts, and moving through file revisions (and a lot more).


If you’re a Vim user, fugitive by tpope is a great tool


Extremely handy, saving these. Thank you.


They were also tapping the private network links of the Norwegian oil company Equinor (formerly Statoil) according to the original leaks.

It's kind of odd that neither of these oil giants has put pressure on the U.S. government as a result. They are about the only "victims" big enough to pursue the case legally.

I suspect a Supreme Court case is just about the only thing that can bring some of the remaining documents to light. Anyone with access today is almost certainly under some gag order.


I have a similar workflow, but make a bunch of commits with "git commit --fixup HEAD". (Or --squash REF).

Then on the final rebase the commits are automatically ordered with s and f as appropriate.

Although I do a fair bit of amending, too.
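The --fixup workflow described above can be sketched in a throwaway repo (commit messages and file names are made up; GIT_SEQUENCE_EDITOR=true simply accepts the rebase plan that --autosquash generates):

```shell
# Throwaway-repo sketch of the --fixup / --autosquash flow.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > file.txt && git add file.txt && git commit -qm 'base'
echo v1 > file.txt && git commit -qam 'real change'

# A follow-up tweak, recorded as "fixup! real change":
echo v2 > file.txt && git commit -qa --fixup HEAD

# On the final rebase, --autosquash reorders and squashes the fixup
# commit into its target automatically:
GIT_SEQUENCE_EDITOR=true git rebase -q -i --autosquash HEAD~2

git log --oneline   # two commits: 'base' and 'real change' (now with v2)
```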


git rerere only "automates" conflict solving after you already solved it. As in, it remembers previous merge resolutions, even if you undo the merge/rebase.

It is particularly useful when doing difficult merges regularly. Invariably I'll find a mistake in the merge and start over (before pushing, obviously); the second "git merge" remembers the previous resolutions so I don't have to solve all the same conflicts again.

Similar for difficult rebases that may need multiple attempts.

Git remembers resolutions across branches and commits, so in the rare case where (say) a conflict was solved during a cherry-pick, rerere will automatically apply the same resolution for a merge with the same conflict.

I think the reason it's not on by default is that the UI is confusing: when rerere solves for you, git still says there is a conflict in the file and you have to "git add" them manually. There is no way of seeing the resolutions, or even the original conflicts, and no hint that rerere fixed it for you.

You just get a bunch of files with purported conflicts, yet no ==== markers. Have fun with that one if you forget that rerere was enabled.
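That record-and-replay behavior can be reproduced in a throwaway repo (branch and file names invented for the demo):

```shell
# Throwaway-repo sketch of rerere replaying a recorded resolution.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git config rerere.enabled true
echo base > file.txt && git add file.txt && git commit -qm 'base'

git checkout -qb side
echo side-version > file.txt && git commit -qam 'side change'
git checkout -q -
echo main-version > file.txt && git commit -qam 'main change'

git merge side || true             # conflict; rerere records the preimage
echo resolved > file.txt           # hand-resolve the conflict
git add file.txt && git commit -qm 'merge side'   # resolution recorded

git reset -q --hard HEAD~1         # throw the merge away...
git merge side || true             # ...redo it: rerere reuses the resolution
cat file.txt                       # "resolved", no conflict markers -- but
                                   # the file stays unmerged until you git add
```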


> when rerere solves for you, git still says there is a conflict in the file and you have to "git add" them manually.

been using rerere for years, never seen this behavior.


The only "enterprise" feature of RHEL is Red Hat support.

There is nothing to standardise. If anything we need more commercially backed distributions to tear down the monoculture.


Rhel does things to their distribution that customers do not want. This is well-known.

Two shining examples are the removal of older SAS controllers in the kernels of newer versions, and the fate of btrfs.

OpenELA could easily grow in influence to a point where rhel will lose customers if they try it again.

It will be interesting if such decisions push effective control of rhel to OpenELA. It could easily happen.


Redhat doesn't ship btrfs because they don't have any engineers working on it and thus can't provide support for it. Anything that ships in RHEL comes with a strong support guarantee that they just couldn't hold up for btrfs.

The SAS controllers may be a similar thing, maybe they don't have the hardware anymore to regression test against.


Amazing how well btrfs works in Fedora.

The first thing to do with a rhel install to restore complete functionality is to remove the stock kernel.

With extreme prejudice.


What support "guarantees" does Fedora come with?

The two are not contradictory. Fedora's btrfs functionality can absolutely be exceptional - nothing about that means that Red Hat has engineers working on or able to effectively support it.

Disclaimer: I am an ex-Red Hat employee, though I was nowhere near RHEL.


If there is no value in the technology within it, then why grasp it? Let it go, and cut the Fedora/rhel tie.

Conversely, there is no value in the limitations imposed upon rhel. OpenELA gives us a voice in letting them go.


rather they don't have any customers demanding it. if they had, hiring engineers to support that should not be the problem.

incidentally, i never had a chance to figure out what the big deal was with the centos stream change, because i was already forced to switch to debian because btrfs was removed from the centos kernel.


CentOS being in lock-step with RHEL (minus support) also meant that certain certifications that applied to RHEL applied to CentOS as well, which mattered most if you were in the business of selling to the US government or related customers.


> i was already forced to switch to debian because btrfs was removed from the centos kernel.

Funny!


Btrfs was tech preview. There is no support or guarantee that tech previews remain in a release.


And there is no guarantee that we will not use the UEK, because the feature is compelling.


And you have that right. If UEK meets your support needs and gives you what you need and rhel does not, you absolutely should use it.


It's enterprise in that you pay for it, but their support sucks. We had lots of open tickets they never solved. Total waste of money. Instead of fixing the problems they went back and forth with us asking for more log files until they just closed out the ticket because of lack of activity. It was their lack of activity.

At least Canonical/Ubuntu enterprise support stumbles into a solution and helps out.


Really?

It's been a while, but 5 years ago I worked for a company with a Red Hat support contract, and we found them incredibly helpful. Everyone we spoke to had deep Linux knowledge, and their team solved issues and gave us real-world advice much faster and better than any vendor. I'd be sad if that's no longer true.


I remember thinking why don't they just do a WebEx with us to see what's going on. Nope, never. They just wanted us to ship logs to them and try random crap until they closed the ticket. Total waste of time and money.


Because there are legal complications in giving someone remote admin access.


Not if they do it right. Lots of companies did that with us; some required it as part of their contract. We'd give them the ability to control or not, and choose what we were sharing with them. Way more efficient than sending up a tarball full of /var/log. I'd argue THAT was a security and liability issue.


Redhat's customers run across the world and many legal jurisdictions, each with its own set of complications.

The flip side of the coin that I have seen is that some customers will rely heavily on Redhat to do basic admin work that a system admin would/should be doing, effectively handing them a job that carries legal liability for the work provided.

The sosreport tarball makes an active effort not to gather private information. If you believe it does, please let support know, as they are interested in improving this.


Oracle's UEKR4 stopped updating Intel microcode for several months (in the middle of the spectre/meltdown hysteria). After a community forum post, they demanded an open SR so I had the pleasure of explaining all of this again. Even with a support contract, sometimes they are amazingly obtuse.

We dumped support. We are paying enough for the database as it is.


Great article. Real life horror stories of life-critical software gore, with some good news at the end.

It should be illegal to sell software that someone's life depends upon without giving the user the right to inspect and modify the code.


I have an ICD (implanted cardioverter-defibrillator) to save my life if my heart stops.

I was also given a proprietary box that sits at home, reads data from it and sends it to my cardiologist over a cellular network, on demand. As part of periodic remote checkups I'm supposed to sit next to it, press the button, which causes it to read data and send any abnormal heart rhythms it detected (via cellular network), whether it treated it (via a shock, in which case I would have known anyway) or whether the abnormal rhythm resolved itself with no treatment (in which case it's worth it that they check out what it picked up). I have to do this about 2-4 times a year.

Every time I hit the button I'm charged $200. Even if there are ZERO events. 90%+ of the time there are zero events.

There is NO interface provided to me where I can read the data directly. There is no way for me to read the device on my own, see zero events, and inform my cardiologist that there are no events and that there is nothing new to diagnose.

I hate this medical system. The device is great for saving my life but I want access to read its data without being charged.


That's appalling and should be illegal.

I wish more programmers would refuse to contribute to this kind of exploitation.


I work in medical devices and it's extremely hard as a dev to figure out what's because of some regulation and what's just for profit.


If it was illegal he might be dead. If he refused, he could be dead. Is that a better world?


No, if it was illegal he'd have access to his data. I'm not saying medical equipment should be illegal.

And to be clear, I wasn't saying he should have refused treatment. I was saying I wish more programmers would refuse to help develop exploitative software like this.


It might not have even been the programmers of the device that chose to do this. It was very likely some manager somewhere who saw the dollar signs when they realized they could collect rent.


Programmers implemented it though. And they knew exactly what they were doing, too.


I don't think he had a choice.

If you had a good doctor who liked da Vinci robotic surgery, versus another who used the Raven II, would that factor matter more than the reputation of the doctor? Programmers who make life-saving software are good in my opinion, even if the company they work for wants to make money.

I think we should strive for the best features, and also be grateful for "fascist trailblazers". Shockley was known to be an awful boss but our transistors started there and we are better off for it. Body warming methods were created by Nazi scientists experimenting unethically. These are the 2nd step, at least the profiteers show it's doable and the drive for profit made it in the first place.


I would argue that the discoveries would have happened anyway sooner or later even without unethical assholes. And for every example of a step of progress accelerated by them there is an example of a step of progress held back by them.

We do not need the monsters to make progress. Don't try to justify their inexcusable actions in some myopic utilitarian way.


Did you seriously just use the holocaust as an example of successful R&D?


It was successful at least in R part of R&D by any definition of "successful" and "research".


This is nuts. Who charges you? Is it the company that makes these devices? What if you want a different “provider”?


Stanford Healthcare charges me for "general classification" just for a nurse to open up their computer and see that there are zero events.

Boston Scientific, the device maker, does not have an interface for patients, they only send data to hospitals directly.

I'm not currently willing to switch to a different ICD because Boston Scientific's ICD has successfully saved my life 3/3 times in out-of-hospital situations and 2/2 times during in-hospital testing where they induced ventricular fibrillation in controlled testing, and I'd rather not risk trying something different. Insurance wouldn't pay for an extra surgery deemed unnecessary, anyway.

I could switch healthcare providers, but I'm not sure if the others in my area are better at cardiology.


I see you have your hands full, but perhaps a class action lawsuit should be in order.


> Stanford Healthcare charges me for "general classification" just for a nurse to open up their computer and see that there are zero events.

Okay so having access to the data wouldn't change a thing, surely you'd be charged even more if you wanted to talk directly to the cardiologist to do a report yourself, as you said?

> inform my cardiologist that there are no events and that there is nothing new to diagnose


This is giving me feelings similar to the movie Repo Men, where you had to rent life-saving organs and they could come repossess them at any time.


That is genuinely insane


Quality of life-critical software should be ensured by FDA certification. Homebrew modifications of that software, even in the name of “freedom”, risk the patient’s life and health and should be illegal if uncertified.


In the EU (and probably elsewhere), there are strict rules for the stability of power wheelchairs. One such rule is "On an incline of x% (x chosen by the manufacturer), pushing for max speed from a stop should not lift the front wheels."

To achieve that, the max acceleration must be quite low (software controlled), and the whole experience is sluggish, like trying to steer a car by pulling on rubber bands attached to the wheel.

From the moment I found a way to overcome this, I never went back. I know that I can hurt myself if I do something stupid, but I prefer this hypothetical risk instead of cursing 100 times a day because I cannot move how I want. It has been 10 years and I never got hurt.

I understand that such a "high-risk" device cannot be sold, but forbidding someone from changing this is like inflicting a second handicap on him.


I suppose we all have, or should have, the right to try stupid things. Sometimes experience and competence are more important than 100% safety. Your comment made me realize how limiting it would be to be physically incapable of taking even the smallest risk.


That is a very poor regulation. Why enforce wheel lift? What matters is that the chair doesn't tip over - that the center of gravity remains in the center of the four wheels.


  > Homebrew modifications of that software, even in the name of “freedom”, risks the patient’s life and health and should be illegal if uncertified.
The official modifications of that software — in the name of "profit" — are currently risking the patient’s life and health, and therefore should also be illegal by your logic.

Surely you must also support effective (ie harsh/deterrent) prosecution and punishment for these crimes as well, correct?


>>>should be illegal if uncertified.

I think this is the key part of the comment - yes, uncertified changes by anyone could feasibly be illegal. The FDA or similar should probably do code reviews.


Looking at corner cases for this:

What if you fix a bug in your own pacemaker? Would it be ok to:

a) Fine you?

b) Jail you?

c) Force you to revert the change? (plausibly leading to death in an extreme case)

[edit: I do agree that there's a chance that making a 'fix' to your own pacemaker might also make it worse. In which case, who do we trust more? The person on the ground with a stake in the matter (however misinformed), or $government_official with no stake in the matter (however well informed).

I think it's tricky! ]


I don't think that scenario is particularly tricky. If you modify someone else's pacemaker, it's a tricky question, even with their consent. If you modify your own, absolutely nothing should stand in your way beyond a nice big notice saying "danger of death, on your head be it". That is, you should have the same freedom to screw with your own personal medical devices that you have to climb out of your own fourth floor window.

People have a right, albeit not enshrined in law, to do stupid things that might kill them - at least as long as they don't then ask someone else to save them.


This is a huge straw man/whataboutism that contributes nothing to the discussion.

Yes, bad software modifications are bad and should be punished wherever they arise.

Homebrew modifications make it way easier for bad stuff to happen, and make it harder to punish.


> bad software modifications are bad and should be punished wherever they arise.

That almost never happens. Software sux.


  >  This is a huge straw man/whataboutism that contributes nothing to the discussion.
It's a countervailing concern, not a strawman.

  > bad software modifications... should be punished wherever they arise
Corporations are currently unpunished (per TFA) when they alter software in a way that risks patient safety, and they have already caused documented harm to patients. This is a shocking failure of federal oversight, but the captured FDA will (by design) never fix it. Oops.

In light of the real harm caused by this neverending policy failure, the Library of Congress is morally and ethically obligated to permit fair use exemption. Individuals and homebrew communities must be unshackled to protect patients from the real (not hypothetical!), documented, and widespread harm caused by corporate-sponsored attacks on US medical infrastructure.

No, that's not an exaggeration.

Given the current anti-patient landscape, the protections of open source far outweigh any risk.


I think this might be a cultural thing.

In some (western) countries, your body is your personal private property, and you have the freedom and ultimate authority over how to use and abuse it, or anything on or in it. (you are still advised to treat your most precious property wisely, obviously)

In other (western) countries/subcommunities people feel that obligations to your community are stronger.

People from these different cultures can get into some pretty hefty discussions when it comes to things like abortion, drugs, euthanasia, or, here, implants.


So like suicide, drugs, and other cases where we are denied dominion over ourselves for our own good? I.e., your life and body are not yours; they belong to society and you only get limited access.


Society doesn't have to give you the rope to hang yourself.


I disagree, or rather: yes, it does have the responsibility to provide you a rope. It is up to you whether you hang yourself or not.


I disagree, I think if you walk into a pharmacy and ask for something dangerous without a prescription they shouldn't be obligated to give it to you. It's the same with medical equipment that keeps you alive.

If you want to risk your life you can do it but no one should be compelled to help you.


No one should be compelled. I mean it more in a negative manner: society has an obligation not to stop people from helping. If someone wanted to sell a nitrogen tank, valve, tube, and a bag that fits easily over the head to people who want to commit suicide in a painless and assured manner, they should be able to sell that (and people would). But in fact you cannot, and that is wrong.


You are taking the position that an individual "owns" themselves

That is not obviously true.

I feel I belong to my family and my community.


Your position is not universal, and in fact strongly opposed by many. I believe that I have the absolute right to edit or terminate my own existence, either on purpose or accidentally. To the extent that anyone can own a person, people own themselves exclusively.


> Your position is not universal

True. But neither is the other position


Surely the patient should have the right to risk their own life?


To distribute? Sure. To make changes to your out of support cyber-eyeball? Nah.


Serious question, what does the FDA know about software quality?


Surely not less than the average consumer.

And surely they could hire experts to do the job.


1. Compared to the average person in the FDA's population of people who are in charge of evaluating the medical devices, the average person in the population of people who would make fixes and helpful modifications might have more expertise in determining the quality of the device's normal software.

2. It's not as if the people who depend on the medical devices have to take the word of the community of people who will mod the devices over the word of the FDA.


Safetism is a great curse on the world. I cannot disagree with you more.


So you would prefer it not be developed?


The software is clearly not the primary product. While there might need to be a carve out or a specific licensing scheme developed to protect them from liability in the case of modified software, I doubt these companies would experience serious financial setbacks if they made their software free and open.

And don't tell me that SaaS is an integral part of the business model for medical device companies. There's no world in which they can't figure out how to turn a profit without charging a monthly fee to use your tens of thousands of dollars eyeball.


> The software is clearly not the primary product.

Sure, in this case. But that means that the rule we're considering actually needs a big asterisk next to it, something like "when the software in question isn't the primary product." That sounds like a thorny regulatory question, and any answer to that question other than "I know it when I see it" probably has big loopholes. This might be unnecessary nitpicking on my part if we're just shooting the breeze about companies we don't like, but if we're actually interested in writing laws, this is a common failure mode. Maybe _the_ common failure mode.

On the other hand, "so you would prefer it not be developed" is a less-than-entirely-charitable way of making this point. Of course @mbakke would _not_ prefer that, and it might avoid an unnecessary round of back-and-forth to make a reasonable guess about what they would prefer and work from there :)


This is being downvoted, yet there's a reason why these types of treatments always start out being developed to serve the US market first.


I used to work with a guy that mounted a keyboard vertically on each side of his chair, using both at once.

A true legend.


I'm typing this from a split keyboard right now. If someone made a nice set of arms that could mount my keyboard halves to my chair arms (with enough room on the right for a trackpad) I'd absolutely spring for that and get rid of my desk. Then when Apple makes a Vision Air I can just go full cyberman and drop the monitor too.


It frustrates me to no end that the split form factor has not gotten more popular. It naturally seems a better fit for the human body. Instead, I am stuck paying a premium to Kinesis for their garbage software.


I don't know how popular you'd like them to be, but ZSA has been around for quite a while. Their ErgoDox has been around for years, and they have other models as well. [0]

The Dygma Defy has also been making waves; it was released not too long ago. [1]

The Keyboardio Model 100 has been out for a while, and it pops up occasionally on eBay, so it can't be too unpopular if it shows up there. [2]

The MoErgo is probably less well known but has a good following both in the US and Europe. [3] Their Discord is pretty active.

> premium to kinesis for their garbage software

All the keyboards above are programmable, often with more than one option for programming. QMK is the common denominator and it isn't bad, but there are other options (Python, etc.), and usually web-based configurators as well.

...If you're willing to go with something that is too new to be popular, but has excellent ergonomics, programmable, and great customer support in US, I'd recommend Cyboard. [4] Currently waiting for mine to be shipped.

There are lots of options besides these. So you are correct that split keyboards are not available at the local big-box store, but at the same time they are definitely more powerful, more comfortable, and more customizable as a class. They are out there and they have a following; you just have to know how to get started.

[0] https://www.zsa.io/voyager/

[1] https://dygma.com/pages/defy

[2] https://shop.keyboard.io/

[3] https://www.moergo.com/

[4] https://www.cyboard.digital/


I know a lot of these options exist, but I have a few unbreakable requirements.

1) not building it myself. I need off the shelf. I am lousy with a soldering iron.

2) it must have a dedicated F row. I do not care about layers and supposedly saved movement distance. I must always be able to mash F5 without any chording. Give me an F row + layers. The keys need to physically be there, my desk has plenty of “vertical” space to hold it.


A dedicated FN row is supported by 2 of the 5 links above; check Cyboard and MoErgo. Most places (all in these links) offer turnkey builds, no soldering required.

Agree your requirements get into rarefied territory if you want something ready-to-go, split, programmable, with FN row. But there are options.


Glove80 has a dedicated F row, is off the shelf, and also has layers if you -want- to use them. I haven't bothered with the layers yet, but my carpal tunnel definitely thanks me for switching to it.

I am not affiliated in any way, just a happy customer who is no longer in pain after work. A dedicated F row was a must for me as well.


You don't need to solder modern-day DIY keyboards (to my disappointment). There are switch sockets now, so everything is pre-soldered, and all you do is maybe put in some screws, put in the switches, and put on the keycaps.


Get the Keeb.io Sinc; it's sold prebuilt and has an F row. I have 2.


I’d say a non-split keyboard is barbaric at this point in computer evolution.


Sadly I agree. Splits are great. I am eagerly awaiting my ortholinear split keyboard from https://dygma.com/ - no affiliation other than being a customer of their first keyboard, the Raise, and having the Defy on order.

Maybe it's just me, but I think the proliferation of mechanical keyboards brings people closer to the fringe where custom keyboards, layouts, parts and pcbs are the norm for the pursuit of perfection.


Check out this mounting kit: https://www.zsa.io/moonlander/tripod-kit/


This is nice in that it has standardized threaded mounting points, but it doesn't bridge the distance from the chair arms out to the keyboard halves. So there'd still be some work to do.


The cable connecting the two halves of my ZSA is just a standard 3.5mm TRRS audio cable. Moonlander docs say max length is 6ft~=2m. That should be plenty to route it around the back of your chair.


Sure, that's not the issue. The issue is that these mounts let you put a Moonlander on tripod bolts. They do not address attaching the so-equipped keyboard halves to the arms of a chair.


Glove80 has a quick-release mount system meant specifically for that use case. The first version was better in my opinion, but they had trouble sourcing some parts and just launched a version 2 with the parts they can get.



I do something similar at home: lie on my bed with a keyboard half leaning on each of my hips.

One of these days I'll get around to 3D-printing a clamp or something.


I saw someone playing a concertina once and thought hmmmm... a split keyboard might work there. Never got round to doing anything about it, though.


May I introduce you to the Commodordion: https://linusakesson.net/commodordion/index.php



Guix has tooling to verify binaries:

https://guix.gnu.org/en/manual/en/html_node/Invoking-guix-ch...

"guix build --no-grafts --no-substitutes --check foo" will force a local rebuild of package foo and fail if the result is not bit-identical. "guix challenge" will compare your local binaries against multiple cache servers.

I build everything locally and compare my results with the official substitute servers from time to time.


It obviously depends on the hardware, but IIRC for me it was maybe 3-4 hours building from the 357-byte seed to the latest GCC.

The early binaries are not very optimized :-)


Typically part of a "version string":

    $ python3
    Python 3.10.7 (main, Jan  1 1970, 00:00:01) [GCC 11.3.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>>
Perhaps a relic from when software had to be manually updated?
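If you'd rather inspect that build info programmatically than parse the banner, the standard library exposes it (a small sketch; the exact strings depend on your particular build):

```python
# The build date and compiler shown in the interpreter banner are also
# available via the standard library's platform module.
import platform

buildno, builddate = platform.python_build()
print(builddate)                   # the date string from the banner
print(platform.python_compiler())  # the compiler string, e.g. the "[GCC ...]" part
```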


On NixOS, I think the release time or commit time is used:

    $ python3
    Python 3.10.11 (main, Apr  4 2023, 22:10:32) [GCC 12.2.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> 
That is more useful than the build time.


How is that possible? Is nixpkgs an input to the Python derivation? Or do packagers "hard code" a value every time they modify the Python build code? Automated tooling that sets it after pull requests? Something else? :-)


GCC respects SOURCE_DATE_EPOCH, and Nixpkgs has specific support for setting that environment variable: https://github.com/NixOS/nixpkgs/blob/92fdbd284c262f3e478033... (although I haven't proved that this is actually how it works for cpython's build).

Irrelevant spelunking details follow:

That string is output by cpython to contain the contents of the __DATE__ C macro (https://github.com/python/cpython/blob/fa35b9e89b2e207fc8bae... which calls to https://github.com/python/cpython/blob/fa35b9e89b2e207fc8bae... which uses the __DATE__ macro at https://github.com/python/cpython/blob/fa35b9e89b2e207fc8bae... ).

Cpython is defined in nixpkgs at https://github.com/NixOS/nixpkgs/blob/92fdbd284c262f3e478033... which I imagine (but haven't proved) uses GCC.


Thank you! Setting SOURCE_DATE_EPOCH to the most recent file timestamp found in the source input is a clever hack.


The source for the cpython build is the release tarball (https://github.com/NixOS/nixpkgs/blob/master/pkgs/developmen...).

In that case, NixOS sets SOURCE_DATE_EPOCH (which I suspect will be picked up by the python build) to the latest timestamp found in that archive (https://github.com/NixOS/nixpkgs/blob/master/pkgs/build-supp...)
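In Python terms, what that setup hook computes is roughly this (a sketch of the idea only; the actual Nixpkgs code is a shell script, and `latest_mtime` is just an illustrative name):

```python
# Reproduce the "latest timestamp in the source archive" idea:
# scan the tarball and take the maximum member mtime.
import tarfile

def latest_mtime(tarball_path: str) -> int:
    """Return the newest mtime (epoch seconds) of any member in the tarball."""
    with tarfile.open(tarball_path) as tar:
        return max(int(member.mtime) for member in tar.getmembers())

# A build would then export SOURCE_DATE_EPOCH=<that value> before compiling.
```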


2023-04-04T22:10:32 is the timestamp of Python-3.10.11/Misc/NEWS from https://www.python.org/ftp/python/3.10.11/Python-3.10.11.tar...

