I wish open source projects would support MingW, or at least not actively block its usage. It's a good compiler that provides excellent compatibility without the need for any extra runtime DLLs.
I don't understand how open source projects can insist on requiring a proprietary compiler.
There are some pretty useful abstractions and libraries that MinGW doesn't work with. The biggest example is WIL[1], which Windows kernel programmers use and which is a massive improvement in ergonomics and safety when writing native Windows platform code.
'MinGW' is not GCC; it's an ABI, and from the developer perspective it is also the headers and the libraries. You can have GCC MinGW, Clang MinGW, Rust MinGW, Zig MinGW, C# AOT MinGW.
If you want to link MSVC-built libraries (that are external / you don't have the source), mingw may not be an option. For example, you can get the Steamworks SDK to build with mingw, but it will crash at runtime.
From the capitalization I can tell you and the parent might not be aware it's "minimal GNU for Windows" which I would tend to pronounce "min g w" and capitalize as "MinGW." I used to say ming. Now it's my little friend. Say hello to my little friend, mang.
I've been shipping Windows software for 20+ years. Not one project I have ever worked on was based on MinGW. It's a gross hack that is ABI-incompatible with the predominant ecosystem.
In the year 2026 there is no reason to use MinGW. Just use Clang and target MSVC ABI. Cross-compiling Linux->Windows is very easy. Cross-compiling Windows->Linux is 1000x harder because Linux userspace is a clusterfuck of terrible design choices.
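(Concretely, and hedging a bit since the setup details vary: recent clang can produce MSVC-ABI binaries from Linux with "--target=x86_64-pc-windows-msvc" and lld as the linker, provided you point it at the Windows SDK and MSVC CRT headers/libraries, e.g. fetched with a tool like xwin.)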
If a project "supports Windows" by only way of MinGW then it doesn't really support Windows. It's fundamentally incompatible with how almost all Windows software is developed. It's a huge red flag and a clear indicator that the project doesn't actually care about supporting Windows.
If I'm writing some cross-platform bit of software, my interest in supporting Windows is naturally in producing binaries that run on Windows.
Why on earth should I give a flying toss how "almost all Windows software is developed", or which kinds of ABIs are BillG-kissed and approved? Good god. Talk about fetishising process over outcome.
I just eyerolled so hard I gave myself a migraine.
If your focus is on outcome, then I promise and assure you that using MinGW will make producing a positive outcome significantly harder and more frustrating.
With modern Clang there really isn’t a justifiable reason to use MinGW.
I think the issues you're referring to are related to the C++ ABI, which is inherently incompatible between different compilers (and sometimes versions). This can sometimes be an issue for plugins, though sane programs always use C wrappers.
I never had issues with C ABI, calling into other DLLs, creating DLLs, COM objects, or whatever. I fail to see what is fundamentally incompatible here.
Yes, if you go pure C API, write libraries that conform to this, and don't pass owned memory around, then sure, that works.
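A minimal sketch of that pattern, with made-up names: the C++ implementation hides behind an extern "C" surface and an opaque handle, so only the plain C ABI crosses the DLL boundary and memory is allocated and freed on the library's side.

    // widget.h -- plain C interface, consumable from any compiler
    #ifdef __cplusplus
    extern "C" {
    #endif
    typedef struct widget widget;               /* opaque handle */
    widget *widget_create(const char *name);
    int     widget_frob(widget *w, int amount);
    void    widget_destroy(widget *w);          /* freed by the DLL that allocated it */
    #ifdef __cplusplus
    }
    #endif

    // widget.cpp -- built into the DLL with whatever C++ compiler the library prefers
    #include <string>
    #include "widget.h"
    struct widget { std::string name; int value; };
    extern "C" widget *widget_create(const char *name) { return new widget{name, 0}; }
    extern "C" int widget_frob(widget *w, int amount) { return w->value += amount; }
    extern "C" void widget_destroy(widget *w) { delete w; }

A client built with MinGW (or anything else) can call these through GetProcAddress or an import library without ever seeing a C++ type.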
MSVC has been ABI-stable since 2015. Many libraries distribute pre-compiled binaries that rely on this stability. ~None of them include MinGW binaries in their matrix (Debug/Release × MT/MD).
Being a good citizen means meeting people where they are. Libraries and programs should expect to be integrated into other build systems and environments. I have literally never, ever in my career worked in a Windows dev environment that used MinGW. The ~only reason to use MinGW is that you primarily use Linux and you want to half-ass the bare minimum.
Interestingly, I've never used Visual Studio on Windows (only briefly for ARM-based PDAs, where I got to experience the infamous MFC, which was okay to me despite the hate).
I've used Borland C++ and Watcom C/C++ back in the day and MinGW after that. Also COM was invented for interoperation between languages/compilers.
Being a good citizen means not using the inherently unstable C++ ABI directly. You can use a C API or even COM for that. Relying on it is cute, but it's accidental and it will break in the future. Microsoft can't guarantee that it will stay stable, because C++ is always evolving and will eventually force a compatibility break.
Open source projects shouldn't depend on proprietary compilers (they can support them, but not as the only option). It just undermines the whole point.
The reason I use MinGW is that it produces more broadly compatible binaries on Windows, allowing me to support all Windows versions from Windows 2000 to the latest with a single binary (and sometimes a 64-bit one if there is a need/advantage), and it doesn't require me to bundle dozens of DLLs (or worse, install them system-wide) or artificially limit compatibility for no reason.
Breaking compatibility is hostile to users who can't, or don't want to, always use the latest Windows.
I use MingW without any extra libs (no MSYS); it just uses the ancient msvcrt.dll that is present in all Windows versions, so my programs work even on Windows 2000.
Additionally, the cross-compiler on Linux produces binaries with no extra runtime requirements.
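For anyone who wants to try it, the setup is minimal: install your distro's mingw-w64 cross toolchain and build with something like "x86_64-w64-mingw32-gcc -O2 -o hello.exe hello.c" (or the i686-w64-mingw32-gcc variant for 32-bit). Assuming the toolchain targets the classic msvcrt rather than the UCRT, a plain C program built this way typically imports only msvcrt.dll and kernel32.dll.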
But that's the point: I don't want the same style of executable as Visual Studio produces. Having to distribute a bunch of DLLs and having worse compatibility is pretty bad.
A major part of the incompatibility with older versions of Windows is just that newer VS runtimes cut support artificially. That's it. Many programs would otherwise work as-is or with just a little help.
Yeah, you can get away with this nowadays because Git itself installs two-thirds of the things you need anyway. You just need to finish the job by getting the package and putting the binaries in your Git folder. Bam! mingw64, clang, whatever cc you need. It all links to standard Windows stuff because you have to tell the linker where your win32.lib is. But this is true no matter the compiler; it's just that Visual Studio supplies this in some god-awful Program Files path.
Clearing secrets is a separate issue from the memory-allocation mechanism. It must be done all the way from the encryption layer up through the program to avoid leaks.
This is typically not done, except for certain parts such as handling of the crypto keys, because it's pervasive and requires reworking everything with that in mind (TLS library, web framework, application).
On the other hand, the centralization and global use of a GC in the process allows modifying it to always zero out the memory it deallocates and to run collections at regular intervals, so it can have an advantage here (it's very easy to inadvertently leak secrets into some string).
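On the non-GC side, the usual mitigation is a zeroing helper plus an RAII buffer applied at every layer that touches key material. A rough sketch (names are made up, and an OS-provided primitive like SecureZeroMemory or explicit_bzero is the more robust choice where available):

    #include <cstddef>
    #include <vector>

    // Write zeros through a volatile pointer so the stores are unlikely to be
    // optimized away as "dead" writes right before the buffer is freed.
    inline void secure_zero(void *p, std::size_t n) {
        volatile unsigned char *vp = static_cast<volatile unsigned char *>(p);
        while (n--) *vp++ = 0;
    }

    // RAII holder that wipes the secret on destruction, so every layer that
    // copies key material into one of these gets cleanup for free.
    struct SecretBuffer {
        std::vector<unsigned char> bytes;
        explicit SecretBuffer(std::size_t n) : bytes(n) {}
        ~SecretBuffer() { if (!bytes.empty()) secure_zero(bytes.data(), bytes.size()); }
    };

The hard part is exactly what's described above: this has to be used consistently in the TLS library, the framework, and the application, not just around the key itself.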
You can do it with HW-accelerated emulation, like Apple did with the M1 CPUs. They implemented x86-compatible behavior in HW, so the emulation has very good performance.
Another approach was Transmeta, where the target ISA was microcoded and therefore handled in "software".
They said that they implemented x86 ISA memory-handling instructions, which substantially sped up the emulation. I don't remember exactly which ones now, but they explained this all in a WWDC video about the emulation.
Not instructions per se. Rosetta is a software based binary translator, and one of the most intensive parts about translating x86 to ARM is having to make sure all load/store instructions are strictly well ordered. To alleviate this pressure, Apple implemented the Total Store Ordering (TSO) feature in hardware, which makes sure that all ARM load and store instructions (transparently) follow the same memory ordering rules as x86.
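A rough illustration of what TSO buys, using the classic message-passing litmus test (a sketch only, not Apple's mechanism; the relaxed atomics here are just a stand-in for ordinary machine-level loads and stores):

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> data{0}, flag{0};

    int main() {
        for (int i = 0; i < 100000; ++i) {
            data.store(0, std::memory_order_relaxed);
            flag.store(0, std::memory_order_relaxed);
            int seen = -1;
            std::thread writer([] {
                data.store(42, std::memory_order_relaxed);  // 1. write the payload
                flag.store(1, std::memory_order_relaxed);   // 2. then publish the flag
            });
            std::thread reader([&seen] {
                while (flag.load(std::memory_order_relaxed) == 0) {} // wait for the flag
                seen = data.load(std::memory_order_relaxed);         // then read the payload
            });
            writer.join(); reader.join();
            // At the hardware level, x86 never reorders the two stores (or the two
            // loads), so x86 code written this way works without barriers; ARM's
            // weaker model allows seen == 0, which is why a translator must either
            // fence nearly every access or rely on a hardware TSO mode.
            if (seen != 42) { std::printf("reordering observed on iteration %d\n", i); return 0; }
        }
        std::printf("no reordering observed\n");
        return 0;
    }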
In my case, as a developer of a programming language that can compile to all supported platforms from any platform, the signing (and notarization) is simply incompatible with the process.
Not only is such signing all about control (the Epic case is a great example of misuse and a reminder that anyone can be blocked by Apple), it is also anti-competitive toward other programming languages.
I treat a platform as open only when it allows running unsigned binaries in a reasonable way (or self-signed ones, though that already carries the baggage of needing to maintain the key). When it doesn't, I simply don't support that platform.
Some closed platforms (iOS and Android[1]) can still be supported pretty well using PWAs, because the apps are fullscreen and self-contained, unlike on the desktop.
[1] depending on whether Google provides a reasonable way to run self-signed apps, but the trust that it will remain open in the future is already severely damaged
The signing is definitely about control, as is all things with Apple, but there are security benefits. It's a pretty standard flow for dev tools to ad-hoc (self) sign binaries on macOS (either shelling out to codesign, or using a cross-platform tool like https://github.com/indygreg/apple-platform-rs). Nix handles that for me, for example.
It makes it easy for tools like Santa or Little Snitch to identify binaries, and gives the kernel/userspace a common language for talking about process identity. You can configure something similar on Linux: https://www.redhat.com/en/blog/how-use-linux-kernels-integri...
But Apple's system is centralized. It would be nice if you could add your own root keys! They stay pretty close to standard X.509.
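(For reference, the ad-hoc flow mentioned above is just something like "codesign --force -s - MyApp.app", where "-" selects the ad-hoc identity and MyApp.app is a placeholder path.)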
What is the subset of users who are going to investigate and read an rtf file but don’t know how to approve an application via system settings (or google to do so)?
I would say quite a lot of users, because even the previous simple method of right-clicking wasn't that well known, even among power users. A lot of them just selected "allow applications from anyone" in the settings (most likely just temporarily).
In one application I also offered an alternative in the form of a web app, in case they were not comfortable with any of the options.
Also, it's presented in a .dmg file where you have two icons: the app and the "How to install". I would say that's quite inviting for investigation :)
Try "killall -STOP photoanalysisd", this will pause the process instead of killing it (which would result in restarting it by launchd). You can unpause it by using "-CONT".
Chickens are very intelligent; it just happens that most people only ever see chickens in overcrowded small spaces where they behave idiotically. So would you if you were in the same situation.
I kept chickens for 15 years (mostly free-roaming in my backyard, unless there was a fox lurking, so not in overcrowded small spaces) and I disagree. To me they seemed pretty stupid, and pretty mean to one another.
We've had a small number (just 3) with plenty of space, and it was fun to observe them; all sorts of interesting behaviors.
My favorite was them cooperating against a common enemy (a dog that sometimes ate their food, which we had tried to mitigate without much success).
Then one day they had a discussion about the problem in the opposite corner and launched a stealth attack, keeping themselves hidden behind the trees while approaching the dog without it noticing. Once close enough, they attacked from behind; the dog squeaked, more from surprise than pain, and since then it has never touched their food again and avoids them.
That exists, and it's called web apps. For native apps you need the exact opposite: access to everything, otherwise they can't do the useful integrations and provide the best experience for the user, which is the point of native apps.
You have to trust native apps, as has always been the case. You can't just install random apps. You can delegate the trust to curated lists of apps that you trust.
Or you can just use web apps, but then you have to trust them too (so they don't misuse information about you or your data, for example). But then they can't integrate with anything, and many features are simply not available.
As for your example, a photo editor could need a network connection when it contains collaborative features. Or an auto-update system. Or downloading assets on demand. Or a cloud AI feature. Or a list of add-ons to install. Or license checks. Or online help/docs. Or whatever.
Why do I "have to trust native apps"? I owe them nothing and they can happily work in a sandbox where they have access to a their own folder and files that I allow them to use. If I decide they don't need network, then they don't need network.
> a photo editor could need a network connection when it contains collaborative features. Or whatever.
Unfortunately, the DMA is the reason Google is doing this. It allowed Apple to require notarization for "security". Google is just copying the same approach now that it's clear what the governments' requirements are.
Before, it was unclear, so it was better to allow installation of apps without any verification in order to appear more open.
Remember, any regulation/law has unintended consequences. At one point Apple decided that PWAs would no longer be supported in the EU so they wouldn't have to provide equal capabilities for implementing them in alternative web browsers; fortunately they changed their mind by obtaining an exception. PWAs are the only alternative choice for making "proper" apps on iOS (no hacky sideloading methods).
I think that overall the DMA is more of a loss than a win (good on paper, terrible in practice). It codified the worse parts. The EU app stores are still fully controlled by Apple (harder to install, they can just decline or drag out notarization of any app or revoke your license to the dev tools, you still need to pay them, etc.).
For various apps the EU market is too small (esp. for things that need to be global) to justify the investment, so while you can, for example, theoretically develop a real alternative browser to Safari/WebKit (forbidden by App Store rules), nobody is willing to do it.