Did Mozilla really come from Godzilla? I've always thought it was a short form of 'Mosaic killa' (Mosaic killer). The original code of NCSA Mosaic was licensed by Microsoft Corp from Spyglass, Inc. (and so became part of the first version of Internet Explorer), while the team that had written this code (Marc Andreessen et al) got venture funding from James Clark et al in 1994 to form Netscape Communications Corp and basically rewrite the browser from scratch. I.e., the initial goal of that team was to kill NCSA Mosaic, their previous creation; hence the name.
> Mozilla Foundation – from the name of the web browser that preceded Netscape Navigator. When Marc Andreessen, co-founder of Netscape, created a browser to replace the Mosaic browser, it was internally named Mozilla (Mosaic-Killer, Godzilla) by Jamie Zawinski.[110]
Does the GDI/non-GDI distinction really matter if GDI's only job is to blit an already-rendered framebuffer, produced by the Skia library (the up-to-date part of the browser), to the hardware? I.e., when GDI is not actually exposed to the fonts and vector graphics downloaded from the web, just pixels? It seems highly unlikely to me that GDI can be exploited via the colors of pixels.
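For illustration, here's a minimal sketch in C of what that pixels-only path looks like (the function name and the assumption of a top-down 32-bit pixel buffer handed over by the renderer are mine; the GDI call itself is the real Win32 API):

    /* Sketch: Skia (or any renderer) has already produced a plain pixel
       buffer; GDI's only remaining job is a rectangle copy to the screen.
       No fonts or vector data from the web ever reach GDI here. */
    #include <windows.h>

    void blit_framebuffer(HDC hdc, const void *pixels, int width, int height)
    {
        BITMAPINFO bmi = {0};
        bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth       = width;
        bmi.bmiHeader.biHeight      = -height;  /* negative = top-down rows */
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;       /* assumed: 32-bit pixels */
        bmi.bmiHeader.biCompression = BI_RGB;

        /* The entire GDI "attack surface" in this model: */
        StretchDIBits(hdc, 0, 0, width, height,
                      0, 0, width, height,
                      pixels, &bmi, DIB_RGB_COLORS, SRCCOPY);
    }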
An interesting compilation of buzzwords of that time. Ultimately, almost none of them stuck. I wonder if today's AI hype will look the same in 27 years?
> I wonder if today's AI hype will look the same in 27 years?
If the AI hype from 37 years ago is any indication (fuzzy logic on Lisp machines, anyone?): yeah. 99% of the buzzwords will be dropped because they're empty, and the remaining 1% of useful ideas will vanish into the background and see continued use under new names, because they won't be sexy enough anymore to be called AI.
My first year of computer science (or of any university course) was in 1997. I had used a Commodore 64, a Sinclair ZX81, etc. in the 80s, then took a break through junior high and high school and for several years after that, and got back into computers as a hobby in the mid-90s. For health reasons I could no longer work as a cook, had been learning C from K&R's book to make it easier to build ray-tracing animations, and decided to go to university for it (I had some money saved up by virtue of having no free time or social life).
I sat down at a Sun workstation that was described as a "thin client". This was slicker than my Linux system at home (which I mostly ran as a terminal, using Lynx or something to browse the web) and featured the Sun browser and blindingly fast Internet. Since I was using dial-up at home, this was pretty intense.
Yes, I believed the future was thin clients, web browsers for everything, and hyper-connected devices just around the corner. It seemed totally reasonable to me that every system would be like a Chromebook: you sit down, log in, and your "system" is waiting for you just as you left it the last time you used it, whether that was here or across the world. I also imagined this experience working on your watch (I imagined a bigger watch - we had watches that you could watch TV on in the 80s; a kid in my school had one) or on what I imagined were inevitable table-top and wall-sized monitors that I assumed would be ubiquitous in 10 years (Surface Hub comes to mind).
So this was a popular vision, and one that is kind of realized today, although not all the way and not quite in the way "we" (the people I was reading, talking to, and hoping to become) imagined.
So today we have Google accounts suspended by corporate bots (on grounds that seem mostly based on the output of /dev/urandom), and I wonder, looking at the sales pitch of these ideas in 1997: are we NUI yet? And if so, can we have our GUI back, please?
Frankly, the idea of the mainframe is much older, and it never really appealed to me. I prefer the kind of PC where the P stands for 'personal'.
> I sat down at a Sun workstation that was described as a "thin client". This was slicker than my Linux system at home (which I mostly ran as a terminal, using Lynx or something to browse the web) and featured the Sun browser
I don't think so; that thing was dog slow due to its Java-based OS, and I can't associate it in any way with the word 'slick' ;)
Sun workstations without disks were also often used as thin clients. It was quite simple to boot them from the network and use an NFS root, or to use them as graphical terminals.
I switched away from FastSpring in 2021, when they outsourced their payouts to Hyperwallet (for me this change meant a double currency exchange, USD -> EUR -> USD, with the associated double exchange fees). It looks like FastSpring has rolled further downhill since then. This reminds me of the Plimus/BlueSnap collapse: when this kind of company runs out of cash, it tends to introduce various funny fees before finally going belly-up.
Much of the Windows compatibility is "just" a stable API for Windows controls, GUI event-handling loops, and 3D graphics and sound (DirectX). Linux has a stable API for files and sockets (POSIX), but that's all.
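For a sense of scale, here's a minimal sketch of what that stable POSIX surface covers; C code like this would have compiled the same way in 1996 (the file path is just an example):

    /* The part of the Linux API surface that *is* stable: files and sockets. */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int fd = open("/etc/hostname", O_RDONLY);  /* same open() as decades ago */
        if (fd >= 0) {
            char buf[256];
            ssize_t n = read(fd, buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }
            close(fd);
        }
        int s = socket(AF_INET, SOCK_STREAM, 0);   /* BSD sockets, unchanged */
        if (s >= 0) close(s);
        return 0;
    }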
And I am saying you don't need to rely on any of that. You can just ship it yourself (statically link, or use LD_LIBRARY_PATH). That's what Windows applications that rely on GTK or Qt do as well, and it works fine for Linux too. The basics (libc, libX, etc.) are stable, and the Linux kernel is stable.
And this is really what Windows does too, with the MSVC, .NET, and whatnot redistributables. It's just that these things are typically bundled with the application if you need them.
It's really not that different, aside from "Python vs. Ruby"-type differences, which are meaningful differences but actually aren't all that important.
Stop spreading FUD; X and OpenGL have maintained stable ABIs. There is Wayland now, but even that comes with Xwayland to maintain compatibility.
Sound is a bit more rocky, but there are compatibility shims for OSS and ALSA on the newer audio architectures.
Stop claiming that I'm spreading FUD, and show me at least one Linux app that was compiled to binary code in 1996, where exactly that binary still runs under a modern Linux desktop environment and has a visual style similar to the rest of the built-in apps.
Got no counterexamples? Then it's not FUD at all, but rather the plain truth.
There's one thing I can't understand in this story: if this is lawful interception, why did Hetzner and Linode bother to set up MitM interception with a different LE certificate and key, rather than extracting the TLS private key directly from the RAM and/or storage device of the VPS? Even if it's a physically dedicated server, they can extract the private key from RAM by dumping its contents after an unscheduled reboot. Extraction of the private key isn't visible in CT logs and is much stealthier, practically undetectable.
There's also a possibility that one would be a "search" and the other an "interception", with different levels of approval required, but I don't know what the current legal situation in Germany is.
On a physical server, couldn't you just hotplug a PCIe card and DMA out any data you're interested in? Something like a network card with firmware built specifically for the purpose should do it. It sounds like such a standard thing for law enforcement that I imagine the equipment is available off the shelf?
The difference between modern days and the days of DOS isn't in the C/C++ compiler; it's in virtual memory, address-space isolation, and privilege isolation. So it's not the job of a C/C++ compiler to enforce protection against writes to "special" addresses, because interrupt table updates (and memory-mapped hardware I/O in general) still must happen somewhere (i.e., in the kernel, hypervisor, drivers, etc.), and that code is still written in C/C++, same as in the DOS era.
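A tiny C sketch of the point (address 0 stands in for the real-mode interrupt vector table):

    /* Under real-mode DOS, physical address 0 held the interrupt vector
       table, so this write would silently corrupt it. On a modern OS the
       page is simply unmapped for user processes, so the same compiled
       store triggers a fault (SIGSEGV on Linux) instead. The compiler
       emits an identical store either way; the protection comes from the
       OS and the MMU, not the language. */
    #include <stdio.h>

    int main(void)
    {
        volatile unsigned *ivt = (unsigned *)0;
        *ivt = 0xDEADBEEF;  /* "works" on DOS, faults under virtual memory */
        puts("unreachable on a modern OS");
        return 0;
    }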