Asahi Lina is truly an inspiration for open source reverse engineering. For those not aware, they also live stream their coding sessions quite often: https://www.youtube.com/@AsahiLina
I'm excited for the day that I can easily install SteamOS (the modern one that runs on the Steam Deck) on an M2 Mac mini for an insanely powerful "Steam console" for my living room TV.
As interesting as it would be to watch some of these streams, it's completely unbearable to watch and listen with this persona involved. If they were trying for something that would ensure a very niche audience, this was a good choice.
I honestly find it very hard to take someone seriously who chooses this kind of persona, even though it's hard to argue with their technical ability and results.
Asahi Lina has the same Spanish-sounding accent as Hector, but pitched up and in falsetto. And "raider", the hostname of the developer machine, is the same as the hostname of Hector's machine.
That looks pre-recorded, and the second person talking with Lina is clearly following a script. I don't think they could do it that well in one sitting, and the animations are probably not automatic.
Probably some pre-recorded, agreed upon advertisement to promote a new channel.
Well then it must be legit. Who among us hasn't accidentally leaked a cryptographic secret key to a screeching vtuber who has the exact same accent as we do.
Even if you had the stream key¹, I wouldn’t expect you to be able to take over a stream with that kind of clean cutting from marcan to loading throbber to Asahi Lina. Without any knowledge of what YouTube Live actually does about conflicting input streams, I would expect the takeover to (a) be instant, (b) fail, or (c) produce a garbled mess. What the thirteen-second throbber delay would be adequate for is starting up whatever software you need to play the part of Asahi Lina.
I have no specific knowledge, but it seems very clear to me that at the very least marcan is a collaborator in the persona of Asahi Lina; and absent further contrary evidence, them being the same makes sense.
—⁂—
¹ And if stream key exfiltration actually happened, I find it hard to imagine anything but acrimony arising.
> and he's also not a GPU hacker as far as I know.
I'm not really willing to indulge the greater discussion, but marcan has done some serious GPU hacking before: reverse engineering the microcode (not shaders) of the PS4's GPU to fix bugs that the PS4's drivers had merely hacked around.
AFAIK, that is music made by Marcan (Lina) themself. Among other talents, Marcan also appears to be a musician. There are a few interviews with Marcan where you can see pianos in the background, for example here: https://youtu.be/dF2YQ92WKpM?t=989
I wonder how long it's going to take for games to start generally supporting ARM. Getting Linux running well on the M1/M2/etc. seems like only half the battle for making a good gaming machine out of these.
Desktop games*. ARM is already a major target for gaming. Games are 61% of App Store revenue and 71% of Play Store revenue, and mobile games are the majority (51%) of the gaming market.
Unfortunately, Waydroid still has big issues running well on Linux if you want to use it to play Android games. I managed to get it working for a few months on a previous Ubuntu version; now I can't get it to work at all.
I wouldn't count on many developers going back to update old games with ARM support. It's more likely that the community will work to build some sort of Box86 + Proton stack to get games working, which should get a lot of the classics working[0]. From there, I think the struggle will be getting Box86 to run fast enough for modern games. Apple's ARM CPUs have great IPC, but that can still get annihilated when it's forced to simulate SIMD/AVX instructions. I assume Apple has some sort of vector acceleration framework in Apple Silicon, but it will take time and effort to reverse-engineer and implement.
Things are certainly looking better than they did a couple years ago, but getting ARM to run x86 code faster-than-native is an uphill battle. Maybe even an impossible one, but I've been surprised before (like with DXVK).
It's a little frustrating how the norm in the game industry is for companies to toss a binary over the wall and maybe patch it for a short period after release (not a given; ports in particular are prone to being forever stuck at 1.0), with significant technical updates out of the question until enough time has passed for them to try to sell you a remaster.
Not having any experience in that industry, I wonder what the driving forces of this are. I suspect it's some combination of incredibly brittle codebases that cease to build if glanced at the wrong way and aversion to spending anything on games post-release.
>aversion to spending anything on games post-release.
I am pretty sure that is the answer. Unless the game is Cyberpunk levels of unplayable, there is no money in post release support unless it is bundled with DLC or GOTY releases.
Back in the day, it was a pretty commonly cited figure that something like 90% of a game's revenue came in the first 3-4 weeks after release. DLC and "seasons" are an attempt to stretch that out and make more off a single release, but I haven't heard how well that works.
> I suspect it's some combination of incredibly brittle codebases that cease to build if glanced at the wrong way and aversion to spending anything on games post-release.
The primary reason is that there's no money in it. Like movies, your "one shot" game (without some sort of continuous billing e.g. mmo, subscription, continuous stream of DLCs) makes most of its revenue in the first few weeks, and once the kinks are ironed out what it makes afterwards doesn't really depend on maintenance.
Additional maintenance doesn't pay for itself, the producer doesn't pay the devs for that, and thus the devs take on the next contract to pay the bills. Not to mention additional maintenance is a risk.
Most of the time, if the game was good and it's been abandoned, someone will make a remaster or a modern take on it that works on modern systems again.
Gamedevs not updating is because:

- The engines themselves are indeed outrageously brittle at times, with LTS releases sometimes containing significant bugs that persist into newer minor and major releases

- New releases can actually cause dramatic regressions, not just in terms of bugs, but in terms of features, stability, binary size, and more

- AAAs are wasting time chasing the next big thing; non-AAAs are struggling with few people and need to constantly be building the next thing, because they're building products, not services

- Gamedevs are largely media/entertainment companies; very few act like technology companies

- Extremely low expected ROI, even if it were possible to deliver on other platforms

Gamedevs aren't in the business of building platforms; they're in the business (mostly) of consuming them and going where the players are.
The "binary abandonment" model can have effectively the same result, though.
An example that comes to mind immediately is how much of a mess it is to get games that were built with Games for Windows Live, like the PC port of Fable 3, running on modern Windows. It's possible, but there's a ridiculous number of hoops to jump through, none of which would be necessary if Microsoft shipped a quick and dirty update that pulled out the Games for Windows Live dependency.
> I assume Apple has some sort of vector acceleration framework in Apple Silicon, but it will take time and effort to reverse-engineer and implement.
I'm pretty sure it's just vanilla ARM NEON, so I don't think it will take any reverse engineering. The Apple Silicon GPU is custom, but the CPU is just minor extensions to (and compatible with) AArch64. Rumour has it that this is because AArch64 was designed by Apple and donated to ARM (with whom Apple has a close relationship, having been a founding member).
Interesting, that's what I was curious about. NEON is a bit slow last I checked, but at least Apple is sticking to spec here. It does make me wonder how much performance is left on the table for ARM architectures that want to emulate x86, though.
...it also raises the question of how emulated titles fare against translated ones. It would be fascinating to see how something like Dark Souls Remastered performs through Yuzu vs DXVK on Apple Silicon.
I'd guess that will probably only happen when either windows gets widespread ARM adoption, or there's a new Xbox or PlayStation console that uses an ARM processor. Which... might be a while.
The Nintendo Switch already uses an ARM SoC by Nvidia. But I'm not sure whether that has meaningfully increased the probability of games being ported to macOS. The Switch uses Vulkan, but Apple uses Metal, a proprietary graphics API. Whether ports make sense probably depends on how much the Mac's market share grows relative to Windows.
The Switch can use Vulkan, but it's unusual in offering a wide range of APIs, from OpenGL and Vulkan (implementations likely derived from Nvidia's existing PC driver) to a custom low-level API tailored to the hardware called NVN. From what I gather from the emulation scene, the majority of Switch titles with non-trivial performance requirements use NVN. Even idTech, which famously uses Vulkan on PC, uses NVN for its Switch ports.
Linux gaming has been developing in the opposite direction for a while, moving away from even x86 Linux native ports and toward running x86 Windows games under emulation.
The recursive-acronym tradition (GNU's Not Unix, Eine Is Not Emacs, etc.) has of course traditionally implied that the implementation is a superset of, or better than, the thing it replaces and references in the acronym.
The Wine FAQ concludes:
> "Wine is not just an emulator" is more accurate. Thinking of Wine as just an emulator is really forgetting about the other things it is. Wine's "emulator" is really just a binary loader that allows Windows applications to interface with the Wine API replacement.
The page size dictates the minimum size and alignment requirements for `mmap`, and also for regions of memory with different levels of protection (e.g. read-only vs read+write vs read+execute). If a program expects to be able to `mmap` in 4k chunks and can't, it will probably not work properly.
On macOS, IIRC, the userspace and kernel-space page sizes can differ, and different userspace programs can run with different page sizes; on Linux, however, the page size is currently fixed across the whole system and set at kernel compile time. The M1's IOMMU only supports 16k-aligned pages, so memory regions that need to be shared with other hardware (e.g. the GPU) need to be 16k-aligned. As such (and because Linux doesn't currently have great support for mixed page sizes), the Asahi Linux project has decided to run with 16k pages globally. However, that breaks a number of applications that expect 4k pages.
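To make the `mmap` constraint concrete, here's a minimal Python sketch (the temp file and sizes are arbitrary illustration): instead of hardcoding 4096, a program can ask the kernel for its page size at runtime, since `mmap` offsets must be a multiple of that value.

```python
import mmap
import os
import tempfile

# Ask the kernel for its page size: 4096 on typical x86_64 kernels,
# 16384 on Asahi Linux's default 16k kernel.
page = os.sysconf("SC_PAGE_SIZE")

# Create a scratch file two pages long.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * (page * 2))
    path = f.name

# mmap offsets must be page-size multiples. offset=page is always
# valid; a hardcoded offset=4096 would fail with EINVAL on a
# 16k-page kernel, since 4096 is not a multiple of 16384 there.
fd = os.open(path, os.O_RDONLY)
m = mmap.mmap(fd, page, offset=page, prot=mmap.PROT_READ)
assert len(m) == page

m.close()
os.close(fd)
os.unlink(path)
```

A binary that baked 4096 into that `offset` computation at compile time is exactly the kind of program that breaks on a 16k kernel.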
I imagine there will be a lot of work to improve this over the coming years, not just because of Asahi, but because of the cloud ARM systems that are being developed.
Pages have been 4k on a lot of systems for 30+ years.
That means a lot of software has come to assume that.
Certain memory buffers need to be page size aligned, or a multiple of pages long. Code can only be loaded to a page aligned memory address. Memory mapping and read/write/execute permissions can only be set on a per-page basis.
If all that stuff is hardcoded now, there will be lots of fixes necessary to make things work properly with a different page size.
And those fixes probably will need the software to be recompiled. And some software is only distributed in binary form, and getting someone to recompile it may be nearly impossible.
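As a small illustration of both rules above (a hedged sketch using Python's ctypes and Linux's values for the `MAP_*` constants, which differ on other Unixes): the kernel only hands out page-aligned mappings, and permission changes on a sub-page or unaligned range are simply rejected.

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.mprotect.restype = ctypes.c_int
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]

page = os.sysconf("SC_PAGE_SIZE")

# Linux constant values (an assumption; other OSes use different ones).
PROT_READ, PROT_WRITE = 0x1, 0x2
MAP_PRIVATE, MAP_ANONYMOUS = 0x02, 0x20

# The kernel only hands out whole, page-aligned mappings.
addr = libc.mmap(None, page, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
assert addr % page == 0

# Permissions are per-page: mprotect on an unaligned address fails
# with EINVAL, because there's no such thing as a read-only half-page.
rc = libc.mprotect(addr + 64, 64, PROT_READ)
assert rc == -1

# An aligned, page-sized range works fine.
rc = libc.mprotect(addr, page, PROT_READ)
assert rc == 0
```

Software that hardcodes 4096 in the alignment math for calls like these works by accident on a 4k kernel and fails on a 16k one, which is why a recompile (at minimum) is usually needed.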
Sibling comments said it all, though "The Quest for Netflix on Asahi Linux", posted on HN [1], is a very good, detailed explanation of this and a nice read.
I think the Asahi project will release a 4k kernel version at some point for those who really need/want it. As I understand it, there are no technical barriers; they're just delaying it to push more projects into supporting the 16k mode (which has better performance).
I believe 4k pages work with Asahi Linux today. However, while the CPU can do both 4k and 16k pages, the GPU is 16k-only. So you give up accelerated 3D to run 4k pages.
I have Steam on a Mac, and roughly 1/3 of my library supports M1 Macs. I have some old games in my library, so those are pretty decent numbers for a relatively new platform that is 64-bit only, relatively niche in general, and extremely niche for gaming.
Buy a Ryzen mini PC with a Radeon 680M and get that now with HoloISO? M2 really isn't that fast. And as a bonus you won't have to run every game under a translation layer.