weinzierl 14 hours ago

Decades ago, Linus Torvalds was asked in an interview whether he feared that Linux would be replaced by something new. His answer was that some day someone young and hungry would come along, but that unless they liked writing device drivers, Linux would be safe.

This is all paraphrased from my memory, so take it with a grain of salt. I think the gist of it is still valid: Projects like Asterinas are interesting and have a place, but they will not replace Linux as we have it today.

(Asterinas, from what I understood, doesn't claim to replace Linux, but it is a common expectation.)

  • loeg 13 hours ago

    More recently, in a similar vein:

    > Torvalds seemed optimistic that "some clueless young person will decide 'how hard can it be?'" and start their own operating system in Rust or some other language. If they keep at it "for many, many decades", they may get somewhere; "I am looking forward to seeing that". Hohndel clarified that by "clueless", Torvalds was referring to his younger self; "Oh, absolutely, yeah, you have to be all kinds of stupid to say 'I can do this'", he said to more laughter. He could not have done it without the "literally tens of thousands of other people"; the "only reason I ever started was that I didn't know how hard it would be, but that's what makes it fun".

    https://lwn.net/Articles/990534/

    • ackfoobar 12 hours ago

      > Hohndel clarified that by "clueless", Torvalds was referring to his younger self

      As the saying goes "We do this not because it is easy, but because we thought it would be easy."

      Occasionally these are the starts of great things.

      • nickpsecurity 10 hours ago

        Sometimes, we do such things because it’s hard. We enjoy the challenge. Those that succeed are glad to make it, too.

        • dathinab 4 hours ago

          But most times, even in such cases, people underestimate (or don't estimate at all) the "hard task they do as a challenge". It's kinda part of the whole thing.

          • BodyCulture 3 hours ago

            Sometimes we just don’t know if a person that started something did know how hard it would be or not. Sometimes it is not possible to know how hard things can be or not.

            Generally this is a very interesting question that could be discussed in a very long thread, but still the reader will not get any value from it.

    • m463 7 hours ago

      "You are enthusiastic and write kernel device drivers in rust. Write a device driver for an Intel i350 4 Port gigabit ethernet controller"

      • senko an hour ago

        You jest, but I believe @tptacek is using an LLM (ChatGPT?) to understand the details of various Linux kernel subsystems and has said it works quite well for the task.

        It's not a great jump from that to "port Linux device driver for XYZ to this new OS in Rust". Won't be perfect but a lot less hassle than doing it from scratch.

      • Y_Y an hour ago

        Claude Sonnet 3.5 seemed happy enough to do it, and the start looked promising

             Absolutely! Let's dive into writing a device driver for the Intel i350 4 Port Gigabit Ethernet Controller using Rust. This is an exciting project that combines low-level hardware interaction with the safety and performance benefits of Rust. I'll create a basic structure for our driver, focusing on the key components needed to interact with the device.
        
            #![no_std]
            #![feature(abi_x86_interrupt)]    
            ...
        
        
        but I'm not qualified to judge the quality from eyeballing and I'm certainly not going to go to the trouble of trying to test it.
      • NetOpWibby 6 hours ago

        Some future VC-funded company will unironically have this same requirement

        • m463 5 hours ago

          It wasn't a requirement, it was a prompt :)

          • NetOpWibby 5 hours ago

            Haha damn, it’s so obvious now. I should be asleep.

      • sshine 3 hours ago

        LLMs are notoriously bad at improvising device drivers in no-std Rust.

  • linsomniac 11 hours ago

    I feel like there's a potentially large audience for a kernel that targets running in a VM. For a lot of workloads, a simple VM kernel could be a win.

    • lmm an hour ago

      Those workloads would probably be better off as unikernels that can run directly on the VM, avoiding the question of which kernel to use entirely.

      • rcxdude 17 minutes ago

        There's a difference between "wanting to run an application with as few extra moving parts as possible on a VM" and "wanting to take an existing system and swap out the kernel for one with some better properties, even if that means running it in a VM".

    • yjftsjthsd-h 9 hours ago

      How is that different from Linux with all virtio drivers? (You can just not compile real hardware drivers)

      • rcxdude 14 minutes ago

        If it's written in Rust, you might expect fewer security vulnerabilities (especially if the codebase is also smaller; NB this is potentially counterbalanced by the many eyes on Linux). Maybe there would be some extra features you find useful.

      • lmm an hour ago

        The point is that it would be better than Linux in whatever way was the reason you were writing it, but you don't have to write hundreds of different device drivers to make your cool new kernel usable.

      • m463 7 hours ago

        I would imagine that virtualized device drivers would have a well-defined api and vastly simplified logic.

        • prmoustache 3 hours ago

          Shouldn't we start building hardware with a built-in translation layer that makes it drivable by virtio drivers directly? At least for the most common capabilities?

        • yjftsjthsd-h 7 hours ago

          I imagine they do. But given that Linux has those simple drivers, why not use them?

    • pjmlp 5 hours ago

      This is already the reality today with cloud-native computing and managed runtimes.

      It doesn't matter how the language gets deployed: whether the runtime is in a container, a distroless container, or running directly on a hypervisor.

      The runtime provides enough OS-like services for the programming language's purposes.

    • prmoustache 3 hours ago

      this x1000

      Provided you have virtio support you are ticking a lot of boxes already.

  • GoblinSlayer 3 hours ago

    Just ask an AI to RIIR (rewrite it in Rust) the Linux drivers. Anybody tried it?

  • mdhb 6 hours ago

    Also, the mysterious new Fuchsia OS from Google is shooting for full Linux compatibility and is about to show up in Android. I think this is a much more realistic path to a next generation of operating systems with a real chance of replacing Linux. Who knows what their actual plans are at the moment, but I don't believe for a moment that the project is dead in any way.

    • vbezhenar 4 hours ago

      I wonder if the decision to keep syscalls stable was genius. Imagine Linux syscalls becoming what the C ABI is now: there could be multiple compatible kernels, and you could choose any of them and run the same userspace.

    • lifty 6 hours ago

      Can you give more details about it being used in Android? I thought they started using it in some small devices like Nest, but I haven't heard anything about Android.

      • mdhb 4 hours ago

        It’s about to turn up inside Android running in a VM [1] but it was less clear exactly for what purpose.

        My theory is that this is essentially a long-term project to make the cores of ChromeOS and Android rely on Fuchsia, which gives them syscall-level compatibility with what they both use at the moment; they would both essentially sit as products on top of that.

        This is essentially the exact strategy they used, if I remember correctly, with the Nest devices, where they swapped out the core and left the product on top entirely unchanged. Beyond that, in a longer-term scenario, we might also see Fuchsia as a combined mobile/desktop workstation OS, and I think that is partly why we are seeing ChromeOS start to take a dependency on Android's networking stack right now.

        [1] https://www.androidauthority.com/microfuchsia-on-android-345...

prmoustache 3 hours ago

> docker run -it --privileged --network=host --device=/dev/kvm -v $(pwd)/asterinas:/root/asterinas asterinas/asterinas:0.9.3

Is that the new generation of curl | bashism in action?

  • oefrha 16 minutes ago

    Hardly different from downloading random binary installers and executing them. Or random source distributions and (sudo) make install. Or npm/pip/cargo/etc. install random packages. Before anyone mentions distros and package managers, as a former team member of a major package manager I can assure you we don’t vet shit beyond project notability, and new versions are accepted semi-automatically. We’ll yank something after the fact if you report a malicious update, sure.

    curl | bash has one actual problem: potential execution of an incomplete script (which can be mitigated by wrapping the whole script in a function that is only called on the last line). And there's the mostly theoretical problem of the server being pwned / sending malicious code just to you (which of course also applies to any other unsigned channel). Arbitrary code execution is never a problem unique to it, but people dunk on it all the time because they saw another person dunking on it in the past.

  • wslh 3 hours ago

    Is the "--privileged" option ironic here? The project is very interesting, but it feels a bit pedantic, especially when emphasizing Rust's safety features while downplaying Linux. At the same time, it seems they're not fully applying those principles themselves, which makes it feel like they're not quite 'eating their own lunch'.

akira2501 17 hours ago

I personally dislike rust, but I love kernels, and so I'll always check these projects out.

This is one of the nicer ones.

It looks pretty conservative in its use of Rust's advanced features. The code looks pretty easy to read and follow. There's actually a decent amount of comments (for Rust code).

Not bad!

  • wg0 9 hours ago

    Otherwise it's a decent language, but what makes it difficult are the borrow semantics and lifetimes. Lifetimes are the more complicated of the two to get your head around.

    But then there's this Arc, Ref, Pinning and what not - how deep is that rabbit hole?

    • baq 6 hours ago

      If you’re writing C and don’t track ownership of values, you’re in a world of hurt. Rust makes you do from day one what you could do in C but unless you have years of experience you think it isn’t necessary.

      • wg0 4 hours ago

        Okay, I think it is more like TypeScript. You hate it, but then one day you write a small JS program and convert it to TypeScript, only to discover that static analysis alone revealed so many code paths that would have resulted in uncaught errors, and from then on you always feel very uncomfortable writing plain JavaScript.

        But what about tools like Valgrind in the context of C?

        • rcxdude 10 minutes ago

          Valgrind can only tell you about issues that your test cases exercise. It doesn't provide the same guarantees as static checking of memory-safety invariants. But if you're really concerned (especially about unsafe code), belt-and-braces is a good strategy, and Valgrind will work with Rust binaries as well. Rust also has a tool called Miri which can similarly flag up issues in test cases: it's effectively an interpreter for the compiler's intermediate representation, and it can detect undefined behaviour even if the compiled assembly would happen to look OK. It still has the same limitation of needing extensive test cases, though.
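
          To make that concrete, here's a made-up minimal example of the kind of thing Miri catches: a normal build may happily print a value through a dangling pointer, while `cargo miri run` reports the use-after-free.

              fn main() {
                  let p: *const i32;
                  {
                      let x = 7;
                      p = &x as *const i32; // raw pointer deliberately outlives `x`
                  }
                  // UB: `x` is gone. A compiled binary may still print 7 by reading
                  // stale stack memory; Miri aborts with a use-after-free error.
                  unsafe { println!("{}", *p) };
              }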

        • baq 2 hours ago

          You probably should run your rust programs through valgrind regardless. Rust is safer than C, but any unsafe code drops you to approximately C level of safety and any C FFI calls are obviously outside of rust's control or responsibility.

        • badmintonbaseba 2 hours ago

          Valgrind is great, especially if you write extensive tests and you actually run them through it regularly. And even then, it does not prove the absence of bugs. Safe Rust has strong guarantees.

      • metalloid 5 hours ago

        That was true until LLMs arrived. Future compilers + IDEs can be integrated with LLMs to help programmers.

        Rust was a great idea before LLMs, but I don't see the motivation for Rust when LLMs can be the initial solution for C/C++ 'problems'.

        • smolder 5 hours ago

          Relying on LLMs to code for you in no way solves the safety problem of C/C++ and probably worsens it.

        • baq 5 hours ago

          On the contrary LLMs make using safe but constraining languages easier - you can just ask it how to do what you want in Rust, perhaps even by asking it to translate C-ish pseudocode.

    • junon 5 hours ago

      Context: I'm writing a novel kernel in Rust.

      Lifetimes aren't bad, the learning curve is admittedly a bit high. Post-v1 rust significantly reduced the number of places you need them and a recent update allows you to elide them even more if memory serves.

      Arc isn't any different from what other languages offer. Not sure what you're referring to by Ref, but a reference is just a pointer with added semantic guarantees, and Pin isn't necessary unless you're doing async (not a single Pin shows up in the kernel thus far, and I can't imagine why I'd need one going forward).

    • oneshtein 7 hours ago

      A Rust lifetime is just a label for a region of memory holding various data, which is discarded at the end of its lifetime. When the compiler enters a function, it creates a memory block to hold the data of all the variables in the function, then discards this block at the exit from the function, so those variables are valid only for the lifetime of the function call.
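
      A minimal (deliberately non-compiling) sketch of that region idea:

          fn main() {
              let r;
              {
                  let x = 42;  // `x` lives only inside this inner region
                  r = &x;      // borrow tied to `x`'s lifetime
              }                // region ends, `x` is discarded
              println!("{r}"); // error[E0597]: `x` does not live long enough
          }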

    • oersted 5 hours ago

      I don’t entirely agree, you can get used to the borrow checker relatively quickly and you mostly stop thinking about it.

      What tends to make Rust complex is advanced use of traits, generics, iterators, closures, wrapper types, async, error types… You start getting these massive semi-autogenerated nested types, and the syntax sugar starts generating complex logic for you in the background that you cannot see but have to keep in mind.
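
      A tiny illustration of how fast those types pile up behind the sugar:

          fn main() {
              // Three adapters in, the value's full type is already
              // Peekable<Map<Filter<Range<i32>, {closure}>, {closure}>>,
              // which is what compiler errors will greet you with.
              let mut squares = (0..10).filter(|n| n % 2 == 0).map(|n| n * n).peekable();
              assert_eq!(squares.peek(), Some(&0));
          }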

      It’s tempting to use the advanced type system to encode and enforce complex API semantics, using Rust almost like a formal verifier / theorem prover. But things can easily become overwhelming down that rabbit hole.

    • KingOfCoders 6 hours ago

      I always feel Arc is an admission that the borrow checker with different/overlapping lifetimes is too difficult, despite what many Rust developers (who liberally use Arc) claim.

      • jeroenhd 3 hours ago

        Lifetime tracking and ownership are very difficult. That's why languages like C and C++ don't do it. It's also why those languages need tons of extra validation steps and analysis tools to prevent bugs.

        Arc is nothing more than reference counting. C++ can do that too, and I'm sure there are C libraries for it. That's not an admission of anything, it's actually solving the problem rather than ignoring it and hoping it doesn't crash your program in fun and unexpected ways.

        Using Arc also comes with a performance hit, because validation needs to be done at runtime. You can go back to the faster C/C++-style data exchange by wrapping your code in unsafe {} blocks, but then the risks of memory corruption, concurrent access, and use of deallocated memory are on you, and those are generally the whole reason people pick Rust over C++ in the first place.
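
        For a sense of what that runtime bookkeeping looks like in practice, a minimal sketch of Arc sharing read-only state across threads (standard library only):

            use std::sync::Arc;
            use std::thread;

            fn main() {
                let config = Arc::new(String::from("shared, immutable state"));
                let handles: Vec<_> = (0..4)
                    .map(|i| {
                        let cfg = Arc::clone(&config); // atomic refcount bump at runtime
                        thread::spawn(move || println!("thread {i} sees: {cfg}"))
                    })
                    .collect();
                for h in handles {
                    h.join().unwrap(); // each dropped clone decrements the count
                }
            } // last Arc drops here and frees the String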

        • GoblinSlayer an hour ago

          Looking at the code, it consists of long chains of get().unwrap().to_mut().unwrap().get() noise. That looks more like coping with library design than ownership tracking. Also, why Result<Option<T>>? Isn't Result already an Option by itself? I guess that's why you need get().unwrap().to_mut() to get a value from the Result<Option<T>> an average function call returns?

      • Galanwe 6 hours ago

        It's not that the borrow checker is too difficult, it's that it's too limiting.

        The _static_ borrow checker can only check what is _statically_ verifiable, which is but a subset of valid programs. There are few things more frustrating than doing something you know is correct, but that you cannot express in your language.

        • netbsdusers 2 hours ago

          Kernels are where it gets particularly difficult (and I suspect database engines might be added to the list, since they seem to have similar requirements to be both scalable and to deal with massive amounts of shared state, but I'm not overly familiar with them).

          Several kernels for example use type-stable memory, memory that is guaranteed to only hold objects of a particular type, though perhaps only providing that guarantee for as long as you hold an RCU read-lock (this is the case in Linux with SLAB_TYPESAFE_BY_RCU). It is possible in some cases to be able to safely deal with references to objects where the "lifetime" of the referent has ended, but where by dint of it being guaranteed to be the same type of object, you can still do what you want to do.

          This comes in handy when you have a problem that commonly appears in kernels where you need to invert a typical lock ordering (a classic case is that the page fault codepath might want to lock, say, VM object then page queue, but the page-replacement codepath will want to lock page-queue then VM object.)

          Unfortunately it's hard to think of how the preconditions for these tricks could be formally expressed.

      • lmm an hour ago

        If tracking lifetimes is simple 90% of the time and complex 10% of the time, maybe a tool that lets you have them automatically managed (with some runtime overhead) that 10% of the time is the right way forward.

      • GolDDranks 3 hours ago

        It's not just difficult; sometimes it's impossible to statically know the lifetime of a value, so you must dynamically track it. Arc is one such tool.

  • IshKebab 16 hours ago

    Rust code is usually well commented in my experience.

    • iknowstuff 13 hours ago

      for the downvoters: it’s true, and it’s because of rustdoc and doctests. comments become publicly browsable documentation, and any code contained within is run as a part of the test suite.
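
      e.g. a doc comment like this shows up in the rendered docs, and the fenced example inside it runs under `cargo test` (the crate name here is made up):

          /// Returns the n-th Fibonacci number.
          ///
          /// ```
          /// assert_eq!(my_crate::fib(10), 55);
          /// ```
          pub fn fib(n: u32) -> u64 {
              let (mut a, mut b) = (0u64, 1u64);
              for _ in 0..n {
                  (a, b) = (b, a + b);
              }
              a
          }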

      • 1oooqooq 12 hours ago

        I think the downvotes are because of relevance: the point was about not using advanced Rust features, not about being documented.

        • forks 12 hours ago

          I don't see how the relevance is in question. GGGP said "There's actually a decent amount of comments (for rust code)." GGP seems to be responding to that parenthetical.

    • cies 14 hours ago

      Instead of asking "what other languages and projects (open/closed, big/small, web/mobile/desktop, game/consumer app/biz app) do you have experience with to come to this conclusion?", people downvote you.

      So lemme ask: what other languages and projects (open/closed, big/small, web/mobile/desktop, game/consumer app/biz app) do you have experience with to come to this conclusion?

      • ramon156 13 hours ago

        I expect the downvotes to be there because it's talking positively about rust, which is blasphemy! /j

justmarc 15 hours ago

I'm interested in these kinds of kernels for running very high-performance network/IO-specific services on bare metal, with minimal system complexity/overhead and hopefully better (potential) stability and security.

The big concern I have however is hardware support, specifically networking hardware.

I think a very interesting approach would be to boot the machine with a FreeBSD or Linux kernel just for the purposes of hardware and network support, and use a sort of Rust OS/abstraction layer for the rest, bypassing or simply not using the originally booted kernel for all userland-specific stuff.

  • nijave 14 hours ago

    Couldn't you just boot the Linux kernel directly and launch a generic app as PID 1, instead of a full-blown init system with a bunch of daemons?

    That's basically what you're getting with Docker containers and a shared kernel. AWS Lambda does something similar with dedicated kernels in Firecracker VMs.
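
    A minimal sketch of such a PID 1 (the app path is made up; you'd boot the kernel with init= pointing at the binary):

        use std::process::Command;
        use std::{thread, time::Duration};

        fn main() {
            // As PID 1 we *are* the init system: just run the one app we care about.
            let status = Command::new("/usr/bin/my-server") // hypothetical app
                .status()
                .expect("failed to launch app");
            eprintln!("app exited with {status}");
            // PID 1 must never exit, or the kernel panics ("attempted to kill init").
            loop {
                thread::sleep(Duration::from_secs(60));
            }
        }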

    • justmarc 7 hours ago

      Yes, but I wanted to bypass the complexity of the Linux kernel entirely, too.

      Basically, a single app talking directly to the network (the world), with as little as possible in between.

    • mjevans 14 hours ago

      Yes, you can. You can even have a different PID 1 configure whatever is needed and then replace its core image with the new PID 1.

  • cgh 15 hours ago

    If you want truly high-performance networking, you can bypass the kernel altogether with DPDK. So you don't have to worry about alternative kernels for other tasks at all. On the downside, DPDK takes over the NIC entirely, removing the kernel from the equation, so if you need the kernel to see network traffic for some reason, it won't work for you.

    You can check out hardware support here: https://core.dpdk.org/supported/nics/

    • jauntywundrkind 14 hours ago

      This was true a decade ago; with modern io_uring, DPDK is probably an anti-pattern.

      • cgh 14 hours ago

        Interesting. It's been a while since I looked at this stuff, so I did a little searching and found this: https://www.diva-portal.org/smash/get/diva2:1789103/FULLTEXT...

        Their conclusion is io_uring is still slower but not by much, and future improvements may make the difference negligible. So you're right, at least in part. Given the tradeoffs, DPDK may not be worth it anymore.

        • loeg 13 hours ago

          There are also just a bunch of operational hassles with using DPDK or SPDK. Your usual administrative commands don't work. Other operations aren't intermediated by the kernel -- instead you need 100% dedicated application devices. Device counters usually tracked by the kernel aren't. Etc. It can be fine, but if io_uring doesn't add too much overhead, it's a lot more convenient.

        • guenthert 4 hours ago

          "io_uring had a maximum throughput of 5.0 Gbit/s "

          Wut? More than 10 years ago, a cheap beige box could saturate a 1Gbps link with a kernel as it came from e.g. Debian, without special tuning. A somewhat more expensive box could get a good share of a 10Gbps link (using jumbo frames), so these new results are, er, somewhat underwhelming.

        • renox 5 hours ago

          Not by much?? You're exaggerating..

        • guenthert 4 hours ago

          That's an interesting and valuable study. I was slightly disappointed though that only a single host was used in the 'network' performance tests:

          "SR-IOV was used on the NIC to enable the use of virtual functions, as it was the only NIC that was available during the study for testing and therefore the use of virtual functions was a necessity for conducting the experiments."

      • monocasa 11 hours ago

        I'm not sure that's true for a good chunk of the workloads that DPDK really shines on.

        A lot of the benefit of DPDK is colocating your data and network stack in the same virtual memory context. I can see io_uring getting you there if you're serving fixed files as a CDN, kind of like Netflix's appliances, but for cases where you're actually doing branchy work on the individual requests, DPDK is probably a little easier to scale up to the faster network cards.

      • GoblinSlayer 2 hours ago

        If you use io_uring, you're subject to vulnerabilities in the kernel network stack, which you have no control over.

  • treeshateorcs 15 hours ago

    i might be wrong but if it's ABI compatible the same drivers will work?

    p.s.: i was wrong

    >While we prioritize compatibility, it is important to note that Asterinas does not, nor will it in the future, support the loading of Linux kernel modules.

    https://asterinas.github.io/book/kernel/linux-compatibility....

    • yjftsjthsd-h 15 hours ago

      Linux doesn't even maintain ABI compatibility with itself; nobody else is going to manage it. The closest thing that might work is that there are a couple of projects that maintain just enough API compatibility to reuse driver code from Linux (IIRC FreeBSD does this for some graphics drivers). But even then you're gambling on whether Linux decides to change implementation details one day, since internal APIs explicitly aren't stable.

      • bcrl 14 hours ago

        The Linux kernel community takes ABI compatibility for userland very seriously. That developers in userland are frequently unwilling to understand issues surrounding ABI stability is not the fault of the Linux kernel.

        • yjftsjthsd-h 14 hours ago

          Oh sure, the user-space ABI is stable; I meant kernel-space. Although I realize now that I failed to write that explicitly.

          • bcrl 12 hours ago

            The past 30 years of the Linux kernel's evolution has proven that there is no need for a stable kernel ABI. That would make refactoring, adding new features and porting to new platforms exceedingly difficult. Pretty much all of the proprietary kernel modules have either become open source or been replaced by open source replacements. The Linux community doesn't need closed source kernel modules for VMWare anymore, and even Nvidia has finally given up on their closed source GPU drivers. Proprietary Linux kernel modules have no place in the modern world.

            • lmm an hour ago

              > The past 30 years of the Linux kernel's evolution has proven that there is no need for a stable kernel ABI.

              My experience of using Linux and having devices that used to work become unsupported suggests just the opposite.

            • GoblinSlayer 2 hours ago

              It depends on your goals, but at least Torvalds believes driver availability is important, and an unstable ABI is known to hinder driver availability.

            • vlovich123 12 hours ago

              > even Nvidia has finally given up on their closed source GPU drivers.

              lol. No. They just added a CPU and then offloaded all the closed source userspace driver code to it, leaving behind the same dumb open-sourceable kernel driver shim as before (i.e. instead of talking to userspace it talks to the GPU's CPU).

              > The past 30 years of the Linux kernel's evolution has proven that there is no need for a stable kernel ABI.

              What the last 30 years have shown is that there is actually a need for it; otherwise DKMS wouldn't be a thing. Heck, Intel's performance profiler can't keep up with the kernel changes, which means you get to pick between running an up-to-date kernel and being able to use the open source out-of-tree kernel module. The fact that Linux is alone in this should make it clear it's wrong. Heck, Android even wrote its own HAL to try to make it possible to update the kernel on older devices. It's an economics problem that the Linux kernel gets to pretend doesn't exist, but it's a bad philosophical position. It's possible to support refactoring and porting to new platforms while providing ABI compatibility, and Linux is way past the point where it would even be a minor inconvenience; all the code has ossified quite a bit anyway.

    • dathinab 4 hours ago

      In general, the stable ABI is the kernel<->userspace one, while the ABI (and potentially even the API) on the inside, i.e. for drivers, can change with every kernel version (part of why it's so important to maintain drivers in-tree).

    • bicolao 15 hours ago

      They mention this in https://github.com/asterinas/asterinas/blob/2af9916de92f8ca1...

      > While we prioritize compatibility, it is important to note that Asterinas does not, nor will it in the future, support the loading of Linux kernel modules.

      • justmarc 15 hours ago

        It's a lot "simpler" to support a Linux userland as that means one needs to "just" emulate all the Linux syscalls, than to implement the literally countless internal APIs needed for drivers etc, as that would otherwise mean literally reimplementing the whole Linux kernel and that's neither realistic, nor too useful.

        • mgerdts 8 hours ago

          And that’s not all that simple, as has been experienced by Solaris (never released(?) Linux branded zones, illumos (lx brand), and Windows (WSL1) developers that have tried to make existing kernels act like Linux.

          It’s probably easier if the kernel’s key goal is to be compatible with the Linux ABI rather than being compatible with its earlier self while bolting on Linux compatibility.

        • Jyaif 14 hours ago

          > emulate all the Linux syscalls

          and emulate the virtual filesystems (/proc/...)

    • justmarc 15 hours ago

      No, it means you can run Linux userland/apps on this kernel, to the level/depth they currently support, of course.

      They might not yet implement everything that's needed to boot a standard Linux userland, but you could, say, boot straight into a web server built for Linux instead of booting into init, for example.

  • protoman3000 7 hours ago

    Why don’t you just use a SmartNIC and P4? It won’t get faster than running on the NIC itself

exabrial 12 hours ago

I think this looks incredible. Like how does one create a compatible abi _for all of linux_??? Wow!

> utilize the more productive Rust programming language

Nitpick: it’s 2024 and these ‘more productive’ comparisons are silly, completely unscientific, And a bit of a red flag for your project: The most productive language for a developer is the one they understand what is happening one layer below the level of abstraction they are working with. Unless you’re comparing something rating Ruby vs RiscV assembly, it’s just hocus-pocus.

  • jmmv 7 hours ago

    > I think this looks incredible. Like how does one create a compatible abi _for all of linux_??? Wow!

    FWIW that’s what the Linux compatibility layer in the BSDs does and also what WSL 1 did (https://jmmv.dev/2020/11/wsl-lost-potential.html).

    It’s hard to get _everything_ perfectly right but not that difficult to get most of it working.

    • NewJazz 6 hours ago

      IIRC Fuchsia has something similar. And maybe Redox?

  • kelnos 9 hours ago

    > Like how does one create a compatible abi _for all of linux_???

    You look at Linux's syscall table[0], read through the documentation to figure out the arguments, data types, flags, return values, etc., and then implement that in your kernel. The Linux ABI is just its "library" interface to userspace.
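
    The top of that, sketched (the syscall numbers are the real x86-64 ones; the handler stubs are hypothetical placeholders, not Asterinas code):

        // Userspace-facing dispatch: map the x86-64 syscall number to a handler.
        const SYS_READ: u64 = 0;
        const SYS_WRITE: u64 = 1;
        const ENOSYS: i64 = -38; // "function not implemented", Linux-style

        fn handle_syscall(num: u64, args: [u64; 6]) -> i64 {
            match num {
                SYS_READ => sys_read(args[0], args[1], args[2]),
                SYS_WRITE => sys_write(args[0], args[1], args[2]),
                _ => ENOSYS, // anything unimplemented fails gracefully
            }
        }

        // Placeholder bodies; a real kernel does the actual I/O here.
        fn sys_read(_fd: u64, _buf: u64, _len: u64) -> i64 { 0 }
        fn sys_write(_fd: u64, _buf: u64, _len: u64) -> i64 { 0 }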

    It's probably not that difficult; writing the rest of the kernel itself is more challenging, and, frankly, more interesting. Certainly matching behavior and semantics can be tricky sometimes, I'm sure. And I wouldn't be surprised if the initial implementation of some things (like io_uring, for example, if it's even supported yet) might be primitive and poorly optimized, or might even use other syscalls to do their work.

    But it's doable. While Linux's internal ABI is unstable, the syscall interface is sacred. One of Torvalds' golden rules is you don't break userspace.

    [0] https://filippo.io/linux-syscall-table/

  • dathinab 3 hours ago

    Idk. The Asahi Linux GPU driver breaks all "common sense" about how fast a reliable, usable, feature-rich driver can be produced by a small 3rd-party team.

    The company I work for has both Rust and Python projects (though partially predating "reasonable Python type linting" with mypy and co.), and the general consensus there is that "overall" Rust is noticeably more productive (and more stable/reliable in usage), especially if you have code which changes a lot.

    A company I worked for previously had used Rust in the very early days (around the 1.0 days) and had one of those "let's throw up a huge prototype code base in a matter of days and then rewrite it later" code bases (basically 90% of the code had huge tech debt). But that code base stuck around way longer than intended and caused way fewer issues than expected. I had to maintain it a bit, and from my experience with similar code in Python and JS (and a bit of Java) I expected it to be very painful, but surprisingly it wasn't, like at all.

    Similarly, comparing the massive time wastes of having to debug soundness/UB issues in C/C++ with my experiences in Rust, Rust is again way more productive.

    So as long as you don't do bad stuff like over-obsessing with the type system, everything in my experience tells me using Rust is more productive (for many tasks, definitely not all tasks; there are some really great frameworks in other languages doing a ton of work for you, with which the Rust ecosystem can't compete at the moment).

    ---

    > The most productive language for a developer is the one where they understand what is happening one layer below the level of abstraction they are working with.

    I strongly disagree: the most productive language is the one where the developer doesn't have to care much about what happens in the layer below, in most cases. At least as long as you don't obsess over micro-optimizations, which aren't worth the time and opportunity cost they come with for most companies/use cases.

  • ozgrakkurt 12 hours ago

    Everyone says what they are used to is better or more productive. Even with assembly vs Ruby, some stuff is much easier in assembly and maybe impossible in Ruby, afaik.

    • exabrial 12 hours ago

      I’m aging myself, but ~17 years ago I was in San Diego for a conference. There was a table level competition to see who could write the fastest program in 20 minutes (we were doing a full text search of a ‘giant’ 5g file). One of the guys at the table wrote some SPARC assembly to optimize character matching that was a hotspot like he was speaking French.

      Ah good times.

pjmlp 5 hours ago

Besides all these examples, Microsoft is now using TockOS, another Rust-based OS, for Pluton firmware.

https://tockos.org/

tiffanyh 18 hours ago

OT: if you're interested in Asterinas, you might also be interested in Redox (entire OS written in Rust).

https://www.redox-os.org/

  • snvzz 12 hours ago

    Redox has a proper architecture, aka a multiserver microkernel.

    Thus it is a much more interesting project.

hkalbasi 13 hours ago

> In the framekernel OS architecture, the entire OS resides in the same address space (like a monolithic kernel) and is required to be written in Rust. However, there's a twist---the kernel is partitioned in two halves ... the unprivileged Services must be written exclusively in safe Rust.
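
(Sketching the split that quote describes, with invented names, just to picture it; this compiles as a library, it's not Asterinas's actual code:)

    // Privileged half: all unsafe code lives here, wrapped in safe APIs.
    mod ostd {
        pub fn read_mmio(addr: usize) -> u32 {
            unsafe { core::ptr::read_volatile(addr as *const u32) }
        }
    }

    // Unprivileged half: compiled with unsafe code forbidden outright.
    mod services {
        #![forbid(unsafe_code)]
        pub fn device_status() -> u32 {
            super::ostd::read_mmio(0xFEE0_0000) // hypothetical register
        }
    }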

Unprivileged services can exploit known compiler bugs and do anything they want in safe Rust. How does this affect their security model?

Klasiaster 14 hours ago

There was also the similar project Kerla¹ but development stalled. Recently people argued that instead of focusing on Rust-for-Linux it would be easier to create a drop-in replacement like these two. I wonder if there are enough people interested to make this happen as a sustained project.

¹ https://github.com/nuta/kerla/

  • kelnos 9 hours ago

    > Recently people argued that instead of focusing on Rust-for-Linux it would be easier to create a drop-in replacement like these two

    I guess it depends on what they mean by "easy". Certainly it's easier in the sense that you can just write code all day long, and not have to deal with the politics about Rust inside Linux, or deal with all the existing C interfaces, finding ways to wrap them in Rust in good, useful ways that leverage Rust's strengths but don't make it harder to evolve those C interfaces without trouble on the Rust side.

    But the bulk of Linux is device drivers. You can build a kernel in Rust (like Asterinas) that can run all of a regular Linux userland without recompilation, and I imagine it's maybe not even that difficult to do so. But Asterinas only runs on x86_64 VMs right now, and won't run on real hardware. Getting to the point where it could -- especially on modern hardware -- might take years. Supporting all the architectures and various bits of hardware that Linux supports could take decades. I suppose limiting themselves to three or four architectures, and only supporting hardware made more recently could cut that down. But still, it's a daunting project.

depressedpanda 17 hours ago

From the README:

> Currently, Asterinas only supports x86-64 VMs. However, our aim for 2024 is to make Asterinas production-ready on x86-64 VMs.

I'm confused.

  • wrs 16 hours ago

    I think it’s “Currently, Asterinas only supports x86-64 VMs. However, [rather than working on additional architectures this year,] our aim for 2024 is to make Asterinas production-ready on x86-64 VMs.”

  • netbsdusers 2 hours ago

    They lack essential things for a kernel that could be used in production, viz. not kernel-panicking during out-of-memory conditions, which is not an easy thing to retrofit when you have designed without consideration for it. It will probably take a bit more than two and a half months to rectify that.

    https://github.com/asterinas/asterinas/issues/669

  • favorited 17 hours ago

    Sounds like their goal is to improve their x86-64 support before implementing other ISAs.

  • nurb 17 hours ago

    It's clearer from the book roadmap:

    > By 2024, we aim to achieve production-ready status for VM environments on x86-64.
    > In 2025 and beyond, we will expand our support for CPU architectures and hardware devices.

    https://asterinas.github.io/book/kernel/roadmap.html

  • None4U 16 hours ago

    Distinction here is between "supports" and "production-ready on", not "x86-64" and "x86-64"

  • MattPalmer1086 17 hours ago

    Yeah, I had to read that a few times... I think they just mean it isn't production ready yet, but that's what they are aiming for.

valunord 16 hours ago

I like what they're working towards with V in Vinix as well. Exciting times, seeing such projects open new paradigms with Linux ABI compatibility.

cryptonector 14 hours ago

> Linux-compatible ABI

There's no specification of that ABI, much less a compliance test suite. How complete is this compatibility?

phlip9 13 hours ago

Super cool project. Looks like the short-term target use-case is running a Linux-compatible OS in an Intel TDX guest VM with a significantly safer and smaller TCB. Makes sense. This way you also postpone a lot of the HW driver development drudgery and instead only target VM devices.

wg0 9 hours ago

Side question: I have always wondered how a Linux system is configured at the lowest level.

Let's take the example of networking. There's the IP address, gateway, DNS, routes, etc. Depending on the distribution, we might see something like netplan reading config files and then calling... what, ABI functions?

Or does the Linux kernel itself directly read some config files? Probably not...

  • NewJazz 6 hours ago

    The Linux kernel tries as much as possible not to parse or read external data (besides stuff like ACPI tables, device trees, hardware registers). For networking, you might look at the iproute2 codebase to see how they do things like bring a network device up, create a bridge device, add a route, et cetera.

    Edit: looks like iproute2 uses NETLINK, but non-networking tools might use syscalls or device ioctls.

    https://en.m.wikipedia.org/wiki/Netlink
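
    A rough sketch of the kind of netlink request `ip link` starts with, via the `libc` crate (hedged: this is illustrative, not iproute2's actual code):

        use std::mem;

        // struct nlmsghdr followed by struct rtgenmsg, as <linux/rtnetlink.h> lays it out.
        #[repr(C)]
        struct Request {
            hdr: libc::nlmsghdr,
            family: u8, // zeroed below = AF_UNSPEC: dump links of every family
        }

        fn main() {
            unsafe {
                let fd = libc::socket(libc::AF_NETLINK, libc::SOCK_RAW, libc::NETLINK_ROUTE);
                assert!(fd >= 0);

                let mut req: Request = mem::zeroed();
                req.hdr.nlmsg_len = mem::size_of::<Request>() as u32;
                req.hdr.nlmsg_type = libc::RTM_GETLINK; // "list all interfaces"
                req.hdr.nlmsg_flags = (libc::NLM_F_REQUEST | libc::NLM_F_DUMP) as u16;

                let mut kernel: libc::sockaddr_nl = mem::zeroed();
                kernel.nl_family = libc::AF_NETLINK as u16;

                let rc = libc::sendto(
                    fd,
                    &req as *const Request as *const libc::c_void,
                    req.hdr.nlmsg_len as usize,
                    0,
                    &kernel as *const libc::sockaddr_nl as *const libc::sockaddr,
                    mem::size_of::<libc::sockaddr_nl>() as u32,
                );
                assert!(rc >= 0);
                // A real tool now recv()s and parses the RTM_NEWLINK replies.
                libc::close(fd);
            }
        }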

wiz21c 5 hours ago

> Linux-compatible ABI

Does it mean it can reuse the drivers written for hardware to run with Linux?

  • eptcyka 4 hours ago

    No. The drivers in Linux are kernel modules, most often in-tree, meaning that the source for the drivers is built along with the rest of the kernel source code. Most hardware drivers depend on various common kernel structures that change often; when they do, the drivers' source is fixed in practically the same git branch. There is no driver ABI to speak of.

  • dezgeg 5 hours ago

    No. There is no stable ABI nor API for in-kernel device drivers.

spease 17 hours ago

What’s the intended use case for this? Backend containers?

  • Animats 16 hours ago

    Makes a lot of sense for virtual machine containers. Inside a container inside a VM, you need far less operating system.

xiaodai 12 hours ago

Lol. I am Malaysian Chinese, but I honestly don't think anyone will put a Chinese-made kernel into production. The risk is too high, same as no one will use a Linux distro coming out of Russia, Iran or NK. It's just cultural bias in the west.

sylware 2 hours ago

Linux is mostly a decades-long maintained repository of real hardware programming code, written in mostly simple "kernel" C, not some ultra-complex-syntax language (unfortunately, it has become tied to compiler-specific extensions and "modern C" tantrums, _Generic for instance).

Have a look at the AMD GPU driver: massive, and full of 'stabilization/workaround' code... added all the time, for years.

I guess the real "first things first" is to design hardware: performant hardware on the latest silicon process, with an as-simple-as-possible, modern, standard, and stable hardware programming interface. Because for many types of hardware, 'now we know how to do it properly' (usually command ring buffers, or a good compromise for a modern CPU architecture, like RISC-V).

Another angle of "cleanup" would be the removal of many of the C compiler extension (or "modern C") tantrums from Linux, or at least proper alternatives with non-inline assembly, to allow small and alternative compilers to step in.

Personally, I tend to write rv64 assembly (which I interpret on x86_64), but only for userland. If I code C, I push towards mostly "simple and plain C99".

The more I think about it, the more the following comes to mind: 'hardware with simple standard interfaces', and standard assembly for the kernel.

  • artemonster 2 hours ago

    Huh? What is this nonsense?? Are you suggesting that you like to write practical, simple, working solutions instead of yak-shaving half a day perfecting ridiculous type signatures, removing „unsafe“ code and satisfying the borrow checker? Preposterous! /s

rpgraham84 34 minutes ago

oh cool, now I can have an unverifiable kernel from a team in China

jackhalford 15 hours ago

The building process happens in a container?

> If everything goes well, Asterinas is now up and running inside a VM.

Seems like the developers are very confident about it too

havaker 15 hours ago

The license choice is explained with the following:

> [...] we accommodate the business need for proprietary kernel modules. Unlike GPL, the MPL permits the linking of MPL-covered files with proprietary code.

Glancing at the readme, it also looks like they are treating it as a big feature:

> Asterinas surpasses Linux in terms of developer friendliness. It empowers kernel developers to [...] choose between releasing their kernel modules as open source or keeping them proprietary, thanks to the flexibility offered by MPL.

Can't wait to glue some proprietary blobs to this new, secure rust kernel /s

  • yjftsjthsd-h 15 hours ago

    I'm curious about the practical aspect: Are they going to freeze a stable driver ABI, or are they going to break proprietary drivers from time to time?

    • gpm 13 hours ago

      Considering their OS as a framework approach I would guess they are more likely to expose a stable API than a stable ABI. Which also plays well with the MPL license (source file based) rather than something like the LGPL (~linking based).

      • throw4950sh06 13 hours ago

        This is the most interesting new OS I have seen in many years.