prmoustache 22 minutes ago

> docker run -it --privileged --network=host --device=/dev/kvm -v $(pwd)/asterinas:/root/asterinas asterinas/asterinas:0.9.3

Is that the new generation of curl | bashism in action?

  • wslh 15 minutes ago

    Is the "--privileged" option ironic here? The project is very interesting, but it feels a bit pedantic, especially when it emphasizes Rust's safety features while downplaying Linux. At the same time, they don't seem to be fully applying those principles themselves, which makes it feel like they're not quite 'eating their own dog food'.

weinzierl 11 hours ago

Decades ago, Linus Torvalds was asked in an interview whether he feared Linux being replaced by something new. His answer was that some day someone young and hungry would come along, but unless they liked writing device drivers, Linux would be safe.

This is all paraphrased from my memory, so take it with a grain of salt. I think the gist of it is still valid: Projects like Asterinas are interesting and have a place, but they will not replace Linux as we have it today.

(Asterinas, from what I understood, doesn't claim to replace Linux, but it is a common expectation.)

  • loeg 11 hours ago

    More recently, in a similar vein:

    > Torvalds seemed optimistic that "some clueless young person will decide 'how hard can it be?'" and start their own operating system in Rust or some other language. If they keep at it "for many, many decades", they may get somewhere; "I am looking forward to seeing that". Hohndel clarified that by "clueless", Torvalds was referring to his younger self; "Oh, absolutely, yeah, you have to be all kinds of stupid to say 'I can do this'", he said to more laughter. He could not have done it without the "literally tens of thousands of other people"; the "only reason I ever started was that I didn't know how hard it would be, but that's what makes it fun".

    https://lwn.net/Articles/990534/

    • ackfoobar 10 hours ago

      > Hohndel clarified that by "clueless", Torvalds was referring to his younger self

      As the saying goes "We do this not because it is easy, but because we thought it would be easy."

      Occasionally these are starts of great things.

      • nickpsecurity 7 hours ago

        Sometimes, we do such things because it’s hard. We enjoy the challenge. Those that succeed are glad to make it, too.

        • dathinab 2 hours ago

          But most times, even in such cases, people underestimate (or don't estimate at all) the "hard task they do as a challenge". It's kinda part of the whole thing.

          • BodyCulture 23 minutes ago

            Sometimes we just don't know whether a person who started something knew how hard it would be. Sometimes it is not even possible to know how hard things will be.

            Generally this is a very interesting question that could be discussed in a very long thread, but still the reader will not get any value from it.

    • m463 5 hours ago

      "You are enthusiastic and write kernel device drivers in rust. Write a device driver for an Intel i350 4 Port gigabit ethernet controller"

      • sshine 42 minutes ago

        LLMs are notoriously bad at improvising device drivers in no-std Rust.

      • NetOpWibby 3 hours ago

        Some future VC-funded company will unironically have this same requirement

        • m463 3 hours ago

          It wasn't a requirement, it was a prompt :)

          • NetOpWibby 3 hours ago

            Haha damn, it’s so obvious now. I should be asleep.

  • GoblinSlayer 40 minutes ago

    Just ask an AI to riir linux drivers. Anybody tried it?

  • linsomniac 9 hours ago

    I feel like there's a potentially large audience for a kernel that targets running in a VM. For a lot of workloads, a simple VM kernel could be a win.

    • prmoustache 18 minutes ago

      this x1000

      Provided you have virtio support you are ticking a lot of boxes already.

    • yjftsjthsd-h 7 hours ago

      How is that different from Linux with all virtio drivers? (You can just not compile real hardware drivers)

      • m463 5 hours ago

        I would imagine that virtualized device drivers would have a well-defined api and vastly simplified logic.
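
        The virtio spec bears that out. For example, the split-virtqueue descriptor, the structure a driver uses to hand the device a buffer, is 16 fixed bytes (a sketch transcribed from the spec, not actual driver code):

```rust
// The split-virtqueue descriptor from the virtio spec: the entire
// "tell the device about this buffer" interface is 16 bytes.
#[repr(C)]
struct VirtqDesc {
    addr: u64,  // guest-physical address of the buffer
    len: u32,   // buffer length in bytes
    flags: u16, // NEXT / WRITE / INDIRECT
    next: u16,  // index of the chained descriptor, if NEXT is set
}

const VIRTQ_DESC_F_NEXT: u16 = 1;
const VIRTQ_DESC_F_WRITE: u16 = 2;

fn main() {
    // The layout is fixed by the spec, which is why virtio drivers
    // can stay so much simpler than real-hardware drivers.
    assert_eq!(std::mem::size_of::<VirtqDesc>(), 16);
    let d = VirtqDesc { addr: 0x1000, len: 512, flags: VIRTQ_DESC_F_WRITE, next: 0 };
    assert_eq!(d.flags & VIRTQ_DESC_F_NEXT, 0); // device-writable, not chained
}
```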

        • prmoustache 16 minutes ago

          Shouldn't we start building hardware that has a built-in translation layer making it drivable by virtio drivers directly? At least for the most common capabilities?

        • yjftsjthsd-h 4 hours ago

          I imagine they do. But given that Linux has those simple drivers, why not use them?

    • pjmlp 2 hours ago

      This is already the reality today with cloud-native computing and managed runtimes.

      It doesn't matter how the language gets deployed: whether the runtime is in a container, a distroless container, or running directly on a hypervisor.

      The runtime provides enough OS-like services for the programming language's purposes.

  • mdhb 4 hours ago

    Also, this mysterious new Fuchsia OS from Google is shooting for full Linux compatibility and is about to show up in Android. I think this is a much more realistic path to a next generation of operating systems with a real chance to replace Linux. Who knows what their actual plans are at the moment, but I don't believe for a moment that that project is dead in any way.

    • vbezhenar 2 hours ago

      I wonder if the decision to keep syscalls stable was genius. Imagine Linux syscalls becoming what the C ABI is now: there would be multiple compatible kernels, so you could choose any and run the same userspace.

    • lifty 3 hours ago

      Can you give more details about it being used in Android? I thought they started using it in some small devices like nest but haven’t heard anything about Android

      • mdhb 2 hours ago

        It's about to turn up inside Android, running in a VM [1], but it's less clear exactly for what purpose.

        My theory is that this is essentially a long-term project to make the cores of ChromeOS and Android rely on Fuchsia, which gives them syscall-level compatibility with what they both use at the moment; both would essentially sit as products on top of that.

        This is essentially the exact strategy they used, if I remember correctly, with the Nest devices, where they swapped out the core and left the product on top entirely unchanged. In a longer-term scenario we might also see Fuchsia as a combined mobile/desktop workstation OS, and I think that's part of why we're seeing ChromeOS start to take a dependency on Android's networking stack right now.

        [1] https://www.androidauthority.com/microfuchsia-on-android-345...

akira2501 15 hours ago

I personally dislike rust, but I love kernels, and so I'll always check these projects out.

This is one of the nicer ones.

It looks pretty conservative in its use of Rust's advanced features. The code looks pretty easy to read and follow. There's actually a decent amount of comments (for Rust code).

Not bad!

  • wg0 6 hours ago

    Otherwise it's a decent language; what makes it difficult are the borrow semantics and lifetimes. Lifetimes are especially complicated to get your head around.

    But then there's this Arc, Ref, Pinning and what not - how deep is that rabbit hole?

    • baq 3 hours ago

      If you’re writing C and don’t track ownership of values, you’re in a world of hurt. Rust makes you do from day one what you could do in C but unless you have years of experience you think it isn’t necessary.

      • wg0 2 hours ago

        Okay, I think it is more like TypeScript. You hate it, but one day you write a small JS program and convert it to TypeScript, only to discover that static analysis alone reveals so many code paths that would have resulted in uncaught errors; after that you always feel very uncomfortable writing plain JavaScript.

        But what about tools like valgrind in context of C?

      • metalloid 3 hours ago

        That was true until LLMs arrived. Future compilers and IDEs can be integrated with LLMs to help programmers.

        Rust was a great idea before LLMs, but I don't see the motivation for Rust when LLMs can be the initial solution for C/C++'s 'problems'.

        • smolder 3 hours ago

          Relying on LLMs to code for you in no way solves the safety problem of C/C++ and probably worsens it.

        • baq 3 hours ago

          On the contrary LLMs make using safe but constraining languages easier - you can just ask it how to do what you want in Rust, perhaps even by asking it to translate C-ish pseudocode.

    • junon 2 hours ago

      Context: I'm writing a novel kernel in Rust.

      Lifetimes aren't bad, though the learning curve is admittedly a bit high. Post-1.0 Rust significantly reduced the number of places you need them, and a recent update allows you to elide them even more, if memory serves.

      Arc isn't any different from other languages; not sure what you're referring to by Ref, but a reference is just a pointer with added semantic guarantees. And Pin isn't necessary unless you're doing async (not a single Pin shows up in the kernel thus far, and I can't imagine why I'd have one going forward).

    • oersted 3 hours ago

      I don’t entirely agree, you can get used to the borrow checker relatively quickly and you mostly stop thinking about it.

      What tends to make Rust complex is advanced use of traits, generics, iterators, closures, wrapper types, async, error types… You start getting these massive semi-autogenerated nested types, the syntax sugar starts generating complex logic for you in the background that you cannot see but have to keep in mind.

      It’s tempting to use the advanced type system to encode and enforce complex API semantics, using Rust almost like a formal verifier / theorem prover. But things can easily become overwhelming down that rabbit hole.

    • oneshtein 4 hours ago

      A Rust lifetime is just a label for a region of memory holding various data, which is discarded at the end of its lifetime. When the compiler enters a function, it creates a memory block to hold the data of all variables in the function, then discards this block at the exit from the function, so those variables are valid only for the lifetime of the function call.
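
      A minimal textbook illustration (not Asterinas code): the explicit lifetime `'a` below ties the returned reference to the inputs, which is all the compiler needs to reject dangling references at compile time.

```rust
// Classic explicit-lifetime example: the returned reference is
// guaranteed to live no longer than either input, so the borrow
// checker can verify every call site.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let s1 = String::from("kernel");
    {
        let s2 = String::from("os");
        // `longest` may borrow from `s2`, so its result must be
        // used before `s2` is dropped at the end of this block.
        println!("longest: {}", longest(&s1, &s2));
    }
    // Keeping the result alive past this block would be a compile error.
}
```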

    • KingOfCoders 4 hours ago

      I always feel Arc is the admission that the borrow checker with different/overlapping lifetimes is too difficult, despite what many Rust developers - who liberally use Arc - claim.

      • jeroenhd 33 minutes ago

        Lifetime tracking and ownership are very difficult. That's why languages like C and C++ don't do it. It's also why those languages need tons of extra validation steps and analysis tools to prevent bugs.

        Arc is nothing more than reference counting. C++ can do that too, and I'm sure there are C libraries for it. That's not an admission of anything, it's actually solving the problem rather than ignoring it and hoping it doesn't crash your program in fun and unexpected ways.

        Using Arc also comes with a performance hit because validation needs to be done at runtime. You can go back to the faster C/C++ style data exchange by wrapping your code in unsafe {} blocks, though, but the risks of memory corruption, concurrent access, and using deallocated memory are on you if you do it, and those are generally the whole reason people pick Rust over C++ in the first place.
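
        For what it's worth, Arc in isolation is a small mechanism; here's a std-only sketch (nothing kernel-specific) of the runtime counting being discussed:

```rust
use std::sync::Arc;
use std::thread;

// Spawn `n` threads that each sum a shared, reference-counted vector.
// Arc::clone only bumps an atomic counter; the data is never copied.
fn parallel_sums(data: Vec<u32>, n: usize) -> Vec<u32> {
    let shared = Arc::new(data); // one allocation, many owners
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let local = Arc::clone(&shared);
            thread::spawn(move || local.iter().sum::<u32>())
        })
        .collect();
    // The allocation is freed when the last clone is dropped.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let sums = parallel_sums(vec![1, 2, 3], 4);
    assert!(sums.iter().all(|&s| s == 6));
}
```

        The atomic increments and decrements on clone/drop are the runtime cost referred to above.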

      • Galanwe 4 hours ago

        It's not that the borrow checker is too difficult, it's that it's too limiting.

        The _static_ borrow checker can only check what is _statically_ verifiable, which is but a subset of valid programs. There are few things more frustrating than doing something you know is correct, but that you cannot express in your language.

      • GolDDranks 14 minutes ago

        It's not just difficult; sometimes it's impossible to statically know the lifetime of a value, so you must track it dynamically. Arc is one such tool.

  • IshKebab 13 hours ago

    Rust code is usually well commented in my experience.

    • iknowstuff 10 hours ago

      for the downvoters: it's true, and it's because of rustdoc and doctests. doc comments become publicly browsable documentation, and any code contained within is run as part of the test suite.

      • 1oooqooq 10 hours ago

        I think the downvotes are because of relevance. The point was about not using advanced Rust features, not about documentation.

        • forks 10 hours ago

          I don't see how the relevance is in question. GGGP said "There's actually a decent amount of comments (for rust code)." GGP seems to be responding to that parenthetical.

    • cies 12 hours ago

      Instead of asking "what other languages and projects (open/closed, big/small, web/mobile/desktop, game/consumer app/biz app) do you have experience with, to come to this conclusion?", people downvote you.

      So lemme ask: what other languages and projects (open/closed, big/small, web/mobile/desktop, game/consumer app/biz app) do you have experience with, to come to this conclusion?

      • ramon156 11 hours ago

        I expect the downvotes to be there because it's talking positively about rust, which is blasphemy! /j

justmarc 13 hours ago

I'm interested in these kinds of kernels for running very high-performance network/IO-specific services on bare metal, with minimal system complexity/overhead and hopefully better (potential) stability and security.

The big concern I have however is hardware support, specifically networking hardware.

I think a very interesting approach would be to boot the machine with a FreeBSD or Linux kernel, just for hardware and network support, and use a sort of Rust OS/abstraction layer for the rest, bypassing or simply not using the originally booted kernel for all userland-specific stuff.

  • nijave 12 hours ago

    Couldn't you just boot the Linux kernel directly and launch a generic app as pid 1 instead of a full blown init system with a bunch of daemons?

    That's basically what you're getting with Docker containers and a shared kernel. AWS Lambda is doing something similar, with dedicated kernels in Firecracker VMs.

    • justmarc 5 hours ago

      Yes, but I wanted to bypass having the complexity of the Linux kernel completely, too.

      Basically single app directly to network (the world) and as little as possible else in between.

    • mjevans 12 hours ago

      Yes, you can. You can even have a different pid 1 configure whatever is needed and then replace its core image with the new pid 1.

  • cgh 12 hours ago

    If you want truly high-performance networking, you can bypass the kernel altogether with DPDK. So you don't have to worry about alternative kernels for other tasks at all. On the downside, DPDK takes over the NIC entirely, removing the kernel from the equation, so if you need the kernel to see network traffic for some reason, it won't work for you.

    You can check out hardware support here: https://core.dpdk.org/supported/nics/

    • jauntywundrkind 12 hours ago

      This was true a decade ago; with modern io_uring, DPDK is probably an anti-pattern.

      • GoblinSlayer 2 minutes ago

        If you use io_uring, you're subject to vulnerabilities in kernel network stack which you have no control over.

      • cgh 12 hours ago

        Interesting, it's been awhile since I looked at this stuff so I did a little searching and found this: https://www.diva-portal.org/smash/get/diva2:1789103/FULLTEXT...

        Their conclusion is io_uring is still slower but not by much, and future improvements may make the difference negligible. So you're right, at least in part. Given the tradeoffs, DPDK may not be worth it anymore.

        • loeg 11 hours ago

          There are also just a bunch of operational hassles with using DPDK or SPDK. Your usual administrative commands don't work. Other operations aren't intermediated by the kernel -- instead you need 100% dedicated application devices. Device counters usually tracked by the kernel aren't. Etc. It can be fine, but if io_uring doesn't add too much overhead, it's a lot more convenient.

        • guenthert an hour ago

          "io_uring had a maximum throughput of 5.0 Gbit/s "

          Wut? More than 10 years ago, a cheap beige box could saturate a 1 Gbps link with a kernel as it came from e.g. Debian, without special tuning. A somewhat more expensive box could get a good share of a 10 Gbps link (using jumbo frames), so these new results are, er, somewhat underwhelming.

        • guenthert 2 hours ago

          That's an interesting and valuable study. I was slightly disappointed though that only a single host was used in the 'network' performance tests:

          "SR-IOV was used on the NIC to enable the use of virtual functions, as it was the only NIC that was available during the study for testing and therefore the use of virtual functions was a necessity for conducting the experiments."

        • renox 3 hours ago

          Not by much?? You're exaggerating..

      • monocasa 8 hours ago

        I'm not sure that's true for a good chunk of the workloads that dpdk really shines on.

          A lot of the benefit of DPDK is colocating your data and network stack in the same virtual memory context. I can see io_uring getting you there if you're serving fixed files as a CDN, kind of like Netflix's appliances, but for cases where you're actually doing branchy work on the individual requests, DPDK is probably a little easier to scale up to the faster network cards.

  • treeshateorcs 13 hours ago

    i might be wrong but if it's ABI compatible the same drivers will work?

    p.s.: i was wrong

    >While we prioritize compatibility, it is important to note that Asterinas does not, nor will it in the future, support the loading of Linux kernel modules.

    https://asterinas.github.io/book/kernel/linux-compatibility....

    • yjftsjthsd-h 12 hours ago

      Linux doesn't even maintain ABI compatibility with itself; nobody else is going to manage it. The one possibility that might work: a couple of projects maintain just enough API compatibility to reuse driver code from Linux (IIRC FreeBSD does this for some graphics drivers). But even then you're gambling on whether Linux decides to change implementation details one day, since internal APIs explicitly aren't stable.

      • bcrl 12 hours ago

        The Linux kernel community takes ABI compatibility for userland very seriously. That developers in userland are frequently unwilling to understand issues surrounding ABI stability is not the fault of the Linux kernel.

        • yjftsjthsd-h 11 hours ago

          Oh sure, the user-space ABI is stable; I meant kernel-space. Although I realize now that I failed to write that explicitly.

          • bcrl 10 hours ago

            The past 30 years of the Linux kernel's evolution has proven that there is no need for a stable kernel ABI. That would make refactoring, adding new features and porting to new platforms exceedingly difficult. Pretty much all of the proprietary kernel modules have either become open source or been replaced by open source replacements. The Linux community doesn't need closed source kernel modules for VMWare anymore, and even Nvidia has finally given up on their closed source GPU drivers. Proprietary Linux kernel modules have no place in the modern world.

            • vlovich123 9 hours ago

              > even Nvidia has finally given up on their closed source GPU drivers.

              lol. No. They just added a CPU and then offloaded all the closed source userspace driver code to it leaving behind the same dumb open sourceable kernel driver shim as before (ie instead of talking to userspace it talks to the GPU’s CPU).

              > The past 30 years of the Linux kernel's evolution has proven that there is no need for a stable kernel ABI.

              What the last 30 years have shown is that there is actually a need for it; otherwise DKMS wouldn't be a thing. Heck, Intel's performance profiler can't keep up with the kernel changes, which means you get to pick between running an up-to-date kernel and being able to use the open source out-of-tree kernel module. The fact that Linux is alone in this should make it clear it's wrong. Heck, Android even wrote its own HAL to try to make it possible to update the kernel on older devices. It's an economics problem that the Linux kernel gets to pretend doesn't exist, but it's a bad philosophical position. It's possible to support refactoring and porting to new platforms while providing ABI compatibility, and Linux is way past the point where it would be even a minor inconvenience; all the code has ossified quite a bit anyway.

    • dathinab an hour ago

      In general, the stable ABI is the kernel<->userspace one, while the internal ABI (and even API) for drivers can change with every kernel version (part of why it's so important to maintain drivers in-tree).

    • bicolao 13 hours ago

      They mention this in https://github.com/asterinas/asterinas/blob/2af9916de92f8ca1...

      > While we prioritize compatibility, it is important to note that Asterinas does not, nor will it in the future, support the loading of Linux kernel modules.

      • justmarc 13 hours ago

        It's a lot "simpler" to support a Linux userland, as that means one "just" needs to emulate all the Linux syscalls, rather than implement the literally countless internal APIs needed for drivers etc. That would otherwise mean reimplementing the whole Linux kernel, which is neither realistic nor too useful.

        • mgerdts 6 hours ago

          And that's not all that simple, as the Solaris (its never-released(?) Linux branded zones), illumos (lx brand), and Windows (WSL1) developers who have tried to make existing kernels act like Linux can attest.

          It’s probably easier if the kernel’s key goal is to be compatible with the Linux ABI rather than being compatible with its earlier self while bolting on Linux compatibility.

        • Jyaif 12 hours ago

          > emulate all the Linux syscalls

          and emulate the virtual filesystems (/proc/...)

    • justmarc 13 hours ago

      No, it means you can run a Linux userland/apps on this kernel, to the level/depth they currently support, of course.

      They might not yet implement everything that's needed to boot a standard Linux userland, but you could, say, boot straight into a web server built for Linux instead of booting into init, for example.

  • protoman3000 4 hours ago

    Why don’t you just use a SmartNIC and P4? It won’t get faster than running on the NIC itself

pjmlp 2 hours ago

Besides all examples, Microsoft is now using TockOS for Pluton firmware, another Rust based OS.

https://tockos.org/

exabrial 10 hours ago

I think this looks incredible. Like, how does one create a compatible ABI _for all of Linux_??? Wow!

> utilize the more productive Rust programming language

Nitpick: it's 2024 and these 'more productive' comparisons are silly, completely unscientific, and a bit of a red flag for your project. The most productive language for a developer is the one where they understand what is happening one layer below the level of abstraction they're working at. Unless you're comparing something like Ruby vs RISC-V assembly, it's just hocus-pocus.

  • jmmv 5 hours ago

    > I think this looks incredible. Like how does one create a compatible abi _for all of linux_??? Wow!

    FWIW that’s what the Linux compatibility layer in the BSDs does and also what WSL 1 did (https://jmmv.dev/2020/11/wsl-lost-potential.html).

    It’s hard to get _everything_ perfectly right but not that difficult to get most of it working.

    • NewJazz 4 hours ago

      IIRC Fuchsia has something similar. And maybe Redox?

  • dathinab an hour ago

    Idk. The Asahi Linux GPU driver breaks all "common sense" about how fast a reliable, usable, feature-rich driver can be produced by a small 3rd-party team.

    The company I work for has both Rust and Python projects (though partially predating reasonable Python type linting with mypy and co.), and the general consensus there is that overall Rust is noticeably more productive (and more stable/reliable in usage), especially for code which changes a lot.

    A company I worked for previously had used Rust in the very early days (around 1.0) and had done one of those "let's throw up a huge prototype code base in a matter of days and rewrite it later" efforts (basically 90% of the code was huge tech debt). But that code base stuck around way longer than intended and caused way fewer issues than expected. I had to maintain it a bit, and given my experience with similar code in Python and JS (and a bit of Java) I expected it to be very painful, but surprisingly it wasn't, like, at all.

    Similarly, comparing the massive time wasted debugging soundness/UB issues in C/C++ with my experience in Rust, it's again way more productive.

    So as long as you don't do bad stuff like over-obsessing with the type system, everything in my experience tells me using Rust is more productive (for many tasks, definitely not all tasks; there are some really great frameworks doing a ton of work for you in some languages against which the Rust ecosystem atm can't compete).

    ---

    > Most productive language for a developer is the one they understand what is happening one layer below the level of abstraction they are working with.

    I strongly disagree: the most productive language is the one where the developer doesn't have to care much about what happens in the layer below, in most cases. At least as long as you don't obsess over micro-optimizations that aren't worth the time and opportunity cost for most companies/use cases.

  • kelnos 7 hours ago

    > Like how does one create a compatible abi _for all of linux_???

    You look at Linux's syscall table[0], read through the documentation to figure out the arguments, data types, flags, return values, etc., and then implement that in your kernel. The Linux ABI is just its "library" interface to userspace.

    It's probably not that difficult; writing the rest of the kernel itself is more challenging, and, frankly, more interesting. Certainly matching behavior and semantics can be tricky sometimes, I'm sure. And I wouldn't be surprised if the initial implementation of some things (like io_uring, for example, if it's even supported yet) might be primitive and poorly optimized, or might even use other syscalls to do their work.

    But it's doable. While Linux's internal ABI is unstable, the syscall interface is sacred. One of Torvalds' golden rules is you don't break userspace.

    [0] https://filippo.io/linux-syscall-table/
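
    The shape of that work can be sketched as a table from syscall numbers to handlers. This is a hypothetical illustration, not Asterinas code; the syscall numbers and the ENOSYS value are the real x86-64 Linux ones, everything else is invented:

```rust
// Hypothetical sketch of a Linux-ABI syscall dispatcher. The numbers
// are the real x86-64 syscall numbers; the handlers are stubs.
const SYS_READ: u64 = 0;
const SYS_WRITE: u64 = 1;
const SYS_GETPID: u64 = 39;

const ENOSYS: i64 = 38; // Linux errno for "function not implemented"

// A real kernel would pull these from the trapped register state
// (rax = syscall number, rdi/rsi/rdx/r10/r8/r9 = arguments).
fn dispatch_syscall(nr: u64, args: [u64; 6]) -> i64 {
    match nr {
        SYS_READ => sys_read(args[0], args[1], args[2]),
        SYS_WRITE => sys_write(args[0], args[1], args[2]),
        SYS_GETPID => 1, // pretend to be pid 1
        // Anything unimplemented must still fail the Linux way
        // (-errno), or userspace misbehaves in surprising ways.
        _ => -ENOSYS,
    }
}

fn sys_read(_fd: u64, _buf: u64, len: u64) -> i64 {
    len as i64 // stub: claim we filled the buffer
}

fn sys_write(_fd: u64, _buf: u64, len: u64) -> i64 {
    len as i64 // stub: claim we wrote everything
}

fn main() {
    assert_eq!(dispatch_syscall(SYS_GETPID, [0; 6]), 1);
    assert_eq!(dispatch_syscall(9999, [0; 6]), -ENOSYS);
}
```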

  • ozgrakkurt 10 hours ago

    Everyone says what they are used to is better or more productive. Even with assembly vs Ruby, some stuff is much easier in assembly and maybe impossible in Ruby, afaik.

    • exabrial 9 hours ago

      I'm aging myself, but ~17 years ago I was in San Diego for a conference. There was a table-level competition to see who could write the fastest program in 20 minutes (we were doing a full-text search of a 'giant' 5 GB file). One of the guys at the table wrote some SPARC assembly to optimize character matching in a hotspot like he was speaking French.

      Ah good times.

tiffanyh 16 hours ago

OT: if you're interested in Asterinas, you might also be interested in Redox (entire OS written in Rust).

https://www.redox-os.org/

  • snvzz 10 hours ago

    Redox has a proper architecture, aka microkernel multiserver.

    Thus it is a much more interesting project.

hkalbasi 10 hours ago

> In the framekernel OS architecture, the entire OS resides in the same address space (like a monolithic kernel) and is required to be written in Rust. However, there's a twist---the kernel is partitioned in two halves ... the unprivileged Services must be written exclusively in safe Rust.

Unprivileged services can exploit known compiler bugs and do anything they want in safe Rust. How does this affect their security model?

Klasiaster 12 hours ago

There was also a similar project, Kerla¹, but development stalled. Recently people have argued that instead of focusing on Rust-for-Linux it would be easier to create a drop-in replacement like these two. I wonder if there are enough people interested to make this happen as a sustained project.

¹ https://github.com/nuta/kerla/

  • kelnos 7 hours ago

    > Recently people argued that instead of focusing on Rust-for-Linux it would be easier to create a drop-in replacement like these two

    I guess it depends on what they mean by "easy". Certainly it's easier in the sense that you can just write code all day long, and not have to deal with the politics about Rust inside Linux, or deal with all the existing C interfaces, finding ways to wrap them in Rust in good, useful ways that leverage Rust's strengths but don't make it harder to evolve those C interfaces without trouble on the Rust side.

    But the bulk of Linux is device drivers. You can build a kernel in Rust (like Asterinas) that can run all of a regular Linux userland without recompilation, and I imagine it's maybe not even that difficult to do so. But Asterinas only runs on x86_64 VMs right now, and won't run on real hardware. Getting to the point where it could -- especially on modern hardware -- might take years. Supporting all the architectures and various bits of hardware that Linux supports could take decades. I suppose limiting themselves to three or four architectures, and only supporting hardware made more recently could cut that down. But still, it's a daunting project.

phlip9 11 hours ago

Super cool project. Looks like the short-term target use-case is running a Linux-compatible OS in an Intel TDX guest VM with a significantly safer and smaller TCB. Makes sense. This way you also postpone a lot of the HW driver development drudgery and instead only target VM devices.

cryptonector 12 hours ago

> Linux-compatible ABI

There's no specification of that ABI, much less a compliance test suite. How complete is this compatibility?

wg0 6 hours ago

Side question - I have always wondered how a Linux system is configured at the lowest level?

Let's take the example of networking. There's the IP address, gateway, DNS, routes, etc. Depending on the distribution, we might see something like netplan reading config files and then calling ABI functions?

Or does the Linux kernel itself directly read some config files? Probably not...

  • NewJazz 4 hours ago

    Linux kernel as much as possible tries not to parse or read external data (besides stuff like acpi tables, device trees, hardware registers). For networking, you might look at the iproute codebase to see how they do things like bring a network device up, or create a bridge device, add a route, et cetera.

    Edit: looks like iproute2 uses NETLINK, but non-networking tools might use syscalls or device ioctls.

    https://en.m.wikipedia.org/wiki/Netlink
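
    For the curious: the request iproute2 sends for something like `ip link show` is just a small binary netlink message. A sketch of packing one (the constants come from Linux's uapi headers; the actual AF_NETLINK socket I/O is omitted here, this only builds the bytes):

```rust
// Sketch of the RTM_GETLINK dump request that `ip link show` sends
// over an AF_NETLINK socket. Constants are from Linux uapi headers.
const RTM_GETLINK: u16 = 18;
const NLM_F_REQUEST: u16 = 0x1;
const NLM_F_DUMP: u16 = 0x300; // NLM_F_ROOT | NLM_F_MATCH

fn build_getlink_request(seq: u32) -> Vec<u8> {
    let mut msg = Vec::with_capacity(32);
    // struct nlmsghdr: total length, type, flags, sequence, port id
    msg.extend_from_slice(&32u32.to_ne_bytes());
    msg.extend_from_slice(&RTM_GETLINK.to_ne_bytes());
    msg.extend_from_slice(&(NLM_F_REQUEST | NLM_F_DUMP).to_ne_bytes());
    msg.extend_from_slice(&seq.to_ne_bytes());
    msg.extend_from_slice(&0u32.to_ne_bytes()); // port id 0: kernel assigns
    // struct ifinfomsg: family AF_UNSPEC, pad, type, index, flags, change
    msg.extend_from_slice(&[0u8; 16]);
    msg
}

fn main() {
    let req = build_getlink_request(1);
    assert_eq!(req.len(), 32);
    println!("netlink RTM_GETLINK request: {:02x?}", req);
}
```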

wiz21c 3 hours ago

> Linux-compatible ABI

Does it mean it can re-use the drivers written for hardware to run with linux ?

  • eptcyka an hour ago

    No. The drivers in Linux are kernel modules, most often in-tree, meaning that the source for the drivers is built along with the rest of the kernel source code. Most hardware drivers depend on various common kernel structures that change often; when they do, the drivers are fixed practically in the same git branch. There is no driver ABI to speak of.

  • dezgeg 2 hours ago

    No. There is no stable ABI nor API for in-kernel device drivers.

depressedpanda 15 hours ago

From the README:

> Currently, Asterinas only supports x86-64 VMs. However, our aim for 2024 is to make Asterinas production-ready on x86-64 VMs.

I'm confused.

  • wrs 14 hours ago

    I think it’s “Currently, Asterinas only supports x86-64 VMs. However, [rather than working on additional architectures this year,] our aim for 2024 is to make Asterinas production-ready on x86-64 VMs.”

  • favorited 15 hours ago

    Sounds like their goal is to improve their x86-64 support before implementing other ISAs.

  • nurb 14 hours ago

    It's clearer from the book roadmap:

    > By 2024, we aim to achieve production-ready status for VM environments on x86-64.

    > In 2025 and beyond, we will expand our support for CPU architectures and hardware devices.

    https://asterinas.github.io/book/kernel/roadmap.html

  • None4U 14 hours ago

    Distinction here is between "supports" and "production-ready on", not "x86-64" and "x86-64"

  • MattPalmer1086 15 hours ago

    Yeah, I had to read that a few times... I think they just mean it isn't production ready yet, but that's what they are aiming for.

valunord 14 hours ago

I like what they're working towards with V in Vinix as well. Exciting times, seeing things like Linux ABI compatibility opening new paradigms.

spease 15 hours ago

What’s the intended use case for this? Backend containers?

  • Animats 14 hours ago

    Makes a lot of sense for virtual machine containers. Inside a container inside a VM, you need far less operating system.

xiaodai 10 hours ago

Lol. I am Malaysian Chinese, but I honestly don't think anyone will put a Chinese-made kernel into production. The risk is too high, the same as no one using a Linux distro coming out of Russia, Iran, or NK. It's just cultural bias in the West.

jackhalford 13 hours ago

The build process happens in a container?

> If everything goes well, Asterinas is now up and running inside a VM.

Seems like the developers are very confident about it too

havaker 13 hours ago

The license choice is explained with the following:

> [...] we accommodate the business need for proprietary kernel modules. Unlike GPL, the MPL permits the linking of MPL-covered files with proprietary code.

Glancing at the readme, it also looks like they are treating it as a big feature:

> Asterinas surpasses Linux in terms of developer friendliness. It empowers kernel developers to [...] choose between releasing their kernel modules as open source or keeping them proprietary, thanks to the flexibility offered by MPL.

Can't wait to glue some proprietary blobs to this new, secure rust kernel /s

  • yjftsjthsd-h 12 hours ago

    I'm curious about the practical aspect: Are they going to freeze a stable driver ABI, or are they going to break proprietary drivers from time to time?

    • gpm 11 hours ago

      Considering their OS as a framework approach I would guess they are more likely to expose a stable API than a stable ABI. Which also plays well with the MPL license (source file based) rather than something like the LGPL (~linking based).

      • throw4950sh06 10 hours ago

        This is the most interesting new OS I have seen in many years.