Run0, a systemd based alternative to sudo, announced (mastodon.social)
466 points by CoolCold 20 days ago | 856 comments



> Or in other words: the target command is invoked in an isolated exec context, freshly forked off PID 1, without inheriting any context from the client (well, admittedly, we do propagate $TERM, but that's an explicit exception, i.e. allowlist rather than denylist).

I think in practice, this is going to be an endless source of problems, so much so that it won't be adopted. The usual use case of sudo is that you have a normal shell command, making use of the environment for context in all the ways that shell commands do, but it doesn't have all the permissions it needs, so you add "sudo" as an adverb.

Sometimes it makes use of environment variables. Sometimes stdin or stdout is redirected to a file, or to something more exotic than a file. Sometimes that means it runs inside of a chroot, or a Docker container. Sometimes you care about which process group it runs in.

And sometimes the thing you're running is a complicated shell script or shell-script-like object, e.g. "sudo make install". In this case, you don't really know what its dependencies are. In fact, this is a common enough case that, if run0 becomes widespread, I expect it'll have a flag or a set of flags that make it act exactly like sudo, and I expect people will wind up learning that they should always give run0 those flags.
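
A rough sketch of the difference, for illustration only (the systemd-run options below are the documented ones; per the announcement, run0 is sugar over the same mechanism):

  # sudo: a child of your shell, so it inherits cwd, fds, cgroup and
  # (depending on sudoers) parts of the environment
  sudo make install

  # systemd-run/run0 style: a transient unit forked off PID 1, so any context
  # the build needs has to be passed back in explicitly
  systemd-run --pty -p WorkingDirectory="$PWD" --setenv=PATH="$PATH" make install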

And I'm kind of worried that when this breaks stuff, the systemd project is going to push forward with some plan to get rid of sudo, and not gracefully accept the feedback that this is breaking things. I'm particularly worried about this because of the whole saga of KillUserProcesses breaking nohup and screen, which to my knowledge is still broken many years later.


> And I'm kind of worried that when this breaks stuff, the systemd project is going to push forward with some plan to get rid of sudo, and not gracefully accept the feedback that this is breaking things.

See, for example, "systemd can't handle the process previlege that belongs to user name starts with number, such as 0day":

* https://github.com/systemd/systemd/issues/6237

Never mind that POSIX allows it:

* https://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd...

* https://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd...

> I'm particularly worried about this because of the whole saga of KillUserProcesses breaking nohup and screen, which to my knowledge is still broken many years later.

For anyone curious, see "Systemd v230 kills background processes after user logs out, breaks screen, tmux" from 2016:

* https://news.ycombinator.com/item?id=11782364
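
For anyone who just wants their tmux/screen sessions back, the relevant logind.conf setting (a real one, shown here for reference) is:

  # /etc/systemd/logind.conf -- many distros ship with this set back to 'no'
  [Login]
  KillUserProcesses=no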


I don't know what your sudo does, but mine requires the --preserve-env flag if you want the new process to have access to all your environment variables.

The thing you're saying is going to be an endless source of problems should already be an endless source of problems! (And I think I've been briefly confused by some missing environment variable once or twice so far.)
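
A quick way to see it on your own machine, using nothing beyond stock sudo:

  env | wc -l                      # your shell's environment
  sudo env | wc -l                 # what a default env_reset sudo lets through
  sudo --preserve-env env | wc -l  # roughly your full environment again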


It depends on whether sudo was compiled with --disable-env-reset or not; env_reset is on by default[1].

Also some variables are inherited regardless (e.g. DISPLAY, TERM), and some useful ones (e.g. HOME) are initialized by sudo, but I can't tell where that's done.

[1]: https://github.com/sudo-project/sudo/blob/ef52db46f9b375d7ff...


Even without the flag, sudo preserves a bunch of stuff. And it's not even consistent: some implementations preserve locale settings, for example, while others don't.


It depends on what options sudo is built with, notably the `--disable-env-reset` option.


> And I'm kind of worried that when this breaks stuff, the systemd project is going to push forward with some plan to get rid of sudo, and not gracefully accept the feedback that this is breaking things.

Given Lennart has already declared the SUID concept "bad", I think this has been the game plan all along.

Systemd: Do all the things, but not very well, and don’t listen to anyone.


I agree with Lennart so I'm curious what the argument is against the notion that SUID was a bad idea and we should move away from it in Linux?


The problem with this line of thinking is it gives automatic carte blanche to anyone pointing out problems to implement "solutions" to those problems with little interrogation of whether those solutions are actually better.

SUID, like any system, is flawed. Most of those flaws are balanced trade-offs; if you're addressing one you need to be aware of the severity of any counter-problems you're inevitably introducing.

Lennart is well known for criticising existing systems while simultaneously ignoring & dismissing criticism of the proposed solutions - you need to be able to weigh up both sides in a balanced way. Lennart demonstrably isn't.


> you need to be able to weigh up both sides in a balanced way. Lennart demonstrably isn't.

That's why nobody uses his software. I mean just nothing he does gets adopted.

The 'run0' solution uses an already existing mechanism that is already used for a lot of things.


> nobody uses his software

Yes, you're absolutely right. Popularity is the best indicator of quality.


It's not the best indicator, but to claim it's meaningless is idiotic.

Especially since we are talking about free software, and not some software that Microsoft can preinstall on your laptop.


Just because it was pushed by Red Hat, and others were semi-forced to adapt, doesn't mean all distros got in line to grab a copy of the software and adopt it.

Pulse was replaced by PipeWire as soon as it arrived, for example.


Ah the old 'we were forced to use this free thing'. Sure.

> Pulse was replaced by PipeWire as soon as it arrived

PipeWire combines ALSA, Pulse and JACK. They all have different strengths.

And Pulse was started by Lennard but he hasn't been involved for a very long time.


> Ah the old 'we were forced to use this free thing'. Sure.

Well, I was a tech lead of a Debian derivative when Debian held the vote. I have seen and read enough. Didn't see the other "threats" thing, but since I had access to debian-devel, I was in the middle of it.

My views about systemd have not changed since then, and can be found if you search HN.

On the other hand, I have used 4-5 init systems in the last 20 years, and none of them was that aggressive or had the "we know best" attitude, while going against all the best practices and repeating the mistakes of the past.

> PipeWire combines ALSA, Pulse and JACK. They all have different strengths.

Nope. ALSA is always there, working as the primary sink, delineating user space and hardware. I used JACK back in the day for recording, and never got to like Pulse because of its backwards defaults and laissez-faire behavior about multi-channel audio (plus glitches, etc.).

Pipewire is a great sound server which sits on top of ALSA, and replaces Pulse transparently, and makes everything 100x nicer along the way.

Lastly, it's Lennart Poettering. Not Lennard. :)

P.S.: It's important to understand that my views are not against the persons, but the behavior of the projects. I'd drink a nice round of beer with all of them, if I had the chance. :)


The reality is that Debian didn't want to, or couldn't, develop their own. The system people used then was simply shit. And the alternatives like Upstart were just crap.

Nobody forced Debian. I followed it live too. I remember him talking to Debian and he made a technical argument for it.

I had already switched to Arch and had already been using Systemd for years at that point.

The reality is, nobody was stepping up with better solutions. Would porting SMF have been better? Maybe, but nobody was porting it.

There are distros without systemd, often very compatible ones, and almost nobody uses them.

BSD folks have been hoping for the Linux exodus over systemd for years, and it has never happened.

And it has to be said a 1000x times: systemd was never just an init, and it never claimed it was. By now systemd is just a software project that makes all kinds of software that you can use with or without systemd the service manager.

They should just call it the "Linux Userland Software Group" and change their naming. Then people wouldn't get triggered by the term 'systemd'.

You can have whatever technical opinion you like about systemd. Fact is, most people use it, including in very large organisations. And the other fact is nobody forced systemd on Debian, whatever conspiracy was spread in 'devel' (and elsewhere).


Thanks for confirming that I can have technical opinions about systemd, as an admin who touches more than a thousand physical servers. :)

I'll agree to disagree on the systemd's "we will replace anything and everything we even slightly dislike, and slowly make them dependent on systemd (the service manager) while not listening to you and your pesky experiences" attitude, and wish you more power for your future endeavors.

Have a nice day. :)


Just a short reminder that Lennart is working for Microsoft.

The SUID mechanism doesn't always "elevate to root". It's a mechanism to "run as another user", and together with SGID it allows great flexibility in user permission management. You can allow all kinds of (responsible) user-switch tricks for multi-admin servers and multi-user systems.

Focusing all of this to sudo and framing SUID as “just implemented to enable sudo” is not painting the correct picture.

Moreover, removing SUID breaks tons of mechanisms and scenarios.

Security of sudo can be debated, but evolving current sudo to a better state step by step is miles better than banishing it, rebuilding it, and making it dependent on systemd + polkit. systemd already breaks tons of UNIX conventions and is way more complicated than it should be.

When you think about it, this sounds like "conquering" another part of user space, mixed with NIH (and "we know best"), and making systemd more entrenched. systemd is already a pretty large attack surface to begin with.

The XZ backdoor reached sshd via libsystemd. Do we need another "integrated target" to attack in Linux?


All these ideas that tie permissions to a file completely fail when files need to be accessed either over network, or inside a container.

I can see how the original authors didn't consider these cases, because they simply weren't there yet... but knowing what we know today: SUID is an awful idea.


Sorry for my ignorance, but what's a scenario where you run a SUID/SGID binary from the network or a container?

If you access it and run it, that's SSH or similar, so it works in the system scope. If it's a container built correctly, it has its own users and isolation already, so it shouldn't be able to fire any binary on your "base" system, and any effect stays in the container scope.

I have never had the need to SUID/SGID a non-executable, and haven't needed to trigger something on the host system from inside a container in the last ~20 years.


> but what’s a scenario that you run a SUID/GUID binary from a network or a container?

A lot of publicly available container images require elevated permissions simply to function, not for anything extraordinary. So the user in the container needs to be a superuser. It's often not even to perform the program's main function, but because various ordinary things in Linux require elevated permissions.

> container built correctly

That's a spherical horse in a vacuum. If you write code such that there aren't any errors, you don't need to do error handling, right? You don't get to choose how containers are built. You need to deal with all the possibilities of how containers can be built.

Network filesystem? /usr/share, /usr/opt and /usr/local? That's by design... it's very typical for cluster management software to mount these from NAS. It's also not at all typical to keep these as "only text files". Pretty sure a lot of Google's stuff installs automatically into /usr/share. I think even the Go compiler and other infra was at some point being installed there by default.

Finally, the same argument as with containers: you are, for some reason, imagining a world where problems don't exist because you chose a world without problems. But this isn't the real world; it's a fantasy. In the real world, with or without reason, programmers and other computer users will do what's possible, not what you want them to do.


Setuid is a mechanism where you take a program, and mark it so it always runs as root (or some other user, but in this case root). The idea is that an unprivileged user can run a setuid program, and the program itself decides what privileges to allow.

The problem is that the user controls the program's view of the filesystem, environment variables, and other attributes, and this is an attack surface that can be used to trick it into loading and running code provided by the unprivileged user, which runs as root. For example, ordinary programs have a preamble inserted by the compiler where they load a programming-language runtime, usually from somewhere like /usr/lib; but a setuid program can't safely do this, because the user could use a chroot to replace /usr/lib with something different.

In practice, this means that writing a setuid program correctly is exceptionally difficult and error prone, can only be done in C, and imposes security requirements on the compiler flags/makefiles rather than the source code, which creates a large risk of distro- or compiler-specific vulnerabilities. In practice, sudo is the only program people allow to use the setuid mechanism, and sudo is a unique and dangerous snowflake.
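
A minimal illustration of the mechanism itself, using a throwaway copy of a harmless binary (assumes the filesystem isn't mounted nosuid; clean it up afterwards):

  cp /usr/bin/id ./id-setuid
  sudo chown root ./id-setuid && sudo chmod u+s ./id-setuid
  ./id-setuid   # now prints euid=0(root) next to your real uid
  # argv, environment, cwd, fds and rlimits still come from the caller,
  # which is exactly the attack surface described above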


> because the user could use a chroot to replace /usr/lib with something different

You need to be root in the first place to be able to do that


SUID has flaws, but it's not clear that there are any more convenient alternatives?


It depends on what you need suid for, but the first thing I'd evaluate is whether you can just grant a specific capability instead.
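
For the common "needs exactly one privileged operation" case, file capabilities are the usual replacement; setcap/getcap come from libcap, and the binary path here is just an example:

  # let a service bind port 80 without being setuid root
  sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/myserver
  getcap /usr/local/bin/myserver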


you missed the memo. it's dbus. (wish i could end with /s)


run0 has already been exploited: https://twitter.com/hackerfantastic/status/17854955875146385...

There will be plenty more where that came from. Yet another terrible idea and terrible implementation from Poettering.


To be fair, this is not at all Poettering’s idea. There is, for example, precedent in the form of s6-sudo[1], a utility from the s6 service supervisor, itself very much an anti-systemd project (except I believe it predates systemd?..).

And honestly I’d be okay with a suidless Unix. For example, as best as I can tell, the only reason the kernel needs to know what executable formats even are—beyond the bare minimum needed to load PID 1—is s[ug]id binaries.

[1] https://skarnet.org/software/s6/s6-sudo.html


I like s6! One of the key differences here is that s6-sudo builds on, rather than replaces, the standard unix permissions model.

s6-sudod listens on a unix domain socket. Unix domain sockets are just files, so they have an owner, group and mode bits. The answer to "who is potentially allowed to run a differently-privileged command?" is just `ls -l /path/to.sock`.

For finer-grained access control, a unix domain socket listener can call `getpeereid()` or `getsockopt(..., SO_PEERCRED, ...)` to learn who it's talking to. You can build powerful – but still relatively simple, and importantly, readily-inspectable – access control policy on top of these basic unix primitives. That's what s6 does. Look at how simple rule definition is. [0]
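
For illustration (the socket path and group name are made up), the access question really does reduce to ordinary file metadata, because connecting to a stream socket requires write permission on it:

  chgrp operators /run/mycmd.sock && chmod 0660 /run/mycmd.sock
  ls -l /run/mycmd.sock   # srw-rw---- root operators ... => only root and 'operators' may ask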

Or, you could throw all that out the window and build something much more complex and much less inspectable, which is the systemd approach. The answer to "who is potentially allowed to run a differently-privileged command?" under `run0` is to...spend the evening reading through polkit xml rules, I guess?

I realize systemd uses D-Bus, and D-Bus uses a unix domain socket. But that socket is writable by world. We're trusting polkit and complex policy xml and probably a constellation of other services to get things right after the SO_PEERCRED check.

Maybe that's fine for desktop apps, but a reminder that we're talking about sudo here.

Complexity is the enemy of security. The complexity of the systemd ecosystem broadly writ is how we get CVEs like this polkit privesc, which took 12 years to notice [1].

Addendum: it's possible to regard systemd as dangerously complex AND sudo as dangerously complex. OpenBSD as usual had the right idea with `doas`.

[0] https://skarnet.org/software/s6/s6-accessrules-cdb-from-fs.h...

[1] https://www.cvedetails.com/cve/CVE-2021-4034/


Like many other things in Unix, SO_PEERCRED and getpeereid are half-implemented hacks that should not be used for security. They both only return the uid that was used at the time of calling connect(). Meaning you have to be incredibly careful what you do when creating the socket and you cannot really pass any sockets off to other processes if you want to try to do security that way because they will still inherit the wrong credentials. Also the usual complexities apply of how to interpret that when interacting with a process in a container.

I have a pretty low opinion of s6 because of things like this; you pretty much have to create a more complex system like polkit and systemd if you want this stuff to actually work. You don't have to use XML and JavaScript like polkit does, but you do have to do more than what s6 is trying to do. (Also, I personally don't find the "random collection of ad-hoc text files" style they use to be any less complex than systemd, but that's a different conversation.)


You do realize D-Bus also uses SO_PEERCRED right? And transitively polkit, systemd, and everything in that ecosystem.

https://gitlab.freedesktop.org/dbus/dbus/-/blob/master/dbus/...

> Meaning you have to be incredibly careful what you do when creating the socket and you cannot really pass any sockets off to other processes if you want to try to do security that way because they will still inherit the wrong credentials.

I see nothing new here beyond "handle privileged resources with care." Don't overshare. Got an open pipe to `sh` running as root? Maybe you oughtta set O_CLOEXEC on that fd before you exec and overshare with a child. Got a socket that's been peer authed? The same.

This is pretty basic unix stuff. If you stick to the basics and avoid the siren call of complexity, the security properties remain relatively easy to reason about. Most privileged resources are fds. Mind your fds.

I'm not a huge fan of sending file descriptors over sockets – maybe we agree on that part.


> Unix domain sockets are just files, so they have an owner, group and mode bits. The answer to "who is potentially allowed to run a differently-privileged command?" is just `ls -l /path/to.sock`.

Yeah, except that is not true. To quote unix(7):

       On Linux, connecting to a stream socket object requires write permission on that socket;
       sending a datagram to a datagram socket likewise requires write permission on that socket.
       POSIX does not make any statement about the effect of the permissions on a socket file, and
       on some systems (e.g., older BSDs), the socket permissions are ignored. Portable programs
       should not rely on this feature for security.

So s6 just has a wide, easily exploitable security hole there. Or it's not portable, contrary to its claims.


Lol okay man. Maybe if you're running FreeBSD 4.2 or HP-UX or some BSD derivative from the 90s. All unix systems from about 2000 on will honor unix domain socket permissions.


That's not an exploit, that's just a sequence of basic misunderstandings about how things work on Linux. Which would be fine, nobody knows everything, if they weren't coated with grand claims and not-so-veiled personal abuse.


What's the difference between this and ptracing the bash session that you run sudo under?


None, it's a nonsense "hack"


The linked PoC requires that the attacker already has root so that it can disable the default ptrace protection.


Requires root not just for the ptrace protection, but also to gain membership of the 'tty' group which gives control over all ttys. And then goes all surprised pikachu when it turns out that allows taking over ttys. Duh?
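
For context, the "default ptrace protection" the PoC has to switch off is Yama, which most distros ship enabled:

  sysctl kernel.yama.ptrace_scope   # 1 or higher blocks attaching to non-child processes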


Huh. I'm not at all a fan of how Poettering operates, but it's neither the ideas nor the implementation where I'd fault him. Well, it depends on what you mean by implementation, I guess; I'm talking about the core "how does it do its thing", not the interface by which you use it.

I think Poettering has great ideas and great implementation. It's the execution and interface that are often terrible. If the square peg doesn't fit in the round hole, then he'll always say that the peg is perfect and the world just needs to chisel out the corners of the hole.


What do you mean by "great ideas and great implementation / bad execution and bad interface"? Is this a plumbing vs porcelain distinction?


Yes? Well, partly.

For systemd and pulseaudio, the systems they were replacing legitimately had major problems. There were variants and workarounds that fixed some of these, but no holistic solution that I've ever heard of. There were just so many limitations if you maintained any degree of compatibility. People were (understandably) unwilling to start over and rearchitect something that desperately needed rearchitecting. Poettering designed and implemented replacements that were substantially better, and worked. Worked well, in fact. That's the great ideas & implementation part.

Much of this was enabled by a willingness to throw out compatibility with nearly everything. Backwards, forwards, sideways. If I were making a bold and breaking change like this, I would sacrifice compatibility but try to make up for it by bending over backwards to catch as much of the "once working, now broken" wreckage that inevitably piled up as I could, by creating shims and compatibility stubs and transition mechanisms. I'd certainly listen to people's problems and try to work out solutions.

Poettering, as far as I can tell is more of a honey badger (excuse the dated meme). He just doesn't give a shit. If your stuff doesn't work in the brave new world, then your stuff is broken and is going to have to adapt. That's the bad execution part. (Which is not to say that bending over backwards is always the right approach; it can massively increase the burden on the new system's implementer, to the point that it never happens. There's a reason why Poettering's stuff is taking over the world.)

As for bad interface, this is a lot more subjective, so it's easier to disagree. But the tools to configure and use the new system are done in the style of an isolated cathedral. The tools do a ton of stuff, but they do it all in a new way, and that's great once you learn the blessed paths and internalize the new architecture. But all of your existing knowledge is now useless, and you can't flexibly combine and extend the functionality with the usual tools you'd use (bash, grep, awk, find, sort, tee....) The main functionality of the new system is not new — none of this is really adding fundamental new capabilities, it's just improving things that were already being done. But the way you interface with that functionality is all new, even though it could have been exposed in ways at least a little more unix-like and composable. Instead, the tool author determines all the things you should be doing and gives you a way to do them. If you want more or different, then you're doing something wrong.

Normally, I'd expect something like this to die out as it rubbed up against the surrounding functionality. "Great system, but too much effort when we keep having to fix thing after thing." Surprisingly (to me), in systemd's case in particular, what has actually happened is that the cathedral just keeps expanding to swallow up enough of its surroundings to keep from being ejected.

Maybe it's sour grapes, but my guess is that this was only possible because the previous systems were so bad. esd was a nightmare. sysvinit scripts were baroque and buggy and error-prone. Sure, the first 80% was just plain simple shell scripting. But everything did the last 20% slightly differently or just punted. It was all buggy and idiosyncratic and failed intermittently. Supposedly some of the init system variants managed to corral it all together enough to do actual dependencies and get decent startup speed, but I never used such a system. And based on the quality of the init scripts I've seen from random packages, I'm guessing the successes only happened when a relatively small group of people wrote or repaired a metric shitload of init scripts by hand. And even then, systemd provides more in its base functionality set. Architecturally, it's really quite nice.


> Much of this was enabled by a willingness to throw out compatibility with nearly everything. Backwards, forwards, sideways. If I were making a bold and breaking change like this, I would sacrifice compatibility but try to make up for it by bending over backwards to catch as much of the "once working, now broken" wreckage that inevitably piled up as I could, by creating shims and compatibility stubs and transition mechanisms. I'd certainly listen to people's problems and try to work out solutions.

You do realize that systemd was the only init system that offered distributions a migration path from the sysv-rc init scripts?

daemontools, s6, openrc, upstart all did not have this. systemd was the only system caring about migration and backward compatibility...

> Poettering, as far as I can tell is more of a honey badger

As far as I know, he was the only author of an alternative init system that, for example, did actually talk to distributions to understand which problems they have. Unlike the authors of most alternatives that don't give a shit (and in turn nobody gives a shit about their init). To this day you'll find the s6 author just claim "nobody needs feature X from an init" because they themselves might not need it.


you have the wrong viewpoint. he just has a different opinion than you.

he single-handedly managed to fool RH and all the distros into turning Linux administration into something just like Windows. systemctl's list of services is so inspired by the atrocious Windows admin list of services (which has 3 fields supposed to describe the service, but they all just tell you the name again).

it's no wonder his reward was a job at Microsoft.

but again, he's good in all three aspects. you just disagree on building the torment Nexus that is putting Linux in the "standard certification" target for sysadmins.


I continue to be baffled at this widespread belief that Poettering somehow hoodwinked every single major Linux distro into accepting a shit product with, idk, hypnosis or something.

Is it not possible that systemd is simply better than the alternatives, and the distro owners are smart enough to notice that, instead of just wrapping themselves in cultish mantras about The Unix Way and how anything which resembles a design used in Windows is bad by definition? Or could that not possibly be it, and he must've used mind control magic.


Yes, it has always reminded me of the old "Apple just sells all those shiny devices because they're good at marketing" trope.

As if marketing alone could do that. Poettering does seem to be, to a casual observer, kind of a dick. Arrogant, dismissive of competing products... kind of like that other guy — also kind of a dick — who supposedly had that "reality distortion field" that hoodwinked all those poor saps into buying his phones.

There's no fucking way in hell you are able to do that if the user base doesn't think the product is good. To those saying it, I always reply, "It may not be the product you want, but a shitload of people disagree with you, quite obviously."

I'm not personally a huge fan of the iPhone or systemd. But they are both clearly "the best" for the largest number of people. (And that is even clearer for systemd, as it doesn't cost hundreds or thousands of dollars more than the competing products.)


never said that.

just that his vision was garbage, and everyone knows it. but he stood by it. and nobody was putting in the same energy he was to either offer something better or stop it (rejecting bad ideas also takes energy. see gnome's deep dive into garbage as another example)

Linux is mostly made from scraps (e.g. the entire Bluetooth and wifi stacks) or misguided but funded things. the age of scratching your own itch is mostly gone


> nobody was putting the same energy he was to either offer better or stop it

which was a much easier thing to do, compared to an outsider, considering he was on Red Hat's payroll, along with the people (the gnome/freedesktop crowd) he needed to convince


It's inspired by Apple's launchd.


only in that it's a rewrite of the concepts from inetd, but using dbus and abused for local services.

which is a big part, but not the one most people complain about.

the actual UX is very much Windows-like.


I wouldn't worry too much about that. It's a tricky piece of security-critical software, receiving its first round of outside auditing; of course it has vulnerabilities. Sudo does have the advantage of being much more battle-tested, but that will even out with time; what will matter is how secure it is two years from now.


I think what is perhaps something to consider is how much of an attack surface sudo is and how unaware people are of that fact. Many people think they can configure sudo to be safe to use for unprivileged users, by only allowing specific things to run with it. But they don't realize all the ways it can be abused for privilege escalation. Getting rid of all that configuration removes that false sense of security, which is a good thing; it has been a huge footgun in Linux for decades. Some incompatibility is a price well worth paying for that, imho.


I think these problems are basically negligible because the number of people trying to "configure sudo to be safe to use for unprivileged users, by only allowing specific things to run with it" is negligible. Virtually all users of sudo are using it on their own computer, of which they are the sole user and ultimately the administrator. Even in corporate contexts where the company owns the machine instead of the user, I've only ever seen cases where the use of sudo is unrestricted albeit logged. Where are these organizations where developers or sysadmins are allowed to use sudo but only with whitelisted commands? I don't doubt that some people are doing this, I just think it's not common.

Replacing the whole of sudo with some weird new thing to better support a niche usecase seems disconnected from reality to me.


> Virtually all users of sudo are using it on their own computer which they are the sole user and ultimately the administrator of.

This is not the case at all. The vast vast majority of Linux installs are on servers.


That's not what the parent means. They are arguing it's not generally used to delegate partial root access to unprivileged users, i.e. by adding narrow sudoers rules to allow "some" specific things to be run as root for users who don't have full root access otherwise.

I tend to agree that 99% of use cases are just a convenient way to gain full root for users with full root access. Configuring sudoers for the former use case has long been known to be a bit dangerous, i.e. it's easy to get it wrong and create privilege escalation holes.


Then I propose letting systemd hijack sudo's usefulness only on server installs.


My last job was at a UK bank. All our *nix systems were configured with a specific whitelist of commands that could be run via sudo. We found this an enormous pain in the arse when the powers that be decided to deploy ansible everywhere, and found that none of its "become" methods would work if sudo was set up like that.


I had a job once which had a sudo whitelist, but vi was included. !sh and you had root.
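
A hedged sketch of that footgun (hypothetical user and file): the sudoers rule looks narrow, but any whitelisted command with a shell escape gives the whole game away, which is what sudo's NOEXEC tag tries to mitigate.

  # /etc/sudoers (edited via visudo)
  alice ALL=(root) NOPASSWD: /usr/bin/vi /etc/motd
  # then, from inside the "allowed" editor:
  #   :!sh   -> root shell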


Classic case of #CorporateIT applying white paper "rules" and not understanding what they're doing. If I had a nickel...


Exactly, forms were filled in and boxes were checked off.


I also liked one where you could `sudo rpm -i`


Those environments could continue to use sudo. I'm sure Red Hat will support it until long after we're all dead.


Not even using "su" as become_method? Granted, it would require the root's password, so it's another tradeoff, but...


It's a common finding during pentests, but that's just unconvincing anecdote vs. anecdote. Another argument, besides misconfiguration, is the reduced attack surface from removing the huge complexity of sudo entirely. If your argument is basically that it's fine because nobody is using the complexity of sudo, then I don't quite understand what your objection is to removing that complexity. You might need to manually restore some env vars; what's the big deal?

But I suspect this would just turn into a "doing things differently is bad because it's doing things differently" argument, and that's not a very useful conversation to have.


'systemd-run except with privilege escalation' is a thing I wished for for a long time, needed in production.

Glad they finally made it, too bad it took them so long. (To be honest, it feels like it should have just been part of systemd-run in the first place.)


I mean, systemd-run could do privilege escalation from day one. It's even the default (otherwise overridable by systemd-run -pUser=<user>). I have used systemd-run --shell on countless occasions when I needed a clean root shell without any traces of the current environment.

What is being announced is merely a thin layer of cmdline syntactic sugar over an existing feature, to make it closer to sudo in usage.

So I'm not sure what exactly you were missing?


Currently you have to do `sudo systemd-run --shell` if you want a root shell from a regular user's account.
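
Concretely, the pre-run0 incantation looks something like this (the systemd-run options are documented; the run0 line reflects the announcement, where a bare run0 drops you into a root shell after polkit authentication):

  sudo systemd-run --shell                    # clean root shell, nothing inherited
  sudo systemd-run -p User=postgres --shell   # same idea, as another user
  run0                                        # the announced sugar for the above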


> Virtually all users of sudo are using it on their own computer

Nope. If I had to guess, it's in containers, like Docker. And those run in lots of places, and often in places with easy access to company's cloud account, credit card info etc.


Yeah we had to explicitly set that to No and enable linger to let pulseaudio keep running for users so they can continue to stream sound from their remote browsers in BrowserBox/CloudTabs. Ie at: https://puter.com/app/cloudtabs-browserbox
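
For reference, the usual pair of knobs for this (the username is illustrative):

  loginctl enable-linger alice   # keep alice's user services running after logout
  # plus KillUserProcesses=no in /etc/systemd/logind.conf, as discussed upthread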

Your comment was really well expressed btw. Made your thoughts and emotions very clear about this. Inspiring communication skill! :)


The way I see it, this is actually a good thing. Superuser access should impose a tiny bit of friction in this regard, to enforce discipline where discipline is warranted.

Run0 builds character. :^)


One binary to rule them all, one binary to find them, one binary to bring them all and in the darkness bind them; in the Land of Lennart where the shadows lie. Bwaahahaha.


This gave me a genuine chuckle. Clever humor, thank you.


This is playing on the difference between hoping that sudo does the right thing juggling setuid and capabilities, and having a strict IPC boundary between privilege levels.

It sounds like a great use of systemd, for those who want to use it.


There's like 3 components involved in making setuid safe (the kernel, the dynamic loader, and your exec), and at least one of them wasn't doing its job correctly (the dynamic loader). IPC by definition involves a superset of these components.

There's no reason to think that if you can't make a simple setuid binary safe, you can make IPC safe. IPC is an order of magnitude more involved, especially because in order to gain any effective security you need a 3-way IPC (1st level = the client, which is completely untrusted; 2nd level = the request parser, which is trusted but runs without elevated permissions; 3rd level = the actual elevator process, which must run with elevated permissions).


It's not clear why the request parser would have to be trusted. I assume you're just speaking about the call to execve running in context? That's not much of a request parser. At the point that you tell `run0` to launch a shell, you're not calling the actual commands to the shell the request parser, right?

I also think the notion of an untrusted client is kind of a hashed out thing. As said in the post itself, `run0` is an interface to `systemd-run`. `systemd-run` as a client may be more _involved_ but it doesn't seem like that has any relevance to whether or not it's more secure. It's a separate layer for the insecurity. While sudo is a single process, if it was two processes it wouldn't all have to run as root. That by necessity means something that was previously running as root isn't, which makes it more secure -- not less, right?

The actual elevator process is systemd itself, which already runs as init on every machine that will have `run0`. And since it's by nature always at the top of the process tree, it seems like it's _less_ complex to have systemd-init as the immediate parent process. There are fewer things that can leak into or be inherited by the spawned process.


> It's not clear why the request parser would have to be trusted [...] I assume you're just speaking about the call to execve running in context

No, I'm talking about the part which is going to parse the command line, arguments, environment, decide whether the user is allowed the elevation or not, decide which environment, file descriptors, etc. are to be passed through, etc. All of this must NOT be in the same context as the caller, as it can simply fake all these decisions. You need to handle this from a process running in another context (suid or not).

> While sudo is a single process, if it was two processes it wouldn't all have to run as root

Yes, it would? At least one of them would need to be suid for the actual execution. But the problem is that the process which was NOT suid would be running as the same user as the caller, so by the same reasoning as above -- you cannot trust what it does. The only thing the non-root process would be able to do is massage the request a bit, then forward it (IPC!) to the root/suid process which you CAN trust. We are just moving the security border, and it is not clear what would be gained by it.

In this proposal, instead of a suid binary, you have a constantly running "sudod" process (or worse, pid 1), but otherwise is the same. Everything must be IPC'd to it.

> There are fewer thing that can leak into or be inherited by the spawned process.

To have this IPC complexity just because apparently we can't figure out how to do suid without inheriting anything is bonkers.

As a trade-off you now have a user-accessible IPC system with the _gazillion_ possible vulnerabilities it entails. At least before you needed root to talk to pid1..


> As a trade-off you now have a user-accessible IPC system with the _gazillion_ possible vulnerabilities it entails. At least before you needed root to talk to pid1..

Read the linked post again. This is all already available, and always has been, since forever.


> There's like 3 components involved in making setuid safe (the kernel, the dynamic loader, and your exec), and at least one of them wasn't doing its job correctly (the dynamic loader). IPC by definition involves a superset of these components

Incorrect, because nowhere in the IPC dance are these components exposed to the same untrusted environment as they are with sudo.


Yes, they are. The kernel for obvious reasons. Second, the IPC server now has to handle (and possibly pass through data) from the untrusted environment, unless you are happy with a sudo that does not even ferry stdio. Frankly, having properly working suid (the kernel does most of the job) sounds MUCH easier than having this type of APIs exposed to arbitrary users from pid1. In fact, as per Lennart's last sudo tty bug, the issue was with how sudo was exec()ing the target binary in the _target_ context (not the original context). Having sudo as a global daemon instead of a suid exec is not going to protect you against those; actually may make them worse for all I know.


> One could say, "run0" is closer to behaviour of "ssh" than to "sudo", in many ways.

This is an interesting offhand comment. You could implement a very similar tool by SSHing to localhost.


Indeed, there was a blog by a redhat engineer doing that: https://tim.siosm.fr/blog/2023/12/19/ssh-over-unix-socket


I had to write an ssh client for an embedded system long ago.

Looking at the design, I found it to be sort of messy.

You could restrict commands ssh could invoke, but it didn't seem super secure.

Also scp/sftp was not well designed. You basically had to give ssh access to your system to allow a file to be copied, and there were no real path restrictions.

I personally thought ssh could be much more robust in what you could run and what you couldn't. And scp/sftp could have better filesystem semantics so you could have more security in what you could access.

And I thought having a write-only scp would be really interesting, sort of like a dropbox for people to send you files securely without having to give them ssh credentials. And an anonymous scp/sftp for distribution, or a dropbox, could have been really interesting too.
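
For what it's worth, stock OpenSSH can get most of the way there these days with a chrooted, sftp-only account (sshd_config sketch; the user and path are illustrative):

  Match User dropbox
      ChrootDirectory /srv/dropbox
      ForceCommand internal-sftp
      AllowTcpForwarding no
      X11Forwarding no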


Well, yes, rsync to replace scp. SFTP is also regarded as a hack anyway, imho.

The write-only scp intrigues me. I guess it's not hard to write a program to do that. But, right, that's not easy with standard tools only. The Linux file system was also not designed for that (although it doesn't prohibit such software) I guess.


> The Linux file system was also not designed for that (although it doesn't prohibit such software) I guess.

There's no 'the' Linux file system. There's plenty of file system to choose from.

And, in fact, it would be relatively easy to write a write-only filesystem with FUSE. (https://en.wikipedia.org/wiki/Filesystem_in_Userspace)


>> And I thought having a write-only scp would be really interesting

I think you can achieve that at the filesystem level. At least, a long long time ago I maintained a public server with exactly that functionality. I've forgotten the details now, but if I were tasked with this today my first attempt would be to add a sticky bit like we do with /tmp: chmod +t dropbox/

If you don’t want to allow me to delete or overwrite my own files I believe (but haven’t tested) that chattr +a on the dropbox dir would achieve that.


You can restrict SSH commands by having it serve a restricted shell instead of an arbitrary shell. Like how there are games where you can SSH into a server to play:

https://crawl.develz.org/wordpress/howto#connecting
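
The other standard trick is a forced command tied to the key, one line in ~/.ssh/authorized_keys (key and command are placeholders):

  command="/usr/local/bin/do-something",no-port-forwarding,no-pty,no-X11-forwarding ssh-ed25519 AAAA... user@host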


Technically `sudo -u` can switch to any user on the system while only a limited few would be allowed as ssh targets. Even root might not be allowed as an ssh target if `PermitRootLogin` is set to `no`, which I do on all my systems.


I do use that a lot

  sudo -H -u user bash
after I ssh into a server with my own account. That other user might even be a no login account.


You can just use `-i` instead of `bash`. (This method indeed requires a shell configured, your method is needed with nologin.)


>Even root might not be allowed as an ssh target if `PermitRootLogin` is set to `no`, which I do on all my systems.

would something like PermitRootLogin=localhost punch an enormous hole in your intricate opsec?
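
There's no literal PermitRootLogin=localhost value in sshd_config, but a Match block gets you roughly that (a sketch, not a recommendation):

  PermitRootLogin no
  Match Address 127.0.0.1,::1
      PermitRootLogin prohibit-password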


I've set up tor on some machines to forward ssh as a hidden service for an easy to configure way to get past NAT before. That shows up as a login from localhost. (could be configured differently, with some extra work)

There are so many ways to configure access to a system, each with footguns I'm surely unaware of.


In my previous job, we set up a privileged account on a server with shell set to `git-shell`, leave some shell scripts in $HOME, so that we can do:

> ssh user@privileged-commands ./do-something


It's the usual way to make changes to a privileged file in Emacs; ssh to localhost with root.


'sudo' and 'doas' are inline connection methods in TRAMP, so no need for SSH to localhost. Curious that systemd-run isn't already supported, but I imagine that will quickly change.


I like to have one of my tmux windows be a sudo session with emacs running as root and mostly used to run emerge world or apt update etc. also a window tailing all the logs. I haven’t been on a machine where more than one admin was logged in at the same time in quite some time.


Why not use sudoedit?


Just a side note: sudo is largely maintained by just one dude https://github.com/sudo-project/sudo/graphs/contributors


Don't worry I offered him help. I recently helped xz library too


also worth mentioning: Lennart Poettering

"Poettering is known for having controversial technical and architectural positions regarding the Linux ecosystem"

https://en.wikipedia.org/wiki/Lennart_Poettering


His positions are mostly controversial because he challenges the way things have been done for a long time. Whenever he presents some new idea/architecture my first reaction is often confusion. Why would he change something that has worked so well for such a long time? But then I take the time to read up on the reasoning behind his ideas and then things start to make sense. Even when something isn't exactly broken, there is still room for better solutions.


There have been lots of suggestions for how to improve linux / unix for a very long time.

The first great war I remember, and I'm sure there were more before I was around, was DJB vs everyone. For the most part, I think his designs, "weird" as they were / are, are still better than almost every crackpot variation of them that's come since.


Dude you cannot compare DJB to Pottering.

DJB is a genius, responsible for all of the non-NSA asymmetric cryptosystems, symmetric cryptosystems, and authenticated encryption algorithms supported by TLS (curve25519, chacha20, Poly1305). He's also the one who got us off of the footgun-by-design, broken-random-number-generator-will-spray-your-privatekey-everywhere nondeterministic nonce signature schemes prior to Ed25519 (the first standardized signature scheme which required deterministic nonces). Oh yeah and the only post-quantum cryptosystem that OpenSSH was comfortable shipping.

And pottering gave us pulseaudio. The gift that keeps on giving.


What I said was that the first holy war of unix I remember is the DJB vs everyone else.

As far as I can tell, as odd as DJB's designs may have seemed, they were and are ... way better than what was and still hold up today; most of the following "lets unix better" designs seem to just adopt some of DJB's designs, typically poorly.

Systemd certainly seems to have cribbed elements of daemontools et al, but seemingly none of the notion of "least privilege" ...


I think maybe your memory deceives you?

The great thing about unix is that there are no "wars" over these things, because everybody gets to decide for themselves.

Well at least that's how it was before systemd -- and all of DJB's unix work long predates systemd. By the time systemd came around DJB had been focusing on ECC exclusively for almost a decade.

The way I remember it is that most people didn't understand DJB and just kinda ignored his work, while a bunch of other people recognized what he was on to and integrated his ideas into software with friendlier user interfaces. For example, runit, which is PID 1 for Void Linux to this day, and s6, which is PID 1 for both Liminix ("NixOS-on-your-wifi-AP") and Spectrum ("Qubes for Nix"). Indeed, increasing numbers of NixOS users are ditching systemd for s6.

Anyways I don't remember anything close to a "holy war".


He's controversial because numerous times his ego has so severely clouded his judgement that he refuses to see egregious bugs in his programs for what they are. Just one example: https://github.com/systemd/systemd/issues/6237#issuecomment-...

The "people hate him because he makes new stuff" narrative is just more ego-protecting cope. Many developers of other new systems are widely respected and appreciated because their stuff works and they stay humble. Wireguard and Pipewire devs don't get hate poured on them in HN discussions because their shit works, solves problems people have, and because they know how to deal with people.


Or in Linus Torvalds' words[1]:

It does become a problem when you have a system service developer who thinks the universe revolves around him, and nobody else matters, and people sending him bug-reports are annoyances that should be ignored rather than acknowledged and fixed. At that point, it's a problem.

[1]: https://lkml.org/lkml/2014/4/2/580


This is about Kay Sievers, not Lennart Poettering.


Same difference


But even then, system service developers don't try to 'own the whole world', so to speak, and so they do need to play nicely with others. Mr. Poopering's philosophy is that the minute a dependency's maintainer becomes a thorn in his side, he absorbs that project into systemd. The distribution packagers follow like starving dogs on a hunt.


> Mr. Poopering

This is childish and petty, I suggest you delete your account.


You can't delete your HN account.


Interesting! IANAL, but I think this should be basic functionality, ever since the recent-ish European and Californian privacy regulations. Although I think a quick e-mail to [email protected] would suffice.


Would it really? Asking cause genuinely curious, literally the only online forum I can't remove my past public information from is HN.


Even if you delete your account, it wouldn't really matter that much. Whole HN is probably crawled and archived on a daily basis due to a simplistic API


No thanks!


> Just one example: https://github.com/systemd/systemd/issues/6237#issuecomment-...

1. He gave a clear reason why it is how it is.
2. He realizes it is/might be frustrating.
3. Even `adduser` will not allow it by default.
4. The issue that it still runs the unit even with config errors has been addressed: https://github.com/systemd/systemd/commit/bb28e68477a3a39796... (~2 weeks after the issue was opened)


His reason, although clear, is also plainly wrong. Such usernames, although bizarre, may be encountered by SystemD, so it shouldn't break when it sees them. Computer programs, particularly important ones, should be conservative in what they emit and liberal in what they accept, and that means not breaking when they encounter weird but technically permissible usernames. His response should have been "Golly, that's a weird username, I didn't think that was possible," followed by fixing the bug.


There is a certain personality type that likes to reimagine that their original thinking was not flawed, even when presented with a detail that they did not incorporate into their original thinking. If the detail had been in their awareness from the start, they would have arrived at a different position, but they are bound to a strict sense of linearity for reasons inexplicable to me except for ego protection.


Alternatively, if, like he says in the comments of that bug, he really means that SystemD shouldn't support systems that allow such usernames, then he should ensure SystemD won't run on such systems.

Silently doing the wrong thing is not a good thing, especially when "doing the wrong thing" is running stuff as root that wasn't supposed to run as root.


Disclaimer: I know nothing about the particular bug. Postel's Law has its tradeoffs, and its fuzzy lines are a nice place for security issues to arise.


For sure, there are limits. In this particular case, maybe we say that SystemD shouldn't support weird usernames beginning with numbers, but the other half of the law should still apply. The conservative emission would be logging an error message, not running that unit file as root.


> 3. even `adduser` will not allow it by default

5. useradd does allow it (as noted in a comment).
6. Local users, and the utilities that create them, are not the only source; there are things like LDAP and AD.

7. POSIX allows it:

* https://github.com/systemd/systemd/issues/6237#issuecomment-...


Is this guy still hated and receiving death threats? Also, I didn't know he is working for Microsoft now; that's an interesting career change.


And explains so much!


And another of the systemd devs, Kay Sievers, was banned from contributing to the Linux kernel due to his bad attitude and unwillingness to collaborate.

Poettering and Sievers are skilled devs with huge egos.


For three decades. I suspect he hasn't seen much money for the work, but hopefully I'm wrong.


From his personal page:

For the past 30+ years I’ve been the maintainer of sudo. I’m currently in search of a sponsor to fund continued sudo maintenance and development. If you or your organization is interested in sponsoring sudo, please let me know.[0]

[0]: https://www.millert.dev/


Sounds like a prime candidate for the Linux, Apache, Mozilla, etc. foundations.

Y'know. Before some strangely-named benefactor from within the UTC+03:00 time zone swoops in.


Seems like that might be an issue.

From his website…

>I’m currently in search of a sponsor to fund continued sudo maintenance and development. If you or your organization is interested in sponsoring sudo, please let me know.


So many critical components in our system are maintained by just a random good guy on the internet.

I can't help but think of another XZ crisis that is yet to come.

https://xkcd.com/2347/


I think this may be the more accurate sentiment:

I can't help but think of all the other xz crises yet to be discovered


Did you expect more?


I have seldom come across unix multiuser environments getting used anymore for servers. It's generally just one user on one physical machine nowadays. I understand run0's promise is still useful, but I would really like to see the whole unix permission system simplified for just one user who has sudo access.


> across unix multiuser environments getting used anymore for servers

I guess it depends on the servers. I'm in academic/research computing and single-user systems are the anomaly. Part of it is having access to beefier systems for smaller slices of time, but most of it is being able to share data and collaboration between users.

If you're only used to cloud VMs that are setup for a single user or service, I guess your views would be different.


> If you're only used to cloud VMs that are setup for a single user or service, I guess your views would be different.

This is overwhelmingly the view for business and personal users. Settings like what you described are very rare nowadays.

No corporate IT department is timesharing users on a mainframe. It's just baremetal laptops or VMs on Windows with networked mountpoints.


Multi-user clusters are still quite common in HPC. And I think you're not going to see a switch away from multi-user systems anytime soon. Single user systems like laptops might be a good use-case, but even the laptop I'm using now has different accounts for me and my wife (and it's a Mac).

When you have one OS that is used on devices from phones, to laptops, to servers, to HPC clusters, you're going to have this friction. Could Linux operate in a single-user mode? Of course. But does that really make sense for the other use-cases?


you could potentially create multiple containers on that machine which are single-user and give one to every user who needs access. CPU/memory/GPU can be assigned in any way you want (shared/not shared). Now no user can mess up another user.


Isn't that just reinventing multiuser operating systems? Normal Linux already has the property that no user can mess up any other user (unless they are root or have sudo rights)


no it is not


It's not "that machine" it's a cluster of dozens or hundreds of machines that is partitioned in various ways and runs batch jobs submitted via a queuing system (probably slurm).


Not containers, but cgroups, and that is how HPC clusters work today. You still need multiple users though.


is it? most HPC (if GPU clusters count) are probably in industry and managed by containers


HPC admin here.

Yes. First, we use user level container systems like apptainer/singularity, and these containers run under the user itself.

This is also same for non academic HPC systems.

From schedulers to accounting, everything is done at user level, and we have many, many users.

It won’t change anytime soon.


I thought most containers shared the same user, ie. `dockremap` in the case of docker.

I understand academia has lots of different accounts.


Nope, full usermode containers (e.g.: apptainer) run under the user's own context, and furthermore under a cgroup (if we're talking HPC/SLURM at least) which restricts the user's resources to what they requested in their job file.

Hence all containers are isolated from each other, not only at process level, but at user + cgroup level too.

Apptainer: https://apptainer.org
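
For anyone who hasn't seen this model, a rough sketch of what a job submission looks like (resource numbers and image name are made up; the real scheduler options vary per site):

    #!/bin/bash
    #SBATCH --job-name=demo          # SLURM enforces these limits via a per-job cgroup...
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=02:00:00
    # ...while the container runs under the submitting user's own UID, not a shared one:
    apptainer exec my_image.sif python analyze.py

Submitted with `sbatch job.sh`; nothing in it runs as root.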


I think an admin would better understand the system if there were only one subsystem doing a particular type of security, not two. Two subsystems doing security would lead to more problems down the road.


For HPC, there are two different contexts where users need to be considered - interactive use and batch job processing. Users login to a cluster, write their scripts, work with files, etc. This is your typical user account stuff. But they also submit jobs here.

Second, there are the jobs users submit. These are often executed on separate nodes and the usage is managed. Here you have both user and cgroup limits in place. The cgroups make sure that the jobs only have the required resources. The user authentication makes sure that the job can read/write data as the user. This way the user can work with their data on the interactive nodes.

So the two different systems have different rationales, and both are needed. It all depends on the context.


If we forget how the current system is architected, we are looking at two problems. The first is that Linux capabilities also deal with isolating processes so they have limited capabilities, because user-based isolation is not enough. The second is that local identity has no relation to cloud identity, which is undesirable. If we removed user-based authentication and relied on capabilities only, with identity served by the cloud or Kubernetes, it could be a simpler way to do authentication and authorization.

I'm not sure I even follow...

The primary point of user-authentication is that we need to be able to read/write data and programs. So you have to have a user-level authentication mechanism someplace to be able to read and write data. cgroups are used primarily for restricting resources, so those two sets of restrictions are largely orthogonal to each other.

Second, user-authentication is almost always backed (at least on interactive nodes) by an LDAP or some other networked mechanism, so I'm not sure what "cloud" or "k8s" really adds here.

If you're trying to say that we should just run HPC jobs in the cloud, that's an option. It's not necessarily a great option from a long-term budget perspective, but it's an option.


there is no reason for users to be maintained in the kernel.


Can you elaborate on that?


Containers rely on many privilege separation systems to do what they do, they are in fact a rather extreme case of multi-user systems, but they tend to present as “single” user environs to the container’s processes.


> they are in fact a rather extreme case of multi-user systems

Are they? My understanding was that by default, the `dockerd` (or whatever) is root and then all containers map to the same non-privileged user.


Good software hides complexity. The user should not have to understand user/group permissions, suid, etc.


> No corporate IT department is timesharing users on a mainframe

Not a mainframe perhaps, but this sentiment is flat wrong otherwise, because that is how Citrix and RDS (fka Terminal Server) do app virtualization. It's an approach in widespread use both for enterprise mobile/remote access, and for thin clients in point of sale or booth applications. What's more, a *nix as the underlying infrastructure is far from unusual.

I have first-hand insider knowledge of two financial institutions that prefer this delivery model to manage the attack surface in retail settings, and a supermarket chain that prefers it because employee theft is seen as a problem. It’s also a model that is easy to describe and pitch to corporate CIOs, which is undoubtedly a merit in the eyes of many project managers.

One of the above financial institutions actually does still have an entire department of users logged in to an S/390 rented from IBM. They’ve been trying to discontinue the mainframe for years. I’m told there are similar continuing circumstances in airline reservations and credit card schemes; not just transaction processing, but connected interactive user sessions.

This is what corporate IT actually looks like. It is super different to the tech environments and white-collar head offices many of us think are the universal exemplar.


I wonder if they might be more common than you think. You will never see someone standing up at a conference and describing this setup, but there are millions of machines out there quietly doing work which are run by people who do not speak at conferences.

Where I work, we have a lot of physical machines. The IT staff own the root account, and development teams get some sort of normal user accounts, with highly restricted sudo.


I always still split up "sysadmin" from "deploy".

Ephemeral setups (amongst which k8s) remove that need but introduce a big load of other stuff.

Having a VPS that is managed by sysadmins (users with sudo rights, authed with keys) and on which partly overlapping "deploy" users can write to small parts and maybe do a passwordless "sudo sysctl restart fooapp" but only that, is a nice and simple setup.

I manage at least seven of these. And nothing in me even considers porting this to my k8s infra.

Edit: The reason for this setup is simple and twofold: deploy is safe and clear: deployers can be confident that whatever crap they pull, the server will churn on, data will be safe, recovery is possible. And all devs/ops having their own keys and accts gives a trail, logs and makes it very easy to remove that contractor after she did her work.
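
For concreteness, the deploy half of that is basically a one-line sudoers drop-in (user, file, and service names here are made up):

    # /etc/sudoers.d/deploy-fooapp  (edit with visudo -f)
    deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart fooapp.service

Everything else the deploy user tries through sudo gets refused and logged.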


Yes, we are moving more and more towards a system of immutable deployments.

That's good! We don't patch executable binaries these days: we just compile a new one from source, when we made a change. Similarly, more and more we just build new systems (or their images) from source, instead of mucking around with existing systems.


I think you mean systemctl.


He probably meant sysadmin as in the account with sudo access.


s/sysctl/systemctl/


Correct. Typed it on mobile.


NixOS may be helping multiuser make a comeback; at least it is for me and my home servers. I no longer have to containerize my apps. I can have one baremetal server with a dozen+ services, all with their own users and permissions, and I don't have to actually think about any of the separation.

Plus there’s network shares. Multiple people in my home with linux PCs, each with their own slice of the NFS pie based on user perms. Sure, it’s not secure, but these are people I live with, not state-sponsored hackers.

All that said, I’d also love a simpler single-user perm setup. For VMs, containers, etc it would be amazing


> I can have one baremetal server with a dozen+ services, all with their own users and permissions

I've used nixos and I don't really see how nixos is special apart from the declarative config. The same can/should be done with any distro and any config manager.

And unless you were running Podman in rootless mode, the same setup applies to containers too.


Sure, I could do this on Debian, but, like, I won't. Some software comes packaged with nice scripts to provision new users for running systemd services, but a lot does not.

For me and my home network, if the default security mode is “manage users yourself”, I chmod -R 777 on all applicable files and call it a day. NixOS lets me be lazy, as all NixOS modules (that I've ever used) have their own user setups with minimal permissions by default.


I’m not sure how “I don’t have to actually think about any of the separation” meshes with the fact that you explicitly setup multiple users and configured file and group permissions accordingly. You clearly put a lot of thought into it.

Alternatively, containers really are a no-thinking-required solution. Everything maximally isolated by default.


Containers are isolated but a far, far cry from maximally isolated. They’re still sharing a Linux Kernel with some hand waving and cgroups. The network isolation and QoS side is half-baked even in the most mature implementations.

HVM hypervisors were doing stronger, safer, better isolation than Docker was 10 years ago. They are certainly no-thinking-required though, which leads to the abysmal state of containerized security and performance we have currently.


> I’m not sure how “I don’t have to actually think about any of the separation” meshes with the fact that you explicitly setup multiple users and configured file and group permissions accordingly. You clearly put a lot of thought into it.

That's the thing, with NixOS you usually don't have to explicitly setup users and permissions. For most simple services, the entire setup is a single line of code in your NixOS configuration. E.g.

    services.uptime-kuma.enable = true;
will make sure that your system is running an uptime-kuma instance, with its own user and all.

Some more complex software might require more configuration, but most of the time user and group setup is not part of that.


There have been no big CVEs for container escapes in a while now. I guess it can be considered secure enough.


A lot of Kernel privescs are also technically container escapes, so 2 months ago was the last one actually: https://www.cvedetails.com/cve/CVE-2024-1086/


but then even traditional multi-user would be compromised in this case.


Containerisation (either with containers or via VMs) doesn't have to be expensive.

In principle, you can have just exactly the binary (or binaries) you need in the container or VM, without having a full Linux install.

See eg Unikernels like Mirage.


DietPi does exactly the same using Debian


NixOS at it again :)


Many, many daemons run under their own users. Just because a single human is using the system, it doesn’t mean the system has a single user.

Also, people noted HPC, and other still very relevant scenarios.


You'll just end up implementing multiuser support anyway, due to different permissions for different devices and services.


How about only in servers where you only have CPU/Memory/disk/GPU with open source trusted drivers?


Visit the research computing environment sometime, for instance. The liblzma SSH compromise was considered very worrying, after all.


That didn't need multiple users.


No, but that's the case I've overwhelmingly seen over the decades. Anyway, are you going to redesign ssh not to require a user, for instance? I assume you wouldn't want sshd running as the putative single user.

[I'm all for replacing notions of privileges/permissions with capabilities.]


Yes, I'd rather that the sshd daemon ran with a restricted set of capabilities.


Technically not with virtual machines as the hardware is shared, though I agree, nowadays accounts and access control of the system belong to the virtualization layer below. The benefits of multiple accounts per machine are tiny and not worth the complexity for server setups.

We could significantly simplify things by getting rid of the account system. The same could be said for a lot of systems like database servers. Typically it's just one database, one user (your application server) with full access. The account system is mostly an annoyance.

For big company use cases where you want to reduce attack surface, why not spawn a second server with different credentials? Anyway big companies typically have many database servers in a cluster and the same credentials are shared by many server processes... The tendency there is literally in the opposite direction.


>> Typically it's just one database, one user (your application server) with full access

This is a terrifying way to access databases.

A superuser; a modify user (just below super, but can't delegate rights) for schema changes; a read/write app user... probably a pile of read-only users who have audit trails... You might want some admin or analytics users (who have their own schema additions).

The words security and audit trails all spring to mind.
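
As a rough sketch of that split (PostgreSQL CLI, role names invented):

    createuser --no-superuser --no-createrole schema_admin   # schema changes, can't delegate rights
    createuser --no-superuser app_rw                         # the application's read/write login
    createuser --no-superuser reporting_ro                   # read-only reporting/audit access
    psql -d appdb -c 'GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_rw;'
    psql -d appdb -c 'GRANT SELECT ON ALL TABLES IN SCHEMA public TO reporting_ro;'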


A simpler solution is to simply not give direct access to the database to anyone who doesn't own a large stake in the project. Expose it via a more restrictive CRUD interface with access control in the application layer.


In some other systems the concept has become overloaded. Instead of multiple real people as users, different software with different permissions are different users. It's not a bad abstraction.


Maybe containers are a better way of isolating processes as mentioned in other comments.


The humans are now spawns of multithreaded shells and other things. Linux land is still very multiuser oriented. But it is the rise of the machines instead.


You only have one admin? How do you know who logged in, ssh certificates?


Only one human per machine. If you need to share the machine, make multiple containers and give everyone a separate container.


You don't run any services where more than one person shares responsibility for managing that service? E.g. kubernetes. That is just one guy holding it up?


In an on-prem cluster, yes one guy or a few sysadmins who either share passwords or can somehow put their keys in the authorized keys file and ssh.

In the cloud, AWS/GCP decide whether or not an IAM user can reach a server.


That's convenient but doesn't scale and really not too great for security for a bunch of reasons, but it can work great for smaller teams and minimize friction.


Signed ssh certs make your life easy here
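
For reference, the flow is roughly this (key names and principals invented):

    ssh-keygen -f user_ca                                             # generate the CA key pair once
    ssh-keygen -s user_ca -I alice -n deploy -V +52w alice_key.pub    # issue alice a cert valid for a year
    # then on each server, trust the CA instead of managing per-user authorized_keys:
    #   TrustedUserCAKeys /etc/ssh/user_ca.pub   (in sshd_config)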


Maintaining your own PKI isn't exactly easy unless it's your full time job.


It's fairly easy to get set up and, done correctly, pretty low maintenance. But I have done it a few times at this point.


Do you run everything as root, or how else am I supposed to understand that?

Sudo exists to execute commands with a different user. It's an abbreviation of "switch user (then) do" for a reason.

Most daemons run under a specific user. Things like Docker that use a root daemon are a security nightmare.


You don't need to use Docker. Containerd, or just direct cgroup manipulation: https://access.redhat.com/documentation/en-us/red_hat_enterp...
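
If "direct cgroup manipulation" sounds mysterious, it's roughly this on a cgroup v2 host (paths assume the usual /sys/fs/cgroup mount; the cpu/memory controllers must be enabled for child groups):

    sudo mkdir /sys/fs/cgroup/demo
    echo "200000 1000000" | sudo tee /sys/fs/cgroup/demo/cpu.max   # 200ms of CPU per 1s period, ~20% of a core
    echo 1G | sudo tee /sys/fs/cgroup/demo/memory.max              # hard memory cap
    echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs            # move the current shell into the group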


I've never understood the need for sudo(1) on single-user, physical machines: I keep a root shell (su(1)) around for admin tasks, and it's always been sufficient.


Everything I run with sudo is logged so I know how I messed up.

Nothing worse than ansible with its “sudo /tmp/whatever.sh” which hides what it’s doing.


> Everything I run with sudo is logged so I know how I messed up.

FWIW, shells have a (configurable) history file. I'm not sure how it compares to sudo's logging though. I also personally perform little day to day admin tasks (I don't have as much time nor interest to toy around as I used to, and my current setup has been sufficient for about a decade).

> Nothing worse than ansible with its “sudo /tmp/whatever.sh” which hides what it’s doing.

That's a nightmare indeed; for sensitive and complex-enough tasks requiring a script, those scripts should at least be equipped with something as crude as a ``log() { printf ... | tee -a "$logfile"; }`` helper.
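
Spelled out a bit more (names made up, GNU date assumed):

    logfile=/var/log/whatever.log
    log() { printf '%s %s\n' "$(date -Is)" "$*" | tee -a "$logfile"; }
    log "starting cleanup step"

You get the output on the terminal and a timestamped trail on disk.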


It's just maybe an easier way to not have to go to the root shell.


Makes sense (I keep one warm in a tmux, two shortcuts away at most, so it never occurred to me).


One password is easier than two and it feels weird to use the same password for both accounts. About half of my sudo invocations are 'sudo su' lmao.


You could probably save a process with `sudo -i`


Slightly less convenient to type.


You're entering your own account's password, not root's, when you use sudo. It's a security measure to prove your shell hasn't been hijacked and to make you pause and acknowledge you're running a command that may affect the entire system.

You can also disable it in the sudoers file.


> it feels weird to use the same password for both accounts

I'm not sure different passwords adds more protection for single-user machines, especially when sudo(1) can spawn root shells!


of mine are sudo bash.


Scripting.


We use Userify which manages multiple user logins (via SSH) and sudo usage.. there are definitely many, many use cases for teams logging into remote servers, and most security frameworks (PCI-DSS, HIPAA, NIST, ISO 27000) require separate credentials for separate humans. Sudo has some issues, but it works very well and is well understood by many different tools.


It could all be simplified and map one to one to your identity provider credentials at a higher level. Having a complicated user system on the servers makes it a problem.


Userify doesn't seem complicated.. it is just Linux users, created with adduser just like you'd type in at the command line: https://github.com/userify/shim/blob/master/shim.py#L227


Seems it uses useradd not adduser


access management is usually delegated to other systems that supervise UNIX, like AWS


Or Kubernetes. That's where a standard way of doing authentication/authorization should live.


“I haven’t seen it” doesn’t mean it doesn’t exist.


Existing does not mean it should keep existing if it is unnecessary complexity from the past.


Yeah, but elevated permissions may be needed from time to time anyway, whether on the client, the baremetal server, or the container. Running everything as root is not recommended even for containers. Considering how popular these have become, it's a bit ironic that systemd isn't available in the container without considerable detours.


One user with sudo for sysadmins on baremetal, and sudo access without CAP_SYS_ADMIN in the container, should be good.


I like seeing qmail as a blueprint for how a secure app that needs elevated permissions should be designed; in fact, it has 7 users.


I wonder what other existing programs systemd will attempt to replace in the future.

My bet is /bin/sh; maybe they'll go further and replace the entire POSIX utilities.


filesystemd, replacing ext4/btrfs/etc.

It will come with `filectl` for all your file operations, so you will no longer need `cd`, `pwd`, `touch`, `rm`, `mkdir`, `cat`, `grep`, `find`, etc. Instead you do everything through `filectl` commands.

This will deprecate many commands from GNU coreutils, which is a good thing because replacing things is always good.

Then, since programs are just files, and filesystem will be part of systemd, any program you want to use will obviously have to go through systemd as well, meaning they will need to be a service unit of type `oneshot`, because this way we keep everything well integrated together.

Don't worry tho, you only write the unit files once and they work forever. The only thing you need to remember is that, instead of `cargo build` you'll need to use `filectl exec -u cargo build` (`filectl exec -u` is only 3 words, so you don't have the right to ever complain about this tiny little change).

Anyone who doesn't like these changes is stuck in the past.


> The only thing you need to remember is that, instead of `cargo build` you'll need to use `filectl exec -u cargo build`

No, you forgot about `buildctl` which compiles any language into systemd bytecode, that runs on the systemd VM. At long last, write once, run anywhere!


> instead of `cargo build` you'll need to use `filectl exec -u cargo build`

You joke, but I already have to do this with lxc commands, and the systemd-compatible version of those commands is even longer than you imagined. See my other comment for details.


And one day you boot into single-user mode and can't execute a file because you can't connect to the systemd D-Bus: error 0xDEADBEEF, contact your systemd administrator.


> You will no longer need `cd`, `pwd`, `touch`, `rm`, `mkdir`, `cat`, `grep`, `find`, etc. Instead you do everything through `filectl` commands.

This is not as ridiculous as it sounds. Arguably, the file system is more of an exception, because it is directly exposed by the kernel. But, for example, to manage the files in a tar archive you do everything with the `tar` command and to manage a git repository you do everything with the `git` command.


busyboxd?


Let’s call it GNU/systemd/linux


I think it's more that they're replacing the traditional GNU programs with systemd.

That will result in systemd/Linux.


Guix will reimplement the POSIX utils and extras with tools written in Guile.


Please say this is a joke


https://nlnet.nl/project/Gash/

It wouldn't be a bad idea. Also, Guile's JIT could be interesting there.

Also, a shell with a live REPL instead of failing on errors can be pretty interesting.


“gash” oh my, what a name. Do you think they know?


Hopefully glibc and coreutils!


I mean, it already has replaced the shell in a lot of ways. Systemd is basically the one reason why it's feasible to banish POSIX shells to where they belong, albeit still not usually practical.


Overall, this seems great.

However...

> [...] by default it will tint your terminal background in a reddish tone while you are operating with elevated privileges

?!! ouch ... seems orthogonal to the actual important parts.

Disclaimer: I didn't try it.


This is a perfect example of a choice that a developer makes to suit his/her personal preference and environment, believing that everyone does (or should) use their computer the same way. Which is sadly becoming a more common trend.

I like the idea, but I don't think it should be on by default. The rest of us have just used root-specific shell prompts for the last few decades or so.


It's fine.

Not every piece of software needs to be infinitely configurable and open source just in case the configuration options don't cover everyone's needs.

We need opinionated software. If you refuse to make any choices for me, you can't even give me an assembly editor, for fear of forcing your CPU arch of choice on me.


I don't understand the part about the assembly editor, but I'm not sure I agree with the rest.

Whenever I hear someone describe their software as "opinionated," I have found that it usually means the developer thinks they are smarter than everyone else, with all of the unfriendly attitude that usually comes along with that.

Whoever made the decision that run0 should turn your terminal red by default doesn't understand that there are practically infinite terminal configurations out there that this will interfere with or be outright incompatible with. My argument is that the decision comes from a place of ignorance of the sheer diversity of the users of the software, not from a place of, "we are so smart, and are the first ones to think of this feature."


You can't think of ways this could break things? I would find this a useful feature, but I'm also aware of how this works, and the issues it could cause.


It could, but this is a non-default tool focused on new use so the first question I’d ask is how many of the people using it are running the weird edge-case terminals where that’d break something. I wouldn’t want to end up in a Microsoft-style trap where nothing can improve because someone somewhere depends on strict fidelity with 1993.


No? Of all the esoteric escape sequences that terminals handle the ones that change colors are well trodden.


There are at least 3 different ways of expressing colour as covered by https://en.wikipedia.org/wiki/ANSI_escape_code#Colors, and given the wide propensity of newer terminals to misidentify what they are (I know I have some additional checking in my shell startup to unbreak things if needed), and/or bad termcap/terminfo settings on older systems, sending terminal sequences that are apparently supported but are not happens surprisingly often (enough such that I've made sure to always install two different terminals which use different rendering backends, e.g. xterm and VTE).
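
Concretely, the three forms look like this; the last line resets the background, which is the part that tends to get forgotten:

    printf '\033[41m'            # classic 8/16-color red background
    printf '\033[48;5;52m'       # 256-color palette, index 52
    printf '\033[48;2;80;0;0m'   # 24-bit "truecolor" RGB
    printf '\033[49m'            # back to the terminal's default background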


Will you install a new systemd version on such an old system with wrong termcap/terminfo settings or attached to a physical vt100?


Counterpoint: GNOME and the modern GTK framework

(I needn't say more.)


Gnome is great if you're willing to do things their way. I like Gnome a lot, actually.


Gnome's UI peaked at 2.32. Find out how the users operate and implement accordingly, so users can work efficiently. Don't make changes just to make your mark, or to make users work on the desktop the way they do on a cell phone. That is so basic.


I don't know if you are DE shopping, but I've been very happy for the past few years with the MATE Desktop Environment, which "...is the continuation of GNOME 2. It provides an intuitive and attractive desktop environment using traditional metaphors for Linux and other Unix-like operating systems."

https://mate-desktop.org/

Among a great number of things I really like, I will mention that Caja, the MATE version of GNOME 2's Nautilus file manager, can still be switched to spatial mode.

https://en.wikipedia.org/wiki/Spatial_file_manager

Generally speaking, I too really liked GNOME 2.32 and its predecessors, and, as far as I'm concerned, MATE is as it describes itself.

EDIT: Wording mistake.


I have no idea why Gnome keeps on insisting on breaking expectations. From the shell as a whole to the widgets and even the window titlebars they seem to insist on being different for the sake of being different.


Because their choices are better, at least for some of us. Users who prefer the traditional desktop paradigm have a wealth of alternative DEs to choose from.


I suppose my brutally minimalist Sway config with barely there titlebars and a skinny little status bar and not an icon, button, or widget in sight doesn't give me great standing to call for a respect of conventions.

I suppose I should say I found Gnome's luridly chunky decorations and widgets to be personally offensive.


What widgets? Gnome has just the one black bar at the top.


And it's a thick monster with all kinds of extra crap (in my not so humble opinion) shoved in it.


It literally has an activities button, the time and date, and a tiny button for interacting with settings on the right.


Gnome is always getting better (if you want to do things the Gnome way). Why should a DE show any elements begging to be clicked on my desktop while I am working (window list, etc)? I am only interested in what's in my IDE, terminal, and browser. Present Gnome comes closest to my ideal of fading into the background and letting me focus on my tasks.


These days KDE gets a lot of the same things right that GNOME 2 did.


Yes. I prefer having tools that do one thing well. That's the point of unix. How the user uses them should be up to her.

GNOME offering a monolithic environment with heavy opinionation is the opposite.


How is GNOME a "monolithic environment"? The entire GNOME ecosystem is basically small apps that do a single thing well:

https://apps.gnome.org/


So is X11, by that logic

https://cyber.dabamos.de/unix/x11/


That page literally just lists all sorts of random apps that work under X11, so yeah? Of course it's not a single monolithic system.


actually you do, because gnome is great


I tried it a bit ago (when it was still called uid0, pre-release), I also wasn't a fan of the tinting.

I like the intent behind it, but some terminals already tint the header color when running sudo; I haven't tested if it's done specifically for sudo or if it's done in a more generic way that could handle this as well.


I can think of a number of things this tinting would break.


It violates the principle of Least Surprise; if I'm invoking run0 I'm expecting it to run my program with a different UID and return the same stdout I'd have gotten if I had just run the program in my shell. Not inject a whole bunch of color control bytes in there. Which hopefully my terminal will handle. Unless it doesn't.

I'll give them the benefit of the doubt and assume they only do this if $TERM supports color. But still. That $TERM variable can surprise a poor programmer in all sorts of ways.


But sudo already doesn’t do that either. Eg sudo may ask for a password, and output some control sequences which hide the text so your password is not visible.

This feels like much ado about nothing.

Edit: Also don’t forget the “with great power comes great responsibility” blurb that sudo likes to output. I know that doesn’t happen in scripts when output is redirected, but I’m sure run0 will figure that out too.


asking for a password to do an authenticated action is about as far away from surprising as I can legitimately reason about.

The contextual blurb does have a way of disabling it in a persistent config, which is easy enough to set. It also goes to stderr and not stdout and does nothing to alter the output of the command itself.

It also does not show if you have NOPASSWD: set in your sudoers. So even less surprising.
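
Both of those are ordinary sudoers settings, for anyone who wants them (user name invented):

    Defaults lecture = never           # permanently drop the "great power" blurb
    alice ALL=(ALL) NOPASSWD: ALL      # no password prompt at all for this user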


> sudo may ask for a password, and output some control sequences which hide the text so your password is not visible.

You can turn this off for certain users and/or programs.


Any sane command line program will only output color codes if isatty(STDOUT_FILENO) succeeds.
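
The shell-script equivalent of that check, for anyone writing their own wrappers (just a sketch):

    if [ -t 1 ]; then                      # stdout is a terminal
        printf '\033[31m%s\033[0m\n' "colored output"
    else
        printf '%s\n' "plain output for pipes, files and cron"
    fi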


That can succeed in a number of cases, where it actually isn't a tty with a user on the other end of it. There are a number of internal tools at work that only output logging if there is a tty and thus are run in their cronjobs with a tty. If there were unexpected color outputs in the logs, that would suck since the log aggregators probably wouldn't know what to do with it.


Can you name any?


Containers run with a tty attached but no console on the other end, and then you try to read the logs, for starters. Additionally, as mentioned, conflicts with the user's color scheme or even the program's. Further, it's possible to do this without the help of run0, so I suspect users already doing that are going to get their colors messed up and be annoyed. For example, prod machines are usually red, and running as root on a prod machine is royal purple. If this is used, seeing a red background instead of a purple one is likely non-desirable.


The user's choice of color schemes.


XMODEM and the like expect the terminal stream to not have garbage added by e.g. run0.

The terminal line should be clean between XMODEM at the terminal emulator and at the client end.


Is xmodem still used outside of computer museums (including private computer museums)?


It's e.g. still very popular with embedded development. One example: u-boot supports it.

It is the easiest way to upload an image to u-boot, as it does use the same terminal, thus there is no need to set up a secondary path; If you can talk with the u-boot CLI, you can also upload with xmodem.


If the program you're running prints red text, then you get red bgcolor with red fgcolor. Good luck reading that.

(Also, users with the wrong color scheme get that experience by default. Though that is a niche use case enough that I'd be surprised if systemd devs cared about it.)


> I also wasn't a fan of the tinting.

From the linked mastodon thread:

> For example, by default it will tint your terminal background in a reddish tone while you are operating with elevated privileges. That is supposed to act as a friendly reminder that you haven't given up the privileges yet, and marks the output of all commands that ran with privileges appropriately. (If you don't like this, you can easily turn it off via the --background= switch).

(emphasis mine)


I think it's more that the default seems backwards than the lack of ability to change it.


It's three things:

* here is a feature which we are defaulting to on

* there's no persistent config for it

* we know better than you do about your preferences


"Defaulting to on" is just a symlink to an existing binary so that's not really much a problem.


A symlink to a binary that I'm going to pass a password to seems like a security bug waiting to happen (just in the manner that any complexity around privilege escalation is a bad idea).


I for one love typing out 13 extra characters on top of a 4-character command to disable dumb choices by the developer.

On a more serious note, I wonder what random ASCII escape sequences we can send.


> I for one love to type out 13 extra characters

FWIW, systemd is normally pretty good at providing autocomplete suggestions, so even if you don't want to set up an alias you'll probably just have to type `--b<TAB> ` to set it.

> I wonder what random ASCII escape sequences we can send.

According to the man page source[0]:

> The color specified should be an ANSI X3.64 SGR background color, i.e. strings such as `40`, `41`, …, `47`, `48;2;…`, `48;5;…`

and a link to the relevant Wikipedia page[1]. Given systemd's generally decent track record wrt defects and security issues, and the simplicity of valid colour values, I expect there's a fairly robust parameter verifier in there.

In fact, given the focus on starting the elevated command in a highly controlled environment, I'd expect the colour codes to be output to the originating terminal, not forwarded to the secure pty. That way, the only thing malformed escapes can affect is your own process, which you already have full control over anyway.

(Happy to be shown if that's a mistaken expectation though.)

[0] https://github.com/systemd/systemd/blob/main/man/run0.xml

[1] https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_(Select_G...
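
Going by the man page excerpt above and the announcement's wording, usage would look roughly like this (service name invented):

    run0 --background=44 systemctl status fooapp.service   # blue tint instead of the default reddish one
    run0 --background=   systemctl status fooapp.service   # empty value: no tint at all, per the announcement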


'alias' is your friend.


I shouldn't need to alias behaviour that violates the principle of least surprise on every single machine I need to run elevated commands on.


Eh.

`alias grep='grep --color=auto'`

`alias ls='ls --color=auto'`

It's canon.


It was a bit unclear to me from the thread, is there a persistent configuration option for this? I like the idea of tinting the terminal, but I also want to be able to turn it off with a global config option rather than having to type out a --background flag every invocation.


Aliasing the command as the command + your default arguments is the easiest general solution to this kind of problem. I'm not sure if there is a "systemd way" to permanently set it though.
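
E.g., assuming an empty --background= disables the tint as the announcement suggests, something like this in your shell rc:

    alias run0='run0 --background='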


True, I was thinking a simple environment variable or systemd configurable would be fine but I guess an alias is a good idea.


I accidentally compile color support out of st, or set xterm*colorMode:false to avoid seeing the backside of a unicorn randomly rubbed all over the terminal, on account of git and other wares being unable to not spew color codes. A sensible default would be to set no colors, in the event that the colors are unreadable (due to colorblindness, etc) or distracting, but that ship sailed. Most of my vim config on RedHat linux was disabling wacky vendorisms, and back when I used linux I did have a "special terminal" for some NVIDIA installer that mandated colors to be usable. Maybe the terminal title was set to Fisher-Price, maybe not.


Dang. I wish I had the autism to bristle at colors. Think about all the lost hours agonizing over themes! Not feeling the agonizing tension between the fact that cool-retro-term made your terminal into an awesome monochrome CRT but that it's monochrome green so your syntax highlighting is all messed up!

> back when I used linux

What do you use now? :0 BSD? Plan 9???


This is a thing in certain environments. I don’t mind it.


Yeah, that's the part that stuck out for me. "sudo is bad because it does all these things it shouldn't do instead of just the one thing it's for and nothing else. My tool is good because it does just the one thing it's for — plus this other random thing because I think it's cool."

