(This article has been edited since its first publication.)
In Linux and other UNIX-like systems, you have to be root (have superuser privileges) in order to listen on TCP or UDP ports below 1024 (the well-known ports).
This port 1024 limit is meant as a security measure, but it is based on an obsolete security model; today it only gives a false sense of security and actually contributes to security holes.
The port 1024 limit forces you to run all network daemons with superuser privileges, which can open security holes. Without the port 1024 limit, most network daemons (except sshd) could run without superuser privileges. Some daemons try to remedy this potential security hole by dropping the superuser privileges after binding to the port, but you still have to start the daemon as root. And this does not work if the daemon is written in Java, which is quite popular for web servers.
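The bind-then-drop pattern mentioned above can be sketched as follows. This is a minimal illustration, not any particular daemon's implementation; the target user "nobody" is an assumption, and the privilege drop only happens when the process is started as root:

```python
import os
import pwd
import socket


def bind_and_drop(port, user="nobody"):
    """Bind a listening socket, then drop root privileges if we have them."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))        # binding a port below 1024 requires root
    sock.listen(5)
    if os.geteuid() == 0:        # only root can (and needs to) drop privileges
        pw = pwd.getpwnam(user)  # e.g. the unprivileged "nobody" account
        os.setgid(pw.pw_gid)     # drop the group first, while still root
        os.setuid(pw.pw_uid)     # after this, root privileges are gone for good
    return sock


if __name__ == "__main__":
    # Port 0 asks the kernel for any free (high) port, so this also runs unprivileged.
    s = bind_and_drop(0)
    print(s.getsockname()[1])
```

Note the weakness the article points out: the process still has to be started as root, and a bug anywhere before the `setuid()` call runs with full superuser privileges.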
Today the typical Linux machine is not used in a way that makes the port 1024 limit relevant. Either you use it as a desktop client (workstation) with only one user, who has superuser access via sudo; in the desktop case the limit is a source of frustration, since you have to use sudo more often than necessary.
Or you use it as a firewall, router or web/database/DNS/mail/news server, and then only trusted administrators can log in at all. The database or web application has its own user account system, separate from the system's, and these untrusted users cannot install any daemons at all.
And even if you use a Linux machine in a way where the port 1024 limit could be useful, i.e. allowing untrusted users to log in, you had better not count on it. If a malicious user can log in to a normal unprivileged account, it might be possible to exploit some security hole and gain superuser access. So if you allow untrusted users to log in to a Linux machine, you should not use that machine for anything else, and the daemons running on it should not be trusted.
The port 1024 limit actually bites its own tail: it forces a daemon practice that can open security holes, which makes the limit ineffective as a security measure.
I do not blame those who invented the port 1024 limit; it was a natural and important security feature given how UNIX machines were used in the 1970s and 1980s. A typical UNIX machine allowed a bunch of not necessarily fully trusted people to log in and do stuff. You don't want these untrusted users to be able to install a custom daemon pretending to be a well-known service such as ftp, since that could be used to steal passwords and do other nasty things.
The port 1024 limit is part of the same security model that gave us rlogin authentication. That security model is based on the assumptions that every machine (but not every user) on the network can be trusted, that only trusted users have root access to any network-connected machine, and that all network-connected machines enforce the same port 1024 limit.
It is well known that this security model is obsolete and totally useless on the public Internet today. Virtually anyone can connect a computer to the public Internet and have root access on it, and there is at least one popular operating system without the port 1024 limit: Microsoft Windows.
Because of this, no sensible person uses rlogin authentication on an Internet-connected network today (at least not without being protected by a carefully configured firewall). But the port 1024 limit is still there in Linux and most other UNIX-style systems. I wonder why. Isn't it time to declare the port 1024 limit obsolete too and remove it?
It would probably not be a good idea to remove the port 1024 limit completely right away; some machines depend on the limit for security, and their system configuration would have to be adjusted first. The solution is to make the limit configurable.
FreeBSD has a pair of sysctl parameters that allow you to adjust (or effectively remove) this port limit: net.inet.ip.portrange.reservedlow and net.inet.ip.portrange.reservedhigh. It would be nice if something similar were implemented in Linux (and in other UNIX-like systems). It is probably not very useful to be able to adjust the lower end of the range; it can stay fixed at 0.
But it would be very useful to adjust the upper end of the range from the current value 1023. If you set it to 79, you can run a web server without superuser privileges, while only root can bind to the lower ports (such as 22, used by sshd).
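On FreeBSD the adjustment described above is a one-liner, run as root (the value 79 matches the web-server example in the text; this is a configuration sketch, not a recommendation):

```shell
# Let unprivileged users bind ports 80 and up; ports 0-79 stay reserved for root.
sysctl net.inet.ip.portrange.reservedhigh=79

# Make the setting survive a reboot.
echo 'net.inet.ip.portrange.reservedhigh=79' >> /etc/sysctl.conf
```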
What you’re looking for is a capabilities system for your OS. That would allow you to assign the ‘bind to any port’ capability to some binary, but none of the other root privileges.
There’s still one advantage to keeping the limit in place: you can’t masquerade as one of those root-controlled programs. Without it, you could crash sshd and put something in its place; with the limit, crashing it is just a DoS.
BTW: your captcha system is really busted. This is my 4th attempt at a comment, we’ll see if it works.
Sorry, but there are literally millions and millions of machines using NFS within internal organizations, and these absolutely require the 1024 limit for security. Machines controlled by the administration are allowed by specific IP, and if regular users could bind to ports below 1024 on those machines, all file system security (apart from root’s) would be lost. You are thinking single user/single machine. Computers are used for more than just web surfing and downloading porn. They are used for work too. Your Captcha is BROKEN
OK, my previous post got hosed. Use unescape() or whatever!
>And this does not work if the daemon is written in Java,
>which is quite popular for web servers.
This statement, I think, is the basis of your complaint.
Here’s how I see it: Java doesn’t work perfectly with the UNIX privilege model, you haven’t bothered to look into a super server daemon (xinetd, daemontools), the obstacle you see is the Linux *kernel*, and you think everyone else wouldn’t be bothered much by such a fundamental privilege change.
Please step out of your Java world for a minute and realize that there were and still are very good security/trust reasons for keeping user daemons off of the “trusted” ports.
And, dude, your captcha is borked.
I agree with the limit being a bit useless these days, but do not agree that it should be removed… Primarily for the situations that were presented in your own document here.
Actually most of your assumptions are completely wrong…
First and foremost, your claim that running daemons as root opens a potential torrent of attacks is flawed. Most software packages to date rely on the root user to initialize the parent process (because of the port limit), but then fork a child process as another, safer user, such as ‘nobody’, ‘www-data’, ‘ftp’, or something along those lines. This provides a dual source of security: a) when a user connects to the remote system, they can generally be confident in the security of the service they are connecting to, and b) if there were a successful attack, the remote user would drop to the owner of the worker process, not root. As you stated yourself, if a user could create a service on a known port, it could lead to collecting data that they are not normally privileged to see.
The Microsoft reference is also another testament to why it should not be trusted on the Internet, nor as anything but a workstation. More importantly, you cannot even install server software (legitimately) without having Administrator access, which voids the necessity of the limit there. Some group policies may actually enforce that users can only make outbound connections, not listen on ports.
Lastly, your point about applications having a user schema separate from whatever your server runs, say PAM or LDAP, describes a dying concept. Many administrators and, more importantly, users want to tie these authentication schemes together to avoid updating multiple databases with the same data. Because of this, it is becoming more and more important to protect your server in every way possible to avoid granting global access.
And yeah, your captcha sucks
In my experience, you are completely correct, and all of these commenters are living in the past. The port 1024 limit is a relic of the past; by the time someone has compromised your server, the damage is already done. Compromising the web user opens you up to XSS attacks; compromising the database user reveals your whole database and also opens you up to XSS attacks.
A capability based security system would be cool though….
If anything is still relying on the 1024 limit for security, that application needs to be rewritten. This model is flawed from the beginning and should be removed and replaced.
The Linux kernel should put in an option to remove this restriction. As a replacement, I suggest using the standard allow/deny list based method used in many applications today.
It takes two minutes to google the issue and you might actually learn something from it.
First, being root is not the actual requirement in Linux; it is having the capability CAP_NET_BIND_SERVICE. Root has that by default, to be backwards compatible with POSIX. So this is actually a POSIX requirement and not something that can be “declared obsolete” just because you don’t like it. (Then again, this is Linux, so it is something you can configure at compile time in your kernel; look at PROT_SOCK in sock.h.) The clean way to do this is to have a wrapper started as root that uses capset to drop all capabilities that are not required, and then starts your daemon.
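On kernels with file-capability support there is also a shortcut for the wrapper approach described above, using the setcap tool from libcap (the daemon path here is a placeholder):

```shell
# Grant the binary only the low-port binding capability, none of the other root powers.
setcap 'cap_net_bind_service=+ep' /usr/local/bin/mydaemon

# Inspect what was set.
getcap /usr/local/bin/mydaemon
```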
On a modern Linux box you should instead use whatever security system your distribution has (in Red Hat it’s SELinux, in SuSE AppArmor). These can be used to tailor the exact capabilities your process should have. With SELinux you define roles for each user, and those roles decide which capabilities the processes inherit.
Running as a certain user is the portable way of doing security, not necessarily the best. That’s why changing that behaviour is not something you would want to do.
I think it’s true that in most situations, most protocols today don’t (and certainly shouldn’t!) arbitrarily trust a host based on this sort of thing anymore. That’s why we have SSL certs and all the rest.
The only thing that does, which is still in widespread use, is NFS when used without Kerberos. With NFS, the client kernel does all the permissions checking, so you really want to have an authorized client connecting. When using Kerberos, this isn’t the case so much anymore.
He’s wrong about it leading to more security holes. Most server processes that need to listen on a low-numbered port will open it as root, then use something like setreuid() to drop privileges. Apache, for instance, does this. It runs as root for a split-second, and doesn’t accept any network connections until it’s shed its root privileges.
Letting non-root users bind those ports really can hurt things. Consider this: you’ve got a webserver running at port 80. For some reason, that webserver crashes at 3AM. Some user with an account on the machine working late at night notices, and fires up his own unprivileged webserver process listening on port 80, which he can now do. Suddenly his own web server looks to everyone else like it’s real. What’s more, the real server will refuse to start because port 80 is already in use; admins would have to use netstat to track down the user’s process and kill it. That’s a bad situation.
So yeah, it shouldn’t be relied upon for authentication, but it helps when you’re running a multiuser OS like Linux.
How old is the article you got this data from? I’d like to see where you’re seeing an ABUNDANCE of web servers written in Java. It doesn’t happen. It hasn’t for a while. Not in YEARS
Pingback: Labnotes » Rounded Corners - 165 (Cultural)
You can change it…
Go to include/net/sock.h and change:
#define PROT_SOCK 1024
Easy, I think. 😉
It could be useful to have a way of describing which users can run daemons on which ports below 1024, i.e. a file named /etc/daemonusr with a list of port:user pairs, such as 80:apache. Then you could start the Apache server as the user apache, and you wouldn’t need root privileges to run the daemon initially.
I don’t think it is much of a limitation.
Have you ever thought about forwarding a restricted port to a port above 1024 where you have an application listening, running as a regular user ID? Doing it this way still lets root control who or what is pretending to be a well-known service, by controlling the port forwarding, without having to run your application as root.
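The forwarding trick described above can be set up with iptables, for example. This is a sketch, assuming the daemon listens on port 8080 as a regular user:

```shell
# Redirect incoming traffic for privileged port 80 to unprivileged port 8080.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

# Do the same for connections originating on the machine itself (loopback).
iptables -t nat -A OUTPUT -p tcp -o lo --dport 80 -j REDIRECT --to-port 8080
```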
The guys that wrote Unix dropped the 1024 port nonsense and even the superuser when they wrote their next OS – Plan 9 From Bell Labs.
He is ABSOLUTELY right. Because of this stupid shit, I cannot delegate the administration of my web server, database server or mail server to anyone else, because subadministrators still wouldn’t be able to restart the service by hand (unless I give them sudo privileges for the init.d script), and if I do that, then any subadministrator can inject harmful config stanzas that will be executed as root when the daemon starts.
How are you keeping state for the captcha? I don’t see any cookies or magic information passed in a form field. Are you generating a new random number on each page view?
yes, this should be done!
As far as I know, you cannot use inetd or similar for a highly loaded web server without unacceptable performance penalty.
Like all the others said, you could crash sshd and run your own daemon in its place. That’s secure, huh?
Commons-daemon does all that privilege demoting stuff for java daemons. There’s no plausible reason to drop that restriction (I guess you can do it on that other *cof* ms *cof* os).
The captcha is now fixed.
Many, many organizations use NFS; it is not outdated or stupid, and by removing checks on ports < 1024 you break the NFS RPC mechanism, letting any idiot on the local network read and write ANY file on the NFS share, bypassing security. The same goes for a lot of UNIX RPC which is still in common use today. USE THIS ADVICE WITH CAUTION.
The authentication mechanism in NFS is outdated and should be replaced with something better. And this has been done in NFSv4, although it seems like many still use NFSv3.
I use NFS myself, but only on a restricted local network protected by a firewall. I would consider it foolish to use NFSv3 on an unrestricted network.
BTW, exactly how would a removal of the port 1024 limit break the security of NFS?
I must stick with the other guys and say that removing this below-1024 thing is stupid, not because it is secure or a good security design, but rather because the whole Linux community depends on it. However, something like http://www.olafdietsche.de/linux/accessfs/ would be nice to have in the vanilla kernel. That way every port below 1024 would by default be owned by root, and an administrator who wants to delegate control to users (even outside the 1024 range; assume that user irc, and only that user, should be able to bind to 6667) could accomplish this easily. The kernel would still mostly comply with the POSIX standards, I think (not sure), and we would have a nice multiuser network stack.
Apparently nobody has mentioned this, but authbind is the solution in Linux for allowing a non-root process to bind to ports below 1024.
Isn’t it?
Hope it can be useful to someone.
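For reference, a typical authbind setup looks something like this (a sketch; the user name and daemon path are placeholders). authbind grants access to a port when the user may execute the corresponding file under /etc/authbind/byport:

```shell
# As root: allow user 'www' to bind port 80.
touch /etc/authbind/byport/80
chown www /etc/authbind/byport/80
chmod 500 /etc/authbind/byport/80   # executable by its owner, i.e. user 'www'

# As user 'www': start the daemon under authbind.
authbind --deep /usr/local/bin/mydaemon
```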
Looks like you aren’t the only one: http://icculus.org/cgi-bin/finger/finger.pl?user=icculus&date=2006-08-30
Pingback: honeynet project chapter » » Know Your Enemy – SSH Honeypot
It’s still a security hole for applications to have root access they don’t need, even for a very short period of time — and we all know that some applications will have root access for their entire lifetime.
The most robust solution generally available is to keep the app on a non-privileged port and use a trusted proxy (firewall) to reroute the traffic.
Don’t hold your breath about this changing any time soon. There are too many systems out there that rely upon (or just perpetuate) the status quo. MD5 had a massive security hole, but it still took years to get people to stop using it for secure transactions, even when better options had been available for years. If someone has devised a proper solution to this 0-1024 hole, they’ve done a really good job at keeping it secret.
from a programmer in 2021: nice read. thank you for writing this!