ulimit settings are little known to most Linux desktop users, but they are a painful and frequently encountered issue when working with servers. In a nutshell, ulimit settings control many aspects of a process's resource usage (much like the Docker resource tweaks we covered earlier) and are applied to every process and shell as it starts. Distributions almost always set these limits to prevent a stray process from taking down your machine, but the numbers are usually chosen with regular desktop usage in mind, so running server-type code on an unchanged system is bound to hit at least the open file limit, and possibly some other limits.
We can use ulimit -a to see what our current (also called soft) settings are:
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 29683
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 29683
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
As you can see, only a few things are set here, but one stands out: our "open files" limit (1024) is fine for general applications, but if we run many services that each hold a large number of open files (such as a decent number of Docker containers), this value must be raised; otherwise you will hit errors and your services will effectively die.
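To see how close a process is to that limit, you can compare the soft limit against the file descriptors the process currently holds. This is a Linux-specific sketch using /proc:

```shell
# Linux-specific: /proc/self/fd has one entry per open file descriptor.
echo "soft open-files limit: $(ulimit -Sn)"
echo "fds currently open:    $(ls /proc/self/fd | wc -l)"
```

A long-running daemon creeping toward the soft limit in this comparison is a good early warning before "too many open files" errors appear.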
You can change this value for your current shell with ulimit -S <flag> <value>:
$ ulimit -n
1024
$ # Set max open files to 2048
$ ulimit -S -n 2048
$ # Let's see the full list again
$ ulimit -a
<snip>
open files (-n) 2048
<snip>
But what if we try to set this to something really high?
$ ulimit -S -n 10240
bash: ulimit: open files: cannot modify limit: Invalid argument
Here, we have encountered the hard limit imposed by the system. If we want to go beyond it, this limit will need to be changed at the system level. We can check what these hard limits are with ulimit -H -a:
$ ulimit -H -a | grep '^open files'
open files (-n) 4096
So, if we want to increase our open files limit beyond 4096, we really need to change the system-level settings. And even if a soft limit of 4096 is fine for us, the setting applies only to our own shell and its child processes, so it won't affect any other service or process on the system.
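One thing worth knowing before reaching for system-level changes: an unprivileged shell can freely raise its soft limit up to (but not beyond) the hard limit, which is often enough for a one-off run. A small sketch, using a subshell so the parent shell's limits stay untouched:

```shell
# Run in a subshell so the parent shell's limits stay unchanged.
(
  hard=$(ulimit -Hn)      # the hard cap on open files for this shell
  ulimit -S -n "$hard"    # raising soft up to hard needs no privileges
  echo "soft limit is now: $(ulimit -Sn)"
)
```

Raising the hard limit itself (or lowering it and then raising it back) is what requires root, which is why the next steps operate at the system level.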
To change this setting, you need a combination of steps that depends on your distribution:
- Create a security limits configuration file. You can do this rather simply by adding a few lines to something like /etc/security/limits.d/90-ulimit-open-files-increase.conf. The following example sets the open file soft and hard limits to 65536, first for root and then for all other accounts (* does not apply to the root account). You should determine the appropriate value for your system ahead of time:
root soft nofile 65536
root hard nofile 65536
* soft nofile 65536
* hard nofile 65536
- Add the pam_limits module to Pluggable Authentication Modules (PAM). Some distributions do not include this module by default, and without it the limits set in the previous step are not applied to user sessions, so your changes might not persist. Add the following to /etc/pam.d/common-session:
session required pam_limits.so
- Alternatively, on some distributions, you can add the setting directly to the affected service's systemd unit in an override file:
LimitNOFILE=65536
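Whichever route you take, you can verify what a process actually received by reading its limits from /proc. Shown here for the current shell; substitute a service's PID to check a running daemon:

```shell
# Every process's effective limits are exposed under /proc/<pid>/limits.
grep 'Max open files' /proc/self/limits
```

For a systemd override specifically, remember that the service must be restarted (after a daemon-reload) before the new LimitNOFILE value shows up here.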