How to Solve “Too Many Open Files” Error in Linux


On Linux computers, system resources are shared between users. Try to use more than your fair share and you’ll hit an upper limit. You might also starve other users and processes of resources.

Shared system resources

Among its many other tasks, the kernel of a Linux computer is always keeping track of who is using how much of the finite system resources, such as RAM and CPU cycles. A multi-user system requires constant attention to make sure that people and processes don’t use more of any given system resource than is appropriate.

It’s not fair, for example, for someone to hog so much CPU time that the computer seems slow to everyone. Even if you are the only person using your Linux computer, there are limits on the resources your processes can use. After all, you are just another user.

Some system resources are well known and obvious, such as RAM, CPU cycles, and hard drive space. But there are many, many more resources that are monitored, and for which each user, or each process owned by that user, has a defined upper limit. One of them is the number of files a process can open at the same time.

If you’ve ever seen the “Too many open files” error message in a terminal window or found it in your system logs, it means the upper limit has been reached and the process is not allowed to open any more files.

It’s not just the files you’ve opened

There is a system-wide limit on the number of open files Linux can handle. It’s a very large number, as we’ll see, but there is still a limit. Each user process gets an allocation it can use, a small share of the system-wide total.

What is actually allocated is a number of file descriptors. Each open file requires one. Even with fairly generous system-wide allocations, file descriptors can run out faster than you might at first imagine.

Linux abstracts almost everything so that it appears to be a file. Sometimes the thing being opened really is a plain old file, but other actions, such as opening a directory, also use a file descriptor. Linux uses block special files as a kind of driver for hardware devices. Character special files are very similar, but are more often used with devices that have a concept of throughput, such as pipes and serial ports.

Block special files handle data a block at a time, and character special files handle each character separately. Both kinds of special file can only be accessed through file descriptors. Libraries used by a program use a file descriptor, streams use file descriptors, and network connections use file descriptors.

Abstracting all of these different requirements so that they appear as files simplifies interfacing with them and allows things like pipes and streams to work.

You can see that, behind the scenes, Linux is opening files and using file descriptors just to run itself, never mind your user processes. The number of open files is not just the number of files you have open. Almost everything in the operating system uses file descriptors.
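To make that concrete, here’s a quick sketch in C (our own illustration, not part of the original demonstration) that opens a regular file, a directory, a pipe, and a network socket, then prints the descriptor number each one was given. They all come out of the same per-process pool, starting just after 0, 1, and 2.

/* fd-demo.c: a hypothetical example showing that regular files,
 * directories, pipes, and sockets all draw from the same pool of
 * file descriptors. Compile with: gcc fd-demo.c -o fd-demo
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int file = open("/etc/hostname", O_RDONLY); /* a plain old file */
    int dir  = open("/tmp", O_RDONLY);          /* opening a directory uses one too */
    int sock = socket(AF_INET, SOCK_STREAM, 0); /* a network socket */
    int fds[2];
    pipe(fds);                                  /* both ends of a pipe */

    /* Descriptors 0, 1, and 2 are already taken by stdin, stdout,
       and stderr, so the numbers printed here start at 3. */
    printf("file: %d, directory: %d, socket: %d, pipe: %d and %d\n",
           file, dir, sock, fds[0], fds[1]);

    return 0;
}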

File descriptor limits

The system-wide maximum number of file descriptors can be seen with this command.

cat /proc/sys/fs/file-max

Find system maximum for open files

This returns a ridiculously large number: 9.2 quintillion. That’s the theoretical system maximum, and it is the largest value that can be held in a 64-bit signed integer. Whether your poor computer could actually cope with that many files open at once is another matter.

At the user level, there isn’t an explicit value for the maximum number of open files you can have, but we can work it out, more or less. To find the maximum number of files that one of your processes can open, we can use the ulimit command with the -n (open files) option.

ulimit -n

Finding the number of files a process can open

And to find the maximum number of processes a user can have, we’ll use ulimit with the -u (user processes) option.

ulimit -u

Find the number of processes a user can have

On our test machine, those two commands reported 1024 open files per process and 7640 processes per user. Multiplying 1024 by 7640 gives us 7,823,360. Of course, many of those processes will already be in use by your desktop environment and other background processes, so that’s another theoretical maximum, and one you’ll never realistically reach.

The important number is the number of files a process can open. By default, this is 1024. It should be noted that opening the same file 1024 times simultaneously is equivalent to opening 1024 different files simultaneously. Once you’ve used all your file descriptors, you’re done.

It is possible to adjust the number of files a process can open. There are actually two values to consider when you adjust it. One is the value it is currently set to, or that you’re trying to set it to. This is called the soft limit. There is a hard limit too, and this is the highest value you can raise the soft limit to.

The way to think about it is that the soft limit is really the current value, and the hard limit is the highest value the current value can be raised to. A regular, non-root user can raise their soft limit to any value up to their hard limit. The root user can also increase the hard limit.

To see the current soft and hard limits, use ulimit with the -S (soft) or -H (hard) option, together with the -n (open files) option.

ulimit -Sn
ulimit -Hn

Finding the soft and hard limits for process file handles

To create a situation where we can see the soft limit being applied, we created a program that repeatedly opens files until it fails. It then waits for a keystroke before releasing all the file descriptors it has used. The program is called open-files.
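We haven’t reproduced the real program here, but a minimal sketch of that kind of test, written in C, might look something like this. It just opens /dev/null over and over until open() refuses, reports how many descriptors it managed to claim, and then waits for a keypress.

/* open-files.c: a minimal, hypothetical sketch of the test program
 * described above, not the original source.
 * Compile with: gcc open-files.c -o open-files
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int count = 0;

    /* Print the process ID so it can be fed to lsof -p later. */
    printf("PID: %d\n", (int)getpid());

    /* Keep opening /dev/null until the kernel refuses. */
    while (open("/dev/null", O_RDONLY) != -1)
        count++;

    perror("open"); /* typically "Too many open files" (EMFILE) */
    printf("Opened %d files before hitting the limit.\n", count);

    /* Hold on to the descriptors until Enter is pressed; they are
       all released automatically when the process exits. */
    printf("Press Enter to exit and release them.\n");
    getchar();

    return 0;
}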

./open-files

Open Files program reaches soft limit of 1024

It opens 1021 files and fails when trying to open file 1022.

1024 minus 1021 is 3. What happened to the other three file descriptors? They were used for the STDIN, STDOUT, and STDERR streams. They are created automatically for each process, and they always have the file descriptor values 0, 1, and 2.

RELATED: How to use the Linux lsof command

We can see them by using the lsof command with the -p (process) option and the process ID of the open-files program. Conveniently, it prints its process ID in the terminal window.

lsof -p 11038

stdin, stdout, and stderr streams and file descriptors in lsof command output

Of course, in a real situation, you might not know which process just gobbled up all the file descriptors. To begin your investigation, you can use this sequence of piped commands. It will tell you the fifteen most prolific users of file handles on your computer.

lsof | awk '{ print $1 " " $2; }' | sort -rn | uniq -c | sort -rn | head -15

See which processes are using the most file descriptors

To see more or fewer entries, adjust the -15 parameter of the head command. Once you’ve identified the process, you need to determine whether it has gone rogue and is opening too many files because it’s out of control, or whether it genuinely needs those files. If it does need them, you should increase its file descriptor limit.
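If the process is one whose source you control, there is also a programmatic route. The shell’s ulimit command manipulates the RLIMIT_NOFILE resource, and a program can read and adjust the same values for itself with the getrlimit() and setrlimit() system calls. Here’s a rough sketch of ours:

/* rlimit-demo.c: a hypothetical sketch showing the programmatic
 * equivalent of ulimit -Sn and ulimit -Hn, and how a process can
 * raise its own soft limit as far as its hard limit.
 * Compile with: gcc rlimit-demo.c -o rlimit-demo
 */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    getrlimit(RLIMIT_NOFILE, &rl);
    printf("soft limit: %llu, hard limit: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* An unprivileged process may raise rlim_cur (the soft limit)
       up to rlim_max (the hard limit), but no further. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");

    getrlimit(RLIMIT_NOFILE, &rl);
    printf("soft limit is now: %llu\n", (unsigned long long)rl.rlim_cur);

    return 0;
}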

Soft limit increase

If we increase the soft limit and run our program again, we should see it open more files. We’ll use the ulimit command and the -n (open files) option with a numeric value of 2048. This will be the new soft limit.

ulimit -n 2048

Setting a new soft file descriptor limit for processes

This time we managed to open 2045 files. As expected, that’s three fewer than 2048, because of the file descriptors used for STDIN, STDOUT, and STDERR.

Make permanent changes

Increasing the soft limit only affects the current shell. Open a new terminal window and check the soft limit; you’ll see it’s back at the old default value. But there is a way to globally set a new default for the maximum number of open files a process can have, one that persists and survives reboots.

Outdated advice often recommends editing files like “/etc/sysctl.conf” and “/etc/security/limits.conf”. However, on systemd-based distributions, these changes do not work consistently, especially for graphical login sessions.

The technique shown here is the way to do it on systemd-based distributions. There are two files we need to work with. The first is the “/etc/systemd/system.conf” file. We’ll need to use sudo.

sudo gedit /etc/systemd/system.conf

Editing the system.conf file

Find the line containing the string “DefaultLimitNOFILE”. Remove the hash “#” from the beginning of the line and change the first number to what you want your new soft limit for processes to be. We chose 4096. The second number on this line is the hard limit. We have not adjusted this.
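For illustration, the edited line might end up looking something like this, with 4096 as the new soft limit and the existing hard limit (commonly 524288 on systemd distributions) left exactly as we found it:

DefaultLimitNOFILE=4096:524288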

The DefaultLimitNOFILE value in the system.conf file

Save the file and close the editor.

We must repeat this operation on the “/etc/systemd/user.conf” file.

sudo gedit /etc/systemd/user.conf

Editing the user.conf file

Make the same adjustments on the line containing the “DefaultLimitNOFILE” string.

The DefaultLimitNOFILE value in the user.conf file

Save the file and close the editor. You must either restart your computer or use the systemctl command with the daemon-reexec option so that systemd is re-executed and takes in the new settings.

sudo systemctl daemon-reexec

Restarting systemd

Opening a terminal window and checking the new limit should show the new value you set. In our case, it was 4096.

ulimit -n

Checking the new soft limit with ulimit -n

We can test that this is the live, operational value by re-running our file-hungry program.

./open-files

Checking the new soft limit with the open-files program

The program is unable to open file number 4094, which means that 4093 files have been opened. This is our expected value, 3 less than 4096.

Everything is a file

This is why Linux is so dependent on file descriptors. Now, if you start to run low, you know how to increase your quota.

RELATED: What are stdin, stdout and stderr in Linux?
