 

Is there a way to improve the performance of Linux pipes?

I'm trying to pipe extremely high-speed data from one application to another on 64-bit CentOS 6. I ran the following benchmarks with dd and found that it is the pipe, not the algorithm in my program, that is holding me back. My goal is to reach somewhere around 1.5 GB/s.

First, without pipes:

dd if=/dev/zero of=/dev/null bs=8M count=1000
1000+0 records in
1000+0 records out
8388608000 bytes (8.4 GB) copied, 0.41925 s, 20.0 GB/s

Next, a pipe between two dd processes:

dd if=/dev/zero bs=8M count=1000 | dd of=/dev/null bs=8M
1000+0 records in
1000+0 records out
8388608000 bytes (8.4 GB) copied, 9.39205 s, 893 MB/s

Are there any tweaks I can make to the kernel or anything else that will improve performance of running data through a pipe? I have tried named pipes as well, and gotten similar results.
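For reference, a named-pipe version of the same benchmark can be set up with mkfifo (a minimal sketch; the FIFO path /tmp/testpipe is illustrative):

mkfifo /tmp/testpipe
dd if=/dev/zero of=/tmp/testpipe bs=8M count=1000 &   # writer in the background
dd if=/tmp/testpipe of=/dev/null bs=8M                # reader reports the throughput
rm /tmp/testpipe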

asked Sep 27 '12 by KyleL



1 Answer

Have you tried with smaller blocks?

When I try this on my own workstation, I see a steady improvement as I lower the block size. It is only in the realm of 10% in my test, but still an improvement. You are looking for 100%.
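One quick way to explore this is to sweep the block size while keeping the total volume fixed at the same 8,388,608,000 bytes (a minimal sketch; the size/count pairs are my own, and the reading dd prints the throughput on stderr):

for spec in "4k 2048000" "32k 256000" "256k 32000" "1M 8000" "8M 1000"; do
    set -- $spec                                      # $1 = block size, $2 = count
    echo "bs=$1"
    dd if=/dev/zero bs=$1 count=$2 2>/dev/null | dd of=/dev/null bs=$1
done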

As it turns out, on further testing, really small block sizes seem to do the trick. I tried:

dd if=/dev/zero bs=32k count=256000 | dd of=/dev/null bs=32k
256000+0 records in
256000+0 records out
256000+0 records in
256000+0 records out
8388608000 bytes (8.4 GB) copied, 1.67965 s, 5.0 GB/s
8388608000 bytes (8.4 GB) copied, 1.68052 s, 5.0 GB/s

And with your original block size:

dd if=/dev/zero bs=8M count=1000 | dd of=/dev/null bs=8M
1000+0 records in
1000+0 records out
1000+0 records in
1000+0 records out
8388608000 bytes (8.4 GB) copied, 6.25782 s, 1.3 GB/s
8388608000 bytes (8.4 GB) copied, 6.25203 s, 1.3 GB/s

5.0 / 1.3 ≈ 3.8, so that is a sizable factor. A plausible explanation: a Linux pipe's buffer defaults to 64 KiB, so 32 KiB blocks fit entirely within it (and within the CPU caches), while 8 MiB blocks are far larger than both and every copy touches cold memory.

answered Nov 15 '22 by opaque