Re: [RFC v2 00/35] optimize cost of inter-process communication

* Bo Li <libo.gcs85@xxxxxxxxxxxxx> wrote:

> # Performance
> 
> To quantify the performance improvements driven by RPAL, we measured 
> latency both before and after its deployment. Experiments were 
> conducted on a server equipped with two Intel(R) Xeon(R) Platinum 
> 8336C CPUs (2.30 GHz) and 1 TB of memory. Latency was defined as the 
> duration from when the client thread initiates a message to when the 
> server thread is invoked and receives it.
> 
> During testing, the client transmitted 1 million 32-byte messages, and we
> computed the per-message average latency. The results are as follows:
> 
> *****************
> Without RPAL: Message length: 32 bytes, Total TSC cycles: 19616222534,
>  Message count: 1000000, Average latency: 19616 cycles
> With RPAL: Message length: 32 bytes, Total TSC cycles: 1703459326,
>  Message count: 1000000, Average latency: 1703 cycles
> *****************
> 
> These results confirm that RPAL delivers substantial latency 
> improvements over the current epoll implementation—achieving a 
> 17,913-cycle reduction (an ~91.3% improvement) for 32-byte messages.

No, these results do not necessarily confirm that.

19,616 cycles per message on a vanilla kernel on a 2.3 GHz CPU suggests 
a messaging performance of 117k messages/second or 8.5 usecs/message, 
which is *way* worse than typical kernel inter-process communication 
latencies on comparable CPUs:

  root@localhost:~# taskset 1 perf bench sched pipe
  # Running 'sched/pipe' benchmark:
  # Executed 1000000 pipe operations between two processes

       Total time: 2.790 [sec]

       2.790614 usecs/op
         358344 ops/sec

And my 2.8 usecs result was from a kernel running inside a KVM sandbox 
...

( I used 'taskset' to bind the benchmark to a single CPU, to remove any 
  inter-CPU migration noise from the measurement. )
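
For reference, here is a minimal sketch of the arithmetic behind the 
numbers above, assuming the 2.3 GHz TSC from the quoted test setup (my 
own pipe result came from a different box running inside KVM, so its 
cycle conversion is only illustrative):

  /*
   * latency_math.c -- back-of-the-envelope conversion of the quoted
   * per-message TSC cycle counts into usecs and messages/second, plus
   * the reverse conversion for the perf bench sched pipe result.
   */
  #include <stdio.h>

  int main(void)
  {
          const double tsc_hz         = 2.3e9;    /* assumed 2.3 GHz TSC */
          const double vanilla_cycles = 19616.0;  /* quoted baseline     */
          const double rpal_cycles    = 1703.0;   /* quoted RPAL number  */
          const double pipe_usecs     = 2.790614; /* perf bench above    */

          printf("vanilla: %.1f usecs/msg, %.0f msgs/sec\n",
                 vanilla_cycles / tsc_hz * 1e6, tsc_hz / vanilla_cycles);
          printf("RPAL:    %.2f usecs/msg, %.0f msgs/sec\n",
                 rpal_cycles / tsc_hz * 1e6, tsc_hz / rpal_cycles);
          printf("pipe:    %.2f usecs/op ~ %.0f cycles at 2.3 GHz\n",
                 pipe_usecs, pipe_usecs * 1e-6 * tsc_hz);
          return 0;
  }

Which works out to roughly 8.5 usecs and ~117k msgs/sec for the vanilla 
baseline, ~0.74 usecs for the RPAL path, and ~6,400 cycles for the 
2.79 usecs/op pipe result.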

The scheduler parts of your series simply strip out much of the 
scheduler and context-switching functionality to create a special 
fast path with no FPU context switching and no TLB flushing, AFAICS, 
in essence for the purposes of message-latency benchmarking. You then 
compare that against the full scheduling and MM context-switching 
costs of full-blown Linux processes.

I'm not convinced, at all, that this many changes are required to speed 
up the use case you are trying to optimize:

  >  61 files changed, 9710 insertions(+), 4 deletions(-)

Nor am I convinced that 9,700 lines of *new* code implementing a 
parallel facility are needed, crudely wrapped in 1970s technology 
(#ifdefs), instead of optimizing/improving the facilities we already 
have...

So NAK for the scheduler bits, until proven otherwise (and presented in 
a clean fashion, which the current series is very far from).

I'll be the first one to acknowledge that our process and MM context 
switching overhead is too high and could be improved, and I have no 
objections against the general goal of improving Linux inter-process 
messaging performance either; I only NAK this particular 
implementation/approach.

Thanks,

	Ingo



