

1
Performance measurement of DC software at
Japanese testbed
  • Yoji Hasegawa, Yoshiji Yasu,
  • Yasushi Nagasaka and Mako Shimojima
  • Contents
  • Setup
  • 1x1 system
  • 3x1 system
  • 6x1 system
  • 13x1 system
  • 1x25 system
  • 25x1 system
  • 13x13 system
  • Summary

2
Setup
3
  • 32 x dual 1.4 GHz Pentium III with Gigabit
    Ethernet NIC (Intel 82545EM, e1000 driver)
  • Gigabit Ethernet switch (BlackDiamond 6816 with
    4 G8Ti modules)
  • Red Hat 7.3 with gcc 2.95.3
  • Linux kernel 2.4.18-19 with HZ=4096 parameter
  • DC software: DC-00-02-02
  • Online Software: Online-00-17-02

4
Configuration
  • 1x1: a RootController, a DCcontroller and an L2SV
    on a PC; a DFM on a PC; a ROS on a PC; an SFI on
    a PC
  • 3x1: a RootController, a DCcontroller and an L2SV
    on a PC; a DFM on a PC; 3 ROSs on 3 PCs (one
    each); an SFI on a PC
  • 6x1: a RootController, a DCcontroller and an L2SV
    on a PC; a DFM on a PC; 6 ROSs on 6 PCs (one
    each); an SFI on a PC
  • 13x1: a RootController, a DCcontroller and an
    L2SV on a PC; a DFM on a PC; 13 ROSs on 13 PCs
    (one each); an SFI on a PC
  • 1x25: a RootController, a DCcontroller and an
    L2SV on a PC; a DFM on a PC; a ROS on a PC; 25
    SFIs on 25 PCs (one each)
  • 25x1: a RootController, a DCcontroller and an
    L2SV on a PC; a DFM on a PC; 25 ROSs on 25 PCs
    (one each); an SFI on a PC
  • 13x13: a RootController, a DCcontroller and an
    L2SV on a PC; a DFM on a PC; 13 ROSs on 13 PCs
    (one each); 13 SFIs on 13 PCs (one each)
5
DC Parameters
  • ROSE: Number of RoBs = 1; robDataSize = variable
  • SFI: MaxNumberOfAssembledEventsHigh = 50;
    MaxNumberOfAssembledEventsLow = 45;
    DefaultCredits = 10; TimeoutSetup = 500;
    TrafficShapingParameter = 30
  • DFM: Credits = 3; Timeout = 5000;
    MaxAssignQueueSize = 1000; ClearGroupSize = 50
  • LVL2: decision group = one LVL2 accept per
    decision
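The two SFI watermarks above form a hysteresis band: assembly of new events stops at the high mark and resumes at the low mark. As an illustration only, the table can be rendered as a hypothetical Python mapping (the real DC-00-02-02 configuration format is not shown in these slides) with a sanity check on the watermarks:

```python
# Hypothetical key/value rendering of the DC parameters above;
# the actual DC software configuration syntax may differ.
dc_params = {
    "ROSE": {"NumberOfRoBs": 1, "robDataSize": None},  # varied per run
    "SFI": {
        "MaxNumberOfAssembledEventsHigh": 50,
        "MaxNumberOfAssembledEventsLow": 45,
        "DefaultCredits": 10,
        "TimeoutSetup": 500,
        "TrafficShapingParameter": 30,
    },
    "DFM": {
        "Credits": 3,
        "Timeout": 5000,
        "MaxAssignQueueSize": 1000,
        "ClearGroupSize": 50,
    },
}

# Hysteresis band: the SFI stops taking new events at 50 assembled
# events and resumes at 45, so High must exceed Low.
sfi = dc_params["SFI"]
assert sfi["MaxNumberOfAssembledEventsHigh"] > sfi["MaxNumberOfAssembledEventsLow"]
```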
6
1x1 system
7
(figure-only slide)
8
Results from the 1x1 system
  • Performed push and pull scenarios with similar
    setups.
  • Push slightly better than pull!
  • Almost NO re-asks observed.
  • EoE rates dropped quickly with input trigger
    rates above 30-40 kHz.
  • EoE rates depend on kernel buffer size (default
    64 kB vs. Piotr's 8 MB); see next slide.
  • - EoE rates in the pull scenario were unstable
    with the default values
  • - in the push scenario, performance is good in
    either case
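The kernel buffers compared above are the per-socket send/receive buffers. As a minimal sketch (not the DC software's actual code), a process can request larger buffers with `setsockopt` and must read back what the kernel actually granted, since the request is capped by the `net.core.rmem_max` / `net.core.wmem_max` sysctls:

```python
import socket

def set_socket_buffers(sock, size):
    """Request larger kernel send/receive buffers on a socket and
    return the sizes the kernel actually granted (the kernel caps
    the request at net.core.rmem_max / net.core.wmem_max)."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, size)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, size)
    rcv = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    snd = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
    return rcv, snd

# Request the 8 MB used in the measurements instead of the 64 kB default.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rcv, snd = set_socket_buffers(s, 8 * 1024 * 1024)
s.close()
```

Raising the sysctl limits themselves requires root, which is why the two configurations were compared system-wide rather than per process.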

9
(figure-only slide)
10
Results from the 1x1 system
  • Kernel buffer size (default vs. Piotr's value)

11
3x1 system
12
3x1
13
Results from the 3x1 system
  1. Performed pull scenarios with similar setups.
  2. A plateau was observed in the EoE rate after it
    reaches its peak of 7 kHz.
  3. Results consistent with Magali's measurements.

14
6x1 system
15
(figure-only slide)
16
Results from the 6x1 system
  1. With the default kernel buffer sizes of 64 KB,
    push behaves rather badly.
  2. Pull seems to be already as good with default
    values as with increased values.
  3. Consistent with our past experiences at CERN.

17
13x1 system
18
(figure-only slide)
19
Results from the 13x1 system
  1. Push and pull show similar performance with a
    robDataSize of 2.5 kB
  • - even with multicast (SFI assignment) from
    DFM to ROSE in the push scenario
  2. EoE rates in push drop when robDataSize is
    increased
  • - lots of re-asks observed
  • - caused by network congestion between ROSE
    and SFI, or kernel buffer overflow at the SFI?
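The congestion hypothesis can be checked with back-of-envelope arithmetic: in an Nx1 push, all N ROSs send their fragments of an event concurrently into the single SFI's Gigabit port, so the link rate bounds the achievable event rate. A sketch, counting payload only and ignoring Ethernet/IP overhead (`sfi_link_limit_hz` is a name introduced here, not part of the DC software):

```python
def sfi_link_limit_hz(n_ros, rob_data_size_bytes, link_bps=1e9):
    """Event rate at which n_ros ROSs, each contributing one RoB
    fragment per event, saturate a single SFI's Gigabit link.
    Payload only; protocol overhead lowers the real ceiling further."""
    bits_per_event = n_ros * rob_data_size_bytes * 8
    return link_bps / bits_per_event

# 13x1 system with robDataSize = 2.5 kB: roughly a 3.8 kHz ceiling
# on the SFI link, so larger robDataSize pushes the ceiling down fast.
print(round(sfi_link_limit_hz(13, 2500)))  # 3846
```

This only bounds the sustained rate; the bursty many-to-one arrival pattern can overflow switch or SFI socket buffers well below this ceiling, which is consistent with the observed re-asks.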

20
1x25 system
21
(figure-only slide)
22
Results from the 1x25 system
  1. Performed push and pull scenarios with similar
    setups.
  2. Push slightly better than pull!
  3. Almost NO re-asks observed.
  4. EoE rates dropped quickly with input trigger
    rates above 30-40 kHz.

23
25x1 system
24
(figure-only slide)
25
Results from the 25x1 system
  1. Push and pull show similar performance with a
    robDataSize of 256 (1 kB)
  • - even with multicast (SFI assignment) from
    DFM to ROSE in the push scenario
  2. EoE rates in push drop when robDataSize is
    increased
  • - lots of re-asks observed
  • - caused by network congestion between ROSE
    and SFI, or kernel buffer overflow at the SFI?
  3. No performance degradation in the pull scenario
    at any robDataSize.

26
13x13 system
27
(figure-only slide)
28
Results from the 13x13 system
  1. Performed push and pull scenarios with similar
    setups.
  2. Push slightly better than pull!
  3. Almost NO re-asks observed.
  4. EoE rates dropped quickly with input trigger
    rates above 30-40 kHz.
  5. Was there no network congestion from ROSE to SFI
    in the push scenario, and no kernel buffer
    overflow at the SFI? Did the congestion-avoidance
    algorithm work?

29
Summary
  • Except for the following point, the push scenario
    performed better than the pull scenario:
  • - EoE rates in the push scenario dropped when
    robDataSize was increased on the Nx1 systems.
  • We still have to investigate the following points:
  • - Why the EoE rate in the push scenario performed
    poorly at large robDataSize on the Nx1 systems.
    Network congestion from ROSEs to SFIs? Or kernel
    buffer overflow at the SFI?
  • - How to improve the EoE rate in the push scenario
    at large robDataSize. We will investigate QoS as
    one approach.
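One common QoS mechanism, offered here only as an illustration since the slides do not say which mechanism will be investigated, is DSCP marking: senders tag packets with a traffic class that the switch can map to prioritized egress queues. A minimal Python sketch of marking a socket Expedited Forwarding (DSCP 46):

```python
import socket

# DSCP occupies the upper six bits of the IP TOS byte;
# EF (Expedited Forwarding) is DSCP 46, i.e. TOS 184.
EF_TOS = 46 << 2

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
s.close()
```

Whether marking control traffic (e.g. DFM assignments) above bulk event data would actually relieve the Nx1 push degradation is exactly what the proposed QoS investigation would have to measure.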

30
Acknowledgments
Thanks to Mashimo-san, Matsumoto-san, Sakamoto-san
and the staff of ICEPP for supporting our
measurements. Thanks to Hanspeter, Christian and
other DC staff for their help. Thanks to Beniamino
and Magali for their measurements.