L2TP Performance

haniaro
Posts: 7
Joined: 29 Dec 2019, 14:18

L2TP Performance

Post by haniaro » 16 Feb 2020, 14:54

:D I am so happy because I finally got rid of our Cisco LNS.
I am running accel-ppp in my production environment; here is my machine info:
##########################
$ less /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
stepping : 2
microcode : 0x15
cpu MHz : 2399.381
cache size : 12288 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes hypervisor lahf_lm kaiser tsc_adjust arat
######################################################
$ htop
https://pasteboard.co/IUXCppn.png

########################################################
accel-ppp# show stat
uptime: 5.04:02:34
cpu: 3%
mem(rss/virt): 64876/388584 kB
core:
mempool_allocated: 18799111
mempool_available: 452023
thread_count: 4
thread_active: 1
context_count: 2097
context_sleeping: 0
context_pending: 0
md_handler_count: 4329
md_handler_pending: 0
timer_count: 6288
timer_pending: 0
sessions:
starting: 0
active: 2065
finishing: 0
l2tp:
tunnels:
starting: 0
active: 24
finishing: 0
sessions (control channels):
starting: 0
active: 2065
finishing: 0
sessions (data channels):
starting: 0
active: 2065
finishing: 0
radius(1, 192.168.2.X):
state: active
fail count: 0
request count: 0
queue length: 0
auth sent: 129970
auth lost(total/5m/1m): 4624/0/0
auth avg query time(5m/1m): 42/117 ms
acct sent: 217080
acct lost(total/5m/1m): 3835/0/0
acct avg query time(5m/1m): 2/2 ms
interim sent: 491519
interim lost(total/5m/1m): 2052/0/0
interim avg query time(5m/1m): 2/2 ms

#####################################################

thank you Dimka88

dimka88
Posts: 649
Joined: 13 Oct 2014, 05:51
Contact:

Re: L2TP Performance

Post by dimka88 » 16 Feb 2020, 16:31

Hi, nice result! But I'm sure you can get more performance with better IRQ balancing. In the screenshot I saw that the first CPU core is loaded much more than the second one. Can you provide the output of `cat /proc/interrupts`?
For better profiling, please also provide a screenshot of the `perf top` command.

haniaro
Posts: 7
Joined: 29 Dec 2019, 14:18

Re: L2TP Performance

Post by haniaro » 16 Feb 2020, 17:24

Yes, I noticed that difference between the first CPU and the second. How can I fix that? :?

# cat /proc/interrupts
CPU0 CPU1
0: 13 0 IO-APIC 2-edge timer
1: 201 0 IO-APIC 1-edge i8042
8: 1 0 IO-APIC 8-edge rtc0
9: 0 0 IO-APIC 9-fasteoi acpi
12: 3814 0 IO-APIC 12-edge i8042
14: 0 0 IO-APIC 14-edge ata_piix
15: 0 0 IO-APIC 15-edge ata_piix
16: 1 0 IO-APIC 16-fasteoi vmwgfx
18: 60 0 IO-APIC 18-fasteoi uhci_hcd:usb1
19: 0 0 IO-APIC 19-fasteoi ehci_hcd:usb2
24: 0 0 PCI-MSI 344064-edge PCIe PME, pciehp
25: 0 0 PCI-MSI 346112-edge PCIe PME, pciehp
26: 0 0 PCI-MSI 348160-edge PCIe PME, pciehp
27: 0 0 PCI-MSI 350208-edge PCIe PME, pciehp
28: 0 0 PCI-MSI 352256-edge PCIe PME, pciehp
29: 0 0 PCI-MSI 354304-edge PCIe PME, pciehp
30: 0 0 PCI-MSI 356352-edge PCIe PME, pciehp
31: 0 0 PCI-MSI 358400-edge PCIe PME, pciehp
32: 0 0 PCI-MSI 360448-edge PCIe PME, pciehp
33: 0 0 PCI-MSI 362496-edge PCIe PME, pciehp
34: 0 0 PCI-MSI 364544-edge PCIe PME, pciehp
35: 0 0 PCI-MSI 366592-edge PCIe PME, pciehp
36: 0 0 PCI-MSI 368640-edge PCIe PME, pciehp
37: 0 0 PCI-MSI 370688-edge PCIe PME, pciehp
38: 0 0 PCI-MSI 372736-edge PCIe PME, pciehp
39: 0 0 PCI-MSI 374784-edge PCIe PME, pciehp
40: 0 0 PCI-MSI 376832-edge PCIe PME, pciehp
41: 0 0 PCI-MSI 378880-edge PCIe PME, pciehp
42: 0 0 PCI-MSI 380928-edge PCIe PME, pciehp
43: 0 0 PCI-MSI 382976-edge PCIe PME, pciehp
44: 0 0 PCI-MSI 385024-edge PCIe PME, pciehp
45: 0 0 PCI-MSI 387072-edge PCIe PME, pciehp
46: 0 0 PCI-MSI 389120-edge PCIe PME, pciehp
47: 0 0 PCI-MSI 391168-edge PCIe PME, pciehp
48: 0 0 PCI-MSI 393216-edge PCIe PME, pciehp
49: 0 0 PCI-MSI 395264-edge PCIe PME, pciehp
50: 0 0 PCI-MSI 397312-edge PCIe PME, pciehp
51: 0 0 PCI-MSI 399360-edge PCIe PME, pciehp
52: 0 0 PCI-MSI 401408-edge PCIe PME, pciehp
53: 0 0 PCI-MSI 403456-edge PCIe PME, pciehp
54: 0 0 PCI-MSI 405504-edge PCIe PME, pciehp
55: 0 0 PCI-MSI 407552-edge PCIe PME, pciehp
56: 876209 0 PCI-MSI 1572864-edge vmw_pvscsi
57: 808408443 0 PCI-MSI 5767168-edge ens192-rxtx-0
58: 2904371885 0 PCI-MSI 5767169-edge ens192-rxtx-1
59: 0 0 PCI-MSI 5767170-edge ens192-event-2
60: 3947356100 0 PCI-MSI 9961472-edge ens224-rxtx-0
61: 167983604 0 PCI-MSI 9961473-edge ens224-rxtx-1
62: 0 0 PCI-MSI 9961474-edge ens224-event-2
63: 358766 0 PCI-MSI 1097728-edge ahci[0000:02:03.0]
64: 0 0 PCI-MSI 129024-edge vmw_vmci
65: 0 0 PCI-MSI 129025-edge vmw_vmci
NMI: 0 0 Non-maskable interrupts
LOC: 156116891 53284050 Local timer interrupts
SPU: 0 0 Spurious interrupts
PMI: 0 0 Performance monitoring interrupts
IWI: 1 0 IRQ work interrupts
RTR: 0 0 APIC ICR read retries
RES: 119267159 198539217 Rescheduling interrupts
CAL: 47327 47953 Function call interrupts
TLB: 10084 8833 TLB shootdowns
TRM: 0 0 Thermal event interrupts
THR: 0 0 Threshold APIC interrupts
DFR: 0 0 Deferred Error APIC interrupts
MCE: 0 0 Machine check exceptions
MCP: 2325 2324 Machine check polls
ERR: 0
MIS: 0
PIN: 0 0 Posted-interrupt notification event
PIW: 0 0 Posted-interrupt wakeup event

dimka88
Posts: 649
Joined: 13 Oct 2014, 05:51
Contact:

Re: L2TP Performance

Post by dimka88 » 16 Feb 2020, 20:14

Try executing the following to balance the NIC interrupts:

Code:

echo 2 > /proc/irq/58/smp_affinity
echo 2 > /proc/irq/61/smp_affinity

I think after this small tuning the balance between the two cores will be better. You can see the result when you run `htop`, or `top` and press 1.
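In case the value 2 looks magic: `smp_affinity` takes a hexadecimal CPU bitmask, where bit N set means the IRQ may be delivered to CPU N (this is standard Linux `/proc/irq` behavior, not specific to accel-ppp). A small sketch; `mask_for_cpu` is just an illustrative helper, not a real tool:

```shell
# smp_affinity is a hex CPU bitmask: bit N set = IRQ allowed on CPU N.
# So CPU0 -> 1, CPU1 -> 2, both cores -> 3.
mask_for_cpu() { printf '%x\n' $((1 << $1)); }

mask_for_cpu 0   # prints 1
mask_for_cpu 1   # prints 2 (the value written above for ens192-rxtx-1 and ens224-rxtx-1)

# Read back the current mask (no root needed for reading):
# cat /proc/irq/58/smp_affinity
```

Afterwards, confirm in `/proc/interrupts` that the counters for IRQs 58 and 61 start increasing in the CPU1 column. Note the setting does not survive a reboot, so you would normally put it in a startup script or use irqbalance.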

lbw
Posts: 18
Joined: 09 Mar 2019, 00:16

Re: L2TP Performance

Post by lbw » 18 Feb 2020, 04:15

I posted some improvements for L2TP on GitHub as well, covering both the CLI and performance; they might be of assistance to you. I've found that the only real performance bottleneck of accel-ppp is the hardware's ability to handle enough network interrupts, which is great!
