PPPoE throughput

PPPoE related questions
proca
Posts: 14
Joined: 07 Dec 2018, 17:24

PPPoE throughput

Post by proca »

Hello world,

First of all, I want to congratulate the coders for this fine piece of software; it looks very promising.
I want to migrate from rp-pppoe-based Linux aggregators to accel-ppp, and I think I encountered a throughput issue through the PPPoE sessions during my tests.

Test setup:

Uplink router <-2x1Gbond->ACCEL-PPP<-2x1Gbond->PPPoE_LAN<->TEST_VM

The server seems to run fine at first, but after ~1 day with 300 sessions connected, throughput through the PPPoE tunnel is significantly lower than if I just run plain IP between the hosts.

session:
root@pppoe-1:/home/proca# ppp show sessions ifname,username,state,called-sid,ip | grep procatest2
ppp317 | procatest2 | active | bond1.18 | 89.35.50.93

static ip on interface:
root@pppoe-1:/home/proca# ip addr | grep bond1.18
11465: bond1.18@bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
inet 89.35.50.201/30 brd 89.35.50.203 scope global bond1.18

testvm session:

root@allview:~# ip addr show ppp0
144: ppp0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1492 qdisc pfifo_fast state UNKNOWN group default qlen 3
link/ppp
inet 89.35.50.93 peer 10.0.0.1/32 scope global ppp0
valid_lft forever preferred_lft forever

testvm staticip:
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:45:24:77 brd ff:ff:ff:ff:ff:ff
inet 89.35.50.202/30 scope global ens192
valid_lft forever preferred_lft forever

test via ppp:
root@allview:~# ip route get 89.149.0.26
89.149.0.26 dev ppp0 src 89.35.50.93
cache
root@allview:~# wget -O /dev/null http://speed.ines.ro/lg/1000MB.test
--2018-12-07 18:50:02-- http://speed.ines.ro/lg/1000MB.test
Resolving speed.ines.ro (speed.ines.ro)... 89.149.0.26, 2a02:2a00::1a
Connecting to speed.ines.ro (speed.ines.ro)|89.149.0.26|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1000000000 (954M)
Saving to: ‘/dev/null’

/dev/null 100%[=======================================================================================================================================>] 953.67M 50.7MB/s in 18s

2018-12-07 18:50:20 (53.1 MB/s) - ‘/dev/null’ saved [1000000000/1000000000]

test via direct ip:
root@allview:~# wget -O /dev/null http://speed.ines.ro/lg/1000MB.test
--2018-12-07 18:51:29-- http://speed.ines.ro/lg/1000MB.test
Resolving speed.ines.ro (speed.ines.ro)... 89.149.0.26, 2a02:2a00::1a
Connecting to speed.ines.ro (speed.ines.ro)|89.149.0.26|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1000000000 (954M)
Saving to: ‘/dev/null’

/dev/null 100%[=======================================================================================================================================>] 953.67M 99.9MB/s in 9.6s

2018-12-07 18:51:38 (99.7 MB/s) - ‘/dev/null’ saved [1000000000/1000000000]

So throughput roughly doubles when I bypass the PPP session; the only obvious difference is the MTU.
To test that, I lowered the test client's MTU to 1492 (the PPP MTU dropped to 1484) and still got the same ~100 MB/s over direct IP, so the MTU alone does not explain it.
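For reference, the client-side MTU change is a one-liner (ens192 as shown above):

ip link set dev ens192 mtu 1492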

Any idea why I might get lower throughput over PPP?
I should mention that I do not use any shaper functions. The OS on both devices is Debian 9 with the stock kernel.

Attached is a file with:

- accel-ppp config
- one run of mpstat -P ALL 5
- accel-cmd show stat
- sysctl adaptations
- lspci -vv for NIC


Any ideas why there is this difference in throughput?
Attachments
info.zip
(10.17 KiB)
dimka88
Posts: 866
Joined: 13 Oct 2014, 05:51

Re: PPPoE throughput

Post by dimka88 »

Hi, look at how the interrupts are distributed and enable RPS.
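For example, something along these lines (assuming the NIC is eth0; with 16 CPUs the ffff mask covers CPUs 0-15):

# check how the NIC's interrupts are spread across CPUs
grep eth0 /proc/interrupts
# steer received packets across all 16 CPUs (repeat for each rx-N queue)
echo ffff > /sys/class/net/eth0/queues/rx-0/rps_cpus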
proca
Posts: 14
Joined: 07 Dec 2018, 17:24

Re: PPPoE throughput

Post by proca »

Hey Dimka88,

Thanks for the fast reply.
I only recently read about RPS and will try to enable it now.

I presume this will make my manual SMP affinity binds useless? For reference, those binds look roughly like the sketch below.
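(The IRQ numbers here are just examples; the real ones come from /proc/interrupts.)

# find the NIC's IRQ numbers
grep eth /proc/interrupts
# pin one IRQ per CPU with a hex CPU mask
echo 1 > /proc/irq/45/smp_affinity   # IRQ 45 -> CPU0
echo 2 > /proc/irq/46/smp_affinity   # IRQ 46 -> CPU1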

Thanks
dimka88
Posts: 866
Joined: 13 Oct 2014, 05:51

Re: PPPoE throughput

Post by dimka88 »

I think SMP affinity is used for L3 (packet) traffic; you need RPS and SMP affinity together.
proca
Posts: 14
Joined: 07 Dec 2018, 17:24

Re: PPPoE throughput

Post by proca »

OK, so I need to enable RPS for the PPP interfaces?
dimka88
Posts: 866
Joined: 13 Oct 2014, 05:51

Re: PPPoE throughput

Post by dimka88 »

For the NIC.
proca
Posts: 14
Joined: 07 Dec 2018, 17:24

Re: PPPoE throughput

Post by proca »

Anything other than the below for 16 CPUs?

echo ffff > /sys/class/net/[ifname]/queues/rx-[0-7]/rps_cpus
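i.e., expanded as a loop over the queues, with eth0 standing in for the real ifname:

for q in /sys/class/net/eth0/queues/rx-*; do
    echo ffff > "$q/rps_cpus"
done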
dimka88
Posts: 866
Joined: 13 Oct 2014, 05:51

Re: PPPoE throughput

Post by dimka88 »

Also enable RPS on your client VM and test the PPPoE speed again.
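On the test VM that would be something like the line below, assuming a single rx queue on ens192; adjust the mask to the VM's vCPU count:

echo f > /sys/class/net/ens192/queues/rx-0/rps_cpus   # f = CPUs 0-3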