High CPU Usage not balancing IRQ

Questions related to general functionality
catweb
Posts: 9
Joined: 07 Mar 2018, 14:35

High CPU Usage not balancing IRQ

Post by catweb » 15 May 2018, 10:47

Hello everybody, could anyone help me?

I'm using Debian Stretch.

CPU usage is high on only one core; the load is not balancing across the others.

Code:

/sys/class/net/eth0.2001/queues# cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
  0:          9          0          0          0   IO-APIC   2-edge      timer
  1:         10          0          0          0   IO-APIC   1-edge      i8042
  8:          1          0          0          0   IO-APIC   8-edge      rtc0
  9:          0          0          0          0   IO-APIC   9-fasteoi   acpi
 12:         16          0          0          0   IO-APIC  12-edge      i8042
 14:          0          0          0          0   IO-APIC  14-edge      ata_piix
 15:          0          0          0          0   IO-APIC  15-edge      ata_piix
 16:          1          0          0          0   IO-APIC  16-fasteoi   vmwgfx
 18:         64          0          0          0   IO-APIC  18-fasteoi   uhci_hcd:usb1
 19:          0          0          0          0   IO-APIC  19-fasteoi   ehci_hcd:usb2
 24:          0          0          0          0   PCI-MSI 344064-edge      PCIe PME, pciehp
 25:          0          0          0          0   PCI-MSI 346112-edge      PCIe PME, pciehp
 26:          0          0          0          0   PCI-MSI 348160-edge      PCIe PME, pciehp
 27:          0          0          0          0   PCI-MSI 350208-edge      PCIe PME, pciehp
 28:          0          0          0          0   PCI-MSI 352256-edge      PCIe PME, pciehp
 29:          0          0          0          0   PCI-MSI 354304-edge      PCIe PME, pciehp
 30:          0          0          0          0   PCI-MSI 356352-edge      PCIe PME, pciehp
 31:          0          0          0          0   PCI-MSI 358400-edge      PCIe PME, pciehp
 32:          0          0          0          0   PCI-MSI 360448-edge      PCIe PME, pciehp
 33:          0          0          0          0   PCI-MSI 362496-edge      PCIe PME, pciehp
 34:          0          0          0          0   PCI-MSI 364544-edge      PCIe PME, pciehp
 35:          0          0          0          0   PCI-MSI 366592-edge      PCIe PME, pciehp
 36:          0          0          0          0   PCI-MSI 368640-edge      PCIe PME, pciehp
 37:          0          0          0          0   PCI-MSI 370688-edge      PCIe PME, pciehp
 38:          0          0          0          0   PCI-MSI 372736-edge      PCIe PME, pciehp
 39:          0          0          0          0   PCI-MSI 374784-edge      PCIe PME, pciehp
 40:          0          0          0          0   PCI-MSI 376832-edge      PCIe PME, pciehp
 41:          0          0          0          0   PCI-MSI 378880-edge      PCIe PME, pciehp
 42:          0          0          0          0   PCI-MSI 380928-edge      PCIe PME, pciehp
 43:          0          0          0          0   PCI-MSI 382976-edge      PCIe PME, pciehp
 44:          0          0          0          0   PCI-MSI 385024-edge      PCIe PME, pciehp
 45:          0          0          0          0   PCI-MSI 387072-edge      PCIe PME, pciehp
 46:          0          0          0          0   PCI-MSI 389120-edge      PCIe PME, pciehp
 47:          0          0          0          0   PCI-MSI 391168-edge      PCIe PME, pciehp
 48:          0          0          0          0   PCI-MSI 393216-edge      PCIe PME, pciehp
 49:          0          0          0          0   PCI-MSI 395264-edge      PCIe PME, pciehp
 50:          0          0          0          0   PCI-MSI 397312-edge      PCIe PME, pciehp
 51:          0          0          0          0   PCI-MSI 399360-edge      PCIe PME, pciehp
 52:          0          0          0          0   PCI-MSI 401408-edge      PCIe PME, pciehp
 53:          0          0          0          0   PCI-MSI 403456-edge      PCIe PME, pciehp
 54:          0          0          0          0   PCI-MSI 405504-edge      PCIe PME, pciehp
 55:          0          0          0          0   PCI-MSI 407552-edge      PCIe PME, pciehp
 56:   19567526          0          0          0   PCI-MSI 5767168-edge      eth0-rxtx-0
 57:    8335140          0          0          0   PCI-MSI 5767169-edge      eth0-rxtx-1
 58:    8442909          0          0          0   PCI-MSI 5767170-edge      eth0-rxtx-2
 59:    8356847          0          0          0   PCI-MSI 5767171-edge      eth0-rxtx-3
 60:          0          0          0          0   PCI-MSI 5767172-edge      eth0-event-4
 66:   12747717          0          0          0   PCI-MSI 14155776-edge      eth4-rxtx-0
 67:   13007137          0          0          0   PCI-MSI 14155777-edge      eth4-rxtx-1
 68:   12870746          0          0          0   PCI-MSI 14155778-edge      eth4-rxtx-2
 69:   13032285          0          0          0   PCI-MSI 14155779-edge      eth4-rxtx-3
 70:          0          0          0          0   PCI-MSI 14155780-edge      eth4-event-4
 71:      22268          0          0          0   PCI-MSI 1572864-edge      vmw_pvscsi
 72:       2386          0          0          0   PCI-MSI 1097728-edge      ahci[0000:02:03.0]
 73:          0          0          0          0   PCI-MSI 129024-edge      vmw_vmci
 74:          0          0          0          0   PCI-MSI 129025-edge      vmw_vmci
NMI:          0          0          0          0   Non-maskable interrupts
LOC:   23950209    2027132    1761533    1808457   Local timer interrupts
SPU:          0          0          0          0   Spurious interrupts
PMI:          0          0          0          0   Performance monitoring interrupts
IWI:          2          1          0          0   IRQ work interrupts
RTR:          0          0          0          0   APIC ICR read retries
RES:     478799     746756     725165     730054   Rescheduling interrupts
CAL:      46474    1965965    1949109    1935292   Function call interrupts
TLB:      37895      54853      52594      52308   TLB shootdowns
TRM:          0          0          0          0   Thermal event interrupts
THR:          0          0          0          0   Threshold APIC interrupts
DFR:          0          0          0          0   Deferred Error APIC interrupts
MCE:          0          0          0          0   Machine check exceptions
MCP:         17         17         17         17   Machine check polls
ERR:          0
MIS:          0
PIN:          0          0          0          0   Posted-interrupt notification event
PIW:          0          0          0          0   Posted-interrupt wakeup event

Code:

root@bras-1:# cat /proc/irq/56/smp_affinity
f
root@bras-1:#
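(Editor's aside: `smp_affinity` is a hexadecimal bitmask of the CPUs allowed to service the IRQ — bit 0 is CPU0, bit 1 is CPU1, and so on. So `f` = binary 1111 = all four cores allowed, which is the default; the kernel *may* deliver the IRQ anywhere, but in practice, without irqbalance or a static mask, it often keeps landing everything on CPU0, as the counters above show. A quick sketch of how masks map to cores:)

```shell
# smp_affinity is a hex bitmask of CPUs allowed to service the IRQ:
# bit 0 = CPU0, bit 1 = CPU1, bit 2 = CPU2, bit 3 = CPU3.
printf '%x\n' $((1 << 0))                                # CPU0 only -> 1
printf '%x\n' $((1 << 3))                                # CPU3 only -> 8
printf '%x\n' $(( (1<<0) | (1<<1) | (1<<2) | (1<<3) ))   # all four  -> f
```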
top shows this CPU usage (note the high softirq time, `si`, concentrated on Cpu0):

Code:

top - 06:40:25 up  1:22,  1 user,  load average: 0.62, 0.54, 0.55
Tasks: 166 total,   2 running, 164 sleeping,   0 stopped,   0 zombie
%Cpu0  :  1.3 us,  2.5 sy,  0.0 ni, 40.5 id,  0.0 wa,  0.0 hi, 55.7 si,  0.0 st
%Cpu1  :  2.0 us,  6.9 sy,  0.0 ni, 51.5 id,  0.0 wa,  0.0 hi, 39.6 si,  0.0 st
%Cpu2  :  2.2 us,  4.5 sy,  0.0 ni, 68.5 id,  0.0 wa,  0.0 hi, 24.7 si,  0.0 st
%Cpu3  :  2.2 us,  7.6 sy,  0.0 ni, 55.4 id,  0.0 wa,  0.0 hi, 34.8 si,  0.0 st
KiB Mem :  2052176 total,  1530168 free,   278116 used,   243892 buff/cache
KiB Swap:  2095100 total,  2095100 free,        0 used.  1604160 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    3 root      20   0       0      0      0 S   7.7  0.0   5:20.51 ksoftirqd/0
   16 root      20   0       0      0      0 S   5.8  0.0   0:23.88 ksoftirqd/1
   28 root      20   0       0      0      0 R   4.8  0.0   0:24.30 ksoftirqd/3
  863 root      20   0  346004  21656   4580 S   4.8  1.1   4:45.55 accel-pppd
 3607 www-data  20   0  262972  12440   6364 S   2.9  0.6   0:05.11 apache2
 4709 root      20   0   44924   3740   3068 R   2.9  0.2   0:00.16 top
   22 root      20   0       0      0      0 S   1.9  0.0   0:23.35 ksoftirqd/2
23053 www-data  20   0  262748  12012   5992 S   1.9  0.6   0:02.22 apache2
25273 www-data  20   0  262748  12012   5992 S   1.9  0.6   0:01.46 apache2
    7 root      20   0       0      0      0 S   1.0  0.0   0:15.64 rcu_sched
 2009 root      20   0       0      0      0 S   1.0  0.0   0:00.01 kworker/1:2
 3572 www-data  20   0  262972  12320   6244 S   1.0  0.6   0:05.54 apache2
 4478 www-data  20   0  262976  12440   6360 S   1.0  0.6   0:32.96 apache2
28704 www-data  20   0  262748  12012   5992 S   1.0  0.6   0:01.08 apache2
    1 root      20   0  206984   9164   5232 S   0.0  0.4   0:10.63 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kthreadd
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:0H
    8 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcu_bh
    9 root      rt   0       0      0      0 S   0.0  0.0   0:01.76 migration/0
   10 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 lru-add-drain
   11 root      rt   0       0      0      0 S   0.0  0.0   0:00.01 watchdog/0
   12 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/0

dimka88
Posts: 419
Joined: 13 Oct 2014, 05:51
Contact:

Re: High CPU Usage not balancing IRQ

Post by dimka88 » 16 May 2018, 08:54

Hi, try statically binding each IRQ to a core:

Code:

echo '8' > /proc/irq/56/smp_affinity
echo '4' > /proc/irq/57/smp_affinity
echo '2' > /proc/irq/58/smp_affinity
echo '1' > /proc/irq/59/smp_affinity

echo '8' > /proc/irq/66/smp_affinity
echo '4' > /proc/irq/67/smp_affinity
echo '2' > /proc/irq/68/smp_affinity
echo '1' > /proc/irq/69/smp_affinity
Then check the CPU load again in top.
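(Editor's aside: the echo lines above can be generated with a small loop. This is only a sketch — IRQ numbers 56-59 are specific to this machine, so read yours from /proc/interrupts first. It prints the commands rather than running them, so you can review before piping to `sh` as root. Note also that these writes do not survive a reboot, and a running irqbalance daemon may overwrite static assignments.)

```shell
#!/bin/sh
# Print one affinity command per NIC queue IRQ, assigning CPU3..CPU0 in
# turn (masks 8, 4, 2, 1). Pipe the output to `sh` as root to apply.
# NOTE: IRQ numbers 56-59 match this machine's /proc/interrupts;
# adjust them for your hardware.
cpu=3
for irq in 56 57 58 59; do
    printf 'echo %x > /proc/irq/%s/smp_affinity\n' $((1 << cpu)) "$irq"
    cpu=$((cpu - 1))
done
```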

