Bug 192 - [Anolis OS 8.4 Loongson Edition][4.19.190-3.an8.loongarch64] iproute-tc package traffic statistics feature fails, Error: Specified qdisc not found.
Status: VERIFIED FIXED
Alias: None
Product: Anolis OS 8
Classification: Anolis OS
Component: Images&Installations
Version: 8.4
Hardware: loongarch Linux
Importance: P3-Medium S2-major
Target Milestone: ---
Assignee: HFD
QA Contact: shuming
URL:
Whiteboard:
Keywords:
Duplicates: 193
Depends on:
Blocks:
 
Reported: 2021-12-23 20:33 UTC by xugangjian
Modified: 2022-01-21 16:38 UTC
CC List: 1 user

See Also:


Attachments

Description xugangjian alibaba_cloud_group 2021-12-23 20:33:42 UTC
[Problem Description]
On an Anolis OS 8.4 Loongson Edition environment with kernel 4.19.190-3.an8.loongarch64, testing the traffic control feature fails: the traffic statistics feature of the iproute-tc package reports Error: Specified qdisc not found.


[Test Environment]

[Kernel]
[root@localhost ~]# uname -r
4.19.190-3.an8.loongarch64

# cat /etc/os-release
NAME="Anolis OS"
VERSION="8.4"
ID="anolis"
ID_LIKE="rhel fedora centos"
VERSION_ID="8.4"
PLATFORM_ID="platform:an8"
PRETTY_NAME="Anolis OS 8.4"
ANSI_COLOR="0;31"
HOME_URL="https://openanolis.cn/"


# lscpu
Architecture:        loongarch64
Byte Order:          Little Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           1
NUMA node(s):        1
CPU family:          Loongson-64bit
Model name:          Loongson-3A5000LL
BogoMIPS:            4600.00
L1d cache:           64K
L1i cache:           64K
L2 cache:            256K
L3 cache:            16384K
NUMA node0 CPU(s):   0-3
Flags:               cpucfg lam ual fpu lsx lasx complex crypto lvz lbt_x86 lbt_arm lbt_mips


# free -h
              total        used        free      shared  buff/cache   available
Mem:           15Gi       1.1Gi        12Gi        21Mi       1.7Gi        12Gi
Swap:         7.9Gi          0B       7.9Gi


[Reproduction]

[root@localhost ~]# ifconfig
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 30.240.156.103  netmask 255.255.248.0  broadcast 30.240.159.255
        inet6 fe80::223:9eff:fe25:9234  prefixlen 64  scopeid 0x20<link>
        ether 00:23:9e:25:92:34  txqueuelen 1000  (Ethernet)
        RX packets 315411  bytes 329958882 (314.6 MiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 338455  bytes 32682450 (31.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 45  base 0x4000

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 140  bytes 30411 (29.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 140  bytes 30411 (29.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:ed:f0:af  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Run:
[root@localhost ~]# tc qdisc add dev enp3s0 root handle 1: htb
Error: Specified qdisc not found.
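
This error typically means the kernel cannot provide the requested qdisc (here, HTB via the sch_htb module). A diagnostic sketch, not part of the original report, assuming the kernel config is installed under /boot:

# Check whether HTB is built in (=y) or available as a module (=m)
# grep CONFIG_NET_SCH_HTB /boot/config-$(uname -r)

# Try loading the module explicitly and confirm it registered
# modprobe sch_htb
# lsmod | grep sch_htb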
Comment 1 shuming admin 2021-12-31 17:30:28 UTC
*** Bug 193 has been marked as a duplicate of this bug. ***
Comment 2 xugangjian alibaba_cloud_group 2022-01-07 11:42:08 UTC
Still present on the rc2 kernel:
[root@localhost ~]# tc qdisc add dev enp3s0 root handle 1: htb
Error: Specified qdisc not found.
[root@localhost ~]#
Comment 3 xugangjian alibaba_cloud_group 2022-01-18 11:49:47 UTC
The traffic statistics feature passes verification on the rc4 kernel:

[root@localhost ~]# rpm -qa | grep iproute-tc
iproute-tc-5.9.0-4.an8.loongarch64

# 1. Create a queue bound to the physical network device bond0, numbered 1:0
# tc qdisc add dev bond0 root handle 1: htb
[root@localhost ~]# tc qdisc add dev enp3s0 root handle 1: htb

# 2. Check the queue status
# tc qdisc show dev bond0
[root@localhost ~]# tc qdisc show dev enp3s0
qdisc htb 1: root refcnt 2 r2q 10 default 0 direct_packets_stat 115 direct_qlen 1000


The traffic statistics for device enp3s0 can now be queried.
Test passed.
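
To see the byte and packet counters kept by the qdisc itself (tc qdisc show only prints the parameters), tc accepts -s; a minimal sketch on the same device:

# Show per-qdisc traffic statistics (bytes, packets, drops, overlimits)
# tc -s qdisc show dev enp3s0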

[root@localhost ~]# ifconfig
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 30.240.156.103  netmask 255.255.248.0  broadcast 30.240.159.255
        inet6 fe80::223:9eff:fe25:9234  prefixlen 64  scopeid 0x20<link>
        ether 00:23:9e:25:92:34  txqueuelen 1000  (Ethernet)
        RX packets 30649  bytes 17848256 (17.0 MiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 25368  bytes 8027851 (7.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 45  base 0x4000

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 16518  bytes 1002948 (979.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16518  bytes 1002948 (979.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
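
For reference, the htb qdisc created above is normally used together with classes that carry the actual rate limits; a minimal sketch (not part of the original test, rate chosen arbitrarily):

# Attach a class with a 10 Mbit/s rate limit under the root htb qdisc
# tc class add dev enp3s0 parent 1: classid 1:1 htb rate 10mbit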
Comment 4 xugangjian alibaba_cloud_group 2022-01-18 12:07:41 UTC
Traffic control feature verification:

Using replace to configure the delay succeeded.


1. From another host, ping the test machine:
[xgj01228606@loginhost1.et15sqa /home/xgj01228606]
$ping 30.240.156.103
PING 30.240.156.103 (30.240.156.103) 56(84) bytes of data.
64 bytes from 30.240.156.103: icmp_seq=1 ttl=49 time=28.3 ms
64 bytes from 30.240.156.103: icmp_seq=2 ttl=49 time=28.2 ms



2. Setting the qdisc with add failed, so switch to replace to set a 100 ms delay (see the notes after step 5):
[root@localhost ~]# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc htb 1: dev enp3s0 root refcnt 2 r2q 10 default 0 direct_packets_stat 463 direct_qlen 1000
[root@localhost ~]#
[root@localhost ~]# tc qdisc replace dev enp3s0 root netem delay 100ms
[root@localhost ~]#

3. From the other host, ping the test machine again:
[xgj01228606@loginhost1.et15sqa /home/xgj01228606]
$ping 30.240.156.103
PING 30.240.156.103 (30.240.156.103) 56(84) bytes of data.
64 bytes from 30.240.156.103: icmp_seq=1 ttl=49 time=128 ms
64 bytes from 30.240.156.103: icmp_seq=2 ttl=49 time=128 ms
64 bytes from 30.240.156.103: icmp_seq=3 ttl=49 time=128 ms


4. The latency increases by the configured 100 ms, from about 28 ms to about 128 ms.
Restore the environment:
[root@localhost ~]# tc qdisc del dev enp3s0 root netem delay 100ms
[root@localhost ~]#

5. Ping the test machine again:
[xgj01228606@loginhost1.et15sqa /home/xgj01228606]
$ping 30.240.156.103
PING 30.240.156.103 (30.240.156.103) 56(84) bytes of data.
64 bytes from 30.240.156.103: icmp_seq=1 ttl=49 time=28.2 ms
64 bytes from 30.240.156.103: icmp_seq=2 ttl=49 time=28.2 ms
64 bytes from 30.240.156.103: icmp_seq=3 ttl=49 time=28.2 ms



[root@localhost ~]# uname -r
4.19.190-4.an8.loongarch64
[root@localhost ~]#
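
A few notes on the tc usage above, as hedged sketches rather than part of the original verification: tc qdisc add refuses to install a root qdisc while one already exists (here the htb qdisc left over from the earlier test), which is why replace was needed in step 2; deleting the root qdisc does not require repeating the netem parameters; and netem supports jitter and loss with the same syntax.

# add fails while a root qdisc is present; replace swaps it in one step
# tc qdisc replace dev enp3s0 root netem delay 100ms

# The short form is enough to remove the root qdisc
# tc qdisc del dev enp3s0 root

# Variants: 100 ms delay with 10 ms random jitter, or 1% random packet loss
# tc qdisc replace dev enp3s0 root netem delay 100ms 10ms
# tc qdisc replace dev enp3s0 root netem loss 1%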
Comment 5 xugangjian alibaba_cloud_group 2022-01-18 13:57:33 UTC
After clearing the previous settings, the add method works and the configuration behaves normally. Test passed; closing this bug.

[root@localhost ~]# uname -r
4.19.190-4.an8.loongarch64

1. Clear the previous settings:
[root@localhost ~]# tc qdisc del dev enp3s0 root netem delay 100ms
[root@localhost ~]#

2. Ping test:
[xgj01228606@loginhost1.et15sqa /home/xgj01228606]
$ping 30.240.156.103
PING 30.240.156.103 (30.240.156.103) 56(84) bytes of data.
64 bytes from 30.240.156.103: icmp_seq=1 ttl=49 time=28.2 ms
64 bytes from 30.240.156.103: icmp_seq=2 ttl=49 time=28.2 ms
64 bytes from 30.240.156.103: icmp_seq=3 ttl=49 time=28.2 ms
^C
--- 30.240.156.103 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 28.236/28.248/28.268/0.137 ms

3. Configure with add:
[root@localhost ~]# tc qdisc add dev enp3s0 root netem delay 100ms
[root@localhost ~]#

4. Ping test:
[xgj01228606@loginhost1.et15sqa /home/xgj01228606]
$ping 30.240.156.103
PING 30.240.156.103 (30.240.156.103) 56(84) bytes of data.
64 bytes from 30.240.156.103: icmp_seq=1 ttl=49 time=128 ms
64 bytes from 30.240.156.103: icmp_seq=2 ttl=49 time=128 ms
^C
--- 30.240.156.103 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 128.254/128.265/128.277/0.358 ms

[root@localhost ~]# date
Tue 18 Jan 2022 01:54:12 PM CST
[root@localhost ~]#
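
Taken together, the verification in this comment amounts to the following sequence; a standalone sketch, with the device name assumed to be enp3s0:

# Clear any existing root qdisc (ignore the error if none is installed)
# tc qdisc del dev enp3s0 root 2>/dev/null
# Install a 100 ms delay and confirm it takes effect (ping from another host)
# tc qdisc add dev enp3s0 root netem delay 100ms
# tc -s qdisc show dev enp3s0
# Restore the environment
# tc qdisc del dev enp3s0 root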