14.4. Ingress qdisc

All qdiscs discussed so far are egress qdiscs. Each interface, however,
can also have an ingress qdisc, which is not used to send packets out to
the network adaptor. Instead, it allows you to apply tc filters to
packets coming in over the interface, regardless of whether they have a
local destination or are to be forwarded.

As the tc filters contain a full Token Bucket Filter implementation, and
are also able to match on the kernel flow estimator, there is a lot of
functionality available. This effectively allows you to police incoming
traffic, before it even enters the IP stack.

14.4.1. Parameters & usage

The ingress qdisc itself does not require any parameters. It differs
from other qdiscs in that it does not occupy the root of a device.
Attach it like this:

# delete original
tc qdisc del dev eth0 ingress
tc qdisc del dev eth0 root

# add new qdisc and filter
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 \
   u32 match ip src 0.0.0.0/0 police rate 2048kbps \
   burst 1m drop flowid :1
tc qdisc add dev eth0 root tbf rate 2048kbps latency 50ms burst 1m
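
To check that this worked, the usual show commands apply (eth0 as in the
example above); with -s they also show how many packets the policer has
dropped:

tc qdisc show dev eth0                     # the ingress qdisc appears as ffff:
tc -s filter show dev eth0 parent ffff:    # the u32 filter and its policer stats
tc -s qdisc show dev eth0                  # per-qdisc packet and drop counters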

I played a bit with the ingress qdisc after seeing Patrick and Stef
talking about it and came up with a few notes and a few questions.

: The ingress qdisc itself has no parameters.  The only thing you can do
: is use the policers.  I have a link with a patch to extend this:
: http://www.cyberus.ca/~hadi/patches/action/ Maybe this can help.
:
: I have some more info about ingress in my mail files, but I have to
: sort it out and put it somewhere on docum.org.  I still haven't found
: the time to do so.

Regarding policers and the ingress qdisc: I had never used them before
today, but I have the following understanding.

About the ingress qdisc:

  - the ingress qdisc (known as "ffff:") can't have any child classes
    (hence the existence of IMQ)
  - the only thing you can do with the ingress qdisc is attach filters

About filtering on the ingress qdisc:

  - since there are no classes to which to direct the packets, the only
    reasonable option (reasonable, indeed!) is to drop the packets
  - with clever use of filtering, you can limit particular traffic
    signatures to particular uses of your bandwidth

Here's an example of using an ingress policer to limit inbound traffic
from a particular set of IPs on a per-IP basis.  In this case, traffic
from each of these source IPs is limited to a T1's worth of bandwidth
(1536kbit).  Note that this means that this host can still receive up to
4608kbit (3 x 1536kbit) worth of bandwidth from these three source IPs
alone.

# -- start of script
#! /bin/ash
#
# -- simulate a much smaller amount of bandwidth than the 100MBit interface
#
RATE=1536kbit
DEV=eth0
SOURCES="10.168.53.2/32 10.168.73.10/32 10.168.28.20/32"

# -- attach our ingress qdisc
#
tc qdisc add dev $DEV ingress

# -- cap bandwidth from particular source IPs
#

for SOURCE in $SOURCES ; do
    tc filter add dev $DEV parent ffff: protocol ip \
        u32 match ip src $SOURCE flowid :1          \
        police rate $RATE mtu 12k burst 10k drop
done

# -- end of script

Now, if you are using multiple public IPs on your masquerading/SNAT host,
you can use "u32 match ip dst $PER_IP" with a drop action to force a
particular rate on inbound traffic to that IP.
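
A minimal sketch of that idea (the device name, the addresses and the
rates below are made-up placeholders, not values from the post above):

# -- start of sketch
#! /bin/ash
#
# -- hypothetical example: police inbound traffic per public IP on a
# -- SNAT box.  Adjust DEV, the addresses and the rates to your setup.
DEV=eth0

tc qdisc add dev $DEV ingress

# -- 1mbit inbound for the first public IP
tc filter add dev $DEV parent ffff: protocol ip \
    u32 match ip dst 192.0.2.10/32 flowid :1    \
    police rate 1mbit burst 100k drop

# -- 512kbit inbound for the second public IP
tc filter add dev $DEV parent ffff: protocol ip \
    u32 match ip dst 192.0.2.11/32 flowid :1    \
    police rate 512kbit burst 50k drop
# -- end of sketch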

My entirely unquantified impression is that latency suffers as a result,
but traffic is indeed bandwidth limited.

Just a few notes of dissection:

tc filter add dev $DEV   # -- the usual beginnings
    parent ffff:           # -- the ingress qdisc itself
    protocol ip            # -- more preamble  | make sure to visit
    u32 match ip           # -- u32 classifier | http://lartc.org/howto/
    src $SOURCE            # -- could also be "dst $SOME_LOCAL_IP"
    flowid :1              # -- ??? (but it doesn't work without this)
    police rate $RATE      # -- put a policer here
    mtu 12k burst 10k      # -- ???
    drop                   # -- drop packets exceeding our police params

Maybe a guru or two out there (Stef?, Bert?, Jamal?, Werner?) can explain
why mtu needs to be larger than 1k (smaller values didn't work for me,
anyway) and also how these other parameters should be used.
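
My own (unverified) reading of the police parameters is that burst is the
size of the token bucket in bytes, i.e. how much traffic may pass at line
speed before the rate limit kicks in, and that mtu is the largest single
packet the policer will handle, with bigger packets always counted as
exceeding the rate.  That would explain why values below the link MTU
appear not to work.  A minimal sketch with those two parameters spelled
out (eth0 and the numbers are placeholders, not recommendations):

# -- rate:  long-term average rate the policer enforces
# -- burst: bucket size in bytes; how much may pass at line speed
# --        before the limit kicks in
# -- mtu:   largest single packet the policer will handle; larger
# --        packets always count as exceeding the rate, so keep it
# --        at or above the link MTU
tc filter add dev eth0 parent ffff: protocol ip \
    u32 match ip src 0.0.0.0/0 flowid :1        \
    police rate 1mbit burst 20k mtu 1500 drop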
