The relentless evolution of wireless networks forces network administrators to continually upgrade the wired segments that back their Wi-Fi deployments. With today's high-speed access points, the copper interfaces of access switches are becoming a bottleneck that caps throughput in the wireless segment. Hard as it is to believe, even a gigabit interface is sometimes no longer enough. At the same time, a wholesale move to 10GE will not pay off for quite a while yet, since the data rates offered by Wi-Fi access points are not expected to cross that mark any time soon.
Anticipating trouble in the wired segment once devices supporting 802.11ac Wave 2 arrive, Cisco Systems, together with a number of other vendors, founded the NBASE-T Alliance in 2014. The organization's goal is to develop Ethernet standards that carry traffic at 2.5 and 5 Gbit/s over the existing Category 5e and 6 cabling plant.
So what does the "second wave" of 802.11ac mean for network administrators? First, the maximum theoretical data rate rises to 6.8 Gbit/s. That is, of course, only a theoretical ceiling (real-world throughput is traditionally about half of it), and it is reachable only with the fastest clients sitting right next to the access point. The second improvement in 802.11ac Wave 2 is support for MU-MIMO, which distributes the available bandwidth across several concurrently active wireless clients far more efficiently: a 4×4 access point, for example, can serve two 2×2 clients simultaneously rather than one after the other, as before.
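The MU-MIMO gain can be pictured with a toy airtime model (our illustration, not part of the standard; the 866.7 Mbit/s figure is the nominal two-stream 80 MHz VHT peak rate):

```python
# Toy model (illustrative only): with single-user transmissions the AP
# serves clients one at a time, so N clients time-share the medium and
# the aggregate equals one client's rate; with MU-MIMO a 4x4 AP can
# serve two 2x2 clients in the same transmission, doubling the aggregate.

def aggregate_rate(rate_per_client_mbps, n_clients, mu_mimo):
    if mu_mimo:
        # Clients are served concurrently on separate spatial streams.
        return rate_per_client_mbps * n_clients
    # Clients are served sequentially; airtime is split between them.
    return rate_per_client_mbps

su = aggregate_rate(866.7, 2, mu_mimo=False)
mu = aggregate_rate(866.7, 2, mu_mimo=True)
print(su, mu)  # MU-MIMO doubles the aggregate in this idealized case
```

In practice the gain is smaller (sounding overhead, imperfect channel separation), but the direction of the effect is what matters for the wired uplink.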
Both Wave 2 improvements will drive utilization of the wired network segments significantly higher. The NBASE-T standards were developed precisely to remove these bottlenecks in today's L2 segments, letting administrators prepare for IEEE 802.11ac Wave 2 wireless gear at minimal cost. NBASE-T also supports Power over Ethernet, so access points, surveillance cameras, and other network devices can still be powered remotely.
In our test lab today is a Cisco Catalyst WS-C3560CX-8XPD-S switch (IOS 15.2(6)E) with two multigigabit interfaces. These interfaces support the following speeds: 100 Mbit/s, 1 Gbit/s, 2.5 Gbit/s, 5 Gbit/s, and 10 Gbit/s. The two multigigabit ports are, of course, not the switch's only interfaces: the 3560CX-8XPD also carries six Gigabit Ethernet ports and two SFP+ slots. All eight copper interfaces support PoE+, with a total power budget of 240 W.
Configuration
The Cisco 3560CX-8XPD carries four 10GE interfaces: Te1/0/1, Te1/0/2, Te1/0/7, and Te1/0/8; the last two are the ones with NBASE-T support.
fox_3560CX-8XPD#sho int status
Port Name Status Vlan Duplex Speed Type
Gi1/0/1 notconnect 1 auto auto 10/100/1000BaseTX
Gi1/0/2 notconnect 1 auto auto 10/100/1000BaseTX
Gi1/0/3 notconnect 1 auto auto 10/100/1000BaseTX
Gi1/0/4 notconnect 1 auto auto 10/100/1000BaseTX
Gi1/0/5 notconnect 1 auto auto 10/100/1000BaseTX
Gi1/0/6 notconnect 1 auto auto 10/100/1000BaseTX
Te1/0/7 connected 1 a-full 100 100/1G/2.5G/5G/10GBaseT
Te1/0/8 connected 1 a-full 100 100/1G/2.5G/5G/10GBaseT
Te1/0/1 notconnect 1 full 10G Not Present
Te1/0/2 notconnect 1 full 10G Not Present
The multigigabit interfaces require no additional configuration; even the speed can be negotiated automatically.
fox_3560CX-8XPD(config)#int te1/0/7
fox_3560CX-8XPD(config-if)#?
Interface configuration commands:
aaa Authentication, Authorization and Accounting.
access-session Access Session specific Interface Configuration Commands
arp Set arp type (arpa, probe, snap) or timeout or log options
auto Configure Automation
bandwidth Set bandwidth informational parameter
bfd BFD interface configuration commands
bgp-policy Apply policy propagated by bgp community string
carrier-delay Specify delay for interface transitions
cdp CDP interface subcommands
channel-group Etherchannel/port bundling configuration
channel-protocol Select the channel protocol (LACP, PAgP)
crypto Encryption/Decryption commands
cts Configure Cisco Trusted Security
dampening Enable event dampening
datalink Interface Datalink commands
default Set a command to its defaults
delay Specify interface throughput delay
description Interface specific description
downshift link downshift feature
exit Exit from interface configuration mode
flow-sampler Attach flow sampler to the interface
flowcontrol Configure flow operation.
help Description of the interactive help system
history Interface history histograms - 60 second, 60 minute and 72 hour
hold-queue Set hold queue depth
ip Interface Internet Protocol config commands
ipv6 IPv6 interface subcommands
isis IS-IS commands
iso-igrp ISO-IGRP interface subcommands
keepalive Enable keepalive
l2protocol-tunnel Tunnel Layer2 protocols
lacp LACP interface subcommands
link Interface link related commands
lldp LLDP interface subcommands
load-interval Specify interval for load calculation for an interface
location Interface location information
logging Configure logging for interface
mac MAC interface commands
macro Command macro
macsec Enable macsec on the interface
mdix Set Media Dependent Interface with Crossover
media-proxy Enable media proxy services
metadata Metadata Application
mka MACsec Key Agreement (MKA) interface configuration
mls mls interface commands
mvr MVR per port configuration
neighbor interface neighbor configuration mode commands
network-policy Network Policy
nmsp NMSP interface configuration
no Negate a command or set its defaults
onep Configure onep settings
ospfv3 OSPFv3 interface commands
pagp PAgP interface subcommands
power Power configuration
priority-queue Priority Queue
queue-set Choose a queue set for this queue
rep Resilient Ethernet Protocol characteristics
rmon Configure Remote Monitoring on an interface
routing Per-interface routing configuration
service-policy Configure CPL Service Policy
shutdown Shutdown the selected interface
small-frame Set rate limit parameters for small frame
snmp Modify SNMP interface parameters
source Get config from another source
spanning-tree Spanning Tree Subsystem
speed Configure speed operation.
srr-queue Configure shaped round-robin transmit queues
storm-control storm configuration
subscriber Subscriber inactivity timeout value.
switchport Set switching mode characteristics
timeout Define timeout values for this interface
topology Configure routing topology on the interface
transmit-interface Assign a transmit interface to a receive-only interface
tx-ring-limit Configure PA level transmit ring limit
udld Configure UDLD enabled or disabled and ignore global UDLD setting
vtp Enable VTP on this interface
fox_3560CX-8XPD(config-if)#speed ?
100 Force 100 Mbps operation
1000 Force 1000 Mbps operation
10000 Force 10000 Mbps operation
2500 Force 2500 Mbps operation
5000 Force 5000 Mbps operation
auto Enable AUTO speed configuration
Duplex cannot be configured on the NBASE-T ports.
fox_3560CX-8XPD(config)#int gi1/0/1
fox_3560CX-8XPD(config-if)#du ?
auto Enable AUTO duplex configuration
full Force full duplex operation
half Force half-duplex operation
fox_3560CX-8XPD(config-if)#int te1/0/7
fox_3560CX-8XPD(config-if)#du ?
% Unrecognized command
As for Auto MDI/MDIX, this switch detects the cable type regardless of the speed at which the interface is running.
When speed autonegotiation is used, the administrator can explicitly specify which speeds may be advertised. In fairness, the same setting is available on the gigabit interfaces as well.
fox_3560CX-8XPD(config-if)#spe au ?
100 Include 100 Mbps in auto-negotiation advertisement
1000 Include 1000 Mbps in auto-negotiation advertisement
10000 Include 10000 Mbps in auto-negotiation advertisement
2500 Include 2500 Mbps in auto-negotiation advertisement
5000 Include 5000 Mbps in auto-negotiation advertisement
We connected interfaces Te1/0/7 and Te1/0/8 to each other with a patch cord. The output of several diagnostic commands is shown below.
fox_3560CX-8XPD#sho run int te1/0/7
Building configuration...
Current configuration : 59 bytes
!
interface TenGigabitEthernet1/0/7
load-interval 30
end
fox_3560CX-8XPD#sho run int te1/0/8
Building configuration...
Current configuration : 71 bytes
!
interface TenGigabitEthernet1/0/8
load-interval 30
speed 5000
end
fox_3560CX-8XPD#sho int status
Port Name Status Vlan Duplex Speed Type
Gi1/0/1 notconnect 1 auto auto 10/100/1000BaseTX
Gi1/0/2 notconnect 1 auto auto 10/100/1000BaseTX
Gi1/0/3 notconnect 1 auto auto 10/100/1000BaseTX
Gi1/0/4 notconnect 1 auto auto 10/100/1000BaseTX
Gi1/0/5 notconnect 1 auto auto 10/100/1000BaseTX
Gi1/0/6 notconnect 1 auto auto 10/100/1000BaseTX
Te1/0/7 connected 1 a-full a-5000 100/1G/2.5G/5G/10GBaseT
Te1/0/8 connected 1 a-full 5000 100/1G/2.5G/5G/10GBaseT
Te1/0/1 notconnect 1 full 10G Not Present
Te1/0/2 notconnect 1 full 10G Not Present
fox_3560CX-8XPD#sho int te1/0/7
TenGigabitEthernet1/0/7 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet, address is 9c57.adb0.3487 (bia 9c57.adb0.3487)
MTU 1500 bytes, BW 5000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 5000Mb/s, media type is 100/1G/2.5G/5G/10GBaseT
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:01, output 00:00:01, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
30 second input rate 0 bits/sec, 0 packets/sec
30 second output rate 1000 bits/sec, 1 packets/sec
538 packets input, 102715 bytes, 0 no buffer
Received 325 broadcasts (325 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 325 multicast, 0 pause input
0 input packets with dribble condition detected
1622 packets output, 172091 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
fox_3560CX-8XPD#sho int te1/0/8
TenGigabitEthernet1/0/8 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet, address is 9c57.adb0.3488 (bia 9c57.adb0.3488)
MTU 1500 bytes, BW 5000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 5000Mb/s, media type is 100/1G/2.5G/5G/10GBaseT
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:01, output 00:00:09, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
30 second input rate 0 bits/sec, 0 packets/sec
30 second output rate 0 bits/sec, 0 packets/sec
1626 packets input, 172811 bytes, 0 no buffer
Received 1413 broadcasts (1397 multicasts)
0 runts, 0 giants, 0 throttles
4 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 1397 multicast, 0 pause input
0 input packets with dribble condition detected
561 packets output, 104187 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
fox_3560CX-8XPD#
Before moving on to the load tests proper, we decided to connect a PoE-capable access point to interface Te1/0/7 and share some diagnostic output with our readers.
fox_3560CX-8XPD#sho power inline tenGigabitEthernet 1/0/7
Interface Admin Oper Power Device Class Max
(Watts)
--------- ------ ---------- ------- ------------------- ----- ----
Te1/0/7 auto on 15.4 Ieee PD 4 30.0
Interface AdminPowerMax AdminConsumption
(Watts) (Watts)
---------- --------------- --------------------
Te1/0/7 30.0 15.4
fox_3560CX-8XPD#sho lldp ne de
fox_3560CX-8XPD#sho lldp ne detail
------------------------------------------------
Local Intf: Te1/0/7
Chassis id: b8ec.a3ac.5c19
Port id: 1
Port Description: UPLINK
System Name: WAC6103D-I
System Description:
Linux
Time remaining: 118 seconds
System Capabilities: B,W,R
Enabled Capabilities: B,W,R
Management Addresses:
IP: 192.168.1.2
Auto Negotiation - not supported
Physical media capabilities - not advertised
Media Attachment Unit type - not advertised
Vlan ID: - not advertised
Total entries displayed: 1
fox_3560CX-8XPD(config)#int te1/0/7
fox_3560CX-8XPD(config-if)#po
fox_3560CX-8XPD(config-if)#power ?
inline Inline power configuration
fox_3560CX-8XPD(config-if)#power i
fox_3560CX-8XPD(config-if)#power inline ?
auto Automatically detect and power inline devices
consumption Configure the inline device consumption
never Never apply inline power
police Police the power drawn on the port
port Configure Port Power Level
static High priority inline power interface
This concludes the section on configuring the multigigabit interfaces.
Testing
In this section we present performance results for the switch using not only the multigigabit interfaces but also the "standard" ports. An IXIA hardware traffic generator was used. We started by measuring the performance (throughput, latency, and jitter) of the 3560CX-8XPD while switching between its Gigabit Ethernet interfaces, with frames of various sizes. No packet loss was observed during these tests (except in the IPv6 routing test), so we decided not to include that graph in the article.
Since the Cisco Catalyst 3560CX-8XPD can not only switch Ethernet frames but also route IP packets, we did not leave that capability untested.
As the graphs above show, the device performs practically identically whether it is doing L2 or L3 forwarding.
The next step was to measure the switch's performance with the traffic generator attached to its optical 10GE ports, again in both L2 and L3 modes.
The switch under test can route not only IPv4 but also IPv6 traffic, and naturally we did not overlook that capability either.
It is worth noting that when routing 72-byte packets we observed a packet loss of 0.029%. The loss is small, but we still felt it worth mentioning.
We have arrived at perhaps the most interesting part of this section: measuring the switch's performance through its NBASE-T ports. The traffic generator was attached to the switch's optical 10GE interfaces, and the multigigabit interfaces were connected to each other with a 0.5 m patch cord. The graphs below show throughput, latency, and jitter for every speed the interfaces support. When plotting latency against frame size we naturally took into account that in this topology each frame traverses the switch twice.
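The double-traversal correction can be sketched as follows (our reconstruction of the arithmetic; the authors' actual processing scripts were not published): subtract the generator's direct-connect baseline latency, then halve the remainder.

```python
def per_pass_latency_us(measured_us, baseline_us):
    """Latency contributed by one traversal of the switch, in microseconds.

    measured_us: latency seen by the generator's 10GE test ports with
                 the switch in the path (the frame crosses it twice).
    baseline_us: latency with the generator's ports cabled directly.
    """
    if measured_us < baseline_us:
        raise ValueError("measured latency is below the direct-connect baseline")
    return (measured_us - baseline_us) / 2.0

# Hypothetical figures, purely for illustration:
print(per_pass_latency_us(12.4, 2.0))
```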
To round off this section, we present the same measurements taken without the switch, i.e. with the traffic generator's ports connected directly to each other.
This completes the testing section; let us move on to the conclusions.
Conclusion
In this article we examined copper multigigabit interfaces using the compact Cisco Catalyst 3560CX-8XPD switch, which has two NBASE-T ports, as an example. NBASE-T ports let administrators upgrade existing wired network segments at minimal cost and prepare for next-generation IEEE 802.11ac Wave 2 wireless deployments: NBASE-T significantly increases the throughput of the wired segments without replacing the cabling plant. Because the multigigabit interfaces also support the "standard" 1GE and 10GE speeds, the hardware migration can be as smooth and gradual as possible.
It is also worth noting that new standards with "intermediate" speeds have appeared not only in the world of copper network interfaces with RJ45 (8P8C) connectors but also in optical networking. One example is the high-density Cisco Nexus 3232C switch, which carries 32 fixed 100GE QSFP28 ports supporting optical interface speeds of 10G/25G/40G/50G/100G. But that is another story entirely.
Impact of Transmit Ring Size (tx-ring-limit)
Output interface queues in most software switching platforms contain a software-only component and a FIFO queue shared between the CPU and the outgoing interface. That FIFO queue is usually organized as a ring structure, and its maximum size can be controlled with the tx-ring-limit parameter in Cisco IOS. The impact of that parameter should be obvious: larger tx-ring-limit values cause more delay and jitter, reducing the quality of service of time-critical applications (like voice-over-IP) over low-speed interfaces. A series of tests performed in a small, tightly controlled test bed quantifies the actual impact.
Overview
The default value of tx-ring-limit is a good compromise between the latency/jitter requirements of medium-speed links and the increased CPU utilization (due to I/O interrupts) caused by low tx-ring-limit values. On low-speed links (128 kbps and below), tx-ring-limit should be decreased to 1.
Test Bed
Two 2800-series routers were connected with a back-to-back serial link, one of them generating the clock. PPP encapsulation was used on the serial link. A traffic-generating node was connected to the Ethernet port of one of the routers to generate the background load. IP SLA using ICMP echo packets was started on the same router to measure response time and jitter of a simple request-response application (ICMP ping).
Router Configurations
Minimum router configurations were used with no dynamic routing. The relevant parts of router configurations are displayed below:
Configuration of the IP SLA originating router
hostname a1
!
ip cef
!
class-map match-all Echo
 match protocol icmp
!
policy-map EchoPriority
 class Echo
  priority 64
 class class-default
  fair-queue
!
interface FastEthernet0/0
 ip address 10.0.0.5 255.255.255.0
!
interface Serial0/1/0
 bandwidth 512
 ip address 172.16.1.129 255.255.255.252
 encapsulation ppp
 load-interval 30
 service-policy output EchoPriority
!
end
The second router has an almost identical configuration (with different IP addresses).
Load Generation
UDP flooding implemented in PERL was used to generate the background load and saturate the WAN interface. Two sets of measurements were performed. In the first test, a continuous flood of constantly-spaced fixed-size packets sent to a single UDP port was generated (similar to constant bit rate traffic). This traffic stream generated a single conversation in the fair queuing model used on the WAN interface.
Cisco IOS uses fair queuing as soon as a service policy including a queuing action is configured on an interface.
The second test flooded the WAN link with variable-sized packets sent at a fixed interval to random destination UDP ports. The generated bandwidth varied widely due to the random packet sizes, and the traffic stream created hundreds of conversations in the fair queuing structure. In both cases, CPU utilization was measured on a1 and a2 to verify that the CPU load did not exceed 50% (high CPU load could skew the jitter measurements).
a1#show proc cpu | inc CPU
CPU utilization for five seconds: 8%/6%; one minute: 8%; five minutes: 5%
a2#sh proc cpu | inc CPU
CPU utilization for five seconds: 20%/10%; one minute: 12%; five minutes: 8%
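A minimal Python analogue of the load generator described above might look like this (a sketch under our own assumptions about packet sizes and port ranges; the original tool was written in Perl and was not published):

```python
import random
import socket
import time

def make_packet(fixed):
    """Build (payload, dst_port) for one flood packet.

    fixed=True models the first test: constant-size packets to a single
    UDP port (one fair-queuing conversation). fixed=False models the
    second test: random sizes and random destination ports (hundreds of
    conversations, widely varying bandwidth).
    """
    if fixed:
        return b"\x00" * 512, 9                    # discard port; one conversation
    size = random.randint(64, 1400)                # assumed size range
    return b"\x00" * size, random.randint(1024, 65535)

def udp_flood(dst_ip, duration_s, fixed=True, pps=100):
    """Send flood packets at a fixed inter-packet interval."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        payload, port = make_packet(fixed)
        sock.sendto(payload, (dst_ip, port))
        time.sleep(1.0 / pps)
    sock.close()
```

Calling `udp_flood("10.0.0.1", 60)` would saturate a slow WAN link with the constant-bit-rate stream; the destination address and rate are hypothetical.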
Response Time and Jitter Measurement
An IP SLA probe was configured on one of the routers generating small ICMP ECHO packets that were sent over the WAN link. The ICMP traffic is classified as priority traffic in the service policy attached to the WAN interface ensuring that the ICMP packets enter the hardware queue prior to any packets generated by the UDP flood. The measured delay and jitter is thus solely the result of the hardware queue between the software fair queuing structures and the interface hardware.
ip sla 50000
 icmp-jitter 172.16.1.130
 timeout 1000
 threshold 500
 frequency 2
 history hours-of-statistics-kept 1
 history distributions-of-statistics-kept 2
The IP SLA probe was started after the test load had reached a steady state (saturated WAN interface), and the aggregated SLA statistics were inspected while the load was still present. The show ip sla statistics aggregated command was used to inspect the statistics, and the RTT and source-to-destination jitter values were collected.
a1#show ip sla statistics aggregated
Round Trip Time (RTT) for Index 50000
Start Time Index: 13:15:22.851 UTC Fri Apr 18 2008
Type of operation: icmpJitter
RTT Values:
 Number Of RTT: 387
 RTT Min/Avg/Max: 6/15/31
Latency one-way time:
 Number of Latency one-way Samples: 0
 Source to Destination Latency one way Min/Avg/Max: 0/0/0
 Destination to Source Latency one way Min/Avg/Max: 0/0/0
Jitter Time:
 Number of Jitter Samples: 334
 Source to Destination Jitter Min/Avg/Max: 1/6/23
 Destination to Source Jitter Min/Avg/Max: 1/1/1
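A Min/Avg/Max jitter triplet like the one above can be derived from consecutive RTT samples roughly as follows (a sketch of one common definition; IP SLA's internal algorithm may differ in detail):

```python
def jitter_samples(rtts_ms):
    """Jitter as the absolute difference between consecutive RTT samples."""
    return [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]

def min_avg_max(values):
    """The triplet reported by IP SLA for a set of samples."""
    return min(values), sum(values) / len(values), max(values)

rtts = [6, 9, 8, 14, 12, 31]           # hypothetical RTT samples in ms
print(min_avg_max(jitter_samples(rtts)))
```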
Test Results
The tests were performed at line speeds of 128 and 512 kbps with different tx-ring-limit settings. The tx-ring-limit values were changed at both ends of the WAN link. The test result values are triplets: minimum, average and maximum measured value as reported by IP SLA.
This article, written by Ivan Pepelnjak in the early 2000s, was originally published on the CT3 wiki, which became unreachable in 2019. The text was retrieved from an Internet Archive snapshot, updated, and republished on ipSpace.net.
Tx/Rx bandwidth limit on a Cisco 2950?
Pardon me) for the immodest question)) but I've never had long-term relationships with Cisco switches before =)) How do I configure a bandwidth limit on a specific port? For example, port Fa0/24 (Rx 128 kbps)(Tx 512 kbps)
Architector120
18.01.16 16:41:16 MSK
conf t
int fa0/24
 traffic-shape   # dunno, don't remember the syntax
 rate-limit      # same here
end
redixin ★★★★
( 18.01.16 16:43:38 MSK )
In reply to: comment by redixin 18.01.16 16:43:38 MSK
Then again, that may not fly on a 2950
redixin ★★★★
( 18.01.16 16:46:10 MSK )
hmm... here's the list of all commands available on port Fa0/24
cisco(config)#interface fastEthernet 0/24
cisco(config-if)#?
Interface configuration commands:
arp Set arp type (arpa, probe, snap) or timeout
auto Configure Automation
bandwidth Set bandwidth informational parameter
carrier-delay Specify delay for interface transitions
cdp CDP interface subcommands
channel-group Etherchannel/port bundling configuration
channel-protocol Select the channel protocol (LACP, PAgP)
default Set a command to its defaults
delay Specify interface throughput delay
description Interface specific description
dot1x Interface Config Commands for 802.1x
duplex Configure duplex operation.
exit Exit from interface configuration mode
fair-queue Enable Fair Queuing on an Interface
help Description of the interactive help system
hold-queue Set hold queue depth
ip Interface Internet Protocol config commands
keepalive Enable keepalive
lacp LACP interface subcommands
load-interval Specify interval for load calculation for an interface
logging Configure logging for interface
mac MAC interface commands
mac-address Manually set interface MAC address
mls mls interface commands
mvr MVR per port configuration
no Negate a command or set its defaults
pagp PAgP interface subcommands
random-detect Enable Weighted Random Early Detection (WRED) on an Interface
rmon Configure Remote Monitoring on an interface
service-policy Configure QoS Service Policy
shutdown Shutdown the selected interface
snmp Modify SNMP interface parameters
spanning-tree Spanning Tree Subsystem
speed Configure speed operation.
storm-control storm configuration
switchport Set switching mode characteristics
timeout Define timeout values for this interface
transmit-interface Assign a transmit interface to a receive-only interface
tx-ring-limit Configure PA level transmit ring limit
udld Configure UDLD enabled or disabled and ignore global UDLD setting
Architector120
( 18.01.16 17:11:43 MSK ) topic starter
In reply to: comment by Architector120 18.01.16 17:11:43 MSK
Well yeah, looks like it's one of those switches that just can't do this at all. I could be wrong
redixin ★★★★
( 18.01.16 17:15:05 MSK )
In reply to: comment by redixin 18.01.16 17:15:05 MSK
Some sites say you need to edit the system config, but there's no detailed description of the procedure
Architector120
( 18.01.16 17:20:25 MSK ) topic starter
In reply to: comment by Architector120 18.01.16 17:11:43 MSK
"bandwidth Set bandwidth informational parameter": this parameter is responsible for the interface's bandwidth; read the manual on that parameter.
Understanding and Tuning the tx-ring-limit Value
Introduction
This document discusses the function of a hardware transmit ring and the purpose of the tx-ring-limit command on ATM router interface hardware that supports per-virtual circuit (VC) queueing. Cisco router interfaces configured with service policies store packets for an ATM VC in one of two sets of queues depending on the congestion level of the VC:
| Queue | Location | Queueing Methods | Service Policies Apply | Command to Tune |
|---|---|---|---|---|
| Hardware queue or transmit ring | Port adapter or network module | FIFO only | No | tx-ring-limit |
| Layer-3 queue | Layer-3 processor system or interface buffers | N/A | Yes | Varies with queueing method: vc-hold-queue or queue-limit |
Prerequisites
Requirements
There are no specific requirements for this document.
Components Used
This document is not restricted to specific software and hardware versions.
Conventions
Refer to Cisco Technical Tips Conventions for more information on document conventions.
Understanding Particles
Before discussing the transmit ring, we first need to understand what a particle is. A particle forms the basic building block of packet buffering on many platforms, including the Cisco 7200 router series and the versatile interface processor (VIP) on the Cisco 7500 router series. Depending on the packet length, Cisco IOS® software uses one or more particles to store a packet.

Let's look at an example. When receiving a 1200-byte packet, IOS retrieves the next free particle and copies the packet data into it. When the first particle is filled, IOS moves to the next free particle, links it to the first, and continues copying data into this second particle. Upon completion, the 1200 bytes of the packet are stored in three discontiguous pieces of memory that IOS logically treats as part of a single packet buffer.

IOS particle size varies from platform to platform, but all particles within a given pool are the same size. This uniformity simplifies the particle management algorithms and contributes to efficient memory use.
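The particle chunking just described can be sketched as follows. The 512-byte particle size is an assumption for illustration (as the text notes, actual sizes are platform-specific); with it, the 1200-byte example packet occupies exactly three linked particles:

```python
import math

PARTICLE_SIZE = 512  # assumed for illustration; real sizes vary per platform

def particles_needed(packet_len, particle_size=PARTICLE_SIZE):
    """Number of fixed-size particles required to hold a packet."""
    return math.ceil(packet_len / particle_size)

def store_packet(data, particle_size=PARTICLE_SIZE):
    """Split a packet into particle-sized chunks; the chunks are kept
    in order, modeling the linked list that forms one packet buffer."""
    return [data[i:i + particle_size]
            for i in range(0, len(data), particle_size)]

chunks = store_packet(b"\x00" * 1200)
print(len(chunks), [len(c) for c in chunks])  # 3 particles: 512, 512, 176
```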
Understanding Buffer Rings
Along with public and private interface pools, Cisco IOS creates special buffer control structures called rings. Cisco IOS and interface controllers use these rings to control which buffers are used to receive and transmit packets to the media. The rings themselves consist of media-controller-specific elements that point to individual packet buffers elsewhere in I/O memory. Each interface has a pair of rings: a receive ring for receiving packets and a transmit ring for transmitting packets. The size of the rings can vary with the interface controller. In general, the size of the transmit ring is based on the bandwidth of the interface or VC and is a power of two (Cisco Bug ID CSCdk17210).
| Line Rate (Mb/s) | 2 | 10 | 20 | 30 | 40 | … |
|---|---|---|---|---|---|---|
| txcount | 2 | 4 | 8 | 16 | 32 | 64 |
Note: On the 7200 series platform, the transmit ring packet buffers come from the receive ring of the originating interface for a switched packet or from a public pool if the packet was originated by IOS. They are deallocated from the transmit ring and returned to their original pool after the payload data is transmitted.
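The power-of-two sizing rule the table illustrates can be mimicked with a small helper (our illustration of the rounding step only; the actual rate-to-size mapping is controller-specific):

```python
def tx_ring_size(bandwidth_units):
    """Round a bandwidth-derived count up to the next power of two,
    mirroring the txcount progression (2, 4, 8, ..., 64) above.
    The input units are hypothetical; drivers derive them per platform."""
    size = 2                      # smallest ring in the table
    while size < bandwidth_units:
        size *= 2
    return size

print([tx_ring_size(n) for n in (2, 3, 10, 33)])
```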
PA-A3 Architecture Overview
To ensure high forwarding performance, the PA-A3 port adapter uses separate receive and transmit segmentation and reassembly (SAR) chips. Each SAR is supported by its own subsystem of onboard memory to store packets as well as key data structures like the VC table. This memory specifically includes 4 MB of SDRAM, which is chunked into particles. The following table illustrates the number and size of particles on the receive and transmit paths on the PA-A3.
| Ring | Particle Size | Number of Particles |
|---|---|---|
| Receive Ring | 288 bytes | n/a |
| Transmit Ring | 576* bytes | 6000 (144 particles are reserved) |
* The transmit ring’s particle size also is described as being 580 bytes. This value includes the 4-byte ATM core header that travels with the packet inside the router. The sizes in the above table were selected because they are divisible by 48 (the size of a cell’s payload field) and by the cache line size (32 bytes) for maximum performance. They are designed to prevent the SAR from introducing inter-buffer delay when a packet requires multiple buffers. The transmit particle size of 576 bytes also was selected to cover about 90 percent of Internet packets.
Transmit Ring Allocation Scheme on the PA-A3
The PA-A3 driver assigns a default transmit-ring value to each VC. This value varies with the ATM service category assigned to the VC. The following table lists the default values.
| VC Service Category | PA-A3-OC3, T3, E3 Default Transmit Ring Value | PA-A3-IMA Default Transmit Ring Value | PA-A3-OC12 Default Transmit Ring Value | Time of Enforcement |
|---|---|---|---|---|
| VBR-nrt | Based on formula**: (48 x SCR) / (Particle_size x 5). The minimum value is 40, which overrides any calculated value below 40 (very low SCR). Note: SCR is the cell rate with ATM overhead included. | Based on the same formula: (48 x SCR) / (Particle_size x 5), with a minimum of 40. | Based on the formula: average rate (SCR) * 2 * TOTAL_CREDITS / VISIBLE_BANDWIDTH, where TOTAL_CREDITS = 8192 and VISIBLE_BANDWIDTH = 599040. If the result is less than the default of 128, the VC's transmit ring limit is set to 128. | Always |
| ABR | 128 | 128 | N/A | Always* |
| UBR | 40 | 128 | 128 | Only when total credit utilization exceeds 75 percent or the tx_threshold value, as shown in show controller atm. |
* Originally, the PA-A3-OC12 did not implement always-active limiting of VBR-nrt PVCs to the current transmit ring value; Cisco Bug ID CSCdx11084 resolves this issue. ** SCR should be expressed in cells/sec.
Displaying the Current Transmit Ring Values
Originally, the value of the transmit ring was visible only via a hidden command. The show atm vc command now displays the current value. You can also use the debug atm events command to view the VC setup messages between the PA-A3 driver and the host CPU. The following sets of output were captured on a PA-A3 in a 7200 series router. The transmit ring value is displayed as the tx_limit value, which implements the particle buffer quota allocated to a specific VC in the transmit direction. PVC 1/100 is configured as VBR-nrt. Based on an SCR of 3500 kbps, the PA-A3 assigns a tx_limit of 137. To see how this calculation is made, first convert an SCR of 3500 kbps to cells/sec: (3,500,000 bits/sec) * (1 byte / 8 bits) * (1 cell / 53 bytes) = 3,500,000 / (8 * 53) = 8254 cells/sec. With the SCR in cells/sec, apply the formula above to get tx_limit = (48 x 8254) / (576 x 5) = 137.
7200-17(config)#interface atm 4/0
7200-17(config-if)#pvc 1/100
7200-17(config-if-atm-vc)#vbr-nrt 4000 3500 94
7200-17(config-if-atm-vc)#
*Oct 14 17:56:06.886: Reserved bw for 1/100 Available bw = 141500
7200-17(config-if-atm-vc)#exit
7200-17(config-if)#logging
*Oct 14 17:56:16.370: atmdx_setup_vc(ATM4/0): vc:6 vpi:1 vci:100 state:2 config_status:0
*Oct 14 17:56:16.370: atmdx_setup_cos(ATM4/0): vc:6 wred_name:- max_q:0
*Oct 14 17:56:16.370: atmdx_pas_vc_setup(ATM4/0): vcd 6, atm hdr 0x00100640, mtu 4482
*Oct 14 17:56:16.370: VBR: pcr 9433, scr 8254, mbs 94
*Oct 14 17:56:16.370: vc tx_limit=137, rx_limit=47
*Oct 14 17:56:16.374: Created 64-bit VC count
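The tx_limit calculation above can be sketched in Python. This is a hedged sketch of the formula from the table, not the actual driver code; the 576-byte particle size and the minimum of 40 are taken from that table:

```python
# Reproduce the VBR-nrt tx_limit calculation from the debug output above.
# Formula from the table: (48 x SCR) / (Particle_size x 5), minimum 40.

def vbr_nrt_tx_limit(scr_kbps: int, particle_size: int = 576) -> int:
    # Convert SCR from kbps to cells/sec: 53-byte cells, 8 bits per byte.
    scr_cells = (scr_kbps * 1000) // (53 * 8)
    limit = (48 * scr_cells) // (particle_size * 5)
    return max(limit, 40)  # very low SCRs are clamped to the minimum of 40

print(vbr_nrt_tx_limit(3500))  # 137, matching tx_limit=137 in the output above
```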
PVC 1/102 is configured as ABR. The PA-A3 assigns the default ABR tx_limit value of 128. (See the table above.)
7200-17(config-if)#pvc 1/102
7200-17(config-if-atm-vc)#abr ?
                Peak Cell Rate(PCR) in Kbps
  rate-factors  Specify rate increase and rate decrease factors (inverse)
7200-17(config-if-atm-vc)#abr 4000 1000
7200-17(config-if-atm-vc)#
*Oct 14 17:57:45.066: Reserved bw for 1/102 Available bw = 140500
*Oct 14 18:00:11.662: atmdx_setup_vc(ATM4/0): vc:8 vpi:1 vci:102 state:2 config_status:0
*Oct 14 18:00:11.662: atmdx_setup_cos(ATM4/0): vc:8 wred_name:- max_q:0
*Oct 14 18:00:11.662: atmdx_pas_vc_setup(ATM4/0): vcd 8, atm hdr 0x00100660, mtu 4482
*Oct 14 18:00:11.662: ABR: pcr 9433, mcr 2358, icr 9433
*Oct 14 18:00:11.662: vc tx_limit=128, rx_limit=47
*Oct 14 18:00:11.666: Created 64-bit VC counters
PVC 1/101 is configured as UBR. The PA-A3 assigns the default UBR tx_limit value of 40. (See the table above.)
7200-17(config-if)#pvc 1/101
7200-17(config-if-atm-vc)#ubr 10000
7200-17(config-if-atm-vc)#
*Oct 14 17:56:49.466: Reserved bw for 1/101 Available bw = 141500
*Oct 14 17:57:03.734: atmdx_setup_vc(ATM4/0): vc:7 vpi:1 vci:101 state:2 config_status:0
*Oct 14 17:57:03.734: atmdx_setup_cos(ATM4/0): vc:7 wred_name:- max_q:0
*Oct 14 17:57:03.734: atmdx_pas_vc_setup(ATM4/0): vcd 7, atm hdr 0x00100650, mtu 4482
*Oct 14 17:57:03.734: UBR: pcr 23584
*Oct 14 17:57:03.734: vc tx_limit=40, rx_limit=117
*Oct 14 17:57:03.738: Created 64-bit VC counters
The PA-A3 enforces the transmit particle quota at two levels:
- Individual quota on each VBR-nrt and ABR VC — Compares each VC’s tx_count and tx_limit values, and discards subsequent packets when the tx_count exceeds the tx_limit on any one VC. Note that a burst of packets can exceed the transmit ring of a VBR-nrt VC at an instant in time and lead to output drops.
- Overall quota — Considers the tx_threshold value. The PA-A3 allows for larger bursts on UBR VCs by enforcing traffic policing on such VCs only when the total packet buffer usage on the PA-A3 reaches this preset threshold.
Note: If a packet requires multiple particles and the transmit ring is full, the PA-A3 allows a VC to exceed its quota if particles are available. This scheme is designed to accommodate a small burst of packets without output drops.
The show controller atm command displays several counters relevant to transmit credits.
7200-17#show controller atm 4/0
Interface ATM4/0 is up
Hardware is ENHANCED ATM PA - OC3 (155000Kbps)
Framer is PMC PM5346 S/UNI-155-LITE, SAR is LSI ATMIZER II
Firmware rev: G125, Framer rev: 0, ATMIZER II rev: 3
idb=0x622105EC, ds=0x62217DE0, vc=0x62246A00
slot 4, unit 9, subunit 0, fci_type 0x0059, ticks 190386
1200 rx buffers: size=512, encap=64, trailer=28, magic=4
Curr Stats:
VCC count: current=7, peak=7
SAR crashes: Rx SAR=0, Tx SAR=0
rx_cell_lost=0, rx_no_buffer=0, rx_crc_10=0
rx_cell_len=0, rx_no_vcd=0, rx_cell_throttle=0, tx_aci_err=0
Rx Free Ring status: base=0x3E26E040, size=2048, write=176
Rx Compl Ring status: base=0x7B162E60, size=2048, read=1200
Tx Ring status: base=0x3E713540, size=8192, write=2157
Tx Compl Ring status: base=0x4B166EA0, size=4096, read=1078
BFD Cache status: base=0x62240980, size=6144, read=6142
Rx Cache status: base=0x62237E80, size=16, write=0
Tx Shadow status: base=0x62238900, size=8192, read=2143, write=2157
Control data:
  rx_max_spins=3, max_tx_count=17, tx_count=14
  rx_threshold=800, rx_count=0, tx_threshold=4608
  tx bfd write indx=0x4, rx_pool_info=0x62237F20
The following table describes the values used by the PA-A3 to enforce the overall transmit credit scheme:
Value | Description |
---|---|
max_tx_count | High-water mark: the maximum number of transmit particles ever held by the PA-A3 microcode. |
tx_count | Total number of transmit particles currently being held by the PA-A3 microcode. |
Note: The PA-A3 microcode also tracks the tx_count of each VC. When a particle is sent to the PA-A3 microcode from the PA-A3 driver, the tx_count increments by one.
When Should the Transmit Ring Be Tuned?
The transmit ring serves as a staging area for packets in line to be transmitted. The router needs to enqueue a sufficient number of packets on the transmit ring and ensure that the interface driver has packets with which to fill available cell timeslots.
Originally, the PA-A3 driver did not adjust the transmit ring size when a service policy with low latency queueing (LLQ) was applied. With current images, the PA-A3 tunes down the value from the above defaults (Cisco Bug ID CSCds63407) to minimize queueing-related delay.
The primary reason to tune the transmit ring is to reduce latency caused by queueing. When tuning the transmit ring, consider the following:
- On any network interface, queueing forces a choice between latency and the amount of burst that the interface can sustain. Larger queue sizes sustain longer bursts while increasing delay. Tune the size of a queue when you feel the VC’s traffic is experiencing unnecessary delay.
- Consider the packet size. Configure a tx-ring-limit value that accommodates four packets. For example, a 1500-byte packet occupies three 576-byte particles, so set a tx-ring-limit value of 12 = (4 packets) * (3 particles).
- Ensure the transmit credit is large enough to support one MTU-sized packet and/or the number of cells equal to the maximum burst size (MBS) for a VBR-nrt PVC.
- Configure a low value on low-bandwidth VCs, such as one with a 128 kbps SCR. For example, on a low-speed VC with an SCR of 160 kbps, a tx-ring-limit of ten is relatively high and can introduce significant latency (hundreds of milliseconds) in the driver-level queue. Tune the tx-ring-limit down to its minimum value in this configuration.
- Configure higher values for high-speed VCs. Selecting a value of less than four may inhibit the VC from transmitting at its configured rate if the PA-A3 implements back pressure too aggressively and the transmit ring does not have a ready supply of packets waiting to be transmitted. Ensure that a low value does not affect VC throughput. (See Cisco Bug ID CSCdk17210.)
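The four-packet guideline above can be turned into a small helper. This is an illustrative sketch, not an official sizing tool; the 576-byte particle size comes from the table earlier, and the lower bound of 3 comes from the range accepted by the tx-ring-limit command:

```python
# Rough tx-ring-limit sizing per the "four packets" guideline above.
import math

PARTICLE = 576   # PA-A3 transmit particle size in bytes
MIN_RING = 3     # lowest value the tx-ring-limit command accepts

def suggested_tx_ring_limit(packet_size: int, packets: int = 4) -> int:
    """Particles needed to stage `packets` packets of `packet_size` bytes."""
    per_packet = math.ceil(packet_size / PARTICLE)
    return max(packets * per_packet, MIN_RING)

print(suggested_tx_ring_limit(256))  # 4: four 256-byte packets, one particle each
```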
In other words, the size of the transmit ring needs to be small enough to avoid introducing latency due to queueing, and it needs to be large enough to avoid drops and a resulting impact to TCP-based flows.
An interface first removes the packets from the layer-3 queueing system and then queues them on the transmit ring. Service policies apply only to packets in the layer-3 queues and are transparent to the transmit ring.
Queueing on the transmit ring introduces a serialization delay that is directly proportional to the depth of the ring. An excessive serialization delay can impact latency budgets for delay-sensitive applications such as voice. Thus, Cisco recommends reducing the size of the transmit ring for VCs carrying voice. Select a value based on the amount of serialization delay, expressed in seconds, introduced by the transmit ring. Use the following formula:
Delay = ((P * 8) * D) / S

P = packet size in bytes (multiply by eight to convert to bits)
D = transmit-ring depth
S = speed of the VC in bps
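Evaluated in Python, with D taken here as the number of packets of size P staged on the ring (a simplifying assumption), a 160 kbps VC holding ten 250-byte packets works out as:

```python
def tx_ring_delay(packet_bytes: int, ring_depth: int, vc_bps: int) -> float:
    """Serialization delay in seconds: ((P * 8) * D) / S."""
    return (packet_bytes * 8 * ring_depth) / vc_bps

# Ten 250-byte packets staged ahead of a new arrival on a 160 kbps VC:
delay_ms = tx_ring_delay(250, 10, 160_000) * 1000
print(delay_ms)  # 125.0 ms of queueing delay
```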
Note: IP packets on the Internet are typically one of three sizes: 64 bytes (for example, control messages), 1500 bytes (for example, file transfers), or 256 bytes (all other traffic). These values produce a typical overall Internet packet size of 250 bytes.
Note: The following table summarizes the advantages and disadvantages of larger or smaller transmit ring sizes:
Size of Transmit Ring | Advantage | Disadvantage |
---|---|---|
High Value | Recommended for data VCs to accommodate bursts. | Not recommended for voice VCs. Can introduce increased latency and jitter. |
Low Value | Recommended for voice VCs to reduce delay due to queueing and jitter. | Not recommended for relatively high-speed VCs. Can introduce reduced throughput if tuned to such a low value that no packets are ready to be sent once the wire is free. |
Use the tx-ring-limit command in VC configuration mode to tune the size of the transmit ring.
7200-1(config-subif)#pvc 2/2
7200-1(config-if-atm-vc)#?
ATM virtual circuit configuration commands:
  abr                Enter Available Bit Rate (pcr)(mcr)
  broadcast          Pseudo-broadcast
  class-vc           Configure default vc-class name
  default            Set a command to its defaults
  encapsulation      Select ATM Encapsulation for VC
  exit-vc            Exit from ATM VC configuration mode
  ilmi               Configure ILMI management
  inarp              Change the inverse arp timer on the PVC
  no                 Negate a command or set its defaults
  oam                Configure oam parameters
  oam-pvc            Send oam cells on this pvc
  protocol           Map an upper layer protocol to this connection.
  random-detect      Configure WRED
  service-policy     Attach a policy-map to a VC
  transmit-priority  set the transmit priority for this VC
  tx-ring-limit      Configure PA level transmit ring limit
  ubr                Enter Unspecified Peak Cell Rate (pcr) in Kbps.
  vbr-nrt            Enter Variable Bit Rate (pcr)(scr)(bcs)
7200-1(config-if-atm-vc)#tx-ring-limit ?
  <3-6000>  Number (ring limit)
Use the show atm vc command to display the currently configured value.
7200-1#show atm vc
VC 3 doesn't exist on interface ATM3/0

ATM5/0.2: VCD: 3, VPI: 2, VCI: 2
VBR-NRT, PeakRate: 30000, Average Rate: 20000, Burst Cells: 94
AAL5-LLC/SNAP, etype:0x0, Flags: 0x20, VCmode: 0x0
OAM frequency: 0 second(s)
PA TxRingLimit: 10
InARP frequency: 15 minutes(s)
Transmit priority 2
InPkts: 0, OutPkts: 0, InBytes: 0, OutBytes: 0
InPRoc: 0, OutPRoc: 0
InFast: 0, OutFast: 0, InAS: 0, OutAS: 0
InPktDrops: 0, OutPktDrops: 0
CrcErrors: 0, SarTimeOuts: 0, OverSizedSDUs: 0
OAM cells received: 0
OAM cells sent: 0
Status: UP
In addition, use the show atm pvc vpi/vci command to view both the current transmit and receive ring limits. The following output was captured on a 7200 Series router running Cisco IOS Software Release 12.2(10).
viking#show atm pvc 1/101
ATM6/0: VCD: 2, VPI: 1, VCI: 101
UBR, PeakRate: 149760
AAL5-LLC/SNAP, etype:0x0, Flags: 0xC20, VCmode: 0x0
OAM frequency: 0 second(s), OAM retry frequency: 1 second(s)
OAM up retry count: 3, OAM down retry count: 5
OAM Loopback status: OAM Disabled
OAM VC state: Not Managed
ILMI VC state: Not Managed
VC TxRingLimit: 40 particles
VC Rx Limit: 800 particles
Impact of Very Small tx-ring-limit Values
On the transmit path, the host CPU transfers the payload from the host buffers to the local particle buffers on the PA-A3. The firmware running on the PA-A3 caches several buffer descriptors and frees them in a group. During the caching period, the PA-A3 does not accept new packets even though the contents of the local memory have been transmitted on the physical wire. The purpose of this scheme is to optimize overall performance. Thus, when configuring a non-default tx-ring-limit value, consider the buffer descriptor return delay.
In addition, if you configure a tx-ring-limit value of one, given a particle size of 576 bytes, a 1500-byte packet is dequeued as follows:
- The PA-A3 driver queues the first particle on the transmit ring and remembers that this packet is stored in two other memory particles.
- The next time the transmit ring is empty, the second particle of the packet is placed on the transmit ring.
- When the transmit ring empties again, the third particle is placed on the transmit ring.
Even though the transmit ring consists of only one 576-byte particle, the worst-case latency through the transmit ring is still MTU/port-speed.
Known Issues
When the tx-ring-limit command is applied to a VC through a vc-class statement, the PA-A3 does not apply the configured value. Confirm this result by displaying the current value with the show atm vc detail command. Tuning the transmit ring through a vc-class was implemented in Cisco IOS Software Release 12.1 (Cisco Bug ID CSCdm93064). Cisco Bug ID CSCdv59010 resolves a related problem in certain versions of Cisco IOS Software Release 12.2: when you apply the tx-ring-limit command to an ATM PVC through the vc-class and class-vc command pairs, the transmit ring size is not modified. Confirm this result with the show atm vc detail command after applying the command.
When added to a PVC on a PA-A3 in a Cisco 7200 series router running Cisco IOS Software Release 12.2(1), the tx-ring-limit command is duplicated, as shown below (Cisco Bug ID CSCdu19350).
interface ATM1/0.1 point-to-point
 description dlci-101, cr3640
 ip unnumbered Loopback0
 pvc 0/101
  tx-ring-limit 3
  tx-ring-limit 3
The condition is harmless and does not affect the operation of the router.
Cisco bug ID CSCdv71623 resolves a problem with output drops on a multilink PPP bundle interface when the traffic rate is well below the line rate. This problem was seen in CSCdv89201 on an ATM interface with a tx-ring-limit value greater than five. The problem becomes particularly apparent when fragmentation is disabled or when the link weights (fragment size limits) are large, as is common on higher-speed links like T1s and E1s, and the data traffic consists of a mix of small and large packets. Enabling fragmentation and using a small fragment size (set by the ppp multilink fragment delay interface configuration command) improves operation significantly. However, before using this as a workaround, verify that your router has sufficient processing capacity to support these high levels of fragmentation without overloading the system CPU.
Cisco bug ID CSCdw29890 resolves a problem with the tx-ring-limit command being accepted by the CLI for ATM PVC bundles, but not taking effect. However, you do not normally need to change the tx-ring-limit on ATM PVC bundles. On a single PVC, reducing the ring size effectively moves all the transmit buffering to a QoS-controlled queue, so an arriving priority packet is transmitted immediately, minimizing delay on low-speed interfaces. With ATM PVC bundles, cells from packets of all the member VCs are always sent simultaneously (and interleaved), so the delay is minimized automatically.
Tuning the tx-ring-limit on 3600 and 2600 Routers
Current Cisco IOS software images support tuning the transmit ring on the ATM network modules for Cisco 2600 and 3600 series routers (Cisco Bug ID CSCdt73385). The current value appears in the show atm vc output.
Related Information
- More ATM Information
- Tools and Resources — Cisco Systems
- Technical Support & Documentation — Cisco Systems