SOFTWARE DEFINED NETWORKING: Data Plane

The data plane handles incoming datagrams (on the wire, fiber, or in wireless media) through a series of link-level operations that collect the datagram and perform basic sanity checks. A well-formed (i.e., correct) datagram[6] is then processed in the data plane by performing lookups in the FIB table (or tables, in some implementations) programmed earlier by the control plane. This is sometimes referred to as the fast path for packet processing, because it needs no further interrogation other than identifying the packet's destination using the preprogrammed FIB. The one exception to this processing occurs when packets cannot be matched to those rules, such as when an unknown destination is detected; these packets are sent to the route processor, where the control plane can further process them using the RIB.

It is important to understand that FIB tables can reside in a number of forwarding targets, depending on the network element design: software; hardware-accelerated software (GPU/CPU, as exemplified by Intel or ARM); commodity silicon (NPU, as exemplified by Broadcom, Intel, or Marvell in the Ethernet switch market); FPGAs; specialized silicon (ASICs like the Juniper Trio); or any combination of these[7].

The software path in this exposition is exemplified by the CPU-driven forwarding of a modern dedicated network element (e.g., a router or switch), which trades a processor-intensive lookup for the seemingly limitless table storage of processor memory (whether the lookup happens in the kernel or in user space is a vendor-specific design decision, bound by the characteristics and infrastructure of the host operating system). Its counterpart in the modern compute environment, the hypervisor-based switch or bridge, has many of the optimizations (and some of the limitations) of hardware forwarding models.

Historically, lookups in hardware tables have delivered much higher packet-forwarding performance and have therefore dominated network element designs, particularly for higher-bandwidth elements. However, recent advances in the I/O processing of generic processors, spurred on by the growth and innovation in cloud computing, are giving purpose-built designs, particularly in the mid-to-low performance ranges, quite a run for their money.

Hardware forwarding designs differ across a variety of factors, including (board and rack) space, budget, power utilization, and throughput[8] targets. These can lead to differences in the type (speed, width, size, and location) of memory, as well as in the operation budget (the number, sequence, or type of operations performed on the packet) available to maintain forwarding at line rate (i.e., close to the maximum signaled or theoretical throughput of an interface) for a specific target packet size (or blend of sizes). Ultimately, this leads to differences in forwarding feature support and forwarding scale (e.g., the number of forwarding entries and tables) among the designs.

The typical actions resulting from the data plane forwarding lookup are forward (and, in special cases such as multicast, replicate), drop, re-mark, count, and queue. Some of these actions may be combined or chained together. In some cases, the forward decision returns a local port, indicating that the traffic is destined for a locally running process such as OSPF or BGP[9].
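To make the fast path concrete, here is a minimal Python sketch of the core decision: a longest-prefix-match lookup against a preprogrammed FIB, with a punt to the route processor on a miss. The FIB contents, interface names, and punt handling are invented for illustration; a real data plane implements the match in TCAM, tries, or hash tables rather than a linear scan, and the punt crosses an internal channel to the control plane.

    import ipaddress

    # Hypothetical FIB programmed by the control plane: prefix -> (action, target).
    # In hardware this would live in TCAM or SRAM tries; a dict suffices here.
    FIB = {
        ipaddress.ip_network("10.0.0.0/8"):     ("forward", "ge-0/0/1"),
        ipaddress.ip_network("10.1.0.0/16"):    ("forward", "ge-0/0/2"),
        ipaddress.ip_network("192.168.1.1/32"): ("local", "host-path"),  # e.g., a BGP session address
    }

    def fib_lookup(dst):
        """Fast-path decision: longest-prefix match, punt on a miss."""
        addr = ipaddress.ip_address(dst)
        matches = [net for net in FIB if addr in net]
        if not matches:
            # FIB miss (e.g., unknown destination): punt to the route
            # processor, where the control plane can consult the RIB.
            return ("punt", "route-processor")
        best = max(matches, key=lambda net: net.prefixlen)  # most-specific wins
        return FIB[best]

    print(fib_lookup("10.1.2.3"))    # ('forward', 'ge-0/0/2') -- the /16 beats the /8
    print(fib_lookup("172.16.0.1"))  # ('punt', 'route-processor') -- no entry

The "local" action in this sketch corresponds to the forward decision returning a local port for traffic destined to a locally running process, which leads to the punt path described next.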
These datagrams take what is referred to as the punt path, whereby they leave the hardware-forwarding path and are forwarded to the route processor over an internal communications channel. This path generally offers relatively low throughput, as it is not designed for high-throughput forwarding of normal traffic; however, some designs simply add an additional path to the internal switching fabric for this purpose, which can result in near-line-rate forwarding within the box.

In addition to the forwarding decision, the data plane may implement some small services/features, commonly referred to as forwarding features (exemplified by Access Control Lists and QoS/policy). In some systems, these features use their own discrete tables, while in others they are implemented as extensions to the forwarding tables (increasing entry width). Different designs can also implement different features and different forwarding operation orders (Figure 2-4), and some orderings may make certain feature operations exclusive of others. With these features, you can (to a small degree) locally alter or preempt the outcome of the forwarding lookup. For example (see the sketch below):

- An access control list entry may specify a drop action for a specific matching flow (note that in an ACL, a wider set of parameters may be involved in the forwarding decision). In its absence, there may have been a legitimate forwarding entry, and the packet would NOT have been dropped.

- A QoS policy can ultimately map a flow to a queue on egress, or re-mark its TOS/COS to normalize service with policies across the network. And, like the ACL, it may mark the packet to be dropped (shaped) regardless of the existing forwarding entry for the destination/flow.

These forwarding features overlap the definition of services in Chapter 7. Arguably, a data plane and a control plane component of these services exist, and their definitions seem to diverge cleanly when we begin to discuss session management, proxying, and large-scale transforms of the datagram header.

As part of the forwarding operation, the data plane elements also have to perform some level of datagram header rewrite.
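As a rough illustration of how forwarding features can locally alter or preempt the lookup result, the sketch below layers a hypothetical ACL and QoS policy around a stubbed FIB decision. Every table entry, field name, and the ACL-before-lookup ordering are assumptions made for the example; as noted above, real designs differ in which features they implement and in what order they run.

    import ipaddress

    # Hypothetical ACL: (src prefix, dst prefix, protocol) -> action. Note the
    # wider set of match parameters compared with the destination-only FIB lookup.
    ACL = [
        ((ipaddress.ip_network("10.9.0.0/16"),
          ipaddress.ip_network("0.0.0.0/0"), "tcp"), "drop"),
    ]

    QOS_POLICY = {
        # DSCP in -> (DSCP out, egress queue): re-mark to normalize service
        # across the network and select a queue on egress.
        46: (46, "priority"),
        0:  (10, "assured"),
    }

    def process(src, dst, proto, dscp):
        # 1. ACL first (this sketch's ordering): a matching drop entry preempts
        #    any legitimate FIB entry for the destination.
        for (src_net, dst_net, p), action in ACL:
            if (ipaddress.ip_address(src) in src_net
                    and ipaddress.ip_address(dst) in dst_net
                    and p == proto and action == "drop"):
                return ("drop", None, None)
        # 2. FIB lookup (the fast path sketched earlier), stubbed out here.
        egress = "ge-0/0/1"
        # 3. QoS policy: re-mark the DSCP and map the flow to an egress queue.
        dscp_out, queue = QOS_POLICY.get(dscp, (dscp, "best-effort"))
        return ("forward", egress, (dscp_out, queue))

    print(process("10.9.1.1", "10.1.2.3", "tcp", 0))  # dropped by the ACL
    print(process("10.8.1.1", "10.1.2.3", "tcp", 0))  # forwarded, re-marked to DSCP 10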
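The header rewrite just mentioned is itself constrained by the per-packet operation budget: at line rate there is often no time to recompute the IPv4 header checksum from scratch after decrementing the TTL, so implementations use the incremental update described in RFC 1141. A minimal sketch of that arithmetic (the sample checksum value is arbitrary):

    def decrement_ttl(ttl, checksum):
        """Decrement the IPv4 TTL and incrementally update the 16-bit header
        checksum (the RFC 1141 technique), avoiding a full recomputation."""
        ttl -= 1
        # The TTL occupies the high byte of its 16-bit header word, so that
        # word drops by 0x0100; the one's-complement checksum therefore rises
        # by 0x0100, with any carry folded back into the low 16 bits.
        checksum += 0x0100
        checksum = (checksum & 0xFFFF) + (checksum >> 16)
        return ttl, checksum

    ttl, cksum = decrement_ttl(64, 0xB861)
    print(ttl, hex(cksum))  # 63 0xb961

A typical companion edit is the MAC-layer rewrite toward the next hop on egress: like the TTL update, it is a small, fixed set of header edits that the design must fit within its per-packet budget.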