Planning for and managing the future of your network
A look at network essentials and network management

Introduction

This white paper is intended for network engineers, administrators, and IT professionals who are responsible for corporate and/or government networks. The purpose of this paper is to provide a look at network design considerations, network basics, security, and best practices in network management. This information will help IT professionals design, implement, and manage networking solutions such as Voice over IP (VoIP), Power over Ethernet (PoE), wireless (WiFi), Virtual Private Networking (VPN), and general security. More than just a white paper on planning your network's expansion, this paper also serves as a reference guide to help you along the way. If you are new to network engineering, this guide will provide a foundation for success. If you're a veteran, it can serve as a reference to which you can refer back.

Network design consideration: It's not about today, it's about tomorrow

Building a network is easy. Building a network that efficiently and productively provides a foundation for fault tolerance, security, and quality of service (QoS) is difficult. Building a network that supports your organization with these core technologies as it grows is even harder. Your network will grow by number of users, but, equally as important, it must also be able to handle new technologies, applications, protocols, and the ever-growing coverage of remote workers. Building a scalable network that is efficient and flexible, while providing a foundation for growth, requires intentional design. A network is like a set of Lego blocks; each component is customized and shaped according to the design of the network engineer.

A good deal of network design involves trying to predict the future of an IT organization, as well as taking into account the associated software requirements, network equipment vendors, and service providers. There is no perfect abstract network design, but a network can always be more efficient, consume less power, and provide more productive data back to the organization. The best network design for your company will differ from that of other companies.

Always consider your network's future needs. What initiatives is your organization planning over the next two years? Think about the impact of those initiatives against the readiness of your network. Whether the initiatives involve business or technology, effective network design requires planning now for the future.

Business needs

A network must be designed to meet the needs of your business. Since each business has unique needs, there is no single template from which to create the perfect network. When considering how a network should be built or upgraded, IT professionals should first consider the specific needs of their business.

Small Consumer Sales Business

This would be a company with a single location and a large sales team that makes use of VoIP phones to keep long-distance charges down. It outsources its CRM to Salesforce so it doesn't have to worry about supporting its customer database in-house. Since the sales team produces a large volume of daily transactions, the accounting department is constantly fulfilling orders and billing customers. The company also hosts its own Web site, which is primarily how prospects find it. In this example, the company doesn't have a large network, but it does have a lot of critical requirements. If the VoIP quality isn't high, the sales team can't close sales, and if the Internet connection is interrupted or slowed, sales staff can't access customer information and prospects can't view the Web site.
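Even a small network like this benefits from continuously checking its critical services. The following is a brief, illustrative Python sketch (not part of the original design guidance) of a simple reachability and latency probe; the host names and ports are placeholders for the company's own web and VoIP endpoints.

# Minimal availability/latency probe for a small business's critical services.
# Host names and ports below are placeholders; substitute your own.
import socket
import time

CHECKS = [
    ("www.example.com", 443),    # public web site (hypothetical)
    ("voip.example.com", 5060),  # VoIP/SIP gateway (hypothetical)
]

def probe(host, port, timeout=3.0):
    """Attempt a TCP connection and return latency in milliseconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

for host, port in CHECKS:
    latency = probe(host, port)
    if latency is None:
        print(f"ALERT: {host}:{port} unreachable")
    else:
        print(f"{host}:{port} reachable in {latency:.1f} ms")

Run from a scheduler every few minutes, a check like this gives early warning that prospects cannot reach the site or that call quality is about to suffer; a full monitoring product adds history, thresholds, and notifications on top of the same idea.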
Medium-sized B2B services business

This company is comprised of several locations in the US and abroad, with employees at remote locations. Having grown organically, it needs to support operations in many locations. The office uses an enterprise VoIP phone system, and some of the office locations are large and include wireless networks so employees can be mobile within the office. In addition, the company hosts its own Exchange server, makes full use of Active Directory, keeps all data in an integrated Oracle database, and manages a large number of virtual servers that support data and operational needs.

The considerations for this company should include flexibility in supporting multiple locations, including single employees who are not located in a physical office. In addition, there are many types of data flowing through the network: VoIP, application, and Internet. The IT admins in this example also need to manage their wireless networks and keep an eye on their virtual servers. This is a fairly complex network with a wide variety of requirements that must be met to ensure the network can handle the company's needs.

Types of data

Consideration of the different types of data that share the network is important when planning infrastructure and capacity. Each type has its own quality requirements, and its integrity must be preserved to ensure that the business isn't adversely impacted as different types of data are layered on the network. VoIP data is often critical to the business and requires stable infrastructure and quality guarantees so calls aren't dropped and quality remains high. However, you wouldn't want to sacrifice data-network speed for VoIP quality. If you host many of your own applications, such as Exchange, SQL, and Active Directory, the same considerations apply.

Mission-critical requirements

Networks must support the critical operations of any business. As companies do more and more business online using SaaS solutions and shared data, a well-performing network is critical. When designing or expanding a network, careful consideration should be given to ensuring there is redundancy and failover for critical services and devices. For example, back up all company data off-site in case of a natural disaster, and deploy a second firewall device in case one fails. Providing for such considerations will help ensure the network can handle unplanned events and maintain company operations.

Governance, risk, and compliance responsibilities

The design and administration of your network will also be influenced by regulatory mandates, information and process risks, and corporate procedures and policies. All organizations have voluntary and mandated obligations and must comply with specific requirements, whether you are part of an educational institution, healthcare company, government agency, or public company. Regardless of the type of company, you will also have to comply with new and updated information security protocols such as IPv6.
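As a small aside on IPv6 readiness, the sketch below is illustrative only: it uses Python's standard ipaddress module and the reserved 2001:db8::/32 documentation prefix rather than a real provider allocation, and shows how per-site subnets can be planned and validated before a transition.

# IPv6 address-planning sketch using the standard library. The /48 shown is from
# the reserved documentation range; substitute your actual allocation.
import ipaddress
import itertools

site_block = ipaddress.ip_network("2001:db8:abcd::/48")

# Carve per-site /64 subnets out of the /48 allocation.
for site, subnet in zip(["HQ", "Branch-1", "Branch-2"],
                        itertools.islice(site_block.subnets(new_prefix=64), 3)):
    print(f"{site}: {subnet}")

# Check whether a host address belongs to the allocation.
host = ipaddress.ip_address("2001:db8:abcd:1::10")
print(host in site_block)  # True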
Network protocols configured for the range of governance, risk-, and compliance-management requirements will help detect and prevent misconduct related to policy- and compliance-based requirements. Properly configured networks help identify and address weaknesses that have yet to be exploited. The network can also analyze trends that may indicate an increase (or decrease) in the likelihood that an adverse event will materialize, and monitor the underlying user activities that drive risk strategies.

A common mistake is to rely on too few sources of information for anything beyond past breaches and attempts to violate security protocols. To correct this, network managers should apply a full suite of management and user controls, as well as monitoring techniques, for each related risk and requirement. Control activities should be designed so that violations trigger automated notifications based on threshold conditions and business rules (a brief illustrative sketch appears at the end of this subsection). Management will most likely use human judgment to determine if these violations represent actual issues of interest, but the trigger is an important first step. Triggers can be embedded in all types of controls: transaction controls, access controls, master data controls, configuration controls, and other network operational controls.

Questions to ask when developing network controls include:
• How will we know if this control is violated?
• What information sources might be used to indicate future violations?
• Who should be informed if the control fails?
• What will the follow-up process entail?

Monitoring activities are intended to determine whether the internal control and compliance regime is designed and operating effectively. In some automated systems, control activities and monitoring activities are essentially blended together, so that control performance is actually the control test. Any deficiency, whether minor, significant, or material, should be logged in the system so that trends can be identified.

Some specific external compliance mandates involve network resources as a source of related failures, violations, reports, and detective and preventive controls; these include Sarbanes-Oxley, PCI-DSS, and HIPAA. It's important to stress the absolute necessity of internal governance over information management (including processes for quality assurance, testing, auditing, monitoring, and risk assessment), which offers a proven way to ensure a reliable and effective network. Violations and potential violations of regulations and legal requirements continually surface in the IT department, so it's best to stay in front of them. If you are interested in further reading on governance, risk, and compliance, the non-profit Open Compliance and Ethics Group is a leading resource; read more at OCEG.org.

Sarbanes-Oxley Act

The Sarbanes-Oxley Act of 2002 is a United States federal law enacted on July 30, 2002. The law requires public companies to provide stronger transparency in financial and accounting systems, which places pressure on IT departments to provide accurate, real-time transaction reports to management. The bill was enacted in response to a number of major corporate and accounting scandals, including those affecting Enron, Tyco International, Adelphia, Peregrine Systems, and WorldCom. These scandals cost investors billions of dollars when share prices collapsed, and also shook public confidence in the nation's securities markets.
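Picking up the threshold-triggered notifications discussed above, here is a small, illustrative Python sketch. The log location, record format, and threshold are assumptions to be adapted to your own controls; the point is simply that a violation condition fires an automated notification that a human can then evaluate.

# Illustrative threshold-triggered control notification. It counts failed-login
# records per source address in a syslog-style file and raises an alert when a
# business-rule threshold is exceeded. Path, format, and threshold are assumptions.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"      # hypothetical location
THRESHOLD = 10                      # business rule: alert at 10+ failures per source

pattern = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
failures = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            failures[match.group(1)] += 1

for source, count in failures.items():
    if count >= THRESHOLD:
        # In practice this would raise a ticket, send email, or fire an SNMP trap.
        print(f"CONTROL VIOLATION: {count} failed logins from {source}")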
PCI-DSS

If your company processes credit cards, you may be subject to the Payment Card Industry Data Security Standard (PCI-DSS). The PCI-DSS standard was developed by the major credit card companies as a guideline to help organizations that process card payments prevent credit card fraud. The standard mandates that a company processing, storing, or transmitting payment card data be PCI DSS-compliant or risk losing its ability to process credit card payments, in addition to being subject to audit and/or fines. For the IT department, this means that as you expand your network, there are configurations and considerations that will drive your decisions regarding design, purchasing, and information security management. For more information about compliance with PCI-DSS, visit the PCI website: https://pcisecuritystandards.org/security_standards/pci_dss.shtml.

HIPAA

The Health Insurance Portability and Accountability Act (HIPAA) sets guidelines for the privacy of a patient's electronic records, notably in the medical and healthcare industries. HIPAA provides specific and widely applicable personal information privacy standards and procedures for most contexts of customer information retrieval, management, storage, and third-party transfer.

Separately, the Department of Defense mandated a transition to IPv6 (Internet Protocol version 6) by the summer of 2008. IPv6 succeeds IPv4, the IP version in widespread use today.

Network DNA: The building blocks

Now that we've reviewed the issues that need to be considered when designing a network, it's important to understand the technologies and devices that go into building the physical network. The following information will serve as a useful reference point for anyone wishing to differentiate networking devices and understand the technologies that make modern networks function.

Ethernet

Ethernet is a family of frame-based computer networking technologies for local area networks (LANs). The name comes from the concept of the physical medium (the cable) carrying data through the "ether," or space. The technology includes a number of wiring and signaling standards for the physical layer, means of network access at the Media Access Control (MAC)/Data Link Layer, and a common addressing format. Ethernet is standardized as IEEE 802.3. The combination of the twisted-pair versions of Ethernet for connecting end systems to the network, along with the fiber-optic versions for site backbones, is the most widespread of wired LAN technologies.

Ethernet comes in many speeds:
• 10Mbps
• 100Mbps
• 1000Mbps (1Gbps, also known as Gigabit Ethernet)
• 10Gbps (10 Gigabit Ethernet)

Most Ethernet standards run over copper cabling or fiber cabling. Other terms commonly used to describe Ethernet include 10/100, which indicates support of both 10Mbps and 100Mbps, and triple speed, which refers to support of 10/100/1000Mbps. Most servers, notebooks, and desktop PCs today come with triple-speed network interface cards (NICs). Most switches have GigE or 10GbE uplink ports for connecting to the data center. Currently, 10GbE is used almost exclusively for connectivity between network devices, but an increase in 10GbE servers is anticipated. GigE is already becoming standard in the data center and is moving out to the end-point at a slow and steady rate.
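As a practical aside, on Linux servers the negotiated NIC speed and duplex can be read directly from the kernel, which is a quick way to confirm that a triple-speed port actually linked at 1000Mbps. The sketch below is illustrative and Linux-specific; the interface name is a placeholder.

# Quick check of a NIC's negotiated link speed and duplex via sysfs (Linux only).
from pathlib import Path

IFACE = "eth0"  # hypothetical interface name

base = Path("/sys/class/net") / IFACE
try:
    speed = (base / "speed").read_text().strip()    # link speed in Mbps
    duplex = (base / "duplex").read_text().strip()  # "full" or "half"
    print(f"{IFACE}: {speed} Mbps, {duplex} duplex")
except OSError:
    print(f"{IFACE}: link is down or interface does not exist")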
Network hardware

Network infrastructure is comprised mainly of switches and routers. These core technologies are the cornerstone of how data traverses your network. Although not the most sophisticated devices, switches and routers are the building blocks of the network and can often be the root cause of many problems. For this reason, strong network management software that supports multiple vendors is essential to ensuring data monitoring and management as the network connects users and productivity.

Hubs

A hub is a basic networking device. It joins multiple computers together within one LAN. A network hub contains multiple ports. When a packet arrives at one port, it is copied unmodified to all ports of the hub; the destination address is not changed to a broadcast message. A network hub operates at the physical layer (Layer 1) of the Open System Interconnection (OSI) model.

Switches

A network switch is a small hardware device that joins multiple computers together within one LAN. Technically, network switches operate at Layer 2 (the Data Link Layer) of the OSI model. Network switches appear nearly identical to network hubs, but a switch generally contains more "intelligence" (and a slightly higher price tag) than a hub. Unlike hubs, network switches are capable of inspecting data packets as they are received, determining the source and destination device of each packet, and forwarding it appropriately. By delivering each message only to the connected device for which it was intended, a network switch conserves bandwidth and generally offers better performance than a hub.

As with hubs, Ethernet implementations of network switches are the most common. Mainstream Ethernet network switches support 10 Mbps, 100 Mbps, or 10/100 Mbps Ethernet standards. Different models of network switches support differing numbers of connected devices; most corporate-grade network switches provide either 24 or 48 connections for Ethernet devices. Switches can be connected to each other, and such "daisy chaining" allows progressively larger numbers of devices to join the same LAN. Some common switch functions include VLANs (virtual local area networks) and 802.1q tagging/trunking, QoS, PoE, Layer 3 IP routing, and, in some cases, firewall security.

Routers

Routers connect two or more logical subnets, which do not necessarily map one-to-one to the physical interfaces of the router. The router is an appliance whose software and hardware are usually tailored to the tasks of routing and forwarding information. Routers generally contain a specialized operating system (for example, Cisco's IOS, or Juniper Networks' JUNOS and JUNOSe), RAM, NVRAM, flash memory, and one or more processors. A router is a basic component of a wide area network (WAN). Its core function is to forward traffic at Layer 3 across the widest variety of LAN and WAN interfaces, from dial-up modems to 10GbE. Common IP routing protocols include OSPF (Open Shortest Path First), EIGRP (Enhanced Interior Gateway Routing Protocol), BGP (Border Gateway Protocol), and IS-IS (Intermediate System to Intermediate System). Routers also provide voice-circuit termination, NAT, firewall, policy routing, VPN, wireless connectivity, accounting, monitoring, and virtualization. Sometimes firewalls can be considered routers, such as products sold by Cisco, SonicWALL, Fortinet, and others.
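The Layer 3 forwarding decision described above boils down to longest-prefix matching: the router picks the most specific route that contains the destination address. The following illustrative Python sketch uses a made-up routing table and next hops to show how the most specific route wins.

# Longest-prefix-match sketch; routing table entries and next hops are made up.
import ipaddress

ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"),    "WAN uplink"),
    (ipaddress.ip_network("10.20.0.0/16"),  "Branch office router"),
    (ipaddress.ip_network("10.20.30.0/24"), "Local VLAN 30 interface"),
    (ipaddress.ip_network("0.0.0.0/0"),     "Default route to ISP"),
]

def next_hop(destination: str) -> str:
    """Return the next hop for the most specific (longest) matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTING_TABLE if dest in net]
    best = max(matches, key=lambda entry: entry[0].prefixlen)
    return best[1]

print(next_hop("10.20.30.5"))   # Local VLAN 30 interface
print(next_hop("10.20.99.7"))   # Branch office router
print(next_hop("8.8.8.8"))      # Default route to ISP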
Firewalls

A firewall is designed to block unauthorized access while permitting authorized communications. It is configured to permit, deny, encrypt, decrypt, or proxy computer traffic between different security domains based upon a set of rules and other criteria. Firewalls include both physical hardware devices and software. All messages entering or leaving an intranet pass through the firewall, which examines each message and blocks those that don't meet the specified security criteria.

A firewall's basic task is to regulate the flow of traffic between computer networks of different trust levels. Typical examples include the Internet (a zone with no trust) and an internal network (a zone of higher trust). A zone with an intermediate trust level between the Internet and a trusted internal network is often referred to as a "perimeter network," or demilitarized zone (DMZ). The function of a network firewall is similar to that of fire doors in building construction: just as the network firewall prevents intrusion into a private network, the physical firewall contains and delays a structural fire from spreading.

Network appliances

In addition to the physical network devices that help connect computers and manage the flow of data, software programs called "network appliances" are used to provide specialized functionality on the network. Network appliances are embedded system devices that provide a narrow range of functions and generally use a dedicated hardware platform. The Linux operating system is popular among many computer appliances. The following are all common network appliances.

Web accelerators

A web accelerator is a proxy server that reduces Web site access time. It is an appliance connected to a Web site's front end that compresses data and shortcuts inefficient HTTP redirection. Web accelerators may use several techniques to reduce website access time:
• Cache recently or frequently accessed documents so they may be sent to the client with less latency or at a faster transfer rate than the remote server
• Freshen objects in the cache to ensure frequently accessed content is readily available for display
• Preemptively resolve host names present in a document (HTML or JavaScript) in order to reduce latency
• Pre-fetch documents that are likely to be accessed in the near future
• Compress documents to a smaller size, for example by reducing the quality of images or by sending only what's changed since the document was last requested
• Optimize the code of certain documents (such as HTML or JavaScript)
• Filter out ads and other undesirable objects so they are not sent to the client
• Maintain persistent TCP connections between the client and the proxy server
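To make the compression technique listed above concrete, the short sketch below (illustrative, standard library only) gzip-compresses a repetitive HTML document, the same idea a web accelerator applies for clients that advertise gzip support in their Accept-Encoding header.

# Payload-compression sketch: gzip a text document before it crosses the wire.
import gzip

document = ("<html><body>" + "This line repeats in many pages. " * 200 + "</body></html>").encode("utf-8")
compressed = gzip.compress(document)

print(f"original:   {len(document)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(document):.1f}% of original)")

# The receiving side (or the accelerator closest to the client) reverses it:
assert gzip.decompress(compressed) == document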
WAN accelerators

WAN accelerators are appliances placed at two or more remote sites that transparently intercept network traffic in an attempt to optimize it. Specifically, these products make it easy for organizations to accelerate the applications that are most important to users. The goal of WAN accelerators is to allow workers on the move to enjoy LAN-like access to files and applications, whether they are working from home, on the road, or even at customer sites. Most WAN accelerators optimize traffic in both directions, transparently to production applications, and require no reconfiguration of client and server software.

WAN accelerators usually include:
• Data streaming to optimize WAN traffic by removing redundant data and prioritizing traffic through advanced QoS mechanisms
• Transport streamlining to improve the behavior of TCP
• Application streamlining to reduce application protocol inefficiencies and enable disconnected operations
• Management streamlining to simplify the deployment, maintenance, and management of appliances

Most WAN accelerator vendors claim performance improvements of 5–100 times. File sharing, printing, backup, and replication reap the biggest performance gains with WAN acceleration, and the gains are amplified as latency increases: a cross-town T1 link with 5 ms latency will see much less benefit than a cross-country T1 with 40 ms of latency. Finally, it should be noted that "chatty" and interactive apps such as SQL Server, Citrix, and Telnet see little or no improvement from WAN accelerators.

For the moment, encrypted traffic such as SSH (Secure Shell) and SSL/TLS cannot be easily compressed and usually sees only slight improvement from WAN accelerators. However, Riverbed has recently introduced SSL/TLS acceleration (the mechanism requires that the WAN accelerators get copies of the Web site's private keys and certificates). Though it remains unclear whether this technique will stand up to audit standards like those of the Payment Card Industry (PCI), other vendors are quickly following suit.

Because of performance variability, organizations considering WAN accelerators should follow three important steps:
1. Identify key applications that need improvement and define a benchmarking process.
2. Research the optimization techniques of potential vendors to see how their techniques will help your applications.
3. Test products from multiple vendors to verify performance gains.

When weighing the value of WAN accelerators, three additional points are worth noting:
1. WAN performance may have already been addressed in other ways, such as by using Citrix or deploying local file or email servers.
2. Most WAN accelerators work by tunneling traffic, which may require substantial changes to the existing routing, security, monitoring, and Quality of Service (QoS) policies.
3. Operating systems such as Windows Vista and Longhorn include their own WAN acceleration techniques that may overlap or conflict with the use of dedicated WAN accelerators.

Content networking appliances

Content networking is a general term for network devices that integrate with applications in order to improve performance, availability, security, or manageability. Content filtering appliances block or allow data based on analysis of content, rather than source or other criteria. Such appliances are widely used on the Internet to filter email and web access. Email spam appliances (detailed below) fall into this category.

Content filtering is often broken up into outbound and inbound filtering. Outbound content filtering manages content as it leaves the corporate network; many organizations under HIPAA, OSHA, and other regulatory mandates must inspect content before it leaves the network. Inbound content filtering works in the opposite direction, and many solutions allow the product to be used for both. Both SonicWALL and Barracuda Networks provide content filtering solutions that help companies meet government regulations.
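The outbound-filtering idea can be illustrated in a few lines of Python. The patterns, sample message, and policy below are purely illustrative; real appliances use far more robust detection and act on live traffic rather than strings.

# Outbound content-filtering sketch: flag messages that appear to contain payment
# card or Social Security numbers before they leave the network.
import re

PATTERNS = {
    "possible card number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "possible SSN":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def inspect(message: str) -> list[str]:
    """Return a list of policy findings for an outbound message."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(message)]

outbound = "Hi, my card is 4111 1111 1111 1111, please charge the balance."
findings = inspect(outbound)
if findings:
    print("BLOCKED:", ", ".join(findings))
else:
    print("allowed")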
Email spam appliances

Anti-spam appliances are hardware devices with integrated onboard software that implement anti-spam techniques for email and/or instant messaging (the latter also called "spim"), and are deployed at the gateway or in front of the mail server. They are normally driven by an operating system optimized for spam filtering. They are generally used in larger networks such as corporations, ISPs, and universities. The best-known spam appliance vendors include Barracuda Networks and IronPort Systems (now owned by Cisco Systems).

Reasons for choosing anti-spam appliances instead of software can include:
• Preference for hardware over software
• Ease of installation
• Operating system requirements (for example, company policy requires Linux, but the software is not available for that OS)
• Independence from existing hardware

Load-balancing appliances

Load balancing is a technique to spread work between two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, throughput, or response time. The use of multiple components with load balancing, instead of a single component, may also increase reliability through redundancy. The balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch). It is commonly used to mediate internal communications in computer clusters, especially high-availability clusters.

Server Load Balancing (SLB) takes incoming network connections and distributes them across multiple servers (a server farm). Transparent to both user and server, Server Load Balancing allows a service to scale beyond a single server, gracefully handle server outages, and allow servers to be taken offline for maintenance in a non-disruptive manner. Say, for example, a user attempts to visit dell.com: traffic for this URL will be directed at the SLB, which forwards the traffic to an available server. Should that server fail, the SLB ensures the user is quickly redirected to one of the remaining servers.

SLB sounds easy, but implementation can be burdensome without a deep understanding of how applications run on the network. In fact, load balancing can sometimes become a problem instead of a productivity enhancer. Smart organizations lacking expertise in this area bring in IT consultants to provide strategic direction and training.

Load balancers can include a variety of special features:
• Asymmetric load: A ratio can be manually assigned to cause some backend servers to get a greater share of the workload than others. This is sometimes used as a crude way to account for some servers being faster than others.
• Priority activation: When the number of available servers drops below a certain number, or load gets too high, standby servers can be brought online.
• SSL offload and acceleration: SSL applications can be a heavy burden on the resources of a Web server, especially on the CPU, and end users may see slow response times (or at the very least the servers spend a lot of cycles doing things they weren't designed to do). To resolve these kinds of issues, a load balancer capable of handling SSL offloading in specialized hardware may be used. When load balancers take over the SSL connections, the burden on the Web servers is reduced and performance does not degrade for end users.
• Distributed denial of service (DDoS) attack protection: Load balancers can provide features such as SYN cookies and delayed binding (the back-end servers don't see the client until it finishes its TCP handshake) to mitigate SYN flood attacks and generally offload work from the servers to a more efficient platform.
• HTTP compression: Reduces the amount of data to be transferred for HTTP objects by utilizing gzip compression, available in all modern web browsers.
• TCP offload: Different vendors use different terms for it, but the idea is that each HTTP request from each client is normally a different TCP connection. This feature utilizes HTTP/1.1 to consolidate multiple HTTP requests from multiple clients into a single TCP socket to the back-end servers.
• TCP buffering: The load balancer can buffer responses from the server and spoon-feed the data out to slow clients, allowing the server to move on to other tasks.
• HTTP caching: The load balancer can store static content so that some requests can be handled without contacting the web servers.
• Content filtering: Some load balancers can arbitrarily modify traffic on the way through.
• HTTP security: Some load balancers can hide HTTP error pages, remove server identification headers from HTTP responses, and encrypt cookies so end users can't manipulate them.

SSL accelerator appliances

SSL acceleration is a method of offloading the processor-intensive public-key encryption algorithms involved in SSL transactions to a hardware accelerator. Typically, this is a separate card that plugs into a PCI slot in a computer, with one or more co-processors able to handle much of the SSL processing. An SSL accelerator comes either as a standalone appliance or integrated into an SLB or Web-accelerator product. It works to encrypt and decrypt SSL/TLS data and to offload the CPU-intensive SSL negotiation that occurs upon initial setup of a connection. Where a regular Web server may handle a few hundred concurrent SSL sessions, an SSL accelerator uses specialized hardware to handle many thousands of them.

There are two main uses of SSL accelerators. With SSL offload, the SSL accelerator encrypts traffic to the client, but not to the server. This allows the server the full CPU benefit of not having to deal with any encryption. Public certificates are loaded only on the SSL accelerator. For security of the decrypted data, the server and SSL accelerator should be located in the same secure facility, with as few network hops between the two as possible.

With SSL end-to-end, the SSL accelerator encrypts traffic all the way from the client to the server, but it is briefly decrypted and re-encrypted within the accelerator so that an SLB or other content networking product can inspect the content (for instance, to make a load balancing decision or enforce an application firewall rule). Public certificates are loaded only on the SSL accelerator, but the servers still need to have public or self-signed certificates. SSL end-to-end is used in environments where regulations or best practices mandate that decrypted traffic never be sent across a network.

SSL vendors include Array Networks, Cisco Systems, Citrix Systems, Coyote Point Systems, F5 Networks, Foundry Networks, Juniper Networks, Nortel, Radware, and SonicWALL.
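The offload scenario can be sketched in software to show where the work moves. The following illustrative Python sketch terminates TLS from a client and forwards the decrypted request to a back-end server; the certificate paths and addresses are placeholders, it handles a single request, and a real accelerator does this in dedicated hardware at far higher session counts.

# TLS-termination sketch: decrypt HTTPS locally, forward plaintext to a back end.
# Binding port 443 normally requires elevated privileges; run as a sketch only.
import socket
import ssl

BACKEND = ("10.0.0.20", 8080)                          # hypothetical back-end server

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.crt", "server.key")    # public cert lives here only

with socket.create_server(("0.0.0.0", 443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        client, _ = tls_listener.accept()               # TLS handshake happens here
        request = client.recv(4096)                     # decrypted client request
        with socket.create_connection(BACKEND) as backend:
            backend.sendall(request)                    # forwarded in the clear
            client.sendall(backend.recv(65536))         # response re-encrypted to client
        client.close()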
Application offload

Application offload appliances fill a niche, offloading the inter-server message processing associated with Service-Oriented Architecture (SOA). SOA describes the interconnected services commonly found in business-to-business and back-end application-server environments. For instance, a user's Web site search for an airline fare may fire off dozens or hundreds of back-end messages behind the scenes. XML messages are the most common, but MQ, JMS, and other protocols still have a significant foothold. A nearly human-readable protocol, XML makes for easy development but inefficient processing. As a result, appliances have emerged specifically to offload XML validation, transformation, compression, encryption/decryption, and forwarding. Though not actually a building block of the network, application offload appliances do overlap with traditional areas such as content networking, application firewalls, load balancing, and SSL offload. Consequently, many vendors' product lines are in the process of converging.

Wireless networking

In addition to the wired network, wireless access is becoming more common and more critical. Wireless networking is often used to extend the reach of the network and provide mobility to users, and it is rapidly gaining popularity in business networking. Wireless technology continues to improve, while the cost of wireless products continues to decrease. Popular wireless local area networking (WLAN) products conform to the 802.11 "Wi-Fi" standards.

Many businesses today are moving towards wireless LANs (WLANs). A WLAN typically extends an existing wired local area network. WLANs are built by attaching a device called an access point (AP) to the edge of the wired network. Clients communicate with the AP using a wireless network adapter similar in function to a traditional Ethernet adapter. Beyond laptops, wireless can connect near-line-of-sight buildings and be used for inventory tracking.

Wireless APs

A wireless access point (WAP or AP) is a device that allows wireless communication devices to connect to a wireless network. The WAP usually connects to a wired network, and can relay data between the wireless devices (such as computers or printers) and wired devices on the network. A WAP can be used to join wireless devices to a wired network or to extend the range of a wireless network. It doesn't provide the DNS, DHCP, firewall, or other functions commonly found in wireless routers; it simply takes a wired or wireless network input and relays it to the wireless devices within its broadcast range.

Protocols that share an RF band will coexist only with a significant performance penalty. For example, a single wireless client running 802.11b will significantly slow the performance of all 802.11g clients attached to the same AP. Some organizations attempt to limit RF band use by user type: for example, 802.11b/g in the 2.4GHz band could be used for dense "mileage-may-vary" notebook wireless connectivity, while 802.11a in the 5.0GHz band could be reserved for critical connectivity such as VoIP phones and tablet PCs.

Limitations

One IEEE 802.11 WAP can typically communicate with 30 client systems located within a radius of 100 meters.
However, the actual range of communication can vary significantly, depending on such variables as indoor or outdoor placement, height above ground, nearby obstructions, other electronic devices that might actively interfere with the signal by broadcasting on the same frequency, type of antenna, the current weather, operating radio frequency, and the power output of devices. Network designers can extend the range of WAPs through the use of repeaters and reflectors, which can bounce or amplify radio signals that ordinarily would go unreceived. In experimental conditions, wireless networking has operated over distances of several kilometers.

Most jurisdictions have only a limited number of frequencies legally available for use by wireless networks. Usually, adjacent WAPs will use different frequencies to communicate with their clients in order to avoid interference between the two nearby systems. Wireless devices can "listen" for data traffic on other frequencies and can rapidly switch from one frequency to another to achieve better reception on a different WAP. However, the limited number of frequencies can be problematic when overlap causes interference, such as in crowded downtown areas with tall buildings housing multiple WAPs.

Wireless networking lags behind wired networking in terms of bandwidth and throughput. As of 2004, typical wireless devices for the consumer market could reach speeds of 11 Mbit/s (IEEE 802.11b) or 54 Mbit/s (IEEE 802.11a, IEEE 802.11g), while wired hardware of similar cost reaches 1000 Mbit/s (Gigabit Ethernet). One impediment to increasing the speed of wireless communications comes from Wi-Fi's use of a shared communications medium: a WAP is only able to use somewhat less than half the actual over-the-air rate for data throughput. Thus a typical 54 Mbit/s wireless connection actually carries TCP/IP data at 20 to 25 Mbit/s. Users of legacy wired networks expect faster speeds, and people using wireless connections keenly want to see the wireless networks catch up.

Top AP vendors:
• Cisco
• 3Com
• Aruba
• Juniper
• SonicWALL
• Meru

The latest standard for wireless networking is IEEE 802.11n. This new standard operates at speeds up to 600 Mbit/s and at longer distances (~50 m) than 802.11g. Use of legacy wired networks (especially in consumer applications) is expected to decline sharply as the common 100 Mbit/s speed is surpassed and users no longer need to worry about running wires to attain high bandwidth. That being said, some vendors are quick to market even though the standard has not been ratified (which means it could, and probably will, change a bit). Vendors that currently support some form of 802.11n include:
• Aruba Networks
• Cisco Systems
• Colubris Networks
• Meru Networks
• Motorola / Symbol Technologies
• Trapeze Networks

Risks

Wireless networking involves risks. By its nature, information can be intercepted when sent wirelessly. All WAPs include the ability to encrypt data and prevent unauthorized access; however, the devices must be configured to do so. In addition, not all forms of authentication and encryption are fully secure, and many can be easily broken. It is possible for unauthorized users to connect to wireless networks, or even to connect their own WAPs to your network.
Such users are then not only able to consume your bandwidth, but also pose a security threat since they potentially have access inside your network. If your network includes wireless access points, it is important that you be able to identify rogue users and rogue access points.

Power over Ethernet (PoE)

When considering which hardware to purchase for your network, it is important to think about what devices will connect to that hardware. Many devices can be powered off the network, removing the need for a multitude of power outlets and allowing flexibility in placing the devices. Not all networking devices support PoE, and those that do are often more expensive, so it is important to consider this when making purchases.

PoE technology entails a system to transmit electrical power, along with data, to remote devices over standard twisted-pair cable in an Ethernet network. This technology is useful for powering IP telephones, wireless LAN access points, network switches and routers, and applications where it would be inconvenient, expensive, or infeasible to supply power separately. The lure of PoE is that it works with an unmodified Ethernet cabling infrastructure, such as Cat5 (the cable that is probably running through the walls of your building). Appliances are a primary driver of PoE implementations on corporate networks. Applications for PoE include video conferencing, kiosks and touch screen systems, and wireless APs with high power requirements (802.11n and WiMAX).

(Figure 1: Examples of PoE appliances)

PoE is enabled using either an end-span or mid-span approach. With an end-span approach, the PoE capability is embedded into the network switches. With a mid-span approach, a PoE-enabled patch panel (or an individual power injector) is used to add power to cables after they leave the network switch.

A word about physical cabling

Almost all buildings these days are lined with Category 5 cable, commonly known as Cat 5 or "Cable and Telephone." Cat 5 is a twisted-pair cable designed for high signal integrity. This type of cable is often used for computer networks such as Ethernet, basic voice services, token ring, and ATM (at up to 155 Mbit/s, over short distances). Cat5e cabling will suffice for today's 10/100/1000Mbps Ethernet standards. For GigE support, it's important to purchase switches with Time Domain Reflectometer (TDR) capability. Although Cat 6 is available, it has gained little traction in corporate networks. For fiber, most installations are single-mode fiber (SMF), which is the generally accepted standard. An alternative is multimode fiber (MMF); however, its support for 10GbE is relegated to short routes.

Multi-location networking

When organizations grow beyond a single location, they must take into consideration the requirements, infrastructure, and needs of connecting and managing multiple local networks. There are several types of multi-location networks, so it is important to understand each one and how they differ. Below is a list of the most common multi-location network types.

Wide area networks

WANs are computer networks that cover a broad area (i.e., any network whose communications links cross metropolitan, regional, or national boundaries) or, less formally, any network that uses routers and public communications links.
By contrast, personal area networks (PANs), LANs, campus area networks (CANs), and metropolitan area networks (MANs) are usually limited to a room, building, campus, or specific metropolitan area, respectively. The largest and best-known WAN is the Internet. In short, WANs are used to connect LANs and other types of networks together, so that users and computers in one location can communicate with users and computers in other locations. Many WANs are built for one particular organization and are private.

Ethernet WAN and MAN services

Ethernet services go by many names, such as GigaMAN, OptiMAN, OPT-E-MAN, optical Ethernet, switched Ethernet services, resilient packet ring (RPR), E-VPLS, and provider backbone transport (PBT). Some services provide only point-to-point connectivity, while others provide multisite connectivity. They all have in common the ability to connect locations using Ethernet at a much lower cost per megabit than traditional T1 and T3 links, without the requirement of a router.

In evaluating Ethernet services, a few questions deserve consideration:
• Is the connectivity shared (for example, an Ethernet switch) or dedicated, such as traditional TDM (time division multiplexing)? Shared services are considerably cheaper, but do not offer the comfort of dedicated bandwidth guarantees.
• Will the service provider honor client QoS markings? If so, to what degree?
• Will the service's geographic reach match your organization's needs? Typically, the narrower the geographic range, the more service options are available.
• Will the service be able to support multiple VLANs (802.1q)? Some older equipment requires that customer VLAN numbering be coordinated with the provider.
• Will the service's redundancy and fault tolerance match expectations? Many Ethernet services have no last-mile fiber or hardware diversity.
• Will the extra bandwidth of an Ethernet MAN or WAN be sufficient for an organization's long-term mobility goals (for example, keeping per-user bandwidth consumption as practicably low as possible)?

Point-to-point leased lines

A leased line connects exactly two locations, typically with a router at each side. Leased lines connect an organization's branch locations to each other and to the central hub, and they also typically connect an organization to the Internet. All private WAN services discussed in this guide (such as MPLS, frame relay, and ATM) begin by first connecting the customer locations to the provider with leased lines. The value proposition of symmetric digital subscriber line (SDSL) and cable modem WAN links can often be offset by increased latency that impairs the effective throughput. It's also worth noting that traditional TDM services such as DS1 and DS3 are taxed by states; telecom companies, working with public service commissions, have set prices that are generally uniform and non-negotiable. In contrast, Ethernet, SDSL, and cable modem services are subject to competitive price negotiations.

Some organizations prefer self-managed leased lines over a provider WAN (such as ATM, frame relay, or MPLS) because leased lines usually have guaranteed bandwidth, while provider WANs may charge different fees based on average usage, maximum (burst) usage, or the QoS settings of client traffic.
Leased lines tend to be more secure and reliable, with lower latencies, since they take direct paths between the endpoints. By contrast, provider WANs backhaul all traffic to a central point, often hundreds of miles from the endpoints.

Provider WANs do, however, have some significant advantages:
• With a provider WAN, each branch location needs only a single router WAN interface to the service provider in order to communicate with all other branch locations. With leased lines, the hub location needs an expensive router interface for each branch office. A provider WAN also permits branch locations to be meshed, meaning branches can communicate directly with each other without traversing a hub location.
• Provider WANs have a single support contact for WAN moves, changes, or outages, whereas leased lines may involve many vendors and different avenues of support.
• Provider WAN pricing is often negotiable, especially for a network composed of a large number of branch locations, while many leased line services have inflexible pricing due to tariffs.
• Provider WANs may have value-add features such as backup paths, remote access, Internet access, and VoIP services.

MPLS VPNs

Multiprotocol Label Switching (MPLS) is a data-carrying mechanism that belongs to the family of packet-switched networks. MPLS operates at an OSI model layer that is generally considered to lie between the traditional definitions of Layer 2 (Data Link Layer) and Layer 3 (Network Layer), and thus is often referred to as a "Layer 2.5" protocol. It was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients, which follow a datagram service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames.

A number of different technologies were previously deployed with essentially identical goals, such as frame relay and ATM. MPLS is now replacing these technologies in the marketplace, mostly because it is better aligned with current and future technology needs. Service providers embrace MPLS because it makes operating, troubleshooting, and connecting new branch locations much easier than with other types of WANs, resulting in lower pricing. Customers like it because it provides full-mesh connectivity (important for VoIP and video conferencing), where branch locations can communicate directly with each other without a central hub.

MPLS does have a few drawbacks:
• MPLS simply drops packets and provides no notification, unlike frame relay and ATM
• MPLS supports only IPv4 unicast routing
• MPLS has poor support for VoIP on fractional T1 links of 768Kbps or less

Frame relay and ATM

Frame relay is a data link network protocol designed to transfer data on wide area networks (WANs) over fiber optic or ISDN lines. The protocol offers low latency and, to reduce overhead, does not perform any error correction, which is instead handled by other components of the network. Frame relay has traditionally provided a cost-effective way for telecommunications companies to transmit data over long distances. With the advent of MPLS, VPN, and dedicated broadband services such as cable modem and DSL, the end may loom for the frame relay protocol and encapsulation. Unlike MPLS, neither ATM nor frame relay works over non-serial links such as Ethernet.
Frame relay has excellent support for fragmentation and interleaving on slow-speed WAN circuits, and with most service providers, ATM and frame relay are fully interoperable. If you are designing a network for the future, frame relay is likely to become less and less of a consideration; however, understanding its implications across WAN or T3 links between disparate network locations is still important.

Network Quality of Service (QoS)

Quality of Service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow (see the priority table below). For example, a required bit rate, delay, jitter, packet-dropping probability, and/or bit error rate may be guaranteed. Quality of Service guarantees are important if the network capacity is insufficient, especially for real-time streaming multimedia applications.

Priority levels and traffic types:
• 0: Best Effort
• 1: Background
• 2: Standard (Spare)
• 3: Excellent Load (Business Critical)
• 4: Controlled Load (Streaming Media)
• 5: Voice and Video (less than 100ms latency and jitter)
• 6: Layer 3 Network Control Reserved Traffic (less than 10ms latency and jitter)
• 7: Layer 2 Network Control Reserved Traffic (lowest latency and jitter)

A defined Quality of Service may be required for certain types of network traffic, for example:
• Dedicated link emulation requires guaranteed throughput and imposes limits on maximum delay and jitter
• A safety-critical application, such as remote surgery, may require a guaranteed level of availability (this is also called hard QoS)
• A remote system administrator may want to prioritize variable (and usually small) amounts of SSH traffic to ensure a responsive session even over a heavily laden link
• Streaming multimedia may require guaranteed throughput to ensure that a minimum level of quality is maintained
• IPTV offered as a service from a service provider, such as AT&T's U-verse
• IP telephony or Voice over IP (VoIP) may require strict limits on jitter and delay
• Video teleconferencing (VTC) requires low jitter and latency
• Alarm signaling (for example, a burglar alarm)

These types of service are called inelastic, meaning that they require a certain minimum level of bandwidth and a certain maximum latency to function. By contrast, elastic applications can take advantage of however much or little bandwidth is available. Bulk file transfer applications that rely on TCP are generally elastic.
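At the host level, an application asks for this kind of treatment by marking its packets with a DSCP value, which the switches and routers along the path must then be configured to honor. The sketch below is illustrative: IP_TOS is the socket option as exposed on Linux, the destination is a documentation-range address, and DSCP EF (46) is the class commonly used for voice traffic.

# DSCP marking sketch: request expedited forwarding for a simulated voice packet.
import socket

DSCP_EF = 46                      # Expedited Forwarding, typical for VoIP
tos_byte = DSCP_EF << 2           # DSCP occupies the top six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
sock.sendto(b"simulated RTP payload", ("192.0.2.10", 4000))  # placeholder endpoint
sock.close()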
Network security

A recent Forrester IT management report indicated that the number one challenge for network engineers is security; it has been the number one challenge 10 years running. The good news is that solutions are getting stronger and there is no shortage of vendors. In fact, there are over 300 vendors of security software and services that provide solutions to companies ranging from small businesses to the Fortune 500.

Unified threat management (a.k.a. firewalls on steroids)

Firewalls are at the core of most corporate security strategies. A firewall is a device or set of devices configured to permit, deny, encrypt, or proxy all computer traffic between different security domains based upon a set of rules and other criteria. Usually a firewall is a dedicated appliance, or a machine running firewall software, that inspects network traffic passing through it and denies or permits passage based on a set of rules.

Firewalls, for obvious reasons, are deployed mainly at the perimeter of the network and typically protect a network from intrusion via outside links (most often the Internet). Generally, in larger, more complex networks, firewalls are also placed intra-network to protect corporate resources from internal threats, both unintentional and malicious.

Unified Threat Management (UTM) describes network firewalls that combine many features in one box, including e-mail spam filtering, antivirus capability, an intrusion detection (or prevention) system (IDS or IPS), and World Wide Web content filtering, along with the traditional activities of a firewall. These firewalls use proxies to process and forward all incoming traffic, though they can still frequently work in a transparent mode that disguises this fact. Higher-level inspection can be disabled so that the firewall functions like a much simpler network address translation (NAT) gateway.

Top market-share vendors in network security:
• Cisco
• Juniper
• CheckPoint
• Nortel
• Secure Computing
• SonicWALL
• ISS

Deep packet inspection

Deep packet inspection (DPI) is a form of computer network packet filtering that examines the data and/or header part of a packet as it passes an inspection point (usually a firewall or UTM device), searching for protocol non-compliance, viruses, spam, intrusions, or other predefined criteria to decide whether the packet can pass or needs to be routed to a different destination, or for the purpose of collecting statistical information. This is in contrast to shallow packet inspection (usually just called "packet inspection"), which simply checks the header portion of a packet. Deep packet inspection (and filtering) enables advanced security functions, and most firewalls today contain this capability.

Application firewall

While most firewalls control the flow of data, application firewalls control the execution of data. This is especially important for corporate networks that reside in the cloud, such as Web applications. An application firewall limits the access software applications have to operating system services and, consequently, to the internal hardware resources found in a computer, much as a firewall between apartments in a residential building limits the spread of heat, or even fire, to the residents on either side. It has become commonplace, and an industry standard, to deploy application firewalls in addition to traditional network firewalls. However, many network firewalls have begun to include application firewall features, so the differences between the two are becoming grayer (similar to routers and switches). When making a purchasing decision, think first about your business requirement, then about the potential incremental cost of combining both functions into one device. This can save time and money down the road, as you will have one less hardware device to maintain and support.

Network Access Control (NAC)

Network Access Control is an approach to computer network security that attempts to unify endpoint security technology (such as antivirus, host intrusion prevention, and vulnerability assessment), user or system authentication, and network security enforcement. Because NAC represents an emerging category of security products, its definition is both evolving and controversial. The overarching goals of the concept can be distilled to:
1. Mitigation of zero-day attacks: The key value proposition of NAC solutions is the ability to prevent end-stations that lack antivirus, patches, or host-intrusion prevention software from accessing the network and placing other computers at risk of cross-contamination by network worms.

2. Policy enforcement: NAC solutions allow network operators to define policies, such as the types of computers or roles of users allowed to access areas of the network, and enforce them in switches, routers, and network middleboxes.

3. Identity and access management: Where conventional IP networks enforce access policies in terms of IP addresses, NAC environments attempt to do so based on authenticated user identities, at least for user end-stations such as laptops and desktop computers.

Deployment

When organizations have a local IT department and perhaps one office, network access control at the endpoint is often sufficient. However, larger networks with WAN connections, remote offices, and remote users need to think on a more global basis. There are three common installations of NAC solutions:

1. Inline NAC: For most businesses, an inline NAC appliance installed locally is the best deployment; examples include appliances from SonicWALL. The only downside to this scenario is that all network traffic must traverse the NAC, so the IT admin is at increased risk of losing access control in the event of appliance failure. The benefit is easy deployment that won't break the bank. From a network management perspective, it also means fewer nodes to monitor and manage. As with all appliances, ensure that your NAC device supports SNMP and that it is enabled.

2. Out-of-band NAC: For medium-to-large organizations, an out-of-band NAC appliance is the better solution because only the posture-assessment traffic will traverse the NAC appliance.
With all its complexity and software interdependence, the NAC should never be implemented without first testing the design in a proper lab environment with real workstations, printers, VoIP phones and other devices. Not all organizations have such lab environments. • Detecting unauthorized NAT routers. If just one device connected to a NAT router passes posture assessment, all other devices attached to that router will also be allowed in, undermining security policy. RF signatures make detection of wireless routers relatively straightforward, but wired routers are far more difficult to find. Tying it all together: Network management Network Management is a term used to describe a broad subject of managing computer networks. There exists a wide variety of software and hardware products that help network system administrators manage a network. Generally, however, network management covers: • Security: Ensuring that the network is protected from unauthorized users. • Performance: Eliminating bottlenecks in the network. • Reliability: Making sure the network is available to users and responding to hardware and software malfunctions. Specific functions that are performed as part of network management include controlling, planning, allocating, deploying, coordinating and monitoring the resources of a network, network planning, frequency allocation, predetermined traffic routing to support load balancing, cryptographic key distribution authorization, configuration management, fault management, security management, performance management, bandwidth management, and accounting management. FCAPS The baseline of most Network Management Systems is the support of FCAPS. FCAPS is the ISO Telecommunications Management Network model and framework for network management. FCAPS is an acronym for Fault, Configuration, Accounting, Performance, and Security, which are the management categories into which the ISO model defines network management tasks. In nonbilling organizations, Accounting is usually replaced with Administration. Fault management a fault is an event that has a negative significance. The goal of fault management is to recognize, isolate, Integrating NAC with the existing workstation login procedures represents a key challenge, especially if workstation login requires network access. 19 correct and log faults that occur in the network. Fault management uses trend analysis to predict errors, so that the network will always be available. This can be established by monitoring different things for abnormal behavior. When a fault or event occurs, a network component will often send a notification to the network operator using a proprietary or open protocol such as SNMP, or at least write a message to its console for a console server to catch and log/page. This notification is supposed to trigger automatic or manual activities such as the gathering of more data to identify the nature and severity of the problem or to bring specific down equipment back on-line. When choosing network management software, consider using a system that supports automatic remediation. It‘s one thing to alert the network engineer when a fault occurs, it is better when the system can automatically remediate the problem. For example, a server goes haywire every so often blowing through memory. You can‘t seem to solve the problem, but you know that a reboot will at least plug the hole for a few weeks at a time. 
Strong network management software will notify you that the fault has occurred, and also reboot the machine automatically. Configuration management Configuration management is the process of managing firmware versions and configurations of the firmware on managed devices. This include gathering and storing configurations, backing up configurations, tracking changes of Fault management uses trend analysis to predict errors, so that the network will always be available. Figure 2: Fault and performance management align IT to business Servers Database Network Applications 20 configurations, and creating policies to mass update configurations. Make sure your NMS supports mass updates of configurations across a pool of devices. Some network management systems will allow the user to create a policy so mass configuration updates and changes can be impl