Network Switches Grow Up

Once simple devices that merely passed network traffic along, network switches have steadily gained intelligence. They can now help control user access to applications and provide fault tolerance to prevent network downtime. This increased functionality not only makes the corporate network run better; it can also relieve some of the administrative load on network administrators and other IT professionals.

"Enterprises can do much more with their existing equipment thanks to the increased sophistication and speed of switches," says Jim Kelton, president of Altius Information Technology Inc., an IT consulting firm based in Santa Ana, Calif., that specializes in performing network and security assessments. "And even as speeds go up -- we've seen them rise from 10 megabits to 100 megabits to 1 gigabit -- prices are coming down dramatically. As a result, we see companies rapidly upgrading to the higher-capacity switches."

"IT is very interested in this because many companies are building converged networks, putting IP and telephone traffic on the same network, and in general trying to enable various kinds of traffic using the same infrastructure," says Mary Petrosky, a network analyst based in San Mateo, Calif.

All On Top
The increased functionality is evident in that switches are no longer merely acting as Layer 2 and Layer 3 devices, but have moved up the protocol stack to act as Layer 4 and even Layer 7 devices.


The layer numbers refer to the OSI reference model, which describes how networking functions are organized, from the physical wire at Layer 1 up to applications at Layer 7. Layer 1 devices are rapidly becoming obsolete; not so much switches as simple hubs, they don't manage traffic, but merely repeat it to every connected device. Slightly more sophisticated, Layer 2 switches forward traffic based on hardware addresses and can interconnect a small number of devices in a home or office. Layer 3 switches operate higher in the stack still. Performing the routing function traditionally handled by routers, they increase network efficiency by delivering traffic only to the ports that need to receive it.

Layer 4 switches go a step further, making forwarding decisions based on transport-layer information such as TCP and UDP port numbers, which identify the application generating the traffic. There are no Layer 5 or Layer 6 switches. The definition of Layer 7 switches varies from vendor to vendor, but these devices typically perform packet inspection at a very granular level -- examining the application payload itself -- while controlling quality of service (QoS) as well as a host of security-related functions.
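
To make the distinction concrete, here is a minimal, hypothetical sketch of Layer 4 versus Layer 7 decisions, with a "packet" represented as a plain Python dictionary. The field names, port-to-application mappings and content rule are assumptions for illustration, not any vendor's API.

```python
# Illustrative only: classify "packets" at Layer 4 (port numbers) versus
# Layer 7 (application content). Real switches do this in hardware/firmware.

def classify_layer4(packet: dict) -> str:
    """Layer 4: decide based on transport-layer ports alone."""
    if packet["dst_port"] in (5060, 5061):   # SIP signaling -> treat as voice
        return "voice-priority"
    if packet["dst_port"] == 443:            # HTTPS
        return "web"
    return "best-effort"

def classify_layer7(packet: dict) -> str:
    """Layer 7: inspect the application payload as well."""
    payload = packet.get("payload", b"")
    if payload.startswith(b"GET /payroll"):  # application-level rule
        return "restricted-app"
    return classify_layer4(packet)

print(classify_layer4({"dst_port": 5060}))                      # voice-priority
print(classify_layer7({"dst_port": 443,
                       "payload": b"GET /payroll HTTP/1.1"}))   # restricted-app
```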

Enterprises are using the more intelligent devices -- specifically, the Layer 4 and Layer 7 switches -- in the following ways:

  • Control user access to applications and other computing resources "The more advanced switches have very granular control over traffic, which allows an enterprise to determine what content goes where and who has access to it," says Petrosky. For example, companies can develop policies that determine which users get access to which applications and databases, and can quickly allow or revoke access for specific users, as illustrated in the sketch after this list. 
  • Ensure quality of service (QoS) With the converged voice and data networks many companies have installed, the more sophisticated switches can detect and prioritize voice traffic, which is more sensitive to latency than data traffic. Likewise, higher-priority data traffic can take precedence over less important data.
  • Enable fault tolerance The higher-end switches offer redundant power supplies, cooling and forwarding engines that make networks more fault-tolerant. Manufacturers are also migrating this functionality down to smaller platforms -- good news for smaller offices that rely on a single switch, where a failure would otherwise take the entire office offline.
  • Aid compliance "Due to regulations such as Sarbanes-Oxley, it is very important to show that only certain people have had access to certain data," says Petrosky. "The higher-end switches provide you with a clear audit trail that allows you to do just that."
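
A minimal sketch of the user-to-application access policy idea described in the first item above, assuming a simple in-memory policy table; the group and application names are hypothetical. An audit trail of the kind Petrosky describes would amount to logging each such decision with a timestamp and user identity.

```python
# Hypothetical policy table: which user groups may reach which applications.
ACCESS_POLICY = {
    "finance":     {"erp", "payroll"},
    "engineering": {"source-control", "build-farm"},
    "all-staff":   {"email", "intranet"},
}

def is_allowed(user_groups: set, application: str) -> bool:
    """Return True if any of the user's groups is permitted to use the app."""
    return any(application in ACCESS_POLICY.get(group, set())
               for group in user_groups)

# Example: an engineer may reach the build farm but not payroll.
print(is_allowed({"engineering", "all-staff"}, "build-farm"))  # True
print(is_allowed({"engineering", "all-staff"}, "payroll"))     # False
```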

All this additional capacity and functionality raises even more security issues. "The higher bandwidth these devices deliver allows people to email ever-larger documents of all types. And any time that you connect more people to more information more easily, you up the risk of security breaches," says Kelton.

Still, he says, the pros greatly outweigh the cons. With switches evolving so rapidly, Kelton recommends that enterprises evaluate their networks and upgrade their switches at least every two years. "Everything is changing so rapidly and the increased functionality is so potentially valuable that enterprises need to make a point of keeping up," he says.

The Pros and Cons of Server Consolidation

One of the hottest prospects to address the high administrative, support and infrastructure costs of the data center is server consolidation. "In the long term, server consolidation is something you absolutely must do," says Tim Pacileo, executive consultant at Compass American Inc., a metrics-based IT consulting firm based in New York City.

However, along with consolidation's significant advantages, there are also significant challenges that may diminish its benefits. "The cost savings look great on paper," concedes Pacileo. "But once you consider all the factors, you understand that you might not realize all those savings right away."

Together At Last
Server consolidation involves gathering data and applications stored or running on two or more physical servers onto a single server. This reduces the number of physical machines required to meet all the processing and/or storage requirements of the data center. Some organizations take this concept a step further through virtualization, which makes a single physical server look like multiple logical servers. Either way, there are a number of ways in which consolidation reduces the data center's total cost of ownership (TCO):

  • Lessened hardware requirements Because it's no longer necessary to purchase a new server for every application or business unit, you can cut the total number of machines. When each department has its own dedicated servers, these reductions can be dramatic.
  • Minimized physical space Fewer servers mean less total space dedicated to your data center.
  • Reduced energy costs Likewise, fewer physical servers in a reduced physical space mean less energy is required to power and cool the facility. As energy costs continue to escalate, cutting down on servers can result in significant savings. "With the current drive toward 'green' data centers, this is a major business driver," says John Sloan, a senior research analyst at Info-Tech Research Group, in London, Ont., Canada.
  • Decreased labor and support costs Experts estimate that more than 70 percent of the TCO of the data center derives from administrative, labor and outsourced services. Server consolidation should slash those expenses. "Just take the amount of time it takes to provision a new server," says Jacob Farmer, chief technology officer at Cambridge Computer Corp., a consulting firm specializing in data protection and storage networking, located in Waltham, Mass. "Before, you had to order it, transport it to and from the loading dock, screw it into the racks and configure it. Today, all you need to do is sit down at a terminal and conjure up a new virtual server."
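
As a rough illustration of how lightweight that provisioning step has become, here is a minimal sketch using the libvirt Python bindings against a local KVM host. The connection URI, domain name and sizes are assumptions, and a production definition would also include disk, network and console devices.

```python
# Sketch: define and start a bare-bones virtual machine via libvirt.
# Assumes the libvirt-python package and a local KVM/QEMU hypervisor.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>app-server-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
domain = conn.defineXML(DOMAIN_XML)     # register the definition persistently
domain.create()                         # boot the new virtual server
print("Running domains:", [d.name() for d in conn.listAllDomains()])
conn.close()
```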

Curb Your Enthusiasm
Before moving ahead with consolidation plans, though, data center managers must consider a number of challenges. Chief among them: disaster recovery. "You have to have a backup machine with the same capabilities as each production server," Pacileo points out. "These hardware costs -- and the related support costs -- can add up and reduce your anticipated savings."

An added detriment: In the past, when a server went down, it might inconvenience a couple of hundred people. Now, says Pacileo, "If you consolidate 30 servers onto one, and that goes down, you take down thousands of users. The risks are much greater."

Paradoxically, because it's so easy to create new virtual servers on existing hardware, many data centers find the number of logical servers proliferating, leading to higher complexity and greater related support costs.
"It's so easy to get as many virtual servers as you want without jumping through hoops," says Farmer. "People think, 'Why not just create another server?' without thinking through the ramifications."  (article continues)


Follow a Plan
Implementing an effective server consolidation is a four-step process, according to Chris Taylor, director of professional services at Evolving Solutions, a Minneapolis, Minn.-based consulting integrator specializing in server and storage virtualization:

  • Assess "The first step is taking an inventory of each existing server, and the applications or data that reside on them," says Taylor. "You have to understand the dependencies and utilization of all the resources that applications take today so you can anticipate your future needs." A simplified version of this inventory roll-up appears in the sketch after this list.
  • Design Data center managers must design the way the data center will look after the consolidation by specifying which servers will be consolidated together.
  • Migrate IT organizations should begin the transfer of data slowly, with lower-priority applications, and take a phased approach to consolidation.
  • Optimize Finally, data center managers must continue optimizing the server consolidation effort through continuous monitoring of performance and capacity. It's critical to establish a baseline to measure the existing state of server utilization, related hardware infrastructure, and support costs for comparison with post-consolidation metrics. One caveat: "In some cases, data centers end up with more virtual than physical resources, and they are back to where they started in terms of complexity and support costs," says Taylor.
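
Here is a minimal sketch of the inventory roll-up the Assess step calls for, assuming utilization figures have already been collected; the server names and the 30 percent threshold are illustrative assumptions, not recommendations.

```python
# Sketch: flag lightly used physical servers as consolidation candidates.
inventory = [
    {"server": "db-01",   "avg_cpu_pct": 68, "apps": ["orders-db"]},
    {"server": "web-03",  "avg_cpu_pct": 12, "apps": ["intranet"]},
    {"server": "file-02", "avg_cpu_pct": 9,  "apps": ["file-share", "print"]},
]

CANDIDATE_THRESHOLD = 30  # percent average CPU, an assumed cutoff

candidates = [s for s in inventory if s["avg_cpu_pct"] < CANDIDATE_THRESHOLD]
combined_load = sum(s["avg_cpu_pct"] for s in candidates)

print("Consolidation candidates:", [s["server"] for s in candidates])
print(f"Combined average load if merged onto one host: {combined_load}%")
```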

Despite today's challenges, businesses looking to cut costs, increase utilization, and promote organizational effectiveness will inevitably choose consolidation. "As we move from decentralized, department-centric ways to manage computing resources back to a centralized model, reducing the number of servers is the required first step," Sloan says.

Revise Your Technology Refresh Strategy

High on the list of certainties, right after death and taxes, is the knowledge that the computers your company bought just three months ago are already obsolete. But when is the best time to upgrade?

When developing technology refresh strategies, enterprises are caught between the pull of new technology and the realities of budgets and operations. While these competing forces play out differently from company to company -- sometimes from business unit to business unit -- the underlying issues are similar.

Until recently, hardware refresh strategies were often driven by tax depreciation schedules. Many companies were unwilling to replace assets on the books, even if their real value was little or nothing.

During the last five years, with prices falling and computers becoming more powerful, accounting imperatives have become less important. "The asset may not be depreciated, but the business isn't getting any value [from the asset] because it can't take advantage of the newer software it needs to compete," says Mike Yudkin, senior vice president of Design Strategy Corporation in New York City. During the last few years, he has seen more companies expensing their equipment and others moving to shorter, usually three-year, depreciation schedules in order to gain earlier access to new technology.


Multiple Forces Drive Change
The demand for powerful new software isn't the only consideration driving refresh schedules. Not everyone requires up-to-date technology, and in many cases other factors become more important. These include:

  • The failure rate of aging equipment Equipment failure rates exceeding seven to 10 percent per year, depending on warranty coverage, may drive up total cost of ownership. "Even if the item is under warranty, the repair still takes time and attention," says Yudkin.  "But failure rates have fallen as equipment quality has improved, and this may justify extending equipment life."  Adds Bob Bowling, a vice president of desktop operations at a major New York-based financial institution, "You could stretch it into a fourth year of the cycle, even though it's out of warranty -- there's nothing wrong with that."
  • The IT department may not be able to support aging equipment According to Bowling, some of his company's business units are holding on to older equipment they don't want to replace, and his staff has had to scrounge for parts to keep this equipment operational. 
  • Existing applications may be incompatible with new hardware Applications created in-house may have to be redeveloped, tested, packaged and deployed; off-the-shelf applications may have to be upgraded. In both cases, users may need retraining. Refreshing software is expensive both for the IT department that performs the upgrades and for the business unit that must perform acceptance testing, and it is also potentially disruptive to operations.
  • The need to stabilize equipment expenditures Companies often adopt rolling refresh cycles -- replacing a fraction of the fleet each year -- in order to keep the IT budget level from year to year (a simple illustration appears in the sketch after this list). But rolling replacements can lead to incompatibilities -- for example, colleagues on different hardware or software generations may not be able to exchange files.
  • The potential instability of new operating systems "Many companies do not rush into brand-new technology because they want to get the kinks out first," Yudkin says. "The general guideline is not to be the first on the block for a mass rollout."
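
A back-of-the-envelope sketch of why rolling refresh cycles level out spending, assuming a hypothetical 900-machine fleet, a three-year cycle and a flat $1,000 unit price; the numbers are placeholders, not benchmarks.

```python
# Sketch: compare a big-bang refresh with a three-year rolling refresh.
FLEET_SIZE = 900
UNIT_PRICE = 1_000
CYCLE_YEARS = 3

# Big-bang: replace everything in year 1, then nothing until the next cycle.
big_bang = [FLEET_SIZE * UNIT_PRICE, 0, 0]

# Rolling: replace one-third of the fleet every year.
rolling = [(FLEET_SIZE // CYCLE_YEARS) * UNIT_PRICE] * CYCLE_YEARS

print("Big-bang spend by year:", big_bang)   # [900000, 0, 0]
print("Rolling spend by year :", rolling)    # [300000, 300000, 300000]
```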

No One Size Fits All
Because so many factors come into play, it's best not to impose a one-size-fits-all rule on the entire company when you develop a tech refresh schedule. Rather, consider evaluating each business unit separately. Weigh the benefits of the proposed upgrade against the costs and potential risks. Keep in mind that revenue-driving units may need more frequent upgrades than back-office operations.

Evaluate your changes based on lower maintenance costs, higher productivity and faster time to market. But keep in mind that even the most carefully drawn refresh strategy may be overridden by changing business needs. A unit that is likely to be sold or relocated probably won't be given new equipment even if it is theoretically due for a refresh.

In another example, Bowling says that his company's refresh strategy is being impacted by the shift from desktops to laptops. With the narrowing of the desktop/laptop price differential, the growth of mobility and increasing attention given to business continuity and flexible work arrangements, more and more employees are getting laptops. A strong case can be made for not replacing their desktop computers. "Why not have just one asset?" Bowling asks. "If you have to maintain two assets, then all the patches and updates have to be done twice. It's not an effective use of IT."

Continuous Data Protection: Securing Data

It's virtually a no-brainer. Your data is backed up the instant you make a change to it -- any change, no matter how small. You can retrieve it immediately. That's right; no need to hassle with locating and accessing the right backup tape. And the expense is easily justified by the time you save from not having to redo work lost since the system was last backed up using traditional methods -- whether that was an hour ago, last night, or last week.

Given all this, small wonder that enterprises in increasing numbers are implementing continuous data protection, aka CDP.

"CDP is rapidly turning out to be one of the major technologies emerging to protect all-important enterprise data," says Jim Addlesberger, president and CEO of NavigateStorage, a Boston-based systems integrator specializing in data storage and backup.

Of course, IT departments have been doing backup and recovery for more than 40 years, but until recently the standard practice was to make periodic copies of the most important data onto tape. That tape is then detached from the network and stored in a safe place. Most enterprises take it a step further by making copies of those backup tapes and keeping them offsite for extra protection.


Something Completely Different
But CDP takes a different approach entirely. "Backup isn't the issue -- protection is the issue," says Mike Karp, a senior analyst at Enterprise Management Associates in Portsmouth, N.H. "Everyone does backups. The question is, can you recover data in its correct state at the exact point at which it needs to be recovered?"

"Agree," says Benjamin Aronson, president of Aronson & Associates Inc., a Sunnyvale, Calif.-based IT consulting firm. "Recovering data from tape is an arduous, time-consuming chore," he says. "No one likes to do it. Frequently it doesn't work. And you don't necessarily -- by definition of the process -- get back all the data you need." On the other hand, adds Aronson, CDP promises seamless backup, seamless recovery and virtually nonexistent administrative overhead.

Implementing CDP, however, requires a major shift in focus: from thinking about backup as a time-based action to an event-based one, according to Karp. With traditional backup methods and supporting technologies, data copying is initiated according to a predefined schedule -- every week, every night or every hour, depending on the needs of your business. With CDP, by contrast, every change to the data is an "event" that triggers the change to be copied incrementally onto the backup device. Later, if you want to recover a previous version of that data, you scan through the stored versions -- using the recovery interface provided by the CDP application -- and find the precise snapshot you require.
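
A minimal sketch of that event-based idea, with changes represented as in-memory byte strings rather than real filesystem events; the class and method names are purely illustrative, not any product's API.

```python
# Sketch: a toy change journal in the spirit of CDP. Every change "event"
# is recorded with a timestamp, and recovery returns the newest version
# saved at or before the requested point in time.
import time
from typing import Optional

class ChangeJournal:
    def __init__(self):
        self._versions = {}   # path -> list of (timestamp, data)

    def record_change(self, path: str, data: bytes,
                      when: Optional[float] = None) -> None:
        """Called on every change event, not on a fixed schedule."""
        stamp = when if when is not None else time.time()
        self._versions.setdefault(path, []).append((stamp, data))

    def restore_as_of(self, path: str, when: float) -> Optional[bytes]:
        """Return the last version written at or before `when`."""
        candidates = [d for t, d in self._versions.get(path, []) if t <= when]
        return candidates[-1] if candidates else None

journal = ChangeJournal()
journal.record_change("report.doc", b"clean draft", when=100.0)
journal.record_change("report.doc", b"draft corrupted by a virus", when=200.0)
# Roll back to the state just before the infection.
print(journal.restore_as_of("report.doc", when=150.0))  # b'clean draft'
```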


"Say a virus gets into your system, and you don't realize it for several hours," says Farid Neema, president of Peripheral Concepts Inc., a storage-management consultancy based in Santa Barbara, Calif. "You want to go back to that exact point in time before that virus was introduced. The difference between CDP and the traditional periodic backups of data -- even if done every hour -- is that with traditional methods, you can lose a substantial amount of work. This can be serious for many professionals, but for people in the financial services or other transaction-intensive industries it can be disastrous."

And it seems that IT managers are getting it. A recent survey by Peripheral Concepts found that the share of enterprise sites that have implemented CDP grew from 34 percent in 2005 to 43 percent this year, and the survey projects that figure will reach 58 percent in 2007.

And, according to the participants in the study, CDP is easy to justify: 61 percent of respondents reported achieving a return on investment (ROI) on their CDP implementation in less than two years.

Why Wait?
Given the overwhelming arguments in favor of implementing CDP, why are some enterprises still hesitating? There are several reasons, according to Karp. "Most CDP products have been developed by smaller vendors, which makes enterprises nervous," he says. "Moreover, they are stand-alone products, not integrated with the mainstream data backup applications sold by larger vendors." What's required: easy integration of CDP with enterprises' existing backup and recovery products and processes.

Understandably, most IT managers are reluctant to force their employees to learn yet another backup tool. "But when CDP is integrated and accepted as an 'evolutionary' improvement to existing backup-and-recovery processes," adds Karp, "sales of the technology will really take off."

Are Remote Users Threatening Your Security?

The troubling news is that remote access remains as much of a security threat for the enterprise as it ever was. The good news is that companies can diminish the likelihood of a breach with the right management and technology.

"Remote access is probably the key problem to all problems we have within security,'' says Doug Howard, chief operating officer at BT Counterpane, a managed security firm in Chantilly, Va. "10 years ago, most of the IT security concerns we have today didn't exist because you didn't have people logging into remote systems. Today, in order for anyone to do business, you have to let other people inside our systems whether it be partners, remote workers or suppliers."

Securing networks that are accessed remotely is as much a business issue as it is a technology one, security experts say. Preventing security breaches is not just about putting layers of technology on the network but about ensuring that the technology is properly managed.

A Management Issue
While remote access is a manageable threat, it's one that tends to be a low priority for IT, says Jon Oltsik, a senior analyst at Enterprise Strategy Group in Milford, Mass.


"Part of the problem is in the number of remote access methods and the sloppiness of the way things are managed,'' says Oltsik. "Companies tend to have multiple ways they allow remote workers to get onto their systems."

Those workers may have user accounts and laptops they just don't manage properly. IT needs to get a better handle on what is on an employee's laptop. "Managing it is as important as providing the access," says Oltsik. "IT doesn't look at remote access as an end-to-end solution but as a point requirement, and doesn't integrate it into security as a whole."

Both Oltsik and Howard recommend deploying an IDS (intrusion detection system) and/or an IPS (intrusion prevention system). Both examine incoming network traffic for anything suspicious; an IDS alerts a network administrator when it sees something, while an IPS can also block the traffic outright.
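
As a rough illustration of the detection side, here is a toy, threshold-based alert on failed logins -- a stand-in for what commercial IDS products do far more thoroughly. The log format and the threshold are invented for the example.

```python
# Sketch: alert when a single source IP produces too many failed logins.
from collections import Counter

log_lines = [
    "203.0.113.7 FAILED_LOGIN alice",
    "203.0.113.7 FAILED_LOGIN alice",
    "203.0.113.7 FAILED_LOGIN admin",
    "198.51.100.2 LOGIN_OK bob",
] * 3   # repeat to push one source over the threshold

THRESHOLD = 5
failures = Counter(line.split()[0] for line in log_lines
                   if "FAILED_LOGIN" in line)

for source_ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {source_ip} "
              "-- possible brute force")
```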

Companies often assume, incorrectly, that anyone who has been issued a remote access account can be trusted. But Oltsik says no one accessing the network should be trusted until the proper steps have been taken to ensure they do not pose a security threat. Strong auditing and reporting are also often lacking -- a function of having too many different accounts in too many places, which makes it difficult to get an adequate big picture, the experts say. A classic example: an employee changes her name after getting married, and her old account is deleted in one place on the network but not another. If someone knows about that orphaned account, they can exploit it.


Besides redundant accounts, another challenge is that users may be accessing internal applications or Web applications that reside on different servers with different system administrators, says Oltsik. That creates another challenge when trying to audit everything from a single place.

A Balancing Act
Howard cautions that as IT looks for ways to decrease remote access security risks by adding more technology to the mix, the complexity for the end user increases. For example, he says some companies use two-factor authentication, a security technique that combines something you have with something you know: when someone logs in remotely with a user name and password, they must also enter a number generated by a physical token as a second step. That adds complexity for the end user, but it also adds security. Tokens have proven very successful, and most corporations now use two-factor authentication for remote access to the network.
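
The rotating number such a token displays is typically a time-based one-time password. Here is a minimal sketch of the underlying calculation, in the spirit of the widely used TOTP scheme; the shared secret is a throwaway placeholder, not a real credential.

```python
# Sketch: compute a time-based one-time password (TOTP, RFC 6238 style).
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared secret and the clock."""
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the token and the server compute the same code from the same secret.
print(totp(b"example-shared-secret"))
```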

Another potential hassle for the remote worker -- but one both Howard and Oltsik say is critical -- is checking the remote computer's antivirus software before letting anyone onto the network. If its signatures are out of date, the laptop could be infected and spread a virus to the network.

"It's a somewhat tedious process because if users are trying to connect remotely," says Howard, "they're trying to do it fast and if they get a message saying, 'Go update your antivirus software,' it's frustrating for them."

The demand for remote connections will only increase, and the potential for unauthorized access will continue to be a serious concern for enterprise security teams. "We've done a good job of allowing people to access networks remotely, but not at securing that access," says Oltsik. "Remote access has grown organically and now it's time we take a step back and figure out what to do strategically and consolidate as we need to."