Outsource Your Data Center

Rapid change in technology is nothing new. But some things remain constant: CIOs will always be under pressure to prove that IT spending drives business growth and, at the same time, to find ways to deliver more with less. These days, that means figuring out which parts of IT to keep in-house and which to outsource.

One logical conclusion, industry experts say, is to let someone else take over the organization's IT assets, or physical infrastructure. After all, says Laura DiDio, a research fellow at IT firm Yankee Group, based in Boston, Mass., "If you could get rid of an Excedrin headache, you would, wouldn't you?"

Someone Else's Headache
More and more IT departments concede that they don't have the budget to buy an increasingly complex infrastructure, let alone hire and train the technicians to service it. Outsourcing to a hosted environment offers an attractive solution. "The trend started several years ago, so moving the physical assets is not a new concept," says Eugene V. Zakharov, a senior analyst at Technology Business Research, Inc., an IT market research firm in Hampton, N.H. Recently, he adds, "I'm seeing more acceptance in the marketplace."

A hosted hardware model best suits companies that have data-intensive environments. Financial services and manufacturing industries are prime candidates.

The potential advantages of outsourcing the infrastructure include:

  • Reduction in overhead costs, including avoiding capital expenditures. Having someone else host your hardware frees up resources and helps to offload non-core functions. Zakharov says that it also helps companies avoid the cost of "chasing technology" when they need to upgrade their infrastructure. "Now the third-party vendor is responsible for ensuring that your server is running the most recent technology and has the right patches," he says.
  • Reduced staffing issues. When companies outsource, the vendor is responsible for finding the right people and ensuring they are properly trained; turnover and retention become the vendor's problem.

Challenges Remain
For all the potential advantages, the decision to switch to a hosted data center is not a slam-dunk. The stumbling blocks include:

  • Higher costs than anticipated. While the cost of outsourcing a hardware infrastructure varies widely, Zakharov estimates that it could consume over 50 percent of an IT department's annual budget. DiDio sees companies spending between 20 percent and 40 percent of their budgets on maintenance and equipment management, with more for applications and people costs.
  • Vendor performance issues. Glitches often arise over security, unmet cost-savings promises or failure to reduce downtime. "Not all contracts go well," warns Zakharov. "Companies end up taking the assets -- whether people or equipment -- back in-house."
  • Losing direct control. "When you're in a hosted environment, you're at least one step removed and that's enough to make managers hesitate," says Tere' Bracco, a senior research director at Current Analysis, located in Seaside, Calif. "They want to make sure they can actually produce reports, spot problems and take action to keep them in compliance."

IT managers also experience a loss of control when the data crosses the firewall into a hosted environment. Says Bracco, "One of the things they are completely unsure about is how they are going to be able to track access and data bleed. Is that hosted environment really complying with the regulatory requirements of their industry?"

Caveat Emptor
Before you jump to a hosted data center, do thorough due diligence. Is the vendor in a secure financial position? Does it provide quality products and services? Can the vendor show actual results it has delivered? Will it store sensitive information somewhere safe from hackers? How solid are the vendor's business continuity and disaster recovery services? How does the vendor deal with employee attrition?

When customers consider outsourcing, DiDio always advises them to have a liaison in the IT department who works closely with the outsourcer to determine what it is doing, whether it is doing it correctly and whether it is doing it in the most efficient manner. "You need to be kept apprised and abreast of what you're hosting and ask them for the appropriate depth in the reports," she says. "You want to hear about the latest updates in hardware and the corrective action they took if there was a problem."

In the final analysis, bear in mind that just because equipment moves outside the physical walls doesn't mean you relinquish overall management of it. "This is a live, ongoing investment," DiDio says. "Your data is still your primary asset, so you have to take responsibility."

Data on Demand

Ramiro Perez doesn't like to wait around. As purchasing manager for Copart Inc., a $500 million automotive services firm based in Fairfield, Calif., that helps insurance companies process and sell "total loss" vehicles, his job is all about efficiency -- and that's as much about making sure his organization doesn't waste time as it is about controlling costs. That's why last year Perez convinced his IT director to sign up with ExpenseWatch, a hosted Web-based application for managing operating expenses.

"We run 120 sites throughout the United States and Canada," says Perez, "and the people in our organization who have the authority to sign purchase orders and invoices are always traveling. With ExpenseWatch, our executives can access expense reports, contracts, invoices and quotes as they need them from anywhere in the world. It's extraordinarily efficient."

Welcome to the era of on-demand software. Also known as software as a service (SaaS), these applications are enticing enterprises to move critical data from in-house systems to multi-tenant, hosted solutions.

Access Anywhere at Any Time
SaaS offers companies new solutions to the perennial problems of cost, risk and, most of all, access.

Although corporations have long provided employees with remote access to applications and data inside the firewall, on-demand applications make remote access the norm rather than an additional access route. For the most part, there is no client component to install, so new users can sign up swiftly and easily. Employees can use any Internet-connected PC equipped with a browser to gain access to critical data.

"There's no longer any need to go through the hassle of setting up employees to work through a virtual private network (VPN) to gain remote access, since all of the remote data resides online," says Eric Berridge, co-founder and principal of the Bluewolf Group, an IT consulting firm in New York, N.Y., specializing in helping enterprises move to on-demand applications. Bluewolf derives most of its revenues from implementations of Salesforce, an on-demand customer relationship management (CRM) application, and Open Air, a hosted, professional services automation (PSA) solution.

Instead of paying hundreds of thousands or even millions of dollars upfront to install on-premise software, companies pay monthly subscription fees for hosted applications. Furthermore, these fees can be deducted as operating expenses rather than amortized as capital expenditures.
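
To make the cash-flow difference concrete, here is a minimal sketch comparing cumulative spending under the two models. All of the dollar figures are illustrative assumptions for the example, not pricing from the article or any vendor.

```python
# Hypothetical cost comparison: a one-time on-premise license plus annual
# maintenance versus a hosted subscription paid month by month.
# All figures are illustrative assumptions, not real vendor pricing.

UPFRONT_LICENSE = 500_000      # capital expenditure for on-premise software
ANNUAL_MAINTENANCE = 100_000   # assumed yearly maintenance on that license
MONTHLY_SUBSCRIPTION = 15_000  # hosted fee, expensed as it is paid

def cumulative_cost(years: int) -> tuple[int, int]:
    """Return (on_premise_total, hosted_total) after the given number of years."""
    on_premise = UPFRONT_LICENSE + ANNUAL_MAINTENANCE * years
    hosted = MONTHLY_SUBSCRIPTION * 12 * years
    return on_premise, hosted

for years in (1, 3, 5):
    on_prem, hosted = cumulative_cost(years)
    print(f"Year {years}: on-premise ${on_prem:,} vs. hosted ${hosted:,}")
```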

Hosted applications also reduce risk. Many on-premises application installations falter during an implementation process so bug-ridden that it can take years to complete. An on-demand service, by contrast, can have the required functionality customized and ready to use in just weeks -- or even days. And if an on-demand solution doesn't work out, the customer can simply walk away -- no strings attached.

Earlier versions of SaaS often foundered on the issue of security. Companies were uncomfortable with the notion that their information crown jewels were residing on someone else's servers. But as more time passes without data breaches at on-demand vendors, companies are becoming increasingly reassured about the safety of their data. "Security is a non-issue as far as we're concerned," says Perez.

SaaS-y Promises
Hosted applications provide further advantages which, while less obvious, save time and boost an organization's bottom line. These include:

More reliable off-site backup. Many companies are beginning to see that SaaS' built-in off-site data storage makes it a perfect solution for their data recovery and business continuity needs, says Jeffrey Kaplan, managing director of THINKStrategies, a consulting firm based in Wellesley, Mass. "It's one of those unintended benefits that has increased in importance as people become more aware of what SaaS can do," he says.

Compliance with regulatory mandates. Most companies are facing increased regulatory mandates, whether from broadly applicable laws such as Sarbanes-Oxley (SOX) or industry-specific regulations like the Health Insurance Portability and Accountability Act (HIPAA). Most on-demand solutions offer automatic compliance with these regulations, says Kaplan.

Cheaper archiving. After instant, always-available data access, the main reason Copart chose ExpenseWatch was that critical information could be archived less expensively than with an on-premises solution. "Document retention was huge for us," says Perez. "Otherwise, we would have had to invest in servers, bring them in-house and hire personnel to maintain them."

Easy transition to globalization. Because on-demand applications are easily scalable and instantly available in other languages, many international companies are turning to them as well. "An on-premises solution generally requires installing a whole new version of the software. It's much easier to switch between languages and currencies (with a hosted application)," says Berridge. "All it takes is a click of a mouse."

The bottom line: Any company looking to cut implementation costs, reduce network-operating risk and provide near-universal data access should further investigate whether SaaS' promises pay off.

Five Critical Criteria for Server Virtualization

Virtualization is the topic du jour among IT professionals. As CIOs continue to search for more efficient - and cost-effective - ways to manage the data center, the promise of being able to do more with fewer resources is spurring many enterprises to test the virtualization waters.

Server virtualization is defined as a technology that allows you to transform a physical computing resource into a logical one. The technology can be implemented as a single physical machine that operates like multiple servers, or multiple servers that appear as a single machine. Either way, it promises significant benefits, among them better hardware utilization, improved load balancing, more flexible provisioning, lower power consumption and reduced data center personnel costs.
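
As a rough illustration of the hardware-utilization benefit, the sketch below estimates how many virtualization hosts a set of lightly used servers might consolidate onto. The utilization figures and the 70 percent target ceiling are assumptions made for the example, not benchmarks from the article.

```python
# Back-of-the-envelope consolidation estimate: how many virtualization hosts are
# needed once lightly used physical servers become virtual machines.
# Utilization figures and the 70% target ceiling are illustrative assumptions.
import math

server_utilization = [0.08, 0.12, 0.05, 0.22, 0.15, 0.10, 0.30, 0.07]  # avg CPU use
TARGET_HOST_UTILIZATION = 0.70  # leave headroom on each virtualization host

total_demand = sum(server_utilization)
hosts_needed = max(1, math.ceil(total_demand / TARGET_HOST_UTILIZATION))

print(f"{len(server_utilization)} physical servers -> {hosts_needed} virtualized host(s)")
print(f"Average utilization rises from {total_demand / len(server_utilization):.0%} "
      f"to roughly {total_demand / hosts_needed:.0%} per host")
```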

"Virtualization allows organizations to better utilize their resources, as well as have the ability to respond quickly and agilely to changing business needs," says Barb Goldworm, president and chief analyst at Focus Consulting, a Boulder, Colo.-based research firm specializing in systems and storage. "It's a very powerful technology."

But as is often the case with new technologies, the reality may not live up to the promise.

The Right Steps to Real Results
How can you maximize your chances of getting the results you want? Before implementing the technology, there are critical factors to take into account. The top five actions:

  • Analyze which applications are best served by virtualization. Not all applications work well in a virtualized environment. "One of the main challenges is figuring out the types of applications that are most applicable for benefiting from this environment," says Matt Brudzyn, a senior research consultant with the Info-Tech Research Group in London, Ontario. "Transaction-intensive applications like databases that strain the network and storage components of a virtualized resource tend not to be the most effective use of the technology." There are licensing issues to consider, too. Many software vendors still sell licenses based on the amount of hardware used, and "those license fees can rapidly add up in a virtualized environment," he says.
  • Decide what problem you want to tackle first, and advance incrementally. As with any potentially large IT project, you "probably don't want to flip the switch on your entire data center overnight," says Gordon Haff, principal IT advisor with industry analyst firm Illuminata Inc. in Nashua, N.H. It's better to start with a pilot project and proceed gradually, he says. "Go for the low-hanging fruit. Luckily, virtualization lends itself nicely to starting on a relatively small scale."
  • Carefully evaluate the functionality and pricing offered by each vendor. "With so many new players jumping into the market, it pays to take a step back and understand which virtualization platforms make the most sense for your specific needs," says Tony Iams, senior analyst with consulting firm Ideas International in Rye Brook, N.Y. Maturity and the functional capabilities of these products are obviously vital issues, but price is important, too. "Many of these less-established firms are pricing their products extremely aggressively, and you might not need all the functionality offered by the more expensive offerings," he says.
  • Provide your data center staff with adequate training and professional support. These are brand-new concepts that require brand-new skills from your data center personnel. Although in theory employee productivity should improve, workers will need time - and sufficient training - to get up to speed. And, adds Haff, "you may well decide to hire consultants to help develop your in-house skill sets."
  • Motivate users to adapt swiftly to the new computing model. Your users may resist the idea of losing control of the physical servers formerly dedicated to their applications or worry about sharing capacity resources with other departments or lines of business. In such cases, passing on the often-considerable savings that your data center reaps from virtualization can be a strong motivation for users to embrace the new technology. "If you're using a charge-back model, cut your users' costs proportionally with how much money you're saving," says Goldworm.
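
One way to picture Goldworm's chargeback suggestion is a proportional rebate: each department's bill is reduced according to its share of current charges. The department names and dollar amounts in this sketch are hypothetical.

```python
# Minimal chargeback sketch: pass the data center's virtualization savings back
# to departments in proportion to their current charges. All numbers are made up.

monthly_charges = {"Finance": 40_000, "Sales": 25_000, "Engineering": 35_000}
virtualization_savings = 20_000  # total monthly savings the data center realizes

total_billed = sum(monthly_charges.values())
for dept, charge in monthly_charges.items():
    rebate = virtualization_savings * charge / total_billed
    print(f"{dept}: ${charge:,} -> ${charge - rebate:,.0f} (rebate ${rebate:,.0f})")
```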

The bottom line: "It's too early to give a recipe for success" in the fast-changing virtualization field, says Iams. "But the cost savings, increased efficiency and increased agility are proving very attractive."

The New Virtues of Virtualization

After virtualization was sidelined for years by low-cost PCs and servers, its virtues are re-emerging, and its adoption is accelerating among companies of all sizes. "The world will be virtualized in several years," says Sal Capizzi, senior analyst at the Boston, Mass.-based Yankee Group. "In five years, they'll be doing things you wouldn't even think about today."

The most immediate benefit for administrators is simplified storage management. Storage virtualization software lets administrators manage disk arrays of different types, from different manufacturers and scattered across different locations, as if they were a single pool of hard disks. This makes daily tasks, like allocating the correct amount of storage to any particular application, a straightforward operation rather than a complex calculation.
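
The pooling idea can be pictured with a toy model: capacity from several arrays is aggregated, and an application asks the pool for space without knowing which vendor's hardware it lands on. The class names, array names and sizes below are purely illustrative, not drawn from any particular product.

```python
# Toy model of a virtualized storage pool: heterogeneous arrays are presented as
# one pool, and allocations are spread across them transparently.
from dataclasses import dataclass

@dataclass
class Array:
    name: str
    free_gb: int

class StoragePool:
    def __init__(self, arrays):
        self.arrays = arrays

    @property
    def free_gb(self):
        return sum(a.free_gb for a in self.arrays)

    def allocate(self, app, size_gb):
        """Carve space for an application out of whichever arrays have room."""
        if size_gb > self.free_gb:
            raise RuntimeError("pool exhausted")
        remaining = size_gb
        for array in self.arrays:
            take = min(array.free_gb, remaining)
            array.free_gb -= take
            remaining -= take
            if remaining == 0:
                break
        print(f"{app}: {size_gb} GB allocated ({self.free_gb} GB left in the pool)")

# Hypothetical arrays from three different vendors, managed as one pool.
pool = StoragePool([Array("vendor-A", 500), Array("vendor-B", 300), Array("vendor-C", 800)])
pool.allocate("payroll-db", 600)  # the application never sees which arrays were used
```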

Virtualization streamlines storage use as well as simplifying its management. Because applications are no longer mapped to specific physical storage devices, disk space doesn't need to be held in reserve for each of them. The resulting load balancing improves application performance and allows disk space to be used more efficiently.

A less-visible but equally significant advantage: storage virtualization eliminates application downtime. Many tasks that once required applications to be brought down -- swapping out servers whose leases have expired, archiving data, performing backups -- can now be executed without any impact on the application.

More Space for More Storage Functions
Storage managers may find that virtualization increases their choices of disk hardware. Managing a heterogeneous storage environment becomes much simpler in a virtualized environment, where applications no longer interface directly with storage devices. "It wouldn't matter what the hardware is; the application wouldn't worry about that," says Capizzi. Managers are freer to mix and match hardware from different vendors, and to substitute lower-cost for higher-cost disk hardware.

However, it's not yet clear whether virtualization will result in more or less demand for storage. Since storage can be used far more efficiently, Capizzi makes the case that managers will be buying "less hardware, fewer storage arrays, fewer servers." On the other hand, he notes, virtualization "might make it possible to do things that you couldn't do before" -- which would require more storage capacity.

Many SMBs, Capizzi says, were never entirely comfortable with the level of backup they were performing or with their preparedness for disasters. Virtualization now makes it economically feasible for them to seamlessly replicate data to a second site on a daily basis. But new functions like disaster recovery could require more storage capacity than is saved by virtualization's increased efficiency.

NAS' New Appeal
Storage-area network (SAN) and network-attached storage (NAS) technologies will both continue to be used as virtualization becomes more prevalent. Capizzi says that virtualization "doesn't make anything obsolete that's there today."

For the most part, the choice between SAN and NAS is driven by whether an application requests data in files or in blocks. Office applications such as word processing typically store data in discrete files, making NAS structures preferable, while DBMSs typically access data in I/O blocks, giving the edge to SAN storage.
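
That rule of thumb can be captured in a simplified decision helper. The sketch below is an illustration only; the categories and example workloads are assumptions rather than a formal selection methodology.

```python
# Rule-of-thumb storage selector: file-level access favors NAS, block-level or
# I/O-intensive access favors SAN. Purely illustrative, not a formal methodology.

def recommend_storage(access_pattern: str, needs_high_iops: bool = False) -> str:
    if access_pattern == "block" or needs_high_iops:
        return "SAN"   # block-level access or intensive, latency-sensitive I/O
    if access_pattern == "file":
        return "NAS"   # discrete files shared over the network
    return "review with the storage team"

print(recommend_storage("file"))                         # word processing documents -> NAS
print(recommend_storage("block", needs_high_iops=True))  # DBMS replication -> SAN
```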

Because of the way its disk drives are accessed, SAN is often selected for complex applications that require intensive disk use and high performance, such as replicating data from numerous servers, and for applications requiring a particularly high level of reliability. NAS gets the nod for simpler applications -- and it comes with a less-expensive price tag.

To the extent that there is a choice of storage technology, virtualization may provide a boost to NAS.

The reason, according to Farid Neema, president of research firm Peripheral Concepts in Santa Barbara, Calif., is that virtualization addresses NAS' greatest weakness -- its limited scalability. Placing more than five or 10 NAS devices on a network can cause serious management and performance problems. "They each had separate operating systems and separate file systems -- it was a nightmare to manage," Neema says. "But virtualization has completely solved that problem. Now, you see only one file system, managed as one pool of storage."

As a result, while SAN was once the predominant storage technology, virtualization has put NAS almost on a par with it. The emergence of NAS as a solution equal in status to SAN is demonstrated, Neema says, by the recent spate of "consolidated" or "unified" storage solutions that allow storage administrators to manage both SAN and NAS from a single vantage point.

Another indicator of NAS' new appeal is found in Peripheral Concepts' recent survey of end users, which finds that the percentage of data stored on NAS is growing along with NAS virtualization. The number of sites storing more than 30 percent of their data on NAS has increased from 25 percent in 2004 to 44 percent in 2006.

The bottom line: If you are implementing storage virtualization, NAS' increased manageability may increase functionality and cut costs.

Building Better IT SWAT Teams

It's an unpleasant and unavoidable fact of life: IT organizations are almost always in problem-solving mode. And the best way to address unusual, urgent or one-time issues is to create temporary IT "solution teams."

The experts on these SWAT teams - the apt acronym for Special Weapons and Tactics - have the skills and tools to solve problems quickly and efficiently under intense time pressure. But it's important to keep in mind that you are bringing together people who may never have worked as a team before; in larger organizations, they may never even have met.

There are right and wrong ways to build, train and maintain these crucial resources. Here are six ways that work.

Appoint a team leader. Forget everything you've heard about peer management. According to Katherine Spencer Lee, executive director of Robert Half Technology, in Menlo Park, Calif., the self-governing team is a myth. "You've got to have a team leader," she says, "and that person needs to have more than just technical skills." In particular, "good communications skills - both oral and written - are essential."

A leader is vital in cases where the team has been assembled from different departments. Someone acting as a coach or facilitator can help get the group to gel. This might be a role taken by the project manager; it might be formally assigned to someone with those specific skills or it might just be naturally assumed by a team member who has a talent for bringing people together.

Balance individual accountability with a clear chain of command. The best teams don't necessarily duplicate skills, says Dean Meyer, chairman of NDMA Inc., a management-consulting firm based in Ridgefield, Conn. "Each member of the team needs to understand the specific contribution and/or deliverable that he or she is responsible for, and not meddle in other people's domains," he says.

Find - and involve - people from other disciplines. It's rare that a technical problem impacts only IT. "Once you make changes to an application, to the infrastructure or to the network, there are implications for anyone who uses those resources," says Jeff Gibson, vice president of consulting for The Table Group, a management consulting firm based in Lafayette, Calif. "You need to have all the stakeholders represented on that team." In particular, the affected user community must have someone participating in order to arrive at a solution its members will be satisfied with.

Clarify the loyalties of team members. Because the team is temporary, by implication all members of it have other "real" jobs to do. To avoid cases of divided loyalties, members must know their responsibilities on the team and how that time commitment relates to their regular job. "Managers must be exceedingly clear about the priorities," stipulates Gibson. "If someone is expected to give 100 percent to the team, that has to be approved by his or her manager - and everyone must be very clear about how it will work on a practical basis from day to day."

Establish firm boundaries. A temporary team is just that - temporary. The exact charter for its existence, including its timeframe, deliverables and deadline, must be carefully delineated in advance of the first meeting of the team. "If you don't provide an end date, or sufficiently detailed criteria about what constitutes the success of the project, then the so-called temporary team can easily turn into a standing committee," says Johanna Rothman, president of the Rothman Consulting Group, in Arlington, Mass. "If at all possible, make the goal something measurable; certainly make progress toward it something you can track."

Be prepared to disperse the team if necessary. Just because specific deliverables - and viable deadlines - have been established up front doesn't mean that the team must stay together to the bitter end. "Every healthy IT organization has a methodology for evaluating and killing troubled projects," says Raj Kapur, vice president of the Center for Project Management, in San Ramon, Calif. "You need to keep ascertaining that your resources are allocated where they can provide the most value. If things clearly aren't working out, you need to cut your losses and move on."

The bottom line: The best way to get a team acting like a team is for its members to start working together.