Maximizing Multicore Servers

In the perennial push for greater speed and more robust functionality, multicore servers have been promoted as the solution for the future. But despite their obvious benefits of faster, more cost-effective parallel processing, there are significant trade-offs. Chief among these are pushback from application developers, skill transfer, memory limitations, provisioning and governance. How can IT managers best harness the gain in speed with minimum risk?

Same Song, New Verse
"Multicore has been on the hype cycle recently, but it is not a new technology. Larger server systems, such as many RISC-based systems running UNIX, have had multicore for several years," says Austin, Texas-based David Stirling, lead systems engineer at The Home Depot. "What we are seeing now is a lot of hype, with Intel and AMD on the bandwagon marketing to consumers."

One cause of confusion is the lack of a standard to describe relative performance. "There are some segments of the market that still believe clock rate [in GHz] is the only meaningful information by which processor performance can be gauged," says Doug Rollins, Director of Research and Development, MPC Computers (Gateway) in Nampa, Idaho. "However, with multicore -- in particular, quad core -- processors, this simply isn't true."

Intel and AMD think "CPU numbers" provide a more valid assessment of processor performance. For example, a current Xeon 5300-series quad-core processor may outperform a current Xeon 5100-series dual-core processor under certain workloads despite its slower clock rate. The extra performance can also translate to lower power consumption.


Serial or Parallel Universe?
The market hasn't yet completely made the transition from clock speed to CPU numbers, leaving IT managers confused about how to assess performance. But in the end, it is the workload being processed, not the numbers, that should determine whether you choose a serial or parallel configuration.

"If a set of processing can only be done serially, then having multiple cores does not help," says Chicago, Ill.-based Carl Franklin, Solution Architect for Triton-Tek. "If work can be done in parallel and the application or service can delegate its work in parallel, then multicores give a better work per space and work per watt advantage."

Keep in mind, however, that while multicores technically pack more processing power into the same heat footprint, i.e., more work per BTU of heat, not all of that processing power is accessible. "Certainly [multicores do] not double the effective processing power, as some marketing folks claim," says Stirling. "This is because much of the operating system and application code today is not designed to take full advantage of the additional cores. This is especially true of multicore desktop/laptop systems, but it applies to some server applications as well."
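Amdahl's law makes Stirling's point concrete: if only part of a workload can run in parallel, the serial remainder caps the overall speedup no matter how many cores are added. The short Python sketch below works through the arithmetic; the 70 percent parallel fraction is an assumed figure, chosen purely for illustration.

    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        """Theoretical speedup when only part of the work parallelizes."""
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / cores)

    # Assume (for illustration only) that 70% of the code's work can be
    # spread across cores; the rest is inherently serial.
    for cores in (1, 2, 4, 8):
        print(f"{cores} core(s): {amdahl_speedup(0.70, cores):.2f}x speedup")
    # Two cores yield roughly a 1.54x speedup and four cores roughly 2.11x --
    # well short of the doubling suggested by the marketing.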

"The key best practice is to size the processor to the workload. With the advent of dual socket designs (like the MPC NetFRAME 1640, 1740 and 2740), one has a wide choice in CPU selection, both in terms of type, speed and quantity," says Rollins. "Not 'over buying' is as important as not 'under buying' -- and processor numbers can help with this choice." (article continues)


Besting the Beast
To maximize the power of multicore server deployments, Jay Bretzmann, Manager of Product Marketing at IBM Systems in Research Triangle Park, N.C., offers the following advice:

  • Break up performance bottlenecks: Look for an architecture that can maintain a balance in system resources to avoid performance bottlenecks. IBM's Enterprise X-Architecture offers customers the ability to independently scale processors, memory and I/O resources, which helps it adapt to scalable software workloads such as database processing, enterprise applications (SAP, Oracle, etc.) and server consolidation. IBM's x3650 2P server also offers more memory capacity than comparable servers from other vendors.
  • Go virtual when necessary: When application software doesn't support more than four concurrent threads of execution, adopt server virtualization technology in production environments in order to harness the power of the cores.
  • Consolidate databases: Use software like Microsoft SQL Server 2005, whose database consolidation features let you host multiple smaller, underutilized databases on one larger server under a single SQL Server license.
  • Synchronize the software: To ensure peak performance, verify that the operating system and application software are written to take advantage of the multiple cores (see the sketch after this list).
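As a minimal illustration of that last point, consider application code that sizes its worker pool to however many cores the operating system reports, rather than assuming a fixed count. The CPU-bound checksum task in the Python sketch below is a hypothetical stand-in for real work.

    import os
    import hashlib
    from concurrent.futures import ProcessPoolExecutor

    def checksum(blob: bytes) -> str:
        """Hypothetical CPU-bound task standing in for real application work."""
        return hashlib.sha256(blob).hexdigest()

    def process_batch(blobs):
        # Size the pool to the cores the OS reports, so the same code
        # takes advantage of a dual-core or a quad-core server alike.
        workers = os.cpu_count() or 1
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(checksum, blobs))

    if __name__ == "__main__":
        batch = [bytes([i]) * 1_000_000 for i in range(32)]
        print(process_batch(batch)[:2])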

The bottom line: Multicore server deployments can pack a wallop in attacking some of the more bedeviling enterprise tasks, but how much of a wallop depends on how well you aim the punch. "You will definitely see improvement by deploying multicore chips as opposed to the previous generation single core chips," says Stirling. "But, as in any capacity management exercise, you should map your load to the actual performance of your application on the processors in question, not the marketing hype."

Is It -- Finally -- Time for the Grid?

Grid computing would seem to be a simple concept: computers are linked together and the machines share resources such as CPU cycles, RAM and data storage capabilities. Free resources on one machine can be tapped by other users on the grid; in return, a machine in need of additional computing capacity can utilize free resources available on other parts of the grid.

"In some senses, the idea is a very old one," says Ian Foster, the director of the Computation Institute at the University of Chicago's Argonne National Laboratory in Chicago, Ill., and considered by most in the field to be the "father of the grid." "When the Internet first appeared in the late 1960s, some people talked about how you might be able to create computing utilities. But it was with the emergence of high-speed networks in the early 1990s that people really started looking seriously at how you could link systems together."

Not surprisingly with such an amorphous concept, there's some confusion about a clear definition of the term "grid computing." Foster suggests that the best way to think about it is as a set of technologies that closely dovetail with other similar sets of technologies. "It's really a continuum from the tightly-coupled parallel machines, like IBM's Blue Gene, to clusters, and then collections of clusters and, in the sciences and some large companies, national-scale grids that link clusters and other systems at many sites."


Economy and Scale
Grid computing works especially well for repetitive jobs -- calculations that do not depend on one another and so can be run independently rather than as a single tightly coupled parallel job. Purdue University (West Lafayette, Ind.) CIO Gerard McCartney, who oversees a grid of 6,001 Linux, Windows, Solaris and Macintosh machines that talk to each other using the University of Wisconsin's Condor grid middleware, says one Purdue faculty member grabs images of viruses from an electron microscope, and then processes the images using the grid. "He could do this on a mainframe that costs millions of dollars. Our way, he essentially does it for free."

While cost savings are an important argument in favor of grid computing, the time element may be even more important. McCartney, who contends that most computers, even when in use, utilize only 20 percent or so of their capacity, cites the case of a materials scientist at Rice University who uses the Purdue grid to analyze zeolite structures. In one day, says McCartney, he may use a CPU year of computation running "fairly small calculations that have to happen thousands of times." Midway through 2007, he'd already used three million hours of computing time, at very little cost. "These are all waste cycles. That's the point," says McCartney.
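The pattern behind those "fairly small calculations that have to happen thousands of times" is an embarrassingly parallel sweep: many independent tasks that never need to talk to one another. The Python sketch below shows that task structure; on a real grid the middleware (Condor, for instance) would farm each task out to an idle machine, whereas here a local process pool stands in for the grid purely for illustration, and the model function and parameter ranges are hypothetical.

    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    def simulate(params):
        """Hypothetical small calculation; a grid would run one of these per idle node."""
        temperature, pressure = params
        return temperature * 1.5 + pressure ** 0.5  # placeholder model

    if __name__ == "__main__":
        # 10,000 independent parameter combinations -- no task depends on
        # any other, which is what makes the job grid-friendly.
        sweep = list(product(range(100), range(100)))
        with ProcessPoolExecutor() as pool:  # stand-in for the grid middleware
            results = dict(zip(sweep, pool.map(simulate, sweep, chunksize=100)))
        print(len(results), "results computed")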

There are few infrastructure requirements for grid computing. Networking capability, of course, is essential, as is the middleware that enables the machines to distribute tasks intelligently. Foster's Argonne Laboratory, with support from IBM, developed the open-source Globus software, which, he says, "addresses security, data movement, job submission, data replication" and other challenges for large grids. Other popular middleware offerings include the Sun Grid Engine (SGE) and Condor. This software is platform-independent and has the advantage, says McCartney, of being a "lightweight installation -- you're not hiring a cadre of systems programmers to make this happen."


Finally, emphasizes Cheryl Doninger, the R&D director of the enterprise computing infrastructure unit at SAS in Cary, N.C., the software actually using the grid -- what's visible on the individual user's desktop -- has to be grid-enabled. She says that SAS added grid capabilities to its software offerings a year and a half ago.

Expanding the Grid
Grid computing is starting to catch on in many enterprises. It is already being used extensively in the financial services, oil and gas, insurance and pharmaceutical industries. Meanwhile, says Doninger, telcos, the travel industry and the entertainment sector are quickly adopting the technology.

"You can't really pick an area [of the enterprise] that's amenable to grid computing," says McCartney. But, he adds, you can isolate the best uses for the grid. "Science applications work nicely in this environment, Parameter sweeps, statistical analyses and digital rendering also work well."

Any area of the enterprise that needs lots of computing power, and fast, can benefit. Payroll departments can use the grid to help churn out thousands of paychecks overnight, and then forget about the grid the rest of the time. Programmers at SAS use the grid at night -- when they have to quickly process the latest source code builds.

With relatively small initial expenditures, grid computing can enable enterprises to realize extraordinary gains in computing power and efficiency. And in the medium and long term, they can save money. "Grid can run on low-cost commodity and open source operating systems," says Doninger. "We talk [to our customers] about savings, and a lot of times that's why they're starting to look at grid -- because of the hardware savings it can bring."

SOA Is Looking A-OK

The clamor for convergence is reaching a deafening roar as IT departments seek new ways to make scarce budget dollars support broad and sweeping business changes. The more flexible and adaptable the technology, the better the play, the thinking goes. Better still: the technology becomes invisible, connecting multiple uses and applications while leaving the user's focus on the business at hand rather than on the interface commands.

Legacy integration technologies intended to improve time-to-market or component re-use, such as CORBA and DCOM, were soon outpaced. "Previous strategies were very rigid and brittle," says Peter Kastner, research vice president of the Aberdeen Group of Boston, Mass. "Any small change could break things." In their place arose Service Oriented Architecture (SOA), an approach to designing business applications built around the concept of services.

What's So Different?
"Past integration strategies focused primarily on simplifying the effort required to make large applications interoperate," says Larry Fulton, senior analyst at Forrester Research based in Cambridge, Mass. "SOA, on the other hand, is about creating or repackaging software components in a way that makes the components themselves easier to use by new or existing systems."

SOA is differentiated from previous technologies in two ways, says Fulton. First, SOA designs around business process steps, achieving a more naturally reusable granularity of function. Second, the industry has learned a lot about what it takes to support the adoption of a new approach, and it is facing those challenges head-on.
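To make Fulton's first point concrete, the sketch below wraps one hypothetical business-process step (checking whether a customer's credit is approved) in a small HTTP/JSON service built only on Python's standard library, so any new or existing system that can make an HTTP call can reuse it. The endpoint path, field names and approval rule are hypothetical, chosen only for illustration.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical business rule; a real service would consult the system of record.
    def credit_approved(customer_id: str) -> bool:
        return not customer_id.endswith("9")  # placeholder logic

    class CreditCheckService(BaseHTTPRequestHandler):
        def do_GET(self):
            # One business-process step, reachable by any caller that speaks HTTP.
            if self.path.startswith("/credit-check/"):
                customer_id = self.path.rsplit("/", 1)[-1]
                body = json.dumps({"customerId": customer_id,
                                   "approved": credit_approved(customer_id)}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), CreditCheckService).serve_forever()

A consumer simply requests /credit-check/12345 and receives a JSON answer, without knowing anything about the application that implements the rule -- the kind of reusable, business-step granularity Fulton describes.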


"SOA, SOA governance or the processes by which SOA decisions are made consistently and effectively are as much a part of the industry discussion as the approach and the related technologies," says Fulton. "This did not happen with earlier attempts."

According to Kastner, early adopters have learned many valuable lessons about implementing SOA. "The complexity has not gone away even though SOA reduces the number of lines of code the organization has to deal with, and the resulting applications are more user friendly," he says. "Companies further down the road with SOA implementation are telling us that it is best to retain outside services to curb the training curve and that they wish they had done far more planning on the front end."

"SOA is widely expected to lower the 40 percent on average of the IT budget currently dedicated to integration. It provides more agility and frees dollars for other IT needs," says Kastner.

Making Connections
SOA is showing yet another major plus: it fits well with the push towards IP-based convergence in the wired and mobile communications sectors.

"IP-based convergence offers the same benefits to SOA as it does to other IT efforts," says Fulton. "Just as IP-based convergence brings together a host of capabilities into a single networking model, those same capabilities can be more easily leveraged when they are bundled into easily integrated SOA services, extending the convergence into the application and architecture levels."

According to Forrester's most recent survey data, strategic adoption of SOA and reported measurable benefits continue to grow among North American and European businesses. "All of which demonstrates that unlike earlier strategies, SOA is actually delivering on the promise to improve IT solution delivery," says Fulton. "The most powerful applications of SOA include the creation of services that closely correspond to business units of work."