Three Powerful Trends in Application Development

Last February, Playboy Enterprises of Chicago made a move that caused barely a ripple in the news: it re-launched the PLAYBOY store and opened a new store aimed at female consumers, the BUNNY shop. The move was the result of a long and difficult decision-making process for Playboy. "They had to decide whether they wanted to market their products or be a technology company," says Richard Lyons of Lyons Consulting Group in Chicago, who worked with Playboy on both launches. "Very interesting question."

Ultimately, Playboy partnered with Demandware of Woburn, Mass., outsourcing its e-commerce development to Demandware's platform. "Playboy gets to focus on marketing and merchandising," says Lyons. Meanwhile, Demandware's future e-commerce innovations should be spurred, at least in part, by the small percentage of sales revenue it collects from the stores.

Three Trends That Will Thrive
To be sure, the software and hardware have been around for years. But the partnership represents one of three business application development trends, all related, that are likely to thrive in the coming years:

1. Software as a Service
Software as a Service (SaaS) can be thought of as a faster, more agile descendant of the application service provider (ASP) model. The concept has been discussed for years -- Microsoft, for example, has long talked about moving its Office suite online. But with the increasing availability and rapid adoption of high-speed Internet access, SaaS's time has arrived for both enterprise applications (Demandware's e-commerce platform, Salesforce's CRM offerings, Google Apps for businesses) and consumer applications. The technology seems poised for rapid growth.


2. Self-Service Service-Oriented Architecture
Another major trend, closely related to the SaaS model, is being built upon service-oriented architecture (SOA). By linking existing chunks of code, each providing a specific function or service, SOA enables enterprises to construct and customize new applications quickly by reusing old code. Rene Bonvanie of Serena Software in San Mateo, Calif., argues that consumer software trends naturally migrate to business and that business users who have had personal experience with Web 2.0 "mash-ups" are looking for the same kind of functionality in the workplace.

For example, a sales department might want to streamline the discount-approval process. By mashing up services from a sales force automation system, an HR system and a financial system, it can create an application that shows both who is authorized to approve a discount and how the discount will affect the company's bottom line. "There are a lot of things on the Web that allow me to easily share and create," says Lyons. "Why can't I create an application for a business process and share it with my colleagues with the same ease that I can get a photo up on Flickr and share it with my friends?"
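The discount-approval mash-up described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual product: the three "services" below are in-memory stand-ins for real sales force automation, HR and financial systems, and all names, deal IDs and approval thresholds are hypothetical.

```python
# Hypothetical stand-ins for the three back-end systems being mashed up.

def sfa_service(deal_id):
    """Sales force automation service: the deal's size and requested discount."""
    deals = {"D-100": {"amount": 50_000, "requested_discount": 0.15}}
    return deals[deal_id]

def hr_service(discount):
    """HR service: who is authorized to approve a discount of this size."""
    return "sales_rep" if discount <= 0.05 else "regional_vp" if discount <= 0.20 else "cfo"

def finance_service(amount, discount):
    """Financial service: the bottom-line impact of granting the discount."""
    return round(amount * discount, 2)

def discount_approval(deal_id):
    """The mash-up itself: one view joining all three services."""
    deal = sfa_service(deal_id)
    return {
        "approver": hr_service(deal["requested_discount"]),
        "revenue_impact": finance_service(deal["amount"], deal["requested_discount"]),
    }

print(discount_approval("D-100"))
# {'approver': 'regional_vp', 'revenue_impact': 7500.0}
```

The point of the pattern is that none of the three underlying systems changes; the new application is only the thin joining layer on top.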

Developers are still needed to build the mash-ups, but development is more likely to take place within a single group rather than the corporate IT department. "IT has been dealing with very complex applications, massive, very big applications," says Lyons. "That leaves many small, simple applications that have [previously] been untouched by IT that can highly benefit from this approach." Serena has just rolled out an application named Composer that enables this process. Among the clients Serena is working with are Intuit, Thomson Financial and BYU.

3. Agile Development and Open Source
Ken Krugler, the CEO of Krugle, based in Menlo Park, Calif., agrees that quick development is increasingly important. "Ten years from now, the idea of having big, monolithic systems [such as SAP and Oracle] is going to seem kind of silly," he says. "It will seem very muscle-bound. There's a trend toward faster and quicker [development] rather than more structure [in development]."

Krugler argues that a key to rapid development is the reuse of code and the building of applications from loosely coupled components. These components may be exposed or created within an SOA environment. But components in the enterprise will increasingly be complemented by open source software, which may provide functionality that in-house code doesn't or may be technically superior to what exists within the organization.

Open source code is unwieldy compared to SOA components. It resides in a multitude of repositories and is rarely as neatly defined. That's where companies like Krugle come in. A leader in search-driven development, Krugle can return more relevant open source results than its few competitors because it is able to isolate meaningful function calls and definitions. "You have your bug-tracking system, your wiki, your source code and other data. How do you tie them all together?" asks Krugler. The Krugle search engine can find code both inside and outside the enterprise and helps developers examine its structure in a way that leads to a better understanding of the code. Although most companies using Krugle presently use it only to search code within the enterprise, Krugler emphasizes that search complements SOA and "complements a lot of existing databases and transactional systems, things that help you with software development already."
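The core idea behind search-driven development -- indexing program structure rather than treating source files as plain text -- can be illustrated in miniature. This is not Krugle's actual algorithm, just a toy sketch: it isolates Python function definitions with a regular expression and matches queries against definition names. The file names and functions below are invented.

```python
import re

# Match Python function definitions rather than arbitrary text,
# so a query hits meaningful program structure.
DEF_PATTERN = re.compile(r"^\s*def\s+(\w+)\s*\(", re.MULTILINE)

def index_definitions(source):
    """Map each function name defined in `source` to its character offset."""
    return {m.group(1): m.start() for m in DEF_PATTERN.finditer(source)}

def search(query, sources):
    """Return (file, function) pairs whose definition name contains the query."""
    hits = []
    for filename, text in sources.items():
        for name in index_definitions(text):
            if query in name:
                hits.append((filename, name))
    return hits

repo = {
    "billing.py": "def calc_invoice(total):\n    return total\n",
    "auth.py": "def check_invoice_access(user):\n    return True\n",
}
print(search("invoice", repo))
# [('billing.py', 'calc_invoice'), ('auth.py', 'check_invoice_access')]
```

A production engine would parse each language properly instead of using regexes, and would index call sites as well as definitions, but the contrast with plain full-text search is the same.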

The More Things Change ...
While the fundamentals of good business software development are unlikely to change, these three trends -- SaaS, self-service SOA and agile development -- are making headway into enterprises and solving compelling business needs. They're likely to be among the most important development trends in the next few years.



Is It Time for Diskless PCs?

Although Oracle, NEC, Wyse and other major vendors have long promoted diskless PCs, the technology has failed to establish a beachhead within enterprise IT shops. This appears to be changing with the release of Microsoft's Vista operating system, which offers virtualization features specifically designed to make diskless PCs -- also known as network computers or "thin clients" -- more attractive. Their significantly lower cost compared to traditional PCs adds to the appeal.

Still, for a variety of technical and organizational reasons, a complete change of mindset is required before enterprise IT shops will fully accept them. "A lot of users don't like giving up the control of having applications and data stored locally," says Rich Seidner, president of Silicon Valley Virtual Inc., an IT consulting firm based in Woodside, Calif. "Making the shift requires enterprises to put user education and training programs, as well as new technological processes, in place."

Diskless Drawbacks
A diskless PC is exactly what it sounds like: a microcomputer without a dedicated hard drive. Instead, data and applications are stored on remote drives, such as those in storage area networks (SANs) kept in the data center. This approach has pluses and minuses.

"While people may understand the total cost of ownership [TCO] savings," says Brian Madden, an independent consultant based in Silver Spring, Md., "moving to diskless PCs requires investing in new software and technologies, and doing things in ways that are completely different from what they've done in the past."

Several challenges have impeded the mainstream deployment of network PCs. For starters, because a network connection is required at all times, this type of hardware tends not to work for companies with a large number of mobile employees who are frequently away from the office, or who habitually take laptops home to work. Likewise, not all applications have the architecture to operate on thin-client hardware. "Diskless PCs require either a server-based computing backend, or some kind of streaming solution," says Madden. Finally, many software vendors have yet to establish licensing arrangements that are compatible with use of diskless PCs.
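The "server-based computing backend" Madden mentions begins at power-on, when a diskless machine must find its operating system over the network. The fragment below is an illustrative ISC DHCP server configuration for PXE network boot; all addresses and filenames are invented examples, not taken from any deployment discussed here.

```
# Illustrative dhcpd.conf fragment: a diskless PC broadcasts at power-on,
# and the DHCP server tells it where to fetch its boot image.
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.200;
  next-server 10.0.0.5;      # TFTP server holding the boot image
  filename "pxelinux.0";     # network bootloader sent to the client
}
```

This dependency is exactly why an always-on network connection is a hard requirement: with no local disk, a client that cannot reach these servers cannot even start.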

Thin Clients Gain Weight
However, in addition to lower TCO, there are three key advantages to moving to diskless PCs that are responsible for increased interest in the technology:

  • Access for all If anything goes wrong with a piece of hardware, a user can just move over to the next cubicle and start right up where he or she left off with no loss of productivity. "You're talking seconds rather than hours for getting a user up and running on new hardware after a crash," says Seidner.

  • Seamless software transitions Because everything is done at the server level, there's no need to install software on separate machines or individually upgrade applications. "Everything is done behind the scenes, without disturbing users. This dramatically reduces hardware maintenance costs and keeps employees productive during major software transitions," says Seidner.
  • Enhanced security One of the biggest security risks for the enterprise network is unauthorized downloads of programs or content from the Internet. That simply can't happen with thin clients. Likewise, because all antivirus and anti-spam protection exists at the server level, IT management needn't be concerned about security breaches on individual machines. Finally, data residing on the server is much easier to back up and protect against loss or theft -- a prime concern when individual users keep important data on their own personal hard drives.

Network PCs have been on the verge of a breakthrough to more widespread use for more than a decade. As the ubiquitous availability of high-speed Internet access and growing interest in virtualization technologies make them increasingly attractive to enterprises, the diskless PC's day may finally have dawned.

Reinforcing WiFi Redundancy

Redundancy is routine in the constant scramble to keep a conventional enterprise network functioning. But the wireless infrastructure is often ignored, leaving enterprises vulnerable to malicious attacks and network failure.

No longer a hot-spot sideshow, wireless is on track to become the primary enterprise network sooner than you might think. "Although the all-wireless enterprise, such as Intel's, is not yet the norm, it is expected to be by mid- to late-2008 and into early 2009," says Chris Silva, an analyst with Forrester Research, based in Cambridge, Mass. Neglecting redundancy now will only cause greater problems later.

Wired + Wireless = One Network
The impending move from wired to wireless is prodding IT professionals to shift gears and build a bulwark of safeguards. Successful transition, however, requires more than simply duplicating key parts of wireless hardware. "A redundant infrastructure means anticipating points of failure for the network and creating ways of preventing the network from failing, no matter what nightmare scenario takes place," advises Stan Schatt, VP of ABI Research in New York, N.Y.

Among the points of potential failures are the hidden recesses of the physical plant. "Redundancy efforts must ensure 100 percent coverage of the building as much as it must ensure constant reliability of the network. You have to account for new obstacles such as building materials, walls, stairwells and corner dead zones," advises Silva.

Paradoxically, despite intensified scrutiny of the wireless infrastructure, IT departments cannot afford to ignore the wired network.

"Even though these networks are separate, wireless users often connect to a wire-line network. A network manager has to be aware of issues associated with the WiFi device and network that could bring down the wire-line network," observes Schatt.

In short, the entire network system -- both wired and wireless -- is mission critical. Yet too many enterprises are missing the message, warns San Jose, Calif.-based Rachna Ahlawat, a research director for wireless networking at Gartner. "There's not much difference in redundancy for wired and wireless. Both must be covered."

Eight Secrets to Achieving Redundancy
How can IT departments super-size their redundancy plans? Consider these eight ways to reinforce your entire network:

  • Intelligent switches (controller) As the wireless LAN (WLAN) industry moves toward a model with the real intelligence centered in the switch or controller, a resilient WiFi network should have additional unused switches to permit active failover. 
  • Battery backup for the switch In the event of a power failure, backup power is needed for the switch and for access points that rely on power over Ethernet (POE).
  • Hot-swappable spares for the switch Most switches now permit hot-swapping of failed circuit boards, allowing quick replacement of components without the need to shut down the entire network segment.
  • Dense access point configuration Today's access points can direct their traffic to replacements if one fails, but if the access points are out of radio range for users, they are useless. Make sure there are sufficient access points for the system.

  • Load balancing Increasing use of voice-over-wireless LAN (VOWLAN) is pushing demand for an industry standard for load balancing. Until voice-over WiFi calls can be recognized and equally distributed among access points, a user could get a busy signal when trying to make a call. To avoid this unacceptable condition, IT departments may need to design their own load balancing solutions.
  • Roaming Users need to be able to roam between subnets without having their connections dropped. The IEEE 802.11r standard that supports this function has not yet been ratified, but most equipment vendors are offering their own proprietary solutions in the interim and promise to upgrade to the final standard when it is approved.
  • Battery-saving features To avoid dropped network connections when a handheld WiFi device, such as a scanner or a WiFi phone, runs out of battery power, most equipment manufacturers offer some version of WMM power saving.
  • Intrusion detection and prevention Network managers must design their WiFi networks to have adequate sensors to identify hackers and knock them off before they bring down the WiFi network.
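The failover and load-balancing items above come down to one policy decision: which access point should serve a client right now? The toy sketch below illustrates that policy in Python -- real WLAN controllers implement this in firmware, and the AP names and client counts are invented.

```python
# Toy access-point selection policy: associate with the least-loaded
# live AP; a failed AP simply drops out of consideration (failover).

def pick_ap(aps):
    """Return the name of the live AP with the fewest clients, or None."""
    live = [(info["clients"], name) for name, info in aps.items() if info["up"]]
    return min(live)[1] if live else None

aps = {
    "ap-1": {"up": True, "clients": 12},
    "ap-2": {"up": True, "clients": 7},
    "ap-3": {"up": False, "clients": 0},   # failed AP: traffic shifts away
}
print(pick_ap(aps))
# ap-2
```

Note that the policy only works if surviving APs are within radio range of the stranded clients, which is why the list above pairs load balancing with dense access-point placement.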

Make sure your enterprise is prepared for the future surge to an all-wireless network. The steps toward achieving wireless redundancy may differ from normal redundancy efforts, but the end goal remains the same. "Most network managers are looking for network resiliency; that means creating a network that is resilient enough not to fail should a component fail or should a hacker attack the network," says Schatt.

Is the Internet the New WAN?

After close to two decades -- a lifetime in information technology -- the traditional corporate wide-area network (WAN) may be headed for the endangered-species list. The usurper: Internet-based virtual private networks (VPNs). Instead of depending on dedicated leased or owned lines, VPNs use a variety of technologies to carry data traffic over public networks in a secure and private manner. And as they are becoming increasingly competitive in terms of cost, flexibility and capacity, organizations of all sizes are taking notice.

However, migrating from leased lines to broadband isn't a slam dunk. IT managers must select VPN vendors with the appropriate levels of support and technology, and make sure the system meets the enterprise's needs for security and performance on an ongoing basis.

Expert Advice on Call
IT departments can choose between the do-it-yourself approach -- installing their own software and building their own VPNs in-house using a standard business broadband connection -- and purchasing VPN services from a carrier.

In general, the availability of in-house expertise is one of the most significant issues an IT department faces in implementing a VPN. IT managers should assess their department's skill sets and decide whether the in-house expertise exists to plan, design, implement and monitor a VPN. If such expertise is lacking, it makes more sense and may be more cost-effective to use a carrier-provided VPN.

In a recent survey by In-Stat, an industry analyst firm based in Scottsdale, Ariz., the key reasons enterprises gave for using carrier-provided VPNs were higher cost-benefit ratios and the desire to converge voice and data services on the same transport facilities. Converging voice with data offers opportunities to save overhead costs, but it can be technically challenging, since changes in data traffic can easily affect the quality of voice service and vice versa. "When carriers provide the IP VPNs, they bring their expertise to the table," explains In-Stat senior analyst Steve Hansen.

But that doesn't mean they do all the work. Even with a carrier-based VPN, the IT department may be in charge of much of the day-to-day administration. Typically, the vendor will be called upon to address issues beyond the scope of the in-house team's capabilities, such as solving difficult problems or assisting with planning. The vendor also usually takes responsibility for fundamental requirements, such as meeting service-level agreements for network availability and mean time to repair.

How Private is "Virtually Private"?
Security can be the decisive factor in choosing between DIY and carrier-based VPNs. Because security administration is complex, some DIY VPN implementations have proved to be less than fully secure. But even though carrier-based VPNs have rarely presented security problems, Hansen argues that "it's dangerous to say that security is not an issue." His advice: Find a carrier that will perform a security audit, offer advice about security vulnerabilities and suggest the best ways of addressing them in each particular situation.

Most carrier-provided VPNs use the IPsec protocol, which operates at the network layer, for security. However, a recent report by Infonetics Research in Campbell, Calif., found that enterprises with heightened security needs were increasingly choosing the Secure Sockets Layer (SSL) protocol, which now accounts for about 21 percent of VPNs.

"SSL allows companies to limit user access to a few specific applications or data sources, and does so at the application layer, which is an improvement in security over IPsec," says Jeff Wilson, principal analyst for VPNs and security at Infonetics Research. Another benefit: SSL can be quickly set up as a disaster recovery solution, decreasing network downtime when other forms of access fail.
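The application-layer character of SSL/TLS that Wilson describes is visible even in ordinary code: the secure channel wraps a single application connection rather than the whole network path. A minimal sketch using Python's standard `ssl` module (the host and path are placeholders, and this is a generic TLS client, not any vendor's SSL VPN):

```python
import socket
import ssl

# Default context: verifies the server's certificate and hostname,
# the same trust decision an SSL VPN portal makes per connection.
ctx = ssl.create_default_context()

def fetch_https(host, path="/"):
    """Open one TLS-protected application connection and issue a request."""
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
            return tls.recv(4096).decode(errors="replace")
```

Because each connection is wrapped individually, an SSL gateway can permit exactly the applications it chooses, whereas an IPsec tunnel at the network layer admits whatever traffic the client sends.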

Ensuring High Performance
Some IT managers are reluctant to move critical services from leased lines, concerned that a broadband IP connection may not provide the level of performance they require. Migrating from a DS3 leased line to a DSL broadband line with equivalent bandwidth may not present a problem, but migrating from a high-capacity fiber leased line might degrade performance unless fiber-based Internet access is available. "You have to make sure what you're moving is compatible with the network performance criteria of the network you're going to," Hansen says.

Bandwidth isn't the only requirement for high performance. In applications where quality of service (QoS) is the driving factor, such as voice, videoconferencing and an increasing number of data applications, Multiprotocol Label Switching (MPLS) is emerging as the preferred standard. Many enterprises are finding that with a VPN based on MPLS, they can more easily meet service-level agreements for metrics like latency (the time it takes for a packet to get from one point to another), packet loss (packets dropped because of congestion) and other equally important components of QoS.
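The bookkeeping behind such service-level agreements is straightforward to sketch: given per-packet round-trip measurements, compute the latency and loss figures the SLA is judged against. The thresholds and sample values below are illustrative only, not drawn from any real contract.

```python
import statistics

def qos_report(rtts_ms, max_latency_ms=50.0, max_loss_pct=1.0):
    """Summarize round-trip samples; None marks a lost packet."""
    delivered = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(delivered)) / len(rtts_ms)
    latency = statistics.mean(delivered)
    return {
        "avg_latency_ms": round(latency, 1),
        "loss_pct": round(loss_pct, 1),
        "meets_sla": latency <= max_latency_ms and loss_pct <= max_loss_pct,
    }

samples = [12.0, 14.0, None, 13.0, 13.0]   # one of five packets lost
print(qos_report(samples))
# {'avg_latency_ms': 13.0, 'loss_pct': 20.0, 'meets_sla': False}
```

Note how a link can pass the latency test and still fail the SLA on loss alone -- which is why contracts spell out each QoS component separately.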

The bottom line: Implementing and managing a broadband-based network is not a trivial task. Before migrating enterprise applications from a dedicated-line infrastructure to an Internet-based VPN, you will need to address issues of security and network performance, and put a team in place -- using in-house or vendor resources, or some combination of the two -- that can set and meet appropriate service levels for the network.

Service-Oriented Architecture Promises Increased Flexibility

Wouldn't it be a great idea if an enterprise's IT resources could be linked and reused, enabling businesses to respond more quickly and cost-effectively to changing market conditions? That's the theory behind service-oriented architecture (SOA) and, in theory, systems developed according to SOA principles promise new levels of flexibility. In reality, however, nothing's ever that simple.

"You don't get SOA-based flexibility merely by building a library of services," says Randy Heffner, an analyst at Forrester Research based in Dallas, Texas. "AT&T learned this with its initial try at a Web services strategy, which took AT&T from having a bunch of disconnected, incoherent integration interfaces to having a bunch of disconnected, incoherent 'standards-based' interfaces."

But if the key to realizing the promise of SOA is not in the most obvious implementation, then where is it?

Right Key, Wrong Lock
Merely collecting services is not enough, says Heffner. To succeed with an SOA strategy, enterprises have to shape their service creation efforts within the context of business design and governance.

IT departments must approach the problem from a business-oriented perspective, rather than a technology-oriented one. "Quality management in SOA is not defined as how many defects per line of code you find, but how well the service meets the business requirements," says Sandy Carter, vice president of SOA and WebSphere strategy at IBM, in Somers, NY.

For example, an enterprise considering SOA must first ascertain whether an existing business process -- such as automated credit checking -- has already been created by another department or another member of the IT staff. If the service already exists, the company can save time and money by avoiding redundant development effort. If, on the other hand, IT finds that the automated credit-check service doesn't exist, it can then develop and test one.

"By identifying a coherent body of services needed for a given business domain, and by designing each service to deliver a clearly scoped, complete business unit of work, you create an inventory of business services that, in effect, provides a digital model of your business capabilities," Heffner explains.

Break Down the Business Functions
"Understanding how the business functions is key to identifying which services will succeed in an SOA environment," says Columbus, Ga.-based Frank Braski, manager of IT Applications Services at insurance giant Aflac. Braski breaks down what he calls "the business taxonomy" into seven data concepts, among which an enterprise "can practically model and define anything," he says. Those concepts are:

  • Relationships
  • Parties (people)
  • Products (things)
  • Agreements
  • Locations (places)
  • Attributes
  • Financial instruments

Unearthing artifacts is essential in every case. Artifacts are the instructions that explain specific business processes and services. This information includes: who owns the service, the performance requirements for the service in production, the current usage of the service, which applications are using the service and who can see the service.
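The artifact list above maps naturally onto a service registry. The sketch below models it in Python; the field names, service names and values are hypothetical illustrations of the kinds of artifacts described, not any real repository's schema.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceArtifact:
    """The artifacts for one service: owner, performance requirement,
    current consumers and visibility."""
    name: str
    owner: str
    max_response_ms: int                     # performance requirement in production
    consuming_apps: list = field(default_factory=list)
    visible_to: list = field(default_factory=list)

registry = {}

def register(artifact):
    registry[artifact.name] = artifact

def find_service(name):
    """The reuse check: does this business service already exist?"""
    return registry.get(name)

register(ServiceArtifact("credit_check", owner="finance-it",
                         max_response_ms=500,
                         consuming_apps=["loan_origination"],
                         visible_to=["finance", "sales"]))

assert find_service("credit_check") is not None   # reuse it, don't rebuild it
```

With artifacts recorded this way, the automated credit-check question from the earlier example becomes a single lookup instead of an email hunt across departments.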

"The key to understanding what business processes exist and how to create new ones is to ensure that the instructions are available in a repository and are easily understood by different parts of the organization, both business and IT," says Carter.

Successful Services Reuse
Once you have identified the services to be reused and shared, consider refining a common service rather than simply duplicating it. "The best way to get to plain-and-simple 'use' instead of the typical so-much-copying-and-pasting 're-use' scenario is to K.I.S.S. -- Keep It Simple and Smart," advises Braski.

"Think Legos," Braski explains. "Given just a handful of basic building blocks with a couple different styles and colors -- presto! You can pretty much create anything you can imagine. Good Lego designers eventually discover a common set of basic 'tricks' or 'patterns' which they can apply time and again to solve problems common to many building challenges. In systems development, services can be as basic as those plastic building bricks."

K.I.S.S. also entails keeping the core definition of the service intact. "Designing a limited set of interactions as messages that are completely abstracted from any implementation or technology underpinning allows other technology-centric configuration and policy elements to change around them," says Sandra Rogers, IDC Program Director, SOA, Web Services and Integration in Framingham, Mass.

As artful as the end product may be, it must still face a reality check. "When several team members revise a document, it soon looks nothing like its original form. A similar situation can occur once a service is in production," says Carter. "This is why IT needs to conduct continuous monitoring to make sure the service meets the established business requirements."