Implications of Software + Services Consumption for Enterprise IT
by Kevin Sangwell
Summary: Many articles in this issue use the term Software + Services (S+S) when referring to client (desktop, browser, and device) and server-based applications that consume one or more Internet (cloud) services. While this model shares some characteristics with Software as a Service (SaaS), the differences are significant for enterprise IT.
This paper contrasts the challenges of adopting S+S versus SaaS; it will become clear that consuming a well-defined building block or attached service is less challenging for enterprises than consuming a finished service.
Today, the majority of applications delivered as a service over the Internet (that is, SaaS) are aimed at the consumer and small business markets. The business monetization model used, whether subscription- or advertising-funded, is largely that of the Long Tail: selling a little of something to many, many customers through a scalable distribution channel, as described by Chris Anderson (see Resources).
However, enterprise demands are significantly different from the demands of these consumer and small business segments, so certain assumptions supporting Long Tail economics and service delivery (and consumption) just do not apply in an enterprise context. For example, consumers don't have to worry about compliance, and Enterprise Application Integration (EAI), with all that it implies, is largely irrelevant to small businesses.
Thus considering software services from an enterprise perspective raises a number of questions. Who owns the data? What is the Service Level Agreement (SLA)? Can internal identities be extended outside the firewall to access cloud services? Are there regulatory implications?
Roughly 70 percent of IT budgets go to maintaining existing systems, leaving around 30 percent for new solutions. While the cost of hardware and software is falling, the cost of management and support is growing. Businesses are expecting more from IT than ever before, partly due to recovering confidence following the dot-com bust, as well as growing demand for Web 2.0-style capabilities inside the firewall.
At the same time, corporate IT is suffering a crisis of perception, as evidenced by the feedback over Nicholas Carr’s 2003 essay, “IT Doesn’t Matter” (see Resources). Business leaders are often frustrated that internal IT projects take months and significant investment to provide benefits that appear readily available on the Internet. Users experience search, collaboration, and publishing capabilities on the Internet that are far superior to many enterprises’ internal capabilities. An increasing number of applications are being made available as services on the Internet, giving the business an alternative IT sourcing model.
In this paper I discuss the implications of consuming external software services on existing corporate IT infrastructure and operations, and compare the challenges of the SaaS model with the S+S model when consuming any line-of-business application that has broad adoption across the company. I use the term "software services" to refer to the services in both SaaS and S+S models because we can consider software delivery as a continuum, with traditional in-house hosted software, built or bought, at one extreme, and finished services delivered over the Internet (that is, SaaS) at the other. The hybrid, in-house software plus cloud services, spans the middle of the continuum (that is, S+S). Figure 1 illustrates this software delivery continuum. In this paper,
· Traditional software refers to applications installed in the infrastructure accessed exclusively by internal users.
· Building block services provide low-level capabilities that can be consumed by developers when building a composite application. These services exist in the cloud.
· Attached services provide a higher level of functionality compared with building block services. Applications leverage attached services to add functionality.
· Finished services are analogous to full-blown applications, delivered over the Internet using the SaaS model.
· S+S refers to the use of applications that consume attached services or that are built with building block services.
Figure 1: Software delivery continuum and software services taxonomy
When considering an IT infrastructure or application sourcing model, it is important to understand the business objectives. For example, outsourcing is driven by the need for cost efficiency, transferring the cost and risk of delivery of an existing, mature application to another party in exchange for contracted payments. In contrast, adoption of a software service satisfies a business need, such as more effective customer management (in the case of CRM). From a business manager’s perspective, SaaS appears to be the best of both worlds: the business benefit is realized at a cost proportional to use (or even free), with no additional or up-front capital investment in IT resources. Although the business is best placed to determine how well the service solves the business problem, unless IT is included in the discussion, many of the wider implications for enterprise IT—hidden costs of software services adoption—will be missed.
When adopting software services, one set of challenges becomes the responsibility of the provider: service delivery and service support. However, IT will face an additional, new set of challenges in adopting the new model (Figure 2).
Figure 2: Software services consumption adds new challenges
Ignoring these new challenges is not an option. As we will see, there may be direct and indirect costs, resource implications, and compliance issues. In other words, the adoption of a software service means a hybrid procurement/integration project for internal IT. Integration needs to be considered in three broad areas:
· Identity and Access Management
· Data
· Operations
Regulations and legal obligations should also be considered.
Identity and access management is a perennial problem that affects many aspects of IT, from help desk costs through user productivity to data security. It's also one area where enterprise IT should provide a better user experience than the Internet—yet most enterprises have dozens of user directories. Analyst organizations frequently state that, on average, password resets account for 30 percent of help desk calls.
The addition of a finished service, with its corresponding external directory, requires extensions to the provisioning and deprovisioning process, even if this is a human process. For example, when an employee leaves, many organizations struggle to deprovision or disable internal accounts in a timely manner; the risk of exposure in the case of an external application or service is significant because there is no corporate firewall preventing the user from accessing the application and its data.
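The extended deprovisioning step can be sketched as follows. The directories and the disable interface here are toy stand-ins, not a real provider API; in practice the external call might be SPML, SCIM, or a vendor-specific endpoint, and possibly a manual process.

```python
from dataclasses import dataclass, field

@dataclass
class Directory:
    """Toy stand-in for a user directory (internal AD or the provider's)."""
    name: str
    enabled: dict = field(default_factory=dict)  # username -> account active?

    def disable(self, user):
        if user in self.enabled:
            self.enabled[user] = False

def deprovision(user, internal, external_services):
    """Disable the leaver's internal account AND every external service
    account. With no corporate firewall in front of a finished service,
    a forgotten external account remains reachable from anywhere."""
    internal.disable(user)
    for svc in external_services:
        svc.disable(user)

# Example: one internal directory, two subscribed finished services.
ad = Directory("corp-AD", {"jsmith": True})
crm = Directory("crm-saas", {"jsmith": True})
payroll = Directory("payroll-saas", {"jsmith": True})
deprovision("jsmith", ad, [crm, payroll])
```

The point of the sketch is the loop: every subscribed service adds one more place the leaver process must touch.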
Neither Active Directory (AD) nor metadirectory products such as Microsoft Identity Lifecycle Manager solve this particular problem: AD is proprietary and its trust model is not sufficiently granular, and metadirectories are not widely deployed and don't operate in real time. Something standards-based is needed. Federation offers a set of capabilities that make it a particularly good fit for integration with an external application or service. It is loosely coupled yet operates in real time rather than via a schedule, simplifying provisioning and deprovisioning. Federation trust relationships have a high level of granularity, allowing the consumer organization to expose only a subset of its directory (based on rules), with real-time mapping of attributes to "claims," reducing the need for internal directory changes. (See Figure 3; for more on Federation and ADFS, see Resources.)
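The rule-based "expose only a subset, mapped to claims" idea can be sketched in a few lines. The attribute and claim names below are illustrative, not taken from any particular federation product:

```python
# Minimal sketch of claims mapping: the consumer organization's rules
# decide which directory attributes are exposed to the external service,
# and under what claim name. Anything without a rule is never exposed.
ATTRIBUTE_TO_CLAIM = {
    "mail": "emailaddress",
    "department": "costcenter",
    # "employeeSSN" deliberately has no rule: it stays inside the firewall.
}

def issue_claims(directory_entry):
    """Return only the mapped subset of the user's directory attributes."""
    return {claim: directory_entry[attr]
            for attr, claim in ATTRIBUTE_TO_CLAIM.items()
            if attr in directory_entry}

user = {"mail": "kim@example.com", "department": "finance",
        "employeeSSN": "000-00-0000"}
claims = issue_claims(user)
# claims now contains emailaddress and costcenter, but no SSN.
```

The mapping table, not the external service, is where the organization controls exposure; changing a rule takes effect in real time, with no schema change on either side.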
In contrast to SaaS, S+S applications may have a back-end service running inside the firewall, in which case a single enterprise or proxy identity could be passed to the service in the cloud. Identity integration is a common capability of many enterprise applications, so the back-end service may integrate with Active Directory or generic LDAP directories out of the box.
Figure 3: Federation providing identity integration
Authorization and access management is another aspect that needs consideration. Many applications provide capabilities that differ according to the user. For example, an expenses application may allow a manager to authorize claims up to a value of $4,000; claims above $4,000 may need director approval. Today, many applications inside the corporate firewall set permissions against individual users; in effect, the application contains a mapping between the user and their authorization. However, numerous organizations have started to move away from user-based authorization to role-based authorization. The benefits are clear: lower administration costs, consistent authorization within a role, and transparent compliance.
When a finished service has multiple levels of authorization, there are two options: Task someone inside your organization with manually mapping users to authorization levels; or extend the internal role-based model to the external application. The former often falls to the help desk; a hidden cost of adopting finished services. The latter, mapping an internal role to the external application, could be achieved through Federation; for example, membership of an Active Directory group could be mapped to a name/value pair in a cookie that is used by the external application.
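The group-to-claim option above can be sketched as follows; the group names, the claim name, and the dollar limits are illustrative assumptions, not a real directory schema:

```python
# Sketch of extending role-based authorization to an external expense
# application: membership of a directory group maps to a name/value
# claim the external application understands.
GROUP_TO_APPROVAL_LIMIT = {
    "Expense-Managers": 4000,    # may authorize claims up to $4,000
    "Expense-Directors": 50000,  # required for claims above $4,000
}

def approval_limit_claim(user_groups):
    """Return the ("approval-limit", value) pair for the user's highest
    expense role, or None if the user holds no expense role at all."""
    limits = [GROUP_TO_APPROVAL_LIMIT[g] for g in user_groups
              if g in GROUP_TO_APPROVAL_LIMIT]
    return ("approval-limit", max(limits)) if limits else None
```

Because the mapping lives with the internal directory, moving a user between roles updates their external authorization without anyone manually editing accounts in the finished service.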
Authorization in building block or attached services (the hybrid Software + Services) could follow the Federation model, and indeed there are advantages to doing so: authorization is tightly managed at the service provider, which may help achieve compliance. The more flexible model of S+S means that authorization could instead be carried out by a local server before the request is made to the external service. Having local control over, and enforcement of, policy means the organization can react to business changes more quickly. It also simplifies integration; the enterprise can make changes to the configuration of the on-premise part of the vendor application to suit its environment.
Summary of Identity and Access Management Implications
· If the external service depends on user identity (highly likely for SaaS, possible for S+S), your provisioning and deprovisioning processes need to be extended. Integration could be via technology or a manual process, both of which have cost implications.
· Service provider user account policies need to be evaluated against your internal policies (for example, password complexity, lock-outs, and so on).
A line-of-business application is unlikely to exist as an island, even when it is externally sourced. A good example is payroll. The payroll service provider needs raw data—employee name, pay amount, and so on—to process payroll monthly. In this example, the data could be supplied via a simple extract of the relevant information from the internal HR system. Clearly, e-mailing an XLS or CSV file to the payroll provider is unlikely to provide the level of security/confidentiality needed, so another form of data exchange is required, and IT will be expected to provide this. In other words, IT faces an Enterprise Application Integration (EAI) project.
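The extract itself is the easy part, as this sketch shows; the field names and records are illustrative. The real work is the secure channel (SFTP, AS2, or an HTTPS endpoint, never plain e-mail) and agreeing the format with the provider:

```python
import csv
import io

def payroll_extract(hr_records):
    """Pull only the fields the payroll provider needs from internal HR
    records and serialize them as CSV. Everything else (addresses, and
    so on) stays inside the firewall."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["employee", "monthly_pay"])
    writer.writeheader()
    for rec in hr_records:
        writer.writerow({"employee": rec["name"],
                         "monthly_pay": rec["salary"] / 12})
    return buf.getvalue()

hr = [{"name": "K. Example", "salary": 60000, "home_address": "(internal)"},
      {"name": "A. N. Other", "salary": 36000, "home_address": "(internal)"}]
extract = payroll_extract(hr)
```

Filtering to the minimum fields at extract time is also a data-protection measure: the provider never receives attributes it has no business need for.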
From an EAI perspective, building block and attached services should be straightforward to integrate; after all, they’re designed to be consumed by developers as extensions of a local application. One example of an attached service is Exchange Hosted Services for Filtering (for more information on Exchange Hosted Services, see Resources). Integration in this situation is twofold:
1. DNS: MX record changed to point at the service provider (Microsoft in this case)
2. Firewall rules changed to allow inbound SMTP from only the service provider (which increases security).
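The effect of the second change can be expressed as a simple allow-list check; the address ranges below are documentation placeholders (RFC 5737 blocks), not the provider's real published ranges:

```python
import ipaddress

# Sketch of the firewall rule: inbound SMTP is accepted only from the
# filtering provider's published address ranges, so spammers cannot
# bypass the filter by connecting straight to the enterprise MX.
PROVIDER_RANGES = [ipaddress.ip_network("203.0.113.0/24"),
                   ipaddress.ip_network("198.51.100.0/24")]

def accept_smtp(source_ip):
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in PROVIDER_RANGES)
```

This is why the rule change increases security: all mail now arrives pre-filtered through the provider, and direct-to-MX delivery attempts are refused at the perimeter.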
As infrastructure is the focus of this article, I won’t delve into further detail with respect to EAI. However, we will look at the infrastructure implications for data from a few other perspectives:
· Firewall rules and filters
· Encryption and signing
· User view
To understand the firewall implications of consuming finished services, we need to investigate which internal applications will integrate with the software service and the form of integration. Is it one internal application that needs to be integrated or several? What firewall rules need to be created to allow publishing and traffic flow? If the applications are exchanging XML, does this need to be validated at the firewall, and can the firewall provide this capability? Does the data need to integrate with some form of workflow, and if so, how does this workflow span the internal/external infrastructure? If the integration occurs over HTTP (SOAP, for example), there may be few implications on the firewall beyond the creation of rules.
With building block and attached services, there's likely to be some local back-end infrastructure, which naturally becomes the focal point for data integration and security. Indeed, some SaaS vendors are realizing that enterprises will be more willing to subscribe when their data is stored inside the corporate firewall, so they're evolving their finished services toward the S+S model by installing an appliance inside customer data centers.
Encryption and signing
The most effective way of exchanging encrypted data across the Internet is to adopt certificates from a public certificate authority. If the certificates get installed on clients, a Public Key Infrastructure (PKI) project is required, with all its implications, such as certificate life cycle management and publishing the certificate revocation list. Not a trivial undertaking, but once completed, it will enable other capabilities within the infrastructure, such as signed e-mail and Smart Card authentication.
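One small but recurring piece of that certificate life cycle management can be sketched as a renewal-window check; the certificate names and dates below are invented for illustration:

```python
from datetime import date, timedelta

def due_for_renewal(cert_inventory, today, window_days=30):
    """Flag certificates expiring within the renewal window so they are
    reissued before an encrypted exchange with a provider fails."""
    cutoff = today + timedelta(days=window_days)
    return [name for name, expires in sorted(cert_inventory.items())
            if expires <= cutoff]

inventory = {
    "payroll-exchange": date(2008, 3, 1),   # cert securing the HR extract
    "smtp-tls":         date(2009, 6, 1),   # cert for mail-routing TLS
}
renewals = due_for_renewal(inventory, today=date(2008, 2, 15))
```

An expired certificate on an external integration typically fails hard and without warning, which is why even this trivial inventory discipline is part of the "not a trivial undertaking" above.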
Of course, with a local back-end infrastructure in S+S implementations, encryption and signing is likely to be far simpler (fewer end points).
Another perspective on data is the view of the user. Rightly or wrongly, many business people want to continue working with the tools they are familiar with—the Microsoft Office applications. Consider the number of times a new application fails to reach its full potential because the business users insist on extracting the data and using Excel for day-to-day management. Rarely does this data get back into the application. A number of independent software vendors have started to build their application clients as Office Business Applications, essentially using Office as the application platform. This generally results in faster adoption and lower training overhead, but it does present IT with deployment and maintenance issues. Points to consider: Does the software service provide all of the capabilities needed by the business, or will users need to extract the data to perform manipulation/analysis in a local tool or system? Will the adoption of the software service result in yet more departmental applications built in Access and Excel?
Summary of Data Implications
· Analysis will be needed to determine data integration and ETL needs.
· Firewall rules may be needed to allow integration.
· The firewall may need updating to provide filtering for application data (for example, XML Schema validation).
· Purchase of certificates or implementation of PKI may be needed to support authentication, encryption and signing requirements.
The problem of operational integration when sourcing an application or service externally is somewhat incongruous: After all, a key benefit of being a consumer is that operations are the responsibility of the provider, yet internal operations and processes will be impacted by the external application or service in several areas, the most significant being help desk and user training.
Many consumers have been frustrated by inefficient and confusing call centers. To avoid similar problems, internal help desk teams should be aware of new applications and services being integrated into IT operations. Enterprise users depend on numerous internal IT systems to access an external application: the network, DNS, proxy servers. It’s the responsibility of the corporate help desk to support these, not the service provider. Many organizations now realize the importance of simplifying the support process for the business, providing, for example, a single phone number and intranet site which connects them to the appropriate first-line team.
User training is an important aspect of introducing any new application, irrespective of where it is hosted. An advantage of finished services is that the provider typically makes on-demand training available. However, the enterprise has little control over the quality of such training, and poor training increases the cost of support and reduces business productivity. The introduction of upgrades as part of the service, one of the benefits of subscribing to finished services, is another potential problem: If the enterprise does not have the ability to delay the implementation of the upgrade until all staff have received training, the internal help desk may have to handle a spike in support calls. A related issue is whether individual features of the finished service can be selectively disabled: If the capability is already provided in-house, IT needs to ensure that users are all using the internal capability.
“THE GREATER THE IMPORTANCE OF THE APPLICATION TO THE BUSINESS, THE MORE IMPLICATIONS YOU NEED TO CONSIDER. A LINE-OF-BUSINESS APPLICATION DELIVERED AS A FINISHED SERVICE IS A SIGNIFICANT INTEGRATION UNDERTAKING.”
If the finished service uses the browser as its client, the normal compatibility concerns apply: browser version and security settings, plus installed plug-ins and their versions. With a traditional application, the enterprise can determine when to upgrade to the new version, which is critically important if the new version requires the latest browser. When the application is external, this option may not be available.
If the service’s client is not browser-based, internal IT will be responsible for deployment and its implications (compatibility testing, deployment planning, rollout, and so on).
If the service is an attached or building block service, there will be a need for deployment of either the back-end infrastructure in the enterprise data center or the client, or both. For back-end deployment, there are common nonfunctional challenges: What server, network, and storage capacity is needed to meet the load? Can shared services, such as existing SQL Server or Web server farms, be used? How are resilience and disaster recovery provided? Can it run in multiple data centers? The answers will be more dependent on the local components of the application than on the attached service in the cloud: In other words, it can be approached largely like a normal back-end deployment.
Provider operations / business continuity
It is tempting to focus purely on the contents of the SLA when it comes to selecting, monitoring, and evaluating a service provider. How the SLA will be achieved is an important consideration: An SLA which states that service will be restored within 24 hours of a disaster seems good on the surface; however, the definition of a disaster and form of restoration are critically important. To the service consumer, disaster may be the accidental deletion of records from the application, but the service provider will likely have a different view. Taking this example further, restoring application data is a complicated process for many business applications. For instance, restoring a single Exchange mailbox or message or a single SharePoint site or document was a significant challenge until those applications matured. Now apply that challenge to a multitenant SaaS application—even if granular restoration is possible, the provider will be reluctant to do it due to operational costs.
Another concern several architects have expressed to me is the risk of the service provider going out of business, or withholding data to prevent migration to a competitor. Placing the application code in escrow is a step in the right direction, but isn’t really sufficient. Assuming an enterprise consumer could get its data, rebuilding the application in their data center without install instructions or access to the developers may not be feasible. There is no solution to this yet; it’s a question of trust that the provider will do the right thing if the worst happens, and confidence that they have good business and management skills.
As with any SLA, business and IT groups should review reports on performance and investigate where SLAs have not been met.
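That monthly review can be reduced to a simple availability calculation; the 99.5 percent target and the outage figures below are illustrative, not from any real contract:

```python
SLA_TARGET = 99.5  # percent availability promised in the (assumed) SLA

def availability(outage_minutes, days_in_month=30):
    """Achieved availability for the month, as a percentage."""
    total_minutes = days_in_month * 24 * 60
    return 100.0 * (total_minutes - outage_minutes) / total_minutes

# Outage minutes logged per month by internal monitoring.
report = {"Jan": 30, "Feb": 400}
missed = [month for month, mins in report.items()
          if availability(mins) < SLA_TARGET]
```

Note how little headroom a 99.5 percent monthly target allows—roughly 3.6 hours—which is why the consumer should monitor the service independently rather than rely solely on the provider's own reports.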
Summary of Operations Implications
· Help desk procedures need to be updated to perform first-line troubleshooting for the new application, and escalation processes need to be defined with the service provider.
· Review internal help desk SLAs to ensure they can still be met when depending on the service provider for escalated support.
Proving compliance with regulations and legal obligations will have an impact on the infrastructure, and ensuring compliance for a business process that spans internal systems and external services can be especially challenging. There is, however, a potential mitigating factor: If the application or service is industry-aligned, there is a good chance it will be compliant with the relevant regulations in major markets; this may not be the case where the service is generic. Even in situations where compliance is a selling point of the service, an enterprise's internal security policies or region-specific laws, such as the European Parliament Data Protection Laws, may be incompatible with the service provider's policies.
Clearly, an enterprise wants to retain ownership of its business data at all times; the contract should state this explicitly. Furthermore, it may be prudent to verify that data can be extracted on-demand.
Summary of Legal Implications
· Compliance obligations may extend to the service provider; how will you continue to prove compliance?
· Compliance reports may have a cost associated with them.
Table 1: Summary of recommendations
The SaaS market is currently dominated by offerings aimed at consumers and small businesses, as this market segment benefits from the delivery model without the need for integration. Consuming a line-of-business application from these providers is risky for any enterprise, as many of the integration points discussed in this paper will not be addressed.
The greater the importance of the application to the business, the more implications you need to consider. A line-of-business application delivered as a finished service is a significant integration undertaking compared to the low effort associated with a tactical application where the primary concern is contractual issues such as data ownership. This relationship is represented in the heat map shown in Figure 4.
Figure 4: Considerations heatmap
Today, enterprise IT departments are far more experienced in, and confident about, consuming building block or attached services than full applications. Be it a data feed for a rich application like Reuters 3000, or an infrastructure service such as spam filtering, this model is well understood.
As the SaaS delivery model matures and gains more widespread adoption, it is natural that more enterprise demands, such as integration, will be catered for, and the range of capabilities enterprises are prepared to source as services will increase. The result will be software plus services applications: a natural balance of on-premise software and cloud services.
As Gianpaolo Carraro and Fred Chong state in their article “SaaS: An Enterprise Perspective,” SaaS and S+S are additional tools that savvy CIOs can use to provide better value to the business. Rather than feel threatened, IT managers should view SaaS and S+S for what they are: alternative sourcing models for business benefit, and a different architectural approach to building solutions.
Handled correctly, software services will help change the business’ perception of IT.
· European Parliament Data Protection Laws http://ec.europa.eu/justice_home/fsj/privacy/index_en.htm
· Exchange Hosted Services http://www.microsoft.com/exchange/services/default.mspx
· “IT Doesn’t Matter,” Nicholas G. Carr, Harvard Business Review, May 2003 http://harvardbusinessonline.hbsp.harvard.edu/b01/en/common/item_detail.jhtml?id=R0305B
· The Long Tail: Why the Future of Business Is Selling Less of More, Chris Anderson (Hyperion, 2006)
· Microsoft Privacy Guidelines for Developing Software Products and Services http://www.microsoft.com/downloads/details.aspx?FamilyID=c48cf80f-6e87-48f5-83ec-a18d1ad2fc1f&displaylang=en
· Microsoft Regulatory Compliance Planning Guide (although this guide is focused on internal IT, it can also be useful when evaluating an external provider) http://www.microsoft.com/technet/security/guidance/complianceandpolicies/compliance/rcguide/default.mspx?mfr=true
WS-Federation is a draft OASIS standard that has been adopted by several vendors including Microsoft, IBM, RSA, BEA, and VeriSign. Microsoft’s WS-Federation support takes the form of Active Directory Federation Services (ADFS), a component of Windows Server 2003 R2.
· Active Directory Federation Services (ADFS) http://www.microsoft.com/WindowsServer2003/R2/Identity_Management/ADFSwhitepaper.mspx
Kevin Sangwell is an infrastructure architect in the Microsoft Developer and Platform Group. He has held a number of technical and leadership roles in the IT industry for more than 16 years, including five years as a principal consultant in Microsoft Consulting Services. Kevin has led the architecture and design for enterprise and e-commerce infrastructures in the U.K. public and private sectors, including the distributed Microsoft infrastructure for a 120,000-user organization and an extranet application platform for 1.2 million educational users. As infrastructure architect, he provides advice and consulting to enterprise customers and presents at international events.
This article was published in the Architecture Journal, a print and online publication produced by Microsoft. For more articles from this publication, please visit the Architecture Journal Web site.