Not Everything Will Move To The Cloud


When Rackspace opened for business at the end of the last century, people thought it was a wannabe Electronic Data Systems. Since then it has evolved into one of the largest cloud operations on the planet, with an estimated 60,000 servers and data storage that is increasing by a mind-boggling 1 petabyte a month.

So what better place to look at how data management is changing? Forbes caught up with John Engates, Rackspace's chief technology officer, to talk about the evolution and where things are likely to go in the future.

Forbes: How has your business model evolved?

Engates: In the early days, most people thought that if you needed a server, you rented a rack, installed your server and managed it. We said it could be easier than that. Just pay a monthly fee. Back in 1998 there weren't many companies doing it.

What was the base platform?

We started with Linux servers. Windows came later for us. Today what's different is that we do it for much larger customers and we manage a much broader set of applications. For all of them we manage everything up to just under the application, including the data center, network, servers, operating systems, databases and application servers.

So what have you learned about managing a massive data infrastructure?

You have to treat it like a factory instead of like a custom shop. You have to think large-scale. It might be easy to do something for one customer, but you have to think about the next 1,000 customers behind them, because everything has to be replicable, both from an operational standpoint and from a technology standpoint. You have to be able to do a lot of work with relatively few people. You always want to make a customer happy, but sometimes you have to say no to be able to have a scalable product or service. You need a menu of offerings; you can't do one-offs--they'll break the model.

What are some of the requests you've turned down?

We say no to custom hardware that people want to ship us. They may have 500 Linux servers and one old IBM AS/400 or a RISC system HP built in 1994. While that opportunity looks great, one server can kill the whole thing. We say no to that. We also say no to someone who wants to ship us some custom networking equipment and then come install it and manage it. We can't do that. Anything that breaks the model of being scalable or that we can't manage with our teams, we don't want to support.

How about custom applications?

Customers bring all kinds of applications to us, but what's underneath those applications is common among our customers. It's the same flavors of Windows or Linux or the same hardware platforms and storage platforms. We work with the customers to support those applications, so it's an extension of their IT operation. There is obviously some gray area in between. We'll work with the customers to make it work and will bring in whoever it takes and sort out the problem.

Once it's out of their hands, what do companies care most about with servers?

They care about not losing functionality or data. They don't care as much about what kind of networking gear we use or what hardware we use. Some people used to care about whether it was a Dell or HP server. Over the past few years, that has really waned. They have to do a certain amount of due diligence, but once they've done that initial validation and they've made sure they aren't risking their business, they're pretty comfortable with the choices we make on their behalf.

Has the rationale for outsourcing changed over the past decade?

Yes, and it's changed more over the last 18 months than it did over the previous nine years. It's primarily because of the economy and the movement toward cloud computing and software-as-a-service. Those things have collided to create an environment where people are more willing to hand over their data center. IT is pressured these days to do more with less. That pressure has pushed them to let a few things go and not be such control freaks about everything. They focus on what's most critical and core, and what's going to cause the biggest problem if there's an outage or they lose data. Many times they keep the core operations in their own data center and let out the things they feel most comfortable letting out--public-facing things like blogs, wikis or knowledge-base applications. Those go first. Traditionally the IT department has supported marketing efforts and campaigns, too. Those are easy to move outside the corporate data center.

What doesn't move out?

If you look at big manufacturing plants, those will probably never leave the data center. There are also heavy financial applications that are run internally. But we're seeing more and more that people are willing to turn over to us. Usually it starts out with a trial of one application, but then if everything goes well they say, "What if we were to turn over our human resources application?" There may be a Web interface on it, which is another thing that's changing. It used to be client/server. Now behind-the-firewall apps have an external Web component to them. Those are usually the next wave to outsource: all the applications that need to be available to people on the road, either over a VPN [virtual private network] or an extranet. Salesforce.com gave a lot of people comfort that applications could run outside the firewall and you could pay by the user.

How about scaling up and down?

Paying by the hour for CPUs or by the gigabyte for storage is a necessary financial model. Companies are trying to take advantage of that. Sometimes it's hard because the applications aren't written that way, but more and more they are being architected like that.
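The appeal of that model is easiest to see with a quick calculation. The sketch below is purely illustrative: the rates, server counts and usage hours are hypothetical assumptions, not Rackspace's actual pricing, but they show how a workload that scales down at night pays only for what it uses.

```python
# Hypothetical pay-per-use billing sketch. All rates and usage
# figures are illustrative assumptions, not real provider pricing.

CPU_RATE_PER_HOUR = 0.08    # assumed $ per CPU-hour
STORAGE_RATE_PER_GB = 0.15  # assumed $ per GB-month

def monthly_bill(cpu_hours: float, storage_gb: float) -> float:
    """Sum the pay-per-use charges for compute and storage."""
    return cpu_hours * CPU_RATE_PER_HOUR + storage_gb * STORAGE_RATE_PER_GB

# A workload that runs 2 servers for 16 hours a day and scales up
# to 8 servers for the 8 busy hours, over a 30-day month,
# plus 500 GB of stored data.
cpu_hours = 30 * (2 * 16 + 8 * 8)   # 2,880 CPU-hours
print(f"${monthly_bill(cpu_hours, 500):,.2f}")
```

Under a flat monthly fee, the same customer would pay for 8 servers around the clock; with metered billing the quiet 16 hours a day cost a fraction of that.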

What's the next step after scaling up and down?

Our customers traditionally have looked at infrastructure-as-a-service, so they've had administrative access to the applications. What's next is moving up the stack to offer platform-as-a-service. It sits between the infrastructure and the application. A business could take a platform and build its tools right on that instead of worrying about the underlying infrastructure. This is targeted more at developers than at systems administrators.

Salesforce has Force.com, Google has App Engine, and Microsoft has Azure. All of those pooled together are nowhere near as big as Amazon's infrastructure-as-a-service, but as we head into the next decade and we have people adopting cloud in greater numbers they will be more comfortable ceding control of the infrastructure to someone else so they can run an application without worrying about all the plumbing.

Does that mean more or less customization?

The customer will have slightly less control over customizing the underlying platform. Hopefully they won't need it. There is certainly a little bit of a trade-off. It probably won't be all or nothing. People will continue to use dedicated servers, cloud-based servers, platform-as-a-service, software-as-a-service and they'll be doing stuff in their own data centers. All of that will co-exist for a long, long time. Not everything will move to the cloud. But things will be evaluated as to whether they are a better fit for a software-as-a-service model or whether they should be run internally. It won't be a wholesale shift.

Ed Sperling is the editor of several technology trade publications and has covered technology for more than 20 years. Contact him at esperlin@yahoo.com.
