I recently ran across a presentation entitled Data Centre Evolution, delivered by CERN (the European Organization for Nuclear Research) at the Swiss Distributed Computing Day in November 2012. The slides discussed CERN’s IT infrastructure and its data center challenges. While CERN is unique, the difficulties it faces are not. I was particularly intrigued by its characterization of today’s computing architectures.
CERN asks the question, “Do you use a pet or a cattle computing service model?” and provides insights into the two alternatives. This slide (#17 from the deck linked above) summarizes their position:
CERN makes a compelling case about how IT should think about computing. The organization is implementing a combination of virtualization technology and OpenStack to deliver a private cloud, or “cattle,” solution. However, as the data center moves to this “disposable” computing model, what does it mean for our data?
Data is the lifeblood of the data center. We purchase hundreds of thousands of dollars’ worth of computing and storage hardware not because we admire the pretty lights and fancy GUIs, but because we care about our data. The challenge we face is not just data availability but also accessibility and speed, so it is no surprise that many companies’ sole mission is to accelerate data access, often using solid-state drives (SSDs) or similar technologies. In my view, the whole premise of the “cattle” model comes down to enhanced data access. Once you implement a dynamic computing architecture that is independent of the underlying hardware, data availability, accessibility, and performance will all improve, which in turn enhances the applications and service levels that IT delivers.
Bringing the conversation back to the topic of pets and cattle, these two viewpoints illustrate contrasting computing philosophies. We love our pets. We give them cute names like “Lucky” or “Puffy” (the actual names of my wife’s childhood cats, but I digress) and nurse them back to health when they are ill. This equates to those application servers that are vital to the business and must be running at all times. In contrast, we are not attached to our “cattle” servers. They are commodities, readily replaced with new ones when they fail. This model is common in cloud environments, which often leverage large numbers of low-cost servers. But what about our data?
In my view, corporate data is analogous to children. Simply put, I love my children; they are irreplaceable, and there is nothing I would not do to take care of them. If something really bad happened to them, it would be devastating. For a corporation, data is similarly irreplaceable, and its loss would be equally devastating. Think about it: what would happen to your favorite online retailer if it lost all of its data? It would likely go out of business. We do everything we can to protect our children from harm, and we should do the same with our data. The concepts of pets and cattle represent different ways to access and manage our most precious asset – data.
There is good news for the data center. Established practices for protecting our data with traditional backup, recovery, and disaster recovery (DR) solutions insulate us from outages. While no process is perfect, these strategies go a long way toward ensuring that corporate information remains available even in cases of extreme outage or disaster. As a data center manager, you must ensure that these practices are followed consistently and reliably, and Iron Mountain can help.
In conclusion, when looking at your computing infrastructure, you must consider whether to follow the pet philosophy, the cattle philosophy, or a combination of the two. Regardless of the choice, remember that just like your children, your data is everything.