
How Cloud Computing Is Transforming Mega Datacenters

Lately, we have been reading a lot about the U.S. NSA (National Security Agency) accessing the servers of major Internet service providers. Much water will still flow under the bridge before the case is properly clarified and leaves the realm of speculation, but the story opened my eyes to a phenomenon these companies have created: the mega datacenter.

These huge facilities are quite different from the traditional datacenters we know, even those of the largest banks and industrial companies. A mega datacenter has at least 15,000 servers and costs at least $200 million to build.

They share a common characteristic: automation taken to the extreme. They can scale massively while employing very few people to manage them. This means you can add a few thousand servers (or tens of thousands) with an almost negligible increase in operating cost.

Their operating model will, in my opinion, spread to other datacenters and increase attention on the cloud computing model. The mega datacenters will not kill off traditional datacenters, at least not in the foreseeable horizon, because they were not built for the workloads of legacy systems. They were designed for typical Internet workloads such as those of Facebook, Google, and others. Remember that cloud is essentially automation taken to the limit, with software managing the whole operation: detecting and recovering from faults, managing the inclusion or exclusion of servers, automatically installing new versions of operating systems, and so on.
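The control loop that software runs can be sketched in a few lines. This is a minimal illustration, not any real provider's API; all the names and the failure rate are made up for the example:

```python
import random

# Minimal sketch of the automation loop described above: probe the fleet,
# take failed servers out of rotation, and provision replacements with no
# human intervention. All names here are illustrative.

class Fleet:
    def __init__(self, size):
        self.next_id = size
        self.active = {f"srv-{i}" for i in range(size)}

    def is_healthy(self, server):
        # Stand-in for a real health probe (heartbeat, ping, agent check).
        return random.random() > 0.01  # assume ~1% of checks fail

    def provision(self):
        # Stand-in for automated install: image a new box and register it.
        server = f"srv-{self.next_id}"
        self.next_id += 1
        self.active.add(server)
        return server

    def reconcile(self):
        """One pass of the control loop: replace every failed server."""
        failed = {s for s in self.active if not self.is_healthy(s)}
        self.active -= failed                      # out of rotation
        replacements = [self.provision() for _ in failed]
        return failed, replacements

fleet = Fleet(size=1000)
failed, new = fleet.reconcile()
print(f"replaced {len(failed)} servers; fleet size is {len(fleet.active)}")
```

Run continuously, a loop like this is what lets a handful of operators manage tens of thousands of machines: the fleet size stays constant no matter how many individual boxes die between passes.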

A new server is put into operation in a few hours, and the growth rate is fantastic: each delivery round may reach a thousand servers. For example, Facebook had approximately 30,000 servers in 2009 and by late 2012 was already at around 150,000. Google, meanwhile, had about 250,000 servers in 2000, 350,000 in 2005, and reached 900,000 by 2010. Today it must have over one million servers! How many companies in the traditional world acquire servers in lots of a thousand at a time?

Automation (the cloud model) is essential to the operation of these mega datacenters. In traditional datacenters there is generally a ratio of one sysadmin for every 100 to 200 servers. At Facebook, it is one sysadmin for every 20,000 servers. This means that one sysadmin at Facebook does the job of 100 to 200 professionals in traditional datacenters. Since infrastructure management is automated, the main actor in this process is the software. Pinterest, for example, when it had 17 million unique visitors per month, employed just one sysadmin to manage its cloud. Staff costs in mega datacenters are not even among the top five cost items, unlike in traditional datacenters.
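The arithmetic behind that comparison is worth making explicit, using only the ratios quoted above:

```python
# Sanity-checking the sysadmin ratios quoted in the text.
traditional_low, traditional_high = 100, 200   # servers per sysadmin
facebook_ratio = 20_000                        # servers per sysadmin

# One mega-datacenter sysadmin covers the same servers as this many
# traditional sysadmins:
equivalent_low = facebook_ratio // traditional_high    # vs the best case
equivalent_high = facebook_ratio // traditional_low    # vs the worst case
print(f"one mega-datacenter admin = {equivalent_low} to {equivalent_high} traditional admins")
```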

The services provided by these mega datacenters, and their economies of scale, allow the creation of new businesses that would be unviable under the traditional model of upfront investment. An example is Netflix, which serves more than 36 million video-streaming users on public cloud solutions; in the first quarter of this year it added over three million new users. Other typical Internet companies, such as Instagram, Zynga, Foursquare, and Pinterest, could only grow the way they did by using the economies of scale provided by mega datacenters.

Pinterest is an interesting case. It went from twenty terabytes of stored data to 350 in just seven months, using a cloud running on a mega datacenter. Under the traditional model of purchasing physical servers, it would have been absolutely impractical to achieve this expansion in time.

The investments required to build them are off the charts. Some estimates of the construction costs are fabulous. For example, Facebook's mega datacenter in the U.S. state of Oregon cost $210 million, and its two new ones in North Carolina reach $450 million and $750 million. Apple's facility in Oregon cost $250 million. Google is not far behind: $300 million for its mega datacenter in Taiwan and $1.9 billion in the state of New York. Microsoft, in turn, spent $499 million on its mega datacenter in Virginia, and the NSA, the U.S. government's National Security Agency, is building a $2 billion mega datacenter in the state of Utah.

Well, why are these mega datacenters created? Basically to serve public cloud offerings and Internet services aimed directly at consumers (B2C). The capex investments of the mega cloud providers are immense. It is estimated, for example, that Google's capex reaches the level of $1 billion per quarter.

Interestingly, the mega datacenters are creating a new segment within the IT industry. In general, they do not buy servers from traditional suppliers, but use other sources, based on their own designs with assembly performed by Chinese companies such as Quanta Computer. Quanta, for example, supplies 80% of Facebook's servers and is a leading supplier to Google. This model leads to a paradigm shift. For the mega datacenters, it is much cheaper to swap out a server than to fix it, so instead of buying machines with a higher MTBF (Mean Time Between Failures), they prefer cheap, disposable machines. If there is a problem, the automation software simply takes the failed server out of service and brings the service up on another, without human intervention. On the other hand, this is one of the reasons the model cannot be applied in traditional datacenters: the software that runs in the mega datacenters, such as Google's public services, was designed to operate in seamless integration with the automation software. Very different from a corporate environment, where an SAP system was not designed to work seamlessly with servers from just any vendor.
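The swap-don't-repair economics can be illustrated with a simple expected-cost comparison. All prices and MTBF figures below are made up for the sketch; the point is only the shape of the trade-off, where automation drives the per-failure cost down to roughly the price of a replacement box:

```python
# Illustrative (made-up) numbers for the swap-vs-repair trade-off.
# In a traditional datacenter a failure triggers a manual repair; in a
# mega datacenter the automation reroutes traffic and the box is discarded.

YEARS = 3
HOURS = YEARS * 365 * 24  # fleet lifetime in hours

def total_cost(unit_price, mtbf_hours, cost_per_failure, n_servers):
    """Purchase cost plus expected failure-handling cost over the lifetime."""
    expected_failures = n_servers * HOURS / mtbf_hours
    return n_servers * unit_price + expected_failures * cost_per_failure

# Premium server: high MTBF, expensive, each failure handled by a technician.
premium = total_cost(unit_price=8000, mtbf_hours=200_000,
                     cost_per_failure=500, n_servers=10_000)

# Commodity server: lower MTBF, cheap; failure cost is just a new box,
# because software takes the dead machine out of rotation automatically.
commodity = total_cost(unit_price=2500, mtbf_hours=50_000,
                       cost_per_failure=2500, n_servers=10_000)

print(f"premium fleet:   ${premium:,.0f}")
print(f"commodity fleet: ${commodity:,.0f}")
```

With these assumed numbers the commodity fleet fails four times as often yet still comes out far cheaper, which is the paradigm shift the text describes.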

The mega datacenters adopt the magic formula of cloud: virtualization + standardization + automation. Standardization facilitates replacement, since each box is identical to the others: the same version of the same operating system and the same software runs on all machines.

Recently, Facebook opened its black box, describing the server types it adopts, one for each specific activity. Basically, it divides its servers into five types: those handling Web, database, Hadoop, Haystack, and Feed loads. Each of these services demands different configurations. For example, a server that handles photos and videos needs more memory than computing power.

The mega datacenters bring the phenomenon of consumerization to the traditional datacenter model. They buy machines by the thousands and are redefining the Intel-based server industry. An Intel study shows that in 2008, 75% of server processors went to IBM, HP, and Dell. Today those 75% are distributed among eight manufacturers, and interestingly, Google is already the fifth-largest "manufacturer" of servers on the market. We really are experiencing a disruption, and arguably the cloud computing model has much to do with these changes in the industry.

Can Cloud Computing Put India Among The IT Leaders?

For the last four years, we have heard much talk about cloud services. In 2006, Amazon compiled a set of technologies into a functional model that was simple and accessible to the ordinary user. And, just as the Internet reached the wider world with the emergence of the browser, Amazon had the privilege of being the first to deliver infrastructure as a service (IaaS) to society and the first to use the name Cloud Computing commercially, even though the term "utility computing" had been discussed for decades.

Since then, several vendors have appropriated the term Cloud Computing and its variants (IaaS, PaaS, and SaaS) to sell their services. Suppliers with different platforms compete, each claiming to be "more cloud" than the competition. This competition is great because, in the long run, the one who gains from this movement is the user, who can choose the service provider that best meets their demands.

I have addressed this recurring issue in our discussions, and I want to share it with you. Let's summarize the story. From the user's point of view, cloud computing is nothing more than the supply of Internet services paid for by use. It is much like electricity: if you use a lot, you pay a lot; if you use little, you pay little. And you did not have to invest in generating your own power (generators) to have access to electricity, because the power company has done it for you.

With cloud, you do not invest in the purchase of computers, switches, routers, firewalls, software, and so on; you use the devices "in the cloud" and pay for the use. Your cloud provider has a scalable and elastic structure and charges you only for what you used of this mega structure. Such a model is extremely attractive because it is not necessary to invest in hardware without knowing whether it will need to grow, shrink, or be decommissioned. For the provider it is also a great deal, because scale reduces both operating costs and the cost of new equipment.
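The electricity-meter analogy can be made concrete with a toy billing comparison. The hourly rate, server price, and workload below are all assumed figures for illustration, not any real provider's pricing:

```python
# Sketch of the pay-for-use model described above, with made-up prices:
# you are billed per server-hour actually consumed, like a power meter.

HOURLY_RATE = 0.10          # $/server-hour (illustrative)
SERVER_PURCHASE = 3000.0    # upfront cost of owning one server (illustrative)

def cloud_bill(server_hours):
    """Elastic model: pay only for what was used."""
    return server_hours * HOURLY_RATE

# A spiky workload: 2 servers for most of the month, 50 during a 3-day peak.
baseline = 2 * 27 * 24      # server-hours for the quiet days
peak = 50 * 3 * 24          # server-hours during the spike
bill = cloud_bill(baseline + peak)

# Owning enough hardware for the peak would mean buying 50 servers upfront,
# most of which would sit idle the rest of the month.
upfront = 50 * SERVER_PURCHASE

print(f"cloud bill for the month: ${bill:,.2f}")
print(f"upfront purchase to cover the same peak: ${upfront:,.2f}")
```

The gap between the two figures is exactly the "did not have to invest in generators" point: with elastic billing, the cost of the idle capacity disappears.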

It's the famous win-win. In India, however, we do not have a market with demand that justifies creating a development line for hardware appliances and high-tech components (CPUs, memory, etc.). In other words, we cannot build a switch or router, for example, that can compete on equal terms with imported equipment.

Without the chip, there is no competence to compete in the market. Without a market, no sales. Without sales, no capital investment. Without investment, no jobs or technology development, and so on. Now we come to the key point: cloud computing is software. Get it? Cloud computing is nothing more than software. What prevents an Indian research center from creating a "virtual switch" superior to the imported virtual switch? What prevents a group of programmers from developing a "virtual router" better than the imported virtual router? You guessed it: nothing.

Let us bring together Indian universities, with funding for research and development, and I'm sure we can open new paths in this world where most of IT will be software. That means job creation and the consequent creation of quality products that can compete even in foreign markets. With that, money starts coming in through investment in our country. We simply need to believe in the idea. Cloud computing is a "cold restart" in IT: it changes the paradigms, and much of the business model is still to be developed. We have a real chance to conquer our space, as the playing field is level and the game is starting now.