For years now, people in the tech world have used different words and phrases to define ‘cloud computing’. The absence of a single, generally accepted definition has caused considerable confusion.
That is why it can be difficult to communicate with someone who has a different understanding of what cloud computing is. It would serve us well, then, to first get on the same page about what cloud computing is all about.
Instead of defining cloud computing as one particular technology, I see it as the broad evolution of IT that has taken place over the last decade or so, an evolution that has affected every aspect of information technology: devices, users, security, storage, applications, networks, architectures and servers.
Because the tech industry changes so fast, it is hard for anyone to stick to one definition of cloud computing. Ten years from now we will define cloud computing in an entirely different way from today, and the definition will likely keep evolving after that.
According to leading tech companies, the cloud has become the new normal that businesses the world over need to embrace.
Furthermore, today’s cloud is no longer limited to the public cloud; many businesses now run their own data centers as what they call private clouds. That is why you would still be correct to call cloud computing simply modern computing.
The pattern most commonly used to build traditional applications revolves around three tiers: a presentation tier (the user interface), an application tier (the business logic) and a data tier (the database and storage).
Typically, there is a dedicated server for each tier. Each server is statically configured with the IP addresses and hostnames of the servers it relies on (the servers of the other two tiers). The infrastructure these applications are deployed on is also static, meaning it never changes.
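To make the static configuration concrete, here is a minimal sketch of the kind of hard-coded wiring such an application often carries. All tier names, hostnames and IP addresses below are hypothetical, used purely for illustration:

```python
# Hypothetical static configuration for a traditional three-tier app.
# Every address is hard-coded: if a server is replaced or renumbered,
# the application must be reconfigured and redeployed by hand.
TIER_CONFIG = {
    "web": {"host": "web01.corp.example", "ip": "10.0.1.10"},
    "app": {"host": "app01.corp.example", "ip": "10.0.2.10"},
    "db":  {"host": "db01.corp.example",  "ip": "10.0.3.10"},
}

def database_address() -> str:
    """Return the single, fixed database endpoint the app tier depends on."""
    return TIER_CONFIG["db"]["ip"]
```

Because the application only ever knows this one fixed set of endpoints, any change to the underlying servers means editing the configuration and restarting the application.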
By design, then, traditional applications ‘assume’ that the infrastructure they are deployed on will stay the same and will not fail. When the infrastructure does fail or change, these applications fail too and are very difficult to recover.
Consequently, the applications have to be hosted on extremely reliable servers and networks. When load increases, major upgrade projects have to be set in motion, because the applications have no ability to scale out or scale up on their own.
An upgrade is therefore always tedious. It involves working through the organization’s change-management cycle, procuring and installing additional hardware, and reconfiguring the application to accommodate the changes made to the network or servers.
You can now see why so many companies embraced server virtualization. Almost immediately, many businesses made the switch from dedicated physical servers to virtual servers.
Virtualization was a sigh of relief for organizations that saw a significant improvement in the efficient use of computing resources. It also reduced the cost and time required for IT system upgrades.
But even with these major changes, the application architecture itself remained the same. It would not take long before this, too, changed. As cloud-based web application architecture became the norm, developers realized they could use it to rearchitect their applications.
By doing so, applications gained flexibility, allowing changes to be made on the fly, something traditional application architecture could not support. The new approach provided elasticity, and it made storage and computing services available on demand in ways that were not possible before.
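The elasticity described above can be sketched as a toy autoscaling rule. This is purely illustrative and not any cloud provider’s actual API: the idea is that the number of running instances follows the load, scaling out when load rises and back in when it falls, instead of requiring a hardware upgrade project.

```python
# Toy autoscaling rule (illustrative only): pick how many identical
# instances to run so that each instance stays within its capacity.
import math

def desired_instances(total_load: float, capacity_per_instance: float,
                      min_instances: int = 1, max_instances: int = 20) -> int:
    """Scale out as load grows, scale back in as it shrinks,
    clamped between a floor and a ceiling of instances."""
    needed = math.ceil(total_load / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))
```

For example, with a per-instance capacity of 100 requests per second, a load of 450 calls for 5 instances, while a load of 50 drops back to the minimum of 1. This is exactly the kind of on-the-fly adjustment that static, three-tier deployments could not make.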