The modern world is dominated by the cloud. But where did it come from? How did it start? How did it evolve?
Let’s dive in.
Mainframes and Dumb Terminals in the 1950s
Before J.C.R. Licklider first sketched out the idea of what we now call the Internet in the early 1960s, users at MIT and a few government institutions accessed central computers through dumb terminals, each capable only of reaching one specific mainframe. A dedicated mainframe, complete with its own storage and processing capacity, simply wasn’t feasible for every user in an organization. Because of this, researchers eventually settled on providing shared access to a single computing resource.
Created in 1958, DARPA, the Defense Advanced Research Projects Agency, is a military agency tasked with creating, monitoring and building emerging and advanced technologies. As head of DARPA’s computer research program, J.C.R. Licklider uses his role to convince Ivan Sutherland, Bob Taylor and Lawrence G. Roberts to further develop his computer networking concept.
Even before Licklider sketches out the foundational concept of the Internet, Leonard Kleinrock of MIT publishes a paper on packet switching theory in 1961. Kleinrock eventually convinces a fellow MIT researcher, the aforementioned Lawrence G. Roberts, that packet switching is a workable alternative to circuit-switched communication.
Thoughts of the Internet
We can’t dive into the cloud without first mentioning the start of the Internet. The first real mention of the underpinnings of the Internet comes from J.C.R. Licklider of MIT in the early 1960s. In 1962, Licklider described the idea of a “Galactic Network” of globally interconnected computers through which users could access data and send information between any set of points. Although this first mention didn’t wholly describe what we now know as the Internet, Licklider’s series of memos serves as the foundational document for what the Internet has become. More than anything, his vision was something to shoot for, to progress toward.
Physical Realization of Packet Switching
In 1965, Roberts, now working with Thomas Merrill, connects the TX-2 computer in the labs at MIT to the Q-32 computer located in California. The connection is made over a low-speed dial-up telephone line. It marks the first time a wide-area computer network, however slow and small, is built and brought into working operation. From this single experiment, the realization of a larger connected network built on the fundamental concept of packet switching is born.
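The core idea of packet switching can be shown in a few lines. The sketch below is purely illustrative, with invented function names rather than any real protocol: a message is chopped into small numbered packets that can travel independently and be put back in order at the destination.

```python
# Illustrative packet-switching sketch (not a real protocol): a message is
# split into small numbered packets, which may arrive out of order, and is
# reassembled at the destination using the sequence numbers.

def packetize(message, size):
    """Split a message into (sequence_number, chunk) packets of at most `size` characters."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the message from packets, regardless of arrival order."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("HELLO ARPANET", 4)
packets.reverse()                     # simulate out-of-order arrival
print(reassemble(packets))            # -> HELLO ARPANET
```

The point of the numbering is exactly what made packet switching attractive over circuit switching: no single dedicated line is needed, because each packet can take its own route and order is restored at the end.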
1966 – Roberts, under the guidance of DARPA, puts together his plan for ARPANET (Advanced Research Projects Agency Network). ARPANET later becomes the first network to adopt the TCP/IP protocol suite, though it initially runs on an earlier protocol, the Network Control Program.
1968 – Roberts and DARPA refine ARPANET. As part of the refinement, packet switches are built and given the name Interface Message Processors (IMPs).
1969 – The work of many finally comes together. In September 1969, the first IMP is installed at UCLA, connecting the first host computer to the network. By the end of 1969, four host computers are online, all connected to ARPANET. The approach, still in place today, pairs research on the underlying network with research on how that network could and should be used.
Virtual Machines Are Conceptually and Physically Born
Sometime in the early 1970s, the concept of the virtual machine starts to spread through various MIT circles and government organizations. The foundational idea behind modern-day VMs stems from time-sharing, first explored in the 1950s. In short, time-sharing is the sharing of computer resources among many users across a connected network, accomplished through multiprogramming and multitasking.
“Time-sharing allowed multiple users to use a computer concurrently: each program appeared to have full access to the machine, but only one program was executed at the time, with the system switching between programs in time slices, saving and restoring state each time. This evolved into virtual machines, notably via IBM’s research systems: the M44/44X, which used partial virtualization, and the CP-40 and SIMMON, which used full virtualization and were early examples of hypervisors.” – Source: Wikipedia, “History of CP/CMS”
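The mechanics in that description (run one program for a slice, save its state, switch to the next) can be mimicked in a few lines. This is only a toy sketch in modern Python, with generators standing in for saved machine state, not the code of any historical system.

```python
# Toy time-sharing illustration: each "program" is a Python generator, and
# the scheduler runs programs one at a time in fixed time slices. A generator's
# suspended state stands in for the machine state a real monitor would save
# and restore on each switch.

def program(name, steps):
    """A fake program that does `steps` units of work, yielding after each one."""
    for i in range(1, steps + 1):
        yield f"{name}: step {i}"

def round_robin(programs):
    """Run programs one slice at a time until all finish, like a time-sharing monitor."""
    log = []
    queue = list(programs)
    while queue:
        prog = queue.pop(0)            # pick the next program
        try:
            log.append(next(prog))     # run it for one time slice
            queue.append(prog)         # "save state": back of the queue
        except StopIteration:
            pass                       # program finished; drop it
    return log

print(round_robin([program("A", 2), program("B", 3)]))
# -> ['A: step 1', 'B: step 1', 'A: step 2', 'B: step 2', 'B: step 3']
```

Each user’s program appears to make steady progress, yet only one runs at any instant, which is exactly the illusion the quote describes.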
Virtual machines become widely available in 1972, when IBM releases VM/370, the production successor to the research-oriented CP-67/CMS.
In addition to virtual machines coming into fashion, the term “client-server” comes into being. At the same time, further research among MIT, DARPA, ARPANET sites and Bolt Beranek and Newman (BBN) continues to define and redefine the compute model of clients accessing data, shared applications and files from a single central server over a connected network.
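That client-server compute model can be sketched with modern tools. The example below is a bare-bones illustration in Python with invented helper names: one central server answers a request from a client over a network socket.

```python
# Bare-bones client-server sketch (illustrative only): a central server
# accepts one client connection, reads a request, and sends back a reply.

import socket
import threading

def serve_once(host="127.0.0.1"):
    """Start a server that answers a single client request; return its port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))                   # port 0: let the OS pick a free port
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()            # wait for one client to connect
        request = conn.recv(1024)         # read the client's request
        conn.sendall(b"server saw: " + request)
        conn.close()
        srv.close()

    threading.Thread(target=handle).start()
    return srv.getsockname()[1]

def client_request(port, message):
    """Connect to the server, send a message, and return the reply."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(message)
        return sock.recv(1024)

port = serve_once()
print(client_request(port, b"hello"))     # -> b'server saw: hello'
```

The shape is the same one the 1970s research settled on: many lightweight clients, one central machine holding the data and doing the work.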
All of this work stems from Licklider’s original concept of a “Galactic Network,” now made concrete by IMPs on a connected network.
It has to be noted that over the next 20 to 25 years the Internet kept growing, yet because of severe bandwidth limits and technologies too expensive for mass distribution, the development of the cloud mostly took place behind closed doors. It wasn’t until the late 1990s, nearly 30 years after the ideas of the Internet and time-sharing first sprouted, that the notion of the cloud began to filter out into the public marketplace.
1995 – At a networking conference, attendees begin using the image of a cloud to indicate connected networks. The cloud image is also pressed into service whenever a diagram meant for public consumption proves too difficult to explain; it becomes the go-to symbol for anything too technical to lay out in detail.
1999 – Salesforce.com comes online. It is the first major company to deliver enterprise-level applications through a simple website.
As Salesforce.com comes online, telecommunication companies behind the scenes are starting to change their avenues of research and development. Seeing the coming rise of connected networks via what the public will call cloud computing, many telecom companies begin building out their infrastructure to support SaaS, utility computing (metered computing resources) and the cloud. This build-out encompasses everything from growing data center infrastructure to beefing up internal development teams capable of supporting the emerging public technology.
1998 – Google.com comes online, as does Netflix.com. At launch, Netflix.com maintains a website through which users pick DVDs to be mailed to them. While streaming hasn’t arrived yet, Netflix CEO Reed Hastings cites streaming as a hoped-for future business model for the company.
An Online Retailer Plays a Big Role in Building the Cloud
By the time the dot-com bubble bursts in the early 2000s, Amazon, one of the companies that survived it, begins to play a key role in the development of the modern cloud. As Amazon enters the cloud world under the Amazon Web Services banner, the company explores how to properly package, utilize and market the new abundance of high-capacity networks matched with low-cost personal computers. Combined with a growing reliance on virtualization technologies and service-oriented architectures, AWS develops a platform offering customers access to high-capacity storage, compute services such as web servers and databases, and custom-built applications. Everything AWS develops is purchasable via its website.
While Amazon is pioneering what will eventually become its Elastic Compute Cloud (EC2), many smaller web hosting providers spring up to meet the marketplace’s need for cheap web hosting, granting access to both publicly shared resources and dedicated personal resources. In this period, major players like GoDaddy, HostGator and FatCow emerge as leaders in web hosting. Alongside the larger players, a groundswell of smaller hosting companies springs to life to fill the gap for more personalized hosting, offering shared, dedicated and eventually VPS/cloud hosting services. Some of those companies, among them Rackspace, Firehost, 1and1, LiquidWeb and Linode, push the envelope on the resources a provider can offer consumers and dramatically drop the price of those offerings.
The combination of large and small providers all chasing the same market not only helps legitimize the hosting industry in the eyes of business; more importantly, it becomes the go-to development ground for programmers building the first generation of mobile applications.
2003 – The term Web 2.0 begins to take hold. Android Inc. is founded by Andy Rubin (Google acquires it in 2005), and the platform is open source by design.
2004 – Facebook comes online.
2006 – The seeds Amazon cultivated finally bear fruit: EC2 launches as a purchasable resource via AWS.
The Rise of Smartphones
2007 – The iPhone is born. Netflix launches streaming content.
2008 – The App Store opens. The first Android-based device, the HTC Dream, launches in October 2008. Soon after, the Android Market (later renamed Google Play) opens.
This point can’t be overstated: the rise of smartphones exponentially forced the growth of the cloud. Through both Google Play and Apple’s App Store, application use quickly flourishes, eventually eclipsing daily browser use. While Apple sells the iPhone through a tightly controlled process that keeps the iOS operating system continually up to date, the open-source nature of the Android platform not only allows more tinkering with apps and internal code, it rockets Android into the lead as the most used smartphone operating system in the world.