Industry-Changing Events and “the cloud”

Merriam-Webster tells me that etymology is “the history of a linguistic form (such as a word)” (the official definition goes on a little longer – click on the link if interested …)

The last couple of weeks I’ve run into a couple of “industry professionals” who are very skilled in a particular subset of “information technology/assurance/security/whatever” but who obviously have no idea what “the cloud” consists of in 2022.

Interrupting and then giving an impromptu lecture on the history and meaning of “the cloud” would have been impolite and ineffective – so here we are 😉 .

Back in the day …

Way back in the 1980s we had the “public switched telephone network” (PSTN) in the form of (monopoly) AT&T. You could “drop a dime” into a pay phone and make a local call. “Long distance” was substantially more expensive – with the first minute costing the most.

The justification for higher connection charges and then “per minute” charges was simply that the call was using resources in “another section” of the PSTN. How did calls get routed?

Back in 1980 if you talked to someone in the “telecommunications” industry they might have referred to a phone call going into “the cloud” and connecting on the other end.

(btw: you know all those old shows where they need “x” amount of time to “trace” a call – always a good dramatic device, but from a tech point of view the “phone company” knew where each end of the call originated – you know, simply because that was how the system worked)

I’m guessing that by the breakup of AT&T in 1984 most of the “telecommunications cloud” had gone digital – but I was more concerned with football games in the 1980s than telecommunications – so I’m honestly not sure.

In the “completely anecdotal” category, “long distance” had been the “next best thing to being there” (a famous telephone system commercial – check YouTube if interested) since at least the mid-1970s – oh, and “letter writing” (probably) ended because of low-cost long distance, not because of “email.”

Steps along the way …

Important technological steps along the way to the modern “cloud” could include:

  • the first “modem” in the early 1960s – that is “modulator/demodulator” if you are keeping score: a device that could take a digital signal and convert it to an analog wave for transmission over the PSTN, with another modem reversing the process on the other end.
  • Ethernet was invented in the early 1970s – which allowed computers on a local network to talk to each other. You are probably using some flavor of Ethernet on your LAN.
  • TCP/IP was “invented” in the 1970s, then became the language of ARPANET in the early 1980s. One way to define the “Internet” is as a “large TCP/IP network” – ’nuff said.

that web thing

Tim Berners-Lee gets credit for “inventing” the World Wide Web in 1989 while at CERN – which made “the Internet” much easier to use, and suddenly everyone wanted a “web site.”

Of course the “personal computer” needed to exist before we could get large-scale adoption of ANY “computer network” – but that is an entirely different story 😉

The very short version of the story is that personal computer sales greatly increased in the 1990s because folks wanted to use that new “interweb” thing.

A popular analogy for the Internet at the time was the “information superhighway” – with a personal computer running a web browser being the “car” part of the analogy.

Virtualization

Google tells me that “virtualization technology” actually goes back to the old mainframe/time-sharing systems of the 1960s, when IBM created the first “hypervisor.”

A “hypervisor” is what allows the creation of “virtual machines.” If you think of a physical computer as an empty warehouse that can be divided into distinct sections as needed, then a hypervisor is what we use to create those sections and assign resources to them.
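For the curious, here is a minimal sketch of asking a hypervisor which “sections” it has carved out – assuming a Linux host running a libvirt-managed hypervisor such as KVM and the libvirt-python bindings (everything here is illustrative; other hypervisors like vSphere or Hyper-V have their own tooling):

```python
# Minimal sketch (assumes a libvirt-managed hypervisor, e.g. KVM, and the
# "libvirt-python" package). Lists each virtual machine and the resources
# the hypervisor has assigned to it.
import libvirt

conn = libvirt.open("qemu:///system")  # typical URI for the local KVM/QEMU hypervisor

for dom in conn.listAllDomains():
    # dom.info() -> [state, max memory (KiB), memory in use (KiB),
    #                number of vCPUs, cumulative CPU time (ns)]
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(f"{dom.name()}: {vcpus} vCPU(s), {max_mem_kib // 1024} MiB max memory")

conn.close()
```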

The ins and outs of virtualization technology are beyond the scope of this article BUT it is safe to say that “commodity computer virtualization technology” was an industry-changing event.

The VERY short explanation is that virtualization allows for more efficient use of resources which is good for the P&L/bottom line.

(fwiw: any technology that gets accepted on a large scale in a relatively short amount of time PROBABLY involves saving $$ – but that is more of a personal observation than an industry truism.)

Also important was the development of “remote desktop” software – which would have been called “terminal access” before computers had “desktops.”

e.g. Wikipedia tells me that Microsoft’s “Remote Desktop Protocol” was introduced in Windows NT 4.0 – which ZDNet tells me was released in 1996 (fwiw: some of my expired certs involved Windows NT).

“Remote access” increased the number of computers a single person could support, which qualifies as another “industry changer.” As a rule of thumb, if you had more than 20 computers in your early-1990s company – you PROBABLY had enough computer problems to justify hiring an onsite tech.

With remote access tools not only could a single tech support more computers – they could support more locations. Sure, in the 1990s you probably still had to “dial in” since “always on” high-speed internet access didn’t really become widely available until the 2000s – but as always YMMV.
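As a rough modern-day illustration of “one tech, many machines” – RDP itself is graphical, so this hedged sketch uses SSH (via the paramiko library) instead to run the same quick health check against several hosts; the hostnames and username are made up:

```python
# Hedged sketch: run one command across several remote machines over SSH.
# Hostnames/username are illustrative; assumes key-based SSH access is set up.
import paramiko

HOSTS = ["office-pc-01.example.com", "office-pc-02.example.com", "branch-fs-01.example.com"]

for host in HOSTS:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="techsupport")  # uses keys in ~/.ssh by default
    _, stdout, _ = client.exec_command("uptime")
    print(f"{host}: {stdout.read().decode().strip()}")
    client.close()
```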

dot-com boom/bust/bubble

There was a “new economy” gold rush of sorts in the 1990s. Just like gold and silver exploration fueled a measurable amount of “westward migration” into what was at the time the “western frontier” of the United States – a measurable number of folks got caught up in “dot-com” hysteria, and “the web” became part of modern society along the way.

I remember a lot of talk about how the “new economy” was going to drive out traditional “brick and mortar” business. WELL, “the web” certainly goes beyond “industry changing” – but in the 1990s faith in an instant transformation of the “old economy” into a web dominated “new economy” reached zeitgeist proportions …

In 2022 some major metropolitan areas trace their start to the gold/silver rushes in the last half of the 19th century (San Francisco and Denver come to mind). There are also a LOT of abandoned “ghost towns.”

In the “big economic picture” the people running saloons/hotels/general stores in “gold rush areas” had a decent chance of outliving the “gold rush” – assuming that there was a reason for the settlement to be there other than “gold mining.”

The “dot-com rush” equivalent was that a large number of investors were convinced that a company could stay a “going concern” even if it didn’t make a profit. However – just like the people selling supplies to gold prospectors had a good chance of surviving the gold rush – the folks selling tools to create a “web presence” did alright – i.e. in 2022 the survivors of the “dot-com bubble” are doing very well (e.g. Amazon, Google).

Web Hosting

In the “early days of the web” establishing a “web presence” took (relatively) arcane skills. The joke was that if you could spell HTML then you could get a job as a “web designer” – ok, maybe it isn’t a “funny” joke – but you get the idea.

An in-depth discussion of web development history isn’t required – pointing out that web 1.0 was the time of “static web pages” is enough.

If you had a decent internet service provider they might have given you space on their servers for a “personal web page.” If you were a “local” business you might have been told by the “experts” to not worry about a web site – since the “web” would only be useful for companies with a widely dispersed customer base.

That wasn’t bad advice at the time – but the technology needed to mature. The “smart phone” (Apple’s iPhone, 2007) motivated the “mobile first” development strategy – if you can access the web through your phone, then that increases the value of localized, up-to-date web information.

“Web hosting” was another of those things that was going to be “free forever” (e.g. one of the tales of “dot-com bubble” woes was “GeoCities”) – which probably slowed down “web service provider” growth, but that is very much me guessing.

ANYWAY – in web 1.0 (when the average user was connecting by dial-up) the stress put on web servers was minimal – so simply paying to rent space on “someone else’s computer” was a viable option.

The next step up from “web hosting” might have been to rent a “virtual server” or “co-locate” your own server – both of which required more (relatively) arcane skills.

THE CLOUD

Some milestones worth pointing out:

  • 1998 – VMware “Workstation” released (virtualization on the desktop)
  • 1998 – “Google Search” launched – another “industry changing” event – ’nuff said
  • 2001 – VMware ESX (server virtualization)
  • 2004 – Facebook – noteworthy, but not “industry changing”
  • 2005 – Intel released the first CPUs with “Intel Virtualization Technology” (VT-x)
  • 2006 – Amazon Web Services (AWS)

Officially, Amazon described AWS as providing “IT infrastructure services to businesses in the form of web services” – i.e. “the cloud.”

NIST tells us that –

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.

NIST SP 800-145

If we do a close reading of the NIST definition – the “on-demand” and “configurable” portions are what differentiates “the cloud” from “using other folks computers/data center.”

I like the “computing as a utility” concept. What does that mean? Glad you asked – look on a Monopoly board and you will see the “utility companies” listed as “Water Works” and “Electric Company.”

i.e. “water” and “electric” are typically considered public utilities. If you buy a home you will (probably) get the water and electric changed into your name for billing purposes – and then you will pay for the amount of water and electric you use.

BUT you don’t have to use the “city water system” or local electric grid – you could choose to “live off the grid.” If you live in a rural area you might have a well for your water usage – or you might choose to install solar panels and/or a generator for your electric needs.

If you help your neighbors in an emergency by allowing them access to your well – or maybe connecting your generator to their house – you are a very nice neighbor, BUT you aren’t a “utility company” – i.e. your well/generator won’t have the capacity that the full blown “municipal water system” or electric company can provide.

Likewise, if you have a small datacenter and start providing “internet services” to customers – unless you are big enough to be “ubiquitous, convenient, and on-demand” you aren’t a “cloud provider.”

Also note the “as a service” aspect of the cloud – i.e. when you sign up you will agree to pay for what you use, but you aren’t automatically making a commitment for any minimum amount of usage.

That is as opposed to “web hosting” or “renting a server,” where you will probably agree to a monthly fee and a minimum term of service.

Billing options and service capabilities are obviously vendor specific. As a rule of thumb – unless you have “variable usage” then using “the cloud” PROBABLY won’t save you money over “web hosting”/”server rental.”

The beauty of the cloud is that users can configure “cloud services” to automatically scale up for an increase in traffic and then automatically scale down when traffic decreases.

e.g. imagine a web site that has very high traffic during “business hours” but then minimal traffic the other 16 hours of the day. A properly configured “cloud service” would scale up (costing more $$) during the day and then scale down (costing fewer $$) at night.
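For the curious, here is a hedged sketch of what that kind of schedule might look like on AWS using boto3 and an existing EC2 Auto Scaling group – the group name, instance counts, and cron times are made-up assumptions, and every cloud provider has its own equivalent:

```python
# Minimal sketch (not production config): grow an existing EC2 Auto Scaling
# group during business hours and shrink it overnight. "web-asg" and all of
# the numbers are illustrative assumptions.
import boto3

autoscaling = boto3.client("autoscaling")

# Scale up for the busy 8-hour window (Recurrence is a cron expression, evaluated in UTC by default)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="business-hours-scale-up",
    Recurrence="0 13 * * 1-5",   # roughly 8am US Central, Monday-Friday
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back down for the quiet 16 hours
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="after-hours-scale-down",
    Recurrence="0 23 * * 1-5",   # roughly 6pm US Central, Monday-Friday
    MinSize=1,
    MaxSize=2,
    DesiredCapacity=1,
)
```

(Scaling on actual load – e.g. a target-tracking policy on CPU or request count – is the more common setup, but the scheduled version maps most directly onto the “business hours” example above.)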

Yes, billing options become a distinguishing element of the “cloud” – which further muddies the water.

Worth pointing out is that if you are a “big internet company” you might get to the point where it is in your company’s best interest to build your own datacenters.

This is just the classic “rent” vs “buy” scenario – i.e. if you are paying more in “rent” than it would cost you to “buy” then MAYBE “buying your own” becomes an option (of course “buying your own” also means “maintaining” and “upgrading” your own). This tends to work better in real estate, where “equity”/property values tend to increase.
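As a back-of-the-envelope illustration – every number here is invented purely to show the arithmetic:

```python
# Toy rent-vs-buy comparison; all figures are made up for illustration only.
monthly_cloud_bill = 400_000        # current "rent" paid to the cloud provider ($/month)

datacenter_build_cost = 20_000_000  # up-front build-out, amortized over its useful life ($)
useful_life_months = 10 * 12
monthly_run_cost = 150_000          # power, bandwidth, staff, hardware refresh ($/month)

monthly_own_cost = datacenter_build_cost / useful_life_months + monthly_run_cost

print(f"cloud: ${monthly_cloud_bill:,.0f}/mo  vs  own: ${monthly_own_cost:,.0f}/mo")
# If "own" comes in consistently lower (and you can actually staff and maintain
# the thing), building your own starts to look attractive – the classic rent-vs-buy call.
```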

Any new “internet service” that strives to be “globally used” will (probably) start out using “the cloud” – and then if/when they are wildly successful, start building their own datacenters while decreasing their usage of the public cloud.

Final Thoughts

It Ain’t What You Don’t Know That Gets You Into Trouble. It’s What You Know for Sure That Just Ain’t So

Artemus Ward

As a final thought – “cloud service” usage was $332.3 BILLION in 2021, up from $270 billion in 2020 (according to Gartner).

There isn’t anything magical about “the cloud” – but it is a little more complex than just “using other people’s computers.”

The problem with “language” in general is that there are always regional and industry differences. e.g. “Salesforce” and “SAP” fall under the “cloud computing” umbrella – but Salesforce uses AWS to provide their “Software as a Service” product and SAP uses Microsoft Azure.

I just spent 2,000 words trying to explain the history and meaning of “the cloud” – umm, maybe a cloud by any other name would still be vendor-specific.

HOWEVER I would be VERY careful about choosing a cloud provider that isn’t offered by a “big tech company” (i.e. Microsoft, Amazon, Google, IBM, Oracle). “Putting all of your eggs in one basket” is always a risky proposition (especially if you aren’t sure that the basket is good in the first place) – as always, caveat emptor …

