… “spirits” as a reference to “distilled alcohol beverages” apparently traces back to the belief that the drinks held the “life force” (i.e. “spirit”) or the “essence” of the grains or plants that were used.
the English word “spirit” (first usage as a noun – 14th Century) traces back to the Latin “spiritus” (literally, breath, from “spirare” to blow, breathe – thank you Merriam-Webster)
SO most likely folks figured out how to make wine and beer first – then the wine makers figured out the distillation process, and we got things like “aqua vitae” and whiskey/whisky. fwiw: If the product is made in America or Ireland – it is most likely called whiskey (notice the ‘e’) and the rest of the world spells it without the ‘e’ – so we get “Scotch whisky” but that is “Jim Beam Bourbon Whiskey”
for most of American history (well, probably right up to the “prohibition era” – 1920-1933) whiskey almost had the status of “legal tender.” In an era before mass transit and refrigeration making whiskey was simply good business – i.e. an in demand product that had a long shelf life and could be transported (relatively) easily.
Of course in 2023 making “spirits” is still very profitable for the same reasons – but distillation of alcohol is also heavily regulated, with protecting public health (if done incorrectly you get methanol – which will kill you fast – instead of ethanol – which will also kill you, just a little slower) and collecting tax $$ being the big factors.
Oh, and “home brewing” of beer became legal in the 1970’s — I don’t know if “making wine” for “personal consumption” has ever been illegal in the U.S. (i.e. I’m pretty sure “home wine making” was still allowed even during the “prohibition era” – for “religious/cultural” reasons, but I’m not 100% on that)
… but of course if you mess up the wine or beer making process – you just end up with something that tastes bad, but isn’t likely to kill you fast. Transporting some beers across state lines used to be illegal (e.g. the plot of “Smokey and the Bandit”). Living in Ohio we couldn’t (legally) get “Yuengling” beer from Pennsylvania until 2011 – there are conflicting stories on “why” it took until 2011, but I’m sure the “root cause” goes back to prohibition-era laws (the two states border each other – so you would think that Ohio would have been one of the first states to get “Yuengling” distributors rather than the last).
random observation — a “beer truck” was involved in an accident in Ohio recently – I was a little concerned and then I saw pictures of cases of “Coors Light” and thought “that wasn’t a beer truck – that was Coors Light – which isn’t the same thing” (ok, I like darker, heavier beers – neither “Bud” nor “Coors” is at the top of my “preferred list”)
ANYWAY – the idea of the “mixed drink” is probably as old as “drinking.” From an “ancient history” point of view alcoholic beverages were always “watered down” before being consumed – probably the same idea as modern “carbonated beverage” distribution. e.g. the “fizzy drink maker” sells “drink syrup” to establishments that then add the carbonation before serving to customers – which is how restaurants are able to give “free refills” on “fountain drinks” and “fast food chains”/“convenience stores” can sell a “huge drink” for $1. SO the “wise ancients” would have stored their wine/liquor in a more concentrated form – and then added water to adjust for “proof”/potency
random thought: for “mixed alcohol drinks” larger ice cubes tend to be used (primarily) because the cubes melt slower, and therefore don’t dilute the drink as much – and if you are paying for some exotic concoction that comes with ice in the glass, you might care how it tastes – i.e. no ice served with “shots” but they will probably have been “chilled” if requested or if the bartender wants to put on a show.
For whiskey they have “whiskey stones” that can be chilled and won’t melt – but they come across as gimmicky to me – maybe add a drop of water to that high quality whiskey/whisky to activate the flavors, and if you are “drinking for effect” don’t pay for the good stuff
The “modern cocktail” is sometimes described as the United States’ contribution to world “liquor culture.” The short form idea being that a lot of “cocktails” were created to mask the taste of bad liquor mass produced (illegally – you know gangsters/bootlegging/that whole thing) during prohibition — Winston Churchill’s quote about how to make a martini (“Glance at the vermouth bottle briefly while pouring the juniper distillate freely.”) illustrates the point that “high quality gin” (which Mr Churchill would have been drinking) didn’t require vermouth to make it palatable.
That same concept kind of applied to “tough guy” drinks – e.g. the cowboy is drinking whiskey while standing at the bar – the hard-boiled private eye had a bottle of whiskey in a desk drawer. Philip Marlowe tended to drink “Gimlets” – which (originally) were just gin (or vodka) and lime juice, though you can add simple syrup if you want it sweeter. The “Gimlet” name most likely traces back to a 19th century British Navy doctor (Rear-Admiral Sir Thomas Desmond Gimlette) – who suggested adding lime juice to officers’ “daily ration” of gin (enlisted men got rum – add lime juice to the rum and you get “Grog”)
In the 30+ years I’ve been a working “computers industry professional” I’ve done a lot of jobs, used a lot of software, and spent time teaching other folks how to be “computer professionals.”
I’m also an “amateur historian” – i.e. I enjoy learning about “history” in general. I’ve had real “history teachers” point out that (in general) people are curious about “what happened before them.”
Maybe this “historical curiosity” is one of the things that distinguishes “humans” from “less advanced” forms of life — e.g. yes, your dog loves you, and misses you when you are gone – but your dog probably isn’t overly concerned with how its ancestors lived (assuming that your dog has the ability to think in terms of “history” – but that isn’t the point).
As part of “teaching” I tend to tell (relevant) stories about “how we got here” in terms of technology. Just like understanding human history can/should influence our understanding of “modern society” – understanding the “history of a technology” can/should influence/enhance “modern technology.”
The Problem …
There are multiple “problems of history” — which are not important at the moment. I’ll just point out the obvious fact that “history” is NOT a precise science.
Unless you have actually witnessed “history” then you have to rely on second hand evidence. Even if you witnessed an event, you are limited by your ability to sense and comprehend events as they unfold.
All of which is leading up to the fact that “this is the way I remember the story.” I’m not saying I am 100% correct and/or infallible – in fact I will certainly get something wrong if I go on long enough – any mistakes are mine and not intentional attempts to mislead 😉
Hardware/Software
Merriam-Webster tells me that “technology” is about “practical applications of knowledge.”
random thought #1 – “technology” changes.
“Cutting edge technology” becomes common and quickly taken for granted. The “Kansas City” scene from Oklahoma (1955) illustrates the point (“they’ve gone just about as far as they can go”).
Merriam-Webster tells me that the term “high technology” was coined in 1969 referring to “advanced or sophisticated devices especially in the fields of electronics and computers.”
If you are a ‘history buff” you might associate 1969 with the “race to the moon”/moon landing – so “high technology” equaled “space age.” If you are an old computer guy – 1969 might bring to mind the Unix Epoch – but in 2022 neither term is “high tech.”
random thought #2 – “software”
The term “hardware” in English dates back to the 15th Century. The term originally meant “things made of metal.” In 2022 the term refers to the “tangible”/physical components of a device – i.e. the parts we can actually touch and feel.
I’ve taught the “intro to computer technology” more times than I can remember. Early on in the class we distinguish between “computer hardware” and “computer software.”
It turns out that the term “software” only goes back to 1958 – invented to refer to the parts of a computer system that are NOT hardware.
The original definition could have referred to any “electronic system” – i.e. programs, procedures, and documentation.
In 2022 – Merriam-Webster tells me that “software” is also used to refer to “audiovisual media” – which is new to me, but instantly makes sense …
ANYWAY – “computer software” typically gets divided into two broad categories – “applications” and “operating systems” (OS or just “systems”).
The “average non-computer professional” is probably unaware and/or indifferent to the distinction between “applications” and the OS. They can certainly tell you whether they use “Windows” or a “Mac” – so saying people are “unaware” probably isn’t as correct as saying “indifferent.”
Software lets us do something useful with hardware
an old textbook
The average user has work to get done – and they don’t really care about the OS except to the point that it allows them to run applications and get something done.
Once upon a time – when a new “computer hardware system” was designed, a new “operating system” would also be written specifically for that hardware. e.g. The Mythical Man-Month (the story of building OS/360) is required reading for anyone involved in management in general and “software development” in particular …
Some “industry experts” have argued that Bill Gates’ biggest contribution to the “computer industry” was the idea that “software” could be/should be separate from “hardware.” While I don’t disagree – it would require a retelling of the “history of the personal computer” to really put the remark into context — I’m happy to re-tell the story, but it would require at least two beers – i.e. not here, not now
In 2022 there are a handful of “popular operating systems” that also get divided into two groups – e.g. the “mobile OS” – Android, iOS, and the “desktop OS” Windows, macOS, and Linux
The Android OS is the most installed OS if you are counting “devices.” Since Android is based on Linux – you COULD say that Linux is the most used OS, but we won’t worry about things like that.
Apple’s iOS on the other hand is probably the most PROFITABLE OS. iOS is based on the “Berkeley Software Distribution” (BSD) – which is very much NOT Linux, but they share some code …
Microsoft Windows still dominates the desktop. I will not be “bashing Windows” in any form – just pointing out that 90%+ of the “desktop” machines out there are running some version of Windows.
The operating system that Apple includes with their personal computers in 2022 is also based on BSD. Apple declared themselves a “consumer electronics” company a long time ago — fun fact: the Beatles (yes, John, Paul, George, and Ringo – those “Beatles”) started a record company called “Apple” in 1968 – so when the two Steves (Jobs and Wozniak) wanted to call their new company “Apple Computer” they had to agree to stay out of the music business – AND we are moving on …
On the “desktop,” then, Linux is the rounding error between Windows machines and Macs.
What is holding back “Linux on the desktop?” Well, in 2022 the short answer is “applications” and more specifically “gaming.”
You cannot gracefully run Microsoft Office, Avid, or the Adobe Suite on a Linux based desktop. Yes, there are alternatives to those applications that perform wonderfully on Linux desktops – but that isn’t the point.
e.g. that “intro to computers” class I taught used Microsoft Word, and Excel for 50% of the class. If you want to edit audio/video “professionally” then you are (probably) using Avid or Adobe products (read the credits of the next “major Hollywood” movie you watch).
Then the chicken and egg scenario pops up – i.e. a “big application developer” would (probably) release a Linux friendly version if more people used Linux on the desktop – but people don’t use Linux on the desktop because they can’t run all of the application software they want – because there isn’t a Linux version of the application.
Yes, I am aware of WINE – but it illustrates the problem much more than acts as a solution — and we are moving on …
Linux Distros – a short history
Note that “Linux in the server room” has been a runaway success story – so it is POSSIBLE that “Linux on the desktop” will gain popularity, but not likely anytime soon.
Also worth pointing out — it is possible to run a “Microsoft free” enterprise — but if the goal is lowering the “total cost of ownership” then (in 2022) Microsoft still has a measurable advantage over any “100% Linux based” solution.
If you are a “large enterprise” then the cost of the software isn’t your biggest concern – “support” is (probably) “large enterprise, Inc’s” largest single concern.
fwiw: IBM and Red Hat are making progress on “enterprise level” administration tools – but in 2022 …
ANYWAY – the “birthdate” for Linux is typically given as 1991.
Under the category of “important technical distinction” I will mention that “Linux” is better described as the “kernel” for an OS and NOT an OS in and of itself.
Think of Linux as the “engine” of a car – i.e. the engine isn’t the “car”, you need a lot of other systems working with and around the engine for the “car” to function.
For the purpose of this article I will describe the combination of “Linux kernel + other operating system essentials” as a “Linux Distribution” or more commonly just “distro.” Ready? ok …
1992 gave us Slackware. Patrick Volkerding started the “oldest surviving Linux distro,” which accounted for an estimated 80 percent of the “Linux” market until the mid-1990s.
1992–1996 gave us SUSE Linux (the ancestor of today’s openSUSE) – Thomas Fehr, Roland Dyroff, Burchard Steinbild, and Hubert Mantel get the credit. I tend to call SUSE “German Linux” – they were essentially selling a “German version of Slackware” on floppy disks until 1996.
btw: the “modern Internet” would not exist as it is today without Linux in the server room. All of these “early Linux distros” had business models centered around “selling physical media.” Hey, download speeds were of the “dial-up” variety and you were paying “by the minute” in most of Europe – so “selling media” was a good business model …
1993–1996 gave us the start of Debian – Ian Murdock gets the credit. The goal was a more “user friendly” Linux. The first “stable version” arrived in 1996 …
1995 gave us the Red Hat Linux — this distro was actually my “introduction to Linux.” I bought a book that had a copy of Red Hat Linux 5.something (I think) and did my first Linux install on an “old” pc PROBABLY around 2001.
During the dotcom “boom and bust” a LOT of Linux companies went public. Back then it was “cool” to have a big runup in stock valuation on the first day of trading – so when Red Hat “went public” in 1999 they had the eighth-biggest first-day gain in the history of Wall Street.
The run-up was a little manufactured (i.e. they didn’t release a lot of stock for purchase on the open market). My guess is that in 2022 the folks arranging the “IPO” would set a higher initial price or release more stock if they thought the offering was going to be extremely popular.
Full disclosure – I never owned any Red Hat stock, but I was an “interested observer” simply because I was using their distro.
Red Hat’s “corporate leadership” decided that the “selling physical media” business plan wasn’t a good long term strategy – especially as “high speed Internet” access spread across the U.S.
e.g. that “multi hour dial up download” is now an “under 10 minute iso download” – so I’d say the “corporate leadership” at Red Hat, Inc made the right decision.
Around 2003 the Red Hat distro kind of “split” into “Red Hat Enterprise Linux” (RHEL – sold by subscription to an “enterprise software” market) and the “Fedora project” (meant to be a testing ground for future versions of RHEL as well as the “latest and greatest” Linux distro).
e.g. the Fedora project has a release target of every six months – current version 35. RHEL has a longer planned release AND support cycle – which is what “enterprise users” like – current version 9.
btw – yes RHEL is still “open source” – what you get for your subscription is “regular updates from an approved/secure channel and support.” AlmaLinux is a “clone” of RHEL, while CentOS (“sponsored” by Red Hat) has shifted to CentOS Stream – which now sits just upstream of RHEL rather than being a downstream clone.
IBM “acquired” Red Hat in 2019 – but nothing really changed on the “management” side of things. IBM has been active in the open source community for a long time – so my guess is that someone pointed out that a “healthy, independent Red Hat” is good for IBM’s bottom line in the present and future.
ANYWAY – obviously Red Hat is a “subsidiary” of IBM – but I’m always surprised when “long time computer professionals” seem to be unaware of the connections between RHEL, Fedora Project, CentOS, and IBM (part of what motivated this post).
Red Hat has positioned itself as “enterprise Linux” – but the battle for “consumer Linux” still has a lot of active competition. The Fedora project is very popular – but my “non enterprise distros of choice” are both based on Debian:
Ubuntu (first release 2004) – “South African Internet mogul Mark Shuttleworth” gets credit for starting the distro. The idea was that Debian could be more “user friendly.” Occasionally I teach an “introduction to Linux” class, and the big differences between “Debian” and “Ubuntu” are noticeable – but they are very much in the “ease of use” category (i.e. “Ubuntu” is “easier” for new users to learn)
I would have said that “Ubuntu” meant “community” (which I probably read somewhere) but the word is of ancient Zulu and Xhosa origin and more correctly gets translated “humanity to others.” Ubuntu has a planned release target of every six months — as well as a longer “long term support” (LTS) version.
Linux Mint (first release 2008) – Clément Lefèbvre gets credit for this one. Technically Linux Mint describes itself as “Ubuntu based” – so of course Debian is “under the hood.” I first encountered Linux Mint through a reviewer who described it as the best Linux distro for people trying to not use Microsoft Windows.
The differences between Mint and Ubuntu are cosmetic and also philosophical – i.e. Mint will install some “non open source” (but still free) software to improve “ease of use.”
The beauty of “Linux” is that it can be “enterprise level big” software or it can be “boot from a flash drive” small. It can utilize modern hardware and GPU’s or it can run on 20 year old machines. If you are looking for specific functionality, there might already be a distro doing that – or if you can’t find one, you can make your own
Full disclosure: “Star Wars” was released in 1977 – when I was 8ish years old. This post started as a “reply” to something else – and grew – so I apologize for the lack of real structure – kind of a work in progress …
I am still a “George Lucas” fan – no, I didn’t think episodes I, II, and III were as good as the original trilogy but I didn’t hate them either.
George Lucas obviously didn’t have all of the “backstory” for the “Jedi” training fully formed when he was making “Star Wars” back in the late 1970’s
in fact the “mystery” of the Jedi Knights was (probably) part of the visceral appeal of the original trilogy (Episodes IV, V, and VI – for those playing along)
As always when you start trying to explain the “how” and “why” behind successful “science fantasy” you run into the fact that these are all just made up stories and NOT an organized religion handed down by a supreme intelligence
if you want to start looking at “source material” for the “Jedi” – the first stop is obvious – i.e. they are “Jedi KNIGHTS” – which should obviously bring to mind the King Arthur legend et al
in the real world a “knight in training” started as a “Page” (age 7 to 13), then became a “Squire” (age 14 to 18-21), and then would become a “Knight”
of course the whole point of being a “Knight” was (probably) to be of service and get granted some land somewhere so they could get married and have little ones
since Mr Lucas was making it all up – he also made his Jedi “keepers of the faith,” combining the idea of “protectors of the Republic” with “priestly celibacy” — then the whole “no attachments”/possessions thing comes straight from Buddhism
btw: all this is not criticism of George Lucas – in fact his genius (again in Episodes IV, V, VI) was in blending them together and telling an entertaining story without beating the audience over the head with minutiae
ANYWAY “back in the 20th century” describing something as the “Disney version” used to mean that it was “nuclear family friendly” — feel free to psychoanalyze Walt Disney if you want, i.e. he wasn’t handing down “truth from the mountain” either — yes, he had a concept of an “idealized” childhood that wasn’t real – but that was the point
just like “Jedi Knights” were George Lucas’ idealized “Knights Templar” – the real point is that they are IDEALIZED for a target audience of “10 year olds” – and when you start trying to explain too much the whole thing falls apart
e.g. the “Jedi training” as it has been expanded/over explained would much more likely create sociopaths than “wise warrior priests” — which for the record is my same reaction to Plato’s “Republic” – i.e. that the system described would much more likely create sociopaths that only care about themselves rather than “philosopher kings” capable of ruling with wisdom
I just watched a documentary on the “Cola wars” – and something obvious jumped out at me.
First I’ll volunteer that I prefer Pepsi – but this is 100% because Coke tends to disturb my stomach MORE than Pepsi does.
full disclosure – I get the symptoms of “IBS” if I drink multiple “soft drinks” multiple days in a row. I’m sure this is a combination of a lot of factors – age, genetics, whatever.
Of course – put in perspective the WORST thing for my stomach (as in “rumbly in the tummy”) when I was having symptoms was “pure orange juice” – but that isn’t important.
My “symptoms” got bad enough that I was going through bottles of antacid each week, and I tried a couple “over the counter” acid reflux products. Eventually I figured out that changing my diet – more yogurt and tofu, fewer “soft drinks” – helped a LOT.
The documentary was 90 minutes long – and a lot of time was spent on people expressing how much they loved one brand or the other. I’m not zealous for either brand – and I would probably choose Dr Pepper if I had to choose a “favorite” drink
Some folks grew up drinking one beverage or the other and feel strongly about NOT drinking the “competitor” – but again, my preference for Pepsi isn’t visceral.
Habit
The massive amount of money spent by Coke and Pepsi marketing their products becomes an exercise in “marketing confirmation bias” for most of the population – but each new generation in the U.S. has to experience some form of the “brand wars” – Coke vs Pepsi, Nike vs Adidas, PC vs Mac – whatever.
e.g. As a “technology professional” I will point out that Microsoft does a good job of “winning hearts and minds” by getting their products in the educational system.
If you took a class in college teaching you “basic computer skills” in the last 20 years – that class was probably built around Microsoft Office. Having taught those classes for a couple years I can say that students learn “basic computer skills” and also come away with an understanding of “Microsoft Office” in particular.
When those students need to buy “office” software in the future, what do you think they will choose?
(… and Excel is a great product – I’m not bashing Microsoft by any means 😉 )
Are you a “Mac” or a “PC”? Microsoft doesn’t care – both are using Office. e.g. Quick name a spreadsheet that ISN’T Excel – there are some “free” ones but you get the point …
The point is that human beings are creatures of habit. After a certain age – if you have “always” used product “x” then you are probably going to keep on using product “x” simply because it is what you have “always used.”
This fact is well known – and why marketing to the “younger demographic” is so profitable/prized.
ALL OF WHICH MEANS – that if you can convince a sizable share of the “youth market” that your drink is “cool” (or whatever the kids say in 2022) – then you will (probably) have created a lifelong customer
Taste Tests
Back to the “cola wars”…
The Pepsi Challenge deserves a place in the marketing hall of fame — BUT it is a rigged game.
The “Pepsi challenge” was set up as a “blind taste test.” The “test subject” had two unmarked cups placed in front of them – one cup containing Pepsi and the other containing Coke.
The person being tested drinks from one cup, then from the second cup, and then chooses which one they prefer.
Now, according to Pepsi – people preferred Pepsi to Coke by a 2:1 margin. Which means absolutely nothing.
The problem with the “taste test” is that the person tastes one sugary drink, and then immediately tastes a second sugary drink. SO being able to discern the actual taste difference between the two is not possible.
If you wanted an honest “taste test” then the folks being tested should have approached the test like a wine tasting. e.g. “swish” the beverage back and forth, suck in some air to get the full “flavor”, and then spit it out. Maybe have something to “cleanse the palate” between the drinks …
(remember “flavor” is a combination of “taste” and “smell”)
For the record – yes, I think Coke and Pepsi taste different – BUT the difference is NOT dramatic.
The documentary folks interviewed Coke and Pepsi executives that worked at the respective companies during the “cola wars” – and most of those folks were willing to take the “Pepsi Challenge”
A common complaint was that both drinks tasted the same – and if you drink one, then drink another they DO taste the same – i.e. you are basically tasting the first drink “twice” NOT two unique beverages.
fwiw: most of the “experts” ended up correctly distinguishing between the two – but most of them took the time to “smell” each drink, and then slowly sip. Meanwhile the “Pepsi Challenge” in the “field” tended to be administered in a grocery store parking lot – which doesn’t exactly scream “high validity.”
ANYWAY – you can draw a dotted line directly from the “Pepsi Challenge” (as unscientific as it was) to “New Coke” – i.e. the “Pepsi Challenge” convinced the powers that be at Coke that they needed to change.
So again, the “Pepsi Challenge” was great marketing but it wasn’t a fair game by any means.
fwiw: The documentary (“Cola Wars” from the History Channel in 2019) is interesting from a branding and marketing point of view. It was on hoopladigital, and is probably available online elsewhere …
Difference between “Sales” and “Marketing”
If you are looking at a “business statement”/profit and loss statement of some kind – the “top line” is probably gonna be “total revenue” (i.e. “How much did the company make”). The majority of “revenue” is then gonna be “sales” related in some form.
SO if you make widgets for $1 and sell them for $2 – and you sell 100 widgets – then your “total revenue” will be $200 (the top line), your “cost of goods sold” will be $100, and the “bottom line” will be revenue minus cost of goods sold – i.e. $200 − $100 = $100 in this extremely simple example.
In the above example the expense involved in “selling widgets” is baked into the $1 “cost of goods sold” – so maybe the raw materials for each widget is 50 cents, then 30 cents per widget in “labor”, and 20 cents per widget for sales and marketing.
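Just to make the arithmetic concrete – here is a toy sketch of that widget P&L in Python (every number is the made-up one from the example above, not real financial data):

```python
# Toy profit-and-loss sketch for the hypothetical widget example above.
# Every number here is made up - they come from the example, not real data.

UNITS_SOLD = 100
PRICE_PER_WIDGET = 2.00            # what the customer pays
MATERIALS_PER_WIDGET = 0.50        # raw materials
LABOR_PER_WIDGET = 0.30            # labor
SALES_MARKETING_PER_WIDGET = 0.20  # sales and marketing baked into the widget cost

cost_per_widget = MATERIALS_PER_WIDGET + LABOR_PER_WIDGET + SALES_MARKETING_PER_WIDGET

total_revenue = UNITS_SOLD * PRICE_PER_WIDGET       # the "top line"
cost_of_goods_sold = UNITS_SOLD * cost_per_widget
bottom_line = total_revenue - cost_of_goods_sold    # revenue minus cost of goods sold

print(f"Top line (total revenue): ${total_revenue:.2f}")       # $200.00
print(f"Cost of goods sold:       ${cost_of_goods_sold:.2f}")  # $100.00
print(f"Bottom line:              ${bottom_line:.2f}")         # $100.00
```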
Then “sales” covers everything involved in actually getting a widget to the customer. While “marketing” is about finding the customer and then educating them about how wonderful your widgets are – and of course how they can buy a widget. e.g. marketing and sales go hand in hand but they are not the same thing.
The “widget market” is all of the folks that might want to use widgets. “Market share” is then the number of folks that use a specific company’s widgets.
Marketing famously gets discussed as “5 P’s” — Product, Place, Price, Promotion, and People.
Obviously the widget company makes “widgets” (Product)- but should they (A) strive to make the highest quality widget possible that will last for years (i.e. “expensive to produce”) or should they (B) make a low cost, disposable widget?
Well, the answer is “it depends” – and some of the factors involved in the “Product” decision are the other 4 P’s — which will change dramatically between scenario A and B.
A successful company will understand the CUSTOMER and how the customer uses “widgets” before deciding to venture into the “widget market space”
This is why you hear business folks talk about “size of markets” and “price sensitivity of markets.” If you can’t make a “better” widget or a less expensive widget – then you are courting failure …
SO Coke and Pepsi are both “mature” companies that have established products, methods and markets – so growing their market share requires something more than just telling folks that “our product tastes good”
In the “Cola Wars” documentary they point out the fact that the competition between Coke and Pepsi served to grow the entire “soft drink market” – so no one really “lost” the cola wars. e.g. in 2020 the “global soft drink market” was valued at $220 BILLION – but the market for “soft drinks” fragmented as it grew.
The mini-“business 101” class above illustrates why both Coke and Pepsi have aggressively branched out into “tea” and “water” products since the “Cola wars.”
It used to be that the first thing Coke/Pepsi would do when moving into a new market was to build a “bottling plant.” Syrup could then be shipped to the different markets and “bottled” close to where it would be consumed – which saves $$ on shipping costs.
I suppose if you are a growing “beverage business” then selling “drink mix” online might be a profitable venture – unless you happen to have partners in “distant markets” that can bottle and distribute your product – i.e. Coke and Pepsi are #1 and #2 in the soft drink market and no one is likely to challenge either company anytime soon.
“Soft drinks” is traditionally defined as “non alcoholic” – so the $220 billion is spread out over a lot of beverages/companies. Coke had 20% of that market and Pepsi 10% – but they are still very much the “big players” in the industry. The combined market share of Coke and Pepsi is equal to that of the next 78 companies combined (e.g. #3 is Nestle, #4 Suntory, #5 Danone, #6 Dr Pepper Snapple, #7 Red Bull).
My takeaway …
umm, I got nothing. This turned into a self-indulgent writing exercise. Thanks for playing along.
In recent years PepsiCo has been driving growth by expanding into “snacks” – so a “Cola wars 2” probably isn’t likely …
I’m not looking to go into the soft drink business – but it is obviously still a lucrative market. I had a recipe for “home made energy drink” once upon a time – maybe I need to find that again …
Come my friends, let us reason together … (feel free to disagree, none of this is dogma)
There are a couple of “truisms” that APPEAR to conflict –
Truism 1:
The more things change the more they stay the same.
… and then …
Truism 2:
The only constant is change.
Truism 1 seems to imply that “change” isn’t possible while Truism 2 seems to imply that “change” is the only possibility.
There are multiple ways to reconcile these two statements – for TODAY I’m NOT referring to “differences in perspective.”
Life is like a dogsled team. If you aren’t the lead dog, the scenery never changes.
(Lewis Grizzard gets credit for ME hearing this, but he almost certainly didn’t say it first)
Consider that we are currently travelling through space and the earth is rotating at roughly 1,000 miles per hour – but sitting in front of my computer writing this, I don’t perceive that movement. Both the dogsled and my relative lack of perceived motion are examples of “perspective” …
Change
HOWEVER, “different perspectives” or points of view isn’t what I want to talk about today.
For today (just for fun) imagine that my two “change” truisms are referring to different types of change.
Truism 1 is “big picture change” – e.g. “human nature”/immutable laws of the universe.
Which means “yes, Virginia there are absolutes.” Unless you can change the physical laws of the universe – it is not possible to go faster than the speed of light. Humanity has accumulated a large “knowledge base” but “humans” are NOT fundamentally different than they were 2,000 years ago. Better nutrition, better machines, more knowledge – but humanity isn’t much different.
Truism 2 can be called “fashion“/style/”what the kids are doing these days” – “technology improvements” fall squarely into this category. There is a classic PlayStation 3 commercial that illustrates the point.
Once upon a time:
mechanical pinball machines were “state of the art.”
The Atari 2600 was probably never “high tech” – but it was “affordable and ubiquitous” tech.
no one owned a “smartphone” before 1994 (the IBM Simon)
the “smartphone app era” didn’t start until Apple released the iPhone in 2007 (but credit for the first “App store” goes to someone else – maybe NTT DoCoMo?)
SO fashion trends come and go – but the fundamental human needs being served by those fashion trends remain unchanged.
What business are we in?
Hopefully, it is obvious to everyone that it is important for leaders/management to understand the “purpose” of their organization.
If someone is going to “lead” then they have to have a direction/destination. e.g. A tourist might hire a tour guide to “lead” them through interesting sites in a city. Wandering around aimlessly might be interesting for awhile – but could also be dangerous – i.e. the average tourist wants some guidance/direction/leadership.
For that “guide”/leader to do their job they need knowledge of the city AND direction. If they have only one or the other (knowledge OR direction), then they will fail at their job.
The same idea applies to any “organization.” If there is no “why”/direction/purpose for the organization then it is dying/failing – regardless of P&L.
Consider the U.S. railroad system. At one point railroads were a huge part of the U.S. economy – the rail system opened up the western part of the continent and ended the “frontier.”
However, a savvy railroad executive would have understood that people didn’t love railroads – what people valued was “transportation.”
Just for fun – get out any map and look at the location of major cities. It doesn’t have to be a U.S. map.
The point I’m working toward is that throughout human history, large settlements/cities have centered around water. Either ports to the ocean or next to riverways. Why? Well, obviously humans need water to live but also “transportation.”
The problem with waterways is that going with the current is much easier than going against the current.
SO this problem was solved first by “steam powered boats” and then railroads. The early railroads followed established waterways connecting established cities. Then as railroad technology matured towns were established as “railway stations” to provide services for the railroad.
Even as the railroads became a major portion of the economy – it was NEVER about the “railroads” it was about “transportation”
fwiw: then the automobile industry happened – once again, people don’t care so much about “cars,” what they want/need is “transportation”
If you are thinking “what about ‘freight’ traffic” – well, this is another example of the tools matching the job. Long haul transportation of “heavy” items is still efficiently handled by railroads and barges – it is “passenger traffic” that moved on …
We could do the same sort of exercise with newspapers – i.e. I love reading the morning paper, but the need being satisfied is “information” NOT a desire to just “read a physical newspaper”
What does this have to do with I.T.?
Well, it has always been more accurate to say that “information technology” is about “processing information” NOT about the “devices.”
full disclosure: I’ve spent a lifetime in and around the “information technology” industry. FOR ME that started as working on “personal computers” then “computer networking”/LAN administration – and eventually I picked up an MBA with an “Information Management emphasis”.
Which means I’ve witnessed the “devices” getting smaller, faster, more affordable, as well as the “networked personal computer” becoming de rigueur. However, it has never been about “the box” – i.e. most organizations aren’t “technology companies” but every organization utilizes “technology” as part of their day to day existence …
Big picture: The constant is that “good I.T. practices” are not about the technology.
Backups
When any I.T. professional says something like “good backups” solve/prevent a lot of problems it is essential to remember how a “good backup policy” functions.
Back in the day folks would talk about a “grandfather/father/son” strategy – if you want to refer to it as “grandmother/mother/daughter” the idea is the same. At least three distinct backups – maybe a “once a month” complete backup that might be stored in a secure facility off-site, a “once a week” complete backup, and then daily backups that might be “differential.”
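Here is a minimal sketch of how that rotation decision might be scheduled – the “monthly full/weekly full/daily differential” split is just the hypothetical policy described above, not the behavior of any particular backup product:

```python
# Sketch of a grandfather/father/son rotation decision.
# The policy (monthly full stored off-site, weekly full, daily differential)
# is the hypothetical one described above - adjust to match your own needs.
from datetime import date, timedelta

def backup_tier(day: date) -> str:
    """Return which kind of backup the policy calls for on a given day."""
    if day.day == 1:
        return "monthly full (grandfather) - send a copy off-site"
    if day.weekday() == 6:  # Sunday
        return "weekly full (father)"
    return "daily differential (son) - changes since the last full backup"

# Example: what does the policy call for during the first week of May 2022?
start = date(2022, 5, 1)
for offset in range(7):
    d = start + timedelta(days=offset)
    print(d.isoformat(), "->", backup_tier(d))
```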
It is important to remember that running these backups is only part of the process. The backups also need to be checked on a regular basis.
Checking the validity/integrity of backups is essential. The time to check your backups is NOT after you experience a failure/ransomware attack.
Of course how much time and effort an organization should put into their backup policy is directly related to the value of their data. e.g. How much data are you willing to lose?
Just re-image it
Back in the days of the IBM PC/XT, if/when a hard drive failed it might take a day to get the system back up. After installing the new hard drive, formatting the drive and re-installing all of the software was a time intensive manual task.
Full “disk cloning” became an option around 1995. “Ghosting” a drive (i.e. “cloning”) belongs in the acronym Hall of Fame — I’m told it was supposed to stand for “general hardware-oriented system transfer.” The point being that now if a hard drive failed, you didn’t have to manually re-install everything.
Jump forward 10 years and Local Area Networks are everywhere – computer manufacturers had been including ‘system restore disks’ for a long time AND software to clone and manage drives was readily available. The “system cloning” features get combined with “configuration management” and “remote support” and this is the beginning of the “modern I.T.” era.
Now it is possible to “re-image” a system as a response to software configuration issues (or malware). Disk imaging is not a replacement for a good backup policy – but it reduced “downtime” for hardware failures.
The more things change …
Go back to the 1980’s/90’s and you would find a lot of “dumb terminals” connecting to a “mainframe” type system (well, by the 1980s it was probably a “minicomputer” not a full blown “mainframe”).
A “dumb terminal” has minimal processing power – enough to accept keyboard input and provide monitor output, and connect to the local network.
Of course those “dumb terminals” could also be “secured” so there were good reasons for keeping them around for certain installations. e.g. I remember installing a $1,000 expansion card into new late 1980’s era personal computers to make it function like a “dumb terminal” – but that might have just been the Army …
Now in 2022 we have “chrome books” that are basically the modern version of “dumb terminals.” Again, the underlying need being serviced is “communication” and “information” …
All of which boils down to this: the “basics” of information processing haven’t really changed. The ‘personal computer’ is a general purpose machine that can be configured for various industry specific purposes. Yes, the “era of the PC” has been over for 10+ years but the need for ‘personal computers’ and ‘local area networks’ will continue.
Merriam-Webster tells me that etymology is “the history of a linguistic form (such as a word)” (the official definition goes on a little longer – click on the link if interested …)
The last couple weeks I’ve run into a couple of “industry professionals” that are very skilled in a particular subset of “information technology/assurance/security/whatever” but obviously had no idea what “the cloud” consists of in 2022.
Interrupting and then giving an impromptu lecture on the history and meaning of “the cloud” would have been impolite and ineffective – so here we are 😉 .
Back in the day …
Way back in the 1980’s we had the “public switched telephone network” (PSTN) in the form of (monopoly) AT&T. You could “drop a dime” into a pay phone and make a local call. “Long distance” was substantially more – with the first minute even more expensive.
The justification for higher connection charges and then “per minute” charges was simply that the call was using resources in “another section” of the PSTN. How did calls get routed?
Back in 1980 if you talked to someone in the “telecommunications” industry they might have referred to a phone call going into “the cloud” and connecting on the other end.
(btw: you know all those old shows where they need “x” amount of time to “trace” a call – always a good dramatic device, but from a tech point of view the “phone company” knew where each end of the call was originating – you know, simply because that was how the system worked)
I’m guessing that by the breakup of AT&T in 1984 most of the “telecommunications cloud” had gone digital – but I was more concerned with football games in the 1980s than telecommunications – so I’m honestly not sure.
In the “completely anecdotal” category “long distance” had been the “next best thing to being there” (a famous telephone system commercial – check youtube if interested) since at least the mid-1970s – oh, and “letter writing”(probably) ended because of low cost long distance not because of “email”
Steps along the way …
Important technological steps along the way to the modern “cloud” could include:
the first “modem” in the early 1960s – that is a “modulator”/“demodulator” if you are keeping score – a device that could take a digital signal and convert it to an analog wave for transmission over the PSTN, with another modem reversing the process on the other end.
Ethernet was invented in the early 1970’s – which allowed computers to talk to each other over a local network. You are probably using some flavor of Ethernet on your LAN
TCP/IP was “invented” in the 1970’s then became the language of ARPANET in the early 1980’s. One way to define the “Internet” is as a “large TCP/IP network” – ’nuff said
that web thing
Tim Berners-Lee gets credit for “inventing” the world wide web in 1989 while at CERN. Which made “the Internet” much easier to use – and suddenly everyone wanted a “web site.”
Of course the “personal computer” needed to exist before we could get large scale adoption of ANY “computer network” – but that is an entirely different story 😉
The very short version of the story is that personal computer sales greatly increased in the 1990s because folks wanted to use that new “interweb” thing.
A popular analogy for the Internet at the time was as the “information superhighway” – with a personal computer using a web browser being the “car” part of the analogy.
Virtualization
Google tells me that “virtualization technology” actually goes back to the old mainframe/time-sharing systems in the 1960’s when IBM created the first “hypervisor.”
A “hypervisor” is what allows the creation of “virtual machines.” If you think of a physical computer as an empty warehouse that can be divided into distinct sections as needed then a hypervisor is what we use to create distinct sections and assign resources to those sections.
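Just to illustrate that “warehouse” bookkeeping, here is a toy sketch in Python – it only shows the idea of carving a fixed pool of host resources into per-VM allocations, and is very much NOT how any real hypervisor is implemented:

```python
# Toy illustration of the "empty warehouse" analogy: the host has a fixed
# pool of resources, and the hypervisor's bookkeeping job is to carve that
# pool into isolated allocations - one per virtual machine.
from dataclasses import dataclass, field

@dataclass
class Host:
    cpu_cores: int
    ram_gb: int
    guests: dict = field(default_factory=dict)   # vm name -> (cores, ram_gb)

    def create_vm(self, name: str, cores: int, ram_gb: int) -> None:
        used_cores = sum(c for c, _ in self.guests.values())
        used_ram = sum(r for _, r in self.guests.values())
        if used_cores + cores > self.cpu_cores or used_ram + ram_gb > self.ram_gb:
            raise RuntimeError(f"not enough free resources to create {name}")
        self.guests[name] = (cores, ram_gb)

host = Host(cpu_cores=16, ram_gb=64)
host.create_vm("web01", cores=4, ram_gb=8)
host.create_vm("db01", cores=8, ram_gb=32)
print(host.guests)   # each "section of the warehouse" and what it was assigned
```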
The ins and outs of virtualization technology is beyond the scope of this article BUT it is safe to say that “commodity computer virtualization technology” was an industry changing event.
The VERY short explanation is that virtualization allows for more efficient use of resources which is good for the P&L/bottom line.
(fwiw: any technology that gets accepted on a large scale in a relatively short amount of time PROBABLY involves saving $$ – but that is more of a personal observation that an industry truism.)
Also important was the development of “remote desktop” software – which would have been called “terminal access” before computers had “desktops.”
e.g. Wikipedia tells me that Microsoft’s “Remote Desktop Protocol” was introduced in Windows NT 4.0 – which ZDNet tells me was released in 1996 (fwiw: some of my expired certs involved Windows NT).
“Remote access” increased the number of computers a single person could support which qualifies as another “industry changer.” As a rule of thumb if you had more than 20 computers in your early 1990s company – you PROBABLY had enough computer problems to justify hiring an onsite tech.
With remote access tools not only could a single tech support more computers – they could support more locations. Sure in the 1990’s you probably still had to “dial in” since “always on high speed internet access” didn’t really become widely available until the 2000s – but as always YMMV.
dot-com boom/bust/bubble
There was a “new economy” gold rush of sorts in the 1990s. Just like gold and silver exploration fueled a measurable amount of “westward migration” into what was at the time the “western frontier” of the United States – a measurable amount of folks got caught up in “dot-com” hysteria and “the web” became part of modern society along the way.
I remember a lot of talk about how the “new economy” was going to drive out traditional “brick and mortar” business. WELL, “the web” certainly goes beyond “industry changing” – but in the 1990s faith in an instant transformation of the “old economy” into a web dominated “new economy” reached zeitgeist proportions …
In 2022 some major metropolitan areas trace their start to the gold/silver rushes in the last half of the 19th century (San Francisco and Denver come to mind). There are also a LOT of abandoned “ghost towns.”
In the “big economic picture” the people running saloons/hotels/general stores in “gold rush areas” had a decent chance of outliving the “gold rush” – assuming that there was a reason for the settlement to be there other than “gold mining”
The “dot-com rush” equivalent was that a large number of investors were convinced that a company could remain a “going concern” even if it never made a profit. However – just like the people selling supplies to gold prospectors had a good chance of surviving the gold rush – the folks selling tools to create a “web presence” did alright – i.e. in 2022 the survivors of the “dot-com bubble” are doing very well (e.g. Amazon, Google)
Web Hosting
In the “early days of the web” establishing a “web presence” took (relatively) arcane skills. The joke was that if you could spell HTML then you could get a job as a “web designer” – ok, maybe it isn’t a “funny” joke – but you get the idea.
An in depth discussion of web development history isn’t required – pointing out that web 1.0 was the time of “static web pages” is enough.
If you had a decent internet service provider they might have given you space on their servers for a “personal web page.” If you were a “local” business you might have been told by the “experts” to not worry about a web site – since the “web” would only be useful for companies with a widely dispersed customer base.
That wasn’t bad advice at the time – but the technology needed to mature. The “smart phone” (Apple 2007) motivated the “mobile first” development strategy – if you can access the web through your phone, then it increases the value of “localized up to date web information.”
“Web hosting” was another of those things that was going to be “free forever” (e.g. one of the tales of “dot-com bubble” woes was “GeoCities”). Which probably slowed down “web service provider” growth – but that is very much me guessing.
ANYWAY – in web 1.0 (when the average user was connecting by dial up) the stress put on web servers was minimal – so simply paying to rent space on “someone else’s computer” was a viable option.
The next step up from “web hosting” might have been to rent a “virtual server” or “co-locate” your own server – both of which required more (relatively) arcane skills.
THE CLOUD
Some milestones worth pointing out:
1998 – VMware founded; its “Workstation” product (virtualization on the desktop) shipped the following year
“Google search” was another “industry changing” event that happened in 1998 – ’nuff said
2001 – VMware ESX (server virtualization)
2005 Intel released the first cpus with “Intel Virtualization Technology” (VT-x)
2005 Facebook – noteworthy, but not “industry changing”
2006 Amazon Web Services (AWS)
Officially Amazon described AWS as providing “IT infrastructure services to businesses in the form of web services” – i.e. “the cloud”
NIST tells us that –
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.
If we do a close reading of the NIST definition – the “on-demand” and “configurable” portions are what differentiates “the cloud” from “using other folks computers/data center.”
I like the “computing as a utility” concept. What does that mean? Glad you asked – e.g. Look on a Monopoly board and you will see the “utility companies” listed as “Water Works” and “Electric Company.”
i.e. “water” and “electric” are typically considered public utilities. If you buy a home you will (probably) get the water and electric changed into your name for billing purposes – and then you will pay for the amount of water and electric you use.
BUT you don’t have to use the “city water system” or local electric grid – you could choose to “live off the grid.” If you live in a rural area you might have a well for your water usage – or you might choose to install solar panels and/or a generator for your electric needs.
If you help your neighbors in an emergency by allowing them access to your well – or maybe connecting your generator to their house – you are a very nice neighbor, BUT you aren’t a “utility company” – i.e. your well/generator won’t have the capacity that the full blown “municipal water system” or electric company can provide.
Just like if you have a small datacenter and start providing “internet services” to customers – unless you are big enough to be “ubiquitous, convenient, and on-demand” then you aren’t a “cloud provider.”
Also note the “as a service” aspect of the cloud – i.e. when you sign up you will agree to pay for what you use, but you aren’t automatically making a commitment for any minimal amount of usage.
As opposed to “web hosting” or “renting a server” – where you will probably agree to a monthly fee and a minimum term of service.
Billing options and service capabilities are obviously vendor specific. As a rule of thumb – unless you have “variable usage,” using “the cloud” PROBABLY won’t save you money over “web hosting”/“server rental.”
The beauty of the cloud is that users can configure “cloud services” to automatically scale up for an increase in traffic and then automatically scale down when traffic decreases.
e.g. imagine a web site that has very high traffic during “business hours” but then minimal traffic the other 16 hours of the day. A properly configured “cloud service” would scale up (costing more $$) during the day and then scale down (costing fewer $$) at night.
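A minimal sketch of that scale-up/scale-down math – the traffic numbers, per-instance capacity, and hourly price here are all invented for illustration (every cloud provider has its own autoscaling knobs and billing details):

```python
# Sketch of "scale up during business hours, scale down at night."
# Traffic, per-instance capacity, and hourly price are invented for
# illustration - they are not any particular provider's numbers.
import math

REQUESTS_PER_INSTANCE_PER_HOUR = 10_000   # assumed capacity of one instance
PRICE_PER_INSTANCE_HOUR = 0.10            # assumed hourly price in $$

def instances_needed(requests_per_hour: int) -> int:
    """Scale to whatever handles the load, but never drop below one instance."""
    return max(1, math.ceil(requests_per_hour / REQUESTS_PER_INSTANCE_PER_HOUR))

# Hypothetical day: heavy traffic 8am-6pm, light traffic the other 14 hours.
hourly_traffic = [50_000 if 8 <= hour < 18 else 2_000 for hour in range(24)]

elastic_cost = sum(instances_needed(t) * PRICE_PER_INSTANCE_HOUR for t in hourly_traffic)
peak = max(instances_needed(t) for t in hourly_traffic)
fixed_cost = peak * PRICE_PER_INSTANCE_HOUR * 24   # always provisioned for the peak

print(f"scale with the traffic:     ${elastic_cost:.2f}/day")
print(f"provisioned for peak 24/7:  ${fixed_cost:.2f}/day")
```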
Yes, billing options become a distinguishing element of the “cloud” – which further muddies the water.
Worth pointing out is that if you are “big internet company” you might get to the point where it is in your company’s best interest to build your own datacenters.
This is just the classic “rent” vs “buy” scenario – i.e. if you are paying more in “rent” than it would cost you to “buy” then MAYBE “buying your own” becomes an option (of course “buying your own” also means “maintaining” and “upgrading” your own). This tends to work better in real estate, where “equity”/property values tend to increase.
Any new “internet service” that strives to be “globally used” will (probably) start out using “the cloud” – and then if/when they are wildly successful, start building their own datacenters while decreasing their usage of the public cloud.
Final Thoughts
It Ain’t What You Don’t Know That Gets You Into Trouble. It’s What You Know for Sure That Just Ain’t So
Artemus Ward
As a final thought – “cloud service” usage was $332.3 BILLION in 2021 up from $270 billion in 2020 (according to Gartner).
There isn’t anything magical about “the cloud” – but it is a little more complex than just “using other people’s computers.”
The problem with “language” in general is that there are always regional and industry differences. e.g. “Salesforce” and “SAP” fall under the “cloud computing” umbrella – but Salesforce uses AWS to provide their “Software as a Service” product and SAP uses Microsoft Azure.
I just spent 2,000 words trying to explain the history and meaning of “the cloud” – umm, maybe a cloud by any other name would still be vendor specific
HOWEVER I would be VERY careful with choosing a cloud provider that isn’t offered by a “big tech company” (i.e. Microsoft, Amazon, Google, IBM, Oracle). “Putting all of your eggs in one basket” is always a risky proposition (especially if you aren’t sure that the basket is good in the first place) — as always caveat emptor …
Another of my quixotic projects is making a video production of reading Ralph Waldo Emerson’s essays – which I’m sure would appeal to maybe a handful of folks worldwide.
BUT considering my track record in “estimating market potential” something that I think has zero potential would probably perform better than the ideas that I think have a HUGE potential.
Anyway – good ol’ Mr Emerson pointed out that –
“The man who knows how will always have a job. The man who knows why will always be his boss.”
-Ralph Waldo Emerson
Of course a “good boss” will make sure that the folks doing the “how” have an understanding of the “why” – but that isn’t the point.
On my mind this morning is the difference between knowing “how” to do something, actually doing something, AND understanding “why” something is done.
story time
In “modern America” my guess is that the “average American” has no idea what actually happens when they flip on a light switch.
Of course everyone understands that you flip a switch, turn a knob, push a button – and “electricity” causes the light to come on. I’m NOT saying that the “average” American is uneducated or unintelligent – I’m just saying that the average American (probably) has no idea “why” the light comes on.
Ok, this isn’t meant to be an insult or a negative comment about “education in America” – just pointing out that the “modern electrical grid” involves a lot of parts. Truly understanding the “why” of electricity takes some effort.
experience vs “time in service”
Again, in “modern America” being an “electrician” requires specific training – e.g. here in Ohio, Google tells me that schools offer “electrician training” programs ranging from 9 months to 2 years.
After graduation our aspiring electrician probably has a good understanding of “how” to “work with the electrical grid” – and can perform work “up to code.”
While our young electrician understands the “how” of the job, they probably don’t understand the “why” of EVERYTHING in the codes.
Again, I’m not trying to insult electricians – just pointing out that some things won’t become “obvious” until you have some experience doing the job.
SO what is “obvious” to that wise old electrician that has been doing the job for 20 years PROBABLY isn’t going to be (as) “obvious” to the electrician with 1 year of experience.
Of course it is POSSIBLE (and probable) that SOME long time professionals will never progress in the understanding of their profession past the bare minimum. (A small percentage will (probably) be incompetent, but true incompetence isn’t the issue here.)
Yes, this falls into the “insult” category – e.g. it is possible to be in a position for 5 years, and not learn anything. SO that would be 5 years of “time in service” but functionally “1 year of experience 5 times” NOT “5 years of experience.”
Just showing up for work everyday doesn’t mean you are going to automatically improve.
teaching and understanding
There are a couple verses in the “Old Testament” that come to mind (emphasis obviously mine):
9 Only take heed to thyself, and keep thy soul diligently, lest thou forget the things which thine eyes have seen, and lest they depart from thy heart all the days of thy life: but teach them thy sons, and thy sons’ sons; 10 specially the day that thou stoodest before the Lord thy God in Horeb, when the Lord said unto me, Gather me the people together, and I will make them hear my words, that they may learn to fear me all the days that they shall live upon the earth, and that they may teach their children.
Deuteronomy 4:9-10 (AKJV)
Notice that the command for parents to teach their children is meant to benefit BOTH the parents AND the children. The “secular” thought is that “to teach is to learn twice.”
Of course there are always “effective teachers” and “not as effective teachers.” Albert Einstein liked to point out that:
“If you can’t explain it simply, you don’t understand it well enough.”
Albert Einstein
SO to teach requires understanding and the act of teaching can/should increase understanding – both for the student AND the teacher.
I’ve been “teaching” in a formal “classroom” sense for almost 10 years. Looking back at those early classes – I definitely learned more than (some) of my students.
I had been working “in the field” for 15 years, had numerous industry certifications, and a Masters degree – so MY learning was mostly about the “why” – but there were still areas within the field that I didn’t REALLY understand.
Ok, the students didn’t know enough to notice that I didn’t know enough – but you get the idea.
This is the old “no one knows everything” sort of idea – at the time I simply didn’t know that I didn’t know 😉
ANYWAY – just kinda random thoughts – one more quote that I would have attributed to Mark Twain (he said something similar – but probably not this exact quote) – Harry Truman liked the quote, so did John Wooden – and Earl Weaver used it as the title of his autobiography:
It’s What You Learn After You Know It All That Counts
Merriam-Webster tells me that the word “meme” dates all the way back to 1976 (coined by Richard Dawkins in The Selfish Gene) with the meaning of “unit of cultural transmission.”
Apparently the “-eme” suffix “indicates a distinctive unit of language structure.” Dr Dawkins combined the Greek root “mim-” (meaning “mime” or “mimic”) with “-eme” to create “meme.”
Then this interweb thing hit full stride, and minimally edited images with captions – posted with (maybe) humorous intent – became “memes.”
Humor is always subjective, and with brevity (still) being the soul of wit – most “memes” work on a form of “revelatory” humor. The humor comes in “discovering” the connection between the original image, and then the edited/altered image.
We aren’t dealing in high level moral reasoning or intense logic – just “Picture A”, “Picture B”, brain makes connection – grin/groan and then move on. By definition “memes” are short/trivial.
Which makes commenting on “memes” even more trivial. Recently, on a “social media platform,” I made an off-the-cuff comment about a meme that amounted to (in my head) “A=B”, “B=C”, “A+B+C<D.”
Now, I readily admit that my comment was not logical – but from a certain perspective “true” – if not directly provable from the supplied evidence. It was a trivial response meant to be humorous – not a “grand unifying theory of everything.”
… and of course I (apparently) offended someone to the point that they commented on my comment – accusing me of “not understanding the meme.”
Notice the “apparently” qualifier – it is POSSIBLE that they were trying to be funny. My first reaction was to explain my comment (because obviously no one would intentionally be mean or rude on the interweb – i.e. the commenter on my comment must have simply misunderstood my comment 😉 ) – BUT that would have been a fourth level of trivialness …
HOWEVER the incident got me thinking …
geeks and nerds
Another doctor gets credit for creating the term “nerd” (Dr Seuss – If I Ran the Zoo, 1950). It didn’t take on the modern meaning implying “enthusiasm or expertise” about a subject until later years. The term “dork” came about in the 1960’s (… probably as a variation on “dick” – and quickly moving on …) – meaning “odd, socially awkward, unstylish person.”
“Geek” as a carnival performer biting the heads off chickens/snakes, eating weird things, and generally “grossing out” the audience goes back to 1912. Combined with another term it becomes synonymous with “nerd” – e.g. “computer geek” and “computer nerd.”
random thought: if you remember Harry Anderson – or have seen re-runs of Night Court – his standup act consisted of “magic” and some “carnival geek” bits; he didn’t bite the heads off any live animals, though (which would have gotten him in trouble – even in the 1980’s). Of course youtube has some video that I won’t link to – search for “Harry Anderson geek” if curious …
I think a nerd is a person who uses the telephone to talk to other people about telephones. And a computer nerd therefore is somebody who uses a computer in order to use a computer.
Douglas Adams
ANYWAY – to extend the Douglas Adams quote – a “nerd” might think that arguing about a meme is a good use of their time …
Which brings up the difference between “geeks” and “nerds” – I have (occasionally) used Sheldon and Leonard from the Big Bang Theory to illustrate the difference – with Sheldon being the “geek” and Leonard the “nerd”. Both of their lives revolve around technology/intellectual pursuits but they “feel” differently about that fact – i.e. Sheldon embraces the concept and is happily eccentric (“geek”) while Leonard feels self-conscious and awkward (“nerd”).
SO when I call myself a “computer geek” it is meant as a positive descriptive statement 😉 – yes, I am aware that the terms aren’t AS negative as they once were, I’m just pointing out that my life has ended up revolving around “computers” (using them/repairing them) and it doesn’t bother me …
Though I suppose “not being able to use a computer” in 2022 is in the same category that “not able to ride a horse” or “can’t shoot a rifle” would have been in a couple hundred years ago … in a time when being “adorkable” is an accepted concept, calling yourself a “geek” or “nerd” isn’t as bad as it used to be – umm, in any case, when I say “geek”: I’ve never bitten the head off anything (alive or dead). I did perfect biting into and tearing off part of an aluminum can back in high school – but that is another story …
reboots
While I did NOT comment on the comment about my comment – I did use the criticism of my comment as an opportunity for self-examination.
Background material: The meme in question revolved around “movie franchise reboots.” (again, trivial trivialness)
In 2022 when we talk about “movie franchise reboots” the first thing that is required is a “movie franchise.”
e.g. very obviously “Star Trek” got a JJ Abrams reboot. Those “Star Wars” movies were “sequels” not “reboots” but the less said about JJ Abrams and that franchise the better
the big “super hero” franchises have also obviously been rebooted –
Batman in the 1990’s played itself out – then we got the “Batman reboot” trilogy directed by Christopher Nolan,
Superman in the 1970’s/80’s didn’t get a movie franchise reboot until after Christopher Reeve died
Spider-Man BECAME a movie franchise in the 2000’s, then got a reboot in 2012, and another in 2016/2017
SO the issue becomes counting the reboots – i.e. Batman in the 1990’s (well, “Batman” was released in 1989) had a four movie run with three different actors as Batman. I’m not a fan of those movies – so I admit my negative bias – but they did get progressively worse …
Oh, and if we are counting “reboots” do you count Batman (1966) with Adam West? Probably not – it exists as a completely separate entity – but if you want to count it I won’t argue – the relevant point is that just “changing actors” doesn’t equal a “reboot” – restarting/retelling the story from a set point makes a “reboot.”
However, counting Superman “reboots” is just a matter of counting actor changes – e.g. Christopher Reeve made 4 Superman movies (which also got progressively worse) – “Superman Returns” (2006) isn’t a terrible movie, but it exists in its own little space because it stomped all over the Lois Lane/Superman relationship – then we have the Henry Cavill movies that were central to DC Comics’ attempt at a “cinematic universe.”
We can also determine “reboots” by counting actors with Spider-Man. Of course the Spider-Man franchise very much illustrates that the purpose of the “movie industry” is to make money – not tell inspiring stories, raise awareness, or educate the masses – make money. If an actor becomes a liability – they can be replaced – it doesn’t matter if you setup another movie or not 😉
There are other not so recent franchises – “Tarzan” was a franchise, maybe we are stretching to call Wyatt Earp a franchise, how about Sherlock Holmes?
The Wyatt Earp/OK corral story is an example of a “recurring story/theme” that isn’t a franchise. Consider that “McDonald’s” is a franchise but “hamburger joint” is not …
Then we have the James Bond franchise.
The problem with the “Bond franchise” is that we have multiple “actor changes” and multiple “reboots.” i.e. Assuming we don’t count Peter Sellers’ 1967 “Casino Royale” there have been 6 “James Bond” actors. Each actor change wasn’t a “reboot” – but just because they kept making sequels doesn’t mean they had continuity.
The “Sean Connery” movies tell a longform story of sorts – with Blofeld as the leader of Spectre. The “James Bond” novels were very much products of the post WWII/Cold War environment – but the USSR was never directly the villain in any of the movies, the role of villain was usually Spectre in some form.
The easy part: The “Daniel Craig” Bond movies were very obviously a reboot of the Blofeld/Spectre storyline.
The problem is all of those movies between “On Her Majesty’s Secret Service” (1969) and “Casino Royale” (2006).
“Diamonds are Forever” (1971) was intended to finish the story started in “On Her Majesty’s Secret Service” – i.e. at the end of the previous movie Bond gets married (and retires?), Blofeld kills Bond’s wife as the newlyweds leave the wedding, the bad guys drive away, and Bond holds his dead wife while saying “We have all the time in the world.” – roll credits.
(fwiw: Except for the obviously depressing ending, “On Her Majesty’s Secret Service” is actually one of the better Bond movies)
Then George Lazenby (who had replaced Sean Connery as Bond) asked for more money than the studio was willing to pay – and they brought back Sean Connery for a much more light hearted/cartoonish Bond in “Diamonds are Forever.” (did I mention the profit making motive?)
Of course “Diamonds are Forever” starts out with Bond hunting down and killing Blofeld – but that is really the only reference we get to the previous movie – SO reboot? this particular movie maybe, maybe not – but it did signify a “formula change” if nothing else.
Any attempt at “long form storytelling” was abandoned in favor of a much more “cartoony” James Bond. MOST of the “Roger Moore” Bond movies have a tongue-in-cheek feeling to them.
The “70’s Bond movies” became progressively more cartoonish – relying more on gadgets, girls, and violence than on storytelling (e.g. two of the movies, “The Spy Who Loved Me” and “Moonraker”, are basically the same plot). There are a few references to Bond having been married, but nothing that would be recognized as “character development” or continuity – it could be argued that each movie did a “soft reboot” to the time after “Diamonds are Forever”, but simply saying that the “continuity” was that there was no “continuity” is more accurate.
Then we got the “80’s Bond” – “For Your Eyes Only” intentionally backed off the gadgets and promiscuity – Bond visits his wife’s grave and Blofeld makes a (comic) appearance in the “Bond intro action sequence” – so I would call this one a “soft reboot” but not a complete relaunch.
The same goes for Timothy Dalton’s Bond movies – not a full blown restart, but a continuation of the “upgrading” process – still no memorable continuity between movies – (he only did two Bond movies).
Pierce Brosnan as Bond in “GoldenEye” (1995) qualifies as another actor change and “soft reboot” – Bond is promiscuous and self-destructive, but it is presented as a reaction to his job, not because being promiscuous and self-destructive is cool – still, we were back to the tongue-in-cheek, gadget-fueled Bond (two words: “invisible car”).
The Daniel Craig Bond movies certainly fit ANY definition of a reboot. “No Time to Die” (2021) was the last Bond movie for Mr Craig – but what direction the “franchise” is going is all just speculation at the moment …
ANYWAY – comparing 27 Bond movies over 58ish years to the modern “Super hero” reboots – was the gist of my trivial answer to a trivial meme (which only took 1,700+ words to explain 😉 )
I developed an interest in “leadership” from an early age. The mundane reasons for this interest aren’t important. It is even possible that “leaders” are/were a prerequisite for the whole “human civilization” thing – i.e. we are all “leaders” in one form or another if we are “involved with other people.” SO an interest in “leadership” is also natural.
There are certainly a lot of books written every year that claim to teach the “secrets” of leadership. There is (probably) something useful in all of these “leadership” books BUT there is no “secret leadership formula” that works all of the time for every situation. However, there are “principles of leadership.”
Consider the smallest possible “civilization” – just two people trying to get along and get things done. This micro-civilization’s “leadership” probably consists of discussions between the two people on what to do, where to go, and when to do it. It is unlikely that they will naturally agree on everything; if they can’t resolve those disagreements (one way or another) then they won’t be “together” anymore and they will go their separate ways.
leadership
Of course we run into the problem that there are different flavors of “leadership” because there are different types of “power.”
Another “first concept” is that “yelling” is not leadership. Yelling is just yelling – and while it might be a tool occasionally used by a leader – “constant yelling” is an obvious sign of BAD leadership to the point that it might just be “bullying behavior”/coercion and NOT “leadership” at all
i.e. “coercive” leadership ends up being self-destructive to the organization because it drives good people away, and you end up with a group of “followers” waiting to be told what to do.
e.g. when a two year old throws a temper tantrum – no one mistakes it for “leadership.” Same concept applies if someone in a position of power throws a temper tantrum 😉
(but there is a difference between “getting angry” and “temper tantrum” – if the situation arises then “anger” might be appropriate but never to the point where self-control is lost)
Generals
In English the word “general” refers to a common characteristic of a group. It doesn’t appear as a noun until the middle of the 16th century – so eventually we get the idea of the “person at the top of the chain of command” being a “General officer”
Whatever you want to call it – in “old days long ago” – the General was on the field fighting/leading the troops.
Alexander
If we give “Alexander the Great” the title of “general” – then he is the classic example of “leading by personal charisma/bravery/ability.” He was the “first over the wall” type of general – one who led by inspiring his armies with a “vision of conquest.”
The problem becomes that ultimately Alexander the Great was a failure. Oh, he conquered a lot of land and left his name on cities, but again, in the long run he failed at leading his troops. After fighting for 10+ years Alexander wanted to keep going, while his tired troops wanted to go home. Alexander would die on the trip home, and his empire would be split among his generals.
SO why did Alexander the Great (eventually) fail as a leader? Well, he was leading for HIS glory. Sure the fact that he – and his generals – were able to keep his army together for 10 years and conquer most of the “known world” rightfully earns him a place in history, BUT at an “organizational leadership” level he was a failure.
Cincinnatus
Arguably the best type of leader is in the position because they are the “right person” at the “right time” NOT because they have spent their lifetime pursuing personal advancement/glory.
The concept becomes “servant leadership” – which became a “management buzzword” in the 20th century, but is found throughout history.
Lucius Quinctius Cincinnatus comes to mind – you know, the (implied) down-on-his-luck “citizen farmer” of Ancient Rome when it was still a “new republic” (500ish BC) – twice given supreme power (and the offer of being made “dictator for life”), he gave that power up as soon as possible both times.
Also illustrated by the story of Cincinnatus is the “burden of command” IF a leader is truly trying to “do what is right for the people.”
Of course Cincinnatus’ example was much more often ignored than honored by later Roman leaders – which eventually led to the end of the “republic” and the birth of “empire” – but that sounds like the plot for a series of movies 😉
HOWEVER – Cincinnatus still serves as an example of great leadership. Yes, he had problems with his sons, but that is another story …
Moses
According to “tradition” Moses was a general in the Egyptian army. The first 40 years of Moses’ life are not described (except that he was raised as the son of the Daughter of Pharaoh) – then he kills a man in Exodus 2:12 and goes on the run to the land of Midian.
There is a lot of potential “reading into the story” here. I suppose Cecil B DeMille’s 1956 version is plausible – the love story between Moses and Nefretiri feels like the “Hollywood movie” addition, and of course Charlton Heston as Moses is a simplification (Aaron probably did most of the “talking”).
ANYWAY – the point is that (after 40 years of tending sheep in Midian) Moses didn’t WANT the job of leading the tribes of Israel out of Egypt – which is what made him perfect for the job.
Feel free to do your own study of Exodus – for my point today, Moses became a “servant leader” after 40 years of tending sheep. The mission wasn’t about him, it was about, well, “the mission.”
Just for fun – I’ll point at Numbers 12:3 and also mention that the first five books of the “Old Testament” are often referred to as the “Books of Moses” but that doesn’t mean Moses “wrote” them – i.e. it isn’t Moses calling himself “humble” but probably Joshua …
Politicians
From a practical standpoint – both Cincinnatus and Moses were facing “leadership situations” that involved a lot of responsibility but NOT a lot of “real privilege.” As they approached the job it was as a responsibility/burden not as a “privilege.”
Old Cincinnatus simply resigned rather than try to rule. Moses didn’t have that option 😉 – so we get the story of the “people” blaming him for everything wrong and rebelling against his leadership multiple times (and as the leader Moses was also held to a higher standard – but that is another story).
In the last 25 years of the 20th century the “management buzzwords” tried to differentiate between “managers” and “leaders.” Which is always a little unfair – but the idea is that “managers” are somehow not “leaders” if all they do is pass along information/follow orders.
In practice “good management” is “leadership.” However, if an individual is blindly following orders (with no concept of “intent of the command”) then that probably isn’t “leadership.”
Sure, saying “corporate says to do it this way” is probably the actual answer for a lot of “brand management” type of issues – which is also probably why being “middle management” can be frustrating.
I’m fond of saying that a major function of “senior leaders” is developing “junior leaders” – so the “leadership malfunction” might be further up the chain of command if “front line managers” are floundering.
With that said – “politicians” tend to be despised because they are in positions of power and routinely take credit for anything good that happens and then try to blame someone else for anything bad that happens.
If an individual rises above the ranks of “smarmy politicians” and actually displays “leadership” then history might consider them a “statesman” – but the wannabe “Alexanders” always outnumber the “Cincinnati” (btw: the plural of “Cincinnatus” is “Cincinnati” which is how that nice little city in southwestern Ohio got its name) and of course a “Moses” requires divine intervention 😉
Management Books
“Books on leadership/management” tend to fall into two categories: the better ones are “memoirs/biography” while the “not so good” are self-congratulatory/”aren’t I wonderful” books published for a quick buck.
I’ve read a lot of these books over the years – and the “actionable advice” usually boils down to some form of the “golden rule” (“do unto others as you would have them do to you”) or the categorical imperative.
Personally I like this quote from a Hopalong Cassidy movie:
You can’t go too far wrong looking out for the other guy.
Hopalong Cassidy
George Washington summed up “good manners” as (something like) “always keep the comfort of other people in mind.” SO “good leadership” equals “good manners” equals “lead the way you would like to be led”
Of course the problem becomes that you can never make EVERYONE happy – e.g. displaying “good manners” is obviously going to be easier than “leadership” of a large group of individuals. BUT trying to “lead” from a position of bitterness/spite/coercion will never work in the “long term.”
If you are trying to provoke a revolt – then “ignoring the concerns of the masses” and trying to coerce compliance to unpopular policies will probably work …
e.g. “most adults” can understand not getting everything they want immediately – but they want to feel “heard” and “valued.” …
Back when “traditional warfare” was, well, “traditional” – I stumbled across a book that tried to answer the age-old question of “why do countries go to ‘war’ against each other.”
The researchers were approaching the question from a secular psychology perspective – but I’ll point out James 4:1-10 as kind of summarizing what the researchers found – i.e. the problem seems to be part of that ol’ “human nature” thing.
ANYWAY – this came to mind because (if memory serves – I have a copy of the book somewhere) one of the “phases” on the way to full blown “war” that the researchers identified was the depersonalization of the “other side.”
In my lifetime I can remember when the residents of the U.S.S.R. were “Godless communists intent on world domination and destroying the American way of life” – which at one point may have been true for the leaders of the Communist Party in Russia, but was almost certainly NOT true for the “average Russian citizen.”
There were “close calls” where a full blown “traditional war” could have erupted between the U.S. and the U.S.S.R. – but obviously it never happened.
The “why” is beyond me – but the “how” is that the leaders of both nations were always willing to communicate with each other at some level.
In full blown 20/20 hindsight we might say that they realized that “winning” a modern nuclear war isn’t possible – but that is probably an example of the historian’s fallacy of seeing events as “inevitable.”
I’ve heard it argued that “Dynasty” (the 1980’s primetime soap opera) helped end the cold war. How? Well, ordinary folks in the U.S.S.R. were somehow able to watch the show – and the “conspicuous consumption” obvious in the show wasn’t what they had been told life was like in the U.S.
They didn’t see people waiting in line to buy things, or being put on a waiting list to be able to buy a car. The show was obviously not “real America” BUT then they would have seen the commercials as well – and again they saw “economic prosperity” not “capitalists oppressing the masses.”
The larger point being that they began to see “Americans” as individual people – not as a large anonymous group.
Of course in the “west” you can trace a similar change in attitudes by how “Russians” were portrayed in pop culture.
The Bond franchise serves as a convenient example – in the “Sean Connery” Bond movies, the “Russians” are anonymous at best. Sure, the U.S.S.R. is never the villain (Spectre is always the “bad guy”). When the villains/antagonists are “eastern European” they are agents of Spectre – but “Russia” exists as an “ominous presence.”
Then in the “Roger Moore” Bond movies in the 1970’s and early 1980’s the “Russians” were “competition” but not “anonymous enemies.” The two sides were “respected opponents” – not “mortal enemies.”
Then by the time the U.S.S.R. collapsed in 1991 the “Russians” had become “co-workers” in the Bond Franchise.
In the late 1980’s and 1990’s we got Timothy Dalton (great actor, not my favorite “Bond”) and then Pierce Brosnan as Bond – and the “bad guys” were drug dealers and “extremists.”
Finally in “No Time To Die” – Daniel Craig as Bond says his Russian is “rusty” …
Extremists
SO – obvious economic, cultural, ethnic differences aside – people tend to be the same wherever you go 😉
The rule of thumb seems to be that “extremist views” are dangerous and must be censored/controlled. The question becomes “what makes someone an extremist.”
It must be pointed out that just because you don’t agree with the message, or don’t like the messenger – does not make the message “extreme.”
e.g. “I think men that button the top button – and don’t wear a tie – look silly.” Agree or disagree (I see guys with “top button buttoned/no tie” often enough to know that some folks disagree with me) – am I an extremist? obviously not.
MAYBE the easiest way to identify an “extremist” is that they tend to address those that disagree with them as a malicious group – just like that “early phase” on the way to war.
e.g. “I think men that button the top button – and don’t wear a tie – look silly AND they are out to destroy us all therefore they must be censored!” extremist? this time very much “yes”
(oh, and while I’m at it – belt OR suspenders NOT both, and you over there pull up your pants and tie those shoes!)
Expert Knowledge
Silly examples aside – we have run headlong into the concept of “expert knowledge.”
Another way to put it is that an “expert” is someone that knows “more and more about less and less.”
SO while I don’t consider myself an expert on ANY subject – I get paid to talk about computers/technology. Sometimes I might appear to “know things” but that is usually an illusion – some form of this quote applies:
It is better to remain silent at the risk of being thought a fool, than to talk and remove all doubt of it.
HOWEVER – a lot of actual research has been done into the “learning process.” I tend to use the term “expert knowledge” because in the early 1990’s the concept of “expert systems” was something of a computing fad (which probably grew into “AI” and/or “machine learning” in the last few years) – but call it the “path to mastery” if you prefer.
When we first start learning about a subject we tend to over generalize as part of “knowledge processing.”
The old saying that someone “Can’t see the forest because of the trees” might be an example of this “amateur knowledge” stage.
e.g. someone first learning about “trees” might go out and look at a bunch of different types of trees in the same area and come away from the experience overwhelmed by information about individual trees.
It turns out that “experts” – as in “those that have ‘mastered’ a certain set of skills/knowledge” – tend to seamlessly go from specific to general and back again.
e.g. the “expert” showing that group of amateurs the trees also has an appreciation for how those trees interact with each other as well as the impact “the forest” has on the larger ecosystem.
Which kinda means that the “expert” sees the trees AND the forest.
Of course there is a Biblical reference – umm, I’ll just point out that those who were regarded as “experts” were quizzing Jesus on the “greatest commandment” – and Jesus summarizes the teachings of what we call the Old Testament in two sentences – which probably also illustrates that there are always more people that THINK they are “experts” on a subject than are ACTUALLY experts …
as always, don’t trust me – I am only a bear of very little brain 😉
Rules, Rules, Rules
From an organizational behavior point of view “more rules do not make people better.”
I’ve worked for a couple of places that wanted me to sign “non compete” agreements – which I was happy to sign because signing the “non compete” agreement was completely pointless.
I understand that the employer wanted to guard themselves against someone coming in and stealing “organizational intellectual property” or (more likely in the tech support arena) an employee stealing “customer support contracts” and starting their own company.
I say it was pointless both because the judicial system rarely enforces “non compete” type contracts AND because if I was the type of person that would actually do what they are afraid of – then no “contract” would stop me from doing it.
The point being that “making a bunch of rules” hoping to change the behavior of lazy/stupid/malicious employees ends up making the “good employees” less productive.
i.e. the people that are doing what the rules are supposed to stop don’t care about the rules, and the people that aren’t doing it will be burdened by having to comply with additional (pointless) rules.
SO there is probably an inverse relationship between the size of the “employee manual” and the efficiency/productivity of the organization – but that falls into the “personal observation” category
… with the obvious addendum that industries will differ and the need to comply with “regulation” is the root cause for a lot of very large employee manuals…
EVEN WORSE
Then add in that the additional rules are (usually) made because of a problem with an individual that is no longer with the organization.
This becomes my favorite example of “incompetent management 101” – i.e. they are “managing” employees that aren’t there anymore.
Hey, I’m sorry the last guy was an incompetent jerk – how about we pretend like I’m NOT an incompetent jerk – you know, just in case I’m NOT that other person that caused you problems.
Yes, that is (almost certainly) unfair – with “interpersonal relationships” of any kind, it is seldom only one side’s “fault” – a Hank Williams song comes to mind – but that is a different subject 😉
$600? $10,000? does it really matter?
Those with long memories might remember Eliot Spitzer (for those that don’t there is a documentary called Client 9).
“They” (as in the various law enforcement entities involved) have been monitoring “large deposits” in an effort to catch “money laundering” by drug cartels and terrorists for a long time. I’m not sure HOW long, but it has been going on for a while.
Apparently Mr Spitzer was aware of the rules in question – and so he purposefully kept his withdrawals below the $10,000 limit that was supposed to trigger notification.
It turns out that the people that work in the banking industry aren’t complete idiots – SO they had been monitoring for “suspicious activity” that someone trying to avoid setting off the automatic notification limit might use.
It wasn’t the AMOUNT of the transactions that got Mr Spitzer investigated – it was the suspicious behavior that caused the investigation.
I don’t know if it was still based on a specific limit or not – i.e. did 3 transactions of $4,000 each in the same week equal 1 transaction of $12,000? either way, it doesn’t matter.
The point is that if you are trying to catch scofflaws you need to monitor behaviors NOT specific transaction amounts.
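Just to make that concrete – here is a minimal sketch (Python, with made-up dates, the $10,000 limit, and the “3 x $4,000 in the same week” numbers from the example above; the function name and the 7-day window are my own inventions, NOT how any actual bank or regulator does it) of flagging the pattern instead of the single amount:

# Toy "structuring" detector: flag an account when several
# under-the-limit withdrawals add up to more than the reporting
# limit inside a short window. Purely illustrative.
from datetime import date, timedelta

REPORTING_LIMIT = 10_000      # the single-transaction trigger
WINDOW = timedelta(days=7)    # arbitrary "same week" window

def flag_structuring(withdrawals):
    """withdrawals: list of (date, amount) tuples for one account."""
    flagged = []
    ordered = sorted(withdrawals)
    for i, (start_day, _) in enumerate(ordered):
        window_total = sum(amount for day, amount in ordered[i:]
                           if day - start_day <= WINDOW)
        # no single withdrawal tripped the limit, but the total
        # inside the window did - that is the suspicious "behavior"
        if (window_total > REPORTING_LIMIT
                and all(amount < REPORTING_LIMIT for _, amount in ordered)):
            flagged.append((start_day, window_total))
    return flagged

# three $4,000 withdrawals in the same week - none of them trips the
# single-transaction limit, but the pattern does
history = [(date(2008, 3, 3), 4_000),
           (date(2008, 3, 5), 4_000),
           (date(2008, 3, 7), 4_000)]
print(flag_structuring(history))   # flags the week starting March 3

Again – a toy, not the real thing – the real systems presumably look at a lot more than dates and dollar amounts, but it illustrates the “behavior vs amount” idea.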
Which PROBABLY means that any law requiring “automatic reporting” to “some gov’ment agency” is also PROBABLY pointless for “law enforcement” purposes and simply becomes gov’ment intrusion on individual liberty – i.e. another step towards “Big Brother” watching you – which might start with noble intentions but becomes the slippery slope to modern serfdom …