Category: business

  • Jaws, sequels in general, and Steven Spielberg


    Jaws – 1975

    There have been a couple of documentaries about the 1975 blockbuster “Jaws” – which probably illustrates the long-term impact of the original movie.

    Any “major” movie made in the era of “DVD extras” is going to have an obligatory “making of” documentary – so the fact that “Jaws: The Inside Story” aired on A&E back in 2009 (and is available for free on Kanopy.com) says something. It was surprisingly entertaining – both as a “movie making” documentary and as “cultural history.”

    This came to mind because the “Jaws movies” have been available on Tubi.com for the last couple months.

    full disclosure: I was a little too young to see “Jaws” in the theater – my first exposure was the “edited for tv” version, when the movie got a theatrical re-release and ABC aired it on network tv in 1979.

    I probably saw the “un-edited” version of “Jaws” on HBO at some point – and I have a DVD of the original “Jaws.” All of which means I’ve seen “Jaws – 1975” a LOT. Nostalgia aside, it still holds up as an entertaining movie.

    Yes, the mechanical shark is cringeworthy in 2022 – but the fact that the shark DIDN’T work as well as Spielberg et al. wanted probably contributes to the continued “watchability” of the movie. i.e. Mr Spielberg had to use “storytelling” techniques to “imply” the shark – which ends up being much scarier than actually showing the shark.

    i.e. what made the original “Jaws” a great movie had very little to do with the mechanical shark/”special effects.” The movie holds up as a case study on “visual storytelling.” Is it Steven Spielberg’s “best movie”? No. But it does showcase his style/technique.

    At one point “Jaws” was the highest grossing movie in history. It gets credit for creating the “summer blockbuster” concept – i.e. I think it was supposed to be released as a “winter movie” but got pushed to a summer release because of production problems.

    Source material

    The problem with the “Jaws” franchise was that it was never intended to be a multiple-movie franchise. The movie was based on Peter Benchley’s (hugely successful) 1974 novel (btw: Peter Benchley plays the “reporter on the beach” in “Jaws – 1975”).

    I was too young to see “Jaws” in the theater, and probably couldn’t even read yet when the novel was spending 44 weeks on the bestseller lists.

    “Movie novelizations” tended to be a given back in the 1970’s/80’s – but when the movie is “based on a novel” USUALLY the book is “better” than the movie. “Jaws” is one of the handful of “books made into movies” where the movie is better than the book (obviously just my opinion).

    The basic plot is obviously the same – the two major differences are that (in the book) Hooper dies and the shark doesn’t explode.

    Part of the legend of the movie is that “experts” told Mr. Spielberg that oxygen tanks don’t explode like that and that the audience wouldn’t believe the ending. Mr Spielberg replied (something like) “Give me the audience for 2 hours and they will stand up and cheer when the shark explodes” — and audiences did cheer at the exploding shark …

    (btw: one of those “reality shows” tried to replicate the “exploding oxygen tank” and no, oxygen tanks do NOT explode like it does at the end of Jaws – so the experts were right, but so was Mr Spielberg …)

    Sequels

    It is estimated that “Jaws – 1975” sold 128 million tickets. Adjust for inflation and it is in the billion-dollar movie club.

    SO of course there would be sequels.

    Steven Spielberg very wisely stayed far away from all of the sequels. Again, the existential issue with MOST “sequels” is that they tend to just be attempts to get more money out of the popularity of the original – rather than telling their own story.

    Yes, there are exceptions – but none of the Jaws sequels comes anywhere close to the quality of the original.

    “Jaws 2” was released in summer 1978. Roy Scheider probably got a nice paycheck to reprise his starring role as Chief Martin Brody – Richard Dreyfuss stayed away (his character is supposed to be on a trip to Antarctica or something). Most of the supporting cast came back – so the movie tries very hard to “feel” like the original.

    Again – I didn’t see “Jaws 2” in the theater. I remembered not liking the movie when I did see it on HBO – but I (probably) hadn’t seen it for 30 years when I re-watched it on Tubi the other day.

    Well, the mechanical shark worked better in “Jaws 2” – but it doesn’t help the movie. Yes, the directing is questionable, the “teenagers” mostly unlikeable, and the plot contrived – but other than that …

    How could “Jaws 2” have been better? Well, fewer screeching teenagers (or better directed teenagers). It felt like they had a contest to be in the movie – and that was how they selected most of the “teenagers.”

    Then the plot commits the cardinal sin of trying to explain “why” another huge shark is attacking the same little beach community. Overly. Contrived.

    If you want, you can find subtext in “Jaws – 1975.” i.e. the shark can symbolize “nature” or “fate” or maybe even “divine retribution” – take your pick. Maybe it isn’t there – but that becomes the genius in the storytelling – i.e. don’t explain too much, let the audience interpret as they like

    BUT if you have another huge shark, seemingly targeting the same community – well, then the plot quickly becomes overly contrived.

    The shark death scene in “Jaws 2” just comes across as laughably stupid – but by that time I was just happy that the movie was over.

    SO “Jaws 2” tried very hard – and it did exactly what a “back for more cash” sequel is supposed to do – i.e. it made money.

    “Jaws 3” was released in summer 1983 and tried to capitalize on a brief resurgence of the “3-D” fad. This time the movie was a solid “B.” The only connection to the first two movies is the grown up Brody brothers – and the mechanical shark of course.

    The plot for “Jaws 3” might feel familiar to audiences in 2022. Not being a “horror” movie aficionado, I’m not sure how much “prior” art was involved with the plot — i.e. the basic “theme park” disaster plot had probably become a staple for “horror” movies by 1983 (“Westworld” released in 1973 comes to mind).

    Finally the third sequel came out in 1987 (“Jaws: The Revenge”) – I have not seen the movie. Wikipedia tells me that this movie ignores “Jaws 3” and becomes a direct sequel to “Jaws 2” (tagline: “This time it’s personal”)

    The whole “big white shark is back for revenge against the Brody clan” plot is a deal breaker for me – e.g. when Michael Caine was asked if he had watched “Jaws 4” (which received terrible reviews) – his response was ‘No. But I’ve seen the house it bought for my mum. It’s fantastic!’

    Thankfully, there isn’t likely to be another direct “Jaws” sequel (God willing).

    Humans have probably told stories about “sea monsters” for as long as there have been humans living next to large bodies of water. From that perspective “Jaws” was not an “original story” (of course those are hard to find) but an updated version of very old stories – and of course “shark”/sea monster movies continue to be popular in 2022.

    Mr Spielberg

    Steven Spielberg was mostly an “unknown” director before “Jaws.” Under ordinary circumstances – an “unknown” director would have been involved in the sequel to a “big hit movie.”

    Mr Spielberg explained he stayed away from the “Jaws sequels” because making the original movie was a “nightmare” (again, multiple documentaries have been made).

    “Jaws 2” PROBABLY would have been better if he had been involved – but his follow up was another classic — “Close Encounters of the Third Kind” (1977).

    It is slightly interesting to speculate on what would have happened to Steven Spielberg’s career if “Jaws” had “flopped” at the box office. My guess is he would have gone back to directing television and would obviously have EVENTUALLY had another shot at directing “Hollywood movies.”

    Speculative history aside – “Jaws” was nominated for “Best Picture” (but lost to “One Flew Over the Cuckoo’s Nest”) and won Oscars for Best Film Editing, Best Music (John Williams), and Best Sound.

    The “Best Director” category in 1976 reads like a “Director Hall of Fame” list – Stanley Kubrick, Robert Altman, Sidney Lumet, Federico Fellini, and then Milos Forman won for directing “One Flew Over the Cuckoo’s Nest.” SO it is understandable why Mr Spielberg had to wait until 1978 to get his first “Best Director” nomination for “Close Encounters of the Third Kind” …

    (btw: the source novel for “One Flew Over the Cuckoo’s Nest” is fantastic – I didn’t care for the movie PROBABLY because I read the book first … )

    Best vs favorite

    ANYWAY – I have a lot of Steven Spielberg movies in my “movie library” – what is probably his “best movie” (if you have to choose one – as in “artistic achievement”) is hands down “Schindler’s List” (1993), which won 7 Oscars – including “Best Director” for Mr Spielberg.

    However, if I had to choose a “favorite” then it is hard to beat “Raiders of the Lost Ark” (but there is probably nostalgia involved) …

  • cola wars, taste tests, and marketing

    Coke or Pepsi?

    I just watched a documentary on the “Cola wars” – and something obvious jumped out at me.

    First I’ll volunteer that I prefer Pepsi – but this is 100% because Coke tends to disturb my stomach MORE than Pepsi does.

    full disclosure – I get the symptoms of “IBS” if I drink multiple “soft drinks” multiple days in a row. I’m sure this is a combination of a lot of factors – age, genetics, whatever.

    Of course – put in perspective the WORST thing for my stomach (as in “rumbly in the tummy”) when I was having symptoms was “pure orange juice” – but that isn’t important.

    My “symptoms” got bad enough that I was going through bottles of antacid each week, and I tried a couple of “over the counter” acid reflux products. Eventually I figured out that changing my diet – getting more yogurt and tofu, drinking fewer “soft drinks” – helped a LOT.

    The documentary was 90 minutes long – and a lot of time was spent on people expressing how much they loved one brand or the other. I’m not zealous for either brand – and I would probably choose Dr Pepper if I had to choose a “favorite” drink

    Some folks grew up drinking one beverage or the other and feel strongly about NOT drinking the “competitor” – but again, my preference for Pepsi isn’t visceral.

    Habit

    The massive amount of money spent by Coke and Pepsi marketing their products becomes an exercise in “marketing confirmation bias” for most of the population – but each new generation in the U.S. has to experience some form of the “brand wars” – Coke vs Pepsi, Nike vs Adidas, PC vs Mac – whatever.

    e.g. As a “technology professional” I will point out that Microsoft does a good job of “winning hearts and minds” by getting their products in the educational system.

    If you took a class in college teaching you “basic computer skills” in the last 20 years – that class was probably built around Microsoft Office. Having taught those classes for a couple years I can say that students learn “basic computer skills” and also come away with an understanding of “Microsoft Office” in particular.

    When those students need to buy “office” software in the future, what do you think they will choose?

    (… and Excel is a great product – I’m not bashing Microsoft by any means 😉 )

    Are you a “Mac” or a “PC”? Microsoft doesn’t care – both are using Office. e.g. Quick: name a spreadsheet that ISN’T Excel – there are some “free” ones but you get the point …

    The point is that human beings are creatures of habit. After a certain age – if you have “always” used product “x” then you are probably going to keep on using product “x” simply because it is what you have “always used.”

    This fact is well known – and it is why marketing to the “younger demographic” is so profitable/prized.

    ALL OF WHICH MEANS – that if you can convince a sizable share of the “youth market” that your drink is “cool” (or whatever the kids say in 2022) – then you will (probably) have created a lifelong customer

    Taste Tests

    Back to the “cola wars”…

    The Pepsi Challenge deserves a place in the marketing hall of fame — BUT it is a rigged game.

    The “Pepsi challenge” was set up as a “blind taste test.” The “test subject” had two unmarked cups placed in front of them – one cup containing Pepsi and the other containing Coke.

    The person being tested drinks from one cup, then from the second cup, and then chooses which one they prefer.

    Now, according to Pepsi – people preferred Pepsi to Coke by a 2:1 margin. Which means absolutely nothing.

    The problem with the “taste test” is that the person tastes one sugary drink, and then immediately tastes a second sugary drink. SO being able to discern the actual taste difference between the two is not possible.

    If you wanted an honest “taste test” then the folks being tested should have approached the test like a wine tasting. e.g. “swish” the beverage back and forth, suck in some air to get the full “flavor”, and then spit it out. Maybe have something to “cleanse the palate” between the drinks …

    (remember “flavor” is a combination of “taste” and “smell”)

    For the record – yes, I think Coke and Pepsi taste different – BUT the difference is NOT dramatic.

    The documentary folks interviewed Coke and Pepsi executives that worked at the respective companies during the “cola wars” – and most of those folks were willing to take the “Pepsi Challenge”

    A common complaint was that both drinks tasted the same – and if you drink one, then drink another they DO taste the same – i.e. you are basically tasting the first drink “twice” NOT two unique beverages.

    fwiw: most of the “experts” ended up correctly distinguishing between the two – but most of them took the time to “smell” each drink, and then slowly sip. Meanwhile the “Pepsi Challenge” in the “field” tended to be administered in a grocery store parking lot – which doesn’t exactly scream “high validity.”

    ANYWAY – you can draw a dotted line directly from the “Pepsi Challenge” (as un-scientific as it was) to “New Coke” – i.e. the “Pepsi Challenge” convinced the powers that be at Coke that they needed to change.

    So again, the “Pepsi Challenge” was great marketing but it wasn’t a fair game by any means.

    fwiw: The documentary (“Cola Wars” from the History Channel in 2019) is interesting from a branding and marketing point of view. It was on hoopladigital, and is probably available online elsewhere …

    Difference between “sales” and “marketing”

    If you are looking at a “business statement”/profit and loss statement of some kind – the “top line” is probably gonna be “total revenue” (i.e. “How much did the company make”). The majority of “revenue” is then gonna be “sales” related in some form.

    SO if you make widgets for $1 and sell them for $2: sell 100 widgets and your “total revenue” (the top line) will be $200, your “cost of goods sold” will be $100, and the “net revenue” (the “bottom line”) will be total revenue minus cost of goods sold – i.e. $100 in this extremely simple example.

    In the above example the expense involved in “selling widgets” is baked into the $1 “cost of goods sold” – so maybe the raw materials for each widget cost 50 cents, then 30 cents per widget goes to “labor”, and 20 cents per widget to sales and marketing.
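    Just to make the arithmetic concrete, here is a minimal sketch in Python – the widget numbers are the hypothetical ones from above, not real figures.

    ```python
    # Minimal P&L sketch using the hypothetical widget numbers above.
    units_sold = 100
    price_per_widget = 2.00             # selling price
    materials_per_widget = 0.50         # raw materials
    labor_per_widget = 0.30             # labor
    sales_marketing_per_widget = 0.20   # sales and marketing

    cost_per_widget = materials_per_widget + labor_per_widget + sales_marketing_per_widget

    total_revenue = units_sold * price_per_widget       # "top line"    -> 200.0
    cost_of_goods_sold = units_sold * cost_per_widget   # -> 100.0
    net_revenue = total_revenue - cost_of_goods_sold    # "bottom line" -> 100.0

    print(f"Top line: ${total_revenue:.2f}  COGS: ${cost_of_goods_sold:.2f}  Bottom line: ${net_revenue:.2f}")
    ```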

    Then “sales” covers everything involved in actually getting a widget to the customer, while “marketing” is about finding the customer and then educating them about how wonderful your widgets are – and of course how they can buy a widget. i.e. marketing and sales go hand in hand but they are not the same thing.

    The “widget market” is all of the folks that might want to use widgets. “Market share” is then the number of folks that use a specific company’s widgets.

    Marketing famously gets discussed as “5 P’s” — Product, Place, Price, Promotion, and People.

    Obviously the widget company makes “widgets” (Product)- but should they (A) strive to make the highest quality widget possible that will last for years (i.e. “expensive to produce”) or should they (B) make a low cost, disposable widget?

    Well, the answer is “it depends” – and some of the factors involved in the “Product” decision are the other 4 P’s — which will change dramatically between scenario A and B.

    A successful company will understand the CUSTOMER and how the customer uses “widgets” before deciding to venture into the “widget market space”

    This is why you hear business folks talk about “size of markets” and “price sensitivity of markets.” If you can’t make a “better” widget or a less expensive widget – then you are courting failure …

    SO Coke and Pepsi are both “mature” companies that have established products, methods and markets – so growing their market share requires something more than just telling folks that “our product tastes good”

    In the “Cola Wars” documentary they point out the fact that the competition between Coke and Pepsi served to grow the entire “soft drink market” – so no one really “lost” the cola wars. e.g. in 2020 the “global soft drink market” was valued at $220 BILLION – but the market for “soft drinks” fragmented as it grew.

    The mini-“business 101” class above illustrates why both Coke and Pepsi aggressively branched out into “tea” and “water” products since the “Cola wars.”

    It used to be that the first thing Coke/Pepsi would do when moving into a new market was to build a “bottling plant.” Then “syrups” could be shipped to the different markets – and “bottled” close to where they would be consumed – which saves $$ on shipping costs.

    I suppose if you are a growing “beverage business” then selling “drink mix” online might be a profitable venture – unless you happen to have partners in “distant markets” that can bottle and distribute your product – i.e. Coke and Pepsi are #1 and #2 in the soft drink market and no one is likely to challenge either company anytime soon.

    “Soft drinks” is traditionally defined as “non-alcoholic” – so the $220 billion is spread out over a lot of beverages/companies. Coke had 20% of that market and Pepsi 10% – but they are still very much the “big players” in the industry. The combined market share of Coke and Pepsi equals the market share of the next 78 companies combined (e.g. #3 is Nestle, #4 Suntory, #5 Danone, #6 Dr Pepper Snapple, #7 Red Bull).

    My takeaway …

    umm, I got nothing. This turned into a self-indulgent writing exercise. Thanks for playing along.

    In recent years PepsiCo has been driving growth by expanding into “snacks” – so a “Cola wars 2” probably isn’t likely …

    I’m not looking to go into the soft drink business – but it is obviously still a lucrative market. I had a recipe for “home made energy drink” once upon a time – maybe I need to find that again …

  • Modern “basics” of I.T.

    Come my friends, let us reason together … (feel free to disagree, none of this is dogma)

    There are a couple of “truisms” that APPEAR to conflict –

    Truism 1:

    The more things change the more they stay the same.

    … and then …

    Truism 2:

    The only constant is change.

    Truism 1 seems to imply that “change” isn’t possible while Truism 2 seems to imply that “change” is the only possibility.

    There are multiple ways to reconcile these two statements – for TODAY I’m NOT referring to “differences in perspective.”

    Life is like a dogsled team. If you aren’t the lead dog, the scenery never changes.

    (Lewis Grizzard gets credit for ME hearing this, but he almost certainly didn’t say it first)

    Consider that we are currently travelling through space and the earth is rotating at roughly 1,000 miles per hour – but sitting in front of my computer writing this, I don’t perceive that movement. Both the dogsled and my relative lack of perceived motion are examples of “perspective” …

    Change

    HOWEVER, “different perspectives” or points of view isn’t what I want to talk about today.

    For today (just for fun) imagine that my two “change” truisms are referring to different types of change.

    Truism 1 is “big picture change” – e.g. “human nature”/immutable laws of the universe.

    Which means “yes, Virginia there are absolutes.” Unless you can change the physical laws of the universe – it is not possible to go faster than the speed of light. Humanity has accumulated a large “knowledge base” but “humans” are NOT fundamentally different than they were 2,000 years ago. Better nutrition, better machines, more knowledge – but humanity isn’t much different.

    Truism 2 can be called “fashion“/style/”what the kids are doing these days” – “technology improvements” fall squarely into this category. There is a classic PlayStation 3 commercial that illustrates the point.

    Once upon a time:

    • mechanical pinball machines were “state of the art.”
    • The Atari 2600 was probably never “high tech” – but it was “affordable and ubiquitous” tech.
    • no one owned a “smartphone” before 1994 (the IBM Simon)
    • the “smartphone app era” didn’t start until Apple released the iPhone in 2007 (but credit for the first “App store” goes to someone else – maybe NTT DoCoMo?)

    SO fashion trends come and go – but the fundamental human needs being serviced by those fashion trends remain unchanged.

    What business are we in?

    Hopefully, it is obvious to everyone that it is important for leaders/management to understand the “purpose” of their organization.

    If someone is going to “lead” then they have to have a direction/destination. e.g. A tourist might hire a tour guide to “lead” them through interesting sites in a city. Wandering around aimlessly might be interesting for awhile – but could also be dangerous – i.e. the average tourist wants some guidance/direction/leadership.

    For that “guide”/leader to do their job they need knowledge of the city AND direction. If they have one OR the other (knowledge OR direction), then they will fail at their job.

    The same idea applies to any “organization.” If there is no “why”/direction/purpose for the organization then it is dying/failing – regardless of P&L.

    Consider the U.S. railroad system. At one point railroads were a huge part of the U.S. economy – the rail system opened up the western part of the continent and ended the “frontier.”

    However, a savvy railroad executive would have understood that people didn’t love railroads – what people valued was “transportation.”

    Just for fun – get out any map and look at the location of major cities. It doesn’t have to be a U.S. map.

    The point I’m working toward is that throughout human history, large settlements/cities have centered around water. Either ports to the ocean or next to riverways. Why? Well, obviously humans need water to live but also “transportation.”

    The problem with waterways is that going with the current is much easier than going against the current.

    SO this problem was solved first by “steam powered boats” and then railroads. The early railroads followed established waterways connecting established cities. Then as railroad technology matured towns were established as “railway stations” to provide services for the railroad.

    Even as the railroads became a major portion of the economy – it was NEVER about the “railroads,” it was about “transportation”

    fwiw: then the automobile industry happened – once again, people don’t care so much about “cars,” what they want/need is “transportation”

    If you are thinking “what about ‘freight’ traffic” – well, this is another example of the tools matching the job. Long haul transportation of “heavy” items is still efficiently handled by railroads and barges – it is “passenger traffic” that moved on …

    We could do the same sort of exercise with newspapers – i.e. I love reading the morning paper, but the need being satisfied is “information” NOT a desire to just “read a physical newspaper”

    What does this have to do with I.T.?

    Well, it has always been more accurate to say that “information technology” is about “processing information” NOT about the “devices.”

    full disclosure: I’ve spent a lifetime in and around the “information technology” industry. FOR ME that started as working on “personal computers” then “computer networking”/LAN administration – and eventually I picked up an MBA with an “Information Management emphasis”.

    Which means I’ve witnessed the “devices” getting smaller, faster, more affordable, as well as the “networked personal computer” becoming de rigueur. However, it has never been about “the box” – i.e. most organizations aren’t “technology companies” but every organization utilizes “technology” as part of their day to day existence …

    Big picture: The constant is that “good I.T. practices” are not about the technology.

    Backups

    When any I.T. professional says something like “good backups” solve/prevent a lot of problems it is essential to remember how a “good backup policy” functions.

    Back in the day folks would talk about a “grandfather/father/son” strategy – if you want to refer to it as “grandmother/mother/daughter” the idea is the same. At least three distinct backups – maybe a “once a month” complete backup that might be stored in a secure facility off-site, a “once a week” complete backup, and then daily backups that might be “differential.”
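    As a concrete (and purely hypothetical) illustration, here is a minimal sketch of how a grandfather/father/son rotation might decide which backup to run on a given date – the retention schedule below is an assumption for the example, not a prescription.

    ```python
    from datetime import date

    def backup_type_for(day: date) -> str:
        """Pick a backup tier for a given date in a simple GFS rotation.

        Assumed schedule: monthly full on the 1st (grandfather, stored off-site),
        weekly full every Sunday (father), differential backups the other days (son).
        """
        if day.day == 1:
            return "monthly full (off-site)"
        if day.weekday() == 6:  # Sunday
            return "weekly full"
        return "daily differential"

    # Example: what runs over the first week of a month
    for d in range(1, 8):
        print(date(2022, 8, d), "->", backup_type_for(date(2022, 8, d)))
    ```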

    It is important to remember that running these backups is only part of the process. The backups also need to be checked on a regular basis.

    Checking the validity/integrity of backups is essential. The time to check your backups is NOT after you experience a failure/ransomware attack.

    Of course how much time and effort an organization should put into their backup policy is directly related to the value of their data. e.g. How much data are you willing to lose?

    Just re-image it

    Back in the days of the IBM PC/XT, if/when a hard drive failed it might take a day to get the system back up. After installing the new hard drive, formatting the drive and re-installing all of the software was a time intensive manual task.

    Full “disk cloning” became an option around 1995. “Ghosting” a drive (i.e. “cloning”) belongs in the acronym Hall of Fame — I’m told it was supposed to stand for “general hardware-oriented system transfer.” The point being that now if a hard drive failed, you didn’t have to manually re-install everything.

    Jump forward 10 years and Local Area Networks are everywhere – computer manufacturers had been including ‘system restore disks’ for a long time AND software to clone and manage drives is readily available. The “system cloning” features get combined with “configuration management” and “remote support” and this is the beginning of the “modern I.T.” era.

    Now it is possible to “re-image” a system as a response to software configuration issues (or malware). Disk imaging is not a replacement for a good backup policy – but it reduced “downtime” for hardware failures.

    The more things change …

    Go back to the 1980’s/90’s and you would find a lot of “dumb terminals” connecting to a “mainframe” type system (well, by the 1980s it was probably a “minicomputer” not a full blown “mainframe”).

    A “dumb terminal” has minimal processing power – enough to accept keyboard input and provide monitor output, and connect to the local network.

    Of course those “dumb terminals” could also be “secured” so there were good reasons for keeping them around for certain installations. e.g. I remember installing a $1,000 expansion card into new late 1980’s era personal computers to make it function like a “dumb terminal” – but that might have just been the Army …

    Now in 2022 we have “Chromebooks” that are basically the modern version of “dumb terminals.” Again, the underlying need being serviced is “communication” and “information” …

    All of which boils down to this: the “basics” of information processing haven’t really changed. The ‘personal computer’ is a general purpose machine that can be configured for various industry specific purposes. Yes, the “era of the PC” has been over for 10+ years but the need for ‘personal computers’ and ‘local area networks’ will continue.

  • Industry Changing Events and “the cloud”

    Merriam-Webster tells me that etymology is “the history of a linguistic form (such as a word)” (the official definition goes on a little longer – click on the link if interested …)

    The last couple weeks I’ve run into a couple of “industry professionals” that are very skilled in a particular subset of “information technology/assurance/security/whatever” but obviously had no idea what “the cloud” consists of in 2022.

    Interrupting and then giving an impromptu lecture on the history and meaning of “the cloud” would have been impolite and ineffective – so here we are 😉 .

    Back in the day …

    Way back in the 1980’s we had the “public switched telephone network” (PSTN) in the form of (monopoly) AT&T. You could “drop a dime” into a pay phone and make a local call. “Long distance” was substantially more – with the first minute even more expensive.

    The justification for higher connection charges and then “per minute” charges was simply that the call was using resources in “another section” of the PSTN. How did calls get routed?

    Back in 1980 if you talked to someone in the “telecommunications” industry they might have referred to a phone call going into “the cloud” and connecting on the other end.

    (btw: you know all those old shows where they need “x” amount of time to “trace” a call – always a good dramatic device, but from a tech point of view the “phone company” knew where each end of the call was originating – you know, simply because that was how the system worked)

    I’m guessing that by the breakup of AT&T in 1984 most of the “telecommunications cloud” had gone digital – but I was more concerned with football games in the 1980s than telecommunications – so I’m honestly not sure.

    In the “completely anecdotal” category “long distance” had been the “next best thing to being there” (a famous telephone system commercial – check youtube if interested) since at least the mid-1970s – oh, and “letter writing”(probably) ended because of low cost long distance not because of “email”

    Steps along the way …

    Important technological steps along the way to the modern “cloud” could include:

    • the first “modem” in the early 1960s – that is a “modulator”/”demodulator” if you are keeping score. A device that could take a digital signal and convert it to an analog wave for transmission over the PSTN on one end of the conversation, and another modem could reverse the process on the other end.
    • Ethernet was invented in the early 1970’s – which allowed computers on the same local network to talk to each other. You are probably using some flavor of Ethernet on your LAN
    • TCP/IP was “invented” in the 1970’s then became the language of ARPANET in the early 1980’s. One way to define the “Internet” is as a “large TCP/IP network” – ’nuff said

    that web thing

    Tim Berners-Lee gets credit for “inventing” the world wide web in 1989 while at CERN. Which made “the Internet” much easier to use – and suddenly everyone wanted a “web site.”

    Of course the “personal computer” needed to exist before we could get large scale adoption of ANY “computer network” – but that is an entirely different story 😉

    The very short version of the story is that personal computer sales greatly increased in the 1990s because folks wanted to use that new “interweb” thing.

    A popular analogy for the Internet at the time was as the “information superhighway” – with a personal computer using a web browser being the “car” part of the analogy.

    Virtualization

    Google tells me that “virtualization technology” actually goes back to the old mainframe/time-sharing systems in the 1960’s when IBM created the first “hypervisor.”

    A “hypervisor” is what allows the creation of “virtual machines.” If you think of a physical computer as an empty warehouse that can be divided into distinct sections as needed then a hypervisor is what we use to create distinct sections and assign resources to those sections.

    The ins and outs of virtualization technology are beyond the scope of this article BUT it is safe to say that “commodity computer virtualization technology” was an industry changing event.

    The VERY short explanation is that virtualization allows for more efficient use of resources which is good for the P&L/bottom line.

    (fwiw: any technology that gets accepted on a large scale in a relatively short amount of time PROBABLY involves saving $$ – but that is more of a personal observation than an industry truism.)

    Also important was the development of “remote desktop” software – which would have been called “terminal access” before computers had “desktops.”

    e.g. Wikipedia tells me that Microsoft’s “Remote Desktop Protocol” was introduced in Windows NT 4.0 – which ZDNet tells me was released in 1996 (fwiw: some of my expired certs involved Windows NT).

    “Remote access” increased the number of computers a single person could support which qualifies as another “industry changer.” As a rule of thumb if you had more than 20 computers in your early 1990s company – you PROBABLY had enough computer problems to justify hiring an onsite tech.

    With remote access tools not only could a single tech support more computers – they could support more locations. Sure in the 1990’s you probably still had to “dial in” since “always on high speed internet access” didn’t really become widely available until the 2000s – but as always YMMV.

    dot-com boom/bust/bubble

    There was a “new economy” gold rush of sorts in the 1990s. Just like gold and silver exploration fueled a measurable amount of “westward migration” into what was at the time the “western frontier” of the United States – a measurable amount of folks got caught up in “dot-com” hysteria and “the web” became part of modern society along the way.

    I remember a lot of talk about how the “new economy” was going to drive out traditional “brick and mortar” business. WELL, “the web” certainly goes beyond “industry changing” – but in the 1990s faith in an instant transformation of the “old economy” into a web dominated “new economy” reached zeitgeist proportions …

    In 2022 some major metropolitan areas trace their start to the gold/silver rushes in the last half of the 19th century (San Francisco and Denver come to mind). There are also a LOT of abandoned “ghost towns.”

    In the “big economic picture” the people running saloons/hotels/general stores in “gold rush areas” had a decent chance of outliving the “gold rush” – assuming that there was a reason for the settlement to be there other than “gold mining”

    The “dot-com rush” equivalent was that a large number of investors were convinced that a company could stay a “going concern” even if it didn’t make a profit. However – just like the people selling supplies to gold prospectors had a good chance of surviving the gold rush – the folks selling tools to create a “web presence” did alright – i.e. in 2022 the survivors of the “dot-com bubble” are doing very well (e.g. Amazon, Google)

    Web Hosting

    In the “early days of the web” establishing a “web presence” took (relatively) arcane skills. The joke was that if you could spell HTML then you could get a job as a “web designer” – ok, maybe it isn’t a “funny” joke – but you get the idea.

    An in depth discussion of web development history isn’t required – pointing out that web 1.0 was the time of “static web pages” is enough.

    If you had a decent internet service provider they might have given you space on their servers for a “personal web page.” If you were a “local” business you might have been told by the “experts” to not worry about a web site – since the “web” would only be useful for companies with a widely dispersed customer base.

    That wasn’t bad advice at the time – but the technology needed to mature. The “smart phone” (Apple 2007) motivated the “mobile first” development strategy – if you can access the web through your phone, then it increases the value of “localized up to date web information.”

    “Web hosting” was another of those things that was going to be “free forever” (e.g. one of the tales of “dot-com bubble” woes was “GeoCities”). Which probably slowed down “web service provider” growth – but that is very much me guessing.

    ANYWAY – in web 1.0 (when the average user was connecting by dial up) the stress put on web servers was minimal – so simply paying to rent space on “someone else’s computer” was a viable option.

    The next step up from “web hosting” might have been to rent a “virtual server” or “co-locate” your own server – both of which required more (relatively) arcane skills.

    THE CLOUD

    Some milestones worth pointing out:

    • 1998 – VMware “Workstation” released (virtualization on the desktop)
    • 1998 – “Google search” was another “industry changing” event – ’nuff said
    • 2001 – VMware ESX (server virtualization)
    • 2004 – Facebook launches – noteworthy, but not “industry changing”
    • 2005 – Intel released the first CPUs with “Intel Virtualization Technology” (VT-x)
    • 2006 – Amazon Web Services (AWS)

    Officially Amazon described AWS as providing “IT infrastructure services to businesses in the form of web services” – i.e. “the cloud”

    NIST tells us that –

    Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.

    NIST SP 800-145

    If we do a close reading of the NIST definition – the “on-demand” and “configurable” portions are what differentiates “the cloud” from “using other folks computers/data center.”

    I like the “computing as a utility” concept. What does that mean? Glad you asked – e.g. Look on a Monopoly board and you will see the “utility companies” listed as “Water Works” and “Electric Company.”

    i.e. “water” and “electric” are typically considered public utilities. If you buy a home you will (probably) get the water and electric changed into your name for billing purposes – and then you will pay for the amount of water and electric you use.

    BUT you don’t have to use the “city water system” or local electric grid – you could choose to “live off the grid.” If you live in a rural area you might have a well for your water usage – or you might choose to install solar panels and/or a generator for your electric needs.

    If you help your neighbors in an emergency by allowing them access to your well – or maybe connecting your generator to their house – you are a very nice neighbor, BUT you aren’t a “utility company” – i.e. your well/generator won’t have the capacity that the full blown “municipal water system” or electric company can provide.

    Just like if you have a small datacenter and start providing “internet services” to customers – unless you are big enough to be “ubiquitous, convenient, and on-demand” then you aren’t a “cloud provider.”

    Also note the “as a service” aspect of the cloud – i.e. when you sign up you will agree to pay for what you use, but you aren’t automatically making a commitment for any minimal amount of usage.

    As opposed to “web hosting” or “renting a server,” where you will probably agree to a monthly fee and a minimum term of service.

    Billing options and service capabilities are obviously vendor specific. As a rule of thumb – unless you have “variable usage,” using “the cloud” PROBABLY won’t save you money over “web hosting”/”server rental.”

    The beauty of the cloud is that users can configure “cloud services” to automatically scale up for an increase in traffic and then automatically scale down when traffic decreases.

    e.g. imagine a web site that has very high traffic during “business hours” but then minimal traffic the other 16 hours of the day. A properly configured “cloud service” would scale up (costing more $$) during the day and then scale down (costing fewer $$) at night.
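    Here is a quick back-of-the-envelope sketch of that idea – the instance counts and hourly rate are made-up numbers, purely to show why “pay for what you use” beats “pay for peak capacity 24/7” when traffic is variable.

    ```python
    # Hypothetical numbers: 8 instances needed during an 8-hour business day,
    # 2 instances the other 16 hours, at an assumed $0.10 per instance-hour.
    rate = 0.10  # $ per instance-hour (made-up)

    # Fixed capacity: provision for the daytime peak around the clock
    fixed_cost = 8 * 24 * rate             # 8 instances * 24 hours     -> $19.20/day

    # Auto-scaled: pay only for what each part of the day actually uses
    scaled_cost = (8 * 8 + 2 * 16) * rate  # business hours + off hours -> $9.60/day

    print(f"Provisioned for peak: ${fixed_cost:.2f} per day")
    print(f"Auto-scaled:          ${scaled_cost:.2f} per day")
    ```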

    Yes, billing options become a distinguishing element of the “cloud” – which further muddies the water.

    Worth pointing out is that if you are a “big internet company” you might get to the point where it is in your company’s best interest to build your own datacenters.

    This is just the classic “rent” vs “buy” scenario – i.e. if you are paying more in “rent” than it would cost you to “buy” then MAYBE “buying your own” becomes an option (of course “buying your own” also means “maintaining” and “upgrading” your own). This tends to work better in real estate where “equity”/property values tend to increase.
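    The same reasoning can be written down as a trivial break-even check – the dollar figures below are invented purely for illustration, not quotes from any provider.

    ```python
    # Hypothetical monthly figures for a "rent vs buy" gut check.
    cloud_bill_per_month = 50_000           # what you currently pay the cloud provider
    datacenter_build_cost = 3_000_000       # one-time cost to build your own
    datacenter_run_cost_per_month = 20_000  # power, staff, maintenance, upgrades

    monthly_savings = cloud_bill_per_month - datacenter_run_cost_per_month
    break_even_months = datacenter_build_cost / monthly_savings
    print(f"Building your own breaks even after ~{break_even_months:.0f} months")  # ~100 months
    ```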

    Any new “internet service” that strives to be “globally used” will (probably) start out using “the cloud” – and then if/when they are wildly successful, start building their own datacenters while decreasing their usage of the public cloud.

    Final Thoughts

    It Ain’t What You Don’t Know That Gets You Into Trouble. It’s What You Know for Sure That Just Ain’t So

    Artemus Ward

    As a final thought – “cloud service” usage was $332.3 BILLION in 2021 up from $270 billion in 2020 (according to Gartner).

    There isn’t anything magical about “the cloud” – but it is a little more complex than just “using other people’s computers.”

    The problem with “language” in general is that there are always regional and industry differences. e.g. “Salesforce” and “SAP” fall under the “cloud computing” umbrella – but Salesforce uses AWS to provide their “Software as a Service” product and SAP uses Microsoft Azure.

    I just spent 2,000 words trying to explain the history and meaning of “the cloud” – umm, maybe a cloud by any other name would still be vendor specific

    HOWEVER I would be VERY careful with choosing a cloud provider that isn’t offered by a “big tech company” (i.e. Microsoft, Amazon, Google, IBM, Oracle). “Putting all of your eggs in one basket” is always a risky proposition (especially if you aren’t sure that the basket is good in the first place) — as always caveat emptor …

  • “Leadership”, “Teaching”, and “Education”

    Just some random thoughts – Starting off with a famous quote attributed to Albert Einstein –

    If you can’t explain it simply, you don’t understand it well enough

    Albert Einstein

    Leadership

    The Einstein quote came to mind for a “2 drink story” reason that I will not relate here.

    I’ve been a “student of leadership” going back to my days playing “high school sports.” Athletics can become a “leadership classroom” – with “wins/losses” providing feedback – and obvious “leadership” lessons involved in “team performance”.

    If a team is going to be “successful” then the “coach” needs to tailor their “coaching” to the level of the athletes. e.g. Coaching a group of 10 year old athletes will obviously be different than coaching a group of 20 year old athletes.

    SO in “leadership education” they might call this “situational leadership.” In coaching this is the old “you need to master the basic skills first” concept.

    You need to master crawling before you learn to walk. You need to master walking before you can run. Then riding a bike might take care of itself when/if you are ready – assuming you have “learned how to learn.”

    Teaching

    The task facing the coach/teacher/leader becomes helping the athletes/students/employees “master” the required skills.

    The thought on my mind is that how much the coach “knows” isn’t as important as how much they can help the athlete learn.

    “Playing” a sport requires different skills than “coaching” a sport. Just because someone was a great athlete does NOT mean they can teach those skills to others. Just because someone wasn’t a great athlete doesn’t mean they won’t be a great coach.

    (… examples abound of both “great athletes” becoming great coaches, “great athletes” becoming “meh” coaches, as well as “average athletes” becoming great coaches – but that isn’t important at the moment)

    Of course having great athletes can make an average coach look like a great coach – but that also isn’t my point today.

    I’ve watched a lot of “video lectures” given by highly qualified instructors. Occasionally I run into an instructor/presenter where the only thing I get from their presentation is that THEY appear to know a lot – i.e. they didn’t “teach me” anything.

    e.g. one instructor seemed to be reading from the manual – I’m sure in their head they were “transferring information” but the lessons were unwatchable. IF I want to read the manual – I can find the manual and read it. What I want from an instructor is examples illustrating the material NOT just a recitation of the facts.

    Again, a presenter/teacher bombarding the audience with the breadth and depth of their knowledge might be satisfying to the presenter’s ego – but not much else.

    I’m a fan of “storytelling” as an instructional tool – but that means “tell relevant stories that illustrate a point” NOT “vent to a captive audience.”

    Education

    Tailoring your message to the audience is probably “presenting 101.” It could also be “coaching 101” and “teaching 101.”

    “Education” then becomes the end product of coaching/teaching/leadership and is ALWAYS an individualized process.

    The worst coach/teacher might still have the occasional championship athlete/high achieving student. My experience has been that the “bad” coach/teacher tends to blame the athletes/students when things go wrong but takes all the credit if something goes right.

    MEANWHILE – the “good” coaches/teachers are tailoring their instruction to the level of their athletes/students and recognize that, while getting an education is always an “individual process”, the “process of education” is a “group effort.”

    Even if you go to the library and get a book on a subject – someone had to write the book for you to learn the material.

    Learning to Teach

    Those “bad” coaches/teachers PROBABLY don’t really understand their sport/subject – which is part of what Mr Einstein’s quote points out.

    I have had “not so good” teachers tell me a subject is “easy” and that the class needs to memorize the textbook. Yes, the subject might be “easy” to some students – but not ALL of the students – and rote memorization as a means of mass instruction isn’t a particularly effective use of time.

    I have also had excellent teachers tell me THEY learn something each time they teach a class. They don’t try to impress with their “vast knowledge.” They will try to teach the students what is “important” (some memorization might be required but not as the major form of instruction). These instructors tend to be realistic about how much can be “taught” and emphasize the individual effort required to “learn” anything.

    “You will get out of it what you put into it” is imprinted in my mind for some reason. This has morphed into my personal philosophy that “grades in a class tend to be an indication of effort and interest NOT intelligence.” Not everyone can get an “A” in every class, but if they put forth the effort everyone can “pass” the class.

    ANYWAY – If someone teaches for 5 years and then looks back at their first year and DOESN’T see improvement in both teaching skills and mastery of the subject – well, they have 1 year of experience 5 times NOT “5 years” experience.

  • statistics vs analytics, sports in general and bowling in particular

    what a title – first the youtube video demo/pitch for the “bowling analytics” product …

    https://www.youtube.com/watch?v=0JKbL4_UEwc&t=1385s

    statistics vs analytics

    Yes, there is a difference between “statistics” and “analytics” – maybe not a BIG difference but there is a difference.

    “Statistics” is about collecting and interpreting “masses of numerical data.” “Analytics” is about logical analysis – probably using “statistics”.

    Yeah, kinda slim difference – the point being that there is a difference between “having the numbers” and “correctly interpreting the numbers.”

    “Data analysis” becomes an exercise in asking questions and testing answers – which might have been how a high level “statistician” described their job 100 years ago – i.e. I’m not dogmatic about the difference between “statistics” and “analytics”, just establishing that there are connotations involved.

    Analytics and Sports

    Analytics as a distinct field has gained popularity in recent years. In broad strokes the fields of “data science”, “artificial intelligence”, and “machine learning” all mean “analytics.”

    For a while the term “data mining” was popular – back when the tools to manage “large data sets” first became available.

    I don’t want to disparage the terms/job titles – the problem is that “having more data” and having “analysis to support decisions” does not automatically mean “better leadership.”

    It simply isn’t possible to ever have “all of the information” but it is very easy to convince “management types” that they have “data” supporting their pet belief.

    e.g. I always like to point out that there are “trends” in baby name popularity (example site here) – but making any sort of conclusion from that data is probably specious.

    What does this have to do with “sports” – well, “analytics” and sports “management” have developed side by side.

    Baseball’s word for the concept of “baseball specific data analysis” dates back to 1982 – about the time that “personal computers” were starting to become affordable and usable by “normal” folks.

    My roundabout point today is that most “analytics” fall into the “descriptive” category by design/definition.

    e.g. if you are managing a ‘sportball’ team and have the opportunity to select players from a group of prospects – how do you decide which players to pick?

    Well, in 2022 the team is probably going to have a lot of ‘sportball’ statistics for each player – but do those statistics automatically mean a player is a “good pick’ or a “bad pick”? Obviously not – but that is a different subject.

    The team decision process will (probably) include testing players’ physical abilities and watching the players work out – but neither of those 100% equates to “playing the game against other skilled opponents.”

    That player with great statistics might have been playing against a lower level of competition. That player that has average “physical ability test scores” might be a future Hall of Famer because of “hidden attributes”

    i.e. you can measure how fast an athlete can run, and how high they can jump – but you can’t measure how much they enjoy playing the game.

    MEANWHILE back at the ranch

    Now imagine that you are an athlete and you want to improve your ‘sportball’ performance. How do you decide what to work on?

    Well, the answer to that question is obviously going to be very sport AND athlete specific.

    However, your ‘sportball’ statistics are almost certainly not going to help you make decisions on how/what you should be trying to develop – i.e. those statistics will be a reflection of how well you have prepared, but do not directly tell you how to prepare.

    Bowling

    Full disclosure – I am NOT a competitive bowler. I have participated/coached other sports – but I’m a “casual bowler.” i.e. if I have misinterpreted the sport, please let me know 😉

    Now imagine that someone has decided that they want to improve their “bowling average” – how should they approach the problem?

    • Step 1 would be to establish a baseline from which improvements can be measured.
    • Step 2 would be to determine what you need to “work on” to improve your scores from Step 1.
    • Step 3 would be to establish a series of “practices” to work on the items from Step 2.
    • Step 4 would be to re-test the items from Step 1 and adjust steps 2 and 3 accordingly.

    Sure, I just described the entire field of “management” and/or “coaching” – but how well a manager/coach helps athletes through the above (generic) process will be directly reflected in wins/losses in competition.

    Remember that the old axiom that “practice makes perfect” is a little misleading:

    Practice does not make perfect. Only perfect practice makes perfect.

    -Vince Lombardi

    Back to bowling – bowling every week might be fun, but won’t automatically mean “better performance.”

    Keeping track of your game scores might be interesting, but also won’t automatically mean “better scores.”

    I’m told that the three factors for the “amateur bowler” to work on are:

    1. first ball pin average
    2. single pin spare %
    3. multipin spare %

    In a “normal” game there are 10 pins possible each frame. The bowler gets two balls to knock down all 10.

    If your “first ball pin average” is 10, then you are a perfect bowler – and knock all the pins down every frame with your first ball.

    To be honest I haven’t seen any real data on “first ball pin averages” – it probably exists in much the same manner that “modern baseball statistics” can be derived from old “box scores” – but I’m told that a first ball pin average around 9 is the goal.

    If you consistently average 9 pins on your first throw – then you have a consistent “strike” delivery.

    Which then means that IF you consistently knock down 9 pins – you will have to pickup “single pin spares” on a regular basis.

    Then “multipin spares” are going to be an exercise in statistics/time and fate. Obviously if you average 9 pins on your first ball, the number of “multipin spare” opportunities should be relatively small.

    SO those are the data points being tracked with my “bowling analytics” application.
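    As a rough illustration (not the actual application, just a sketch of the idea), here is how those three numbers could be computed from per-frame data – the frame format is an assumption for the example: (first-ball pin count, pins still standing after the second ball).

    ```python
    # Each frame is (first_ball_pins, pins_left_after_second_ball).
    # Hypothetical game: a few strikes, some spares, a couple of open frames.
    frames = [(9, 0), (10, 0), (8, 0), (9, 1), (7, 0), (10, 0), (9, 0), (8, 2), (9, 0), (10, 0)]

    first_ball_avg = sum(first for first, _ in frames) / len(frames)

    single_pin_tries = [f for f in frames if f[0] == 9]   # one pin left standing
    multi_pin_tries = [f for f in frames if f[0] < 9]     # two or more pins left
    single_pin_pct = 100 * sum(1 for _, left in single_pin_tries if left == 0) / len(single_pin_tries)
    multi_pin_pct = 100 * sum(1 for _, left in multi_pin_tries if left == 0) / len(multi_pin_tries)

    print(f"first ball pin average: {first_ball_avg:.1f}")   # 8.9
    print(f"single pin spare %: {single_pin_pct:.0f}%")      # 75%
    print(f"multipin spare %: {multi_pin_pct:.0f}%")         # 67%
    ```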

  • leadership is communication, “winning”

    I wrote a long post that either needs editing or deletion – but at least it served as “pre writing” for this post.

    status quo

    If you want to be precise the “status quo” is simply the “current situation.” Which technically means that whatever is happening at the moment is the “status quo.” Usually the term implies the “normal” and ESTABLISHED state of affairs.

    “Organizational behavior 101” is that “organizations” of any size tend to work to maintain the “status quo.” This “active desire to maintain the status quo” might be called “culture” or “tradition.”

    Whatever we call it – changing the “status quo” will take conscious effort. First it has to be recognized, and then the process of change can begin.

    In a larger organization there will be written/documented procedures that formally maintain “situation normal.” In a smaller organization there are (probably) fewer “rules” but a “status quo” has been established. Then in a “startup” organization the “status quo” has to be created.

    Of course a “startup” is rarely actually starting from scratch – the folks starting the company are bringing all of their previous experience (positive and negative). e.g. If you have heard the saying “you’ll find it the same wherever you go” – that is sorta the same idea …

    change

    Also true is that “change” happens – whether we want it or not. If an organization fails to adapt to change – then EVENTUALLY it will cease to exist.

    Which means that the “status quo” of “long term successful” organizations incorporates responding to change.

    The term “learning organization” was popular a few years back. The phrase might be a meaningless management buzzword in 2021. Back in the early 1990’s the “learning organization” was probably focused on implementing relatively new technology (i.e. that “interweb” thing the kids are using). In 2021 the “learning organization” should be focused on being an “effective communication” organization …

    communication

    If you are in a “leadership” position then you are always “communicating.” The only question is “how” and “what” you are communicating.

    Communication styles can differ greatly based on individual leader preferences. HOWEVER – one of the biggest mistakes a leader can make is to “assume” that “employees” understand “leadership’s” reasoning/expectations.

    This is particularly important when a leader is trying to change company culture. In a “change” situation it is probably impossible for a leader to overcommunicate – i.e. “change” will still happen if leaders “under communicate” BUT the change will almost certainly NOT be the desired change.

    The concept of “leader intent” comes to mind – i.e. leaders should communicate the “why” as well as the “what” behind their directives. If the desire is to cut waste and be cost effective – then expressing the desire to find ways to cut waste and encourage cost effectiveness will (probably) more accurately meet “leadership’s intent” than ordering people to count sheets of paper and track paper clip usage …

    Sports ball

    The same rules apply to sports teams. Of course it is much more common for “sports philosophy” to be used in the “business” world than the other way around.

    Teams/leagues are simply organizations that have agreed to compete based on a specific set of rules/standards. The big difference being that telling “who won” and “who lost” is much easier with a scoreboard.

    Obviously with “professional sports” the only metric that matters is “winning contests.” Professional coaches and athletes are paid to win. There are no “moral victories” in pro sports.

    Of course that doesn’t mean that “losing professional sports franchise” is actually “losing money”/unprofitable OR that “championship sports franchise” is actually “making money”/profitable. But that is a much different subject.

    Winning

    “Winning” always implies a competition of some kind. In “sports ball” the rules of competition are clearly defined. e.g. You can take a look at the NFL rulebook or download the MLB rules.

    But is “winning the only thing?” Or is “just win baby” a functioning philosophy? How about “if you ain’t first, you’re last?”

    If you ain’t first, you’re last

    Reese Bobby

    Well, with all due respect to Vince Lombardi and Al Davis – both of their quotes have been taken a little out of context – as is true for a lot of “motivational quotes.”

    The quote from Talladega Nights humorously illustrates the “out of context” nature of most “win at all costs” quotes.

    Vince Lombardi was also quoted as saying that he wanted his players to place “professional football” third on their “life priority” lists – with God and family being in the first two spots.

    I never heard Al Davis try to explain his “just win baby” quote – beyond being a condemnation of other teams’ “player conduct” policies. In context he was asking for the same sort of commitment as Mr Lombardi. Mr Davis described that commitment as a “commitment to excellence” – which has also become a management buzzword in its own right (feel free to google the term).

    Meanwhile a lot of well-meaning coaches/managers have misinterpreted the “search for high performance” as a requirement for monastic dedication.

    As always the unexamined life is not worth living – but my point today is that in “non professional sports”/the real world there is ample room for “moral victories.”

    From a practical standpoint – being focused on the end product at the expense of the process that generates that product is counterproductive.

    Yeah, that is not very quotable – the idea is simply that if you take care of the small things along the way, the end result will take care of itself.

    From a sports ball perspective – that is why Vince Lombardi started each season showing the team a football and saying “Gentlemen, this is a football.” Mr Lombardi also pointed out that “Football is a game of blocking and tackling, block and tackle better than the other team and you will win.” First master the fundamentals, and then winning will follow.

    There is a famous story of John Wooden (UCLA’s 10x NCAA basketball championship coach) starting each season by teaching players how to put on their socks. Mr Wooden would explain that if a player put their socks on wrong, then the sock would “bunch-up” and the player would get a blister. If the player got a blister they couldn’t practice/play. So spending time at the beginning of each season to teach new players (and remind returning players) how to put on their socks was worthwhile.

    A stitch in time — something, something

    Continuous improvement

    Saying that the focus should be on “continuous improvement” implies that the “learning organization” infrastructure is in place.

    If you are coaching “non professional sports” then the primary focus should be on “learning to prepare” much more than “winning.” The entire point of “non professional sports” should be as part of the “educational” process NOT as a developmental program for the next higher level of competition.

    I’m not saying that a “pro coach” can’t have a positive impact on players’ lives/character – I’m just pointing out that the primary focus of “pro sports” is NOT “character development.”

    “Big time” college football and basketball are both money making machines – but obviously tend to be “coach centered.” The most successful coaches tend to be very good at recruiting talented players to come to their school.

    Which means that there also tends to be a huge difference in “player talent level” between the “Big time” college athletic programs and everyone else.

    Meanwhile in “professional sports” ALL of the players are “professionals” – as obvious as that sounds, the difference in “physical ability” between the “elite” players and the “average” players is minimal.

    SO what distinguishes the “elite” pro players from the “average” players? Preparation.

    Of course “avoiding injuries” becomes a part of the story for any longtime successful player – and “offseason preparation” becomes part of the “avoiding injuries” story.

    But time and fate will always play their part 😉

    oh yeah, one more thing

    All of which means a high school wrestler could go winless and have a “successful” season – assuming that they improved over the course of the season.

    A high school football team could be “successful” but lose more games than they win – that 4 win 6 loss team might be setting the stage for future success.

    Successful companies and “sports ball programs” will pass along a culture of continuous improvement and positive change management – the profits in stakeholder pockets or wins on the field of competition will follow a focus on fundamentals and individual development.

    thank you very much and I hope we passed the audition

  • “great resignation”, part-time employees, engagement

    I think I have commented on my love of “buzzwords” enough – that we can just jump into the “great resignation”/reshuffle/reprioritization/recognition/whatever …

    SO a significant number of people are choosing NOT to go back to jobs they obviously found “unsatisfying.” Trying to come up with a single reason “why” is pointless – because there (probably) is no SINGLE reason.

    Sure, if you sell management seminars to “upper management” then packaging some buzzword tripe that reinforces what upper management has already been doing will get you some paychecks. But the reality is that it is buzzword tripe seminars targeted at “upper management” that created the environment for the “great whatever” to transpire.

    To be honest the “gig economy” has been coming for a while. If you look at the history of humanity the aberration is the “hourly wage”/weekly schedule NOT the “gig economy.”

    For MOST of human existence the major form of “employment” was subsistence farming. The industrial revolution moved folks off of farms into factories – and also created “management” as a job category.

    Henry Ford and the assembly line is always a great example of how UNPOPULAR “factory work” tends to be with “sentient beings.”

    Higher wages cut down on turnover and the “$5 day” may have kickstarted the “middle class” – at the cost of “job satisfaction” and “purpose.”

    part-time good/part-time bad

    Of course if you are trying to build a company on “gig workers” or think that it is possible to grow/build a culture with only “part-time” employees – you are chasing an illusion.

    There was a study done way back when – probably the 1980’s – I remember reading a book in the 1990’s that summarized the “secret management miracle technique” that the study “discovered.”

    I’m sure some actual research would find the study – but since I ain’t doin’ any real research today – the short form: “major university” did a study of “global companies” and “discovered” that the employees that were emotionally involved in their work were MUCH more productive than the employees that were NOT emotionally involved in their work.

    yup, hopefully that is blindingly obvious. The (wrong) takeaway is that “pay” isn’t a major motivation to increase performance. Salary/pay/total compensation is like oxygen – if you have plenty then getting “more” is not a high priority, but if you don’t have enough, it is EXTREMELY important.

    The same with “job security” – you either have it or you don’t – but threatening employees’ job security will just motivate employees to find a better company to work for.

    The beatings will continue until morale improves

    Remember the key to “employee productivity” is “emotional engagement.” SO you could have highly engaged part-time employees just like you can have “disengaged” full-time employees.

    Communication

    How do you build “engagement?” Well, you gotta communicate in some form.

    The WORST thing “management” can do is “no communication” – this is the old “mushroom treatment”, “keep ’em in the dark and feed them excrement.” Of course if YOU hate your job and want to make sure that everyone reporting to you hates THEIR job – then the “mushroom treatment” is the tool for the job.

    If you fancy yourself a “leader” and are focused on things like “growth” and “long term success” – then regular communication is required.

    20/70/10

    One of Jack Welch’s tactics when he was running G.E. meshes neatly with the “employee engagement” theory.

    No, I don’t think Mr Welch was heavily influenced by the study in question. Mr Welch was influenced by years of ACTUAL employee performance data.

    The percentages the “engagement theory” folks came up with don’t really matter – maybe it was that the top 10% of employees were much more engaged than the other 90% AND that top 10% was also more productive than the entire other 90%.

    If memory serves the bottom 90% are also two distinct groups – maybe it was 70% “not emotionally engaged” (i.e. “emotionally neutral” – but still of some value to the organization) and the bottom 20% were “actively disengaged” (i.e. “hostile” – these folks were actively working against the organization)

    SO Mr Welch recognized that the top 20% of G.E. employees were the “high performers” – i.e. these are the folks getting big raises and promotions. The 70% were still good workers and had the potential to become top 20%-ers – so they received smaller raises and training, then the bottom 10% were “eased out” of the organization.

    Hidden in plain sight with the 20/70/10 concept is that the organization is tracking employee performance and giving regular feedback. Most companies seem to find ways to avoid giving regular “employee feedback” – for any number of convenient reasons.

    Obviously when Jack Welch was running G.E. they didn’t have a labor shortage or any issues with hiring new employees. If you are a smaller company then “easing out” the bottom 10% probably isn’t practical – but keeping someone around that is actively hostile to “company goals” is always a bad idea. How you deal with that problem employee as a small company will obviously be different than how a “large multinational conglomerate” deals with the problem (and if that “problem employee” is also a family member – well, that is another issue).
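    (For illustration only – if you already had some kind of numeric performance score per employee, the 20/70/10 split itself is just a ranking exercise. The scores and cutoffs below are hypothetical – the hard part is the performance tracking and the regular feedback, not the arithmetic.)

```python
# Hypothetical 20/70/10 split over made-up performance scores -
# illustrates the ranking idea only, not an actual HR process.
def split_20_70_10(scores):
    """scores: dict mapping employee name -> numeric performance score."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    top_cut = max(1, round(n * 0.20))
    bottom_cut = max(1, round(n * 0.10))
    return {
        "top_20": ranked[:top_cut],
        "middle_70": ranked[top_cut:n - bottom_cut],
        "bottom_10": ranked[n - bottom_cut:],
    }

team = {"alice": 92, "bob": 78, "carol": 85, "dave": 60, "erin": 88,
        "frank": 71, "grace": 95, "henry": 55, "iris": 80, "jack": 66}
print(split_20_70_10(team))
# {'top_20': ['grace', 'alice'], 'middle_70': [... 7 names ...], 'bottom_10': ['henry']}
```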

    ownership/recognition

    “Emotional engagement” is just another way of saying “ownership” – i.e. do employees feel a sense of responsibility/obligation for the performance of the company? are employees “invested” in the goals/purpose of the organization? if they are “obligated” and “invested” then they are “engaged.”

    Of course that sense of engagement can be destroyed by mistreating employees – sentient beings are NOT going to willfully work for an organization that treats them like disposable cogs in a machine for a sustained period of time.

    No, that doesn’t mean you coddle employees – it means you communicate honestly with them. No, you are not fooling anyone with the “mushroom treatment” – if you aren’t communicating people will still talk, and most likely that “internal gossip” will be negative.

    How you choose to reward employees is part of company culture – some folks are motivated by “employee of the X” type awards, some aren’t (my opinion: unless they come with a cash bonus – keep your useless award).

    IF you want employees to act like owners – you might want to consider actually making them owners. btw: The reason “CEO” compensation has outstripped “regular employee” compensation is simply because CEO’s tend to get stock options.

    Personally the CEO making a LOT more than “generic employee” doesn’t bother me in the least.

    IF the CEO is acting in the best interest of the organization, providing real leadership, and fostering a positive company culture – then it (probably) isn’t possible to pay them “too much.”

    HOWEVER – if the CEO is using the organization as their personal piggybank, and creating a negative company culture – then it (probably) isn’t possible to dismiss them “too soon.”

    Company culture

    Dan Ariely has written some books on “behavioral economics” (“Predictably Irrational” is the one I read a few years ago).

    Mr Ariely comes to mind because the conclusion of one of his experiments was that “social obligations” tended to produce higher returns than “financial compensation” – I think he had people do a monotonous task and the ones that felt a “social obligation” did the task longer than the ones that were compensated/paid.

    Connecting the dots = employees that are “engaged” have a sense of “ownership” that includes a “social obligation” beyond monetary compensation. (but remember “total compensation” is like oxygen …)

    I’m sure we could easily find (a large number of) people that “work” harder as “volunteers” for non-profit organizations than they do for their “paycheck job” – i.e. in the former they are “engaged”; in the latter they aren’t.

    Of course “creation” is always harder than “destruction” – i.e. “creating a positive company culture” requires concerted effort – if it was easy then you wouldn’t see (functionally) the same management books written/released every year …

  • buzzword bingo, pedantic-ism, and the internet

    just ranting

    One of the identifying characteristics of “expert knowledge” is understanding how everything “fits” together. True “mastery” of any field with a substantial “body of knowledge” takes time and effort. Which means there are always more people that “know enough to be dangerous” than there are “real experts.”

    Which is really just recognition of the human condition – i.e. if we had unlimited time and energy then there would be a lot more “true experts” in every field.

    There is a diminishing return on “additional knowledge” after a certain point. e.g. Does anyone really need to understand how IBM designed Token Ring networks? Well, it might be useful for historical reasons and theoretical discussion – but I’ll go out on a limb and say that if you are studying “networking,” becoming an expert on Token Ring is not worth the time.

    There are also a lot of subjects where a slightly “incorrect” understanding is part of the learning process. e.g. Remember that high school chemistry class where you learned about electrons orbiting the nucleus at various discrete “energy levels” like tiny moons orbiting a planet? Then remember that college chemistry class where they told you that isn’t the way it actually is – but don’t worry about it, everyone learns it that way.

    (random thought – just because we can’t be sure where something is, doesn’t mean it can be in two spots at the same time – just like that cat in a box – it isn’t half alive and half dead, it is one or the other, we just can’t know which one – and moving on …)

    buzzwords vs jargon vs actual understanding

    Dilbert’s “pointy haired boss” is routinely held up for ridicule for “buzzword spouting” – which – in the most negative sense of the concept – implies that the person using “buzzwords” about “subject” has a very minimal understanding of the “subject.”

    Of course the “Dilbert principle” was/is that the competent people in a company are too valuable at their current job – and so cannot be promoted to “management”. Which implies that all managers are incompetent by default/design. It was a joke. It is funny. The reality is that “management” is a different skillset – but the joke is still funny 😉

    The next step up is the folks that can use the industry “jargon” correctly. Which simply illustrates that “education” is a process. In “ordinary speech” we all recognize and understand more words than we actively use – the same concept applies to acquiring and using the specific vocabulary/“jargon” of a new field of study (whatever that field happens to be).

    However if you stay at the “jargon speaking” level you have not achieved the goal of “actual understanding” and “applied knowledge.” Yes, a lot of real research has gone into describing different “levels”/stages in the process – which isn’t particularly useful. The concept that there ARE stages is much more important than the definition of specific points in the process.

    pedants

    No one wants a teacher/instructor that is a “pedant” – you know, that teacher that knows a LOT about a subject and thinks that it is their job to display just how much they know — imagine the high school teacher that insists on correcting EVERYONE’S grammar ALL THE TIME.

    There is an old joke that claims that the answer to EVERY accounting question is “it depends.” I’m fond of applying that concept to any field where “expert knowledge” is possible – i.e. the answer to EVERY question is “it depends.”

    (… oh, and pedants will talk endlessly about how much they know – but tend to have problems applying that knowledge in the real world. Being “pedantic” is boring/bad/counter productive – and ’nuff said)

    Of course if you are the expert being asked the question, what you get paid for is understanding the factors that it “depends on.” If you actually understand the factors AND can explain it to someone that isn’t an expert – then you are a rara avis.

    In “I.T.” you usually have three choices – “fast”, “cheap” (as in “low cost”/inexpensive), and “good” (as in durable/well built – “is it heavy? then it is expensive”) – but you only get to choose two. e.g. “fast and cheap” isn’t going to be “good”, and “fast and good” isn’t going to be “inexpensive.”

    Is “Cheap and good” possible? – well, in I.T. that probably implies using open source technologies and taking the time to train developers on the system – so an understanding of “total cost of ownership” probably shoots down a lot of “cheap and good” proposals – but it might be the only option if the budget is “we have no budget” – i.e. the proposal might APPEAR “low cost” when the cost is just being pushed onto another area — but that isn’t important at the moment.

    internet, aye?

    There is an episode of the Simpsons where Homer starts a “dot com” company called Compu-Global-Hyper-Mega-Net – in classic Simpsons fashion they catch the cultural zeitgeist – I’ll have to re-watch the episode later – the point for mentioning it is that Homer obviously knew nothing about “technology” in general.

    Homer’s “business plan” was something like saying “aye” after every word he didn’t understand – which made him appear like he knew what he was talking about (at the end of the episode Bill Gates “buys him out” even though he isn’t sure what the company does – 1998 was when Microsoft was in full “antitrust defense by means of raised middle finger” – so, yes it was funny)

    (random thought: Microsoft is facing the same sort of accusations with their “OneDrive” product as they did with “Internet Explorer” – there are some important differences – but my guess is THIS lawsuit gets settled out of court 😉 )

    ANYWAY – anytime a new technology comes along, things need to settle down before you can really get past the “buzzword” phase. (“buzzword, aye?”) – so, while trying not to be pedantic, an overview of the weather on the internet in 2021 …

    virtualization/cloud/fog/edge/IoT

    Some (hopefully painless) definitions:

    first – what is the “internet” – the Merriam-Webster definition is nice, slightly more accurate might be to say that the internet is the “Merriam-Webster def” plus “that speaks TCP/IP.” i.e. the underlying “language” of the internet is something called TCP/IP

    This collection of worldwide TCP/IP connected networks is “the internet” – think of this network as “roads”

    Now “the internet” has been around for a while – but it didn’t become easy to use until Tim Berners-Lee came up with the idea for a “world wide web” circa 1989.

    While rapidly approaching pedantic levels – this means there is a difference between the “internet” and the “world wide web.” If the internet is the roads, then the web is traffic on those roads.
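    If you want to see the “roads vs traffic” distinction in the most literal sense – a few lines of Python can “speak TCP/IP” directly: open a TCP connection (the road) and send an HTTP request (the traffic). The host example.com is used purely as a stand-in:

```python
# The "road": a raw TCP connection. The "traffic": an HTTP request sent over it.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.decode("utf-8", errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"
```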

    It is “true” to say that the underlying internet hasn’t really changed since the 1980’s – but maybe a little misleading.

    Saying that we have the “same internet” today is a little like saying we have the same road system today as we did when Henry Ford introduced the Model-T. A lot of $$ has gone into upgrading the “internet” infrastructure since the 1980’s – just like countless $$ have gone into building “infrastructure” for modern automobiles …

    Picking up speed – Marc Andreessen gets credit for writing the first “modern” web browser in the early 1990s. Which kinda makes “web browsers” the “vehicles” running on the “web”

    Britannica via Google tells me that the first use of the term “cyberspace” goes back to 1982 – for convenience we will refer to the “internet/www/browser” as “cyberspace” – I’m not a fan of the term, but it is convenient.

    Now imagine that you had a wonderful idea for a service existing in “cyberspace” – back in the mid-1990’s maybe that was like Americans heading west in the mid 19th century. If you wanted to go west in 1850, there were people already there, but you would probably have to clear off land and build your own house, provide basic needs for yourself etc.

    The cyberspace equivalent in 1995 was that you had to buy your own computers and connect them to the internet. This was the time when sites like “Yahoo!” and/or “eBay” kind of ruled cyberspace. You can probably find a lot of stories of teenagers starting websites – that attracted a lot of traffic, and then sold them off for big $$ without too much effort. The point being that there weren’t a lot of barriers/rules on the web – but you had to do it yourself.

    e.g. A couple of nice young men (both named “Sean”) met in a thing called “IRC” and started a little file sharing project called Napster in 1999 – which is a great story, but also illustrates that there is “other traffic” on the internet besides the “web” (i.e. Napster connected users with each other – they didn’t actually host files for sharing)

    Napster did some cool stuff on the technical side – but had a business model that was functionally based on copyright infringement at some level (no they were not evil masterminds – they were young men that liked music and computers).

    ANYWAY – the point being that the Napster guys had to buy computers/configure the computers/and connect them to the internet …

    Startup stories aside – the next big leap forward was a concept called “virtualization”. The short version is that hardware processing power grew much faster than “software processing” required – SO 1 physical machine would be extremely underutilized and inefficient – then “cool tech advancements” happened and we could “host” multiple “servers” on 1 physical machine.

    Extending the “journey west” analogy – virtualization allowed for “multi-tenant occupation” – at this point the roads were safe to travel/dependable/you didn’t HAVE to do everything yourself. When you got to your destination you could stay at the local bed and breakfast while you looked for a permanent place to stay (or move on).

    … The story so far: we went from slow connections between big time-sharing computers in the 1970’s to fast connections between small personal computers in the 1990’s to “you need a computer to get on the web” and the “web infrastructure” consists mostly of virtualized machines in the early 2000s …

    Google happened in there somewhere, which was a huge leap forward in real access to information on the web – another great story, just not important for my story today 😉

    they were an online bookstore once …

    Next stop 2006. Jeff Bezos and Amazon.com are (probably) one of the greatest business success stories in recorded history. They had a LONG time where they emphasized “growth” over profit – e.g. when you see comic strips from the 1990’s about folks investing in “new economy” companies that had never earned a profit, Amazon is the success story.

    (fwiw: of course there were also a LOT of companies that found out that the “new economy” still requires you to make a profit at some point – the dot.com boom and bust/”bubble” has been the subject of many books – so moving on …)

    Of course in the mid-2000’s Amazon was still primarily a “retail shopping site.” The problem facing ANY “retail” establishment is matching customer service/sales demand with employee staffing/scheduling.

    If you happen to be a “shopping website” then your way of dealing with “increased customer traffic” is to implement fault tolerance and load balancing techniques – the goal is “fast customer transactions” which equals “available computing resources” but could also mean “inefficient/expensive.”

    Real world restaurant example: I’m told that the best estimate for how busy any restaurant will be on any given day is to look at how busy they were last year on the same date (adjusting for weekends and holidays). SO if a restaurant expects to be very busy on certain days – they can schedule more staff for those days. If they don’t expect to be busy, then they will schedule fewer employees.

    Makes sense? Cool. The point is that Amazon had the same problem – they had the data on “expected customer volume” and had gone about the process of coming up with a system that would allow for automatic adjustment of computing resources based on variable workloads.

    I imagine the original goal might have been to save money by optimizing the workloads – but then someone pointed out that if they designed it correctly then they could “rent out” the service to other companies/individuals.

    Back to our “westward expansion” analogy – maybe this would be the creation of the first “hotel chains.” The real story of “big hotel chains” probably follows along with the westward expansion of the railroad – i.e. the railroads needed depots, and those depots became natural “access” points for travelers – so towns grew up around the depots and inns/”hotels” developed as part of the town – all of which is speculation on my part – but you get the idea

    The point being that in 2006 the “cloud” came into being. To be clear the “cloud” isn’t just renting out a virtual machine in someone else’s data center – the distinct part of “cloud services” is the idea of “variable costs for variable workloads.”

    Think of the electrical grid – if you use more electricity then you pay for what you use, if you use less electricity then your electrical expenses go down.

    The “cloud” is the same idea – if you need more resources because you are hosting an eSports tournament – then you can use more resources – build out/up – and then when the tournament is over scale back down.

    Or if you are researching ‘whatever’ and need to “process” a lot of data – before the cloud you might have had to invest in building your own “super computer” which would run for a couple weeks and then be looking for something to do. Now you can utilize one of the “public cloud” offerings and get your data ‘processed’ at a much lower cost (and probably faster – so again, you are getting “fast” and “inexpensive” but you are using “virtual”/on demand/cloud resources).

    If you are interested in the space exploration business – an example from NASA –
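    Purely as a sketch of the “variable resources for variable workloads” idea – no real provider API here, and the capacity number and limits are made up – the restaurant-style scheduling logic looks something like this:

```python
# Toy autoscaling decision - "pay for what you use" in code form.
# CAPACITY_PER_SERVER and the min/max bounds are made-up numbers;
# real cloud autoscalers expose similar knobs (target utilization, instance limits).
CAPACITY_PER_SERVER = 1000           # requests per minute one server can handle
MIN_SERVERS, MAX_SERVERS = 2, 50

def desired_servers(expected_requests_per_minute):
    needed = -(-expected_requests_per_minute // CAPACITY_PER_SERVER)   # ceiling division
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

print(desired_servers(1_500))    # a quiet Tuesday -> 2 servers
print(desired_servers(38_000))   # eSports tournament weekend -> 38 servers (and the bill scales with it)
```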

    Fog/Edge/IoT?

    The next problem becomes efficiently collecting data while also controlling cost. Remember with the “cloud” you pay for what you use. Saying that you have “options” for your public/private cloud infrastructure is an understatement.

    However, we are back to the old “it depends” answer when we get into concepts like “Fog computing” and the “Internet of things”

    What is the “Internet of Things”? Well, NIST has an opinion – if you read the definition and say “that is nice but a little vague” – well, what is the IoT? It depends on what you are trying to do.

    The problem is that the how of “data collection” is obviously dependent on the data being collected. So the term becomes so broad that it is essentially meaningless.

    Maybe “Fog” computing is doing fast and cheap processing of small amounts of data captured by IoT devices – as opposed to having the data go all the way out to “the cloud” – we are probably talking about “computing on a stick” type devices that plug into the LAN.
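    To make “fog” slightly less foggy – here is a sketch of the kind of filtering that little on-premises box might do, so only the “interesting” readings ever travel out to the cloud. The sensor, threshold, and upload function are all hypothetical placeholders:

```python
# Hypothetical "fog" node: decide locally, forward only anomalies to the cloud.
import random

THRESHOLD_C = 80.0   # made-up temperature alert threshold

def read_sensor():
    """Placeholder for a real sensor read."""
    return 20.0 + random.random() * 70.0

def send_to_cloud(reading):
    """Placeholder for a real upload (HTTPS, MQTT, etc.)."""
    print(f"uploading anomalous reading: {reading:.1f} C")

def fog_loop(samples=100):
    for _ in range(samples):
        reading = read_sensor()
        if reading > THRESHOLD_C:    # local decision - no round trip to the cloud
            send_to_cloud(reading)

fog_loop()
```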

    Meanwhile “Edge computing” is one for the salespeople – e.g. it is some combination of cloud/fog/IoT – at this point it reminds me of the “Corinthian Leather” Ricardo Montalban was talking about in car commercials way back when 😉

    Ok, I’m done – I feel better

    SO if you are teaching an online class of substantial length – an entire class only about IoT might be a little pointless. You can talk about various data collecting sensors and chips/whatever – but simply “collecting” data isn’t the point, you need to DO SOMETHING with the data afterwards.

    Of course I can always be wrong – my REAL point is that IoT is a buzzword that gets misused on a regular basis. If we are veering off into marketing and you want to call the class “IoT electric boogaloo” because it increases enrollment – and then talk about the entire cloud/fog/IoT framework – that would probably be worthwhile.

    it only took 2400+ words to get that out of my system 😉

  • leadership, generals, and politicians

    I developed an interest in “leadership” from an early age. The mundane reasons for this interest aren’t important. It is even possible that “leaders” are/were a pre-requirement for the whole “human civilization” thing – i.e. we are all “leaders” in one form or another if we are “involved with other people.” SO an interest in “leadership” is also natural.

    There are certainly a lot of books written every year that claim to teach the “secrets” of leadership. There is (probably) something useful in all of these “leadership” books BUT there is no “secret leadership formula” that works all of the time for every situation. However, there are “principles of leadership.”

    As a “first concept” I’ll point out Amos 3:3 – “Can two walk together, except they be agreed?” The point being that if “two people” becomes our smallest unit of “civilization” then “leadership” is happening in some form.

    This micro-civilization “leadership” probably consists of discussions between the two people on what to do, where to go, when to do whatever. It is unlikely that they will naturally agree on everything, if they can’t resolve those disagreements (one way or another) then they won’t be “together” anymore and they will go separate ways.

    leadership

    Of course we run into the problem that there are different flavors of “leadership” because there are different types of “power.”

    Another “first concept” is that “yelling” is not leadership. Yelling is just yelling – and while it might be a tool occasionally used by a leader – “constant yelling” is an obvious sign of BAD leadership to the point that it might just be “bullying behavior”/coercion and NOT “leadership” at all

    i.e. “coercive” leadership ends up being self-destructive to the organization because it drives good people away and you end up with a group of “followers” waiting to be told what to do.

    e.g. when a two year old throws a temper tantrum – no one mistakes it for “leadership.” Same concept applies if someone in a position of power throws a temper tantrum 😉

    (but there is a difference between “getting angry” and “temper tantrum” – if the situation arises then “anger” might be appropriate but never to the point where self-control is lost)

    Generals

    In English the word “general” refers to a common characteristic of a group. It doesn’t appear as a noun until the middle of the 16th century – so eventually we get the idea of the “person at the top of the chain of command” being a “General officer”

    Whatever you want to call it – in “old days long ago” – the General was on the field fighting/leading the troops.

    Alexander

    If we give “Alexander the Great” the title of “general” – then he is the classic example of “leading by personal charisma/bravery/ability.” He was the “first over the wall” type of general – that led by inspiring his armies with a “vision of conquest.”

    The problem becomes that ultimately Alexander the Great was a failure. Oh, he conquered a lot of land and left his name on cities, but again, in the long run he failed at leading his troops. After fighting for 10+ years Alexander wanted to keep going, while his tired troops wanted to go home. Alexander would die on the trip home, and his empire would be split between his generals.

    SO why did Alexander the Great (eventually) fail as a leader? Well, he was leading for HIS glory. Sure the fact that he – and his generals – were able to keep his army together for 10 years and conquer most of the “known world” rightfully earns him a place in history, BUT at an “organizational leadership” level he was a failure.

    Cincinnatus

    Arguably the best type of leader is in the position because they are the “right person” at the “right time” NOT because they have spent their lifetime pursuing personal advancement/glory.

    The concept becomes “servant leadership” – which became a “management buzzword” in the 20th century, but is found throughout history.

    Lucius Quinctius Cincinnatus comes to mind – you know, the (implied) down on his luck “citizen farmer” of Ancient Rome when it was still a “new republic” (500ish BC) – twice given supreme power (and the offer of being made “dictator for life”), he gave up that power as soon as possible.

    Also illustrated by the story of Cincinnatus is the “burden of command” IF a leader is truly trying to “do what is right for the people.”

    Of course Cincinnatus’ example was much more often ignored than honored by later Roman leaders – which eventually led to the end of the “republic” and the birth of “empire” – but that sounds like the plot for a series of movies 😉

    HOWEVER – Cincinnatus still serves as an example of great leadership. Yes, he had problems with his sons, but that is another story …

    Moses

    According to “tradition” Moses was a general in the Egyptian army. The first 40 years of Moses’ life are not described (except that he was raised as the son of the Daughter of Pharaoh) – then he kills a man in Exodus 2:12 and goes on the run to the land of Midian.

    There is a lot of potential “reading into the story” here. I suppose Cecil B DeMille’s 1956 version is plausible – the love story between Moses and Nefretiri feels like the “Hollywood movie” addition, and of course Charlton Heston as Moses is a simplification (Aaron probably did most of the “talking”).

    ANYWAY – the point is that (after 40 years of tending sheep in Midian) Moses didn’t WANT the job of leading the tribes of Israel out of Egypt – which is what made him perfect for the job.

    Feel free to do your own study of Exodus – for my point today, Moses became a “servant leader” after 40 years of tending sheep. The mission wasn’t about him, it was about, well, “the mission.”

    Just for fun – I’ll point at Numbers 12:3 and also mention that the first five books of the “Old Testament” are often referred to as the “Books of Moses” but that doesn’t mean Moses “wrote” them – i.e. it isn’t Moses calling himself “humble” but probably Joshua …

    Politicians

    From a practical standpoint – both Cincinnatus and Moses were facing “leadership situations” that involved a lot of responsibility but NOT a lot of “real privilege.” As they approached the job it was as a responsibility/burden not as a “privilege.”

    Old Cincinnatus simply resigned rather than try to rule. Moses didn’t have that option 😉 – so we get the story of the “people” blaming him for everything wrong and rebelling against his leadership multiple times (and as the leader Moses was also held to a higher standard – but that is another story).

    In the last 25 years of the 20th century the “management buzzwords” tried to differentiate between “managers” and “leaders.” Which is always a little unfair – but the idea is that “managers” are somehow not “leaders” if all they do is pass along information/follow orders.

    In practice “good management” is “leadership.” However, if an individual is blindly following orders (with no concept of “intent of the command”) then that probably isn’t “leadership.”

    Sure, saying “corporate says to do it this way” is probably the actual answer for a lot of “brand management” type of issues – which is also probably why being “middle management” can be frustrating.

    I’m fond of saying that a major function of “senior leaders” is developing “junior leaders” – so the “leadership malfunction” might be further up the chain of command if “front line managers” are floundering.

    With that said – “politicians” tend to be despised because they are in positions of power and routinely take credit for anything good that happens and then try to blame someone else for anything bad that happens.

    If an individual rises above the ranks of “smarmy politicians” and actually displays “leadership” then history might consider them a “statesmen” – but the wanna be “Alexanders” always outnumber the “Cincinnati” (btw: the plural of “Cincinnatus” is “Cincinnati” which is how that nice little city in southwestern Ohio got its name) and of course a “Moses” requires divine intervention 😉

    Management Books

    “Books on leadership/management” tend to fall into two categories: the better ones are “memoirs/biography” while the “not so good” are self-congratulatory/”aren’t I wonderful” books published for a quick buck.

    I’ve read a lot of these books over the years – and the “actionable advice” usually boils down to some form of the “golden rule” (“do unto others as you would have them do to you”) or the categorical imperative.

    Personally I like this quote from a Hopalong Cassidy movie:

    You can’t go too far wrong looking out for the other guy.

    Hopalong Cassidy

    George Washington summed up “good manners” as (something like) “always keep the comfort of other people in mind.” SO “good leadership” equals “good manners” equals “lead the way you would like to be led”

    Of course the problem becomes that you can never make EVERYONE happy – e.g. displaying “good manners” is obviously going to be easier than “leadership” of a large group of individuals. BUT trying to “lead” from a position of bitterness/spite/coercion will never work in the “long term.”

    If you are trying to provoke a revolt – then “ignoring the concerns of the masses” and trying to coerce compliance to unpopular policies will probably work …

    e.g. “most adults” can understand not getting everything they want immediately – but they want to feel “heard” and “valued.” …