Author: Les

  • Brand value, Free Speech, and Twitter

    As a thought experiment – imagine you are in charge of a popular, worldwide “messaging” service – something like Twitter, but don’t get distracted by a specific service name.

    Now assume that your goal is to provide ordinary folks with tools to communicate in the purest form of “free speech.” Of course if you want to stay around as a “going concern” then you will also need to generate revenue along the way — maybe not “obscene profits” but at least enough to “break even.”

    Step 1: Don’t recreate the wheel

    In 2022, if you want to create a “messaging system” for your business/community, there are plenty of available options.

    You could download the source code for Mastodon and set up your own private service if you wanted – but unless you have the required technical skills and a VERY good reason (like a requirement for extreme privacy) that probably isn’t a good idea.

    In 2022 you certainly wouldn’t bother to “develop your own” platform from scratch — yes, it is something that a group of motivated undergrads could do, and they would certainly learn a lot along the way, but they would be “reinventing the wheel.”

    Now if the goal is “education” then going through the “wheel invention” process might be worthwhile. HOWEVER, if the goal is NOT education and/or existing services will meet your “messaging requirements” – then reinventing the wheel is just a waste of time.

    For a “new commercial startup” the big problem isn’t going to be “technology” – the problem will be getting noticed and then “scaling up.”

    Step 2: integrity

    Ok, so now assume that our hypothetical messaging service has attracted a sizable user base. How do we go about ensuring that the folks posting messages are who they say they are – i.e. how do we ensure “user integrity”?

    In an ideal world, users/companies could sign up as who they are – and that would be sufficient. But in the real world, where there are malicious actors with a “motivation to deceive” for whatever reason, steps need to be taken to make it harder for “malicious actors to practice maliciousness.”

    The problem here is that it is expensive (time and money) to verify user information. Again, in a perfect world you could trust users to “not be malicious.” With a large network you would still have “naming conflicts” but if “good faith” is the norm, then those issues would be ACCIDENTAL not malicious.

    Once again, in 2022 there are available options and “recreating the wheel” is not required.

    This time the “prior art” comes in the form of the registered trademark and good ol’ domain name system (DNS).

    Maybe we should take a step back and examine the limitations of “user identification.” Obviously you need some form of unique addressing for ANY network to function properly.

    quick example: “cell phone numbers” – each phone has a unique address (or a card installed in the phone with a unique address) so that when you enter a certain set of digits, your call will be connected to that cell phone.

    Of course it is easy to “spoof the caller id” which simply illustrates our problem with malicious users again.

    Ok, now the problem is that those “unique user names” probably aren’t particularly elegant — e.g. forcing users to use names like user_2001,7653 wouldn’t be popular.

    If our hypothetical network is large enough then we have “real world” security/safety issues – so using personally identifiable information to login/post messages would be dangerous.

    Yes, we want user integrity. No, we don’t want to force users to use system generated names. No, we don’t want to put people in harm’s way. Yes, the goal is still “free speech with integrity” AND we still don’t want to reinvent the “authentication wheel.”

    Step 3: prior art

    The 2022 “paradigm shift” on usernames is that they are better thought of as “brand names.”

    The intentional practice of “brand management” has been a concern for the “big company” folks for a long time.

    However, this expansion of the “brand management” concept does draw attention to another problem. This problem is simply that a “one size fits all” approach to user management isn’t going to work.

    Just for fun – imagine that we decide to have three “levels” of registration:

    • level 1 is the fastest, easiest, and cheapest – provide a unique email address and agree to the TOS and you are in
    • level 2 requires additional verification of user identity, so it is a little slower than level 1, and will cost the user a fee of some kind
    • level 3 is for the traditional “big company enterprises” – they have a trademark, a registered domain name, and probably an existing brand ‘history.’ The slowest and most expensive, but then also the level with the most control over their brand name and ‘follower’ data

    The additional cost for the “big company” probably won’t be a factor to the “big company” — assuming they are getting a direct line to their ‘followers’/’customers’

    Yes, there should probably be a “non profit”/gov’ment registration as well – which could be low cost (free) as well as “slow”.

    Anyone that remembers the early days of the “web” might remember when the domain choices were ‘.com’, ‘.edu’, ‘.net’, ‘.mil’, and ‘.org’ – with .com being for “commerce”, .edu for “education”, .net originally supposed to be for “network infrastructure,” .mil for the military, and .org for “non profit organizations.”

    I think that originally .org was free of charge – but they had to prove that they were a non-profit. Obviously you needed to be an educational institution to get an .edu domain, and the “military” requirement for a .mil domain was exactly what it sounds like.

    Does it need to be pointed out that “.com” for commercial activity was why the “dot-com bubble/boom and bust” was called “dot-com”?

    Meanwhile, back at the ranch ….

    For individuals the concept was probably thought of as “personal integrity” – and hopefully that concept isn’t going away, i.e. we are just adding a thin veneer and calling it “personal branding.”

    Working in our hypothetical company’s favor is the fact that “big company brand management” has included registering domain names for the last 25+ years.

    Then add in that the modern media/intellectual property “prior art” consists of copyrights, trademarks, and patents. AND we (probably) already have a list of unacceptable words – e.g. assuming that profanity and racial slurs are not acceptable.

    SO just add a registered trademark and/or domain name check to the registration process.

    Prohibit anyone from level 1 or 2 from claiming a name that is on the “prohibited” list. Problem solved.

    It should be pointed out that this “enhanced registration” process would NOT change anyone’s ability to post content. Level 2 and 3 are not any “better” than level 1 – just “authenticated” at a higher level.

    If a “level 3 company” chooses not to use our service – their name is still protected. “Name squatting” should also be prohibited — e.g. if a level 3 company name is “tasty beverages, inc” then names like “T@sty beverages” or “aTasty Beverage” should be blocked – a simple regular expression test would probably suffice.
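
    Just to make that check concrete – below is a minimal Python sketch of the idea. The protected-name list, the character-substitution table, and the similarity threshold are all illustrative assumptions, and it pairs the “regular expression” normalization with a fuzzy match so that variants like “aTasty Beverage” also get flagged:

        import re
        from difflib import SequenceMatcher

        # Hypothetical registry of protected "level 3" brand names (illustration only).
        PROTECTED_NAMES = ["Tasty Beverages", "Big Company"]

        # Common character swaps used to disguise a name ("T@sty", "B1g", etc.).
        SUBSTITUTIONS = {"@": "a", "0": "o", "1": "l", "3": "e", "5": "s", "$": "s"}

        def normalize(name: str) -> str:
            """Lowercase, undo simple character swaps, and keep letters only,
            so 'T@sty Beverages' reduces to the comparable core 'tastybeverages'."""
            name = name.lower()
            for char, repl in SUBSTITUTIONS.items():
                name = name.replace(char, repl)
            return re.sub(r"[^a-z]", "", name)

        def looks_like_squatting(requested: str, threshold: float = 0.85) -> bool:
            """Flag a requested username that is suspiciously close to a protected name."""
            candidate = normalize(requested)
            return any(
                SequenceMatcher(None, candidate, normalize(protected)).ratio() >= threshold
                for protected in PROTECTED_NAMES
            )

        if __name__ == "__main__":
            for name in ["T@sty Beverages", "aTasty Beverage", "gardening_fan_42"]:
                print(name, "->", looks_like_squatting(name))

    In a real registration flow the threshold (and what counts as “too close”) would be a policy decision – the point is just that this kind of check is cheap to automate at signup time.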

    The “level 3” registration could then have benefits like domain registration — i.e. “tasty beverages, inc” would be free to create other “tasty beverage” names …

    If you put together a comprehensive “registered trademark catalog” then you might have a viable product – the market is probably small (trademark lawyers?), but if you are creating a database for user registration purposes – selling access to that database wouldn’t be a big deal – but now I’m just rambling …

  • SDLC, youthful arrogance

    I achieved “crazy old man” status a few years back — so when I encounter “youthful arrogance” I’m always a little slow to perceive it as “youthful arrogance.”

    Googling “youthful arrogance” gave me a lot of familiar quotes – a new favorite:

    I’m too old to know everything

    Oscar Wilde

    Being “slow to recognize” youthful arrogance in the “wild” probably comes from the realization that when someone REALLY irritates me – and I have trouble pinpointing the reason they irritate me – the reason is (often) that they have an annoying mannerism which I share.

    Self-awareness aside – most of the time “youthful arrogance” is simply “youthful ignorance.” Having an appreciation that the world as it exists today did NOT all happen in the last 5 years – you know, some form of “large scale historical perspective” – is the simple solution to “youthful ignorance.”

    “True arrogance” on the other hand is a much different animal than “ignorance.” Arrogance requires an “attitude of superiority” and isn’t “teachable.”

    e.g. imagine someone having the opinion that the entire computer industry started 5 years ago – because that is when THEY started working in the “computer industry.”

    Gently point out that “modern computing” is at least 50 years old and traces its origins back thousands of years. Maybe point out that what is “brand new” today is just a variation on “what has been before” – you know the whole Ecclesiastes thing …

    If they accept the possibility that there is “prior art” for MOST everything that is currently “new” – then they were just young and ignorant. If all they do is recite their resume and tell you how much money they are making – well, that is probably “arrogance.”

    Of course if “making money” was the purpose of human existence then MAYBE I would be willing to accept their “youthful wisdom” as something truly new. Of course I’ll point back to the “wisdom books” (maybe point out that “the sun also rises” and recommend reading Ecclesiastes again) and politely disagree – but that isn’t the point.

    SDLC

    The computer industry loves their acronyms.

    When I was being exposed to “computer programming” way back when in the 1980’s – it was possible (and typical) for an individual to create an entire software product by themselves. (The Atari 2600 and the era of the “rock star programmer” comes to mind.)

    It is always possible to tell the “modern computing” story from different points of view. Nolan Bushnell and Atari always need to be mentioned.

    e.g. part of the “Steve Jobs” legend was that he came into the Atari offices as a 19 year old and demanded that they hire him. Yes, they hired him – and depending on who is telling the story – either Atari, Inc helped him purchase some of the chips he and Woz used to create the Apple I OR Mr Jobs “stole” the chips. I think “technically” it was illegal for individuals to purchase the chips in question at the time – so both stories might “technically” be true …

    Definitions

    The modern piece of hardware that we call a “computer” requires instructions to do anything. We will call those instructions a “computer program”/software.

    Someone needs to create those instructions – we can call that person a “computer programmer.”

    Nailing down what is and isn’t a “computer” is a little hard to do – for this discussion we can say that a “computer” can be “programmed” to perform multiple operations.

    A “computer program” is a collection of instructions that does something — the individual instructions are commonly called “code.”

    SO our “programmer” writes “code” and creates a “program.” The term “developer” has become popular as a replacement for “programmer.” This is (probably) an example of how the task of creating a “program” has increased in complexity – i.e. now we have “teams of developers” working on an “application”/software project, but that isn’t important at the moment …

    Computer programs can be written in a variety of “computer languages” — all of which make it “easier” for the human programmer to write the instructions required to develop the software project. It is sufficient to point out that there are a LOT of “computer languages” out there — and we are moving on …
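
    For anyone who has never actually looked at “code,” here is a trivial example in Python (one of those many languages) – each line is an instruction, and the little collection of instructions is the “program”:

        # A tiny "computer program": a few instructions ("code") that together do something useful.
        temperatures_f = [68, 72, 75, 71]                      # some data for the program to work with
        average_f = sum(temperatures_f) / len(temperatures_f)  # an instruction: compute the average
        print(f"Average temperature: {average_f:.1f} F")       # an instruction: display the result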

    Job Titles

    The job of “computer programmer” very obviously changed as the computer industry changed.

    In 2022 one of the “top jobs” in the U.S. is “software engineer” (average salary: $126,127; growth in number of job postings, 2019–2022: 87% – thank you indeed.com).

    You will also see a lot of “job postings” for “software programmers” and “software developers.”

    What is the difference between the three jobs (if any)? Why is “software engineer” in the top 10 of all jobs?

    Well, I’m not really sure if there is a functional difference between a “programmer” and a “developer” – but if there is, the difference is found in on-the-job experience and scope of responsibilities.

    i.e. “big company inc” might have an “entry level programmer” that gets assigned to a “development team” that is run by a “senior developer.” Then the “development team” is working on a part of the larger software project that the “engineer” has designed.

    History time

    When the only “computers” were massive mainframes owned by Universities and large corporations then being a “programmer” meant being an employee of a University or large corporation.

    When the “personal computer revolution” happened in the 1970’s/80’s – those early PC enthusiasts were all writing their own software. Software tended to be shared/freely passed around back then – if anyone was making money off of software it was because they were selling media containing the software.

    THEN Steve Jobs and Steve Wozniak started Apple Computers in 1976. The Apple story has become legend – so I won’t tell the whole story again.

    fwiw: successful startups tend to have (at least) two people – i.e. you need “sales/marketing” and you need “product development,” which tend to be different skill sets (so two people with complementary skills). REALLY successful startups also tend to have an “operations” person that “makes the trains run on time” so to speak – e.g. Mike Markkula at Apple

    SO the “two Steves” needed each other to create Apple Computers. Without Woz, Jobs wouldn’t have had a product to sell. Without Jobs, Woz would have stayed at HP making calculators and never tried to start his own company.

    VisiCalc

    Google tells me that 200 “Apple I’s” were sold (if you have one it is a collectors item). The Apple I was not a complete product – additional parts needed to be purchased to have a functional system – so it was MOST important (historically speaking) in that it proved that there was a larger “personal computer” market than just “hardware hobbyists.”

    The Apple II was released in 1977 (fully assembled and ready to go out of the box) – but the PC industry still consisted of “hobbyists.”

    The next “historic moment in software development” happened in 1979 when Dan Bricklin and Bob Frankston released the first “computerized spreadsheet” – “VisiCalc.”

    VisiCalc was (arguably) the first application to go through the entire “system development life cycle” (SDLC) – e.g. from planning/analysis/design to implementation/maintenance and then “obsolescence.”

    The time of death for VisiCalc isn’t important – 1984 seems to be a popular date. Their place in history is secure.

    How do you go from “historic product” to “out of business” in 5 years? Well, VisiCalc as a product needed to grow to survive. Their success invited a lot of competition into the market – and they were unable or unwilling to change at the pace required.

    This is NOT criticism – I’ll point at the large number of startups in ANY industry that get “acquired” by a larger entity mostly because “starting” a company is a different set of skills than “running” and “growing” a company.

    Again, I’m not picking on the VisiCalc guys – this “first inventor goes broke” story is a common theme in technology – i.e. someone “invents” a new technology and someone else “implements” that technology better/cheaper/whatever to make big $$.

    btw: the spreadsheet being the first “killer app” is why PC’s found their way into the “accounting” departments of the world first. Then when those machines started breaking, companies needed folks dedicated to fixing the information technology infrastructure – and being a “PC tech” became a viable profession.

    The “I.T.” functionality stayed part of “accounting” for a few years. Eventually PCs became common in “not accounting” divisions. The role of “Chief Information Officer” and “I.T. departments” became common in the late 1980’s — the rest is history …

    Finally

    Ok, so I mentioned that SDLC can mean “system development life cycle.” This was the common usage when I first learned the term.

    In 2022 “software development life cycle” is in common usage – but that is probably because the software folks have been using the underlying concepts of the “System DLC” as part of the “software development” process since “software development” became a thing.

    e.g. The “Software DLC” uses different vocabulary — but it is still the “System DLC” — but if you feel strongly about it, I don’t feel strongly about it one way or the other – I could ALWAYS be wrong.

    I’ve seen “development fads” come and go in the last 30 years. MOST of the fads revolve around the problems you get when multiple development teams are working on the same project.

    Modern software development on a large scale requires time and planning. You have all of the normal “communication between teams” issues that ANY large project experiences. The unique problems with software tend to be found in the “debugging” process – which is a subject all its own.

    The modern interweb infrastructure allows/requires things like “continuous integration” and “continuous deployment” (CI/CD).

    If you remember “web 1.0” (static web pages) then you probably remember the “site under construction” graphic that was popular until it was pointed out that (non abandoned) websites are ALWAYS “under construction” (oh and remember the idea of a “webmaster” position? one person responsible for the entire site? well, that changed fast as well)

    ANYWAY – In 2022 CI/CD makes that “continuous construction” concept manageable

    Security

    The transformation of SDLC from “system” to “software” isn’t a big deal – but the “youthful arrogance” referenced at the start involved someone that seemed to think the idea of creating ‘secure software’ was something that happened recently.

    Obviously if you “program” the computer by feeding in punch cards – then “security” kind of happens by controlling physical access to the computer.

    When the “interweb” exploded in the 1990’s the tongue in cheek observation was that d.o.s. (the “disk operating system”) had never experienced a “remote exploit”

    The point being that d.o.s. had no networking capabilities – if you wanted to setup a “local area network” (LAN) you had to install additional software that would function as a “network re-director”

    IBM had come up with “netbios” (network basic input output system) in 1983 (for file and print sharing) — but it wasn’t “routable” between different LANs.

    Novell had a nice little business going selling a “network operating system” (NetWare) that ran on a proprietary protocol called IPX/SPX (it used the MAC address for unique addressing – it was nice).

    THEN Microsoft included basic LAN functionality in Windows for Workgroups 3.11 (using an updated form of netbios called netbeui – “netbios Extended User Interface”) – and well, the folks at Novell probably weren’t concerned at the time, since their product had the largest installed base of any “n.o.s.” — BUT Microsoft in the 1990’s is its own story …

    ANYWAY if you don’t have your computers networked together then “network security” isn’t an issue.

    btw: The original design of the “interweb” was for redundancy and resilience NOT security – and we are still dealing with those issues in 2022.

    A “software design” truism is that the sooner you find an error (“bug”) in the software the less expensive it is to fix. If you can deal with an issue in the “design” phase – then there is no “bug” to fix and the cost is $0. BUT if you discover a bug when you are shipping software – the cost to fix will probably be $HUGE (well, “non zero”).

    fwiw: The same concept applies to “features” – e.g. at some point in the “design” phase the decision has to be made to “stop adding additional features” – maybe call this “feature lock” or “version lock” whatever.

    e.g. the cost of adding additional functionality in the design phase is $0 — but if you try to add an additional feature half-way through development the cost will be $HUGE.

    Oh, and making all those ‘design decisions’ is why “software architects”/engineers get paid the big $$.

    Of course this implies that a “perfectly designed product” would never need to be patched. To get a “perfectly designed product” you would probably need “perfect designers” – and those are hard to find.

    The workaround becomes bringing in additional “experts” during the design phase.

    There is ALWAYS a trade off between “convenience” and “security” and those decisions/compromises/acceptance of risk should obviously be made at “design” time. SO “software application security engineer” has become a thing

    Another software truism is that software is never “done” it just gets “released” – bugs will be found and patches will have to be released (which might cause other bugs, etc) –

    Remember that a 100% secure system is also going to be 100% unusable. ok? ’nuff said

  • Random Thoughts about Technology in General and Linux distros in Particular

    A little history …

    In the 30+ years I’ve been a working “computer industry professional” I’ve done a lot of jobs, used a lot of software, and spent time teaching other folks how to be “computer professionals.”

    I’m also an “amateur historian” – i.e. I enjoy learning about “history” in general. I’ve had real “history teachers” point out that (in general) people are curious about “what happened before them.”

    Maybe this “historical curiosity” is one of the things that distinguishes “humans” from “less advanced” forms of life — e.g. yes, your dog loves you, and misses you when you are gone – but your dog probably isn’t overly concerned with how its ancestors lived (assuming that your dog has the ability to think in terms of “history” – but that isn’t the point).

    As part of “teaching” I tend to tell (relevant) stories about “how we got here” in terms of technology. Just like understanding human history can/should influence our understanding of “modern society” – understanding the “history of a technology” can/should influence/enhance “modern technology.”

    The Problem …

    There are multiple “problems of history” — which are not important at the moment. I’ll just point out the obvious fact that “history” is NOT a precise science.

    Unless you have actually witnessed “history” then you have to rely on second hand evidence. Even if you witnessed an event, you are limited by your ability to sense and comprehend events as they unfold.

    All of which is leading up to the fact that “this is the way I remember the story.” I’m not saying I am 100% correct and/or infallible – in fact I will certainly get something wrong if I go on long enough – any mistakes are mine and not intentional attempts to mislead 😉

    Hardware/Software

    Merriam-Webster tells me that “technology” is about “practical applications of knowledge.”

    random thought #1 – “technology” changes.

    “Cutting edge technology” becomes common and quickly taken for granted. The “Kansas City” scene from Oklahoma (1955) illustrates the point (“they’ve gone just about as far as they can go”).

    Merriam-Webster tells me that the term “high technology” was coined in 1969 referring to “advanced or sophisticated devices especially in the fields of electronics and computers.”

    If you are a “history buff” you might associate 1969 with the “race to the moon”/moon landing – so “high technology” equaled “space age.” If you are an old computer guy – 1969 might bring to mind the Unix Epoch – but in 2022 neither term is “high tech.”

    random thought #2 – “software”

    The term “hardware” in English dates back to the 15th Century. The term originally meant “things made of metal.” In 2022 the term refers to the “tangible”/physical components of a device – i.e. the parts we can actually touch and feel.

    I’ve taught the “intro to computer technology” class more times than I can remember. Early on in the class we distinguish between “computer hardware” and “computer software.”

    It turns out that the term “software” only goes back to 1958 – invented to refer to the parts of a computer system that are NOT hardware.

    The original definition could have referred to any “electronic system” – i.e. programs, procedures, and documentation.

    In 2022 – Merriam-Webster tells me that “software” is also used to refer to “audiovisual media” – which is new to me, but instantly makes sense …

    ANYWAY – “computer software” typically gets divided into two broad categories – “applications” and “operating systems” (OS or just “systems”).

    The “average non-computer professional” is probably unaware and/or indifferent to the distinction between “applications” and the OS. They can certainly tell you whether they use “Windows” or a “Mac” – so saying people are “unaware” probably isn’t as correct as saying “indifferent.”

    Software lets us do something useful with hardware

    an old textbook

    The average user has work to get done – and they don’t really care about the OS except to the point that it allows them to run applications and get something done.

    Once upon a time – when a new “computer hardware system” was designed, a new “operating system” would also be written specifically for that hardware. e.g. The Mythical Man-Month is required reading for anyone involved in management in general and “software development” in particular …

    Some “industry experts” have argued that Bill Gates’ biggest contribution to the “computer industry” was the idea that “software” could be/should be separate from “hardware.” While I don’t disagree – it would require a retelling of the “history of the personal computer” to really put the remark into context — I’m happy to re-tell the story, but it would require at least two beers – i.e. not here, not now

    In 2022 there are a handful of “popular operating systems” that also get divided into two groups – e.g. the “mobile OS” – Android, iOS, and the “desktop OS” Windows, macOS, and Linux

    The Android OS is the most installed OS if you are counting “devices.” Since Android is based on Linux – you COULD say that Linux is the most used OS, but we won’t worry about things like that.

    Apple’s iOS on the other hand is probably the most PROFITABLE OS. iOS is based on the “Berkeley Software Distribution” (BSD) – which is very much NOT Linux, but they share some code …

    Microsoft Windows still dominates the desktop. I will not be “bashing Windows” in any form – just point out that 90%+ of the “desktop” machines out there are running some version of Windows.

    The operating system that Apple includes with their personal computers in 2022 is also based on BSD. Apple declared themselves a “consumer electronics” company a long time ago — fun fact: the Beatles (yes, John, Paul, George, and Ringo – those “Beatles”) started a record company called “Apple” in 1968 – so when the two Steves (Jobs and Wozniak) wanted to call their new company “Apple Computers” they had to agree to stay out of the music business – AND we are moving on …

    On the “desktop” then Linux is the rounding error between Windows machines and Macs.

    What is holding back “Linux on the desktop?” Well, in 2022 the short answer is “applications” and more specifically “gaming.”

    You cannot gracefully run Microsoft Office, Avid, or the Adobe suite on a Linux based desktop. Yes, there are alternatives to those applications that perform wonderfully on Linux desktops – but that isn’t the point.

    e.g. that “intro to computers” class I taught used Microsoft Word and Excel for 50% of the class. If you want to edit audio/video “professionally” then you are (probably) using Avid or Adobe products (read the credits of the next “major Hollywood” movie you watch).

    Then the chicken and egg scenario pops up – i.e. “big application developer” would (probably) release a Linux friendly version if more people used Linux on the desktop – but people don’t use Linux on the desktop because they can’t run all of the application software they want, because there isn’t a Linux version of those applications.

    Yes, I am aware of WINE – but it illustrates the problem much more than acts as a solution — and we are moving on …

    Linux Distros – a short history

    Note that “Linux in the server room” has been a runaway success story – so it is POSSIBLE that “Linux on the desktop” will gain popularity, but not likely anytime soon.

    Also worth pointing out — it is possible to run a “Microsoft free” enterprise — but if the goal is lowering the “total cost of ownership” then (in 2022) Microsoft still has a measurable advantage over any “100% Linux based” solution.

    If you are a “large enterprise” then the cost of the software isn’t your biggest concern – “support” is (probably) “large enterprise, Inc’s” largest single concern.

    fwiw: IBM and Red Hat are making progress on “enterprise level” administration tools – but in 2022 …

    ANYWAY – the “birthdate” for Linux is typically given as 1991.

    Under the category of “important technical distinction” I will mention that “Linux” is better described as the “kernel” for an OS and NOT an OS in and of itself.

    Think of Linux as the “engine” of a car – i.e. the engine isn’t the “car”, you need a lot of other systems working with and around the engine for the “car” to function.

    For the purpose of this article I will describe the combination of “Linux kernel + other operating system essentials” as a “Linux Distribution” or more commonly just “distro.” Ready? ok …
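
    To see that distinction on a running system, here is a small Python sketch (assuming a Linux machine with the conventional /etc/os-release file) that reports the kernel version separately from the distribution identity:

        import platform
        from pathlib import Path

        # The kernel version -- this is "Linux" proper (the engine).
        print("Kernel:", platform.release())

        # The distribution -- the kernel plus everything packaged around it (the whole car).
        os_release = Path("/etc/os-release")
        if os_release.exists():
            info = dict(
                line.split("=", 1)
                for line in os_release.read_text().splitlines()
                if "=" in line
            )
            print("Distro:", info.get("PRETTY_NAME", "unknown").strip('"'))
        else:
            print("Distro: /etc/os-release not found (not a typical Linux distro?)")

    The first line names the kernel (the “engine”); the second names the distro built around it (the whole “car”).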

    1992 gave us Slackware. Patrick Volkerding started the “oldest surviving Linux distro,” which accounted for an 80 percent share of the “Linux” market until the mid-1990s.

    1992 – 1996 gave us SUSE Linux (today’s openSUSE), founded by Thomas Fehr, Roland Dyroff, Burchard Steinbild, and Hubert Mantel. I tend to call SUSE “German Linux” and they were basically selling the “German version of Slackware” on floppy disks until 1996.

    btw: the “modern Internet” would not exist as it is today without Linux in the server room. All of these “early Linux distros” had business models centered around “selling physical media.” Hey, download speeds were of the “dial-up” variety and you were paying “by the minute” in most of Europe – so “selling media” was a good business model …

    1993 – 1996 gave us the start of Debian, founded by Ian Murdock. The goal was a more “user friendly” Linux. The first “stable version” arrived in 1996 …

    1995 gave us Red Hat Linux — this distro was actually my “introduction to Linux.” I bought a book that had a copy of Red Hat Linux 5.something (I think) and did my first Linux install on an “old” PC PROBABLY around 2001.

    During the dotcom “boom and bust” a LOT of Linux companies went public. Back then it was “cool” to have a big runup in stock valuation on the first day of trading – so when Red Hat “went public” in 1999 they had the eighth-biggest first-day gain in the history of Wall Street.

    The run-up was a little manufactured (i.e. they didn’t release a lot of stock for purchase on the open market). My guess is that in 2022 the folks arranging the “IPO” would set a higher initial price or release more stock if they thought the offering was going to be extremely popular.

    Full disclosure – I never owned any Red Hat stock, but I was an “interested observer” simply because I was using their distro.

    Red Hat’s “corporate leadership” decided that the “selling physical media” business plan wasn’t a good long term strategy – especially as “high speed Internet” access spread across the U.S.

    e.g. that “multi hour dial up download” is now an “under 10 minute iso download” – so I’d say the “corporate leadership” at Red Hat, Inc made the right decision.

    Around 2003 the Red Hat distro kind of “split” into “Red Hat Enterprise Linux” (RHEL – sold by subscription to an “enterprise software” market) and the “Fedora Project” (meant to be a testing ground for future versions of RHEL as well as the “latest and greatest” Linux distro).

    e.g. the Fedora project has a release target of every six months – current version 35. RHEL has a longer planned release AND support cycle – which is what “enterprise users” like – current version 9.

    btw – yes RHEL is still “open source” – what you get for your subscription is “regular updates from an approved/secure channel and support.” AlmaLinux and CentOS are both “clones” of RHEL – with CentOS being “sponsored” by Red Hat.

    IBM “acquired” Red Hat in 2019 – but nothing really changed on the “management” side of things. IBM has been active in the open source community for a long time – so my guess is that someone pointed out that a “healthy, independent Red Hat” is good for IBM’s bottom line in the present and future.

    ANYWAY – obviously Red Hat is a “subsidiary” of IBM – but I’m always surprised when “long time computer professionals” seem to be unaware of the connections between RHEL, Fedora Project, CentOS, and IBM (part of what motivated this post).

    Red Hat has positioned itself as “enterprise Linux” – but the battle for “consumer Linux” still has a lot of active competition. The Fedora project is very popular – but my “non enterprise distros of choice” are both based on Debian:

    Ubuntu (first release 2004) – “South African Internet mogul Mark Shuttleworth” gets credit for starting the distro. The idea was that Debian could be more “user friendly.” Occasionally I teach an “introduction to Linux” class and the differences between “Debian” and “Ubuntu” are noticeable – but very much in the “ease of use” category (i.e. “Ubuntu” is “easier” for new users to learn)

    I would have said that “Ubuntu” meant “community” (which I probably read somewhere) but the word is of ancient Zulu and Xhosa origin and more correctly gets translated “humanity to others.” Ubuntu has a planned release target of every six months — as well as a longer “long term support” (LTS) version.

    Linux Mint (first release 2008) – Clément Lefebvre gets credit for this one. Technically Linux Mint describes itself as “Ubuntu based” – so of course Debian is “underneath the hood.” I first encountered Linux Mint from a reviewer that described it as the best Linux distro for people trying to not use Microsoft Windows.

    The differences between Mint and Ubuntu are cosmetic and also philosophical – i.e. Mint will install some “non open source” (but still free) software to improve “ease of use.”

    The beauty of “Linux” is that it can be “enterprise level big” software or it can be “boot from a flash drive” small. It can utilize modern hardware and GPUs or it can run on 20 year old machines. If you are looking for specific functionality, there might already be a distro doing that – or if you can’t find one, you can make your own.

  • Jaws, sequels in general, and Steven Spielberg


    Jaws – 1975

    There have been a couple documentaries about the 1975 blockbuster “Jaws” — which probably illustrates the long term impact of the original movie.

    Any “major” movie made in the era of “DVD extras” is going to have an obligatory “making of” documentary – so the fact that “Jaws” keeps getting full, stand-alone documentaries says something.

    “Jaws: The Inside Story” aired on A&E back in 2009 (and is available for free on Kanopy.com). It was surprisingly entertaining – both as “movie making” documentary and as “cultural history.”

    This came to mind because the “Jaws movies” have been available on Tubi.com for the last couple months.

    full disclosure: I was a little too young to see “Jaws” in the theater — the “edited for tv” version of “Jaws” was my first exposure to the movie, when the movie got a theatrical re-release and ABC aired it on network tv in 1979.

    I probably saw the “un-edited” version of “Jaws” on HBO at some point – and I have a DVD of the original “Jaws.” All of which means I’ve seen “Jaws – 1975” a LOT. Nostalgia aside, it still holds up as an entertaining movie.

    Yes, the mechanical shark is cringeworthy in 2022 – but the fact that the shark DIDN’T work as well as Spielberg et al wanted probably contributes to the continued “watch-ability” of the movie. i.e. Mr Spielberg had to use “storytelling” techniques to “imply” the shark – which ends up being much scarier than actually showing the shark.

    i.e. what made the original “Jaws” a great movie had very little to do with the mechanical shark/”special effects.” The movie holds up as a case study on “visual storytelling.” Is it Steven Spielberg’s “best movie”? No. But it does showcase his style/technique.

    At one point “Jaws” was the highest grossing movie in history. It gets credit for creating the “summer blockbuster” concept – i.e. I think it was supposed to be released as a “winter movie” – but got pushed to a summer release because of production problems.

    Source material

    The problem with the “Jaws” franchise was that it was never intended to be a multiple-movie franchise. The movie was based on Peter Benchley’s (hugely successful) 1974 novel (btw: Peter Benchley plays the “reporter on the beach” in “Jaws – 1975”).

    I was too young to see “Jaws” in the theater, and probably couldn’t even read yet when the novel was spending 44 weeks on the bestseller lists.

    “Movie novelizations” tended to be a given back in the 1970’s/80’s – but when the movie is “based on a novel” USUALLY the book is “better” than the movie. “Jaws” is one of the handful of “books made into movies” where the movie is better than the book (obviously just my opinion).

    The basic plot is obviously the same – the two major differences are that (in the book) Hooper dies and the shark doesn’t explode.

    Part of the legend of the movie is that “experts” told Mr. Spielberg that oxygen tanks don’t explode like that and that the audience wouldn’t believe the ending. Mr Spielberg replied (something like) “Give me the audience for 2 hours and they will stand up and cheer when the shark explodes” — and audiences did cheer at the exploding shark …

    (btw: one of those “reality shows” tried to replicate the “exploding oxygen tank” and no, oxygen tanks do NOT explode like it does at the end of Jaws – so the experts were right, but so was Mr Spielberg …)

    Sequels

    It is estimated that “Jaws – 1975” sold 128 million tickets. Adjust for inflation and it is in the $billion movie club.

    SO of course there would be sequels.

    Steven Spielberg very wisely stayed far away from all of the sequels. Again, the existential issue with MOST “sequels” is that they tend to just be attempts to get more money out of the popularity of the original – rather than telling their own story.

    Yes, there are exceptions – but none of the Jaws sequels comes anywhere close to the quality of the original.

    “Jaws 2” was released in summer 1978. Roy Scheider probably got a nice paycheck to reprise his starring role as Chief Martin Brody – Richard Dreyfuss stayed away (his character is supposed to be on a trip to Antarctica or something). Most of the supporting cast came back – so the movie tries very hard to “feel” like the original.

    Again – I didn’t see “Jaws 2” in the theater. I remembered not liking the movie when I did see it on HBO – but I (probably) hadn’t seen it for 30 years when I re-watched it on Tubi the other day.

    Well, the mechanical shark worked better in “Jaws 2” – but it doesn’t help the movie. Yes, the directing is questionable, the “teenagers” mostly unlikeable, and the plot contrived – but other than that …

    How could “Jaws 2” have been better? Well, fewer screeching teenagers (or better directed teenagers). It felt like they had a contest to be in the movie – and that was how they selected most of the “teenagers.”

    Then the plot makes the cardinal sin of trying to explain “why” another huge shark is attacking the same little beach community. Overly. Contrived.

    If you want, you can find subtext in “Jaws – 1975.” i.e. the shark can symbolize “nature” or “fate” or maybe even “divine retribution” take your pick. Maybe it isn’t there – but that becomes the genius in the storytelling – i.e. don’t explain too much, let the audience interpret as they like

    BUT if you have another huge shark, seemingly targeting the same community – well, then the plot quickly becomes overly contrived.

    The shark death scene in “Jaws 2” just comes across as laughably stupid – but by that time I was just happy that the movie was over.

    SO “Jaws 2” tried very hard – and it did exactly what a “back for more cash” sequel is supposed to do – i.e. it made money.

    “Jaws 3” was released in summer 1983 and tried to capitalize on a brief resurgence of the “3-D” fad. This time the movie was a solid “B.” The only connection to the first two movies is the grown up Brody brothers – and the mechanical shark of course.

    The plot for “Jaws 3” might feel familiar to audiences in 2022. Not being a “horror” movie aficionado, I’m not sure how much “prior” art was involved with the plot — i.e. the basic “theme park” disaster plot had probably become a staple for “horror” movies by 1983 (“Westworld” released in 1973 comes to mind).

    Finally the third sequel came out in 1987 (“Jaws: The Revenge”) – I have not seen the movie. Wikipedia tells me that this movie ignores “Jaws 3” and becomes a direct sequel to “Jaws 2” (tagline “This time it is personal”)

    The whole “big white shark is back for revenge against the Brody clan” plot is a deal breaker for me – e.g. when Michael Caine was asked if he had watched “Jaws 4” (which received terrible reviews) – his response was ‘No. But I’ve seen the house it bought for my mum. It’s fantastic!’

    Thankfully, there isn’t likely to be another direct “Jaws” sequel (God willing).

    Humans have probably told stories about “sea monsters” for as long as there have been humans living next to large bodies of water. From that perspective “Jaws” was not an “original story” (of course those are hard to find) but an updated version of very old stories – and of course “shark”/sea monster movies continue to be popular in 2022.

    Mr Spielberg

    Steven Spielberg was mostly an “unknown” director before “Jaws.” Under ordinary circumstances – an “unknown” director would have been involved in the sequel to a “big hit movie.”

    Mr Spielberg explained he stayed away from the “Jaws sequels” because making the original movie was a “nightmare” (again, multiple documentaries have been made).

    “Jaws 2” PROBABLY would have been better if he had been involved – but his follow up was another classic — “Close Encounters of the Third Kind” (1977).

    It is slightly interesting to speculate on what would have happened to Steven Spielberg’s career if “Jaws” had “flopped” at the box office. My guess is he would have gone back to directing television and would obviously have EVENTUALLY had another shot at directing “Hollywood movies.”

    Speculative history aside – “Jaws” was nominated for “Best Picture” (but lost to “One Flew Over the Cuckoo’s Nest”) and won Oscars for Best Film Editing, Best Music (John Williams), and Best Sound.

    The “Best Director” category in 1976 reads like a “Director Hall of Fame” list – Stanley Kubrick, Robert Altman, Sidney Lumet, Federico Fellini, and then Milos Forman won for directing “One Flew Over the Cuckoo’s Nest.” SO it is understandable why Mr Spielberg had to wait until 1978 to get his first “Best Director” nomination for “Close Encounters of the Third Kind” …

    (btw: the source novel for “One Flew Over the Cuckoo’s Nest” is fantastic – I didn’t care for the movie PROBABLY because I read the book first … )

    Best vs favorite

    ANYWAY – I have a lot of Steven Spielberg movies in my “movie library” – what is probably his “best movie” (if you have to choose one – as in “artistic achievement”) is hands down “Schindler’s List” (1993) which won 7 Oscars – including “Best Director” for Mr Spielberg.

    However, if I had to choose a “favorite” then it is hard to beat “Raiders of the Lost Ark” (but there is probably nostalgia involved) …

  • George Lucas, Jedis, and the Knight errant

    Full disclosure: “Star Wars” was released in 1977 – when I was 8ish years old. This post started as a “reply” to something else – and grew – so I apologize for the lack of real structure – kind of a work in progress …

    I am still a “George Lucas” fan – no, I didn’t think episodes I, II, and III were as good as the original trilogy but I didn’t hate them either.

    George Lucas obviously didn’t have all of the “backstory” for the “Jedi” training fully formed when he was making “Star Wars” back in the late 1970’s

    in fact the “mystery” of the Jedi Knights was (probably) part of the visceral appeal of the original trilogy (Episodes IV, V, and VI – for those playing along)

    As always when you start trying to explain the “how” and “why” behind successful “science fantasy” you run into the fact that these are all just made up stories and NOT an organized religion handed down by a supreme intelligence

    if you want to start looking at “source material” for the “Jedi” – the first stop is obvious – i.e. they are “Jedi KNIGHTS” – which should obviously bring to mind the King Arthur legend et al

    in the real world a “knight in training” started as a “Page” (age 7 to 13), then became a “Squire” (age 14 to 18-21), and then would become a “Knight”

    of course the whole point of being a “Knight” was (probably) to be of service and get granted some land somewhere so they could get married and have little ones

    since Mr Lucas was making it all up – he also made his Jedi “keepers of the faith,” combining the idea of “protectors of the Republic” with “priestly celibacy” — then the whole “no attachments”/possessions thing comes straight from Buddhism

    btw: all this is not criticism of George Lucas – in fact his genius (again in Episodes IV, V, VI) was in blending them together and telling an entertaining story without beating the audience over the head with minutiae

    ANYWAY “back in the 20th century” describing something as the “Disney version” used to mean that it was “nuclear family friendly” — feel free to psychoanalyze Walt Disney if you want, i.e. he wasn’t handing down “truth from the mountain” either — yes, he had a concept of an “idealized” childhood that wasn’t real – but that was the point

    just like “Jedi Knights” were George Lucas’ idealized “Knights Templar” – the real point is that they are IDEALIZED for a target audience of “10 year olds” – and when you start trying to explain too much the whole thing falls apart

    e.g. the “Jedi training” as it has been expanded/over explained would much more likely create sociopaths than “wise warrior priests” — which for the record is my same reaction to Plato’s “Republic” – i.e. that the system described would much more likely create sociopaths that only care about themselves rather than “philosopher kings” capable of ruling with wisdom

  • cola wars, taste tests, and marketing

    Coke or Pepsi?

    I just watched a documentary on the “Cola wars” – and something obvious jumped out at me.

    First I’ll volunteer that I prefer Pepsi – but this is 100% because Coke tends to disturb my stomach MORE than Pepsi disturbs my stomach.

    full disclosure – I get the symptoms of “IBS” if I drink multiple “soft drinks” multiple days in a row. I’m sure this is a combination of a lot of factors – age, genetics, whatever.

    Of course – put in perspective the WORST thing for my stomach (as in “rumbly in the tummy”) when I was having symptoms was “pure orange juice” – but that isn’t important.

    My “symptoms” got bad enough that I was going through bottles of antacid each week, and tried a couple “over the counter” acid reflux products. Eventually I figured out changing my diet – getting more yogurt and tofu in my diet, drinking fewer “soft drinks” helped a LOT.

    The documentary was 90 minutes long – and a lot of time was spent on people expressing how much they loved one brand or the other. I’m not zealous for either brand – and I would probably choose Dr Pepper if I had to choose a “favorite” drink

    Some folks grew up drinking one beverage or the other and feel strongly about NOT drinking the “competitor” – but again, my preference for Pepsi isn’t visceral.

    Habit

    The massive amount of money spent by Coke and Pepsi marketing their products becomes an exercise in “marketing confirmation bias” for most of the population – but each new U.S. generation has to experience some form of the “brand wars” – Coke vs Pepsi, Nike vs Adidas, PC vs Mac – whatever.

    e.g. As a “technology professional” I will point out that Microsoft does a good job of “winning hearts and minds” by getting their products in the educational system.

    If you took a class in college teaching you “basic computer skills” in the last 20 years – that class was probably built around Microsoft Office. Having taught those classes for a couple years I can say that students learn “basic computer skills” and also come away with an understanding of “Microsoft Office” in particular.

    When those students need to buy “office” software in the future, what do you think they will choose?

    (… and Excel is a great product – I’m not bashing Microsoft by any means 😉 )

    Are you a “Mac” or a “PC”? Microsoft doesn’t care – both are using Office. e.g. Quick, name a spreadsheet that ISN’T Excel – there are some “free” ones but you get the point …

    The point is that human beings are creatures of habit. After a certain age – if you have “always” used product “x” then you are probably going to keep on using product “x” simply because it is what you have “always used.”

    This fact is well known – and why marketing to the “younger demographic” is so profitable/prized.

    ALL OF WHICH MEANS – that if you can convince a sizable share of the “youth market” that your drink is “cool” (or whatever the kids say in 2022) – then you will (probably) have created a lifelong customer

    Taste Tests

    Back to the “cola wars”…

    The Pepsi Challenge deserves a place in the marketing hall of fame — BUT it is a rigged game.

    The “Pepsi challenge” was set up as a “blind taste test.” The “test subject” had two unmarked cups placed in front of them – one cup containing Pepsi and the other containing Coke.

    The person being tested drinks from one cup, then from the second cup, and then chooses which one they prefer.

    Now, according to Pepsi – people preferred Pepsi to Coke by a 2:1 margin. Which means absolutely nothing.

    The problem with the “taste test” is that the person tastes one sugary drink, and then immediately tastes a second sugary drink. SO being able to discern the actual taste difference between the two is not possible.

    If you wanted an honest “taste test” then the folks being tested should have approached the test like a wine tasting. e.g. “swish” the beverage back and forth, suck in some air to get the full “flavor”, and then spit it out. Maybe have something to “cleanse the palate” between the drinks …

    (remember “flavor” is a combination of “taste” and “smell”)

    For the record – yes, I think Coke and Pepsi taste different – BUT the difference is NOT dramatic.

    The documentary folks interviewed Coke and Pepsi executives that worked at the respective companies during the “cola wars” – and most of those folks were willing to take the “Pepsi Challenge”

    A common complaint was that both drinks tasted the same – and if you drink one, then drink another they DO taste the same – i.e. you are basically tasting the first drink “twice” NOT two unique beverages.

    fwiw: most of the “experts” ended up correctly distinguishing between the two – but most of them took the time to “smell” each drink, and then slowly sip. Meanwhile the “Pepsi Challenge” in the “field” tended to be administered in a grocery store parking lot – which doesn’t exactly scream “high validity.”

    ANYWAY – you can draw a dotted line directly from the “Pepsi Challenge” (as un-scientific as it was) to “New Coke” – i.e. the “Pepsi Challenge” convinced the powers that be at Coke that they needed to change.

    So again, the “Pepsi Challenge” was great marketing but it wasn’t a fair game by any means.

    fwiw: The documentary (“Cola Wars” from the History Channel in 2019) is interesting from a branding and marketing point of view. It was on hoopladigital, and is probably available online elsewhere …

    Difference between “sales” and “marketing”

    If you are looking at a “business statement”/profit and loss statement of some kind – the “top line” is probably gonna be “total revenue” (i.e. “How much did the company make”). The majority of “revenue” is then gonna be “sales” related in some form.

    SO if you make widgets for $1 and sell them for $2 – if you sell 100 widgets then your “total revenue” (top line) will be $200, your “cost of goods sold” will be $100, and the “net revenue” (the “bottom line”) will be revenue minus cost of goods sold – i.e. $100 in this extremely simple example.

    In the above example the expense involved in “selling widgets” is baked into the $1 “cost of goods sold” – so maybe the raw materials for each widget is 50 cents, then 30 cents per widget in “labor”, and 20 cents per widget for sales and marketing.
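
    Worked out as a minimal Python sketch (the 50/30/20 cent cost split is just the illustrative assumption from the paragraph above):

        # The widget example from above, worked out line by line.
        units_sold = 100
        price_per_widget = 2.00                        # what the customer pays
        cost_per_widget = 0.50 + 0.30 + 0.20           # materials + labor + sales/marketing = $1.00

        total_revenue = units_sold * price_per_widget             # the "top line"
        cost_of_goods_sold = units_sold * cost_per_widget
        net_revenue = total_revenue - cost_of_goods_sold           # the "bottom line"

        print(f"Top line (total revenue): ${total_revenue:.2f}")        # $200.00
        print(f"Cost of goods sold:       ${cost_of_goods_sold:.2f}")   # $100.00
        print(f"Bottom line (net):        ${net_revenue:.2f}")          # $100.00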

    Then “sales” covers everything involved in actually getting a widget to the customer. While “marketing” is about finding the customer and then educating them about how wonderful your widgets are – and of course how they can buy a widget. e.g. marketing and sales go hand in hand but they are not the same thing.

    The “widget market” is all of the folks that might want to use widgets. “Market share” is then the number of folks that use a specific company’s widgets.

    Marketing famously gets discussed as “5 P’s” — Product, Place, Price, Promotion, and People.

    Obviously the widget company makes “widgets” (Product)- but should they (A) strive to make the highest quality widget possible that will last for years (i.e. “expensive to produce”) or should they (B) make a low cost, disposable widget?

    Well, the answer is “it depends” – and some of the factors involved in the “Product” decision are the other 4 P’s — which will change dramatically between scenario A and B.

    A successful company will understand the CUSTOMER and how the customer uses “widgets” before deciding to venture into the “widget market space”

    This is why you hear business folks talk about “size of markets” and “price sensitivity of markets.” If you can’t make a “better” widget or a less expensive widget – then you are courting failure …

    SO Coke and Pepsi are both “mature” companies that have established products, methods and markets – so growing their market share requires something more than just telling folks that “our product tastes good”

    In the “Cola Wars” documentary they point out the fact that the competition between Coke and Pepsi served to grow the entire “soft drink market” – so no one really “lost” the cola wars. e.g. in 2020 the “global soft drink market” was valued at $220 BILLION – but the market for “soft drinks” fragmented as it grew.

    The mini-“business 101” class above illustrates why both Coke and Pepsi have aggressively branched out into “tea” and “water” products since the “Cola wars.”

    It used to be that the first thing Coke/Pepsi would do when moving into a new market was to build a “bottling plant.” Syrup could then be shipped to the different markets and “bottled” close to where it would be consumed – which saves $$ on shipping costs.

    I suppose if you are a growing “beverage business” then selling “drink mix” online might be a profitable venture – unless you happen to have partners in “distant markets” that can bottle and distribute your product – i.e. Coke and Pepsi are #1 and #2 in the soft drink market and no one is likely to challenge either company anytime soon.

    “Soft drinks” is traditionally defined as “non alcoholic” – so the $220 billion is spread out over a lot of beverages/companies. Coke had 20% of that market and Pepsi 10% – but they are still very much the “big players” in the industry. The combined market share of Coke and Pepsi is equal to that of the next 78 companies combined (e.g. #3 is Nestle, #4 Suntory, #5 Danone, #6 Dr Pepper Snapple, #7 Red Bull).

    My takeaway …

    umm, I got nothing. This turned into a self-indulgent writing exercise. Thanks for playing along.

    In recent years PepsiCo has been driving growth by expanding into “snacks” – so a “Cola wars 2” probably isn’t likely …

    I’m not looking to go into the soft drink business – but it is obviously still a lucrative market. I had a recipe for “home made energy drink” once upon a time – maybe I need to find that again …

  • In Memoriam 16

    … then THIS poem is directly about Arthur Henry Hallam — “who died suddenly of a cerebral hemorrhage in Vienna in 1833, aged 22.” (Thank you Google and probably wikipedia)

    Published in 1850 – which is the same year Alfred Tennyson married Emily Sellwood. Arthur Hallam’s death was 17 years earlier – so the “love” being lost was his best friend.

    The English language has a LOT of words – and also a LOT of meanings/connotations for single words. SO “love” gets used a lot in different contexts – allowing for multiple interpretations.

    Any “close reading” requires a consideration of the society in which the author is writing – e.g. ancient Greek men talking about “love” is much different than Victorian England men talking about “love.”

    An internet commentary speculated that Arthur Hallam and Alfred Tennyson were such close friends that if Mr Hallam hadn’t died, Alfred Tennyson may never have married – which is simply ridiculous.

    Yes, they were very close – but any sense of “modern homoeroticism” is being inserted by modern readers. Arthur Hallam was engaged to Tennyson’s younger sister (obviously before his untimely death). Alfred Tennyson wouldn’t meet his future wife for a couple years after Hallam’s death – as mentioned earlier.

    For MOST of human history the idea that it is possible to “love” someone in a “non sexual manner” has been a given. Obviously “love” and “sex” are NOT synonyms – so if Arthur Hallam had lived Tennyson probably wouldn’t have written “In Memoriam” but he still would have married Emily Sellwood.

    Now you can argue about which form of “love” is strongest if you like – but the point (here at least) is that it is possible to love a “best friend” one way and a “romantic partner” another way.

    ANYWAY – what I really learned from reciting this poem is that I have no rhythm – or maybe my “rhythm” is from 1950/60 crooners (Crosby/Sinatra/Darin) and not Victorian England 😉

    https://www.youtube.com/watch?v=BjT6k8BGjEM

  • Tennyson – Ulysses


    I got around to recording a version of Tennyson’s “Ulysses” ….

    also learned how to add subtitles with DaVinci Resolve – which is not complicated but is time consuming. I’m sure there is a better way to create the subtitle file for YouTube upload – e.g. there is “markup” in the subtitles which I didn’t intend.
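    fwiw: the cleanup itself is simple enough to script. Here is a minimal Python sketch that assumes the exported file is a plain .srt and just strips any tag-style markup before upload – the file names are placeholders, not my actual files:

        import re

        # Minimal sketch: strip tag-style markup (e.g. <b>...</b>) from an exported
        # .srt subtitle file before uploading it. File names are just placeholders.
        TAG_PATTERN = re.compile(r"<[^>]+>")

        def clean_srt(src_path: str, dst_path: str) -> None:
            with open(src_path, encoding="utf-8") as src:
                text = src.read()
            with open(dst_path, "w", encoding="utf-8") as dst:
                dst.write(TAG_PATTERN.sub("", text))

        clean_srt("ulysses_resolve_export.srt", "ulysses_youtube.srt")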

    The picture is where Tennyson lived from 1853 until his death in 1892. Officially it is “Farringford House, in the village of Freshwater Bay, Isle of Wight” — the house is now a “luxury hotel” – which means it isn’t really open as a “tourist destination” but if you have the resources you might be able to stay there …

    Tennyson wrote “Ulysses” in his early 20’s after the death of a close friend. The “narrator” of the poem is supposed to be “old” Ulysses after his return to Ithaca – maybe 50-ish –

    ANYWAY – the “Iliad” is about the end of the Trojan war – with the story centering around Achilles (who gets named dropped near the end of Tennyson’s poem).

    The “Odyssey” is a “sequel” to the “Iliad” telling the story of Odysseus’ (original Greek)/Ulysses (Latin/English translation) journey home from Troy – which ends up taking 10 years (after the 10 year siege of Troy)

    classical “spoiler alert” — Achilles’ death and Odysseus coming up with the “Trojan horse” idea allowing the Greeks inside the walls of Troy — happen “off screen” between where the Iliad ends and the Odyssey begins.

    Odysseus returns home alone (he was King of Ithaca when he left to fight against Troy) – and promptly kills all of the “high ranking suitors” that were trying to coerce his wife (Penelope) into marriage (you know, because Odysseus must have died – since everyone else returned from Troy 10 years ago).

    some versions of Homer’s “Odyssey” have Ulysses taking care of business and then the “gods” have to intervene to establish peace – some modern scholars argue for various “alternate endings” e.g. it may have ended with the reunion of Odysseus and Penelope —

    Tennyson’s Ulysses picks up the story a couple years (“three suns” might be “three years”?) after the Odyssey. With his family taken care of and his son firmly established as the next ruler, Ulysses wants to go on one last adventure.

    Now, if you want to nit-pick – the “mariners” mentioned were obviously NOT under Ulysses’ command on the return from Troy (remember, he loses everything – ship, crew, clothes – EVERYTHING – on the way home). However, that doesn’t mean that they hadn’t all fought together during the (10 year) Trojan war …

    I’ll point out (again) that Tennyson was in his early 20’s when he wrote Ulysses. Tennyson wouldn’t meet Emily Sellwood until his mid 20’s (whom he would be married to until his death 42 years later) – so maybe faithful Penelope should get a better treatment than just a passing reference as an “aged wife” – but that is just me nit-picking 😉

  • Modern “basics” of I.T.

    Come my friends, let us reason together … (feel free to disagree, none of this is dogma)

    There are a couple of “truisms” that APPEAR to conflict –

    Truism 1:

    The more things change the more they stay the same.

    … and then …

    Truism 2:

    The only constant is change.

    Truism 1 seems to imply that “change” isn’t possible while Truism 2 seems to imply that “change” is the only possibility.

    There are multiple ways to reconcile these two statements – for TODAY I’m NOT referring to “differences in perspective.”

    Life is like a dogsled team. If you aren’t the lead dog, the scenery never changes.

    (Lewis Grizzard gets credit for ME hearing this, but he almost certainly didn’t say it first)

    Consider that we are currently travelling through space and the earth is rotating at roughly 1,000 miles per hour – but sitting in front of my computer writing this, I don’t perceive that movement. Both the dogsled and my relative lack of perceived motion are examples of “perspective” …

    Change

    HOWEVER, “different perspectives” or points of view isn’t what I want to talk about today.

    For today (just for fun) imagine that my two “change” truisms are referring to different types of change.

    Truism 1 is “big picture change” – e.g. “human nature”/immutable laws of the universe.

    Which means “yes, Virginia, there are absolutes.” Unless you can change the physical laws of the universe – it is not possible to go faster than the speed of light. Humanity has accumulated a large “knowledge base” but “humans” are NOT fundamentally different than they were 2,000 years ago. Better nutrition, better machines, more knowledge – but humanity isn’t much different.

    Truism 2 can be called “fashion“/style/”what the kids are doing these days” – “technology improvements” fall squarely into this category. There is a classic PlayStation 3 commercial that illustrates the point.

    Once upon a time:

    • mechanical pinball machines were “state of the art.”
    • The Atari 2600 was probably never “high tech” – but it was “affordable and ubiquitous” tech.
    • no one owned a “smartphone” before 1994 (the IBM Simon)
    • the “smartphone app era” didn’t start until Apple released the iPhone in 2007 (but credit for the first “App store” goes to someone else – maybe NTT DoCoMo?)

    SO fashion trends come and go – but the fundamental human needs being serviced by those fashion trends remain unchanged.

    What business are we in?

    Hopefully, it is obvious to everyone that it is important for leaders/management to understand the “purpose” of their organization.

    If someone is going to “lead” then they have to have a direction/destination. e.g. A tourist might hire a tour guide to “lead” them through interesting sites in a city. Wandering around aimlessly might be interesting for a while – but could also be dangerous – i.e. the average tourist wants some guidance/direction/leadership.

    For that “guide”/leader to do their job they need knowledge of the city AND direction. If they have one OR the other (knowledge OR direction), then they will fail at their job.

    The same idea applies to any “organization.” If there is no “why”/direction/purpose for the organization then it is dying/failing – regardless of P&L.

    Consider the U.S. railroad system. At one point railroads were a huge part of the U.S. economy – the rail system opened up the western part of the continent and ended the “frontier.”

    However, a savvy railroad executive would have understood that people didn’t love railroads – what people valued was “transportation.”

    Just for fun – get out any map and look at the location of major cities. It doesn’t have to be a U.S. map.

    The point I’m working toward is that throughout human history, large settlements/cities have centered around water. Either ports to the ocean or next to riverways. Why? Well, obviously humans need water to live but also “transportation.”

    The problem with waterways is that going with the current is much easier than going against the current.

    SO this problem was solved first by “steam powered boats” and then railroads. The early railroads followed established waterways connecting established cities. Then as railroad technology matured towns were established as “railway stations” to provide services for the railroad.

    Even as the railroads became a major portion of the economy – it was NEVER about the “railroads,” it was about “transportation.”

    fwiw: then the automobile industry happened – once again, people don’t care so much about “cars,” what they want/need is “transportation.”

    If you are thinking “what about ‘freight’ traffic” – well, this is another example of the tools matching the job. Long haul transportation of “heavy” items is still efficiently handled by railroads and barges – it is “passenger traffic” that moved on …

    We could do the same sort of exercise with newspapers – i.e. I love reading the morning paper, but the need being satisfied is “information” NOT a desire to just “read a physical newspaper”

    What does this have to do with I.T.?

    Well, it has always been more accurate to say that “information technology” is about “processing information” NOT about the “devices.”

    full disclosure: I’ve spent a lifetime in and around the “information technology” industry. FOR ME that started as working on “personal computers” then “computer networking”/LAN administration – and eventually I picked up an MBA with an “Information Management emphasis”.

    Which means I’ve witnessed the “devices” getting smaller, faster, more affordable, as well as the “networked personal computer” becoming de rigueur. However, it has never been about “the box” i.e. most organizations aren’t “technology companies” but every organization utilizes “technology” as part of their day to day existence …

    Big picture: The constant is that “good I.T. practices” are not about the technology.

    Backups

    When any I.T. professional says something like “good backups” solve/prevent a lot of problems it is essential to remember how a “good backup policy” functions.

    Back in the day folks would talk about a “grandfather/father/son” strategy – if you want to refer to it as “grandmother/mother/daughter” the idea is the same. At least three distinct backups – maybe a “once a month” complete backup that might be stored in a secure facility off-site, a “once a week” complete backup, and then daily backups that might be “differential.”
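    As a rough illustration (not a recommendation for any particular tool), here is a minimal Python sketch of how a “grandfather/father/son” schedule might decide which backup to run on a given date – the “first of the month / Sunday / weekday” rules are just one common way to slice it:

        from datetime import date

        def backup_type(day: date) -> str:
            """One possible grandfather/father/son schedule:
            monthly full (off-site), weekly full, daily differential."""
            if day.day == 1:
                return "monthly full (rotate off-site)"   # grandfather
            if day.weekday() == 6:                        # Sunday
                return "weekly full"                      # father
            return "daily differential"                   # son

        print(backup_type(date(2022, 5, 1)))   # monthly full (rotate off-site)
        print(backup_type(date(2022, 5, 8)))   # weekly full
        print(backup_type(date(2022, 5, 10)))  # daily differential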

    It is important to remember that running these backups is only part of the process. The backups also need to be checked on a regular basis.

    Checking the validity/integrity of backups is essential. The time to check your backups is NOT after you experience a failure/ransomware attack.
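    Checking can be as simple as recording a checksum when the backup is written and then comparing it later – a minimal sketch (a real “restore test” goes further, e.g. actually restoring the backup to a spare machine):

        import hashlib

        def sha256_of(path: str) -> str:
            """Hash a backup file in chunks so large archives don't blow up memory."""
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1024 * 1024), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        def verify(path: str, expected_checksum: str) -> bool:
            """Compare the checksum recorded at backup time against the file today."""
            return sha256_of(path) == expected_checksum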

    Of course how much time and effort an organization should put into their backup policy is directly related to the value of their data. e.g. How much data are you willing to lose?

    Just re-image it

    Back in the days of the IBM PC/XT, if/when a hard drive failed it might take a day to get the system back up. After installing the new hard drive, formatting the drive and re-installing all of the software was a time intensive manual task.

    Full “disk cloning” became an option around 1995. “Ghosting” a drive (i.e. “cloning”) belongs in the acronym Hall of Fame — I’m told it was supposed to stand for “general hardware-oriented system transfer.” The point being that now if a hard drive failed, you didn’t have to manually re-install everything.

    Jump forward 10 years and Local Area Networks are everywhere – computer manufacturers had been including “system restore disks” for a long time AND software to clone and manage drives was readily available. The “system cloning” features got combined with “configuration management” and “remote support” – and this is the beginning of the “modern I.T.” era.

    Now it is possible to “re-image” a system as a response to software configuration issues (or malware). Disk imaging is not a replacement for a good backup policy – but it reduced “downtime” for hardware failures.

    The more things change …

    Go back to the 1980’s/90’s and you would find a lot of “dumb terminals” connecting to a “mainframe” type system (well, by the 1980s it was probably a “minicomputer” not a full blown “mainframe”).

    A “dumb terminal” has minimal processing power – enough to accept keyboard input and provide monitor output, and connect to the local network.

    Of course those “dumb terminals” could also be “secured” so there were good reasons for keeping them around for certain installations. e.g. I remember installing a $1,000 expansion card into new late 1980’s era personal computers to make it function like a “dumb terminal” – but that might have just been the Army …

    Now in 2022 we have “Chromebooks” that are basically the modern version of “dumb terminals.” Again, the underlying need being serviced is “communication” and “information” …

    All of which boils down to this: the “basics” of information processing haven’t really changed. The ‘personal computer’ is a general purpose machine that can be configured for various industry specific purposes. Yes, the “era of the PC” has been over for 10+ years but the need for ‘personal computers’ and ‘local area networks’ will continue.

  • Industry Changing Events and “the cloud”

    Merriam-Webster tells me that etymology is “the history of a linguistic form (such as a word)” (the official definition goes on a little longer – click on the link if interested …)

    The last couple weeks I’ve run into a couple of “industry professionals” that are very skilled in a particular subset of “information technology/assurance/security/whatever” but obviously had no idea what “the cloud” consists of in 2022.

    Interrupting and then giving an impromptu lecture on the history and meaning of “the cloud” would have been impolite and ineffective – so here we are 😉.

    Back in the day …

    Way back in the 1980’s we had the “public switched telephone network” (PSTN) in the form of (monopoly) AT&T. You could “drop a dime” into a pay phone and make a local call. “Long distance” was substantially more – with the first minute even more expensive.

    The justification for higher connection charges and then “per minute” charges was simply that the call was using resources in “another section” of the PSTN. How did calls get routed?

    Back in 1980 if you talked to someone in the “telecommunications” industry they might have referred to a phone call going into “the cloud” and connecting on the other end.

    (btw: you know all those old shows where they need “x” amount of time to “trace” a call – always a good dramatic device, but from a tech point of view the “phone company” knew where each end of the call was originating – you know, simply because that was how the system worked)

    I’m guessing that by the breakup of AT&T in 1984 most of the “telecommunications cloud” had gone digital – but I was more concerned with football games in the 1980s than telecommunications – so I’m honestly not sure.

    In the “completely anecdotal” category “long distance” had been the “next best thing to being there” (a famous telephone system commercial – check youtube if interested) since at least the mid-1970s – oh, and “letter writing” (probably) ended because of low cost long distance not because of “email”

    Steps along the way …

    Important technological steps along the way to the modern “cloud” could include:

    • the first “modem” in the early 1960s – that is a “modulator”/“demodulator” if you are keeping score. A device that could take a digital signal and convert it to an analog wave for transmission over the PSTN on one end of the conversation, while another modem could reverse the process on the other end.
    • Ethernet was invented in the early 1970’s – which allowed computers on the same local network to talk to each other. You are probably using some flavor of Ethernet on your LAN
    • TCP/IP was “invented” in the 1970’s then became the language of ARPANET in the early 1980’s. One way to define the “Internet” is as a “large TCP/IP network” – ’nuff said

    that web thing

    Tim Berners-Lee gets credit for “inventing” the world wide web in 1989 while at CERN. Which made “the Internet” much easier to use – and suddenly everyone wanted a “web site.”

    Of course the “personal computer” needed to exist before we could get large scale adoption of ANY “computer network” – but that is an entirely different story 😉

    The very short version of the story is that personal computer sales greatly increased in the 1990s because folks wanted to use that new “interweb” thing.

    A popular analogy for the Internet at the time was as the “information superhighway” – with a personal computer using a web browser being the “car” part of the analogy.

    Virtualization

    Google tells me that “virtualization technology” actually goes back to the old mainframe/time-sharing systems in the 1960’s when IBM created the first “hypervisor.”

    A “hypervisor” is what allows the creation of “virtual machines.” If you think of a physical computer as an empty warehouse that can be divided into distinct sections as needed then a hypervisor is what we use to create distinct sections and assign resources to those sections.
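    To stretch the warehouse analogy a little further, here is a toy Python sketch (the class and method names are made up for illustration – this is NOT any real hypervisor’s API) of a host carving out “sections” of its CPU and memory for virtual machines:

        # Toy model of the warehouse analogy - NOT any real hypervisor API.
        class Hypervisor:
            def __init__(self, cpus: int, ram_gb: int):
                self.free_cpus = cpus      # the empty "warehouse"
                self.free_ram_gb = ram_gb
                self.vms = {}

            def create_vm(self, name: str, cpus: int, ram_gb: int) -> None:
                """Carve out a 'section' of the warehouse and assign it to a VM."""
                if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
                    raise RuntimeError("not enough free resources on this host")
                self.free_cpus -= cpus
                self.free_ram_gb -= ram_gb
                self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

        host = Hypervisor(cpus=32, ram_gb=256)
        host.create_vm("web01", cpus=4, ram_gb=16)
        host.create_vm("db01", cpus=8, ram_gb=64)
        print(host.free_cpus, host.free_ram_gb)  # 20 176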

    The ins and outs of virtualization technology are beyond the scope of this article BUT it is safe to say that “commodity computer virtualization technology” was an industry changing event.

    The VERY short explanation is that virtualization allows for more efficient use of resources which is good for the P&L/bottom line.

    (fwiw: any technology that gets accepted on a large scale in a relatively short amount of time PROBABLY involves saving $$ – but that is more of a personal observation than an industry truism.)

    Also important was the development of “remote desktop” software – which would have been called “terminal access” before computers had “desktops.”

    e.g. Wikipedia tells me that Microsoft’s “Remote Desktop Protocol” was introduced in Windows NT 4.0 – which ZDNet tells me was released in 1996 (fwiw: some of my expired certs involved Windows NT).

    “Remote access” increased the number of computers a single person could support which qualifies as another “industry changer.” As a rule of thumb if you had more than 20 computers in your early 1990s company – you PROBABLY had enough computer problems to justify hiring an onsite tech.

    With remote access tools not only could a single tech support more computers – they could support more locations. Sure in the 1990’s you probably still had to “dial in” since “always on high speed internet access” didn’t really become widely available until the 2000s – but as always YMMV.

    dot-com boom/bust/bubble

    There was a “new economy” gold rush of sorts in the 1990s. Just like gold and silver exploration fueled a measurable amount of “westward migration” into what was at the time the “western frontier” of the United States – a measurable amount of folks got caught up in “dot-com” hysteria and “the web” became part of modern society along the way.

    I remember a lot of talk about how the “new economy” was going to drive out traditional “brick and mortar” business. WELL, “the web” certainly goes beyond “industry changing” – but in the 1990s faith in an instant transformation of the “old economy” into a web dominated “new economy” reached zeitgeist proportions …

    In 2022 some major metropolitan areas trace their start to the gold/silver rushes in the last half of the 19th century (San Francisco and Denver come to mind). There are also a LOT of abandoned “ghost towns.”

    In the “big economic picture” the people running saloons/hotels/general stores in “gold rush areas” had a decent chance of outliving the “gold rush” – assuming that there was a reason for the settlement to be there other than “gold mining.”

    The “dot-com rush” equivalent was that a large number of investors were convinced that a company could stay a “going concern” even if it didn’t make a profit. However – just like the people selling supplies to gold prospectors had a good chance of surviving the gold rush – the folks selling tools to create a “web presence” did alright – i.e. in 2022 the survivors of the “dot-com bubble” are doing very well (e.g. Amazon, Google)

    Web Hosting

    In the “early days of the web” establishing a “web presence” took (relatively) arcane skills. The joke was that if you could spell HTML then you could get a job as a “web designer” – ok, maybe it isn’t a “funny” joke – but you get the idea.

    An in depth discussion of web development history isn’t required – pointing out that web 1.0 was the time of “static web pages” is enough.

    If you had a decent internet service provider they might have given you space on their servers for a “personal web page.” If you were a “local” business you might have been told by the “experts” to not worry about a web site – since the “web” would only be useful for companies with a widely dispersed customer base.

    That wasn’t bad advice at the time – but the technology needed to mature. The “smart phone” (Apple 2007) motivated the “mobile first” development strategy – if you can access the web through your phone, then it increases the value of “localized up to date web information.”

    “Web hosting” was another of those things that was going to be “free forever” (e.g. one of the tales of “dot-com bubble” woes was “GeoCities”). Which probably slowed down “web service provider” growth – but that is very much me guessing.

    ANYWAY – in web 1.0 (when the average user was connecting by dial up) the stress put on web servers was minimal – so simply paying to rent space on “someone else’s computer” was a viable option.

    The next step up from “web hosting” might have been to rent a “virtual server” or “co-locate” your own server – both of which required more (relatively) arcane skills.

    THE CLOUD

    Some milestones worth pointing out:

    • 1998 – VMWare “Workstation” released (virtualization on the desktop)
    • “Google search” was another “industry changing” event that happened in 1998 – ’nuff said
    • 2001 VMWare ESX (server virtualization)
    • 2005 Intel released the first cpus with “Intel Virtualization Technology” (VT-x)
    • 2005 Facebook – noteworthy, but not “industry changing”
    • 2006 Amazon Web Services (AWS)

    Officially Amazon described AWS as providing “IT infrastructure services to businesses in the form of web services” – i.e. “the cloud”

    NIST tells us that –

    Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.

    NIST SP 800-145

    If we do a close reading of the NIST definition – the “on-demand” and “configurable” portions are what differentiates “the cloud” from “using other folks computers/data center.”

    I like the “computing as a utility” concept. What does that mean? Glad you asked – e.g. Look on a Monopoly board and you will see the “utility companies” listed as “Water Works” and “Electric Company.”

    i.e. “water” and “electric” are typically considered public utilities. If you buy a home you will (probably) get the water and electric changed into your name for billing purposes – and then you will pay for the amount of water and electric you use.

    BUT you don’t have to use the “city water system” or local electric grid – you could choose to “live off the grid.” If you live in a rural area you might have a well for your water usage – or you might choose to install solar panels and/or a generator for your electric needs.

    If you help your neighbors in an emergency by allowing them access to your well – or maybe connecting your generator to their house – then you are a very nice neighbor, BUT you aren’t a “utility company” – i.e. your well/generator won’t have the capacity that the full blown “municipal water system” or electric company can provide.

    Just like if you have a small datacenter and start providing “internet services” to customers – unless you are big enough to be “ubiquitous, convenient, and on-demand” then you aren’t a “cloud provider.”

    Also note the “as a service” aspect of the cloud – i.e. when you sign up you will agree to pay for what you use, but you aren’t automatically making a commitment for any minimum amount of usage.

    As opposed to “web hosting” or “renting a server” where you will probably agree to a monthly fee and a minimum term of service.

    Billing options and service capabilities are obviously vendor specific. As a rule of thumb – unless you have “variable usage” then using “the cloud” PROBABLY won’t save you money over “web hosting”/”server rental.”
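    A hedged back-of-the-envelope comparison in Python – every price here is invented purely for illustration, not any vendor’s actual rate:

        # Invented numbers for illustration only - not any vendor's actual pricing.
        FLAT_HOSTING_PER_MONTH = 100.00  # fixed monthly fee for "web hosting"/rented server
        CLOUD_RATE_PER_HOUR = 0.25       # pay-as-you-go rate for a comparable cloud instance

        def cloud_monthly_cost(busy_hours_per_day: float, days: int = 30) -> float:
            """Pay only for the hours the instance is actually running."""
            return busy_hours_per_day * days * CLOUD_RATE_PER_HOUR

        print(cloud_monthly_cost(8))   # 60.0  -> variable usage: the cloud wins
        print(cloud_monthly_cost(24))  # 180.0 -> steady 24/7 usage: flat hosting wins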

    The beauty of the cloud is that users can configure “cloud services” to automatically scale up for an increase in traffic and then automatically scale down when traffic decreases.

    e.g. imagine a web site that has very high traffic during “business hours” but then minimal traffic the other 16 hours of the day. A properly configured “cloud service” would scale up (costing more $$) during the day and then scale down (costing fewer $$) at night.
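    A minimal sketch of that scheduling idea (real cloud platforms do this with built-in autoscaling rules – the instance counts, hours, and hourly rate below are placeholders):

        def desired_instances(hour_of_day: int) -> int:
            """Scale up during "business hours" (8-17), scale down overnight.
            Placeholder numbers - real autoscaling would also watch actual load."""
            return 6 if 8 <= hour_of_day < 18 else 1

        HOURLY_COST_PER_INSTANCE = 0.25  # invented rate, for illustration
        daily_cost = sum(desired_instances(h) * HOURLY_COST_PER_INSTANCE for h in range(24))
        print(daily_cost)  # 18.5 vs 36.0 if six instances ran around the clock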

    Yes, billing options become a distinguishing element of the “cloud” – which further muddies the water.

    Worth pointing out is that if you are a “big internet company” you might get to the point where it is in your company’s best interest to build your own datacenters.

    This is just the classic “rent” vs “buy” scenario – i.e. if you are paying more in “rent” than it would cost you to “buy” then MAYBE “buying your own” becomes an option (of course “buying your own” also means “maintaining” and “upgrading” your own). This tends to work better in real estate where “equity”/property values tend to increase.
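    Another back-of-the-envelope sketch – the figures are made up purely to show the shape of the “rent vs buy” math, not anyone’s real costs:

        # Made-up figures purely to show the shape of the "rent vs buy" math.
        monthly_cloud_bill = 400_000.00           # what "renting" currently costs per month
        datacenter_build_cost = 20_000_000.00     # up-front cost to build your own
        datacenter_monthly_run_cost = 150_000.00  # power, cooling, staff, maintenance

        monthly_savings = monthly_cloud_bill - datacenter_monthly_run_cost
        breakeven_months = datacenter_build_cost / monthly_savings
        print(round(breakeven_months, 1))  # 80.0 months, i.e. roughly 6.7 years to break even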

    Any new “internet service” that strives to be “globally used” will (probably) start out using “the cloud” – and then if/when they are wildly successful, start building their own datacenters while decreasing their usage of the public cloud.

    Final Thoughts

    It Ain’t What You Don’t Know That Gets You Into Trouble. It’s What You Know for Sure That Just Ain’t So

    Artemus Ward

    As a final thought – “cloud service” usage was $332.3 BILLION in 2021 up from $270 billion in 2020 (according to Gartner).

    There isn’t anything magical about “the cloud” – but it is a little more complex than just “using other people’s computers.”

    The problem with “language” in general is that there are always regional and industry differences. e.g. “Salesforce” and “SAP” fall under the “cloud computing” umbrella – but Salesforce uses AWS to provide their “Software as a Service” product and SAP uses Microsoft Azure.

    I just spent 2,000 words trying to explain the history and meaning of “the cloud” – umm, maybe a cloud by any other name would still be vendor specific

    HOWEVER I would be VERY careful about choosing a cloud provider that isn’t run by a “big tech company” (i.e. Microsoft, Amazon, Google, IBM, Oracle). “Putting all of your eggs in one basket” is always a risky proposition (especially if you aren’t sure that the basket is good in the first place) — as always caveat emptor …