Category: technology history

  • memoirs of an adjunct instructor or What do you mean “full stack developer?”

    During the “great recession” of 2008 I kind of backed into “teaching.”

    The small company where I was the “network technician” for 9+ years wasn’t dying so much as “winding down.” I had ample notice that I was becoming “redundant” – in fact the owner PROBABLY should have “let me go” sooner than he did.

    When I was laid off in 2008 I had been actively searching/“looking for work” for 6+ months – certainly didn’t think I would be unemployed for an extended period of time.

    … and a year later I had gone from “applying at companies I want to work for” to “applying to everything I heard about.” When I was offered an “adjunct instructor” position with a “for profit” school in June 2009 – I accepted.

    That first term I taught a “keyboarding class” – which boiled down to watching students follow the programmed instruction. The class was “required” and to be honest there wasn’t any “teaching” involved.

    To be even MORE honest, I probably wasn’t qualified to teach the class – I have an MBA and had multiple CompTIA certs at the time (A+, Network+) – but “keyboarding” at an advanced level isn’t in my skill set.

    BUT I turned in the grades on time, and that “1 keyboarding class” grew into teaching CompTIA A+ and Network+ classes (and eventually Security+, and the Microsoft client and server classes at the time). fwiw: I taught the Network+ so many times during those 6 years that I have parts of the book memorized.

    Lessons learned …

    Before I started teaching I had spent 15 years “in the field” – which means I had done the job the students were learning to do. I was a “computer industry professional teaching adults changing careers how to be ‘computer industry professionals.’”

    My FIRST “a ha!” moment was that I was “learning” along with the students. The students were (hopefully) going from “entry level” to “professional” and I was going from “working professional” to “whatever comes next.”

    Knowing “how” to do something will get you a job, but knowing “why” something works is required for “mastery.”

    fwiw: I think this same idea applied to “diagramming sentences” in middle school – to use the language properly it helps to understand what each part does. The fact I don’t remember how to diagram a sentence doesn’t matter.

    The “computer networking” equivalent to “diagramming sentences” is learning the OSI model – i.e. not something you actually use in the real world, but a good way to learn the theory of “computer networking.”

    When I started teaching I was probably at level 7.5 of 10 on my “OSI model” comprehension – after teaching for 6 years I was at a level 9.5 of 10 (10 of 10 would involve having things deeply committed to memory which I do not). All of which is completely useless outside of a classroom …

    Of course most students were coming into the networking class with a “0 of 10” understanding of the OSI model BUT had probably set up their home network/Wi-Fi.

    The same as above applies to my understanding of “TCP/IP networking” and “Cyber Security” in general.

    Book Learning …

    I jumped ship from the “for profit school” where I was teaching in 2015 for a number of reasons. MOSTLY it was because of “organizational issues.” I always enjoyed teaching/working with students, but the “writing was on the wall” so to speak.

    I had moved from “adjunct instructor” to “full time director” – but it was painfully obvious I didn’t have a future with the organization. e.g. During my 6 years with the organization we had 4 “campus directors” and 5 “regional directors” — and most of those were “replaced” for reasons OTHER than “promotion.”

    What the “powers that be” were most concerned with was “enrollment numbers” – not education. I appreciate the business side – but when “educated professionals” (i.e. the faculty) are treated like “itinerant labor”, well, the “writing is on the wall.”

    In 2014 “the school” spent a lot of money setting up fiber optic connections and a “teleconferencing room” — which they assured the faculty was for OUR benefit.

    Ok, reality check – yes I understand that “instructors” were their biggest expense. I dealt with other “small colleges” in the last 9 years that were trying to get by with fewer and fewer “full time faculty” – SOME of them ran into “accreditation problems” because of an over reliance on “adjuncts” – I’m not criticizing so much as explaining what the “writing on the wall” said …

    oh, and that writing was probably also saying “get a PhD if you want a full time teaching position” – if “the school” had paid me to continue my education or even just to keep my skills up to date, I might have been interested in staying longer.

    Just in general – an organization’s “employees” are either their “biggest asset” OR their “biggest fixed cost.” From an accounting standpoint both are (probably) true (unless you are an “Ivy League” school with a huge endowment). From an “administration” point of view dealing with faculty as “asset” or “fixed cost” says a LOT about the organization – after 6 years it was VERY clear that the “for profit” school looked at instructors as “expensive necessary evils.”

    COVID-19 was the last straw for the campus where I worked. The school still exists but appears to be totally “online” now.

    Out of the frying pan …

    I left “for profit school” to go to teach at a “tech bootcamp” — which was jumping from “bad situation to worse situation.”

    The fact I was commuting an hour and a half and was becoming more and more aware of chronic pain in my leg certainly didn’t help.

    fwiw: I will tell anyone that asks that a $20 foam roller changed my life — e.g. “self myofascial release” has general fitness applications.

    I was also a certified “strength and conditioning specialist” (CSCS) in a different life – so I had a long history of trying to figure out “why I had chronic pain down the side of my leg” – when there was no indication of injury/limit on range of motion.

    Oh, and the “root cause” was tied into that “long commute” – the human body isn’t designed for long periods of “inaction.” The body adapts to the demands/stress placed on it – so if it is “immobile” for long periods of time – it becomes better at being “immobile.” For me that ended up being a constant dull pain down my left leg.

    Being more active and five minutes with the foam roller after my “workout” keeps me relatively pain free (“it isn’t the years, it’s the mileage”).

    ANYWAY – more itinerant level “teaching” gave me time to work on “new skills.”

    I started my “I.T. career” as a “pc repair technician.” The job of “personal computer technician” is going (has gone?) the way of “television repair.”

    Which isn’t good or bad – e.g. “personal computers” aren’t going away anymore than “televisions” have gone away. BUT if you paid “$X” for something you aren’t going to pay “$X” to have it repaired – this is just the old “fix” vs “replace” idea.

    The cell phone as 21st Century “dumb terminal” is becoming reality. BUT the “personal computer” is a general purpose device that can be “office work” machine, “gaming” machine, “audiovisual content creation” machine, or “whatever someone can program it to do” machine. The “primary communication device” might be a cell phone, but there are things a cell phone just doesn’t do very well …

    Meanwhile …

    I updated my “tech skill set” from “A+ Certified PC repair tech” to “networking technician” in the 1990s. Being able to make Cat 5/6 twisted pair patch cables still comes in handy when I’m working on the home network but no one has asked me to install a Novell Netware server recently (or Windows Active Directory for that matter).

    Back before the “world wide web” stand alone applications were the flavor of the week. e.g. If you bought a new PC in 1990 it probably came with an integrated “modem” but not a “network card.” That new PC in 1990 probably also came with some form of “office” software – providing word processing and spreadsheet functions.

    Those “office” apps would have been “stand alone” instances – which needed to be installed and maintained individually on each PC.

    Back in 1990 that application might have been written in C or C++. I taught myself “introductory programming” using Pascal mostly because “Turbo Pascal” came packaged with tools to create “windows” and mouse control. “Pascal” was designed as a “learning language” so it was a little less threatening than C/C++ back in the day …

    random thought: If you wanted “graphical user interface” (GUI) functionality in 1990 you had to write it yourself. One of the big deals with “Microsoft Windows” was that it provided a uniform platform for developers – i.e. developers didn’t have to worry about writing the “GUI operating system hooks” they could just reference the Windows OS.

    Apple Computers also had “developers” for their OS – but philosophically “Apple Computers” sold “hardware with an operating system included” while Microsoft sold “an operating system that would run on x86 hardware” – and since x86 hardware was kind of a commodity (read that as MUCH less expensive than “Apple Computers”), the “IBM PC” story ended up making Microsoft a lot of money – which also made for a fun documentary to show students bored of listening to me lecture …

    What users care about is applications/”getting work done” not the underlying operating system. Microsoft also understood the importance of developers creating applications for their platform.

    fwiw: “Microsoft, Inc” started out selling programming/development tools and “backed into” the OS market – which is a different story.

    A lot of “business reference applications” in the early 1990s looked like Microsoft Encarta — they had a “user interface” providing access to a “local database.” — again, one machine, one user at a time, one application.

    N-tier

    Originally the “PC” was called a “micro computer” – the fact that it was self contained/stand alone was a positive selling point. BEFORE the “PC” a larger organization might have had a “terminal” system where a “dumb terminal” allowed access to a “mainframe”/mini computer.

    SO when the “world wide web” happened and “client server” computing became mainstream, the “N tier” computing model became popular.

    N-tier might be the “presentation” layer (web server), the “business logic” layer (a programming language), and then the “data” layer (a database management system).
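
    To make that concrete – here is a minimal (made up) Python sketch where each “tier” is just a function, with Python’s built-in sqlite3 standing in for the database management system:

    ```python
    # a minimal "three tiers in one file" sketch (hypothetical example):
    # each tier is just a function -- presentation, business logic, and data --
    # with Python's built-in sqlite3 standing in for the database tier
    import sqlite3

    def fetch_quote(conn):                       # "data" tier
        row = conn.execute(
            "SELECT text FROM quotes ORDER BY RANDOM() LIMIT 1").fetchone()
        return row[0] if row else None

    def pick_quote(conn):                        # "business logic" tier
        return fetch_quote(conn) or "no quotes yet"

    def render_quote(text):                      # "presentation" tier
        return f"<blockquote>{text}</blockquote>"

    if __name__ == "__main__":
        with sqlite3.connect(":memory:") as conn:
            conn.execute("CREATE TABLE quotes (text TEXT)")
            conn.execute("INSERT INTO quotes VALUES ('Know thyself')")
            print(render_quote(pick_quote(conn)))
    ```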

    Full Stack Developer

    In the 21st Century “stand alone” applications are the exception – and “web applications” the standard.

    Note that applications that allow you to download and install files on a personal computer are better called “subscription verification” applications rather than “N Tier.”

    e.g. Adobe allows folks to download their “Creative Suite” and run the applications on local machines using computing resources from the local machine – BUT when the application starts it verifies that the user has a valid subscription.

    An “N tier” application doesn’t get installed locally – think Instagram or X/Twitter …

    For most “business applications” designing an “N tier” app using “web technologies” is a workable long term solution.

    When we divided the application functionality the “developer” job also differentiated – “front end” for the user facing aspects and “back end” for the database/logic aspects.

    The actual tools/technologies continue to develop – in “general” the “front end” will involve HTML/CSS/JavaScript and the “back end” involves a combination of “server language” and “database management system.”

    Languages

    Java (the language maintained by Oracle not “JavaScript” also known as ECMAscript) has provided “full stack development” tools for almost 30 years. The future of Java is tied into Oracle, Inc but neither is gonna be “obsolete” anytime soon.

    BUT if someone is competent with Java – then they will describe themselves as a “Java developer” – and Oracle has respected industry certifications.

    I am NOT a “Java developer” – but I don’t come to “bury Java” – if you are a computer science major looking to go work for “large corporation” then learning Java (and picking up a Java certification) is worth your time.

    Microsoft never stopped making “developer tools” – “Visual Studio” is still their flagship product BUT Visual Studio Code is my “go to” (free, multi-platform) programming editor in 2024.

    Of course Microsoft wants developers to develop “Azure applications” in 2024 – C# provides easy access to a lot of those “full stack” features.

    … and I am ALSO not a C# programmer – but there are a lot of C# jobs out there as well (I see C# and other Microsoft ‘full stack’ tech specifically mentioned with Major League Baseball ‘analytics’ jobs and the NFL – so I’m sure the “larger corporate” world has also embraced them)

    JavaScript on the server side has also become popular – Node.js — so it is possible to use JavaScript on the front and back end of an application. opportunities abound

    My first exposure to “server side” programming was PHP – I had read some “C” programming books before stumbling upon PHP, and my first thought was that it looked a lot like “C” – but then MOST computer languages look a lot like “C.”

    PHP tends to be the “P” part of the LAMP stack acronym (“Linux OS, Apache web server, MySQL database, and PHP scripting language”).

    Laravel as a framework is popular in 2024 …

    … for what it is worth MOST of the “web” is probably powered by a combination of JavaScript and PHP – but a lot of the folks using PHP are unaware they are using PHP, i.e. 40%+ of the web is “powered by WordPress.”

    I’ve installed the LAMP stack more times than I can remember – but I don’t do much with PHP except keep it updated … but again, opportunities abound

    Python on the other hand is where I spend a lot of time – I find Django a little irritating, but it is popular. I prefer Flask or Pyramid for the “back end” and then select a JavaScript front end as needed.

    e.g. since I prefer “simplicity” I used “mustache” for template presentation with my “Dad joke” and “Ancient Quote” demo applications
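
    fwiw: here is a stripped down sketch of what one of those little demo apps might look like – a hypothetical example, using Flask’s built-in (Jinja) templating as a stand-in for the mustache templates mentioned above, just so the sketch runs on its own:

    ```python
    # a stripped down "dad joke" style demo (hypothetical example -- not the
    # real app; Flask's built-in Jinja templating stands in for mustache so
    # the sketch is self contained)
    import random
    from flask import Flask, render_template_string

    app = Flask(__name__)

    JOKES = [
        "I only know 25 letters of the alphabet. I don't know y.",
        "I used to hate facial hair, but then it grew on me.",
    ]

    PAGE = "<h1>Dad joke of the moment</h1><p>{{ joke }}</p>"

    @app.route("/")
    def joke():
        # the "back end" picks the data, the template handles presentation
        return render_template_string(PAGE, joke=random.choice(JOKES))

    if __name__ == "__main__":
        app.run(debug=True)   # dev server only -- use a real WSGI server in production
    ```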

    Python was invented with “ease of learning” as a goal – and for the most part it succeeds. The fact that it can also do everything I need it to do (and more) is also nice 😉 – and yes, jobs, jobs, jobs …

    Databases

    IBM Db2, Oracle, Microsoft SQL server are in the category of “database management system royalty” – obviously they have a vast installation base and “large corporate” customers galore. The folks in charge of those systems tend to call themselves “database managers.” Those database managers probably work with a team of Java developers …

    At the other end of the spectrum the open source project MySQL was “acquired” by Sun Microsystems in 2008 which was then acquired by Oracle in 2010. Both “MySQL” and “Oracle” are popular database system back ends.

    MySQL is an open source project that has been “forked” into “MariaDB” (maintained by the MariaDB Foundation).

    PostgreSQL is a little more “enterprise database” like – also a popular open source project.

    MongoDB has become popular and is part of its own “full stack” acronym MEAN (MongoDB, Express, Angular, and Node) – MongoDB is a “NoSQL” database which means it is “philosophically” different than the other databases mentioned – making it a great choice for some applications, and not so great for other applications.

    To be honest I’m not REALLY sure if there is a big performance difference between database management back ends. Hardware and storage space are going to matter much more than the database engine itself.

    “Big Corporate Enterprise Computing” users aren’t as concerned with the price of the database system – they want rock solid dependability. If there was a Mount Rushmore of database management systems, DB2, Oracle, and Microsoft SQL Server would be on it …

    … but MariaDB is a good choice for most projects – easy to install, not terribly complicated to use. There is even a nice web front end – phpMyAdmin

    I’m not sure if the term “full stack developer” is gonna stick around though. Designing an easy to use “user interface” is not “easy” to do. Designing (and maintaining) a high performing database back end is also not trivial. There will always be room for specialists.

    “Generalist developer” sounds less “techy” than “full stack developer” – but my guess is that the “full stack” part is going to become superfluous …

  • Gifs, dial-up, and Libraries

    I went down the rabbit hole this morning on how to pronounce “gif”

    We always “recognize” more words than we actively use – and if you “learn” a word by reading, then the “correct” pronunciation might seem odd

    English/”American” is particularly bad – because we readily absorb words from other languages. e.g. is the “e” at the end of “cache” silent? (yes, yes it is – even in the original French I’m told the “e” is silent most of the time – but it is French so I have no idea 😉 )

    GIF

    SO there is a techie dispute of how to PROPERLY pronounce the acronym for “graphics interchange format” – is it “hard g” Gif or is it like the peanut butter “jif” – I never had to say “gif” out loud and “back in the era of dial up services” folks didn’t talk about “file extensions” on a regular basis — I’m guessing MY experience isn’t unusual, e.g. the dispute popped up this morning …

    fwiw: the OED suggests “Gif” while Merriam-Webster (in true American style) offers both pronunciations as acceptable (gif) — so if you feel strongly about it one way or the other, you are correct 😉

    I tended to just say the letters g-i-f or maybe “dot g-i-f” if I needed to distinguish the file extension.

    fwiw: back in the ol’ “Disk Operating System” (D.O.S.) days we were limited to file names with a maximum of 8 characters, a period, and then a 3 letter extension – e.g. “something.txt”

    D.O.S. used the file extension to distinguish between “executable files” and “data files” – if the file was “something.bat” then D.O.S. would try to execute/run the file while “something.txt” would be seen as “text data.”

    “Modern” operating systems still tend to look at the file extension as a clue for the file’s purpose. The file extension can be connected with an application – e.g. a “something.xcf” file was probably created in GIMP, if you double click on the file your OS will probably try to open the file with GIMP …
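
    Something like this little (made up) extension-to-application table is really all that “clue” amounts to:

    ```python
    # the "file extension as a clue" idea in a few lines -- ASSOCIATIONS is a
    # hypothetical mapping; real operating systems keep something similar
    from pathlib import Path

    ASSOCIATIONS = {
        ".txt": "text editor",
        ".xcf": "GIMP",
        ".gif": "image viewer",
    }

    def guess_handler(filename):
        ext = Path(filename).suffix.lower()
        return ASSOCIATIONS.get(ext, "ask the user")

    print(guess_handler("something.xcf"))    # GIMP
    print(guess_handler("SOMETHING.TXT"))    # text editor
    print(guess_handler("mystery.bin"))      # ask the user
    ```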

    yeah, we had to use cryptic file names because of those limitations back in the day, but we LIKED it that way! and stay off my lawn you crazy kids!

    If memory serves – D.O.S. itself never got past the “8.3” limit; longer file names didn’t show up until Windows 95. “Modern” operating systems allow for longer file names, but you can still be as cryptic as you like …

    Dial-Up


    Before the “internet” became widely available there were various “information services” available over dial-up connections. CompuServe immediately comes to mind (they get credit for “creating” the .gif format). There were multiple large “national services” as well as “bulletin board services” (BBS) “before the interweb”.

    “Dial-up” used a “modem” with speeds measured in “bits per second” – with 56k being a “fast” dial-up modem. Which translates to “slow” and “point to point.” Any large file downloads tended to be “hit or miss” because the connection being broken would (probably) mean you needed to start the download from the beginning.

    This “slow and risky” file download aspect of dial-up was why a lot of Linux distributions sold CD’s/physical media early on – i.e. it might have taken DAYS to download an entire distribution over dial-up … good times 😉

    “Modem” is short for “modulator”/”demodulator” – e.g. the sending computer starts with a digital signal that gets “modulated” to an analog wave that could be sent over the “plain old telephone system” (POTS) by the “modem.” The receiving computer’s modem then “demodulated” the analog signal to a digital signal.

    While I’m at it – if you go searching for ancient computer gear you might also come across “baud rates” – which measure the number of “state changes” (symbols) per second in a signal. The “baud rate” can be lower than the “bit rate” because each state change can encode more than one bit (and modems squeezed even more out of the line with data compression).

    Ummm, of course none of that is REALLY important in the 21st Century. BUT I like to point out that in the “big picture” telegraph technology (dots and dashes sent as electrical signals over a wire) was the same way “dial-up” worked – and “modern networking” is still sending 1’s and 0’s. Yes, “modern networking” is much faster and reliable, but still just 1’s and 0’s …

    The term “modem” has stuck around as a generic form of “computer communication device” – technically you PROBABLY have a “router” connecting you to the internet – but if you call it a “modem” no one will notice …

    Those “dial-up services” back in the day used to charge per minute – so access was obviously restricted/limited. In the late 1980’s part of a librarian’s job description might have included doing “research” using various dial-up services — e.g. those “card catalog” systems were functionally “analog databases” and the “electronic resources” of the time were not much more sophisticated

    “Google will bring you back, you know, a hundred thousand answers. A librarian will bring you back the right one.”

    – Neil Gaiman

    Neil Gaiman’s quote illustrates the importance of “context” and the evaluation of “sources”

    I’m seeing a lot of “AI” and “machine learning” (ML) as buzzwords in job postings – and folks predicting a “global golden age” because “insert buzzword here” will transform society on a grand scale – and well, the lesson from history is that “access to information” NEVER equals “wise application of knowledge”

    I’m not saying that “buzzword” won’t change the workplace – I’m just pointing out that humanity is great at justifying doing the “wrong” thing – i.e. greedy, self-centered, arrogant humans are not likely to create “supremely benevolent and wise AI”

    but yes, AI and ML are (probably) gonna be important TOOLS but we (as in “humanity in general”) are PROBABLY not gonna use those tools to usher in a “golden era” of universal peace and prosperity for EVERYONE

    Libraries

    The “value” of libraries has always come from “information access.” When “books” were expensive and ONLY available in “dead tree” format then “library” was synonymous with “books.”

    “Physical media” still dominated “library holdings” until the late 20th/early 21st Centuries gave us “low cost digital access to information.”

    The value of libraries is STILL “information access” with the caveat that “information curation” is PART of “access.”

    i.e. Including something in a “library” implies that the item has more value than items NOT included in the “library.”

    Obviously just because someone “wrote a book” does NOT mean that the book is “true.” Back in the days of “dead tree book domination” the fact that someone had gone to the expense of PUBLISHING a book implied that SOMEONE thought the book was valuable.

    This is the same idea as the “why” behind “ancient works” being considered “worthy of study” (at least in part) just because they are “ancient” — i.e. the logic being that if someone put the time and effort into making a copy of “work” then it MUST have been highly regarded at the time. Then if there are multiple copies of “work” that logic gets amplified.

    Which again loops back to the importance of “curation” – especially in a time when the barriers to “getting published” are close to nil.

    “Man’s mind, stretched to a new idea, never goes back to its original dimension.”

     – Oliver Wendell Holmes

    Of course special care needs to be taken for the care and feeding of “young minds.” Curation to community standards is NOT the same as “censorship.”

  • random thoughts on “Acres of Diamonds”

    Russell Conwell (February 15, 1843 – December 6, 1925) (from wikipedia) “was an American Baptist minister, orator, philanthropist, author, lawyer, and writer. He is best remembered as the founder and first president of Temple University in Philadelphia, as the Pastor of The Baptist Temple, and for his inspirational lecture, ‘Acres of Diamonds’.”

    A link to the full text of Mr. Conwell’s speech is available on the Temple University page

    The story given as inspiration for the lecture (and as the introduction to the longer lecture) is available here

    100 years ago …

    Mr. Conwell would give the speech 6,000+ times – which is impressive. The “legend” is that when he arrived in a new town (where he was going to perform the speech) he would find out who the “prominent”/successful folks in town were and work them into the performance.

    ONE of the “points” of the speech being that “opportunity” can be found everywhere. The entrepreneur doesn’t (automatically) need to travel far away looking for opportunity, it (might) be in the backyard.

    A hundred years ago, Mr. Conwell had to argue that “making money” was a worthwhile endeavor. The “common wisdom” of the day being that “extreme wealth” MUST have been achieved by some form of skullduggery.

    Historically, the human “founders” of these United States had come from a culture where land equaled “wealth.” In the “old world” land was in short supply AND passed down by inheritance. Someone born a “peasant” was going to stay a “peasant” because those “to the manor born” controlled the vast majority of land – and therefore “wealth.”

    A rising “merchant class” was in the process of disrupting things when the American Colonies and the U.K. had a disagreement in the late 18th Century — BUT most folks still lived/worked on farms until the early 20th Century.

    It is unfair to call ALL of those born into privilege “parasites.” However, 18th Century England is a good case study of “those in power” using the system to keep themselves in power AND wealthy.

    The grand point being that “money”/wealth is not evil. Money is a tool which can be used for good purposes OR for bad/“greed.” 1 Timothy 6:10 tells us that “LOVE of money is the root of all evil.”

    “For the love of money is the root of all evil: which while some coveted after, they have erred from the faith, and pierced themselves through with many sorrows.”

    1 Timothy 6:10

    Note that “greed” is never “good.” Greed implies “getting more” at the expense of others – which is obviously impossible to reconcile with “loving your neighbor as yourself.”

    New World

    It is fun to point out that “technology” has always been a disruptive force. Technology is always about “application” of knowledge. Advances in “farming technology” helped farmers be more productive – while also freeing up “labor” for the factories of the industrial revolution.

    If we could do a survey asking “average farm workers” (back when Mr. Conwell was giving his speech) how they could get “wealthy” they PROBABLY would have said some variation of “striking gold.”

    (… and historians can point at the “gold rushes” in the middle of the 19th Century as helping populate the western United States. Of course more “wealth” was generated from folks helping the “prospectors” than from folks “striking it rich” pulling gold/silver out of the ground …)

    Of course if one of those “average farm workers” that sold everything to go gold prospecting had created a “better plow” they would have been much better off.

    e.g. A Vermont born blacksmith solved a common problem for farmers – and both he AND the farmers prospered. John Deere, Inc is still helping farmers be productive in the 21st Century.

    Transportation

    If you look at the “super wealthy” from the late 19th and early 20th Century, the common theme might be “transportation.”

    e.g. Cornelius Vanderbilt built an empire from ferries – from the time when “waterways” were the primary means of transportation in the U.S.

    John Rockefeller built an oil empire – from the time when oil was used for light and heat. When Henry Ford made the horseless carriage affordable, “oil” being refined into gasoline made the Rockefeller clan even more wealthy.

    Sandwiched between Mr. Rockefeller and Mr. Ford as “wealthiest American” was Andrew Carnegie – who had worked his way up from “child labor” to “steel magnate” – from a time when “railroads” and the telegraph were the latest and greatest “technology.”

    No, I am NOT holding up ANY of these men as “moral exemplars” – the grand point is that they helped “solve problems” for a large number of folks, and solving those problems was the root of their wealth …

    The musical “Oklahoma!” (1943) has a song where “rural residents” marvel at the advancements of Kansas City (“They’ve gone about as fur as they c’n go!”). By the mid 20th Century things like automobiles and the telephone system were commonplace enough to be a plot point in a musical.

    (“Oklahoma!” is set around the time the territory became a State. Oklahoma was the 46th State admitted to the Union in 1907)

    Again, the grand point being that some folks got wealthy from disrupting the status quo, and MANY more got wealthy by making incremental advances to cars and phones.

    e.g. Thomas Edison’s “diamonds in the backyard” looked like improvements to the telegraph system of his time long before “Edison Electric.”

    random thought – I’m sure there is an interesting story with the “cigarette lighter” technology. The actual “cigarette lighter” part isn’t a “standard feature” anymore, but you can find a lot of “accessories” that use the “automobile auxiliary power outlet.”

    Modern Times

    The sad fact is that in a LOT of nations the “economic game” IS stacked against the “average individual.” Which is why we see so many folks willing to risk everything to immigrate to “opportunity.”

    Obviously a complex subject – and someone living in a “warzone” is more concerned with survival than anything.

    For those NOT living in a warzone or an extremely dysfunctional government the big question becomes which “career path” to pursue.

    Charlie Chaplin made a movie called “Modern Times” back in 1936. Mr. Chaplin was a world famous “movie star” at the time – the movie sometimes gets held up as an example of “radical political beliefs.” I’m not sure the movie has any agenda except “entertainment” – e.g. Mr. Chaplin’s “tramp” character is pursuing “happiness” NOT a political agenda.

    That same idea applies to “modern workers” in the 21st century. “Happiness” probably won’t come from a “job.” Generic advice like “follow your bliss” is nice, but not particularly useful.

    There is nothing wrong with “working for a paycheck.” The best case scenario is to “do what you love” for a living. The WORST case scenario is doing a job you hate to survive …

    Education, intelligence, and “degrees”

    “Education in the United States” has changed a great deal in the last 100 years. The first “colleges” in North America existed to train “clergy” (e.g. Harvard was founded in 1636) and then “academics.”

    The “Agricultural and Mechanical Colleges” came along later with the “land grant” colleges in the late 19th Century. The GI Bill sent 2.2 million WWII veterans to college AND 5.6 million more to other training programs.

    Sputnik I (1957) had the unintended consequence of changing national educational priorities in the U.S. – as well as kickstarting NASA (founded July 29, 1958). Both events helped the U.S. get to the moon 11 years later.

    World war and cold war politics aside, the 20th Century workplace was probably the historical “anomaly.” At one point in the 20th Century a “young worker” could drop out of high school, go to work at the local “factory,” and make a “good living.”

    Remember that for MOST of human history, folks lived and worked on farms. Cities provided a marketplace for those agricultural products as well as “other” commerce. Before mass media and rapid transportation MOST people would live and die within 20 miles of where they were born.

    Again, maybe interesting BUT I’ll point out that “compulsory” public education PROBABLY doesn’t have a great record of achievement in the U.S. (or anywhere). i.e. if the ONLY reason “student” is in “school” is because they “have to” – then that student isn’t going to learn much.

    This has nothing to do with “intelligence” and everything to do with “individual interests” and ability. “Education” is best understood as a lifelong process – not a short term goal.

    “I have never let my schooling interfere with my education.”

    — Mark Twain

    Part of what makes us “human” is (probably) the desire for “mastery” of skills. In the “best case” this is how “education” should look – a journey from “untrained” to “skilled.”

    If an individual’s investment (in time and money) results in them having a valuable “skill set” – then they are “well educated.”

    The contrast being the “academic” that has a lot of “degrees” but no actual “skills” — i.e. having a “doctorate” doesn’t automatically mean anything. “Having” a degree shows “completion” of a set of requirements not “mastery” of those subjects.

    Of course that distinction is why we have “licensing” as well as “degree” requirements for some professions. e.g. The law school graduate that can’t pass the “Bar examination” won’t be allowed to practice law, but might be allowed to teach.

    Nepo babies

    Now, imagine we did a survey of “modern high school students” in the United States asking them “how can you become wealthy?”

    It would be interesting to actually perform the study – i.e. I’m just guessing here from MY personal experience.

    We would also have to collect data on the parents’ education and careers – i.e. if a child grows up in a family of “fire fighters” then they are (probably) more likely to pursue a career as “fire fighters” simply because that is what they are familiar with.

    The term “nepo baby” gets used (derisively) for some entertainment industry professionals – but if mom and dad are both “entertainment industry professionals” then a child pursuing an acting/performance career kind of becomes “going into the family business.”

    Now, “having good genetics” (you know “being ridiculously good looking”) is always a positive – so there are certainly “nepo babies” out there.

    I’m not throwing stones at anyone, “hiring” is not an exact science in ANY industry. That “genetic component” probably applies to families of doctors, lawyers, and educators as well — i.e. if mom and dad were both “whatever”, it is possible that “junior” will have those same skills/personality preferences.

    … and it is also possible that “junior” will want to do something completely different.

    BUT if “student” has minimal exposure to “work life” outside of what they see at home and school – MY GUESS is that the majority (of my hypothetical survey of high school students) will say the “path to wealth” involves professional sports or “entertainment industry.”

    umm, both of which may be more likely than “winning the lottery” or speculating on the stock market — but not exactly “career counsellor” advice

    (… oh, and you only hear about the “big rock stars” being told by their “career counsellor” that they couldn’t make a living as a “rock star” AFTER they became “big rock stars” – if someone quits after being told they “can’t do it” or that the chance of success is small, then they PROBABLY didn’t want to do “it” very much …)

    “Keep your feet on the ground and keep reaching for the stars.”

    – Casey Kasem

    Did I have a point?

    Well, the “message” in “Acres of Diamonds” is still valid 100+ years later.

    A certain amount of “knowledge” is required to be able to recognize opportunity. e.g. The person to “build a better mousetrap” is someone that has experience catching mice.

    BUT simply inventing a “better” mousetrap is only half of the problem – the mousetrap needs to be produced, marketed, and sold.

    Two BIG things that weren’t around when Mr. Conwell was giving his speech are “venture capital” and “franchising.” Neither of which “negatively” impacts the argument he was making – and if anything make his argument even stronger …

    check out https://curious.iterudio.com for a short (free) class on “success”

    You might also find this book interesting

  • To REALLY mess things up …

    SO, I tried to change “permalinks” in WordPress and ALL the links broke.

    I’ve been using WordPress for years – but to be honest I’ve never tried to do anything “complicated” (i.e. beyond the “content management” for which WordPress is designed).

    Of course this “blog” thing isn’t making me any $$ so I don’t put a lot of effort into WordPress “customization” – i.e. it doesn’t REALLY matter what the “permalinks” look like.

    “Optimized URLs” used to be a “search engine optimization” (SEO) thing (well, it probably still is an SEO thing) – so I’m not saying that “permalink structure” isn’t important. I’m just pointing out that I haven’t had a reason to change it from the default.

    And Then…

    Like I said, WordPress is great for the occasional “blog” posting – but then I wanted to do some “web 1.0” type file linking – and, well, WordPress ain’t built for that.

    Yes, there are various plugins – and I got it to work. AND THEN —

    I should also mention that I’ve tried launching various “Facebook pages” over the years. One is Old Time Westerns.

    Now, Facebook as a platform wasn’t real sure what “pages” were for – my opinion is that they were basically TRYING to create a “walled garden” to keep users on Facebook – and then of course users see more Facebook ads.

    No, I am NOT criticizing Facebook for offering new services trying to keep people on Facebook — but “Facebook pages 1.0” weren’t particularly useful for “page creators.” In fact Facebook wanted (wants) page creators to PAY to “boost posts” — which functionally means NOTHING goes “organically viral” on Facebook.

    Again, I’m also NOT criticizing Facebook for wanting to make $$ – but no, I’m not going to PAY for the privilege of doing the work of creating a community on a platform, which can decide to kick me off whenever they like.

    Did I mention …

    … I have the required skills to do the “web publishing” thing – so for not much $ I can just set up my own servers and have much more control over anything/everything.

    SO the motivation behind the “Westerns” page was more about me getting in my “amateur historian” exercise than about building a community.

    Ok, sure, I would love to connect with people with the same interests – which is one of those things the “web” has been great at from the “early days.” Notice that I didn’t say “Facebook” is great at finding people of common interests, but the Internet/Web is.

    Facebook is great to “reconnect” with people you once knew or have met – but not so good at “connecting” new people with a common interest.

    Hey, if you are “company” selling “product” and you have a marketing budget – then Facebook can help you find new customers. If you are “hobbyist” looking for other “hobbyists” – well, not so much.

    Yes, Facebook can be a tool for that group of “hobbyists” – but unless you have a “marketing budget” don’t expect to “organically” grow your member list from being on Facebook.

    fwiw: “Facebook pages 2.0” has become “groups” or something – Wikipedia tells me Yahoo! pulled the plug on “Yahoo! Groups” in 2020. The “fun fact” is that the whole “groups” concept predates the “web” – that sort of “bulletin board” functionality goes back to the late 1970’s early 1980’s. Remember the movie WarGames (1983)? That was what he was “dialing into.”

    ANYWAY …

    I have various “example” sites out there – I’ve pointed out that WordPress does some things very well – but doesn’t do other things well.

    Yes, you could “extend” WordPress if you like – but it isn’t always the “right tool for the job.”

    SO “data driven example” https://www.iterudio.com/us —

    small “progressive web app”: https://media.iterudio.com/j/

    Another “data driven example” – but this time I was trying to create a “daily reading app” from a few of the “wisdom books”: https://clancameron.us/bible/

    A “quote app”: https://clancameron.us/quotes/

    AND then the latest – which is just JavaScript and CSS https://www.iterudio.com/westerns/

    The original plan was to just create some “pages” within WordPress – and I wanted the URL to be “page name” — which is why I was trying to change the “permalinks” within WordPress.

    My guess is that the problem has to do with the fact that the “uniform resource locator” (URL) on my server gets “rewritten” before it hits the WordPress “permalink” module – which then tries to rewrite it again. The error I was getting seems to be common – and I tried the common solutions to no avail (and most potential solutions just made the problem worse).

    To err is human; To really foul things up requires a computer.

    Anonymous

  • authentication, least privilege, and zero trust

    When we are discussing “network security” phrases like “authentication”, “least privilege”, and “zero trust” tend to come up. The three terms are related, and can be easily confused.

    I’ve been in “I.T.” for a while (since the late 1980’s) – I’ve gone from “in the field professional” to “network technician” to “the computer guy” and now to “white bearded instructor.”

    Occasionally I’ve listened to other “I.T. professionals” struggle trying to explain the above concepts – and as I mentioned, they are easy to confuse.

    Part of my job was teaching “network security” BEFORE this whole “cyber-security” thing became a buzzword. I’ve also had the luxury of “time” as well as the opportunity/obligation to explain the concepts to “non I.T. professionals” in “non technical jargon.”

    With that said, I’m sure I will get something not 100% correct. The terms are not carved in stone – and “marketing speak” can change usage. SO in generic, non-technical jargon, here we go …

    Security

    First, security as a concept is always an illusion. No, I’m not being pessimistic – as human beings we can never be 100% secure because it is simply not possible to have 100% of the “essential information.”

    SO we talk in terms of “risk” and “vulnerabilities.” From a practical point of view we have a “sliding scale” with “convenience and usability” on one end and “security” on the other. e.g. “something” that is “convenient” and “easy to use”, isn’t going to be “secure.” If we enclose the “something” in a steel cage, surround the steel cage with concrete, and bury the concrete block 100 feet in the ground, it is much more “secure” – but almost impossible to use.

    All of which means that trying to make a “something” usable and reasonably secure requires some tradeoffs.

    Computer Network Security

    Securing a “computer” used to mean “locking the doors of the computer room.” The whole idea of “remote access” obviously requires a means of accessing the computer remotely — which is “computer networking” in a nutshell.

    The “physical” part of computer networking isn’t fundamentally different from the telegraph. Dots and dashes sent over the wire from one “operator” to another have been replaced with high and low voltages representing 1’s and 0’s and “encapsulated data” arranged in frames/packets forwarded from one router to another — but it is still about sending a “message” from one point to another.

    With the old telegraph the service was easy to disrupt – just cut the wire (a 19th century “denial of service” attack). Security of the telegraph message involved trusting the telegraph operators OR sending an “encrypted message” that the legitimate recipient of the message could “un-encrypt.”

    Modern computer networking approached the “message security” problem in the same way. The “message” (i.e. “data”) must be secured so that only the legitimate recipients have access.

    There are a multitude of possible modern technological solutions – which is obviously why “network administration” and “cyber-security” have become career fields — so I’m not going into specific technologies here.

    The “generic” method starts with “authentication” of the “recipient” (i.e. “user”).

    Authentication

    Our (imaginary) 19th Century telegraph operator didn’t have a lot of available options to verify someone was who they said they were. The operator might receive a message and then have to wait for someone to come to the telegraph office and ask for the message.

    If our operator in New Orleans receives a message for “Mr Smith from Chicago” – he has to wait until someone comes in asking for a telegraph for “Mr Smith from Chicago.” Of course the operator had no way of verifying that the person asking for the message was ACTUALLY “Mr Smith from Chicago” and not “Mr Jones from Atlanta” who was stealing the message.

    In modern computer networking this problem is what we call “authentication.”

    If our imaginary telegraph included a message to the operator that “Mr Smith from Chicago” would be wearing a blue suit, is 6 feet tall, and will spit on the ground and turn around 3 times after asking for the message — then our operator has a method of verifying/”identifying” “Mr Smith from Chicago” and then “authenticating” him as the legitimate recipient.

    Least Privilege

    For the next concept we will leave the telegraph behind – and imagine we are going to a “popular music concert.”

    Imagine that we have purchased tickets to see “big name act” and the concert promoters are holding our tickets at the “will call” window.

    Our imaginary concert has multiple levels of seating – some seats close to the stage, some seats further away, some “seats” involve sitting on a grassy hill, and some “seats” are “all access Very Important Person.”

    On the day of the concert we go to the “will call” window and present our identification (e.g. drivers license, state issued ID card, credit card, etc) – the friendly attendant examines our individual identification (i.e. we get “authenticated”) and then gives us each a “concert access pass” on a lanyard (1 each) that we are supposed to hang around our necks.

    Next we go to the arena gate and present our “pass” to the friendly security guard. The guard examines the pass and allows us access BASED on the pass.

    Personally I dislike large crowds – so MY “pass” only gives me access to the grassy area far away from the stage. Someone else might love dancing in the crowd all night, so their “pass” gives them access to the area much closer to the stage (where no one will sit down all night). If “big recording executive” shows up, their “pass” might give them access to the entire facility.

    Distinguishing what we are allowed to do/where we are allowed to go is called “authorization.”

    First we got “authenticated” and then we were given a certain level of “authorized” access.

    Now, assume that I get lonely sitting up there on the hill – and try to sneak down to the floor level seats where all the cool kids are dancing. If the venue provider has some “no nonsense, shaved head” security guards controlling access to the “cool kids” area – then those guards (inside the venue) will check my pass and deny me entry.

    That concept of “only allowing ‘pass holders’ to go/do specifically where/what they are authorized to go/do” could be called “least privilege.”

    Notice that ensuring “least privilege” takes some additional planning on the part of the “venue provider.”

    First we authenticate users, then we authorize users to do something. “Least privilege” is attained when users can ONLY do what they NEED to do based on an assessment of their “required duties.”
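
    In code, the “concert” version of authenticate, then authorize, then least privilege might look something like this minimal sketch (the names, IDs, and passes are all made up for illustration):

    ```python
    # the "concert" version of authenticate -> authorize -> least privilege
    # (names, IDs, and passes are all made up for illustration)
    PASSES = {                 # issued at the "will call" window after authentication
        "alice": {"lawn"},
        "bob":   {"lawn", "floor"},
        "exec":  {"lawn", "floor", "backstage"},
    }

    def authenticate(name, photo_id):
        # stand-in check -- real authentication would verify actual credentials
        return photo_id == name.upper()

    def authorized(name, area):
        # least privilege: only the areas explicitly on the pass, nothing implied
        return area in PASSES.get(name, set())

    if authenticate("alice", "ALICE") and authorized("alice", "floor"):
        print("welcome to the floor")
    else:
        print("back to the lawn")   # alice's pass only covers the lawn
    ```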

    Zero Trust

    We come back around to the idea that “security” is a process and not an “end product” with the “new” idea of “zero trust.” Well, “new” as in “increased in popularity.”

    Experienced “network security professionals” will often talk about “assuming that the network has been compromised.” This “assumption of breach” is really what “zero trust” is concerned with.

    It might sound pessimistic to “assume a network breach” – but it implies that we need to be looking for “intruders” INSIDE the area that we have secured.

    Imagine a “secret agent movie” where the “secret agent” infiltrates the “super villain’s” lair by breaching the perimeter defense, then enters the main house through the roof. Since the “super villain” is having a big party for some reason, our “secret agent” puts on a tuxedo and pretends to be a party guest.

    Of course the super villain’s “henchmen” aren’t looking for intruders INSIDE the mansion that look like party guests – so the “secret agent” is free to collect/gather intelligence about the super villain’s master plan and escape without notice.

    OR to extend the “concert” analogy – the security guards aren’t checking “passes” of individuals within the “VIP area.” If someone steals/impersonates a “VIP pass” then they are free to move around the “VIP area.”

    The simplest method for an “attacker” would be to acquire a “lower access” pass, and then try to get a “higher level” pass.

    Again – we start off with good authentication, have established least privilege, and the next step is checking users privileges each time they try to do ANYTHING.

    In the “concert” analogy, the “user pass” grants access to a specific area. BUT we are only checking “user credentials” when they try to move from one area to another. To achieve “zero trust” we need to do all of the above AND we assume that there has been a security breach – so we are checking “passes” on a continual basis.

    This is where the distinction between “authentication and least privilege” and “zero trust” can be hard to perceive.

    e.g. In our concert analogy – imagine that there is a “private bar” in the VIP area. If we ASSUME that a user should have access to the “private bar” because they are in the VIP area, that is NOT “zero trust.” If users have to authenticate themselves each time they go to the private bar – then that could be “zero trust.” We are guarding against the possibility that someone managed to breach the other security measures.
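
    A rough sketch of that “check the pass every single time” idea – a hypothetical short-lived, signed pass that every checkpoint (including the private bar) re-verifies instead of trusting that being “inside” means you belong there:

    ```python
    # hypothetical short-lived signed "pass" that every checkpoint re-verifies
    import hashlib
    import hmac
    import time

    SECRET = b"venue-signing-key"       # made up key for the example

    def issue_pass(name, area, ttl=300):
        expires = int(time.time()) + ttl
        msg = f"{name}|{area}|{expires}".encode()
        sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return f"{name}|{area}|{expires}|{sig}"

    def check_pass(token, area):
        name, token_area, expires, sig = token.rsplit("|", 3)
        msg = f"{name}|{token_area}|{expires}".encode()
        good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(sig, good)
                and token_area == area
                and int(expires) > time.time())

    vip = issue_pass("alice", "VIP")
    print(check_pass(vip, "VIP"))        # True -- and checked again at the bar door
    print(check_pass(vip, "backstage"))  # False -- a VIP pass doesn't imply more
    ```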

    Eternal vigilance

    If you have heard of “AAA” in regards to security – we have talked about the first two “A’s” (“Authentication”, and “Authorization”).

    Along with all of the above – we also need “auditing.”

    First we authenticate a user, THEN the user gets authorized to do something, and THEN we keep track of what the user does while they are in the system – which is usually called “auditing”.

    Of course what actions we will choose to “audit” requires some planning. If we audit EVERYTHING – then we will be swamped by “ordinary event” data. The “best practice” becomes “auditing” for the “unusual”/failure.

    e.g. if it is “normal” for users to login between the hours of 7:00AM and 6:00PM and we start seeing a lot of “failed login attempts” at 10:00PM – that probably means someone is doing something they shouldn’t.
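
    A toy “auditing” pass over a made up log format – flagging failed logins outside that 7:00AM to 6:00PM window:

    ```python
    # toy audit sweep over a hypothetical log format: flag failed logins
    # that happen outside the normal 7:00AM-6:00PM window
    from datetime import datetime

    LOG_LINES = [                        # made-up sample entries
        "2024-05-01 09:15 alice LOGIN_OK",
        "2024-05-01 22:04 admin LOGIN_FAIL",
        "2024-05-01 22:05 admin LOGIN_FAIL",
    ]

    def suspicious(line):
        stamp, _user, event = line.rsplit(" ", 2)
        hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
        return event == "LOGIN_FAIL" and not (7 <= hour < 18)

    for line in LOG_LINES:
        if suspicious(line):
            print("ALERT:", line)
    ```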

    Deciding what you need to audit, how to gather the data, and where/when/how to analyze that data is a primary function of (what gets called) “cyber-security.”

    “Security” is always best thought of as a “process” not an “end state.” Something like “zero trust” requires constant authorization of users – ideally against multiple forms of authentication.

    Ideally intruders will be prevented from entering, BUT finding/detecting intrusion becomes essential.

    HOW to specifically achieve any of the above becomes an “it depends” situation requiring in depth analysis. Any plan is better than no planning at all, but the best plan will be tested and re-evaluated on a regular basis – which is obviously beyond the scope of this little story …

  • Brand value, Free Speech, and Twitter

    As a thought experiment – imagine you are in charge of a popular, world wide, “messaging” service – something like Twitter but don’t get distracted by a specific service name.

    Now assume that your goal is to provide ordinary folks with tools to communicate in the purest form of “free speech.” Of course if you want to stay around as a “going concern” then you will also need to generate revenue along the way — maybe not “obscene profits” but at least enough to “break even.”

    Step 1: Don’t recreate the wheel

    In 2022, if you wanted to create a “messaging system” for your business/community then there were plenty of available options.

    You could download the source code for Mastodon and set up your own private service if you wanted – but unless you have the required technical skills and have a VERY good reason (like a requirement for extreme privacy) that probably isn’t a good idea.

    In 2022 you certainly wouldn’t bother to “develop your own” platform from scratch — yes, it is something that a group of motivated under grads could do, and they would certainly learn a lot along the way, but they would be “reinventing the wheel.”

    Now if the goal is “education” then going through the “wheel invention” process might be worthwhile. HOWEVER, if the goal is NOT education and/or existing services will meet your “messaging requirements” – then reinventing the wheel is just a waste of time.

    For a “new commercial startup” the big problem isn’t going to be “technology” – the problem will be getting noticed and then “scaling up.”

    Step 2: integrity

    Ok, so now assume that our hypothetical messaging service has attracted a sizable user base. How do we go about ensuring that the folks posting messages are who they say they are – i.e. how do we ensure “user integrity”?

    In an ideal world, users/companies could sign up as who they are – and that would be sufficient. But in the real world, where there are malicious actors with a “motivation to deceive” for whatever reason, steps need to be taken to make it harder for “malicious actors to practice maliciousness.”

    The problem here is that it is expensive (time and money) to verify user information. Again, in a perfect world you could trust users to “not be malicious.” With a large network you would still have “naming conflicts” but if “good faith” is the norm, then those issues would be ACCIDENTAL not malicious.

    Once again, in 2022 there are available options and “recreating the wheel” is not required.

    This time the “prior art” comes in the form of the registered trademark and good ol’ domain name system (DNS).

    Maybe we should take a step back and examine the limitations of “user identification.” Obviously you need some form of unique addressing for ANY network to function properly.

    quick example: “cell phone numbers” – each phone has a unique address (or a card installed in the phone with a unique address) so that when you enter in a certain set of digits, your call will be connected to that cell phone.

    Of course it is easy to “spoof the caller id” which simply illustrates our problem with malicious users again.

    Ok, now the problem is that those “unique user names” probably aren’t particularly elegant — e.g. forcing users to use names like user_2001,7653 wouldn’t be popular.

    If our hypothetical network is large enough then we have “real world” security/safety issues – so using personally identifiable information to login/post messages would be dangerous.

    Yes, we want user integrity. No, we don’t want to force users to use system generated names. No, we don’t want to put people in harm’s way. Yes, the goal is still “free speech with integrity” AND we still don’t want to reinvent the “authentication wheel.”

    Step 3: prior art

    The 2022 “paradigm shift” on usernames is that they are better thought of as “brand names.”

    The intentional practice of “brand management” has been a concern for the “big company” folks for a long time.

    However, this expanding of the “brand management” concept does draw attention to another problem. This problem is simply that a “one size fits all” approach to user management isn’t going to work.

    Just for fun – imagine that we decide to have three “levels” of registration:

    • level 1 is the fastest, easiest, and cheapest – provide a unique email address and agree to the TOS and you are in
    • level 2 requires additional verification of user identity, so it is a little slower than level 1, and will cost the user a fee of some kind
    • level 3 is for the traditional “big company enterprises” – they have a trademark, a registered domain name, and probably an existing brand ‘history.’ The slowest and most expensive, but then also the level with the most control over their brand name and ‘follower’ data

    The additional cost probably won’t be a factor for the “big company” — assuming they are getting a direct line to their ‘followers’/’customers’

    Yes, there should probably be a “non profit”/gov’ment registration as well – which could be low cost (free) as well as “slow”.

    Anyone that remembers the early days of the “web” might remember when the domain choices were ‘.com’, ‘.edu’, ‘.net’, ‘.mil’, and ‘.org’ – with .com being for “commercial” entities, .edu for “education”, .net originally intended for “network infrastructure,” .mil for the military, and .org for “non profit organizations.”

    I think that originally .org was free of charge – but you had to prove that you were a non-profit. Obviously you needed to be an educational institution to get an .edu domain, and the “military” requirement for a .mil domain was exactly what it sounds like.

    Does it need to be pointed out that “.com” for commercial activity was why the “dot-com bubble/boom and bust” was called “dot-com”?

    Meanwhile, back at the ranch ….

    For individuals the concept was probably thought of as “personal integrity” – and hopefully that concept isn’t going away, i.e. we are just adding a thin veneer and calling it “personal branding.”

    Working in our hypothetical company’s favor is the fact that “big company brand management” has included registering domain names for the last 25+ years.

    Then add in that the modern media/intellectual property “prior art” consists of copyrights, trademarks, and patents. AND we (probably) already have a list of unacceptable words – e.g. assuming that profanity and racial slurs are not acceptable.

    SO just add a registered trademark and/or domain name check to the registration process.

    Prohibit anyone from level 1 or 2 from claiming a name that is on the “prohibited” list. Problem solved.

    It should be pointed out that this “enhanced registration” process would NOT change anyone’s ability to post content. Level 2 and 3 are not any “better” than level 1 – just “authenticated” at a higher level.

    If a “level 3 company” chooses not to use our service – their name is still protected. “Name squatting” should also be prohibited — e.g. if a level 3 company name is “tasty beverages, inc” then names like “T@sty beverages” or “aTasty Beverage” should be blocked – a simple regular expression test would probably suffice.
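
    (Just to make that concrete – a minimal sketch of the “name check” idea, assuming a hypothetical protected-name list and a made up substitution table – a real system would need something fuzzier:)

    ```python
    import re

    # Hypothetical protected "level 3" names - illustration only
    PROTECTED_NAMES = ["tasty beverages"]

    # Common look-alike substitutions a squatter might try (@ for a, 0 for o, ...)
    SUBSTITUTIONS = {"@": "a", "0": "o", "1": "l", "3": "e", "$": "s", "5": "s"}

    def normalize(name: str) -> str:
        """Lowercase, undo the common substitutions, and keep letters only."""
        name = name.lower()
        for fake, real in SUBSTITUTIONS.items():
            name = name.replace(fake, real)
        return re.sub(r"[^a-z]", "", name)

    def stem(name: str) -> str:
        """Drop a trailing 's' so singular/plural variants collapse together."""
        n = normalize(name)
        return n[:-1] if n.endswith("s") else n

    def is_squatting(requested: str) -> bool:
        """Flag a requested name that collapses to (or contains) a protected name."""
        candidate = normalize(requested)
        return any(stem(protected) in candidate for protected in PROTECTED_NAMES)

    print(is_squatting("T@sty Beverages"))   # True  - blocked at level 1/2 registration
    print(is_squatting("aTasty_Beverage"))   # True  - contains the protected name
    print(is_squatting("my_soda_reviews"))   # False - allowed
    ```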

    The “level 3” registration could then have benefits like domain registration — i.e. “tasty beverages, inc” would be free to create other “tasty beverage” names …

    If you put together a comprehensive “registered trademark catalog” then you might have a viable product – the market is probably small (trademark lawyers?), but if you are creating a database for user registration purposes – selling access to that database wouldn’t be a big deal – but now I’m just rambling …

  • Random Thoughts about Technology in General and Linux distros in Particular

    A little history …

    In the 30+ years I’ve been a working “computer industry professional” I’ve done a lot of jobs, used a lot of software, and spent time teaching other folks how to be “computer professionals.”

    I’m also an “amateur historian” – i.e. I enjoy learning about “history” in general. I’ve had real “history teachers” point out that (in general) people are curious about “what happened before them.”

    Maybe this “historical curiosity” is one of the things that distinguishes “humans” from “less advanced” forms of life — e.g. yes, your dog loves you, and misses you when you are gone – but your dog probably isn’t overly concerned with how its ancestors lived (assuming that your dog has the ability to think in terms of “history” – but that isn’t the point).

    As part of “teaching” I tend to tell (relevant) stories about “how we got here” in terms of technology. Just like understanding human history can/should influence our understanding of “modern society” – understanding the “history of a technology” can/should influence/enhance “modern technology.”

    The Problem …

    There are multiple “problems of history” — which are not important at the moment. I’ll just point out the obvious fact that “history” is NOT a precise science.

    Unless you have actually witnessed “history” then you have to rely on second hand evidence. Even if you witnessed an event, you are limited by your ability to sense and comprehend events as they unfold.

    All of which is leading up to the fact that “this is the way I remember the story.” I’m not saying I am 100% correct and/or infallible – in fact I will certainly get something wrong if I go on long enough – any mistakes are mine and not intentional attempts to mislead 😉

    Hardware/Software

    Merriam-Webster tells me that “technology” is about “practical applications of knowledge.”

    random thought #1 – “technology” changes.

    “Cutting edge technology” becomes common and quickly taken for granted. The “Kansas City” scene from Oklahoma (1955) illustrates the point (“they’ve gone just about as far as they can go”).

    Merriam-Webster tells me that the term “high technology” was coined in 1969 referring to “advanced or sophisticated devices especially in the fields of electronics and computers.”

    If you are a ‘history buff” you might associate 1969 with the “race to the moon”/moon landing – so “high technology” equaled “space age.” If you are an old computer guy – 1969 might bring to mind the Unix Epoch – but in 2022 neither term is “high tech.”

    random thought #2 – “software”

    The term “hardware” in English dates back to the 15th Century. The term originally meant “things made of metal.” In 2022 the term refers to the “tangible”/physical components of a device – i.e. the parts we can actually touch and feel.

    I’ve taught the “intro to computer technology” class more times than I can remember. Early on in the class we distinguish between “computer hardware” and “computer software.”

    It turns out that the term “software” only goes back to 1958 – invented to refer to the parts of a computer system that are NOT hardware.

    The original definition could have referred to any “electronic system” – i.e. programs, procedures, and documentation.

    In 2022 – Merriam-Webster tells me that “software” is also used to refer to “audiovisual media” – which is new to me, but instantly makes sense …

    ANYWAY – “computer software” typically gets divided into two broad categories – “applications” and “operating systems” (OS or just “systems”).

    The “average non-computer professional” is probably unaware and/or indifferent to the distinction between “applications” and the OS. They can certainly tell you whether they use “Windows” or a “Mac” – so saying people are “unaware” probably isn’t as correct as saying “indifferent.”

    Software lets us do something useful with hardware

    an old textbook

    The average user has work to get done – and they don’t really care about the OS except to the point that it allows them to run applications and get something done.

    Once upon a time – when a new “computer hardware system” was designed a new “operating system” would also be written specifically for the hardware. e.g. The Mythical Man-Month is required reading for anyone involved in management in general and “software development” in particular …

    Some “industry experts” have argued that Bill Gates’ biggest contribution to the “computer industry” was the idea that “software” could be/should be separate from “hardware.” While I don’t disagree – it would require a retelling of the “history of the personal computer” to really put the remark into context — I’m happy to re-tell the story, but it would require at least two beers – i.e. not here, not now

    In 2022 there are a handful of “popular operating systems” that also get divided into two groups – e.g. the “mobile OS” – Android, iOS, and the “desktop OS” Windows, macOS, and Linux

    The Android OS is the most installed OS if you are counting “devices.” Since Android is based on Linux – you COULD say that Linux is the most used OS, but we won’t worry about things like that.

    Apple’s iOS on the other hand is probably the most PROFITABLE OS. iOS is based on the “Berkeley Software Distribution” (BSD) – which is very much NOT Linux, but they share some code …

    Microsoft Windows still dominates the desktop. I will not be “bashing Windows” in any form – just point out that 90%+ of the “desktop” machines out there are running some version of Windows.

    The operating system that Apple includes with their personal computers in 2022 is also based on BSD. Apple declared themselves a “consumer electronics” company a long time ago — fun fact: the Beatles (yes, John, Paul, George, and Ringo – those “Beatles”) started a record company called “Apple” in 1968 – so when the two Steves (Jobs and Wozniak) wanted to call their new company “Apple Computers” they had to agree to stay out of the music business – AND we are moving on …

    On the “desktop” then Linux is the rounding error between Windows machines and Macs.

    What is holding back “Linux on the desktop?” Well, in 2022 the short answer is “applications” and more specifically “gaming.”

    You cannot gracefully run Microsoft Office, Avid, or the Adobe suite on a Linux based desktop. Yes, there are alternatives to those applications that perform wonderfully on Linux desktops – but that isn’t the point.

    e.g. that “intro to computers” class I taught used Microsoft Word, and Excel for 50% of the class. If you want to edit audio/video “professionally” then you are (probably) using Avid or Adobe products (read the credits of the next “major Hollywood” movie you watch).

    Then the chicken and egg scenario pops up – i.e. the “big application developers” would (probably) release Linux friendly versions if more people used Linux on the desktop – but people don’t use Linux on the desktop because they can’t run all of the applications they want – so the Linux versions never get made.

    Yes, I am aware of WINE – but it illustrates the problem much more than acts as a solution — and we are moving on …

    Linux Distros – a short history

    Note that “Linux in the server room” has been a runaway success story – so it is POSSIBLE that “Linux on the desktop” will gain popularity, but not likely anytime soon.

    Also worth pointing out — it is possible to run a “Microsoft free” enterprise — but if the goal is lowering the “total cost of ownership” then (in 2022) Microsoft still has a measurable advantage over any “100% Linux based” solution.

    If you are a “large enterprise” then the cost of the software isn’t your biggest concern – “support” is (probably) “large enterprise, Inc’s” largest single concern.

    fwiw: IBM and Red Hat are making progress on “enterprise level” administration tools – but in 2022 …

    ANYWAY – the “birthdate” for Linux is typically given as 1991.

    Under the category of “important technical distinction” I will mention that “Linux” is better described as the “kernel” for an OS and NOT an OS in and of itself.

    Think of Linux as the “engine” of a car – i.e. the engine isn’t the “car”, you need a lot of other systems working with and around the engine for the “car” to function.

    For the purpose of this article I will describe the combination of “Linux kernel + other operating system essentials” as a “Linux Distribution” or more commonly just “distro.” Ready? ok …
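
    (Quick aside before the history – if you want to see the “engine vs car” distinction on a live system, here is a tiny Python sketch that prints the kernel release separately from the distro name – /etc/os-release is where most modern distros identify themselves:)

    ```python
    import platform
    from pathlib import Path

    # The kernel - the "engine"
    print("kernel :", platform.release())   # e.g. something like 5.15.0-generic

    # The distribution - the "car" built around the engine
    # (most modern distros describe themselves in /etc/os-release)
    os_release = Path("/etc/os-release")
    if os_release.exists():
        for line in os_release.read_text().splitlines():
            if line.startswith("PRETTY_NAME="):
                print("distro :", line.split("=", 1)[1].strip('"'))
    else:
        print("distro : no /etc/os-release found on this system")
    ```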

    1992 gave us Slackware. Patrick Volkerding started the “oldest surviving Linux distro” – which accounted for an 80 percent share of the “Linux” market until the mid-1990s.

    1992 – 1996 gave us SUSE Linux (the ancestor of today’s openSUSE), founded by Thomas Fehr, Roland Dyroff, Burchard Steinbild, and Hubert Mantel. I tend to call SUSE “German Linux” and they were essentially selling the “German version of Slackware” on floppy disks until 1996.

    btw: the “modern Internet” would not exist as it is today without Linux in the server room. All of these “early Linux distros” had business models centered around “selling physical media.” Hey, download speeds were of the “dial-up” variety and you were paying “by the minute” in most of Europe – so “selling media” was a good business model …

    1993 – 1996 gave us the start of Debian – Ian Murdock. The goal was a more “user friendly” Linux. The first “stable version” arrived in 1996 …

    1995 gave us Red Hat Linux — this distro was actually my “introduction to Linux.” I bought a book that came with a copy of Red Hat Linux 5.something (I think) and did my first Linux install on an “old” pc PROBABLY around 2001.

    During the dotcom “boom and bust” a LOT of Linux companies went public. Back then it was “cool” to have a big runup in stock valuation on the first day of trading – so when Red Hat “went public” in 1999 they had the eighth-biggest first-day gain in the history of Wall Street.

    The run-up was a little manufactured (i.e. they didn’t release a lot of stock for purchase on the open market). My guess is that in 2022 the folks arranging the “IPO” would set a higher initial price or release more stock if they thought the offering was going to be extremely popular.

    Full disclosure – I never owned any Red Hat stock, but I was an “interested observer” simply because I was using their distro.

    Red Hat’s “corporate leadership” decided that the “selling physical media” business plan wasn’t a good long term strategy. Especially as “high speed Internet” access moved across the U.S.

    e.g. that “multi hour dial up download” is now an “under 10 minute iso download” – so I’d say the “corporate leadership” at Red Hat, Inc made the right decision.

    Around 2003 the Red Hat distro kind of “split” into “Red Hat Enterprise Linux” (RHEL – sold by subscription to an “enterprise software” market) and the “Fedora project” (meant to be a testing ground for future versions of RHEL as well as the “latest and greatest” Linux distro).

    e.g. the Fedora project has a release target of every six months – current version 35. RHEL has a longer planned release AND support cycle – which is what “enterprise users” like – current version 9.

    btw – yes RHEL is still “open source” – what you get for your subscription is regular updates from an approved/secure channel and support. AlmaLinux is a “clone” of RHEL, and CentOS (“sponsored” by Red Hat) has been repositioned as “CentOS Stream” – which now sits just upstream of RHEL.

    IBM “acquired” Red Hat in 2019 – but nothing really changed on the “management” side of things. IBM has been active in the open source community for a long time – so my guess is that someone pointed out that a “healthy, independent Red Hat” is good for IBM’s bottom line in the present and future.

    ANYWAY – obviously Red Hat is a “subsidiary” of IBM – but I’m always surprised when “long time computer professionals” seem to be unaware of the connections between RHEL, Fedora Project, CentOS, and IBM (part of what motivated this post).

    Red Hat has positioned itself as “enterprise Linux” – but the battle for “consumer Linux” still has a lot of active competition. The Fedora project is very popular – but my “non enterprise distros of choice” are both based on Debian:

    Ubuntu (first release 2004) – “South African Internet mogul Mark Shuttleworth” gets credit for starting the distro. The idea was that Debian could be made more “user friendly.” Occasionally I teach an “introduction to Linux” class and the differences between “Debian” and “Ubuntu” are noticeable – but very much in the “ease of use” category (i.e. “Ubuntu” is “easier” for new users to learn)

    I would have said that “Ubuntu” meant “community” (which I probably read somewhere) but the word is of ancient Zulu and Xhosa origin and more correctly gets translated “humanity to others.” Ubuntu has a planned release target of every six months — as well as a longer “long term support” (LTS) version.

    Linux Mint (first release 2008) – Clément Lefebvre gets credit for this one. Technically Linux Mint describes itself as “Ubuntu based” – so of course Debian is “underneath the hood.” I first encountered Linux Mint via a reviewer who described it as the best Linux distro for people trying to move away from Microsoft Windows.

    The differences between Mint and Ubuntu are cosmetic and also philosophical – i.e. Mint will install some “non open source” (but still free) software to improve “ease of use.”

    The beauty of “Linux” is that it can be “enterprise level big” software or it can be “boot from a flash drive” small. It can utilize modern hardware and GPUs or it can run on 20 year old machines. If you are looking for specific functionality, there might already be a distro doing that – or if you can’t find one, you can make your own.

  • Modern “basics” of I.T.

    Come my friends, let us reason together … (feel free to disagree, none of this is dogma)

    There are a couple of “truisms” that APPEAR to conflict –

    Truism 1:

    The more things change the more they stay the same.

    … and then …

    Truism 2:

    The only constant is change.

    Truism 1 seems to imply that “change” isn’t possible while Truism 2 seems to imply that “change” is the only possibility.

    There are multiple ways to reconcile these two statements – for TODAY I’m NOT referring to “differences in perspective.”

    Life is like a dogsled team. If you aren’t the lead dog, the scenery never changes.

    (Lewis Grizzard gets credit for ME hearing this, but he almost certainly didn’t say it first)

    Consider that we are currently travelling through space and the earth is rotating at roughly 1,000 miles per hour – but sitting in front of my computer writing this, I don’t perceive that movement. Both the dogsled and my relative lack of perceived motion are examples of “perspective” …

    Change

    HOWEVER, “different perspectives” or points of view isn’t what I want to talk about today.

    For today (just for fun) imagine that my two “change” truisms are referring to different types of change.

    Truism 1 is “big picture change” – e.g. “human nature”/immutable laws of the universe.

    Which means “yes, Virginia, there are absolutes.” Unless you can change the physical laws of the universe – it is not possible to go faster than the speed of light. Humanity has accumulated a large “knowledge base” but “humans” are NOT fundamentally different than they were 2,000 years ago. Better nutrition, better machines, more knowledge – but humanity isn’t much different.

    Truism 2 can be called “fashion“/style/”what the kids are doing these days” – “technology improvements” fall squarely into this category. There is a classic PlayStation 3 commercial that illustrates the point.

    Once upon a time:

    • mechanical pinball machines were “state of the art.”
    • The Atari 2600 was probably never “high tech” – but it was “affordable and ubiquitous” tech.
    • no one owned a “smartphone” before 1994 (the IBM Simon)
    • the “smartphone app era” didn’t start until Apple released the iPhone in 2007 (but credit for the first “App store” goes to someone else – maybe NTT DoCoMo?)

    SO fashion trends come and go – but the fundamental human needs being served by those fashion trends remain unchanged.

    What business are we in?

    Hopefully, it is obvious to everyone that it is important for leaders/management to understand the “purpose” of their organization.

    If someone is going to “lead” then they have to have a direction/destination. e.g. A tourist might hire a tour guide to “lead” them through interesting sites in a city. Wandering around aimlessly might be interesting for a while – but could also be dangerous – i.e. the average tourist wants some guidance/direction/leadership.

    For that “guide”/leader to do their job they need knowledge of the city AND direction. If they have one OR the other (knowledge OR direction), then they will fail at their job.

    The same idea applies to any “organization.” If there is no “why”/direction/purpose for the organization then it is dying/failing – regardless of P&L.

    Consider the U.S. railroad system. At one point railroads were a huge part of the U.S. economy – the rail system opened up the western part of the continent and ended the “frontier.”

    However, a savvy railroad executive would have understood that people didn’t love railroads – what people valued was “transportation.”

    Just for fun – get out any map and look at the location of major cities. It doesn’t have to be a U.S. map.

    The point I’m working toward is that throughout human history, large settlements/cities have centered around water. Either ports to the ocean or next to riverways. Why? Well, obviously humans need water to live but also “transportation.”

    The problem with waterways is that going with the current is much easier than going against the current.

    SO this problem was solved first by “steam powered boats” and then railroads. The early railroads followed established waterways connecting established cities. Then as railroad technology matured towns were established as “railway stations” to provide services for the railroad.

    Even as the railroads became a major portion of the economy – it was NEVER about the “railroads” it was about “transportation”

    fwiw: then the automobile industry happened – once again, people don’t care so much about “cars” – what they want/need is “transportation”

    If you are thinking “what about ‘freight’ traffic” – well, this is another example of the tools matching the job. Long haul transportation of “heavy” items is still efficiently handled by railroads and barges – it is “passenger traffic” that moved on …

    We could do the same sort of exercise with newspapers – i.e. I love reading the morning paper, but the need being satisfied is “information” NOT a desire to just “read a physical newspaper”

    What does this have to do with I.T.?

    Well, it has always been more accurate to say that “information technology” is about “processing information” NOT about the “devices.”

    full disclosure: I’ve spent a lifetime in and around the “information technology” industry. FOR ME that started as working on “personal computers” then “computer networking”/LAN administration – and eventually I picked up an MBA with an “Information Management emphasis”.

    Which means I’ve witnessed the “devices” getting smaller, faster, more affordable, as well as the “networked personal computer” becoming de rigueur. However, it has never been about “the box” – i.e. most organizations aren’t “technology companies” but every organization utilizes “technology” as part of their day to day existence …

    Big picture: The constant is that “good I.T. practices” are not about the technology.

    Backups

    When any I.T. professional says something like “good backups” solve/prevent a lot of problems, it is essential to remember how a “good backup policy” functions.

    Back in the day folks would talk about a “grandfather/father/son” strategy – if you want to refer to it as “grandmother/mother/daughter” the idea is the same. At least three distinct backups – maybe a “once a month” complete backup that might be stored in a secure facility off-site, a “once a week” complete backup, and then daily backups that might be “differential.”
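
    (Just as an illustration – a minimal sketch of that rotation logic, assuming the monthly full runs on the 1st, the weekly full runs on Sundays, and every other day gets a differential – real schedules vary:)

    ```python
    from datetime import date

    def backup_type(day: date) -> str:
        """Pick the backup to run under a simple grandfather/father/son rotation.
        Assumptions (illustrative only): monthly full on the 1st, weekly full on
        Sundays, daily differential otherwise."""
        if day.day == 1:
            return "monthly full (grandfather) - rotate a copy off-site"
        if day.weekday() == 6:   # Monday is 0, so 6 is Sunday
            return "weekly full (father)"
        return "daily differential (son)"

    # what would run during the first week or so of May 2022
    for n in range(1, 9):
        d = date(2022, 5, n)
        print(d, "->", backup_type(d))
    ```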

    It is important to remember that running these backups is only part of the process. The backups also need to be checked on a regular basis.

    Checking the validity/integrity of backups is essential. The time to check your backups is NOT after you experience a failure/ransomware attack.

    Of course how much time and effort an organization should put into their backup policy is directly related to the value of their data. e.g. How much data are you willing to lose?

    Just re-image it

    Back in the days of the IBM PC/XT, if/when a hard drive failed it might take a day to get the system back up. After installing the new hard drive, formatting the drive and re-installing all of the software was a time intensive manual task.

    Full “disk cloning” became an option around 1995. “Ghosting” a drive (i.e. “cloning”) belongs in the acronym Hall of Fame — I’m told it was supposed to stand for “general hardware-oriented system transfer.” The point being that now if a hard drive failed, you didn’t have to manually re-install everything.

    Jump forward 10 years and Local Area Networks are everywhere – computer manufacturers had been including “system restore disks” for a long time AND software to clone and manage drives is readily available. The “system cloning” features get combined with “configuration management” and “remote support” and this is the beginning of the “modern I.T.” era.

    Now it is possible to “re-image” a system as a response to software configuration issues (or malware). Disk imaging is not a replacement for a good backup policy – but it reduced “downtime” for hardware failures.

    The more things change …

    Go back to the 1980’s/90’s and you would find a lot of “dumb terminals” connecting to a “mainframe” type system (well, by the 1980s it was probably a “minicomputer” not a full blown “mainframe”).

    A “dumb terminal” has minimal processing power – enough to accept keyboard input and provide monitor output, and connect to the local network.

    Of course those “dumb terminals” could also be “secured” so there were good reasons for keeping them around for certain installations. e.g. I remember installing a $1,000 expansion card into new late 1980’s era personal computers to make it function like a “dumb terminal” – but that might have just been the Army …

    Now in 2022 we have “Chromebooks” that are basically the modern version of “dumb terminals.” Again, the underlying need being serviced is “communication” and “information” …

    All of which boils down to the fact that the “basics” of information processing haven’t really changed. The ‘personal computer’ is a general purpose machine that can be configured for various industry specific purposes. Yes, the “era of the PC” has been over for 10+ years but the need for ‘personal computers’ and ‘local area networks’ will continue.

  • Industry Changing Events and “the cloud”

    Merriam-Webster tells me that etymology is “the history of a linguistic form (such as a word)” (the official definition goes on a little longer – click on the link if interested …)

    The last couple weeks I’ve run into a couple of “industry professionals” that are very skilled in a particular subset of “information technology/assurance/security/whatever” but obviously had no idea what “the cloud” consists of in 2022.

    Interrupting and then giving an impromptu lecture on the history and meaning of “the cloud” would have been impolite and ineffective – so here we are 😉 .

    Back in the day …

    Way back in the 1980’s we had the “public switched telephone network” (PSTN) in the form of (monopoly) AT&T. You could “drop a dime” into a pay phone and make a local call. “Long distance” was substantially more – with the first minute even more expensive.

    The justification for higher connection charges and then “per minute” charges was simply that the call was using resources in “another section” of the PSTN. How did calls get routed?

    Back in 1980 if you talked to someone in the “telecommunications” industry they might have referred to a phone call going into “the cloud” and connecting on the other end.

    (btw: you know all those old shows where they need “x” amount of time to “trace” a call – always a good dramatic device, but from a tech point of view the “phone company” knew where each end of the call was originating – you know, simply because that was how the system worked)

    I’m guessing that by the breakup of AT&T in 1984 most of the “telecommunications cloud” had gone digital – but I was more concerned with football games in the 1980s than telecommunications – so I’m honestly not sure.

    In the “completely anecdotal” category “long distance” had been the “next best thing to being there” (a famous telephone system commercial – check youtube if interested) since at least the mid-1970s – oh, and “letter writing”(probably) ended because of low cost long distance not because of “email”

    Steps along the way …

    Important technological steps along the way to the modern “cloud” could include:

    • the first “modem” in the early 1960s – that is a “modulator”/”demodulator” if you are keeping score. A device that could take a digital signal and convert it to an analog wave for transmission over the PSTN on one end of the conversation, with another modem reversing the process on the other end.
    • Ethernet was invented in the early 1970’s – which allowed computers at the same site to talk to each other over a shared local network. You are probably using some flavor of Ethernet on your LAN
    • TCP/IP was “invented” in the 1970’s then became the language of ARPANET in the early 1980’s. One way to define the “Internet” is as a “large TCP/IP network” – ’nuff said

    that web thing

    Tim Berners-Lee gets credit for “inventing” the world wide web in 1989 while at CERN. Which made “the Internet” much easier to use – and suddenly everyone wanted a “web site.”

    Of course the “personal computer” needed to exist before we could get large scale adoption of ANY “computer network” – but that is an entirely different story 😉

    The very short version of the story is that personal computer sales greatly increased in the 1990s because folks wanted to use that new “interweb” thing.

    A popular analogy for the Internet at the time was as the “information superhighway” – with a personal computer using a web browser being the “car” part of the analogy.

    Virtualization

    Google tells me that “virtualization technology” actually goes back to the old mainframe/time-sharing systems in the 1960’s when IBM created the first “hypervisor.”

    A “hypervisor” is what allows the creation of “virtual machines.” If you think of a physical computer as an empty warehouse that can be divided into distinct sections as needed then a hypervisor is what we use to create distinct sections and assign resources to those sections.

    The ins and outs of virtualization technology are beyond the scope of this article BUT it is safe to say that “commodity computer virtualization technology” was an industry changing event.

    The VERY short explanation is that virtualization allows for more efficient use of resources which is good for the P&L/bottom line.

    (fwiw: any technology that gets accepted on a large scale in a relatively short amount of time PROBABLY involves saving $$ – but that is more of a personal observation than an industry truism.)

    Also important was the development of “remote desktop” software – which would have been called “terminal access” before computers had “desktops.”

    e.g. Wikipedia tells me that Microsoft’s “Remote Desktop Protocol” was introduced with Windows NT 4.0 – which ZDNet tells me was released in 1996 (fwiw: some of my expired certs involved Windows NT).

    “Remote access” increased the number of computers a single person could support which qualifies as another “industry changer.” As a rule of thumb if you had more than 20 computers in your early 1990s company – you PROBABLY had enough computer problems to justify hiring an onsite tech.

    With remote access tools not only could a single tech support more computers – they could support more locations. Sure in the 1990’s you probably still had to “dial in” since “always on high speed internet access” didn’t really become widely available until the 2000s – but as always YMMV.

    dot-com boom/bust/bubble

    There was a “new economy” gold rush of sorts in the 1990s. Just like gold and silver exploration fueled a measurable amount of “westward migration” into what was at the time the “western frontier” of the United States – a measurable amount of folks got caught up in “dot-com” hysteria and “the web” became part of modern society along the way.

    I remember a lot of talk about how the “new economy” was going to drive out traditional “brick and mortar” business. WELL, “the web” certainly goes beyond “industry changing” – but in the 1990s faith in an instant transformation of the “old economy” into a web dominated “new economy” reached zeitgeist proportions …

    In 2022 some major metropolitan areas trace their start to the gold/silver rushes in the last half of the 19th century (San Francisco and Denver come to mind). There are also a LOT of abandoned “ghost towns.”

    In the “big economic picture” the people running saloons/hotels/general stores in “gold rush areas” had a decent chance of outliving the “gold rush” – assuming that there was a reason for the settlement to be there other than “gold mining.”

    The “dot-com rush” equivalent was that a large number of investors were convinced that a company could stay a “going concern” without ever making a profit. However – just like the people selling supplies to gold prospectors had a good chance of surviving the gold rush – the folks selling tools to create a “web presence” did alright – i.e. in 2022 the survivors of the “dot-com bubble” are doing very well (e.g. Amazon, Google).

    Web Hosting

    In the “early days of the web” establishing a “web presence” took (relatively) arcane skills. The joke was that if you could spell HTML then you could get a job as a “web designer” – ok, maybe it isn’t a “funny” joke – but you get the idea.

    An in depth discussion of web development history isn’t required – pointing out that web 1.0 was the time of “static web pages” is enough.

    If you had a decent internet service provider they might have given you space on their servers for a “personal web page.” If you were a “local” business you might have been told by the “experts” to not worry about a web site – since the “web” would only be useful for companies with a widely dispersed customer base.

    That wasn’t bad advice at the time – but the technology needed to mature. The “smart phone” (Apple 2007) motivated the “mobile first” development strategy – if you can access the web through your phone, then it increases the value of “localized up to date web information.”

    “Web hosting” was another of those things that was going to be “free forever” (e.g. one of the tales of “dot-com bubble” woes was “GeoCities”). Which probably slowed down “web service provider” growth – but that is very much me guessing.

    ANYWAY – in web 1.0 (when the average user was connecting by dial up) the stress put on web servers was minimal – so simply paying to rent space on “someone else’s computer” was a viable option.

    The next step up from “web hosting” might have been to rent a “virtual server” or “co-locate” your own server – both of which required more (relatively) arcane skills.

    THE CLOUD

    Some milestones worth pointing out:

    • 1998 – “Google search” – another “industry changing” event – ’nuff said
    • 1999 – VMware “Workstation” released (virtualization on the desktop)
    • 2001 – VMware ESX (server virtualization)
    • 2004 – Facebook – noteworthy, but not “industry changing”
    • 2005 – Intel released the first CPUs with “Intel Virtualization Technology” (VT-x)
    • 2006 – Amazon Web Services (AWS)

    Officially Amazon described AWS as providing “IT infrastructure services to businesses in the form of web services” – i.e. “the cloud”

    NIST tells us that –

    Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.

    NIST SP 800-145

    If we do a close reading of the NIST definition – the “on-demand” and “configurable” portions are what differentiates “the cloud” from “using other folks computers/data center.”

    I like the “computing as a utility” concept. What does that mean? Glad you asked – e.g. Look on a Monopoly board and you will see the “utility companies” listed as “Water Works” and “Electric Company.”

    i.e. “water” and “electric” are typically considered public utilities. If you buy a home you will (probably) get the water and electric changed into your name for billing purposes – and then you will pay for the amount of water and electric you use.

    BUT you don’t have to use the “city water system” or local electric grid – you could choose to “live off the grid.” If you live in a rural area you might have a well for your water usage – or you might choose to install solar panels and/or a generator for your electric needs.

    If you help your neighbors in an emergency by allowing them access to your well – or maybe connecting your generator to their house – you are a very nice neighbor, BUT you aren’t a “utility company” – i.e. your well/generator won’t have the capacity that the full blown “municipal water system” or electric company can provide.

    Just like if you have a small datacenter and start providing “internet services” to customers – unless you are big enough to be “ubiquitous, convenient, and on-demand” then you aren’t a “cloud provider.”

    Also note the “as a service” aspect of the cloud – i.e. when you sign up you will agree to pay for what you use, but you aren’t automatically making a commitment for any minimal amount of usage.

    As opposed to “web hosting” or “renting a server” where you will probably agree to a monthly fee and a minimal term of service.

    Billing options and service capabilities are obviously vendor specific. As a rule of thumb – unless you have “variable usage” then using “the cloud” PROBABLY won’t save you money over “web hosting”/”server rental.”

    The beauty of the cloud is that users can configure “cloud services” to automatically scale up for an increase in traffic and then automatically scale down when traffic decreases.

    e.g. imagine a web site that has very high traffic during “business hours” but then minimal traffic the other 16 hours of the day. A properly configured “cloud service” would scale up (costing more $$) during the day and then scale down (costing fewer $$) at night.
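
    (A toy sketch of that idea – every number below is made up for illustration, and in practice the provider’s scaling policies do this for you:)

    ```python
    def desired_instances(hour: int, requests_per_min: int) -> int:
        """Toy scaling policy: keep a small overnight floor, a larger business-hours
        floor, and add capacity when the load demands it.
        Assumptions (all invented): one instance handles ~500 requests/minute,
        "business hours" are 8am to 4pm."""
        floor = 4 if 8 <= hour < 16 else 2
        needed_for_load = -(-requests_per_min // 500)   # ceiling division
        return max(floor, needed_for_load)

    print(desired_instances(hour=10, requests_per_min=2600))   # 6 - busy mid-morning
    print(desired_instances(hour=2, requests_per_min=120))     # 2 - overnight floor
    ```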

    Yes, billing options become a distinguishing element of the “cloud” – which further muddies the water.

    Worth pointing out is that if you are a “big internet company” you might get to the point where it is in your company’s best interest to build your own datacenters.

    This is just the classic “rent” vs “buy” scenario – i.e. if you are paying more in “rent” than it would cost you to “buy” then MAYBE “buying your own” becomes an option (of course “buying your own” also means “maintaining” and “upgrading” your own). This tends to work better in real estate where “equity”/property values tends to increase.
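
    (Back-of-the-envelope version – every dollar figure below is invented just to show the shape of the math; a real comparison has a lot more line items:)

    ```python
    # Hypothetical numbers for "keep renting public cloud capacity" vs "build your own"
    monthly_cloud_bill      = 400_000       # $ per month to run the workload in the cloud
    datacenter_build_cost   = 20_000_000    # $ up front to build and equip a facility
    datacenter_monthly_cost = 150_000       # $ per month to operate it (power, staff, upkeep)

    monthly_savings   = monthly_cloud_bill - datacenter_monthly_cost
    break_even_months = datacenter_build_cost / monthly_savings
    print(f"break even after ~{break_even_months:.0f} months")   # ~80 months at these numbers
    ```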

    Any new “internet service” that strives to be “globally used” will (probably) start out using “the cloud” – and then if/when they are wildly successful, start building their own datacenters while decreasing their usage of the public cloud.

    Final Thoughts

    It Ain’t What You Don’t Know That Gets You Into Trouble. It’s What You Know for Sure That Just Ain’t So

    Artemus Ward

    As a final thought – “cloud service” usage was $332.3 BILLION in 2021 up from $270 billion in 2020 (according to Gartner).

    There isn’t anything magical about “the cloud” – but it is a little more complex than just “using other people’s computers.”

    The problem with “language” in general is that there are always regional and industry differences. e.g. “Salesforce” and “SAP” fall under the “cloud computing” umbrella – but Salesforce uses AWS to provide their “Software as a Service” product and SAP uses Microsoft Azure.

    I just spent 2,000 words trying to explain the history and meaning of “the cloud” – umm, maybe a cloud by any other name would still be vendor specific

    HOWEVER I would be VERY careful about choosing a cloud provider that isn’t run by a “big tech company” (i.e. Microsoft, Amazon, Google, IBM, Oracle). “Putting all of your eggs in one basket” is always a risky proposition (especially if you aren’t sure that the basket is good in the first place) — as always caveat emptor …

  • buzzword bingo, pedantic-ism, and the internet

    just ranting

    One of the identifying characteristics of “expert knowledge” is understanding how everything “fits” together. True “mastery” of any field with a substantial “body of knowledge” takes time and effort. Which means there are always more people that “know enough to be dangerous” than there are “real experts.”

    Which is really just recognition of the human condition – i.e. if we had unlimited time and energy then there would be a lot more “true experts” in every field.

    There is a diminishing return on “additional knowledge” after a certain point. e.g. Does anyone really need to understand how IBM designed Token Ring networks? Well, it might be useful for historic reasons and theoretical discussion – but I’ll go out on a limb and say that for anyone studying “networking” today, becoming an “expert” on Token Ring is not worth the time or effort.

    There are also a lot of subjects where a slightly “incorrect” understanding is part of the learning process. e.g. Remember that high school chemistry class where you learned about electrons orbiting the nucleus at various discrete “energy levels” like tiny moons orbiting a planet? Then remember that college chemistry class where they told you that isn’t the way it actually is – but don’t worry about it, everyone learns it that way.

    (random thought – just because we can’t be sure where something is, doesn’t mean it can be in two spots at the same time – just like that cat in a box – it isn’t half alive and half dead, it is one or the other, we just can’t know which one – and moving on …)

    buzzwords vs jargon vs actual understanding

    Dilbert’s “pointy haired boss” is routinely held up for ridicule for “buzzword spouting” – which – in the most negative sense of the concept – implies that the person using “buzzwords” about a “subject” has a very minimal understanding of that “subject.”

    Of course the “Dilbert principle” was/is that the competent people in a company are too valuable at their current job – and so cannot be promoted to “management”. Which implies that all managers are incompetent by default/design. It was a joke. It is funny. The reality is that “management” is a different skillset – but the joke is still funny 😉

    The next step up is the folks that can use the industry “jargon” correctly. Which simply illustrates that “education” is a process. In “ordinary speech” we all recognize and understand more words than we actively use – the same concept applies to acquiring and using the specific vocabulary/”jargon” of a new field of study (whatever that field happens to be).

    However if you stay at the “jargon speaking” level you have not achieved the goal of “actual understanding” and “applied knowledge.” Yes, a lot of real research has gone into describing different “levels”/stages in the process – which isn’t particularly useful. The concept that there ARE stages is much more important than the definition of specific points in the process.

    pedants

    No one wants a teacher/instructor that is a “pedant” – you know, that teacher that knows a LOT about a subject and thinks that it is their job to display just how much they know — imagine the high school teacher that insists on correcting EVERYONE’S grammar ALL THE TIME.

    There is an old joke that claims that the answer to EVERY accounting question is “it depends.” I’m fond of applying that concept to any field where “expert knowledge” is possible – i.e. the answer to EVERY question is “it depends.”

    (… oh, and pedants will talk endlessly about how much they know – but tend to have problems applying that knowledge in the real world. Being “pedantic” is boring/bad/counter productive – and ’nuff said)

    Of course if you are the expert being asked the question, what you get paid for is understanding the factors that it “depends on.” If you actually understand the factors AND can explain it to someone that isn’t an expert – then you are a rara avis.

    In “I.T.” you usually have three choices – “fast”, “cheap” (as in “low cost”/inexpensive), and “good” (as in durable/well built – “is it heavy? then it is expensive”) – but you only get to choose two. e.g. “fast and cheap” isn’t going to be “good”, and “fast and good” isn’t going to be “inexpensive.”

    Is “Cheap and good” possible? – well, in I.T. that probably implies using open source technologies and taking the time to train developers on the system – so an understanding of “total cost of ownership” probably shoots down a lot of “cheap and good” proposals – but it might be the only option if the budget is “we have no budget” – i.e. the proposal might APPEAR “low cost” when the cost is just being pushed onto another area — but that isn’t important at the moment.

    internet, aye?

    There is an episode of the Simpsons where Homer starts a “dot com” company called Compu-Global-Hyper-Mega-Net – in classic Simpsons fashion they catch the cultural zeitgeist – I’ll have to re-watch the episode later – the point for mentioning it is that Homer obviously knew nothing about “technology” in general.

    Homer’s “business plan” was something like saying “aye” after every word he didn’t understand – which made him appear like he knew what he was talking about (at the end of the episode Bill Gates “buys him out” even though he isn’t sure what the company does – 1998 was when Microsoft was in full “antitrust defense by means of raised middle finger” – so, yes it was funny)

    (random thought: Microsoft is facing the same sort of accusations with their “OneDrive” product as they did with “Internet Explorer” – there are some important differences – but my guess is THIS lawsuit gets settled out of court 😉 )

    ANYWAY – anytime a new technology comes along, things need to settle down before you can really get past the “buzzword” phase. (“buzzword, aye?”) – so, while trying not to be pedantic, an overview of the weather on the internet in 2021 …

    virtualization/cloud/fog/edge/IoT

    Some (hopefully painless) definitions:

    first – what is the “internet” – the Merriam-Webster definition is nice, slightly more accurate might be to say that the internet is the “Merriam-Webster def” plus “that speaks TCP/IP.” i.e. the underlying “language” of the internet is something called TCP/IP

    This collection of worldwide TCP/IP connected networks is “the internet” – think of this network as “roads”

    Now “the internet” has been around for a while – but it didn’t become easy to use until Tim Berners Lee came up with the idea for a “world wide web” circa 1989.

    While rapidly approaching pedantic levels – this means there is a difference between the “internet” and the “world wide web.” If the internet is the roads, then the web is traffic on those roads.

    It is “true” to say that the underlying internet hasn’t really changed since the 1980’s – but maybe a little misleading.

    Saying that we have the “same internet” today is a little like saying we have the same road system today as we did when the Model-T was new. A lot of $$ has gone into upgrading the “internet” infrastructure since the 1980’s – just like countless $$ have gone into building “infrastructure” for modern automobiles …

    Picking up speed – Marc Andreessen gets credit for writing the first “modern” web browser in the early 1990s. Which kinda makes “web browsers” the “vehicles” running on the “web”

    Britannica via Google tells me that the first use of the term “cyberspace” goes back to 1982 – for convenience we will refer to the “internet/www/browser” as “cyberspace” – I’m not a fan of the term, but it is convenient.

    Now imagine that you had a wonderful idea for a service existing in “cyberspace” – back in the mid-1990’s maybe that was like Americans heading west in the mid 19th century. If you wanted to go west in 1850, there were people already there, but you would probably have to clear off land and build your own house, provide basic needs for yourself etc.

    The cyberspace equivalent in 1995 was that you had to buy your own computers and connect them to the internet. This was the time when sites like “Yahoo!” and/or “eBay” kind of ruled cyberspace. You can probably find a lot of stories of teenagers starting websites that attracted a lot of traffic and then selling them off for big $$ without too much effort. The point being that there weren’t a lot of barriers/rules on the web – but you had to do it yourself.

    e.g. A couple of nice young men (both named “Sean”) met in a thing called “IRC” and started a little file sharing project called Napster in 1999 – which is a great story, but also illustrates that there is “other traffic” on the internet besides the “web” (i.e. Napster connected users with each other – they didn’t actually host files for sharing)

    Napster did some cool stuff on the technical side – but had a business model that was functionally based on copyright infringement at some level (no they were not evil masterminds – they were young men that liked music and computers).

    ANYWAY – the point being that the Napster guys had to buy computers/configure the computers/and connect them to the internet …

    Startup stories aside – the next big leap forward was a concept called “virtualization.” The short version is that hardware processing power grew much faster than typical software workloads required – SO a single physical machine would be extremely underutilized and inefficient – then “cool tech advancements” happened and we could “host” multiple “servers” on 1 physical machine.

    Extending the “journey west” analogy – virtualization allowed for “multi-tenant occupation” – at this point the roads were safe to travel/dependable/you didn’t HAVE to do everything yourself. When you got to your destination you could stay at the local bed and breakfast while you looked for a permanent place to stay (or moved on).

    … The story so far: we went from slow connections between big time-sharing computers in the 1970’s, to fast connections between small personal computers in the 1990’s (“you need a computer to get on the web”), to a “web infrastructure” consisting mostly of virtualized machines by the early 2000s …

    Google happened in there somewhere, which was a huge leap forward in real access to information on the web – another great story, just not important for my story today 😉

    they were an online bookstore once …

    Next stop 2006. Jeff Bezos and Amazon.com are (probably) one of the greatest business success stories in recorded history. They had a LONG time where they emphasized “growth” over profit – e.g. when you see comic strips from the 1990’s about folks investing in “new economy” companies that had never earned a profit, Amazon is the success story.

    (fwiw: of course there were also a LOT of companies that found out that the “new economy” still requires you to make a profit at some point – the dot.com boom and bust/”bubble” has been the subject of many books – so moving on …)

    Of course in the mid-2000’s Amazon was still primarily a “retail shopping site.” The problem facing ANY “retail” establishment is meeting customer service/sales with employee staffing/scheduling.

    If you happen to be a “shopping website” then your way of dealing with “increased customer traffic” is to implement fault tolerance and load balancing techniques – the goal is “fast customer transactions” which equals “available computing resources” but could also mean “inefficient/expensive.”

    Real world restaurant example: I’m told that the best estimate for how busy any restaurant will be on any given day is to look at how busy they were last year on the same date (adjusting for weekends and holidays). SO if a restaurant expects to be very busy on certain days – they can schedule more staff for those days. If they don’t expect to be busy, then they will schedule fewer employees.

    Makes sense? Cool. The point is that Amazon had the same problem – they had the data on “expected customer volume” and had gone about the process of coming up with a system that would allow for automatic adjustment of computing resources based on variable workloads.

    I imagine the original goal might have been to save money by optimizing the workloads – but then someone pointed out that if they designed it correctly then they could “rent out” the service to other companies/individuals.

    Back to our “westward expansion” analogy – maybe this would be the creation of the first “hotel chains.” The real story of “big hotel chains” probably follows along with the westward expansion of the railroad – i.e. the railroads needed depots, and those depots became natural “access” points for travelers – so towns grew up around the depots and inns/”hotels” developed as part of the town – all of which is speculation on my part – but you get the idea

    The point being that in 2006 the “cloud” came into being. To be clear the “cloud” isn’t just renting out a virtual machine in someone else’s data center – the distinct part of “cloud services” is the idea of “variable costs for variable workloads.”

    Think of the electrical grid – if you use more electricity then you pay for what you use, if you use less electricity then your electrical expenses go down.

    The “cloud” is the same idea – if you need more resources because you are hosting an eSports tournament – then you can use more resources – build out/up – and then when the tournament is over scale back down.

    Or if you are researching ‘whatever’ and need to “process” a lot of data – before the cloud you might have had to invest in building your own “super computer” which would run for a couple weeks and then be looking for something to do. Now you can utilize one of the “public cloud” offerings and get your data ‘processed’ at a much lower cost (and probably faster – so again, you are getting “fast” and “inexpensive” but you are using “virtual”/on demand/cloud resources).

    If you are interested in the space exploration business – an example from NASA –

    Fog/Edge/IoT?

    The next problem becomes efficiently collecting data while also controlling cost. Remember with the “cloud” you pay for what you use. Saying that you have “options” for your public/private cloud infrastructure is an understatement.

    However, we are back to the old “it depends” answer when we get into concepts like “Fog computing” and the “Internet of things”

    What is the “Internet of Things”? Well, NIST has an opinion – and if you read the definition and say “that is nice but a little vague” – well, what IS the IoT? It depends on what you are trying to do.

    The problem is that the how of “data collection” is obviously dependent on the data being collected. So the term becomes so broad that it is essentially meaningless.

    Maybe “Fog” computing is doing fast and cheap processing of small amounts of data captured by IoT devices – as opposed to having the data go all the way out to “the cloud” – we are probably talking about “computing on a stick” type devices that plug into the LAN.

    Meanwhile “Edge computing” is one for the salespeople – e.g. it is some combination of cloud/fog/IoT – at this point it reminds me of the “Corinthian Leather” Ricardo Montalban was talking about in car commercials way back when 😉

    Ok, I’m done – I feel better

    SO if you are teaching an online class of substantial length – an entire class only about IoT might be a little pointless. You can talk about various data collecting sensors and chips/whatever – but simply “collecting” data isn’t the point, you need to DO SOMETHING with the data afterwards.

    Of course I can always be wrong – my REAL point is that IoT is a buzzword that gets misused on a regular basis. If we are veering off into marketing and you want to call the class “IoT electric boogaloo” because it increases enrollment – and then talk about the entire cloud/fog/IoT framework – that would probably be worthwhile.

    it only took 2400+ words to get that out of my system 😉