Category: computers

  • look for the union label, corporate profits, and inflation

    A “meme” post caught my attention. I’ve seen various versions – but the gist is always that “corporate profits” are the cause of “inflation.”

    From a “marketing” point of view the meme does a lot of things right – the version that caught my attention “caught audience attention” by stating that “The profits of the top 6 most profitable corporations” had increased “huge%” THEN the meme tries to connect “corporate profits” with recent “higher than normal” rising inflation.

    Now, the INTERESTING part is that the meme is plausible but also “fact free.” Just who are these 6 corporations? Is it possible for their profitability to cause “inflation?”

    Profits

    What exactly are these “profits?” Google tells me “Profit = Selling Price – Cost Price.”

    Imagine a small business selling “product.” If the small business makes the “product” then the “cost” will include raw materials and labor. For the small business to stay “in business” they need to sell the product for more than it cost them to produce.

    e.g. if raw materials = 30 and labor = 40 – “cost” = 70. IF the business sells the product for 100 then “profits per unit” = 30.

Then “Profit percentage” = (Profit/Cost Price) x 100. In this case:

    Profit percentage = (30/70) x 100 = 43%
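
To make the arithmetic concrete – here is the same toy example in a few lines of Python (the numbers are the made-up ones from above, nothing more):

# toy profit math using the made-up numbers above
raw_materials = 30
labor = 40
cost_price = raw_materials + labor          # 70
selling_price = 100

profit = selling_price - cost_price         # 30 per unit
profit_percentage = (profit / cost_price) * 100

print(f"profit per unit: {profit}")
print(f"profit percentage: {profit_percentage:.0f}%")   # ~43%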

The obvious question NOW becomes: is that Profit % good or bad? Unfortunately the answer is “we don’t know.”

    Well, a highly paid financial consultant would say “It depends.” Which is kind of the answer to ANY “financial accounting” question.

Of course pointing out the factors that profitability “depends” on is the more useful answer. THAT answer will vary depending on the “business industry/sector” – e.g. “costs” are much different for a car company than a pizza company.

    A business being “profitable” just means that it is being well managed. No business will stay a “going concern” long if they LOSE money on EVERY transaction.

    BUT a “well managed” company that makes a small % profit on each transaction …

    Oh, and if that all above sounds fascinating you might want to look into “corporate finance” as a career 😉

    Corporations

    The history of “corporations” is mildly interesting – but not important here.

    In 2024 “corporations” exist as a way for “business” to raise “capital.” A corporation’s “initial public offering” (IPO) involves selling “stock” to “investors.”

THOSE investors aren’t guaranteed anything (as opposed to “corporate bonds” – did I mention that “corporate finance” is a career field?).

    If they aren’t guaranteed ANYTHING why would anyone gamble on an IPO? Well, the obvious answer is that (assuming that the corporation meets some basic financial reporting requirements) the stock becomes an asset that can be traded on a “stock market.”

The “corporation” gets the cash from the IPO – but the “shareholders” can then buy and sell shares among themselves. Which is kind of a big deal in 2024 (and obviously “stock market investment” is beyond the scope of this article.)

    Rule #1 of the “stock market” might be “buy low and sell high” – i.e. the “profits” concept applies to stock trading as well.

    … and what is a big factor in how investors value shares of a corporation’s stock? Profitability.

    How do corporations use the money from an IPO/stock offering? To grow/expand their business.

Eventually the “profitable corporation” MIGHT distribute “profits” to shareholders in the form of “dividends.”

    The grand point being that “corporations” are not evil OR good – they are an investment tool. Corporate profits are also not evil OR good – “profitability” is a function of management and the “business sector.”

    Gov’ment Regulation

    Unrestrained human greed is never a good thing.

The history of the United States “economy” is a story of “booms” and “busts.” Those swings in the business cycle illustrated an inverse relationship between “unemployment” and “inflation.”

During a “boom” the unemployment rate would go down, but then inflation would go up. Then during a “bust” unemployment went up, and inflation would come down.

Random thought: There is a scene in “Support Your Local Sheriff” (1969) that (humorously) shows the impact “boom times” could have on “consumer prices” – a “mining boom town” has trouble hiring a Sheriff (for “plot” reasons) – James Garner’s character decides to take the job in part because it comes with room and board (and he had just paid a huge amount for a plate of beans).

    In 1913 the Federal Reserve was founded with a “mission” of trying to “smooth out” the business cycles.

The “economics” textbooks will say that the Fed’s goal is (around) 5% unemployment and (around) 2% inflation. How well the Fed has achieved those goals is debatable – BUT that is another topic.

Obviously if the Fed is making decisions based on “unemployment” and “inflation” rates they need a method of calculating unemployment and inflation.

Unemployment seems simple enough – but it is a little more complex than just counting people “out of work” – fwiw: the Fed has considered 5% as “full employment” because in a large economy there will always be people entering/leaving the work force. e.g. The unemployment rate at the height of the Great Depression (1933) was 25%, but wage income for employed workers also fell 43% between 1929 and 1933. Things were bad …

Calculating inflation is even more complicated – first the Bureau of Labor Statistics determines the “consumer price index” (CPI) — which is a “measure of the average change over time in the prices paid by urban consumers for a market basket of consumer goods and services.”

    That CPI “basket of goods” contains 85,000 items spread out over various “sectors” of the economy. That number is important – I’ll mention that again later …

IF the CPI goes up that equals “inflation.” If it goes down that is called “deflation” (the last time the U.S. experienced “deflation” was 2009 – the unemployment rate peaked at 9.9% that year).

BTW: the short explanation of the “Great Recession” revolves around “sub prime mortgages” – no, it wasn’t the fault of “free market capitalism,” it was “unrestrained greed” fed by poorly thought out gov’ment intervention in the housing market.

i.e. the gov’ment was REQUIRING banks to give loans to folks that couldn’t afford to pay them back – not surprisingly, when the whole thing exploded it caused a lot of problems. It became a worldwide financial crisis because those “sub prime loans” were “securitized” and sold on “exchanges” — again, all fed by greed.

I like to say that the BEST role for “government” to play in the economy is “referee.” Too many unintended consequences are possible when the gov’ment starts CHOOSING “winners and losers” on a large scale. Yes, the economy needs “regulation” but NOT “central planning.”

    Unions

The first labor union in the United States was the “Federal Society of Journeymen Cordwainers,” founded in Philadelphia in 1794.

    The United States was primarily an “agricultural economy” (as in most people working on/around farms) until the early 20th Century. Which kinda meant that the demand for “labor unions” wasn’t high.

    It is interesting that the first labor union was ruled a “criminal conspiracy” in 1806. Functionally that ruling made attempts at “organizing labor” a crime. It wasn’t until 1842 when “precedent” was set “de-criminalizing” union membership.

    AND the history of organized labor is also interesting – but not important at the moment.

I’m not ignoring the sometimes adversarial relationship between “management” and “labor.” Just like corporations, “labor unions” are NOT inherently “good” OR “bad.” Ideally “corporation management” and “labor unions” should work together for mutual benefit.

BUT “greed” is never good. i.e. “Labor” is just as susceptible to “greed” as “management.”

    Both “labor” and “management” will better serve the “organization” if they understand each others function. The relationship is not “zero sum” or even “either/or.”

    Having an understanding of how the “corporation makes money” will help “labor” communicate with “management.” Of course “management” ALSO needs to appreciate the work performed by “labor.” …

    Corporate Profits

    “Large Corporate profitability” tends to involve a LOT of “Generally accepted accounting principles” (GAAP). The point being that a “multi-billion dollar corporation” is going to generate “profits” from a lot of sources.

    Again, if you find “corporate finance” interesting there are a lot of career options. MY guess is that any of the top 10 “most profitable” corporations COULD adjust their profits up or down (using GAAP) without doing anything illegal.

ANY “global corporation” will have multiple “books” depending on the audience – i.e. one set of “books” for the U.S. Federal gov’ment, one for each State they do business in, one for “management decision making”, and the “books” for whatever other nation-states they do business in.

Oh, and remember those 85,000 items in the CPI “basket?” That large number of items used to calculate “inflation” kinda makes it hard for any single “corporation” to have a large impact on the number.
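
A toy illustration (NOT the real BLS methodology – just the weighted-basket idea, with made-up weights and prices): even if one “corporation” doubles its price, an item with a 2% weight can only move the index a couple of points – and the index change is what gets reported as “inflation.”

# toy weighted price index -- NOT the real BLS methodology, just the weighted-basket idea
# each item: (weight in the basket, base period price, current price)
basket = {
    "cereal":          (0.02, 4.00, 8.00),    # one "corporation" doubles its price
    "gasoline":        (0.30, 3.00, 3.00),
    "rent":            (0.40, 1000, 1000),
    "everything else": (0.28, 50.0, 50.0),    # stand-in for the rest of the ~85,000 items
}

index = sum(weight * (now / base) for weight, base, now in basket.values()) * 100
print(f"index: {index:.1f} -> reported 'inflation' of {index - 100:.1f}%")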

    THEN the large number of “corporations” competing in a particular “business sector” makes it even harder for 1 corporation’s profits to impact inflation.

There are also laws against “price fixing” (the good ol’ “Sherman Antitrust Act”) – so if a bunch of “cereal makers” got together and decided to “raise prices” to increase “corporate profits” the Federal Gov’ment would NOT be pleased.

    The “market” tends to prevent “excess profits” in established industries. Plain old “competition” between corporations will prevent anyone from “price gouging” —

    e.g. I went to the store to buy “breakfast cereal” and there was an entire aisle dedicated to “cereals” at various price ranges – I bought the ‘store brand’ because it was “good enough quality” and 1/3 the price of “brand name”

MAYBE that “brand name” was engaging in “price gouging” but it is also (probably) a superior product to the “store brand” – either way that ONE corporation isn’t going to impact the CPI/inflation

    Top 6 corporations 2023

    My original thought was “who are these 6 corporations that are supposed to be causing inflation?”

Well, the MOST profitable corporation in the world is … (drum roll) Saudi Aramco. Obviously this “oil” stuff is in high demand, and Saudi Arabia has vast oil reserves. BUT they aren’t an American Corporation. I’m also not sure if their “profitability” changes much year over year — Saudi Arabia is a founding member of OPEC. ’nuff said

    Of course “increased energy costs” is a HUGE factor in recent inflation across all sectors of the U.S. economy. IF the top 6 profitable corporations were “energy companies” then MAYBE they would deserve a look to see if they are “price gouging.”

However, none of the rest of the top 10 most profitable corporations are “energy companies.”

    the list:

    1. Apple, Inc
    2. Microsoft, Inc
    3. Alphabet, Inc (Google)
4. Berkshire Hathaway, Inc (Warren Buffett’s company)
    5. Industrial and Commercial Bank of China
    6. JPMorgan Chase
    7. China Construction Bank
    8. META (Facebook)
    9. NVIDIA Corp
    10. Amazon.com, Inc

    Under “just my opinion” – I’m not a fan of “Apple, Inc’s” products BECAUSE I think they are over-priced and not “developer friendly.” The latest iPhone PROBABLY qualifies as a “luxury” item – but it isn’t a source of “inflation.” They do make very good “consumer electronics” though …

    Looking at the rest of the list – Alphabet, META, and Amazon might actually help LOWER the CPI/inflation by making it easier for OTHER companies to sell products.

Berkshire and JPMorgan’s profits are very much “stock market” related – which might impact folks’ retirement planning/401(k)s but isn’t moving the dial on the CPI

    Corporations 5 and 7 are obviously based in China — one more under “just my opinion” – ANY company data from Chinese corporations requires an asterisk – maybe an “approved by the Chinese Communist Party” disclaimer

    The “global supply line” issues are part of the inflation story – but again, it is hard to blame any single corporation for those issues …

    Supply and Demand

The “introduction to economics” textbook would also have a section talking about the relationship between “supply” of a product and “unfulfilled demand” for a product. e.g. as “Supply” goes up the “unfulfilled demand” goes down.

    The “slightly unintuitive” concept is that “price” is a third variable NOT ALWAYS related to “supply and demand”

    e.g. Q: if “company” raises the price of their product (and keeps supply constant) how will that impact demand? A: it is impossible to tell.

    Remember the cereal aisle – If “company” raises their prices, then customers MIGHT buy a lower priced alternative product or maybe not buy anything at all.

    This is “price elasticity” – and is another subject 😉
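
Since I brought it up – a quick back-of-the-envelope version of the textbook elasticity formula (percent change in quantity divided by percent change in price), with made-up cereal aisle numbers:

# toy own-price elasticity of demand: % change in quantity / % change in price
def elasticity(old_price, new_price, old_qty, new_qty):
    pct_price = (new_price - old_price) / old_price
    pct_qty = (new_qty - old_qty) / old_qty
    return pct_qty / pct_price

# made-up numbers: "brand name" raises its price 10% and sells 25% fewer boxes
print(round(elasticity(5.00, 5.50, 1000, 750), 1))   # -2.5 -> "elastic", buyers switch to store brand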

Now if there is only ONE company making “product” – and they keep their prices “high” – all that company is doing is encouraging competition to enter the market.

I’ll point at “personal computer” sales in the early 1990s as a (kind of) recent example – the cost to buy the “parts” for an “IBM PC compatible” personal computer was relatively low compared to the selling price.

    IBM being “IBM” made the PC a standard piece of office equipment – but in 2005 sold off their “personal computer division” (to a Chinese company – Lenovo)

    The “IBM price point” encouraged a LOT of “PC clone companies” — e.g. Some young college student at the University of Texas started building and selling PCs out of his dorm room in 1984. In 2024 Michael Dell is worth $96.5 billion …

  • memoirs of an adjunct instructor or What do you mean “full stack developer?”

    During the “great recession” of 2008 I kind of backed into “teaching.”

    The small company where I was the “network technician” for 9+ years wasn’t dying so much as “winding down.” I had ample notice that I was becoming “redundant” – in fact the owner PROBABLY should have “let me go” sooner than he did.

When I was laid off in 2008 I had been actively searching/“looking for work” for 6+ months – I certainly didn’t think I would be unemployed for an extended period of time.

    … and a year later I had gone from “applying at companies I want to work for” to “applying to everything I heard about.” When I was offered an “adjunct instructor” position with a “for profit” school in June 2009 – I accepted.

    That first term I taught a “keyboarding class” – which boiled down to watching students follow the programmed instruction. The class was “required” and to be honest there wasn’t any “teaching” involved.

    To be even MORE honest, I probably wasn’t qualified to teach the class – I have an MBA and had multiple CompTIA certs at the time (A+, Network+) – but “keyboarding” at an advanced level isn’t in my skill set.

BUT I turned in the grades on time, and that “1 keyboarding class” grew into teaching CompTIA A+ and Network+ classes (and eventually Security+, and the Microsoft client and server classes at the time). fwiw: I taught Network+ so many times during those 6 years that I have parts of the book memorized.

    Lessons learned …

    Before I started teaching I had spent 15 years “in the field” – which means I had done the job the students were learning. I was a “computer industry professional teaching adults changing careers how to be ‘computer industry professionals’”

    My FIRST “a ha!” moment was that I was “learning” along with the students. The students were (hopefully) going from “entry level” to “professional” and I was going from “working professional” to “whatever comes next.”

    Knowing “how” to do something will get you a job, but knowing “why” something works is required for “mastery.”

    fwiw: I think this same idea applied to “diagramming sentences” in middle school – to use the language properly it helps to understand what each part does. The fact I don’t remember how to diagram a sentence doesn’t matter.

    The “computer networking” equivalent to “diagramming sentences” is learning the OSI model – i.e. not something you actually use in the real world, but a good way to learn the theory of “computer networking.”
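
For reference – the seven layers, bottom to top, with the usual “textbook” examples of what lives at each one (your mileage may vary on where some protocols get filed):

# the seven OSI layers, bottom to top, with the usual textbook examples
osi_layers = {
    1: ("Physical",     "cables, voltages, radio"),
    2: ("Data Link",    "Ethernet frames, MAC addresses, Wi-Fi"),
    3: ("Network",      "IP addresses, routers"),
    4: ("Transport",    "TCP/UDP, port numbers"),
    5: ("Session",      "setting up and tearing down conversations"),
    6: ("Presentation", "encoding, compression, encryption"),
    7: ("Application",  "HTTP, SMTP, DNS"),
}

for number, (name, examples) in osi_layers.items():
    print(f"Layer {number}: {name:<12} e.g. {examples}")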

    When I started teaching I was probably at level 7.5 of 10 on my “OSI model” comprehension – after teaching for 6 years I was at a level 9.5 of 10 (10 of 10 would involve having things deeply committed to memory which I do not). All of which is completely useless outside of a classroom …

Of course most students were coming into the networking class with a “0 of 10” understanding of the OSI model BUT had probably set up their home network/Wi-Fi.

    The same as above applies to my understanding of “TCP/IP networking” and “Cyber Security” in general.

    Book Learning …

I jumped ship at the “for profit school” where I was teaching in 2015 for a number of reasons. MOSTLY it was because of “organizational issues.” I always enjoyed teaching/working with students, but the “writing was on the wall” so to speak.

    I had moved from “adjunct instructor” to “full time director” – but it was painfully obvious I didn’t have a future with the organization. e.g. During my 6 years with the organization we had 4 “campus directors” and 5 “regional directors” — and most of those were “replaced” for reasons OTHER than “promotion.”

What the “powers that be” were most concerned with was “enrollment numbers” – not education. I appreciate the business side – but when “educated professionals” (i.e. the faculty) are treated like “itinerant labor”, well, the “writing is on the wall.”

    In 2014 “the school” spent a lot of money setting up fiber optic connections and a “teleconferencing room” — which they assured the faculty was for OUR benefit.

    Ok, reality check – yes I understand that “instructors” were their biggest expense. I dealt with other “small colleges” in the last 9 years that were trying to get by with fewer and fewer “full time faculty” – SOME of them ran into “accreditation problems” because of an over reliance on “adjuncts” – I’m not criticizing so much as explaining what the “writing on the wall” said …

oh, and that writing was probably also saying “get a PhD if you want a full time teaching position” — if the “school” had paid me to continue my education or even just to keep my skills up to date, I might have been interested in staying longer.

Just in general – an organization’s “employees” are either their “biggest asset” OR their “biggest fixed cost.” From an accounting standpoint both are (probably) true (unless you are an “Ivy League” school with a huge endowment). From an “administration” point of view dealing with faculty as “asset” or “fixed cost” says a LOT about the organization — after 6 years it was VERY clear that the “for profit” school looked at instructors as “expensive necessary evils.”

COVID-19 was the last straw for the campus where I worked. The school still exists but appears to be totally “online” –

    Out of the frying pan …

    I left “for profit school” to go to teach at a “tech bootcamp” — which was jumping from “bad situation to worse situation.”

    The fact I was commuting an hour and a half and was becoming more and more aware of chronic pain in my leg certainly didn’t help.

    fwiw: I will tell anyone that asks that a $20 foam roller changed my life — e.g. “self myofascial release” has general fitness applications.

I was also a certified “strength and conditioning” professional (CSCS) in a different life – so I had a long history of trying to figure out “why I had chronic pain down the side of my leg” – when there was no indication of injury/limit on range of motion.

    Oh, and the “root cause” was tied into that “long commute” – the human body isn’t designed for long periods of “inaction.” The body adapts to the demands/stress placed on it – so if it is “immobile” for long periods of time – it becomes better at being “immobile.” For me that ended up being a constant dull pain down my left leg.

    Being more active and five minutes with the foam roller after my “workout” keeps me relatively pain free (“it isn’t the years, it’s the mileage”).

ANYWAY – more itinerant level “teaching” gave me time to work on “new skills.”

    I started my “I.T. career” as a “pc repair technician.” The job of “personal computer technician” is going (has gone?) the way of “television repair.”

    Which isn’t good or bad – e.g. “personal computers” aren’t going away anymore than “televisions” have gone away. BUT if you paid “$X” for something you aren’t going to pay “$X” to have it repaired – this is just the old “fix” vs “replace” idea.

    The cell phone as 21st Century “dumb terminal” is becoming reality. BUT the “personal computer” is a general purpose device that can be “office work” machine, “gaming” machine, “audiovisual content creation” machine, or “whatever someone can program it to do” machine. The “primary communication device” might be a cell phone, but there are things a cell phone just doesn’t do very well …

    Meanwhile …

    I updated my “tech skill set” from “A+ Certified PC repair tech” to “networking technician” in the 1990s. Being able to make Cat 5/6 twisted pair patch cables still comes in handy when I’m working on the home network but no one has asked me to install a Novell Netware server recently (or Windows Active Directory for that matter).

    Back before the “world wide web” stand alone applications were the flavor of the week. e.g. If you bought a new PC in 1990 it probably came with an integrated “modem” but not a “network card.” That new PC in 1990 probably also came with some form of “office” software – providing word processing and spreadsheet functions.

    Those “office” apps would have been “stand alone” instances – which needed to be installed and maintained individually on each PC.

    Back in 1990 that application might have been written in C or C++. I taught myself “introductory programming” using Pascal mostly because “Turbo Pascal” came packaged with tools to create “windows” and mouse control. “Pascal” was designed as a “learning language” so it was a little less threatening than C/C++ back in the day …

    random thought: If you wanted “graphical user interface” (GUI) functionality in 1990 you had to write it yourself. One of the big deals with “Microsoft Windows” was that it provided a uniform platform for developers – i.e. developers didn’t have to worry about writing the “GUI operating system hooks” they could just reference the Windows OS.

Apple Computers also had “developers” for their OS – but philosophically “Apple Computers” sold “hardware with an operating system included” while Microsoft sold “an operating system that would run on x86 hardware” – since x86 hardware was kind of a commodity (read that as MUCH less expensive than “Apple Computers”). The “IBM PC” story ended up making Microsoft, Inc a lot of money — and it made for a fun documentary to show students bored of listening to me lecture …

    What users care about is applications/”getting work done” not the underlying operating system. Microsoft also understood the importance of developers creating applications for their platform.

    fwiw: “Microsoft, Inc” started out selling programming/development tools and “backed into” the OS market – which is a different story.

    A lot of “business reference applications” in the early 1990s looked like Microsoft Encarta — they had a “user interface” providing access to a “local database.” — again, one machine, one user at a time, one application.

    N-tier

    Originally the “PC” was called a “micro computer” – the fact that it was self contained/stand alone was a positive selling point. BEFORE the “PC” a larger organization might have had a “terminal” system where a “dumb terminal” allowed access to a “mainframe”/mini computer.

SO when the “world wide web” happened and “client server” computing became mainstream, the “N tier” computing model became popular as a concept.

N-tier might be the “presentation” layer/web server, the “business logic” layer/a programming language, and then the “data” layer/a database management system.
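
A compressed sketch of that split in Python – all three tiers crammed into one file just to show the division of labor (in a real deployment each tier would be its own server/service, and the table and data here are made up):

# a compressed "N tier" sketch -- all three tiers in one file just to show the split
import sqlite3

# data tier: a throwaway in-memory database
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, price REAL)")
db.execute("INSERT INTO products VALUES ('store brand cereal', 2.50), ('brand name cereal', 7.50)")

# business logic tier: the rules live here, not in the UI and not in the database
def cheapest_product():
    name, price = db.execute("SELECT name, price FROM products ORDER BY price LIMIT 1").fetchone()
    return {"name": name, "price": price}

# presentation tier: whatever faces the user (normally a web page, here just text)
def render(product):
    return f"Budget pick: {product['name']} at ${product['price']:.2f}"

print(render(cheapest_product()))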

    Full Stack Developer

    In the 21st Century “stand alone” applications are the exception – and “web applications” the standard.

    Note that applications that allow you to download and install files on a personal computer are better called “subscription verification” applications rather than “N Tier.”

    e.g. Adobe allows folks to download their “Creative Suite” and run the applications on local machines using computing resources from the local machine – BUT when the application starts it verifies that the user has a valid subscription.

    An “N tier” application doesn’t get installed locally – think Instagram or X/Twitter …

    For most “business applications” designing an “N tier” app using “web technologies” is a workable long term solution.

    When we divided the application functionality the “developer” job also differentiated – “front end” for the user facing aspects and “back end” for the database/logic aspects.

    The actual tools/technologies continue to develop – in “general” the “front end” will involve HTML/CSS/JavaScript and the “back end” involves a combination of “server language” and “database management system.”

    Languages

Java (the language maintained by Oracle, not “JavaScript,” which is also known as ECMAScript) has provided “full stack development” tools for almost 30 years. The future of Java is tied into Oracle, Inc but neither is gonna be “obsolete” anytime soon.

    BUT if someone is competent with Java – then they will describe themselves as a “Java developer” – Oracle has respected industry certifications

    I am NOT a “Java developer” – but I don’t come to “bury Java” – if you are a computer science major looking to go work for “large corporation” then learning Java (and picking up a Java certification) is worth your time.

Microsoft never stopped making “developer tools” – “Visual Studio” is still their flagship product BUT Visual Studio Code is my “go to” (free, multi-platform) programming editor in 2024.

    Of course Microsoft wants developers to develop “Azure applications” in 2024 – C# provides easy access to a lot of those “full stack” features.

    … and I am ALSO not a C# programmer – but there are a lot of C# jobs out there as well (I see C# and other Microsoft ‘full stack’ tech specifically mentioned with Major League Baseball ‘analytics’ jobs and the NFL – so I’m sure the “larger corporate” world has also embraced them)

JavaScript on the server side has also become popular – Node.js — so it is possible to use JavaScript on the front and back end of an application. Opportunities abound.

    My first exposure to “server side” programming was PHP – I had read some “C” programming books before stumbling upon PHP, and my first thought was that it looked a lot like “C” – but then MOST computer languages look a lot like “C.”

    PHP tends to be the “P” part of the LAMP stack acronym (“Linux OS, Apache web server, MySQL database, and PHP scripting language”).

    Laravel as a framework is popular in 2024 …

    … for what it is worth MOST of the “web” is probably powered by a combination of JavaScript and PHP – but a lot of the folks using PHP are unaware they are using PHP, i.e. 40%+ of the web is “powered by WordPress.”

    I’ve installed the LAMP stack more times than I can remember – but I don’t do much with PHP except keep it updated … but again, opportunities abound

Python on the other hand is where I spend a lot of time – I find Django a little irritating, but it is popular. I prefer Flask or Pyramid for the “back end” and then select a JavaScript front end as needed.

    e.g. since I prefer “simplicity” I used “mustache” for template presentation with my “Dad joke” and “Ancient Quote” demo applications
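
Those demo apps aren’t reproduced here – but the general shape is something like this minimal Flask “quote app” sketch (it uses Flask’s built-in Jinja templating instead of mustache, just to keep it to one dependency):

# a minimal "quote app" sketch -- same general idea as the demos above, not the actual code
import random
from flask import Flask, render_template_string

app = Flask(__name__)

QUOTES = [
    "Buy low and sell high.",
    "It isn't the years, it's the mileage.",
    "To err is human; to really foul things up requires a computer.",
]

PAGE = "<h1>Quote of the moment</h1><p>{{ quote }}</p>"

@app.route("/")
def quote():
    return render_template_string(PAGE, quote=random.choice(QUOTES))

if __name__ == "__main__":
    app.run(debug=True)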

    Python was invented with “ease of learning” as a goal – and for the most part it succeeds. The fact that it can also do everything I need it to do (and more) is also nice 😉 – and yes, jobs, jobs, jobs …

    Databases

IBM Db2, Oracle, and Microsoft SQL Server are in the category of “database management system royalty” – obviously they have a vast installation base and “large corporate” customers galore. The folks in charge of those systems tend to call themselves “database managers.” Those database managers probably work with a team of Java developers …

    At the other end of the spectrum the open source project MySQL was “acquired” by Sun Microsystems in 2008 which was then acquired by Oracle in 2010. Both “MySQL” and “Oracle” are popular database system back ends.

MySQL is an open source project that has been “forked” into MariaDB (maintained by the MariaDB Foundation).

    PostgreSQL is a little more “enterprise database” like – also a popular open source project.

    MongoDB has become popular and is part of its own “full stack” acronym MEAN (MongoDB, Express, Angular, and Node) – MongoDB is a “NoSQL” database which means it is “philosophically” different than the other databases mentioned – making it a great choice for some applications, and not so great for other applications.
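
The “philosophical” difference in miniature (no actual MongoDB here – just the shape of the data, with made-up records): relational rows share a fixed schema, while a document database stores self-describing documents that don’t have to match each other.

# relational: a fixed schema, every row has the same columns
import sqlite3, json

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
db.execute("INSERT INTO users VALUES (1, 'Ada', 'ada@example.com')")

# document style (the MongoDB way): each record is a self-describing document,
# and two documents in the same collection don't have to share a structure
user_doc = {
    "_id": 1,
    "name": "Ada",
    "email": "ada@example.com",
    "preferences": {"theme": "dark", "newsletters": ["westerns", "databases"]},
}
print(json.dumps(user_doc, indent=2))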

    To be honest I’m not REALLY sure if there is a big performance difference between database management back ends. Hardware and storage space are going to matter much more than the database engine itself.

“Big Corporate Enterprise Computing” users aren’t as concerned with the price of the database system; they want rock solid dependability – if there was a Mount Rushmore of database management systems, Db2, Oracle, and Microsoft SQL Server would be on it …

    … but MariaDB is a good choice for most projects – easy to install, not terribly complicated to use. There is even a nice web front end – phpMyAdmin

    I’m not sure if the term “full stack developer” is gonna stick around though. Designing an easy to use “user interface” is not “easy” to do. Designing (and maintaining) a high performing database back end is also not trivial. There will always be room for specialists.

    “Generalist developer” sounds less “techy” than “full stack developer” – but my guess is that the “full stack” part is going to become superfluous …

  • To REALLY mess things up …

    SO, I tried to change “permalinks” in WordPress and ALL the links broke.

    I’ve been using WordPress for years – but to be honest I’ve never tried to do anything “complicated” (i.e. beyond the “content management” for which WordPress is designed).

Of course this “blog” thing isn’t making me any $$ so I don’t put a lot of effort into WordPress “customization” – i.e. it doesn’t REALLY matter what the “permalinks” look like.

“Optimized URLs” used to be a “search engine optimization” (SEO) thing (well, it probably still is an SEO thing) — so I’m not saying that “permalink structure” isn’t important. I’m just pointing out that I haven’t had a reason to change it from the default.

    And Then…

    Like I said, WordPress is great for the occasional “blog” posting – but then I wanted to do some “web 1.0” type file linking – and, well, WordPress ain’t built for that.

    Yes, there are various plugins – and I got it to work. AND THEN —

    I should also mention that I’ve tried launching various “Facebook pages” over the years. One is Old Time Westerns.

    Now, Facebook as a platform wasn’t real sure what “pages” were for – my opinion is that they were basically TRYING to create a “walled garden” to keep users on Facebook – and then of course users see more Facebook ads.

    No, I am NOT criticizing Facebook for offering new services trying to keep people on Facebook — but “Facebook pages 1.0” weren’t particularly useful for “page creators.” In fact Facebook wanted (wants) page creators to PAY to “boost posts” — which functionally means NOTHING goes “organically viral” on Facebook.

    Again, I’m also NOT criticizing Facebook for wanting to make $$ – but no, I’m not going to PAY for the privilege of doing the work of creating a community on a platform, which can decide to kick me off whenever they like.

    Did I mention …

    … I have the required skills to do the “web publishing” thing – so for not much $ I can just setup my own servers and have much more control over anything/everything.

    SO the motivation behind the “Westerns” page was more about me getting in my “amateur historian” exercise than about building a community.

Ok, sure, I would love to connect with people with the same interests – which is one of those things the “web” has been great at from the “early days.” Notice that I didn’t say “Facebook” is great at finding people of common interests, but the Internet/Web is.

    Facebook is great to “reconnect” with people you once knew or have met – but not so good at “connecting” new people with a common interest.

    Hey, if you are “company” selling “product” and you have a marketing budget – then Facebook can help you find new customers. If you are “hobbyist” looking for other “hobbyists” – well, not so much.

Yes, Facebook can be a tool for that group of “hobbyists” – but unless you have a “marketing budget” don’t expect to “organically” grow your member list from being on Facebook.

fwiw: “Facebook pages 2.0” has become “groups” or something – Wikipedia tells me Yahoo! pulled the plug on “Yahoo! Groups” in 2020. The “fun fact” is that the whole “groups” concept predates the “web” – that sort of “bulletin board” functionality goes back to the late 1970s/early 1980s. Remember the movie WarGames (1983)? That was what he was “dialing into.”

    ANYWAY …

I have various “example” sites out there – I’ve pointed out that WordPress does some things very well – but doesn’t do other things well.

    Yes, you could “extend” WordPress if you like – but it isn’t always the “right tool for the job.”

    SO “data driven example” https://www.iterudio.com/us —

    small “progressive web app”: https://media.iterudio.com/j/

    Another “data driven example” – but this time I was trying to create a “daily reading app” from a few of the “wisdom books”: https://clancameron.us/bible/

    A “quote app”: https://clancameron.us/quotes/

AND then the latest – which is just JavaScript and CSS: https://www.iterudio.com/westerns/

    The original plan was to just create some “pages” within WordPress – and I wanted the URL to be “page name” — which is why I was trying to change the “permalinks” within WordPress.

My guess is that the problem has to do with the fact that the “uniform resource locator” (URL) on my server gets “rewritten” before it hits the WordPress “permalink” module – which then tries to rewrite it again. The error I was getting seems to be common – and I tried the common solutions to no avail (and most potential solutions just made the problem worse).

    To err is human; To really foul things up requires a computer.

    Anonymous

  • Simple Fitness part 2 – the interval trainer

    Google tells me that the “fitness industry” was forecast to pass $32 billion in 2022. Which means that “personal fitness” is more than a New Year’s resolution for a large number of people.

    Elite Athletes

    “Exercise Science” has become a more rigorous academic discipline than the old “physical education” catch all. My guess (100% me guessing – just my opinion) is that most “high schools” now have a “strength and conditioning” coach of some kind – at smaller schools it might be a part-time supplemental job held by a teacher/coach of another sport (probably football).

    All of which means that there is a vast amount of “information” out there. If you are an “elite athlete” or if you are responsible for training “elite athletes” there are a lot of factors to consider when designing a “training program” for competition. Much of that information is “sport specific” — e.g. training for “golfers” is much different than training for “marathon runners”.

    The days of athletes “reporting to training camp” and “getting into shape” DURING “training camp” are long gone. The average “elite” athlete probably treats their sport as a year round obligation – and might spend hours everyday “working out” in the off-season to prepare.

    General Fitness

    But wait – this isn’t an article about “elite athlete training.”

    A large amount of research has been done confirming that a “sedentary lifestyle” is actually a health risk. The good news is that recommendations for “exercise for general health” haven’t changed much.

    It would be “best” to get 30 minutes of low to moderate exertion level exercise most days of the week. The exercise doesn’t have to come in one continuous 30 minute period – again the “best” option would probably be multiple 10 minute periods of exercise spaced out over the day.

    Which means if you work in an office building and can make the walk from “car” to “office” take 10 minutes (park at the end of the parking lot, take the stairs) – that would have SOME health benefits — but that is just a made up example, not a recommendation.

    If you are sitting in front of a computer all day – then you should (probably) also stand up and move around a couple minutes each hour. Again, your situation will vary.

    Interval Training

    If you hate to exercise (or if you have trouble finding the time to exercise), but recognize that you “should” exercise – “interval training” might be a good option.

    The idea of “interval training” is that you alternate periods of “high exertion” with periods of “low exertion.”

    Runners might be familiar with the idea of “fartlek training” (Swedish for “speed play”) – where periods of “faster” running are alternated with periods of “slower” running. Google tells me the practice goes back to the 1930’s – and I’m going to guess that MOST “competitive” runners are familiar with the concept.

From a practical point of view the “problem” becomes keeping track of “work” and “relief” times.

    With a “fartlek” run in the U.S. you might be able to alternate sprints and jogging between utility poles — assuming your running path has “utility poles.”

In a “gymnasium” environment “circuit training” becomes an option — e.g. 20 second “work” times followed by 10 second “relief” times (when exercises could be changed if using resistance training or calisthenics).

    Personally I get bored doing the same routine, don’t really want to go to a “gym”, have an abundance of old computers, and some “coding skills.” SO I wrote the little application below.

    Download

[Screenshots: Interval Trainer start screen, “Select Workout”, workout selected, and a workout started with a 1 minute “warm-up”]

    Since I designed the application of course it seems “obvious” to me — just a simple countdown timer combined with “work” and “rest” intervals.

Specific “work” and “rest” periods can be entered — e.g. if you wanted to do a “boxing gym” workout you could set the “interval” count to 15, “Work Time” to 3, and then “Rest Time” to 1 – and you would get 1 hour’s worth of “rounds.”

    The very generic “General Fitness” workout is 5 intervals consisting of 1 minute of “work” and 2 minutes of “active rest” periods — there is a “clacking sound” at 10 seconds remaining and a “bell sound” between periods.
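
The download is a small GUI application – but the timing logic boils down to something like this stripped-down command-line sketch (it just prints where the real thing plays the clack/bell sounds, and the defaults are the “General Fitness” numbers above):

# stripped-down sketch of the interval logic -- prints instead of playing sounds
import time

def countdown(label, seconds):
    for remaining in range(seconds, 0, -1):
        if remaining == 10:
            print("  *clack* 10 seconds left")
        time.sleep(1)
    print(f"  *bell* end of {label}")

def interval_workout(intervals=5, work=60, rest=120, warmup=60):
    countdown("warm-up", warmup)
    for i in range(1, intervals + 1):
        print(f"Interval {i} of {intervals}: WORK")
        countdown("work", work)
        print(f"Interval {i} of {intervals}: active rest")
        countdown("active rest", rest)
    print("Workout complete")

interval_workout()   # the "General Fitness" workout: 5 x (1 minute work / 2 minutes active rest)
# the "boxing gym" example would be: interval_workout(intervals=15, work=180, rest=60)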

    Exercises

    I like using an exercise bike or a “step” for my intervals – but you can do whatever exercise you like. e.g. Jumping rope or “burpees” would also be good options.

    For “beginners” doing calisthenics for 1 minute is probably not realistic – but it would be a good workout for a college wrestling team.

    You will get more out of the workout if you “walk around” during the “Active Rest” period.

    Core Strength

    There is a “20 second work/10 second rest” option under “Select Workout” – which is a good example for a “planking” type exercise for “core strength”/calisthenics intervals.

    e.g. As an “ex-athlete” over a certain age – the 20/10 intervals are surprisingly tough. But again “currently a competitive athlete” could start with the same workout – they would just get more repetitions done in the same amount of time (and would recover faster).

    If you are looking for something tougher/more challenging – there are a lot of “High Intensity bodyweight” exercise routines out there on the interweb – but again, be careful. Going too slow at the start is MUCH better than “jumping in head first” and getting injured …

    Simple – not easy

    If you do the General Fitness intervals three days a week (ideally with a day in between workout days – e.g. Sunday – Tuesday – Thursday, or Monday – Wednesday – Friday) and then some 20/10 “planks” for core strength (or do push-ups for 20/10 intervals) that is a “not bad” beginner workout.

    Do that workout for six weeks and then maybe think about upping the “intensity” – or start doing the workout 4 or 5 days a week.

    Coaches

    I wrote this application for myself – and it could obviously be improved. I could add a “save custom workout” option with a little effort if there is an interest.

    From MY point of view “coaches”/personal trainers are the folks that would find a “save custom workout” option useful — and there would be “time and effort” involved.

    Download

The download has been tested on 64 bit versions of Microsoft Windows. I have a “Mac mini” so compiling a macOS version might be an option (if someone actually needs it). Same idea for Linux …

    Download Here

  • authentication, least privilege, and zero trust

    When we are discussing “network security” phrases like “authentication”, “least privilege”, and “zero trust” tend to come up. The three terms are related, and can be easily confused.

I’ve been in “I.T.” for a while (since the late 1980s) – I’ve gone from an “in the field professional” to “network technician” to “the computer guy” and now to “white bearded instructor.”

    Occasionally I’ve listened to other “I.T. professionals” struggle trying to explain the above concepts – and as I mentioned, they are easy to confuse.

    Part of my job was teaching “network security” BEFORE this whole “cyber-security” thing became a buzzword. I’ve also had the luxury of “time” as well as the opportunity/obligation to explain the concepts to “non I.T. professionals” in “non technical jargon.”

    With that said, I’m sure I will get something not 100% correct. The terms are not carved in stone – and “marketing speak” can change usage. SO in generic, non-technical jargon, here we go …

    Security

First, perfect security is always an illusion. No I’m not being pessimistic – as human beings we can never be 100% secure because it is simply not possible to have 100% of the “essential information.”

    SO we talk in terms of “risk” and “vulnerabilities.” From a practical point of view we have a “sliding scale” with “convenience and usability” on one end and “security” on the other. e.g. “something” that is “convenient” and “easy to use”, isn’t going to be “secure.” If we enclose the “something” in a steel cage, surround the steel cage with concrete, and bury the concrete block 100 feet in the ground, it is much more “secure” – but almost impossible to use.

    All of which means that trying to make a “something” usable and reasonably secure requires some tradeoffs.

    Computer Network Security

    Securing a “computer” used to mean “locking the doors of the computer room.” The whole idea of “remote access” obviously requires a means of accessing the computer remotely — which is “computer networking” in a nutshell.

    The “physical” part of computer networking isn’t fundamentally different from the telegraph. Dots and dashes sent over the wire from one “operator” to another have been replaced with high and low voltages representing 1’s and 0’s and “encapsulated data” arranged in frames/packets forwarded from one router to another — but it is still about sending a “message” from one point to another.

    With the old telegraph the service was easy to disrupt – just cut the wire (a 19th century “denial of service” attack). Security of the telegraph message involved trusting the telegraph operators OR sending an “encrypted message” that the legitimate recipient of the message could “un-encrypt.”

    Modern computer networking approached the “message security” problem in the same way. The “message” (i.e. “data”) must be secured so that only the legitimate recipients have access.

    There are a multitude of possible modern technological solutions – which is obviously why “network administration” and “cyber-security” have become career fields — so I’m not going into specific technologies here.

    The “generic” method starts with “authentication” of the “recipient” (i.e. “user”).

    Authentication

    Our (imaginary) 19th Century telegraph operator didn’t have a lot of available options to verify someone was who they said they were. The operator might receive a message and then have to wait for someone to come to the telegraph office and ask for the message.

    If our operator in New Orleans receives a message for “Mr Smith from Chicago” – he has to wait until someone comes in asking for a telegraph for “Mr Smith from Chicago.” Of course the operator had no way of verifying that the person asking for the message was ACTUALLY “Mr Smith from Chicago” and not “Mr Jones from Atlanta” who was stealing the message.

    In modern computer networking this problem is what we call “authentication.”

    If our imaginary telegraph included a message to the operator that “Mr Smith from Chicago” would be wearing a blue suit, is 6 feet tall, and will spit on the ground and turn around 3 times after asking for the message — then our operator has a method of verifying/”identifying” “Mr Smith from Chicago” and then “authenticating” him as the legitimate recipient.

    Least Privilege

    For the next concept we will leave the telegraph behind – and imagine we are going to a “popular music concert.”

    Imagine that we have purchased tickets to see “big name act” and the concert promoters are holding our tickets at the “will call” window.

    Our imaginary concert has multiple levels of seating – some seats close to the stage, some seats further away, some “seats” involve sitting on a grassy hill, and some “seats” are “all access Very Important Person.”

    On the day of the concert we go to the “will call” window and present our identification (e.g. drivers license, state issued ID card, credit card, etc) – the friendly attendant examines our individual identification (i.e. we get “authenticated”) and then gives us each a “concert access pass” on a lanyard (1 each) that we are supposed to hang around our necks.

    Next we go to the arena gate and present our “pass” to the friendly security guard. The guard examines the pass and allows us access BASED on the pass.

Personally I dislike large crowds – so MY “pass” only gives me access to the grassy area far away from the stage. Someone else might love dancing in the crowd all night, so their “pass” gives them access to the area much closer to the stage (where no one will sit down all night). If “big recording executive” shows up, their “pass” might give them access to the entire facility.

    Distinguishing what we are allowed to do/where we are allowed to go is called “authorization.”

First we got “authenticated” and then we were given a certain level of “authorized” access.

    Now, assume that I get lonely sitting up there on the hill – and try to sneak down to the floor level seats where all the cool kids are dancing. If the venue provider has some “no nonsense, shaved head” security guards controlling access to the “cool kids” area – then those guards (inside the venue) will check my pass and deny me entry.

    That concept of “only allowing ‘pass holders’ to go/do specifically where/what they are authorized to go/do” could be called “least privilege.”

    Notice that ensuring “least privilege” takes some additional planning on the part of the “venue provider.”

    First we authenticate users, then we authorize users to do something. “Least privilege” is attained when users can ONLY do what they NEED to do based on an assessment of their “required duties.”
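
To put the concert analogy in code – a toy sketch (the pass levels and areas are made up for illustration): authentication happens at the “will call” window, authorization is the pass itself, and “least privilege” means the pass lists ONLY the areas that person actually needs.

# toy version of the concert analogy -- pass levels and areas are made up
PASS_AREAS = {
    "lawn":  {"grassy hill"},
    "floor": {"grassy hill", "floor"},
    "vip":   {"grassy hill", "floor", "vip lounge"},
}

def issue_pass(name, id_checks_out, level):
    """The "will call" window: authenticate first, then hand out a pass (authorization)."""
    if not id_checks_out:
        raise PermissionError(f"{name}: identification rejected")
    return {"holder": name, "level": level}

def allowed(concert_pass, area):
    """A guard at each gate: least privilege means the pass covers ONLY what is needed."""
    return area in PASS_AREAS[concert_pass["level"]]

my_pass = issue_pass("me", id_checks_out=True, level="lawn")
print(allowed(my_pass, "grassy hill"))   # True
print(allowed(my_pass, "floor"))         # False -- no sneaking down with the cool kids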

    Zero Trust

We come back around to the idea that “security” is a process and not an “end product” with the “new” idea of “zero trust.” Well, “new” as in “increased in popularity.”

Experienced “network security professionals” will often talk about “assuming that the network has been compromised.” This “assumption of breach” is really what “zero trust” is concerned with.

    It might sound pessimistic to “assume a network breach” – but it implies that we need to be looking for “intruders” INSIDE the area that we have secured.

Imagine a “secret agent movie” where the “secret agent” infiltrates the “super villain’s” lair by breaching the perimeter defense, then enters the main house through the roof. Since the “super villain” is having a big party for some reason, our “secret agent” puts on a tuxedo and pretends to be a party guest.

    Of course the super villain’s “henchmen” aren’t looking for intruders INSIDE the mansion that look like party guests – so the “secret agent” is free to collect/gather intelligence about the super villain’s master plan and escape without notice.

    OR to extend the “concert” analogy – the security guards aren’t checking “passes” of individuals within the “VIP area.” If someone steals/impersonates a “VIP pass” then they are free to move around the “VIP area.”

    The simplest method for an “attacker” would be to acquire a “lower access” pass, and then try to get a “higher level” pass

    Again – we start off with good authentication, have established least privilege, and the next step is checking users privileges each time they try to do ANYTHING.

    In the “concert” analogy, the “user pass” grants access to a specific area. BUT we are only checking “user credentials” when they try to move from one area to another. To achieve “zero trust” we need to do all of the above AND we assume that there has been a security breach – so we are checking “passes” on a continual basis.

    This is where the distinction between “authentication and least privilege” and “zero trust” can be hard to perceive.

    e.g. In our concert analogy – imagine that there is a “private bar” in the VIP area. If we ASSUME that a user should have access to the “private bar” because they are in the VIP area, that is NOT “zero trust.” If users have to authenticate themselves each time they go to the private bar – then that could be “zero trust.” We are guarding against the possibility that someone managed to breach the other security measures.
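
A toy sketch of that last distinction (separate from the sketch above, and again with made-up pass data): under “zero trust” the bartender re-runs the full credential check on EVERY order, instead of assuming that anyone already inside the VIP area must be legitimate.

# toy "zero trust" sketch -- every action re-checks credentials, made-up pass data
VALID_VIP_PASSES = {
    "pass-123": "big recording executive",
    "pass-999": "a pass that was issued, then reported stolen mid-concert",
}
REVOKED = {"pass-999"}

def verify(pass_id):
    """Re-run the full check on every request -- never 'already inside, must be fine.'"""
    return pass_id in VALID_VIP_PASSES and pass_id not in REVOKED

def private_bar(pass_id):
    return "served" if verify(pass_id) else "denied -- pass re-checked at the bar"

print(private_bar("pass-123"))   # served
print(private_bar("pass-999"))   # denied, even though this person is already in the VIP area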

    Eternal vigilance

    If you have heard of “AAA” in regards to security – we have talked about the first two “A’s” (“Authentication”, and “Authorization”).

    Along with all of the above – we also need “auditing.”

    First we authenticate a user, THEN the user gets authorized to do something, and THEN we keep track of what the user does while they are in the system – which is usually called “auditing”.

    Of course what actions we will choose to “audit” requires some planning. If we audit EVERYTHING – then we will be swamped by “ordinary event” data. The “best practice” becomes “auditing” for the “unusual”/failure.

    e.g. if it is “normal” for users to login between the hours of 7:00AM and 6:00PM and we start seeing a lot of “failed login attempts” at 10:00PM – that probably means someone is doing something they shouldn’t.
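
A toy version of that exact example (the log records are made up – a real setup would pull these from syslog or a SIEM, not a hard-coded list):

# toy audit sketch: flag failed logins outside normal hours (made-up log records)
NORMAL_HOURS = range(7, 18)   # "normal" logins happen between 7:00 AM and 6:00 PM

events = [
    {"user": "alice", "hour": 9,  "result": "success"},
    {"user": "bob",   "hour": 22, "result": "failure"},
    {"user": "bob",   "hour": 22, "result": "failure"},
    {"user": "bob",   "hour": 23, "result": "failure"},
]

suspicious = [e for e in events
              if e["result"] == "failure" and e["hour"] not in NORMAL_HOURS]

print(f"{len(suspicious)} after-hours failed logins worth a closer look")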

    Deciding what you need to audit, how to gather the data, and where/when/how to analyze that data is a primary function of (what gets called) “cyber-security.”

    “Security” is always best thought of as a “process” not an “end state.” Something like “zero trust” requires constant authorization of users – ideally against multiple forms of authentication.

    Ideally intruders will be prevented from entering, BUT finding/detecting intrusion becomes essential.

    HOW to specifically achieve any of the above becomes a “it depends” situation requiring in depth analysis. Any plan is better than no planning at all, but the best plan will be tested and re-evaluated on a regular basis — which is obviously beyond the scope of this little story …

  • Brand value, Free Speech, and Twitter

    As a thought experiment – imagine you are in charge of a popular, world wide, “messaging” service – something like Twitter but don’t get distracted by a specific service name.

    Now assume that your goal is to provide ordinary folks with tools to communicate in the purest form of “free speech.” Of course if you want to stay around as a “going concern” then you will also need to generate revenue along the way — maybe not “obscene profits” but at least enough to “break even.”

    Step 1: Don’t recreate the wheel

    In 2022, if you wanted to create a “messaging system” for your business/community then there are plenty of available options.

    You could download the source code for Mastodon and setup your own private service if you wanted – but unless you have the required technical skills and have a VERY good reason (like a requirement for extreme privacy) that probably isn’t a good idea.

In 2022 you certainly wouldn’t bother to “develop your own” platform from scratch — yes, it is something that a group of motivated undergrads could do, and they would certainly learn a lot along the way, but they would be “reinventing the wheel.”

Now if the goal is “education” then going through the “wheel invention” process might be worthwhile. HOWEVER, if the goal is NOT education and/or existing services will meet your “messaging requirements” – then reinventing the wheel is just a waste of time.

    For a “new commercial startup” the big problem isn’t going to be “technology” – the problem will be getting noticed and then “scaling up.”

    Step 2: integrity

Ok, so now assume that our hypothetical messaging service has attracted a sizable user base. How do we go about ensuring that the folks posting messages are who they say they are – i.e. how do we ensure “user integrity?”

In an ideal world, users/companies could sign up as who they are – and that would be sufficient. But in the real world where there are malicious actors with a “motivation to deceive” for whatever reason – then steps need to be taken to make it harder for “malicious actors to practice maliciousness.”

    The problem here is that it is expensive (time and money) to verify user information. Again, in a perfect world you could trust users to “not be malicious.” With a large network you would still have “naming conflicts” but if “good faith” is the norm, then those issues would be ACCIDENTAL not malicious.

    Once again, in 2022 there are available options and “recreating the wheel” is not required.

    This time the “prior art” comes in the form of the registered trademark and good ol’ domain name system (DNS).

    Maybe we should take a step back and examine the limitations of “user identification.” Obviously you need some form of unique addressing for ANY network to function properly.

    quick example: “cell phone numbers” – each phone has a unique address (or a card installed in the phone with a unique address) so that when you enter in a certain set of digits, your call will be connected to that cell phone.

    Of course it is easy to “spoof the caller id” which simply illustrates our problem with malicious users again.

    Ok, now the problem is that those “unique user names” probably aren’t particularly elegant — e.g. forcing users to use names like user_2001,7653 wouldn’t be popular.

    If our hypothetical network is large enough then we have “real world” security/safety issues – so using personally identifiable information to login/post messages would be dangerous.

    Yes, we want user integrity. No, we don’t want to force users to use system generated names. No, we don’t want to put people in harm’s way. Yes, the goal is still “free speech with integrity” AND we still don’t want to reinvent the “authentication wheel.”

    Step 3: prior art

    The 2022 “paradigm shift” on usernames is that they are better thought of as “brand names.”

    The intentional practice of “brand management” has been a concern for the “big company” folks for a long time.

    However, this expanding of the “brand management” concept does draw attention to another problem. This problem is simply that a “one size fits all” approach to user management isn’t going to work.

    Just for fun – imagine that we decide to have three “levels” of registration:

    • level 1 is the fastest, easiest, and cheapest – provide a unique email address and agree to the TOS and you are in
    • level 2 requires additional verification of user identity, so it is a little slower than level 1, and will cost the user a fee of some kind
    • level 3 is for the traditional “big company enterprises” – they have a trademark, a registered domain name, and probably an existing brand ‘history.’ The slowest and most expensive, but then also the level with the most control over their brand name and ‘follower’ data

    The additional cost probably won’t be a factor for the “big company” — assuming they are getting a direct line to their ‘followers’/’customers’.

    Yes, there should probably be a “non profit”/gov’ment registration as well – which could be low cost (free) as well as “slow”.
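
    Just to make those tiers concrete – here is a minimal sketch (Python) of how the registration levels might be represented. The level names, fees, and verification requirements are hypothetical placeholders, NOT a real service’s pricing or policy.

    from dataclasses import dataclass

    # Hypothetical registration tiers - names, fees, and requirements are
    # illustrative only, not any real service's pricing or policy.
    @dataclass(frozen=True)
    class RegistrationLevel:
        name: str
        verification: str     # what the registrant must provide
        fee_usd: float        # 0 = free
        brand_controls: bool  # can reserve related names / access follower data

    LEVELS = {
        1: RegistrationLevel("individual", "unique email + TOS agreement", 0.0, False),
        2: RegistrationLevel("verified individual", "additional identity verification", 10.0, False),
        3: RegistrationLevel("enterprise", "registered trademark and/or domain name", 500.0, True),
        # a non-profit/government tier could slot in here: low cost (free) but "slow"
    }

    def describe(level: int) -> str:
        tier = LEVELS[level]
        return f"Level {level} ({tier.name}): requires {tier.verification}, fee ${tier.fee_usd:.2f}"

    for n in sorted(LEVELS):
        print(describe(n))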

    Anyone that remembers the early days of the “web” might remember when the domain choices were ‘.com’, ‘.edu’, ‘.net’, ‘.mil’, and ‘.org’ – with .com being for “commerce”, .edu for “education”, .net originally intended for “network infrastructure”, .mil for the military, and .org for “non profit organizations.”

    I think that originally .org was free of charge – but you had to prove that you were a non-profit. Obviously you needed to be an educational institution to get a .edu domain, and the “military” requirement for a .mil domain was exactly what it sounds like.

    Does it need to be pointed out that “.com” for commercial activity was why the “dot-com bubble/boom and bust” was called “dot-com”?

    Meanwhile, back at the ranch ….

    For individuals the concept was probably thought of as “personal integrity” – and hopefully that concept isn’t going away, i.e. we are just adding a thin veneer and calling it “personal branding.”

    Working in our hypothetical company’s favor is the fact that “big company brand management” has included registering domain names for the last 25+ years.

    Then add in that the modern media/intellectual property “prior art” consists of copyrights, trademarks, and patents. AND we (probably) already have a list of unacceptable words – e.g. assuming that profanity and racial slurs are not acceptable.

    SO just add a registered trademark and/or domain name check to the registration process.

    Prohibit anyone from level 1 or 2 from claiming a name that is on the “prohibited” list. Problem solved.

    It should be pointed out that this “enhanced registration” process would NOT change anyone’s ability to post content. Level 2 and 3 are not any “better” than level 1 – just “authenticated” at a higher level.

    If a “level 3 company” chooses not to use our service – their name is still protected. “Name squatting” should also be prohibited — e.g. if a level 3 company name is “tasty beverages, inc” then names like “T@sty beverages” or “aTasty Beverage” should be rejected – a simple regular expression test would probably suffice.
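
    Here is a rough sketch of what that check might look like in Python. The “protected” list and the look-alike character table are assumptions for illustration, and the sketch layers a fuzzy similarity test (difflib) on top of the simple regular expression normalization – a production system would need a much larger “confusable character” table.

    import difflib
    import re

    # Hypothetical protected names (registered trademarks / level 3 registrants).
    PROTECTED = {"tasty beverages", "tasty beverages, inc"}

    # Tiny look-alike character table - a real check would be far more thorough.
    SUBSTITUTIONS = str.maketrans({"@": "a", "0": "o", "1": "l", "3": "e", "$": "s"})

    def normalize(name: str) -> str:
        """Lowercase, map look-alike characters, collapse punctuation/whitespace."""
        name = name.lower().translate(SUBSTITUTIONS)
        return re.sub(r"[^a-z0-9]+", " ", name).strip()

    def is_squatting(candidate: str, threshold: float = 0.85) -> bool:
        """Reject names that normalize to, contain, or closely resemble a protected name."""
        cand = normalize(candidate)
        for prot in PROTECTED:
            prot_n = normalize(prot)
            if prot_n in cand or cand in prot_n:
                return True
            if difflib.SequenceMatcher(None, cand, prot_n).ratio() >= threshold:
                return True
        return False

    print(is_squatting("T@sty Beverages"))    # True - blocked
    print(is_squatting("aTasty Beverage"))    # True - blocked (fuzzy match)
    print(is_squatting("Totally Different"))  # False - allowed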

    The “level 3” registration could then have benefits like domain registration — i.e. “tasty beverages, inc” would be free to create other “tasty beverage” names …

    If you put together a comprehensive “registered trademark catalog” then you might have a viable product – the market is probably small (trademark lawyers?), but if you are creating a database for user registration purposes – selling access to that database wouldn’t be a big deal – but now I’m just rambling …

  • Random Thoughts about Technology in General and Linux distros in Particular

    A little history …

    In the 30+ years I’ve been a working “computers industry professional” I’ve done a lot of jobs, used a lot of software, and spent time teaching other folks how to be “computer professionals.”

    I’m also an “amateur historian” – i.e. I enjoy learning about “history” in general. I’ve had real “history teachers” point out that (in general) people are curious about “what happened before them.”

    Maybe this “historical curiosity” is one of the things that distinguishes “humans” from “less advanced” forms of life — e.g. yes, your dog loves you, and misses you when you are gone – but your dog probably isn’t overly concerned with how its ancestors lived (assuming that your dog has the ability to think in terms of “history” – but that isn’t the point).

    As part of “teaching” I tend to tell (relevant) stories about “how we got here” in terms of technology. Just like understanding human history can/should influence our understanding of “modern society” – understanding the “history of a technology” can/should influence/enhance “modern technology.”

    The Problem …

    There are multiple “problems of history” — which are not important at the moment. I’ll just point out the obvious fact that “history” is NOT a precise science.

    Unless you have actually witnessed “history” then you have to rely on second hand evidence. Even if you witnessed an event, you are limited by your ability to sense and comprehend events as they unfold.

    All of which is leading up to the fact that “this is the way I remember the story.” I’m not saying I am 100% correct and/or infallible – in fact I will certainly get something wrong if I go on long enough – any mistakes are mine and not intentional attempts to mislead 😉

    Hardware/Software

    Merriam-Webster tells me that “technology” is about “practical applications of knowledge.”

    random thought #1 – “technology” changes.

    “Cutting edge technology” becomes common and quickly taken for granted. The “Kansas City” scene from Oklahoma (1955) illustrates the point (“they’ve gone just about as far as they can go”).

    Merriam-Webster tells me that the term “high technology” was coined in 1969 referring to “advanced or sophisticated devices especially in the fields of electronics and computers.”

    If you are a ‘history buff” you might associate 1969 with the “race to the moon”/moon landing – so “high technology” equaled “space age.” If you are an old computer guy – 1969 might bring to mind the Unix Epoch – but in 2022 neither term is “high tech.”

    random thought #2 – “software”

    The term “hardware” in English dates back to the 15th Century. The term originally meant “things made of metal.” In 2022 the term refers to the “tangible”/physical components of a device – i.e. the parts we can actually touch and feel.

    I’ve taught the “intro to computer technology” class more times than I can remember. Early on in the class we distinguish between “computer hardware” and “computer software.”

    It turns out that the term “software” only goes back to 1958 – invented to refer to the parts of a computer system that are NOT hardware.

    The original definition could have referred to any “electronic system” – i.e. programs, procedures, and documentation.

    In 2022 – Merriam-Webster tells me that “software” is also used to refer to “audiovisual media” – which is new to me, but instantly makes sense …

    ANYWAY – “computer software” typically gets divided into two broad categories – “applications” and “operating systems” (OS or just “systems”).

    The “average non-computer professional” is probably unaware and/or indifferent to the distinction between “applications” and the OS. They can certainly tell you whether they use “Windows” or a “Mac” – so saying people are “unaware” probably isn’t as correct as saying “indifferent.”

    Software lets us do something useful with hardware

    an old textbook

    The average user has work to get done – and they don’t really care about the OS except to the point that it allows them to run applications and get something done.

    Once upon a time – when a new “computer hardware system” was designed, a new “operating system” would also be written specifically for the hardware. e.g. The Mythical Man-Month is required reading for anyone involved in management in general and “software development” in particular …

    Some “industry experts” have argued that Bill Gates’ biggest contribution to the “computer industry” was the idea that “software” could be/should be separate from “hardware.” While I don’t disagree – it would require a retelling of the “history of the personal computer” to really put the remark into context — I’m happy to re-tell the story, but it would require at least two beers – i.e. not here, not now

    In 2022 there are a handful of “popular operating systems” that also get divided into two groups – e.g. the “mobile OS” – Android, iOS, and the “desktop OS” Windows, macOS, and Linux

    The Android OS is the most installed OS if you are counting “devices.” Since Android is based on Linux – you COULD say that Linux is the most used OS, but we won’t worry about things like that.

    Apple’s iOS on the other hand is probably the most PROFITABLE OS. iOS is based on the “Berkeley Software Distribution” (BSD) – which is very much NOT Linux, but they share some code …

    Microsoft Windows still dominates the desktop. I will not be “bashing Windows” in any form – just pointing out that 90%+ of the “desktop” machines out there are running some version of Windows.

    The operating system that Apple includes with their personal computers in 2022 is also based on BSD. Apple declared themselves a “consumer electronics” company a long time ago — fun fact: the Beatles (yes, John, Paul, George, and Ringo – those “Beatles”) started a record company called “Apple” in 1968 – so when the two Steves (Jobs and Wozniak) wanted to call their new company “Apple Computers” they had to agree to stay out of the music business – AND we are moving on …

    On the “desktop” then Linux is the rounding error between Windows machines and Macs.

    What is holding back “Linux on the desktop?” Well, in 2022 the short answer is “applications” and more specifically “gaming.”

    You cannot gracefully run Microsoft Office, Avid, or the Adobe suite on a Linux based desktop. Yes, there are alternatives to those applications that perform wonderfully on Linux desktops – but that isn’t the point.

    e.g. that “intro to computers” class I taught used Microsoft Word and Excel for 50% of the class. If you want to edit audio/video “professionally” then you are (probably) using Avid or Adobe products (read the credits of the next “major Hollywood” movie you watch).

    Then the chicken and egg scenario pops up – i.e. a “big application developer” would (probably) release a Linux friendly version if more people used Linux on the desktop – but people don’t use Linux on the desktop because they can’t run all of the application software they want – so there is no Linux version of the application.

    Yes, I am aware of WINE – but it illustrates the problem much more than acts as a solution — and we are moving on …

    Linux Distros – a short history

    Note that “Linux in the server room” has been a runaway success story – so it is POSSIBLE that “Linux on the desktop” will gain popularity, but not likely anytime soon.

    Also worth pointing out — it is possible to run a “Microsoft free” enterprise — but if the goal is lowering the “total cost of ownership” then (in 2022) Microsoft still has a measurable advantage over any “100% Linux based” solution.

    If you are a “large enterprise” then the cost of the software isn’t your biggest concern – “support” is (probably) “large enterprise, Inc’s” largest single concern.

    fwiw: IBM and Red Hat are making progress on “enterprise level” administration tools – but in 2022 …

    ANYWAY – the “birthdate” for Linux is typically given as 1991.

    Under the category of “important technical distinction” I will mention that “Linux” is better described as the “kernel” for an OS and NOT an OS in and of itself.

    Think of Linux as the “engine” of a car – i.e. the engine isn’t the “car”, you need a lot of other systems working with and around the engine for the “car” to function.

    For the purpose of this article I will describe the combination of “Linux kernel + other operating system essentials” as a “Linux Distribution” or more commonly just “distro.” Ready? ok …

    1992 gave us Slackware. Patrick Volkerding started the “oldest surviving Linux distro” – which accounted for an 80 percent share of the “Linux” market until the mid-1990s.

    1992 – 1996 gave us SUSE Linux (now openSUSE) – founded by Thomas Fehr, Roland Dyroff, Burchard Steinbild, and Hubert Mantel. I tend to call SUSE “German Linux” – they were essentially selling the “German version of Slackware” on floppy disks until 1996.

    btw: the “modern Internet” would not exist as it is today without Linux in the server room. All of these “early Linux distros” had business models centered around “selling physical media.” Hey, download speeds were of the “dial-up” variety and you were paying “by the minute” in most of Europe – so “selling media” was a good business model …

    1993 – 1996 gave us the start of Debian – founded by Ian Murdock. The goal was a more “user friendly” Linux. The first “stable version” arrived in 1996 …

    1995 gave us Red Hat Linux — this distro was actually my “introduction to Linux.” I bought a book that had a copy of Red Hat Linux 5.something (I think) and did my first Linux install on an “old” PC PROBABLY around 2001.

    During the dotcom “boom and bust” a LOT of Linux companies went public. Back then it was “cool” to have a big runup in stock valuation on the first day of trading – so when Red Hat “went public” in 1999 they had the eighth-biggest first-day gain in the history of Wall Street.

    The run-up was a little manufactured (i.e. they didn’t release a lot of stock for purchase on the open market). My guess is that in 2022 the folks arranging the “IPO” would set a higher initial price or release more stock if they thought the offering was going to be extremely popular.

    Full disclosure – I never owned any Red Hat stock, but I was an “interested observer” simply because I was using their distro.

    Red Hat’s “corporate leadership” decided that the “selling physical media” business plan wasn’t a good long term strategy. Especially as “high speed Internet” access moved across the U.S.

    e.g. that “multi hour dial up download” is now an “under 10 minute iso download” – so I’d say the “corporate leadership” at Red Hat, Inc made the right decision.

    Around 2003 the Red Hat distro kind of “split” into “Red Hat Enterprise Linux” (RHEL – sold by subscription to an “enterprise software” market) and the “Fedora Project” (meant to be a testing ground for future versions of RHEL as well as the “latest and greatest” Linux distro).

    e.g. the Fedora project has a release target of every six months – current version 35. RHEL has a longer planned release AND support cycle – which is what “enterprise users” like – current version 9.

    btw – yes RHEL is still “open source” – what you get for your subscription is “regular updates from an approved/secure channel and support.” AlmaLinux and CentOS are both “clones” of RHEL – with CentOS being “sponsored” by Red Hat.

    IBM “acquired” Red Hat in 2019 – but nothing really changed on the “management” side of things. IBM has been active in the open source community for a long time – so my guess is that someone pointed out that a “healthy, independent Red Hat” is good for IBM’s bottom line in the present and future.

    ANYWAY – obviously Red Hat is a “subsidiary” of IBM – but I’m always surprised when “long time computer professionals” seem to be unaware of the connections between RHEL, Fedora Project, CentOS, and IBM (part of what motivated this post).

    Red Hat has positioned itself as “enterprise Linux” – but the battle for “consumer Linux” still has a lot of active competition. The Fedora project is very popular – but my “non enterprise distros of choice” are both based on Debian:

    Ubuntu (first release 2004) – “South African Internet mogul Mark Shuttleworth” gets credit for starting the distro. The idea was that Debian could be more “user friendly.” Occasionally I teach an “introduction to Linux” class, and the big differences between “Debian” and “Ubuntu” are noticeable – but very much in the “ease of use” category (i.e. “Ubuntu” is “easier” for new users to learn)

    I would have said that “Ubuntu” meant “community” (which I probably read somewhere) but the word is of ancient Zulu and Xhosa origin and more correctly gets translated “humanity to others.” Ubuntu has a planned release target of every six months — as well as a longer “long term support” (LTS) version.

    Linux Mint (first release 2008) – Clément Lefèbvre gets credit for this one. Technically Linux Mint describes itself as “Ubuntu based” – so of course Debian is “underneath the hood.” I first encountered Linux Mint from a reviewer that described it as the best Linux distro for people trying to not use Microsoft Windows.

    The differences between Mint and Ubuntu are cosmetic and also philosophical – i.e. Mint will install some “non open source” (but still free) software to improve “ease of use.”

    The beauty of “Linux” is that it can be “enterprise level big” software or it can be “boot from a flash drive” small. It can utilize modern hardware and GPUs or it can run on 20 year old machines. If you are looking for specific functionality, there might already be a distro doing that – or if you can’t find one, you can make your own.

  • Modern “basics” of I.T.

    Come my friends, let us reason together … (feel free to disagree, none of this is dogma)

    There are a couple of “truisms” that APPEAR to conflict –

    Truism 1:

    The more things change the more they stay the same.

    … and then …

    Truism 2:

    The only constant is change.

    Truism 1 seems to imply that “change” isn’t possible while Truism 2 seems to imply that “change” is the only possibility.

    There are multiple ways to reconcile these two statements – for TODAY I’m NOT referring to “differences in perspective.”

    Life is like a dogsled team. If you aren’t the lead dog, the scenery never changes.

    (Lewis Grizzard gets credit for ME hearing this, but he almost certainly didn’t say it first)

    Consider that we are currently travelling through space and the earth is rotating at roughly 1,000 miles per hour – but sitting in front of my computer writing this, I don’t perceive that movement. Both the dogsled and my relative lack of perceived motion are examples of “perspective” …

    Change

    HOWEVER, “different perspectives” or points of view isn’t what I want to talk about today.

    For today (just for fun) imagine that my two “change” truisms are referring to different types of change.

    Truism 1 is “big picture change” – e.g. “human nature”/immutable laws of the universe.

    Which means “yes, Virginia, there are absolutes.” Unless you can change the physical laws of the universe – it is not possible to go faster than the speed of light. Humanity has accumulated a large “knowledge base” but “humans” are NOT fundamentally different than they were 2,000 years ago. Better nutrition, better machines, more knowledge – but humanity isn’t much different.

    Truism 2 can be called “fashion“/style/”what the kids are doing these days” – “technology improvements” fall squarely into this category. There is a classic PlayStation 3 commercial that illustrates the point.

    Once upon a time:

    • mechanical pinball machines were “state of the art.”
    • The Atari 2600 was probably never “high tech” – but it was “affordable and ubiquitous” tech.
    • no one owned a “smartphone” before 1994 (the IBM Simon)
    • the “smartphone app era” didn’t start until Apple released the iPhone in 2007 (but credit for the first “App store” goes to someone else – maybe NTT DoCoMo?)

    SO fashion trends come and go – but the fundamental human needs being served by those fashion trends remain unchanged.

    What business are we in?

    Hopefully, it is obvious to everyone that it is important for leaders/management to understand the “purpose” of their organization.

    If someone is going to “lead” then they have to have a direction/destination. e.g. A tourist might hire a tour guide to “lead” them through interesting sites in a city. Wandering around aimlessly might be interesting for awhile – but could also be dangerous – i.e. the average tourist wants some guidance/direction/leadership.

    For that “guide”/leader to do their job they need knowledge of the city AND direction. If they have one OR the other (knowledge OR direction), then they will fail at their job.

    The same idea applies to any “organization.” If there is no “why”/direction/purpose for the organization then it is dying/failing – regardless of P&L.

    Consider the U.S. railroad system. At one point railroads were a huge part of the U.S. economy – the rail system opened up the western part of the continent and ended the “frontier.”

    However, a savvy railroad executive would have understood that people didn’t love railroads – what people valued was “transportation.”

    Just for fun – get out any map and look at the location of major cities. It doesn’t have to be a U.S. map.

    The point I’m working toward is that throughout human history, large settlements/cities have centered around water. Either ports to the ocean or next to riverways. Why? Well, obviously humans need water to live but also “transportation.”

    The problem with waterways is that going with the current is much easier than going against the current.

    SO this problem was solved first by “steam powered boats” and then railroads. The early railroads followed established waterways connecting established cities. Then as railroad technology matured towns were established as “railway stations” to provide services for the railroad.

    Even as the railroads became a major portion of the economy – it was NEVER about the “railroads” it was about “transportation”

    fwiw: then the automobile industry happened – once again, people don’t care so much about “cars,” what they want/need is “transportation”

    If you are thinking “what about ‘freight’ traffic” – well, this is another example of the tools matching the job. Long haul transportation of “heavy” items is still efficiently handled by railroads and barges – it is “passenger traffic” that moved on …

    We could do the same sort of exercise with newspapers – i.e. I love reading the morning paper, but the need being satisfied is “information” NOT a desire to just “read a physical newspaper”

    What does this have to do with I.T.?

    Well, it has always been more accurate to say that “information technology” is about “processing information” NOT about the “devices.”

    full disclosure: I’ve spent a lifetime in and around the “information technology” industry. FOR ME that started as working on “personal computers” then “computer networking”/LAN administration – and eventually I picked up an MBA with an “Information Management emphasis”.

    Which means I’ve witnessed the “devices” getting smaller, faster, and more affordable, as well as the “networked personal computer” becoming de rigueur. However, it has never been about “the box” – i.e. most organizations aren’t “technology companies” but every organization utilizes “technology” as part of their day to day existence …

    Big picture: The constant is that “good I.T. practices” are not about the technology.

    Backups

    When any I.T. professional says something like “good backups” solve/prevent a lot of problems it is essential to remember how a “good backup policy” functions.

    Back in the day folks would talk about a “grandfather/father/son” strategy – if you want to refer to it as “grandmother/mother/daughter” the idea is the same. At least three distinct backups – maybe a “once a month” complete backup that might be stored in a secure facility off-site, a “once a week” complete backup, and then daily backups that might be “differential.”
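
    As a small illustration (the exact schedule here is an assumption, not a recommendation) – deciding which “generation” a given day’s backup belongs to might look something like this in Python:

    import datetime

    def backup_tier(day: datetime.date) -> str:
        """Classify a date under a simple grandfather/father/son rotation.

        Assumed (illustrative) policy:
          1st of the month -> "grandfather" (full backup, stored off-site)
          Sunday           -> "father"      (full backup, kept on-site)
          any other day    -> "son"         (differential backup)
        """
        if day.day == 1:
            return "grandfather"
        if day.weekday() == 6:  # Monday = 0 ... Sunday = 6
            return "father"
        return "son"

    today = datetime.date.today()
    print(today, "->", backup_tier(today))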

    It is important to remember that running these backups is only part of the process. The backups also need to be checked on a regular basis.

    Checking the validity/integrity of backups is essential. The time to check your backups is NOT after you experience a failure/ransomware attack.
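
    A minimal example of that “check your backups” step – here just verifying that a backup file still matches the checksum recorded when it was written (the file name and checksum are hypothetical, and a real policy should also include periodic test restores):

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Stream the file through SHA-256 so large backups never need to fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_backup(backup: Path, recorded_checksum: str) -> bool:
        """True if the backup file still matches the checksum recorded at backup time."""
        return sha256_of(backup) == recorded_checksum

    # Hypothetical usage - the path and checksum would come from your backup logs:
    # verify_backup(Path("/backups/weekly-full.tar.gz"), "ab12...")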

    Of course how much time and effort an organization should put into their backup policy is directly related to the value of their data. e.g. How much data are you willing to lose?

    Just re-image it

    Back in the days of the IBM PC/XT, if/when a hard drive failed it might take a day to get the system back up. After installing the new hard drive, formatting the drive and re-installing all of the software was a time intensive manual task.

    Full “disk cloning” became an option around 1995. “Ghosting” a drive (i.e. “cloning”) belongs in the acronym Hall of Fame — I’m told it was supposed to stand for “general hardware-oriented system transfer.” The point being that now if a hard drive failed, you didn’t have to manually re-install everything.

    Jump forward 10 years and Local Area Networks are everywhere – computer manufacturers had been including ‘system restore disks’ for a long time AND software to clone and manage drives was readily available. The “system cloning” features get combined with “configuration management” and “remote support” and this is the beginning of the “modern I.T.” era.

    Now it is possible to “re-image” a system as a response to software configuration issues (or malware). Disk imaging is not a replacement for a good backup policy – but it reduced “downtime” for hardware failures.

    The more things change …

    Go back to the 1980’s/90’s and you would find a lot of “dumb terminals” connecting to a “mainframe” type system (well, by the 1980s it was probably a “minicomputer” not a full blown “mainframe”).

    A “dumb terminal” has minimal processing power – enough to accept keyboard input and provide monitor output, and connect to the local network.

    Of course those “dumb terminals” could also be “secured” so there were good reasons for keeping them around for certain installations. e.g. I remember installing a $1,000 expansion card into new late 1980’s era personal computers to make it function like a “dumb terminal” – but that might have just been the Army …

    Now in 2022 we have “Chromebooks” that are basically the modern version of “dumb terminals.” Again, the underlying need being serviced is “communication” and “information” …

    All of which boils down to: the “basics” of information processing haven’t really changed. The ‘personal computer’ is a general purpose machine that can be configured for various industry specific purposes. Yes, the “era of the PC” has been over for 10+ years, but the need for ‘personal computers’ and ‘local area networks’ will continue.

  • Industry Changing Events and “the cloud”

    Merriam-Webster tells me that etymology is “the history of a linguistic form (such as a word)” (the official definition goes on a little longer – click on the link if interested …)

    The last couple weeks I’ve run into a couple of “industry professionals” that are very skilled in a particular subset of “information technology/assurance/security/whatever” but obviously had no idea what “the cloud” consists of in 2022.

    Interrupting and then giving an impromptu lecture on the history and meaning of “the cloud” would have been impolite and ineffective – so here we are 😉 .

    Back in the day …

    Way back in the 1980’s we had the “public switched telephone network” (PSTN) in the form of (monopoly) AT&T. You could “drop a dime” into a pay phone and make a local call. “Long distance” was substantially more – with the first minute even more expensive.

    The justification for higher connection charges and then “per minute” charges was simply that the call was using resources in “another section” of the PSTN. How did calls get routed?

    Back in 1980 if you talked to someone in the “telecommunications” industry they might have referred to a phone call going into “the cloud” and connecting on the other end.

    (btw: you know all those old shows where they need “x” amount of time to “trace” a call – always a good dramatic device, but from a tech point of view the “phone company” knew where each end of the call was originating – you know, simply because that was how the system worked)

    I’m guessing that by the breakup of AT&T in 1984 most of the “telecommunications cloud” had gone digital – but I was more concerned with football games in the 1980s than telecommunications – so I’m honestly not sure.

    In the “completely anecdotal” category “long distance” had been the “next best thing to being there” (a famous telephone system commercial – check youtube if interested) since at least the mid-1970s – oh, and “letter writing”(probably) ended because of low cost long distance not because of “email”

    Steps along the way …

    Important technological steps along the way to the modern “cloud” could include:

    • the first “modem” in the early 1960s – that is a “modulator”/”demodulator” if you are keeping score. A device that could take a digital signal and convert it to an analog wave for transmission over the PSTN on one end of the conversation, with another modem reversing the process on the other end.
    • Ethernet was invented in the early 1970’s – which allowed computers on a local network to talk to each other. You are probably using some flavor of Ethernet on your LAN
    • TCP/IP was “invented” in the 1970’s then became the language of ARPANET in the early 1980’s. One way to define the “Internet” is as a “large TCP/IP network” – ’nuff said

    that web thing

    Tim Berners-Lee gets credit for “inventing” the world wide web in 1989 while at CERN. Which made “the Internet” much easier to use – and suddenly everyone wanted a “web site.”

    Of course the “personal computer” needed to exist before we could get large scale adoption of ANY “computer network” – but that is an entirely different story 😉

    The very short version of the story is that personal computer sales greatly increased in the 1990s because folks wanted to use that new “interweb” thing.

    A popular analogy for the Internet at the time was as the “information superhighway” – with a personal computer using a web browser being the “car” part of the analogy.

    Virtualization

    Google tells me that “virtualization technology” actually goes back to the old mainframe/time-sharing systems in the 1960’s when IBM created the first “hypervisor.”

    A “hypervisor” is what allows the creation of “virtual machines.” If you think of a physical computer as an empty warehouse that can be divided into distinct sections as needed then a hypervisor is what we use to create distinct sections and assign resources to those sections.

    The ins and outs of virtualization technology are beyond the scope of this article BUT it is safe to say that “commodity computer virtualization technology” was an industry changing event.

    The VERY short explanation is that virtualization allows for more efficient use of resources which is good for the P&L/bottom line.
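
    To put some (made up) numbers on the “warehouse” idea – a hypothetical physical host carved into virtual machines, with a check on how much of the host has been handed out. Real hypervisors allow deliberate overcommit, so treat this strictly as an illustration:

    # Hypothetical host and VM sizes (GB of RAM) - purely illustrative numbers.
    HOST_RAM_GB = 256

    vms = {
        "web-frontend": 32,
        "database":     64,
        "file-server":  16,
        "test-lab":     48,
    }

    allocated = sum(vms.values())
    print(f"Allocated {allocated} GB of {HOST_RAM_GB} GB "
          f"({allocated / HOST_RAM_GB:.0%} of the host) across {len(vms)} VMs")

    if allocated > HOST_RAM_GB:
        print("Over-allocated - possible with a real hypervisor (overcommit), "
              "but it should be a deliberate choice")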

    (fwiw: any technology that gets accepted on a large scale in a relatively short amount of time PROBABLY involves saving $$ – but that is more of a personal observation than an industry truism.)

    Also important was the development of “remote desktop” software – which would have been called “terminal access” before computers had “desktops.”

    e.g. Wikipedia tells me that Microsoft’s “Remote Desktop Protocol” was introduced with Windows NT 4.0 – which ZDNet tells me was released in 1996 (fwiw: some of my expired certs involved Windows NT).

    “Remote access” increased the number of computers a single person could support which qualifies as another “industry changer.” As a rule of thumb if you had more than 20 computers in your early 1990s company – you PROBABLY had enough computer problems to justify hiring an onsite tech.

    With remote access tools not only could a single tech support more computers – they could support more locations. Sure in the 1990’s you probably still had to “dial in” since “always on high speed internet access” didn’t really become widely available until the 2000s – but as always YMMV.

    dot-com boom/bust/bubble

    There was a “new economy” gold rush of sorts in the 1990s. Just like gold and silver exploration fueled a measurable amount of “westward migration” into what was at the time the “western frontier” of the United States – a measurable amount of folks got caught up in “dot-com” hysteria and “the web” became part of modern society along the way.

    I remember a lot of talk about how the “new economy” was going to drive out traditional “brick and mortar” business. WELL, “the web” certainly goes beyond “industry changing” – but in the 1990s faith in an instant transformation of the “old economy” into a web dominated “new economy” reached zeitgeist proportions …

    In 2022 some major metropolitan areas trace their start to the gold/silver rushes in the last half of the 19th century (San Francisco and Denver come to mind). There are also a LOT of abandoned “ghost towns.”

    In the “big economic picture” the people running saloons/hotels/general stores in “gold rush areas” had a decent chance of outliving the “gold rush” – assuming that there was a reason for the settlement to be there other than “gold mining.”

    The “dot-com rush” equivalent was that a large number of investors were convinced that a company could stay a “going concern” even if it didn’t make a profit. However – just like the people selling supplies to gold prospectors had a good chance of surviving the gold rush – the folks selling tools to create a “web presence” did alright – i.e. in 2022 the survivors of the “dot-com bubble” are doing very well (e.g. Amazon, Google)

    Web Hosting

    In the “early days of the web” establishing a “web presence” took (relatively) arcane skills. The joke was that if you could spell HTML then you could get a job as a “web designer” – ok, maybe it isn’t a “funny” joke – but you get the idea.

    An in depth discussion of web development history isn’t required – pointing out that web 1.0 was the time of “static web pages” is enough.

    If you had a decent internet service provider they might have given you space on their servers for a “personal web page.” If you were a “local” business you might have been told by the “experts” to not worry about a web site – since the “web” would only be useful for companies with a widely dispersed customer base.

    That wasn’t bad advice at the time – but the technology needed to mature. The “smart phone” (Apple 2007) motivated the “mobile first” development strategy – if you can access the web through your phone, then it increases the value of “localized up to date web information.”

    “Web hosting” was another of those things that was going to be “free forever” (e.g. one of the tales of “dot-com bubble” woes was “GeoCities”). Which probably slowed down “web service provider” growth – but that is very much me guessing.

    ANYWAY – in web 1.0 (when the average user was connecting by dial up) the stress put on web servers was minimal – so simply paying to rent space on “someone else’s computer” was a viable option.

    The next step up from “web hosting” might have been to rent a “virtual server” or “co-locate” your own server – both of which required more (relatively) arcane skills.

    THE CLOUD

    Some milestones worth pointing out:

    • 1998 – “Google search” – another “industry changing” event – ’nuff said
    • 1999 – VMware “Workstation” released (virtualization on the desktop)
    • 2001 – VMware ESX (server virtualization)
    • 2004 – Facebook – noteworthy, but not “industry changing”
    • 2005 – Intel released the first CPUs with “Intel Virtualization Technology” (VT-x)
    • 2006 – Amazon Web Services (AWS)

    Officially Amazon described AWS as providing “IT infrastructure services to businesses in the form of web services” – i.e. “the cloud”

    NIST tells us that –

    Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.

    NIST SP 800-145

    If we do a close reading of the NIST definition – the “on-demand” and “configurable” portions are what differentiates “the cloud” from “using other folks computers/data center.”

    I like the “computing as a utility” concept. What does that mean? Glad you asked – e.g. Look on a Monopoly board and you will see the “utility companies” listed as “Water Works” and “Electric Company.”

    i.e. “water” and “electric” are typically considered public utilities. If you buy a home you will (probably) get the water and electric changed into your name for billing purposes – and then you will pay for the amount of water and electric you use.

    BUT you don’t have to use the “city water system” or local electric grid – you could choose to “live off the grid.” If you live in a rural area you might have a well for your water usage – or you might choose to install solar panels and/or a generator for your electric needs.

    If you help your neighbors in an emergency by allowing them access to your well – or maybe connecting your generator to their house – you are a very nice neighbor, BUT you aren’t a “utility company” – i.e. your well/generator won’t have the capacity that a full blown “municipal water system” or electric company can provide.

    Just like if you have a small datacenter and start providing “internet services” to customers – unless you are big enough to be “ubiquitous, convenient, and on-demand” then you aren’t a “cloud provider.”

    Also note the “as a service” aspect of the cloud – i.e. when you sign up you will agree to pay for what you use, but you aren’t automatically making a commitment for any minimal amount of usage.

    As opposed to “web hosting” or “renting a server” where you will probably agree to a monthly fee and a minimal term of service.

    Billing options and service capabilities are obviously vendor specific. As a rule of thumb – unless you have “variable usage” then using “the cloud” PROBABLY won’t save you money over “web hosting”/”server rental.”

    The beauty of the cloud is that users can configure “cloud services” to automatically scale up for an increase in traffic and then automatically scale down when traffic decreases.

    e.g. imagine a web site that has very high traffic during “business hours” but then minimal traffic the other 16 hours of the day. A properly configured “cloud service” would scale up (costing more $$) during the day and then scale down (costing fewer $$) at night.
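
    A toy version of that schedule-based scaling rule is sketched below. Vendor APIs differ, so the set_instance_count function is a hypothetical placeholder – a real deployment would use the provider’s own autoscaling configuration rather than hand-rolled code:

    import datetime

    BUSINESS_HOURS = range(8, 18)   # 08:00-17:59 local time - illustrative only
    PEAK_INSTANCES = 10
    OFF_PEAK_INSTANCES = 2

    def desired_capacity(now: datetime.datetime) -> int:
        """More instances (more $$) during business hours, fewer (fewer $$) overnight."""
        return PEAK_INSTANCES if now.hour in BUSINESS_HOURS else OFF_PEAK_INSTANCES

    def set_instance_count(count: int) -> None:
        """Hypothetical stand-in for a cloud provider's autoscaling API call."""
        print(f"scaling web tier to {count} instance(s)")

    set_instance_count(desired_capacity(datetime.datetime.now()))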

    Yes, billing options become a distinguishing element of the “cloud” – which further muddies the water.

    Worth pointing out is that if you are a “big internet company” you might get to the point where it is in your company’s best interest to build your own datacenters.

    This is just the classic “rent” vs “buy” scenario – i.e. if you are paying more in “rent” than it would cost you to “buy” then MAYBE “buying your own” becomes an option (of course “buying your own” also means “maintaining” and “upgrading” your own). This tends to work better in real estate, where “equity”/property values tend to increase.
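
    A back-of-the-envelope version of that rent-vs-buy comparison (every dollar figure below is invented for illustration – a real comparison needs to include staff, power, networking, and hardware refresh cycles):

    # Hypothetical monthly numbers - purely illustrative.
    cloud_monthly_cost = 40_000        # "rent": the current cloud bill
    datacenter_build_cost = 2_000_000  # "buy": up-front build-out
    datacenter_monthly_run = 15_000    # power, staff, maintenance, etc.

    monthly_savings = cloud_monthly_cost - datacenter_monthly_run
    if monthly_savings > 0:
        breakeven_months = datacenter_build_cost / monthly_savings
        print(f"Owning pays for itself after ~{breakeven_months:.0f} months")
    else:
        print("At these (made up) numbers, renting stays cheaper")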

    Any new “internet service” that strives to be “globally used” will (probably) start out using “the cloud” – and then if/when they are wildly successful, start building their own datacenters while decreasing their usage of the public cloud.

    Final Thoughts

    It Ain’t What You Don’t Know That Gets You Into Trouble. It’s What You Know for Sure That Just Ain’t So

    Artemus Ward

    As a final thought – “cloud service” usage was $332.3 BILLION in 2021 up from $270 billion in 2020 (according to Gartner).

    There isn’t anything magical about “the cloud” – but it is a little more complex than just “using other people’s computers.”

    The problem with “language” in general is that there are always regional and industry differences. e.g. “Salesforce” and “SAP” fall under the “cloud computing” umbrella – but Salesforce uses AWS to provide their “Software as a Service” product and SAP uses Microsoft Azure.

    I just spent 2,000 words trying to explain the history and meaning of “the cloud” – umm, maybe a cloud by any other name would still be vendor specific

    HOWEVER I would be VERY careful about choosing a cloud provider that isn’t offered by a “big tech company” (i.e. Microsoft, Amazon, Google, IBM, Oracle). “Putting all of your eggs in one basket” is always a risky proposition (especially if you aren’t sure that the basket is good in the first place) — as always caveat emptor …

  • statistics vs analytics, sports in general and bowling in particular

    what a title – first the youtube video demo/pitch for the “bowling analytics” product …

    https://www.youtube.com/watch?v=0JKbL4_UEwc&t=1385s

    statistics vs analytics

    Yes, there is a difference between “statistics” and “analytics” – maybe not a BIG difference but there is a difference.

    “Statistics” is about collecting and interpreting “masses of numerical data.” “Analytics” is about logical analysis – probably using “statistics.”

    Yeah, kinda slim difference – the point being that there is a difference between “having the numbers” and “correctly interpreting the numbers.”

    “Data analysis” becomes an exercise in asking questions and testing answers – which might have been how a high level “statistician” described their job 100 years ago – i.e. I’m not dogmatic about the difference between “statistics” and “analytics”, just establishing that there are connotations involved.

    Analytics and Sports

    Analytics as a distinct field has gained popularity in recent years. In broad strokes the fields of “data science”, “artificial intelligence”, and “machine learning” all mean “analytics.”

    For a while the term “data mining” was popular – back when the tools to manage “large data sets” first became available.

    I don’t want to disparage the terms/job titles – the problem is that “having more data” and having “analysis to support decisions” does not automatically mean “better leadership.”

    It simply isn’t possible to ever have “all of the information” but it is very easy to convince “management types” that they have “data” supporting their pet belief.

    e.g. I always like to point out that there are “trends” in baby name popularity (example site here) – but making any sort of conclusion from that data is probably specious.

    What does this have to do with “sports” – well, “analytics” and sports “management” have developed side by side.

    Baseball’s word for the concept of “baseball specific data analysis” – sabermetrics – dates back to 1982, about the time that “personal computers” were starting to become affordable and usable by “normal” folks.

    My round about point today is that most “analytics” fall into the “descriptive” category by design/definition.

    e.g. if you are managing a ‘sportball’ team and have the opportunity to select players from a group of prospects – how do you decide which players to pick?

    Well, in 2022 the team is probably going to have a lot of ‘sportball’ statistics for each player – but do those statistics automatically mean a player is a “good pick” or a “bad pick”? Obviously not – but that is a different subject.

    The team decision process will (probably) include testing players’ physical abilities and watching the players work out – but neither of those 100% equates to “playing the game against other skilled opponents.”

    That player with great statistics might have been playing against a lower level of competition. That player with average “physical ability test scores” might be a future Hall of Famer because of “hidden attributes.”

    i.e. you can measure how fast an athlete can run, and how high they can jump – but you can’t measure how much they enjoy playing the game.

    MEANWHILE back at the ranch

    Now imagine that you are an athlete and you want to improve your ‘sportball’ performance. How do you decide what to work on?

    Well, the answer to that question is obviously going to be very sport AND athlete specific.

    However, your ‘sportball’ statistics are almost certainly not going to help you make decisions on how/what you should be trying to develop – i.e. those statistics will be a reflection of how well you have prepared, but do not directly tell you how to prepare.

    Bowling

    Full disclosure – I am NOT a competitive bowler. I have participated/coached other sports – but I’m a “casual bowler.” i.e. if I have misinterpreted the sport, please let me know 😉

    Now imagine that someone has decided that they want to improve their “bowling average” – how should they approach the problem?

    • Step 1 would be to establish a baseline from which improvements can be measured.
    • Step 2 would be to determine what you need to “work on” to improve your scores from Step 1.
    • Step 3 would be to establish a schedule of “practice” sessions to work on the items from Step 2.
    • Step 4 would be to re-test the items from Step 1 and adjust steps 2 and 3 accordingly.

    Sure, I just described the entire field of “management” and/or “coaching” – but how well a manager/coach helps athletes through the above (generic) process will be directly reflected in wins/losses in competition.

    Remember that the old axiom that “practice makes perfect” is a little misleading:

    Practice does not make perfect. Only perfect practice makes perfect.

    -Vince Lombardi

    Back to bowling – bowling every week might be fun, but won’t automatically mean “better performance.”

    Keeping track of your game scores might be interesting, but also won’t automatically mean “better scores.”

    I’m told that the three factors for the “amateur bowler” to work on are:

    1. first ball pin average
    2. single pin spare %
    3. multipin spare %

    In a “normal” game there are 10 pins possible each frame. The bowler gets two balls to knock down all 10.

    If your “first ball pin average” is 10, then you are a perfect bowler – knocking all the pins down every frame with your first ball.

    To be honest I haven’t seen any real data on “first ball pin averages” – it probably exists in much the same manner that “modern baseball statistics” can be derived from old “box scores” – but I’m told that a first ball pin average around 9 is the goal.

    If you consistently average 9 pins on your first throw – then you have a consistent “strike” delivery.

    Which then means that IF you consistently knock down 9 pins – you will have to pickup “single pin spares” on a regular basis.

    Then “multipin spares” are going to be an exercise in statistics/time and fate. Obviously if you average 9 pins on your first ball, the number of “multipin spare” opportunities should be relatively small.

    SO those are the data points being tracked with my “bowling analytics” application.
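
    For completeness, here is a minimal sketch of how those three data points might be computed from per-frame records. The data format (first-ball pin count plus a “spare converted” flag) is an assumption for illustration – the actual application may track things differently:

    # Each frame recorded as (first_ball_pins, spare_converted).
    # spare_converted is None when no spare attempt was needed (a strike).
    # The sample frames are made up for illustration.
    frames = [
        (10, None),   # strike
        (9,  True),   # single pin spare, converted
        (9,  False),  # single pin spare, missed
        (7,  True),   # multipin spare, converted
        (8,  False),  # multipin spare, missed
    ]

    first_ball_avg = sum(pins for pins, _ in frames) / len(frames)

    single = [conv for pins, conv in frames if pins == 9 and conv is not None]
    multi  = [conv for pins, conv in frames if pins < 9 and conv is not None]

    def pct(results):
        return 100 * sum(results) / len(results) if results else 0.0

    print(f"first ball pin average: {first_ball_avg:.2f}")
    print(f"single pin spare %:     {pct(single):.0f}%")
    print(f"multipin spare %:       {pct(multi):.0f}%")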