Google tells me that the “fitness industry” was forecast to pass $32 billion in 2022. Which means that “personal fitness” is more than a New Year’s resolution for a large number of people.
Elite Athletes
“Exercise Science” has become a more rigorous academic discipline than the old “physical education” catch all. My guess (100% me guessing – just my opinion) is that most “high schools” now have a “strength and conditioning” coach of some kind – at smaller schools it might be a part-time supplemental job held by a teacher/coach of another sport (probably football).
All of which means that there is a vast amount of “information” out there. If you are an “elite athlete” or if you are responsible for training “elite athletes” there are a lot of factors to consider when designing a “training program” for competition. Much of that information is “sport specific” — e.g. training for “golfers” is much different than training for “marathon runners”.
The days of athletes “reporting to training camp” and “getting into shape” DURING “training camp” are long gone. The average “elite” athlete probably treats their sport as a year round obligation – and might spend hours every day “working out” in the off-season to prepare.
General Fitness
But wait – this isn’t an article about “elite athlete training.”
A large amount of research has been done confirming that a “sedentary lifestyle” is actually a health risk. The good news is that recommendations for “exercise for general health” haven’t changed much.
It would be “best” to get 30 minutes of low to moderate exertion level exercise most days of the week. The exercise doesn’t have to come in one continuous 30 minute period – again the “best” option would probably be multiple 10 minute periods of exercise spaced out over the day.
Which means if you work in an office building and can make the walk from “car” to “office” take 10 minutes (park at the end of the parking lot, take the stairs) – that would have SOME health benefits — but that is just a made up example, not a recommendation.
If you are sitting in front of a computer all day – then you should (probably) also stand up and move around a couple minutes each hour. Again, your situation will vary.
Interval Training
If you hate to exercise (or if you have trouble finding the time to exercise), but recognize that you “should” exercise – “interval training” might be a good option.
The idea of “interval training” is that you alternate periods of “high exertion” with periods of “low exertion.”
Runners might be familiar with the idea of “fartlek training” (Swedish for “speed play”) – where periods of “faster” running are alternated with periods of “slower” running. Google tells me the practice goes back to the 1930’s – and I’m going to guess that MOST “competitive” runners are familiar with the concept.
From a practical point of view the “problem” becomes keeping track of “work” and “relief” times.
With a “fartlek” run in the U.S. you might be able to alternate sprints and jogging between utility poles — assuming your running path has “utility poles.”
In a “gymnasium” environment “circuit training” becomes an option — e.g. 20 second “work” times followed by 10 second “relief” times (when exercises could be changed if using resistance training or calisthenics).
Personally I get bored doing the same routine, don’t really want to go to a “gym”, have an abundance of old computers, and some “coding skills.” SO I wrote the little application below.
[Screenshots: Interval Trainer start screen; “Select Workout” menu; workout selected; workout started with a 1 minute “warm-up”]
Since I designed the application of course it seems “obvious” to me — just a simple countdown timer combined with “work” and “rest” intervals.
Specific “work” and “rest” periods can be entered — e.g. if you wanted to do a “boxing gym” workout you could set the “interval” count to 15, “Work Time” to 3, and then “Rest Time” to 1 – and you would get 1 hour’s worth of “rounds.”
The very generic “General Fitness” workout is 5 intervals consisting of 1 minute of “work” and 2 minutes of “active rest” periods — there is a “clacking sound” at 10 seconds remaining and a “bell sound” between periods.
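The core logic really is that simple. Here is a minimal sketch (in Python, purely for illustration – the actual application’s code and names are my own, not a copy of the real program) of how the “work”/“rest” schedule can be built and totaled:

```python
# Sketch of the interval logic described above: an optional warm-up,
# then alternating "work" and "rest" periods. A real timer would count
# each period down and play the "clack"/"bell" sounds between them.

def build_schedule(intervals, work_min, rest_min, warmup_min=1):
    """Return the workout as a list of (label, seconds) periods."""
    schedule = [("warm-up", warmup_min * 60)] if warmup_min else []
    for i in range(1, intervals + 1):
        schedule.append((f"work {i}", work_min * 60))
        schedule.append((f"rest {i}", rest_min * 60))
    return schedule

def total_minutes(schedule):
    """Total length of the workout in minutes."""
    return sum(seconds for _, seconds in schedule) / 60

# The "boxing gym" example: 15 intervals, 3 min work, 1 min rest
boxing = build_schedule(15, 3, 1, warmup_min=0)
print(total_minutes(boxing))  # 15 * (3 + 1) = 60 minutes of "rounds"

# The "General Fitness" workout: 5 intervals, 1 min work, 2 min rest
general = build_schedule(5, 1, 2, warmup_min=1)
print(total_minutes(general))  # 1 + 5 * (1 + 2) = 16 minutes
```

The only design decision worth noting is keeping the schedule as data (a list of periods) separate from the countdown loop – which is also what makes a “save custom workout” feature straightforward to add.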
Exercises
I like using an exercise bike or a “step” for my intervals – but you can do whatever exercise you like. e.g. Jumping rope or “burpees” would also be good options.
For “beginners” doing calisthenics for 1 minute is probably not realistic – but it would be a good workout for a college wrestling team.
You will get more out of the workout if you “walk around” during the “Active Rest” period.
Core Strength
There is a “20 second work/10 second rest” option under “Select Workout” – which is a good example for a “planking” type exercise for “core strength”/calisthenics intervals.
e.g. As an “ex-athlete” over a certain age – I find the 20/10 intervals surprisingly tough. But a “currently competitive athlete” could start with the same workout – they would just get more repetitions done in the same amount of time (and would recover faster).
If you are looking for something tougher/more challenging – there are a lot of “High Intensity bodyweight” exercise routines out there on the interweb – but again, be careful. Going too slow at the start is MUCH better than “jumping in head first” and getting injured …
Simple – not easy
If you do the General Fitness intervals three days a week (ideally with a day in between workout days – e.g. Sunday – Tuesday – Thursday, or Monday – Wednesday – Friday) and then some 20/10 “planks” for core strength (or do push-ups for 20/10 intervals) that is a “not bad” beginner workout.
Do that workout for six weeks and then maybe think about upping the “intensity” – or start doing the workout 4 or 5 days a week.
Coaches
I wrote this application for myself – and it could obviously be improved. I could add a “save custom workout” option with a little effort if there is an interest.
From MY point of view “coaches”/personal trainers are the folks that would find a “save custom workout” option useful — and there would be “time and effort” involved.
Download
The download has been tested on 64 bit versions of Microsoft Windows. I have a “Mac mini” so compiling an OSX version might be an option (if someone actually needs it). Same idea for Linux …
The NFL “divisional playoffs” were this weekend (January 22, 2023) – I thought the “better teams” all won today (Cincinnati beat Buffalo, San Francisco beat Dallas)
Bengals
The final score was Bengals 27 – Bills 10. To my eyes the Bengals are playing like a championship team – I’m not predicting anything, just saying that they are doing a lot of the things that championship teams do.
Of course the Bengals continue to be disrespected by the “experts” simply because, well, they are the Bengals.
e.g. The “spread” was Bengals +6 – which means that the Bills were a 6 point “favorite.”
Sure the Bills were the home team, and they are obviously also a very good team composed of professional athletes – but a 6 point favorite?
Well, you see the “line”/”point spread” in a football game is about getting equal money bet by both sides – then the “house” is guaranteed a % of the money wagered – no matter who wins.
The “spread” isn’t about which team is actually better – it is completely about how money is being wagered on the game. Which again comes back to my point that the Bengals are being disrespected by the “experts.”
Experts
Full disclosure – I don’t enjoy “picking” football games. Just in general I don’t bet on sports.
As a “seasoned fan” I don’t bother to watch much “pre-game” coverage. I’ll turn on the game just before kick-off and usually mute the “announcers” and listen to music during the game.
HOWEVER – when I was a “not so seasoned fan” I would sometimes watch ALL of the pre-game coverage, then the games, then watch the highlight shows. SO I’ve listened to a lot of “television experts” predict football games.
There was an old “football expert” by the name of Jimmy “the Greek” Snyder who used to predict NFL game scores back in the 1970s/80’s.
Now, ol’ Jimmy was probably wrong more than he was right – I don’t remember ever hearing his “correct/incorrect” numbers – but he was also a “Las Vegas bookmaker” so his win/lose record was MOSTLY irrelevant.
Again, if you are a “bookmaker” you just want a lot of money bet ON BOTH TEAMS – so then you are guaranteed to make money no matter who wins.
ANYWAY – at the end of his career (before he said something inappropriate and got himself fired in 1988) ol’ Jimmy loved himself some Dallas Cowboys (and in his defense the Cowboys were very good in the late 70’s and early 80’s).
The problem was that the Cowboys as a franchise had some problems in the mid 1980’s (which culminated in a change of ownership in 1989), and were just not a good team – but ol’ Jimmy kept on picking them to win.
From a “psych 101” point of view ol’ Jimmy “The Greek” was suffering from a bad case of “confirmation bias” in regards to the Cowboys — i.e. he kept expecting them to be championship contenders because they had been championship contenders for so long.
And that brings us to the 2022 Dallas Cowboys. They lost to the San Francisco 49ers today 12 – 19. The line was Cowboys +4.
My guess is that the “betting public” made the “point spread” smaller in the Dallas game and larger in the Bengals games because of “confirmation bias” — i.e. the general public expects the Cowboys to be better than they are and for the Bengals to be worse.
Which is why they play the games …
My opinion on the Bengals win is that the Bengals were the “better team” today. The Bills certainly didn’t “quit” or “play poorly” so much as the Bengals played very well as a team and were in control from start to finish (they looked like “Champions”).
‘dem Cowboys
The Cowboys had another “golden era” in the early-mid 1990’s – winning 3 Super Bowls in 4 years. But they haven’t been back to a Super Bowl or Conference championship game since 1995.
In that 27 year “championship game” drought they have only had 7 losing seasons. Team Owner Jerry Jones is willing to invest money in the team, they have a state of the art stadium, and a large passionate fan base – i.e. if there is a “recipe for success” the Cowboys have been following it.
Watching the game today – my opinion was that the teams were “physically equal.” It was a close, entertaining game but I would describe it as the “Cowboys lost” just as much as the “49ers won.”
No disrespect for San Francisco – they are another “doing things right” franchise (but they have made a couple Super Bowl appearances since their “golden era” back in the 1980’s/90s).
But the Cowboys continue to make “small mistakes” that are hard to justify/explain.
The Steelers Hall of Fame Coach Chuck Noll once said that “Before you can win the game, you have to not lose.”
“Before you can win the game, you have to not lose.”
Chuck Noll
Yeah, it is a great “football coach” quote – what he (probably) meant is that more games are “lost” because of players making (self-inflicted) mistakes than are “won” by players making great plays.
SO the Cowboys have a lot of very talented players – that managed to find a way not to win. I have an opinion on the “why” of the Cowboys continued “non championship” run – but it is just an “opinion” and it isn’t important or useful at the moment …
To the 49ers credit, they let the Cowboys make those mistakes, took the win – and will play next week against the Eagles.
BUT I didn’t get that “championship” feel from the 49ers – that doesn’t mean they won’t win against the Eagles. The Eagles are very good and were dominant in their win – but the Giants had that “happy they won last week” look – so the game will be interesting …
“Fitness” does not need to be complicated and time consuming. The amount of exercise required to “prevent disease” is relatively small – but there are enough variables to make the subject confusing.
SO I’m going to try to boil the subject down as much as possible.
I will start by saying that I am NOT a “fitness professional.”
Once upon a time I thought of myself as a competitive athlete (a LONG time ago). Also “once upon a time” I earned the CSCS from the NSCA (“Certified Strength and Conditioning Specialist” from the “National Strength and Conditioning Association”), and passed the ACSM (“American College of Sports Medicine”) “personal trainer” exam about that same time.
All of which means next to nothing.
HOWEVER – I’ve looked at the current research, have an “informed opinion”, and might be “certified” again if I make the effort.
The problem is that there is a LOT of “fitness” information in the marketplace – sorting through the irrelevant information can take some effort.
First things first
FIRST we must distinguish between “fitness for health” and “sports conditioning.”
There is no consensus on the most effective way to train competitive athletes. There are just too many variables.
Obviously “sports conditioning” is going to be “sport specific” – the “best” workout for “long distance runners” will look almost nothing like the “best” workout for “NFL offensive lineman.”
Even a “great athlete’s” workout plan isn’t going to work for everyone in that sport. No, I’m not saying that “great athlete” shouldn’t write a “workout book” – just that the athlete’s individual workout plan PROBABLY won’t “translate” to the general public. Again, too many variables – so those types of books become “fitness memoirs” much more than “books on fitness.”
To be honest – since the field of “exercise science” has developed over the last 40+ years, the number of “celebrity workout books” has declined. Of course being a “trainer to the stars” is probably still a good “blurb” for a fitness book – i.e. the celebrity’s “personal trainer” might write a book.
However, having six-pack abs and great genetics does not equal “source of good advice.” Particularly with “sports conditioning” – great athletic ability tends to cover up a large number of “workout flaws.”
Consider the myth of an ancient Greek wrestler named “Milo.” Milo supposedly trained by picking up a newborn calf and carrying it around all day. Milo continued to carry the calf around as it grew, until eventually he was carrying around a full grown cow. Obviously he would have had to be incredibly strong – and was unbeatable as a wrestler.
I’m not sure Milo’s workout method would work for “non myths.” (but if you know someone training that way – I’d love to meet them)
SO a lot of “fitness books” make that same error. They prescribe “what so and so likes to do” as opposed to “what will work for the general population.” I’m not saying all “celebrity workout books” are useless – but let the buyer beware.
The point is that “fitness for health” can be very simple. The consensus is that “doing anything is better than doing nothing” and then doing “more” UP TO A POINT is GENERALLY better.
Benefits of exercise
I’m not going to give you a long list of benefits of physical activity. The “long term” benefits all revolve around increased “quality of life.” You are NOT going to live forever in this human body, but you will feel better and be able to function better as you age if you engage in regular physical activity.
Again, anything is better than nothing. The “minimum recommendation” is still 30 minutes of “moderate intensity exercise” 5 days per week, or 20 minutes of “vigorous intensity exercise” 3 days a week. Doing “strength training” a couple of times a week is also recommended.
The big danger is being “sedentary” for long periods of time. It would be “best” to spread activity out during the day rather than to do one long exercise session and then sit all day.
Why don’t people exercise?
The “fitness industry” recognizes the “New Year’s resolution” market – i.e. every year a large number of folks make a “resolution” to “exercise more”/”get in shape” in the coming year.
Obviously that means that people are aware of the need to/benefits of exercise. Why do so many not follow through on their “fitness resolution?”
Well, why any one person isn’t exercising is probably due to a combination of factors.
As a long time observer of “human nature” my guess is that the average “new year’s resolution” to exercise is unrealistic.
Notice that I’m not saying “insincere” – i.e. they honestly intend to try and will make a genuine effort.
No, I’m saying “unrealistic” in the same way that trying to replicate Milo’s workout is “unrealistic” for ordinary mortals.
Ok, say that an “apparently healthy” individual has made a resolution to “start exercising.” Our sincere individual makes a plan to get up at 5:30 in the morning, run 2 miles, go to the fitness center and do 30 minutes of weight training, then go to work all day.
If our individual normally sleeps until 7:30 and has to rush to get to work on time – they may set the alarm for 5:30, but hit the snooze button multiple times. They skip the run, and go to the fitness center, which is packed with other resolution makers – so they decide to skip the weight training until next week when it will be less crowded … and then they slide back into their normal routine and the resolution isn’t kept.
OR – if the “resolution maker” does get up and go for that 2 mile run, and then lifts those weights – they are so sore the next day that they have to call in sick.
Well, since “delayed onset muscle soreness” tends to be worst about 40 hours after exercise – maybe our resolution keeper makes it 2 days, and THEN they can’t move.
Plan for success
I’m not criticizing anyone, just pointing out that if you want the “fitness resolution”/any change in behavior to become permanent you need to make small changes gradually.
Goal 1 should be “setting yourself up to succeed.”
Remember “anything is better than nothing.” Just making “physical activity” a part of your daily schedule should be “Step 1.”
Logically “Step 2” should then consist of “time and activity.” If you haven’t been physically active this might translate to “activity you hate least” – but you can always change your workout activity, establishing a routine is the point.
There was a study a few years back that came up with a “15 minute drive” number – if a person has to drive more than 15 minutes to the gym, then they won’t stick with their program.
I think they were trying to get more fitness centers built, but the point is obviously worth considering. If you don’t have facilities near by, recognize that you might be setting yourself up for failure IF your plan involves driving over 30 minutes to and from the gym.
Home Gym
A sure way to get around the “drive time” problem is a home gym. There are numerous “home workout” options – ranging in cost from “inexpensive” to “wow.”
The obvious problem is that the “home gym” can become an unused clothes hanger just as easily as the gym membership can be abandoned.
There is no “best exercise device” – treadmills, rowers, stationary bikes, and “climbers” can all provide great workouts – but if you don’t like the exercise then the machine will just be an expensive place to hang clothes.
You always tend to get what you pay for – so try before you buy if possible.
Know Yourself
Generic advice time: Any “change” is easier if you have a “support group” of some kind.
A secondary benefit to joining a “fitness facility”/rec center is “group exercise” classes. If you have a workout partner that also commits to the class then you are both more likely to continue.
Again, if you hate the exercise and/or aren’t motivated by the group – then just because you have spent money on a class doesn’t mean you will attend.
If you enjoy the social aspect of “exercise classes” then there are other health benefits – but if you want/need to minimize your workout time because of schedule restrictions “classes” probably aren’t for you.
There are low cost, fast, and effective exercise routines that can be performed at home. One of these is “interval training”/”circuit training.” Which I will discuss in another article…
When we are discussing “network security” phrases like “authentication”, “least privilege”, and “zero trust” tend to come up. The three terms are related, and can be easily confused.
I’ve been in “I.T.” for a while (since the late 1980’s) – I’ve gone from an “in the field professional” to “network technician” to “the computer guy” and am now a “white bearded instructor.”
Occasionally I’ve listened to other “I.T. professionals” struggle trying to explain the above concepts – and as I mentioned, they are easy to confuse.
Part of my job was teaching “network security” BEFORE this whole “cyber-security” thing became a buzzword. I’ve also had the luxury of “time” as well as the opportunity/obligation to explain the concepts to “non I.T. professionals” in “non technical jargon.”
With that said, I’m sure I will get something not 100% correct. The terms are not carved in stone – and “marketing speak” can change usage. SO in generic, non-technical jargon, here we go …
Security
First, security as a concept is always an illusion. No I’m not being pessimistic – as human beings we can never be 100% secure because it is simply not possible to have 100% of the “essential information.”
SO we talk in terms of “risk” and “vulnerabilities.” From a practical point of view we have a “sliding scale” with “convenience and usability” on one end and “security” on the other. e.g. “something” that is “convenient” and “easy to use”, isn’t going to be “secure.” If we enclose the “something” in a steel cage, surround the steel cage with concrete, and bury the concrete block 100 feet in the ground, it is much more “secure” – but almost impossible to use.
All of which means that trying to make a “something” usable and reasonably secure requires some tradeoffs.
Computer Network Security
Securing a “computer” used to mean “locking the doors of the computer room.” The whole idea of “remote access” obviously requires a means of accessing the computer remotely — which is “computer networking” in a nutshell.
The “physical” part of computer networking isn’t fundamentally different from the telegraph. Dots and dashes sent over the wire from one “operator” to another have been replaced with high and low voltages representing 1’s and 0’s and “encapsulated data” arranged in frames/packets forwarded from one router to another — but it is still about sending a “message” from one point to another.
With the old telegraph the service was easy to disrupt – just cut the wire (a 19th century “denial of service” attack). Security of the telegraph message involved trusting the telegraph operators OR sending an “encrypted message” that the legitimate recipient of the message could “un-encrypt.”
Modern computer networking approached the “message security” problem in the same way. The “message” (i.e. “data”) must be secured so that only the legitimate recipients have access.
There are a multitude of possible modern technological solutions – which is obviously why “network administration” and “cyber-security” have become career fields — so I’m not going into specific technologies here.
The “generic” method starts with “authentication” of the “recipient” (i.e. “user”).
Authentication
Our (imaginary) 19th Century telegraph operator didn’t have a lot of available options to verify someone was who they said they were. The operator might receive a message and then have to wait for someone to come to the telegraph office and ask for the message.
If our operator in New Orleans receives a message for “Mr Smith from Chicago” – he has to wait until someone comes in asking for a telegraph for “Mr Smith from Chicago.” Of course the operator has no way of verifying that the person asking for the message is ACTUALLY “Mr Smith from Chicago” and not “Mr Jones from Atlanta” who is stealing the message.
In modern computer networking this problem is what we call “authentication.”
If our imaginary telegraph included a message to the operator that “Mr Smith from Chicago” would be wearing a blue suit, is 6 feet tall, and will spit on the ground and turn around 3 times after asking for the message — then our operator has a method of verifying/”identifying” “Mr Smith from Chicago” and then “authenticating” him as the legitimate recipient.
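The modern version of that “blue suit, spit, and turn around 3 times” check is a shared secret – typically a password. A toy sketch of the idea (my own illustrative code, not any particular product’s implementation; a real system would use a dedicated scheme like bcrypt or argon2):

```python
# Toy illustration of password authentication: the "operator" keeps only
# a salted hash on file, and verifies an attempt by re-computing the hash.
import hashlib
import hmac
import os

def register(password: str):
    """Create the record the system stores -- never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-compute the hash and compare in constant time."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = register("blue-suit-spit-turn-3-times")
print(authenticate("blue-suit-spit-turn-3-times", salt, digest))  # True
print(authenticate("mr-jones-from-atlanta", salt, digest))        # False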
Least Privilege
For the next concept we will leave the telegraph behind – and imagine we are going to a “popular music concert.”
Imagine that we have purchased tickets to see “big name act” and the concert promoters are holding our tickets at the “will call” window.
Our imaginary concert has multiple levels of seating – some seats close to the stage, some seats further away, some “seats” involve sitting on a grassy hill, and some “seats” are “all access Very Important Person.”
On the day of the concert we go to the “will call” window and present our identification (e.g. drivers license, state issued ID card, credit card, etc) – the friendly attendant examines our individual identification (i.e. we get “authenticated”) and then gives us each a “concert access pass” on a lanyard (1 each) that we are supposed to hang around our necks.
Next we go to the arena gate and present our “pass” to the friendly security guard. The guard examines the pass and allows us access BASED on the pass.
Personally I dislike large crowds – so MY “pass” only gives me access to the grassy area far away from the stage. Someone else might love dancing in the crowd all night, so their “pass” gives them access to the area much closer to the stage (where no one will sit down all night). If “big recording executive” shows up, their “pass” might give them access to the entire facility.
Distinguishing what we are allowed to do/where we are allowed to go is called “authorization.”
First we got “authenticated” and then we were given a certain level of “authorized” access.
Now, assume that I get lonely sitting up there on the hill – and try to sneak down to the floor level seats where all the cool kids are dancing. If the venue provider has some “no nonsense, shaved head” security guards controlling access to the “cool kids” area – then those guards (inside the venue) will check my pass and deny me entry.
That concept of “only allowing ‘pass holders’ to go/do specifically where/what they are authorized to go/do” could be called “least privilege.”
Notice that ensuring “least privilege” takes some additional planning on the part of the “venue provider.”
First we authenticate users, then we authorize users to do something. “Least privilege” is attained when users can ONLY do what they NEED to do based on an assessment of their “required duties.”
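In code, that “pass” model is often just a lookup table mapping a pass type to the smallest set of areas its holder needs. A sketch (the pass names and areas are my own invented examples):

```python
# Sketch of the concert "pass" idea: each pass grants the smallest set
# of areas its holder needs -- i.e. least privilege.
PASS_AREAS = {
    "lawn":  {"grassy hill"},
    "floor": {"grassy hill", "floor"},
    "vip":   {"grassy hill", "floor", "vip lounge", "backstage"},
}

def is_authorized(pass_type: str, area: str) -> bool:
    """The guard's check: is this area on the holder's pass?"""
    return area in PASS_AREAS.get(pass_type, set())

print(is_authorized("lawn", "floor"))     # False -- back up the hill I go
print(is_authorized("vip", "backstage"))  # True
```

Note that an unknown or counterfeit pass type gets an EMPTY set of areas by default – denying by default is part of what makes this “least privilege” rather than just “authorization.”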
Zero Trust
We come back around to the idea that “security” is a process and not an “end product” with the “new” idea of “zero trust.” Well, “new” as in “increased in popularity.”
Experienced “network security professionals” will often talk about “assuming that the network has been compromised.” This “assumption of breach” is really what “zero trust” is all about.
It might sound pessimistic to “assume a network breach” – but it implies that we need to be looking for “intruders” INSIDE the area that we have secured.
Imagine a “secret agent movie” where the “secret agent” infiltrates the “super villain’s” lair by breaching the perimeter defense, then enters the main house through the roof. Since the “super villain” is having a big party for some reason, our “secret agent” puts on a tuxedo and pretends to be a party guest.
Of course the super villain’s “henchmen” aren’t looking for intruders INSIDE the mansion that look like party guests – so the “secret agent” is free to collect/gather intelligence about the super villain’s master plan and escape without notice.
OR to extend the “concert” analogy – the security guards aren’t checking “passes” of individuals within the “VIP area.” If someone steals/impersonates a “VIP pass” then they are free to move around the “VIP area.”
The simplest method for an “attacker” would be to acquire a “lower access” pass, and then try to get a “higher level” pass.
Again – we start off with good authentication, have established least privilege, and the next step is checking users privileges each time they try to do ANYTHING.
In the “concert” analogy, the “user pass” grants access to a specific area. BUT we are only checking “user credentials” when they try to move from one area to another. To achieve “zero trust” we need to do all of the above AND we assume that there has been a security breach – so we are checking “passes” on a continual basis.
This is where the distinction between “authentication and least privilege” and “zero trust” can be hard to perceive.
e.g. In our concert analogy – imagine that there is a “private bar” in the VIP area. If we ASSUME that a user should have access to the “private bar” because they are in the VIP area, that is NOT “zero trust.” If users have to authenticate themselves each time they go to the private bar – then that could be “zero trust.” We are guarding against the possibility that someone managed to breach the other security measures.
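Extending the sketch above, the “zero trust” difference is that EVERY request re-verifies both who you are and what you may do – there is no “already inside, so trusted” state. Again, tokens and area names here are purely illustrative:

```python
# Sketch contrasting perimeter checking with "zero trust": under zero
# trust, every single request re-checks credentials and privileges --
# including requests made from "inside" the VIP area.
VALID_TOKENS = {"token-abc": "vip", "token-xyz": "lawn"}
PASS_AREAS = {
    "lawn": {"grassy hill"},
    "vip":  {"grassy hill", "floor", "private bar"},
}

def zero_trust_request(token: str, area: str) -> bool:
    # Step 1: authenticate on EVERY request -- never "already inside"
    pass_type = VALID_TOKENS.get(token)
    if pass_type is None:
        return False
    # Step 2: authorize this specific action against least privilege
    return area in PASS_AREAS[pass_type]

print(zero_trust_request("token-abc", "private bar"))        # True
print(zero_trust_request("stolen-or-forged", "private bar")) # False
```

The point of the sketch: a stolen or forged token fails at step 1 on the NEXT request, instead of wandering the “VIP area” unchecked.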
Eternal vigilance
If you have heard of “AAA” in regards to security – we have talked about the first two “A’s” (“Authentication”, and “Authorization”).
Along with all of the above – we also need “auditing.”
First we authenticate a user, THEN the user gets authorized to do something, and THEN we keep track of what the user does while they are in the system – which is usually called “auditing”.
Of course which actions we choose to “audit” requires some planning. If we audit EVERYTHING – then we will be swamped by “ordinary event” data. The “best practice” becomes “auditing” for the unusual and for failures.
e.g. if it is “normal” for users to login between the hours of 7:00AM and 6:00PM and we start seeing a lot of “failed login attempts” at 10:00PM – that probably means someone is doing something they shouldn’t.
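That “failed logins at 10:00PM” rule can be expressed in a few lines. A sketch (the log format and hours are my own assumptions for illustration):

```python
# Sketch of the auditing idea above: record login events, then flag
# FAILED attempts that happen outside "normal" business hours.
from datetime import datetime

def suspicious(events, start_hour=7, end_hour=18):
    """Return failed login events outside the start/end hour window."""
    return [e for e in events
            if not e["success"]
            and not (start_hour <= e["time"].hour < end_hour)]

log = [
    {"user": "alice", "time": datetime(2023, 1, 22, 9, 15), "success": True},
    {"user": "bob",   "time": datetime(2023, 1, 22, 22, 5), "success": False},
    {"user": "bob",   "time": datetime(2023, 1, 22, 22, 6), "success": False},
]
print(len(suspicious(log)))  # 2 failed attempts at 10 PM -- worth a look
```

Real “cyber-security” tooling does this at scale (collecting, correlating, and alerting on events), but the underlying idea is exactly this: define “normal,” then look for what falls outside it.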
Deciding what you need to audit, how to gather the data, and where/when/how to analyze that data is a primary function of (what gets called) “cyber-security.”
“Security” is always best thought of as a “process” not an “end state.” Something like “zero trust” requires constant authorization of users – ideally against multiple forms of authentication.
Ideally intruders will be prevented from entering, BUT finding/detecting intrusion becomes essential.
HOW to specifically achieve any of the above becomes a “it depends” situation requiring in depth analysis. Any plan is better than no planning at all, but the best plan will be tested and re-evaluated on a regular basis — which is obviously beyond the scope of this little story …
As a thought experiment – imagine you are in charge of a popular, world wide, “messaging” service – something like Twitter but don’t get distracted by a specific service name.
Now assume that your goal is to provide ordinary folks with tools to communicate in the purest form of “free speech.” Of course if you want to stay around as a “going concern” then you will also need to generate revenue along the way — maybe not “obscene profits” but at least enough to “break even.”
Step 1: Don’t recreate the wheel
In 2022, if you wanted to create a “messaging system” for your business/community then there are plenty of available options.
You could download the source code for Mastodon and set up your own private service if you wanted – but unless you have the required technical skills and have a VERY good reason (like a requirement for extreme privacy) that probably isn’t a good idea.
In 2022 you certainly wouldn’t bother to “develop your own” platform from scratch — yes, it is something that a group of motivated undergrads could do, and they would certainly learn a lot along the way, but they would be “reinventing the wheel.”
Now if the goal is “education” then going through the “wheel invention” process might be worthwhile. HOWEVER, if the goal is NOT education and/or existing services will meet your “messaging requirements” – then reinventing the wheel is just a waste of time.
For a “new commercial startup” the big problem isn’t going to be “technology” – the problem will be getting noticed and then “scaling up.”
Step 2: integrity
Ok, so now assume that our hypothetical messaging service has attracted a sizable user base. How do we go about ensuring that the folks posting messages are who they say they are – i.e. how do we ensure “user integrity”?
In an ideal world, users/companies could sign up as who they are – and that would be sufficient. But in the real world, where there are malicious actors with a “motivation to deceive” for whatever reason, steps need to be taken to make it harder for “malicious actors to practice maliciousness.”
The problem here is that it is expensive (time and money) to verify user information. Again, in a perfect world you could trust users to “not be malicious.” With a large network you would still have “naming conflicts” but if “good faith” is the norm, then those issues would be ACCIDENTAL not malicious.
Once again, in 2022 there are available options and “recreating the wheel” is not required.
This time the “prior art” comes in the form of the registered trademark and good ol’ domain name system (DNS).
Maybe we should take a step back and examine the limitations of “user identification.” Obviously you need some form of unique addressing for ANY network to function properly.
quick example: “cell phone numbers” – each phone has a unique address (or a card installed in the phone with a unique address) so that when you enter in a certain set of digits, your call will be connected to that cell phone.
Of course it is easy to “spoof the caller id” which simply illustrates our problem with malicious users again.
Ok, now the problem is that those “unique user names” probably aren’t particularly elegant — e.g. forcing users to use names like user_2001,7653 wouldn’t be popular.
If our hypothetical network is large enough then we have “real world” security/safety issues – so using personally identifiable information to login/post messages would be dangerous.
Yes, we want user integrity. No, we don’t want to force users to use system generated names. No, we don’t want to put people in harm’s way. Yes, the goal is still “free speech with integrity” AND we still don’t want to reinvent the “authentication wheel.”
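The tension above – the network needs a unique address for every user, but nobody wants to log in as “user_2001,7653” – can be sketched in a few lines of Python. Everything here is hypothetical and just illustrates the idea of separating the system-generated unique id from the user-facing name:

```python
import uuid

# Hypothetical sketch (all names made up): separate the network's unique
# address from the user-facing name. The internal id is ugly-but-unique;
# the display name is what other users see (and what needs vetting).
class Account:
    def __init__(self, display_name: str):
        self.internal_id = uuid.uuid4().hex    # unique, never shown to users
        self.display_name = display_name       # human-friendly, not necessarily unique

alice = Account("Alice B")
other = Account("Alice B")                     # duplicate display names are fine...
assert alice.internal_id != other.internal_id  # ...because routing uses the id
```

This is the same separation the phone system uses – the “unique address” (number/SIM) does the routing, while the caller-id name is just a label layered on top.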
Step 3: prior art
The 2022 “paradigm shift” on usernames is that they are better thought of as “brand names.”
The intentional practice of “brand management” has been a concern for the “big company” folks for a long time.
However, this expanding of the “brand management” concept does draw attention to another problem. This problem is simply that a “one size fits all” approach to user management isn’t going to work.
Just for fun – imagine that we decide to have three “levels” of registration:
level 1 is the fastest, easiest, and cheapest – provide a unique email address and agree to the TOS and you are in
level 2 requires additional verification of user identity, so it is a little slower than level 1, and will cost the user a fee of some kind
level 3 is for the traditional “big company enterprises” – they have a trademark, a registered domain name, and probably an existing brand ‘history.’ The slowest and most expensive, but then also the level with the most control over their brand name and ‘follower’ data
The additional cost for the “big company” probably won’t be a factor to the “big company” — assuming they are getting a direct line to their ‘followers’/’customers’
Yes, there should probably be a “non profit”/gov’ment registration as well – which could be low cost (free) as well as “slow”.
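The tiers above could be sketched as a simple policy table. To be clear, the field names and fee amounts below are illustrative guesses, not a pricing proposal:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of the registration tiers described above.
class Tier(Enum):
    BASIC = 1        # email + TOS: instant and free
    VERIFIED = 2     # identity check: a little slower, small fee
    ENTERPRISE = 3   # trademark/domain proof: slowest, most expensive
    NONPROFIT = 4    # documented non-profit/gov status: slow but free

@dataclass
class TierPolicy:
    requires_id_check: bool
    requires_trademark: bool
    fee_usd: int     # placeholder amounts

POLICIES = {
    Tier.BASIC:      TierPolicy(False, False, 0),
    Tier.VERIFIED:   TierPolicy(True,  False, 20),
    Tier.ENTERPRISE: TierPolicy(True,  True,  500),
    Tier.NONPROFIT:  TierPolicy(True,  False, 0),
}
```

The point of writing it down this way is that the registration flow can just look up the policy – adding a fourth or fifth tier later doesn’t change any code, only the table.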
Anyone that remembers the early days of the “web” might remember when the domain choices were ‘.com’, ‘.edu’, ‘.net’, ‘.mil’, and ‘.org’ – with .com being for “commerce”, .edu for “education”, .net originally supposed to be for “network infrastructure”, .mil for the military, and .org for “non profit organizations.”
I think that originally .org was free of charge – but registrants had to prove that they were a non-profit. Obviously you needed to be an educational institution to get an .edu domain, and the “military” for a .mil domain was exactly what it sounds like.
Does it need to be pointed out that “.com” for commercial activity was why the “dot-com bubble/boom and bust” was called “dot-com”?
Meanwhile, back at the ranch ….
For individuals the concept was probably thought of as “personal integrity” – and hopefully that concept isn’t going away, i.e. we are just adding a thin veneer and calling it “personal branding.”
Working in our hypothetical company’s favor is the fact that “big company brand management” has included registering domain names for the last 25+ years.
Then add in that the modern media/intellectual property “prior art” consists of copyrights, trademarks, and patents. AND We (probably) already have a list of unacceptable words – e.g. assuming that profanity and racial slurs are not acceptable.
SO just add a registered-trademark and/or domain-name check to the registration process.
Prohibit anyone from level 1 or 2 from claiming a name that is on the “prohibited” list. Problem solved.
It should be pointed out that this “enhanced registration” process would NOT change anyone’s ability to post content. Level 2 and 3 are not any “better” than level 1 – just “authenticated” at a higher level.
If a “level 3 company” chooses not to use our service – their name is still protected. “Name squatting” should also be prohibited — e.g. if a level 3 company name is “tasty beverages, inc” then names like “T@sty beverages” or “aTasty Beverage” should be blocked – a simple regular expression test would probably suffice.
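That “simple regular expression test” might look something like this sketch. The protected list and the look-alike character mappings are made up for illustration – a real system would need a much bigger catalog:

```python
import re

# Hypothetical sketch: protected names and look-alike mappings below are
# made up for illustration, not a real trademark catalog.
PROTECTED = {"tastybeverage"}   # canonical form of a level-3 name

# Common character substitutions squatters use (@ for a, 0 for o, etc.)
LOOKALIKES = str.maketrans({"@": "a", "0": "o", "1": "l", "3": "e", "$": "s"})

def canonical(name: str) -> str:
    """Lowercase, map look-alike characters, strip everything but letters."""
    return re.sub(r"[^a-z]", "", name.lower().translate(LOOKALIKES))

def is_squatting(requested: str) -> bool:
    # Block any level 1/2 name whose canonical form contains a protected name.
    c = canonical(requested)
    return any(p in c for p in PROTECTED)

assert is_squatting("T@sty beverages")
assert is_squatting("aTasty Beverage")
assert not is_squatting("coffee corner")
```

Note that the real work is in the canonicalization step – once “T@sty Beverages” and “aTasty Beverage” both collapse to something containing “tastybeverage”, the comparison itself is trivial.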
The “level 3” registration could then have benefits like domain registration — i.e. “tasty beverages, inc” would be free to create other “tasty beverage” names …
If you put together a comprehensive “registered trademark catalog” then you might have a viable product – the market is probably small (trademark lawyers?), but if you are creating a database for user registration purposes – selling access to that database wouldn’t be a big deal – but now I’m just rambling …
I achieved “crazy old man” status a few years back — so when I encounter “youthful arrogance” I’m always a little slow to perceive it as “youthful arrogance.”
Googling “youthful arrogance” gave me a lot of familiar quotes – a new favorite:
I’m too old to know everything
Oscar Wilde
Being “slow to recognize” youthful arrogance in the “wild” probably comes from the realization that when someone REALLY irritates me – and I have trouble pinpointing the reason they irritate me – the reason is (often) that they have an annoying mannerism which I share.
Self-awareness aside – most of the time “youthful arrogance” is simply “youthful ignorance.” Having an appreciation that the world as it exists today did NOT all happen in the last 5 years – you know, some form of “large scale historical perspective” – is the simple solution to “youthful ignorance.”
“True arrogance” on the other hand is a much different animal than “ignorance.” Arrogance requires an “attitude of superiority” and isn’t “teachable.”
e.g. imagine someone having the opinion that the entire computer industry started 5 years ago – because that is when THEY started working in the “computer industry.”
Gently point out that “modern computing” is at least 50 years old and traces its origins back thousands of years. Maybe point out that what is “brand new” today is just a variation on “what has been before” – you know the whole Ecclesiastes thing …
If they accept the possibility that there is “prior art” for MOST everything that is currently “new” – then they were just young and ignorant. If all they do is recite their resume and tell you how much money they are making – well, that is probably “arrogance.”
Of course if “making money” was the purpose of human existence then MAYBE I would be willing to accept their “youthful wisdom” as something truly new. Of course I’ll point back to the “wisdom books” (maybe point out that “the sun also rises” and recommend reading Ecclesiastes again) and politely disagree – but that isn’t the point.
SDLC
The computer industry loves their acronyms.
When I was being exposed to “computer programming” way back when in the 1980’s – it was possible (and typical) for an individual to create an entire software product by themselves. (The Atari 2600 and the era of the “rock star programmer” comes to mind.)
It is always possible to tell the “modern computing” story from different points of view. Nolan Bushnell and Atari always need to be mentioned.
e.g. part of the “Steve Jobs” legend was that he came into the Atari offices as a 19 year old and demanded that they hire him. Yes, they hired him – and depending on who is telling the story – either Atari, Inc helped him purchase some of the chips he and Woz used to create the Apple I OR Mr Jobs “stole” the chips. I think “technically” it was illegal for individuals to purchase the chips in question at the time – so both stories might “technically” be true …
Definitions
The modern piece of hardware that we call a “computer” requires instructions to do anything. We will call those instructions a “computer program”/software.
Someone needs to create those instructions – we can call that person a “computer programmer.”
Nailing down what is and isn’t a “computer” is a little hard to do – for this discussion we can say that a “computer” can be “programmed” to perform multiple operations.
A “computer program” is a collection of instructions that does something — the individual instructions are commonly called “code.”
SO our “programmer” writes “code” and creates a “program.” The term “developer” has become popular as a replacement for “programmer.” This is (probably) an example of how the task of creating a “program” has increased in complexity – i.e. now we have “teams of developers” working on an “application”/software project, but that isn’t important at the moment …
Computer programs can be written in a variety of “computer languages” — all of which make it “easier” for the human programmer to write the instructions required to develop the software project. It is sufficient to point out that there are a LOT of “computer languages” out there — and we are moving on …
Job Titles
The job of “computer programmer” very obviously changed as the computer industry changed.
In 2022 one of the “top jobs” in the U.S. is “software engineer” (average salary: $126,127; growth in number of job postings, 2019–2022: 87% – thank you indeed.com).
You will also see a lot of “job postings” for “software programmers” and “software developers.”
What is the difference between the three jobs (if any)? Why is “software engineer” in the top 10 of all jobs?
Well, I’m not really sure if there is a functional difference between a “programmer” and a “developer” – but if there is, the difference is found in on-the-job experience and scope of responsibilities.
i.e. “big company inc” might have an “entry level programmer” that gets assigned to a “development team” that is run by a “senior developer.” Then the “development team” is working on a part of the larger software project that the “engineer” has designed.
History time
When the only “computers” were massive mainframes owned by Universities and large corporations then being a “programmer” meant being an employee of a University or large corporation.
When the “personal computer revolution” happened in the 1970’s/80’s – those early PC enthusiasts were all writing their own software. Software tended to be shared/freely passed around back then – if anyone was making money off of software it was because they were selling media containing the software.
THEN Steve Jobs and Steve Wozniak started Apple Computers in 1976. The Apple story has become legend – so I won’t tell the whole story again.
fwiw: successful startups tend to have (at least) two people – i.e. you need “sales/marketing” and you need “product development”, which tend to be different skill sets (so two people with complementary skills). REALLY successful startups also tend to have an “operations” person that “makes the trains run on time” so to speak – e.g. Mike Markkula at Apple
SO the “two Steves” needed each other to create Apple Computers. Without Woz, Jobs wouldn’t have had a product to sell. Without Jobs, Woz would have stayed at HP making calculators and never tried to start his own company.
VisiCalc
Google tells me that 200 “Apple I’s” were sold (if you have one it is a collectors item). The Apple I was not a complete product – additional parts needed to be purchased to have a functional system – so it was MOST important (historically speaking) in that it proved that there was a larger “personal computer” market than just “hardware hobbyists.”
The Apple II was released in 1977 (fully assembled and ready to go out of the box) – but the PC industry still consisted of “hobbyists.”
The next “historic moment in software development” happened in 1979 when Dan Bricklin and Bob Frankston released the first “computerized spreadsheet” – “VisiCalc.”
VisiCalc was (arguably) the first application to go through the entire “system development life cycle” (SDLC) – e.g. from planning/analysis/design to implementation/maintenance and then “obsolescence.”
The time of death for VisiCalc isn’t important – 1984 seems to be a popular date. Its place in history is secure.
How do you go from “historic product” to “out of business” in 5 years? Well, VisiCalc as a product needed to grow to survive. Their success invited a lot of competition into the market – and they were unable or unwilling to change at the pace required.
This is NOT criticism – I’ll point at the large number of startups in ANY industry that get “acquired” by a larger entity mostly because “starting” a company is a different set of skills than “running” and “growing” a company.
Again, I’m not picking on the VisiCalc guys – the “first inventor goes broke” arc is a common theme in technology – i.e. someone “invents” a new technology and someone else “implements” that technology better/cheaper/whatever to make big $$.
btw: the spreadsheet being the first “killer app” is why PC’s found their way into the “accounting” departments of the world first. Then when those machines started breaking, companies needed folks dedicated to fixing the information technology infrastructure – and being a “PC tech” became a viable profession.
The “I.T.” functionality stayed part of “accounting” for a few years. Eventually PCs became common in “not accounting” divisions. The role of “Chief Information Officer” and “I.T. departments” became common in the late 1980’s — the rest is history …
Finally
Ok, so I mentioned that SDLC can mean “system development life cycle.” This was the common usage when I first learned the term.
In 2022 “Software development life cycle” is in common usage – but that is probably because the software folks have been using the underlying concepts of the “System DLC” as part of “software development” process since “software development” became a thing.
e.g. The “Software DLC” uses different vocabulary — but it is still the “System DLC” — but if you feel strongly about it, I don’t feel strongly about it one way or the other – I could ALWAYS be wrong.
I’ve seen “development fads” come and go in the last 30 years. MOST of the fads revolve around the problems you get when multiple development teams are working on the same project.
Modern software development on a large scale requires time and planning. You have all of the normal “communication between teams” issues that ANY large project experiences. The unique problems with software tend to be found in the “debugging” process – which is a subject all its own.
The modern interweb infrastructure allows/requires things like “continuous integration” and “continuous deployment” (CI/CD).
If you remember “web 1.0” (static web pages) then you probably remember the “site under construction” graphic that was popular until it was pointed out that (non abandoned) websites are ALWAYS “under construction” (oh and remember the idea of a “webmaster” position? one person responsible for the entire site? well, that changed fast as well)
ANYWAY – In 2022 CI/CD makes that “continuous construction” concept manageable
Security
The transformation of SDLC from “system” to “software” isn’t a big deal – but the “youthful arrogance” referenced at the start involved someone that seemed to think the idea of creating ‘secure software’ was something that happened recently.
Obviously if you “program” the computer by feeding in punch cards – then “security” kind of happens by controlling physical access to the computer.
When the “interweb” exploded in the 1990’s the tongue in cheek observation was that d.o.s. (the “disk operating system”) had never experienced a “remote exploit”
The point being that d.o.s. had no networking capabilities – if you wanted to set up a “local area network” (LAN) you had to install additional software that would function as a “network redirector.”
IBM had come up with “netbios” (network basic input output system) in 1983 (for file and print sharing) — but it wasn’t “routable” between different LANs.
Novell had a nice little business going selling a “network operating system” called NetWare that ran on a proprietary protocol called IPX/SPX (it used the MAC address for unique addressing – it was nice).
THEN Microsoft included basic LAN functionality in Windows for Workgroups 3.11 (using an updated form of netbios called netbeui – “NetBIOS Extended User Interface”) – and well, the folks at Novell probably weren’t concerned at the time, since their product had the largest installed base of any “n.o.s.” — BUT Microsoft in the 1990’s is its own story …
ANYWAY if you don’t have your computers networked together then “network security” isn’t an issue.
btw: The original design of the “interweb” was for redundancy and resilience NOT security – and we are still dealing with those issues in 2022.
A “software design” truism is that the sooner you find an error (“bug”) in the software the less expensive it is to fix. If you can deal with an issue in the “design” phase – then there is no “bug” to fix and the cost is $0. BUT if you discover a bug when you are shipping software – the cost to fix will probably be $HUGE (well, “non zero”).
fwiw: The same concept applies to “features” – e.g. at some point in the “design” phase the decision has to be made to “stop adding additional features” – maybe call this “feature lock” or “version lock” whatever.
e.g. the cost of adding additional functionality in the design phase is $0 — but if you try to add an additional feature half-way through development the cost will be $HUGE.
Oh, and making all those ‘design decisions’ is why “software architects”/engineers get paid the big $$.
Of course this implies that a “perfectly designed product” would never need to be patched. To get a “perfectly designed product” you would probably need “perfect designers” – and those are hard to find.
The work around becomes bringing in additional “experts” during the design phase.
There is ALWAYS a trade off between “convenience” and “security” and those decisions/compromises/acceptance of risk should obviously be made at “design” time. SO “software application security engineer” has become a thing
Another software truism is that software is never “done” – it just gets “released.” Bugs will be found and patches will have to be released (which might cause other bugs, etc) …
Remember that a 100% secure system is also going to be 100% unusable. ok? ’nuff said
In the 30+ years I’ve been a working “computer industry professional” I’ve done a lot of jobs, used a lot of software, and spent time teaching other folks how to be “computer professionals.”
I’m also an “amateur historian” – i.e. I enjoy learning about “history” in general. I’ve had real “history teachers” point out that (in general) people are curious about “what happened before them.”
Maybe this “historical curiosity” is one of the things that distinguishes “humans” from “less advanced” forms of life — e.g. yes, your dog loves you, and misses you when you are gone – but your dog probably isn’t overly concerned with how its ancestors lived (assuming that your dog has the ability to think in terms of “history” – but that isn’t the point).
As part of “teaching” I tend to tell (relevant) stories about “how we got here” in terms of technology. Just like understanding human history can/should influence our understanding of “modern society” – understanding the “history of a technology” can/should influence/enhance “modern technology.”
The Problem …
There are multiple “problems of history” — which are not important at the moment. I’ll just point out the obvious fact that “history” is NOT a precise science.
Unless you have actually witnessed “history” then you have to rely on second hand evidence. Even if you witnessed an event, you are limited by your ability to sense and comprehend events as they unfold.
All of which is leading up to the fact that “this is the way I remember the story.” I’m not saying I am 100% correct and/or infallible – in fact I will certainly get something wrong if I go on long enough – any mistakes are mine and not intentional attempts to mislead 😉
Hardware/Software
Merriam-Webster tells me that “technology” is about “practical applications of knowledge.”
random thought #1 – “technology” changes.
“Cutting edge technology” becomes common and quickly taken for granted. The “Kansas City” scene from Oklahoma (1955) illustrates the point (“they’ve gone just about as far as they can go”).
Merriam-Webster tells me that the term “high technology” was coined in 1969 referring to “advanced or sophisticated devices especially in the fields of electronics and computers.”
If you are a ‘history buff” you might associate 1969 with the “race to the moon”/moon landing – so “high technology” equaled “space age.” If you are an old computer guy – 1969 might bring to mind the Unix Epoch – but in 2022 neither term is “high tech.”
random thought #2 – “software”
The term “hardware” in English dates back to the 15th Century. The term originally meant “things made of metal.” In 2022 the term refers to the “tangible”/physical components of a device – i.e. the parts we can actually touch and feel.
I’ve taught the “intro to computer technology” more times than I can remember. Early on in the class we distinguish between “computer hardware” and “computer software.”
It turns out that the term “software” only goes back to 1958 – invented to refer to the parts of a computer system that are NOT hardware.
The original definition could have referred to any “electronic system” – i.e. programs, procedures, and documentation.
In 2022 – Merriam-Webster tells me that “software” is also used to refer to “audiovisual media” – which is new to me, but instantly makes sense …
ANYWAY – “computer software” typically gets divided into two broad categories – “applications” and “operating systems” (OS or just “systems”).
The “average non-computer professional” is probably unaware and/or indifferent to the distinction between “applications” and the OS. They can certainly tell you whether they use “Windows” or a “Mac” – so saying people are “unaware” probably isn’t as correct as saying “indifferent.”
Software lets us do something useful with hardware
an old textbook
The average user has work to get done – and they don’t really care about the OS except to the point that it allows them to run applications and get something done.
Once upon a time – when a new “computer hardware system” was designed, a new “operating system” would also be written specifically for the hardware. e.g. The Mythical Man-Month is required reading for anyone involved in management in general and “software development” in particular …
Some “industry experts” have argued that Bill Gates’ biggest contribution to the “computer industry” was the idea that “software” could be/should be separate from “hardware.” While I don’t disagree – it would require a retelling of the “history of the personal computer” to really put the remark into context — I’m happy to re-tell the story, but it would require at least two beers – i.e. not here, not now
In 2022 there are a handful of “popular operating systems” that also get divided into two groups – e.g. the “mobile OS” – Android, iOS, and the “desktop OS” Windows, macOS, and Linux
The Android OS is the most installed OS if you are counting “devices.” Since Android is based on Linux – you COULD say that Linux is the most used OS, but we won’t worry about things like that.
Apple’s iOS on the other hand is probably the most PROFITABLE OS. iOS is based on the “Berkeley Software Distribution” (BSD) – which is very much NOT Linux, but they share some code …
Microsoft Windows still dominates the desktop. I will not be “bashing Windows” in any form – just point out that 90%+ of the “desktop” machines out there are running some version of Windows.
The operating system that Apple includes with their personal computers in 2022 is also based on BSD. Apple declared themselves a “consumer electronics” company a long time ago — fun fact: the Beatles (yes, John, Paul, George, and Ringo – those “Beatles”) started a record company called “Apple” in 1968 – so when the two Steves (Jobs and Wozniak) wanted to call their new company “Apple Computers” they had to agree to stay out of the music business – AND we are moving on …
On the “desktop” then Linux is the rounding error between Windows machines and Macs.
What is holding back “Linux on the desktop?” Well, in 2022 the short answer is “applications” and more specifically “gaming.”
You cannot gracefully run Microsoft Office, Avid, or the Adobe Suite on a Linux based desktop. Yes, there are alternatives to those applications that perform wonderfully on Linux desktops – but that isn’t the point.
e.g. that “intro to computers” class I taught used Microsoft Word, and Excel for 50% of the class. If you want to edit audio/video “professionally” then you are (probably) using Avid or Adobe products (read the credits of the next “major Hollywood” movie you watch).
Then the chicken-and-egg scenario pops up – i.e. “big application developer” would (probably) release a Linux friendly version if more people used Linux on the desktop – but people don’t use Linux on the desktop because they can’t run all of the application software they want.
Yes, I am aware of WINE – but it illustrates the problem much more than acts as a solution — and we are moving on …
Linux Distros – a short history
Note that “Linux in the server room” has been a runaway success story – so it is POSSIBLE that “Linux on the desktop” will gain popularity, but not likely anytime soon.
Also worth pointing out — it is possible to run a “Microsoft free” enterprise — but if the goal is lowering the “total cost of ownership” then (in 2022) Microsoft still has a measurable advantage over any “100% Linux based” solution.
If you are “large enterprise” then the cost of the software isn’t your biggest concern – “Support” is (probably) “large enterprise, Inc’s” largest single concern.
fwiw: IBM and Red Hat are making progress on “enterprise level” administration tools – but in 2022 …
ANYWAY – the “birthdate” for Linux is typically given as 1991.
Under the category of “important technical distinction” I will mention that “Linux” is better described as the “kernel” for an OS and NOT an OS in and of itself.
Think of Linux as the “engine” of a car – i.e. the engine isn’t the “car”, you need a lot of other systems working with and around the engine for the “car” to function.
For the purpose of this article I will describe the combination of “Linux kernel + other operating system essentials” as a “Linux Distribution” or more commonly just “distro.” Ready? ok …
1992 gave us Slackware. Patrick Volkerding started the “oldest surviving Linux distro,” which accounted for an 80 percent share of the “Linux” market until the mid-1990s.
1992 – 1996 gave us SUSE Linux, founded by Thomas Fehr, Roland Dyroff, Burchard Steinbild, and Hubert Mantel. I tend to call SUSE “German Linux” – they were just selling the “German version of Slackware” on floppy disks until 1996.
btw: the “modern Internet” would not exist as it is today without Linux in the server room. All of these “early Linux distros” had business models centered around “selling physical media.” Hey, download speeds were of the “dial-up” variety and you were paying “by the minute” in most of Europe – so “selling media” was a good business model …
1993 – 1996 gave us the start of Debian, from Ian Murdock. The goal was a more “user friendly” Linux. The first “stable version” came in 1996 …
1995 gave us the Red Hat Linux — this distro was actually my “introduction to Linux.” I bought a book that had a copy of Red Hat Linux 5.something (I think) and did my first Linux install on an “old” pc PROBABLY around 2001.
During the dotcom “boom and bust” a LOT of Linux companies went public. Back then it was “cool” to have a big runup in stock valuation on the first day of trading – so when Red Hat “went public” in 1999 they had the eighth-biggest first-day gain in the history of Wall Street.
The run-up was a little manufactured (i.e. they didn’t release a lot of stock for purchase on the open market). My guess is that in 2022 the folks arranging the “IPO” would set a higher initial price or release more stock if they thought the offering was going to be extremely popular.
Full disclosure – I never owned any Red Hat stock, but I was an “interested observer” simply because I was using their distro.
Red Hat’s “corporate leadership” decided that the “selling physical media” business plan wasn’t a good long term strategy. Especially as “high speed Internet” access moved across the U.S.
e.g. that “multi hour dial up download” is now an “under 10 minute iso download” – so I’d say the “corporate leadership” at Red Hat, Inc made the right decision.
Around 2003 the Red Hat distro kind of “split” into “Red Hat Enterprise Linux” (RHEL – sold by subscription to an “enterprise software” market) and the “Fedora project.” (meant to be a testing ground for future versions of RHEL as well as the “latest and greatest” Linux distro).
e.g. the Fedora project has a release target of every six months – current version 35. RHEL has a longer planned release AND support cycle – which is what “enterprise users” like – current version 9.
btw – yes RHEL is still “open source” – what you get for your subscription is “regular updates from an approved/secure channel and support.” AlmaLinux and CentOS are both “clones” of RHEL – with CentOS being “sponsored” by Red Hat.
IBM “acquired” Red Hat in 2019 – but nothing really changed on the “management” side of things. IBM has been active in the open source community for a long time – so my guess is that someone pointed out that a “healthy, independent Red Hat” is good for IBM’s bottom line in the present and future.
ANYWAY – obviously Red Hat is a “subsidiary” of IBM – but I’m always surprised when “long time computer professionals” seem to be unaware of the connections between RHEL, Fedora Project, CentOS, and IBM (part of what motivated this post).
Red Hat has positioned itself as “enterprise Linux” – but the battle for “consumer Linux” still has a lot of active competition. The Fedora project is very popular – but my “non enterprise distros of choice” are both based on Debian:
Ubuntu (first release 2004) – “South African Internet mogul Mark Shuttleworth” gets credit for starting the distro. The idea was that Debian could be more “user friendly.” Occasionally I teach an “introduction to Linux class” and the big differences between “Debian” and “Ubuntu” are noticeable – but very much in the “ease of use” (i.e. “Ubuntu” is “easier” for new users to learn)
I would have said that “Ubuntu” meant “community” (which I probably read somewhere) but the word is of ancient Zulu and Xhosa origin and more correctly gets translated “humanity to others.” Ubuntu has a planned release target of every six months — as well as a longer “long term support” (LTS) version.
Linux Mint (first release 2008) – Clément Lefèbvre gets credit for this one. Technically Linux Mint describes itself as “Ubuntu based” – so of course Debian is “underneath the hood.” I first encountered Linux Mint through a review that described it as the best Linux distro for people trying to move away from Microsoft Windows.
The differences between Mint and Ubuntu are cosmetic and also philosophical – i.e. Mint will install some “non open source” (but still free) software to improve “ease of use.”
The beauty of “Linux” is that it can be “enterprise level big” software or it can be “boot from a flash drive” small. It can utilize modern hardware and GPUs or it can run on 20 year old machines. If you are looking for specific functionality, there might already be a distro doing that – or if you can’t find one, you can make your own.
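fwiw – the family tree described above (Mint → Ubuntu → Debian, CentOS/Alma → RHEL) is visible on most modern distros in the file `/etc/os-release`, which lists an `ID` and an `ID_LIKE` ancestry field. A minimal sketch of reading that format – the sample values below are illustrative (they mimic what a Linux Mint release ships), not pulled from any specific machine:

```python
# Parse /etc/os-release-style "KEY=value" lines into a dict.
# ID names the distro; ID_LIKE lists what it is "based on".
def parse_os_release(text: str) -> dict:
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        info[key] = value.strip().strip('"')  # drop surrounding quotes
    return info

# Illustrative sample resembling a Linux Mint release
sample = '''
NAME="Linux Mint"
ID=linuxmint
ID_LIKE="ubuntu debian"
VERSION_ID="21"
'''

info = parse_os_release(sample)
print(info["ID"], "is based on:", info["ID_LIKE"])
```

On a real system you would read the actual file (`open("/etc/os-release").read()`) instead of the sample string.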
There have been a couple documentaries about the 1975 blockbuster “Jaws” — which probably illustrates the long term impact of the original movie.
Any “major” movie made in the era of “DVD extras” is going to have an obligatory “making of” documentary – but “Jaws: The Inside Story” aired on A&E back in 2009 (and is available for free on Kanopy.com). It was surprisingly entertaining – both as a “movie making” documentary and as “cultural history.”
This came to mind because the “Jaws movies” have been available on Tubi.com for the last couple months.
full disclosure: I was a little too young to see “Jaws” in the theater — my first exposure to the movie was the “edited for tv” version, which ABC aired on network tv in 1979 (the same year the movie got a theatrical re-release).
I probably saw the “un-edited” version of “Jaws” on HBO at some point – and I have a DVD of the original “Jaws.” All of which means I’ve seen “Jaws – 1975” a LOT. Nostalgia aside, it still holds up as an entertaining movie.
Yes, the mechanical shark is cringeworthy in 2022 – but the fact that the shark DIDN’T work as well as Spielberg et al wanted probably contributes to the continued “watch-ability” of the movie. i.e. Mr Spielberg had to use “storytelling” techniques to “imply” the shark – which ends up being much scarier than actually showing the shark.
i.e. what made the original “Jaws” a great movie had very little to do with the mechanical shark/”special effects.” The movie holds up as a case study on “visual storytelling.” Is it Steven Spielberg’s “best movie”? No. But it does showcase his style/technique.
At one point “Jaws” was the highest grossing movie in history. It gets credit for creating the “summer blockbuster” concept – I think it was supposed to be released as a “winter movie” but got pushed to a summer release because of production problems.
Source material
The problem with the “Jaws” franchise was that it was never intended to be a multiple-movie franchise. The movie was based on Peter Benchley’s (hugely successful) 1974 novel (btw: Peter Benchley plays the “reporter on the beach” in “Jaws – 1975”).
I was too young to see “Jaws” in the theater, and probably couldn’t even read yet when the novel was spending 44 weeks on the bestseller lists.
“Movie novelizations” tended to be a given back in the 1970s/80s – but when the movie is “based on a novel” USUALLY the book is “better” than the movie. “Jaws” is one of the handful of “books made into movies” where the movie is better than the book (obviously just my opinion).
The basic plot is obviously the same – the two major differences are that (in the book) Hooper dies and the shark doesn’t explode.
Part of the legend of the movie is that “experts” told Mr. Spielberg that oxygen tanks don’t explode like that and that the audience wouldn’t believe the ending. Mr Spielberg replied (something like) “Give me the audience for 2 hours and they will stand up and cheer when the shark explodes” — and audiences did cheer at the exploding shark …
(btw: one of those “reality shows” tried to replicate the “exploding oxygen tank” and no, oxygen tanks do NOT explode like it does at the end of Jaws – so the experts were right, but so was Mr Spielberg …)
Sequels
It is estimated that “Jaws – 1975” sold 128 million tickets. Adjust for inflation and it is in the billion-dollar movie club.
SO of course there would be sequels.
Steven Spielberg very wisely stayed far away from all of the sequels. Again, the existential issue with MOST “sequels” is that they tend to just be attempts to get more money out of the popularity of the original – rather than telling their own story.
Yes, there are exceptions – but none of the Jaws sequels comes anywhere close to the quality of the original.
“Jaws 2” was released in summer 1978. Roy Scheider probably got a nice paycheck to reprise his starring role as Chief Martin Brody – Richard Dreyfuss stayed away (his character is supposed to be on a trip to Antarctica or something). Most of the supporting cast came back – so the movie tries very hard to “feel” like the original.
Again – I didn’t see “Jaws 2” in the theater. I remembered not liking the movie when I did see it on HBO – but I (probably) hadn’t seen it for 30 years when I re-watched it on Tubi the other day.
Well, the mechanical shark worked better in “Jaws 2” – but it doesn’t help the movie. Yes, the directing is questionable, the “teenagers” mostly unlikeable, and the plot contrived – but other than that …
How could “Jaws 2” have been better? Well, fewer screeching teenagers (or better directed teenagers). It felt like they had a contest to be in the movie – and that was how they selected most of the “teenagers.”
Then the plot commits the cardinal sin of trying to explain “why” another huge shark is attacking the same little beach community. Overly. Contrived.
If you want, you can find subtext in “Jaws – 1975.” i.e. the shark can symbolize “nature” or “fate” or maybe even “divine retribution” – take your pick. Maybe it isn’t there – but that becomes the genius of the storytelling – i.e. don’t explain too much; let the audience interpret as they like.
BUT if you have another huge shark, seemingly targeting the same community – well, then the plot quickly becomes overly contrived.
The shark death scene in “Jaws 2” just comes across as laughably stupid – but by that time I was just happy that the movie was over.
SO “Jaws 2” tried very hard – and it did exactly what a “back for more cash” sequel is supposed to do – i.e. it made money.
“Jaws 3” was released in summer 1983 and tried to capitalize on a brief resurgence of the “3-D” fad. This time the movie was a solid “B.” The only connection to the first two movies is the grown-up Brody brothers – and the mechanical shark of course.
The plot for “Jaws 3” might feel familiar to audiences in 2022. Not being a “horror” movie aficionado, I’m not sure how much “prior” art was involved with the plot — i.e. the basic “theme park” disaster plot had probably become a staple for “horror” movies by 1983 (“Westworld” released in 1973 comes to mind).
Finally the third sequel came out in 1987 (“Jaws: The Revenge”) – I have not seen the movie. Wikipedia tells me that this movie ignores “Jaws 3” and becomes a direct sequel to “Jaws 2” (tagline: “This time it’s personal”).
The whole “big white shark is back for revenge against the Brody clan” plot is a deal breaker for me – e.g. when Michael Caine was asked if he had watched “Jaws 4” (which received terrible reviews) – his response was ‘No. But I’ve seen the house it bought for my mum. It’s fantastic!’
Thankfully, there isn’t likely to be another direct “Jaws” sequel (God willing).
Humans have probably told stories about “sea monsters” for as long as there have been humans living next to large bodies of water. From that perspective “Jaws” was not an “original story” (of course those are hard to find) but an updated version of very old stories – and of course “shark”/sea monster movies continue to be popular in 2022.
Mr Spielberg
Steven Spielberg was mostly an “unknown” director before “Jaws.” Under ordinary circumstances an “unknown” director would have been expected to direct the sequel to a “big hit movie.”
Mr Spielberg explained he stayed away from the “Jaws sequels” because making the original movie was a “nightmare” (again, multiple documentaries have been made).
“Jaws 2” PROBABLY would have been better if he had been involved – but his follow up was another classic — “Close Encounters of the Third Kind” (1977).
It is slightly interesting to speculate on what would have happened to Steven Spielberg’s career if “Jaws” had “flopped” at the box office. My guess is he would have gone back to directing television and would obviously have EVENTUALLY had another shot at directing “Hollywood movies.”
Speculative history aside – “Jaws” was nominated for “Best Picture” (but lost to “One Flew Over the Cuckoo’s Nest”) and won Oscars for Best Film Editing, Best Music (John Williams), and Best Sound.
The “Best Director” category in 1976 reads like a “Director Hall of Fame” list – Stanley Kubrick, Robert Altman, Sidney Lumet, Federico Fellini, and then Milos Forman won for directing “One Flew Over the Cuckoo’s Nest.” SO it is understandable why Mr Spielberg had to wait until 1978 to get his first “Best Director” nomination for “Close Encounters of the Third Kind” …
(btw: the source novel for “One Flew Over the Cuckoo’s Nest” is fantastic – I didn’t care for the movie PROBABLY because I read the book first … )
Best vs favorite
ANYWAY – I have a lot of Steven Spielberg movies in my “movie library” – what is probably his “best movie” (if you have to choose one – as in “artistic achievement”) is hands down “Schindler’s List” (1993) which won 7 Oscars – including “Best Director” for Mr Spielberg.
However, if I had to choose a “favorite” then it is hard to beat “Raiders of the Lost Ark” (but there is probably nostalgia involved) …
Full disclosure: “Star Wars” was released in 1977 – when I was 8ish years old. This post started as a “reply” to something else – and grew – so I apologize for the lack of real structure – kind of a work in progress …
I am still a “George Lucas” fan – no, I didn’t think episodes I, II, and III were as good as the original trilogy but I didn’t hate them either.
George Lucas obviously didn’t have all of the “backstory” for the “Jedi” training fully formed when he was making “Star Wars” back in the late 1970’s
in fact the “mystery” of the Jedi Knights was (probably) part of the visceral appeal of the original trilogy (Episodes IV, V, and VI – for those playing along)
As always when you start trying to explain the “how” and “why” behind successful “science fantasy” you run into the fact that these are all just made up stories and NOT an organized religion handed down by a supreme intelligence
if you want to start looking at “source material” for the “Jedi” – the first stop is obvious – i.e. they are “Jedi KNIGHTS” – which should obviously bring to mind the King Arthur legend et al
in the real world a “knight in training” started as a “Page” (age 7 to 13), then became a “Squire” (age 14 to 18-21), and then would become a “Knight”
of course the whole point of being a “Knight” was (probably) to be of service and get granted some land somewhere so they could get married and have little ones
since Mr Lucas was making it all up – he also made his Jedi “keepers of the faith,” combining the idea of “protectors of the Republic” with “priestly celibacy” — then the whole “no attachments”/possessions thing comes straight from Buddhism
btw: all this is not criticism of George Lucas – in fact his genius (again in Episodes IV, V, VI) was in blending them together and telling an entertaining story without beating the audience over the head with minutiae
ANYWAY “back in the 20th century” describing something as the “Disney version” used to mean that it was “nuclear family friendly” — feel free to psychoanalyze Walt Disney if you want, i.e. he wasn’t handing down “truth from the mountain” either — yes, he had a concept of an “idealized” childhood that wasn’t real – but that was the point
just like “Jedi Knights” were George Lucas’ idealized “Knights Templar” – the real point is that they are IDEALIZED for a target audience of “10 year olds” – and when you start trying to explain too much the whole thing falls apart
e.g. the “Jedi training” as it has been expanded/over explained would much more likely create sociopaths than “wise warrior priests” — which for the record is my same reaction to Plato’s “Republic” – i.e. that the system described would much more likely create sociopaths that only care about themselves rather than “philosopher kings” capable of ruling with wisdom
I just watched a documentary on the “Cola wars” – and something obvious jumped out at me.
First I’ll volunteer that I prefer Pepsi – but this is 100% because Coke tends to disturb my stomach MORE than Pepsi does.
full disclosure – I get the symptoms of “IBS” if I drink multiple “soft drinks” multiple days in a row. I’m sure this is a combination of a lot of factors – age, genetics, whatever.
Of course – put in perspective the WORST thing for my stomach (as in “rumbly in the tummy”) when I was having symptoms was “pure orange juice” – but that isn’t important.
My “symptoms” got bad enough that I was going through bottles of antacid each week, and tried a couple “over the counter” acid reflux products. Eventually I figured out changing my diet – getting more yogurt and tofu in my diet, drinking fewer “soft drinks” helped a LOT.
The documentary was 90 minutes long – and a lot of time was spent on people expressing how much they loved one brand or the other. I’m not zealous for either brand – and I would probably choose Dr Pepper if I had to choose a “favorite” drink
Some folks grew up drinking one beverage or the other and feel strongly about NOT drinking the “competitor” – but again, my preference for Pepsi isn’t visceral.
Habit
The massive amount of money spent by Coke and Pepsi marketing their product becomes an exercise in “marketing confirmation bias” for most of the population – but each new U.S. generation has to experience some form of the “brand wars” – Coke vs Pepsi, Nike vs Adidas, PC vs Mac – whatever.
e.g. As a “technology professional” I will point out that Microsoft does a good job of “winning hearts and minds” by getting their products in the educational system.
If you took a class in college teaching you “basic computer skills” in the last 20 years – that class was probably built around Microsoft Office. Having taught those classes for a couple years I can say that students learn “basic computer skills” and also come away with an understanding of “Microsoft Office” in particular.
When those students need to buy “office” software in the future, what do you think they will choose?
(… and Excel is a great product – I’m not bashing Microsoft by any means 😉 )
Are you a “Mac” or a “PC”? Microsoft doesn’t care – both are using Office. e.g. Quick – name a spreadsheet that ISN’T Excel – there are some “free” ones but you get the point …
The point is that human beings are creatures of habit. After a certain age – if you have “always” used product “x” then you are probably going to keep on using product “x” simply because it is what you have “always used.”
This fact is well known – and why marketing to the “younger demographic” is so profitable/prized.
ALL OF WHICH MEANS – that if you can convince a sizable share of the “youth market” that your drink is “cool” (or whatever the kids say in 2022) – then you will (probably) have created a lifelong customer
Taste Tests
Back to the “cola wars”…
The Pepsi Challenge deserves a place in the marketing hall of fame — BUT it is a rigged game.
The “Pepsi challenge” was set up as a “blind taste test.” The “test subject” had two unmarked cups placed in front of them – one cup containing Pepsi and the other containing Coke.
The person being tested drinks from one cup, then from the second cup, and then chooses which one they prefer.
Now, according to Pepsi – people preferred Pepsi to Coke by a 2:1 margin. Which means absolutely nothing.
The problem with the “taste test” is that the person tastes one sugary drink, and then immediately tastes a second sugary drink. SO being able to discern the actual taste difference between the two is not possible.
If you wanted an honest “taste test” then the folks being tested should have approached it like a wine tasting. e.g. “swish” the beverage back and forth, suck in some air to get the full “flavor”, and then spit it out. Maybe have something to “cleanse the palate” between the drinks …
(remember “flavor” is a combination of “taste” and “smell”)
For the record – yes, I think Coke and Pepsi taste different – BUT the difference is NOT dramatic.
The documentary folks interviewed Coke and Pepsi executives that worked at the respective companies during the “cola wars” – and most of those folks were willing to take the “Pepsi Challenge”
A common complaint was that both drinks tasted the same – and if you drink one, then drink another they DO taste the same – i.e. you are basically tasting the first drink “twice” NOT two unique beverages.
fwiw: most of the “experts” ended up correctly distinguishing between the two – but most of them took the time to “smell” each drink, and then slowly sip. Meanwhile the “Pepsi Challenge” in the “field” tended to be administered in a grocery store parking lot – which doesn’t exactly scream “high validity.”
ANYWAY – you can draw a dotted line directly from the “Pepsi Challenge” (as un-scientific as it was) to “New Coke” – i.e. the “Pepsi Challenge” convinced the powers that be at Coke that they needed to change.
So again, the “Pepsi Challenge” was great marketing but it wasn’t a fair game by any means.
fwiw: The documentary (“Cola Wars” from the History Channel in 2019) is interesting from a branding and marketing point of view. It was on hoopladigital, and is probably available online elsewhere …
Difference between “Sales” and “Marketing”
If you are looking at a “business statement”/profit and loss statement of some kind – the “top line” is probably gonna be “total revenue” (i.e. “How much did the company make”). The majority of “revenue” is then gonna be “sales” related in some form.
SO if you make widgets for $1 and sell them for $2 – if you sell 100 widgets then your “total revenue” will be $200 (top line), your “cost of goods sold” will be $100, and then the “net” (the “bottom line”) will be “revenue” minus “cost of goods sold” – i.e. $100 in this extremely simple example.
In the above example the expense involved in “selling widgets” is baked into the $1 “cost of goods sold” – so maybe the raw materials for each widget is 50 cents, then 30 cents per widget in “labor”, and 20 cents per widget for sales and marketing.
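The widget arithmetic above can be sketched in a few lines – the numbers are the made-up ones from the example, nothing more:

```python
# Made-up widget numbers from the example above (per-widget costs in dollars).
price_per_widget = 2.00
raw_materials = 0.50    # per widget
labor = 0.30            # per widget
sales_marketing = 0.20  # per widget
units_sold = 100

cost_per_widget = raw_materials + labor + sales_marketing  # the $1 "cost of goods sold" per widget
total_revenue = price_per_widget * units_sold              # top line: $200
cost_of_goods_sold = cost_per_widget * units_sold          # $100
net = total_revenue - cost_of_goods_sold                   # bottom line: $100

print(f"Top line: ${total_revenue:.2f}, bottom line: ${net:.2f}")
```

The point of breaking `cost_per_widget` into three parts is that “sales and marketing” is baked into the cost of goods sold – change any one component and the bottom line moves with it.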
Then “sales” involves everything involved in actually getting a widget to the customer. While “marketing” is about finding the customer and then educating them about how wonderful your widgets are – and of course how they can buy a widget. e.g. marketing and sales go hand in hand but they are not the same thing.
The “widget market” is all of the folks that might want to use widgets. “Market share” is then the number of folks that use a specific company’s widgets.
Marketing famously gets discussed in terms of the “5 P’s” — Product, Place, Price, Promotion, and People.
Obviously the widget company makes “widgets” (Product) – but should they (A) strive to make the highest quality widget possible that will last for years (i.e. “expensive to produce”) or should they (B) make a low cost, disposable widget?
Well, the answer is “it depends” – and some of the factors involved in the “Product” decision are the other 4 P’s — which will change dramatically between scenario A and B.
A successful company will understand the CUSTOMER and how the customer uses “widgets” before deciding to venture into the “widget market space”
This is why you hear business folks talk about “size of markets” and “price sensitivity of markets.” If you can’t make a “better” widget or a less expensive widget – then you are courting failure …
SO Coke and Pepsi are both “mature” companies that have established products, methods and markets – so growing their market share requires something more than just telling folks that “our product tastes good.”
In the “Cola Wars” documentary they point out that the competition between Coke and Pepsi served to grow the entire “soft drink market” – so no one really “lost” the cola wars. e.g. in 2020 the “global soft drink market” was valued at $220 BILLION – but the market for “soft drinks” fragmented as it grew.
The mini-“business 101” class above illustrates why both Coke and Pepsi aggressively branched out into “tea” and “water” products since the “Cola wars.”
It used to be that the first thing Coke/Pepsi would do when moving into a new market was to build a “bottling plant.” So then “syrups” can be shipped to the different markets – and then “bottled” close to where they will be consumed – which saves $$ on shipping costs.
I suppose if you are a growing “beverage business” then selling “drink mix” online might be a profitable venture – unless you happen to have partners in “distant markets” that can bottle and distribute your product. Coke and Pepsi are #1 and #2 in the soft drink market and no one is likely to challenge either company anytime soon.
“Soft drinks” is traditionally defined as “non alcoholic” – so the $220 billion is spread out over a lot of beverages/companies. Coke had 20% of that market and Pepsi 10% – but they are still very much the “big players” in the industry. The combined market share of Coke and Pepsi equals that of the next 78 companies combined (e.g. #3 is Nestle, #4 Suntory, #5 Danone, #6 Dr Pepper Snapple, #7 Red Bull).
My takeaway …
umm, I got nothing. This turned into a self-indulgent writing exercise. Thanks for playing along.
In recent years PepsiCo has been driving growth by expanding into “snacks” – so a “Cola wars 2” probably isn’t likely …
I’m not looking to go into the soft drink business – but it is obviously still a lucrative market. I had a recipe for “home made energy drink” once upon a time – maybe I need to find that again …