I achieved “crazy old man” status a few years back — so when I encounter “youthful arrogance” I’m always a little slow to perceive it as “youthful arrogance.”
Googling “youthful arrogance” gave me a lot of familiar quotes – a new favorite:
I am not young enough to know everything
Oscar Wilde
Being “slow to recognize” youthful arrogance in the “wild” probably comes from the realization that when someone REALLY irritates me – and I have trouble pinpointing the reason they irritate me – the reason is (often) that they have an annoying mannerism which I share.
Self-awareness aside – most of the time “youthful arrogance” is simply “youthful ignorance.” Having an appreciation that the world as it exists today did NOT all happen in the last 5 years – you know, some form of “large scale historical perspective” – is the simple solution to “youthful ignorance.”
“True arrogance” on the other hand is a much different animal than “ignorance.” Arrogance requires an “attitude of superiority” and isn’t “teachable.”
e.g. imagine someone having the opinion that the entire computer industry started 5 years ago – because that is when THEY started working in the “computer industry.”
Gently point out that “modern computing” is at least 50 years old and traces its origins back thousands of years. Maybe point out that what is “brand new” today is just a variation on “what has been before” – you know the whole Ecclesiastes thing …
If they accept the possibility that there is “prior art” for MOST everything that is currently “new” – then they were just young and ignorant. If all they do is recite their resume and tell you how much money they are making – well, that is probably “arrogance.”
Of course if “making money” was the purpose of human existence then MAYBE I would be willing to accept their “youthful wisdom” as something truly new. Of course I’ll point back to the “wisdom books” (maybe point out that “the sun also rises” and recommend reading Ecclesiastes again) and politely disagree – but that isn’t the point.
SDLC
The computer industry loves their acronyms.
When I was being exposed to “computer programming” way back when in the 1980’s – it was possible (and typical) for an individual to create an entire software product by themselves. (The Atari 2600 and the era of the “rock star programmer” comes to mind.)
It is always possible to tell the “modern computing” story from different points of view. Nolan Bushnell and Atari always need to be mentioned.
e.g. part of the “Steve Jobs” legend was that he came into the Atari offices as a 19 year old and demanded that they hire him. Yes, they hired him – and depending on who is telling the story – either Atari, Inc helped him purchase some of the chips he and Woz used to create the Apple I OR Mr Jobs “stole” the chips. I think “technically” it was illegal for individuals to purchase the chips in question at the time – so both stories might “technically” be true …
Definitions
The modern piece of hardware that we call a “computer” requires instructions to do anything. We will call those instructions a “computer program”/software.
Someone needs to create those instructions – we can call that person a “computer programmer.”
Nailing down what is and isn’t a “computer” is a little hard to do – for this discussion we can say that a “computer” can be “programmed” to perform multiple operations.
A “computer program” is a collection of instructions that does something — the individual instructions are commonly called “code.”
SO our “programmer” writes “code” and creates a “program.” The term “developer” has become popular as a replacement for “programmer.” This is (probably) an example of how the task of creating a “program” has increased in complexity – i.e. now we have “teams of developers” working on an “application”/software project, but that isn’t important at the moment …
Computer programs can be written in a variety of “computer languages” — all of which make it “easier” for the human programmer to write the instructions required to develop the software project. It is sufficient to point out that there are a LOT of “computer languages” out there — and we are moving on …
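To make the vocabulary concrete, here is a trivial sketch – a few lines of “code” that together form a complete (if tiny) “program,” written in one popular “computer language” (Python). The function name `greet` is just made up for illustration.

```python
# A complete (if tiny) "computer program" written in the Python language.
# Each line below is an individual instruction - i.e. "code."

def greet(name):
    # One instruction: build a greeting from the name we were given
    return "Hello, " + name

# Another instruction: run the function and print the result
print(greet("world"))
```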
Job Titles
The job of “computer programmer” very obviously changed as the computer industry changed.
In 2022 one of the “top jobs” in the U.S. is “software engineer” (average salary: $126,127; growth in number of job postings, 2019-2022: 87% – thank you indeed.com).
You will also see a lot of “job postings” for “software programmers” and “software developers.”
What is the difference between the three jobs (if any)? Why is “software engineer” in the top 10 of all jobs?
Well, I’m not really sure if there is a functional difference between a “programmer” and a “developer” – but if there is, the difference is found in on-the-job experience and scope of responsibilities.
i.e. “big company inc” might have an “entry level programmer” that gets assigned to a “development team” that is run by a “senior developer.” Then the “development team” is working on a part of the larger software project that the “engineer” has designed.
History time
When the only “computers” were massive mainframes owned by universities and large corporations, being a “programmer” meant being an employee of a university or large corporation.
When the “personal computer revolution” happened in the 1970’s/80’s – those early PC enthusiasts were all writing their own software. Software tended to be shared/freely passed around back then – if anyone was making money off of software it was because they were selling media containing the software.
THEN Steve Jobs and Steve Wozniak started Apple Computer in 1976. The Apple story has become legend – so I won’t tell the whole story again.
fwiw: successful startups tend to have (at least) two people – i.e. you need “sales/marketing” and you need “product development” which tend to be different skill sets (so two people with complementary skills). REALLY successful startups also tend to have an “operations” person that “makes the trains run on time” so to speak – e.g. Mike Markkula at Apple
SO the “two Steves” needed each other to create Apple Computer. Without Woz, Jobs wouldn’t have had a product to sell. Without Jobs, Woz would have stayed at HP making calculators and never tried to start his own company.
VisiCalc
Google tells me that 200 “Apple I’s” were sold (if you have one it is a collector’s item). The Apple I was not a complete product – additional parts needed to be purchased to have a functional system – so it was MOST important (historically speaking) in that it proved that there was a larger “personal computer” market than just “hardware hobbyists.”
The Apple II was released in 1977 (fully assembled and ready to go out of the box) – but the PC industry still consisted of “hobbyists.”
The next “historic moment in software development” happened in 1979 when Dan Bricklin and Bob Frankston released the first “computerized spreadsheet” – “VisiCalc.”
VisiCalc was (arguably) the first application to go through the entire “system development life cycle” (SDLC) – e.g. from planning/analysis/design to implementation/maintenance and then “obsolescence.”
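Just to pin the acronym down, here is that life cycle written out as a list. The exact phase names vary from textbook to textbook, so treat this as one common arrangement rather than the canonical one.

```python
# One common arrangement of the SDLC phases mentioned above.
# Textbooks slice these up differently; the names here are not canonical.
SDLC_PHASES = [
    "planning",
    "analysis",
    "design",
    "implementation",
    "maintenance",
    "obsolescence",   # the phase nobody puts on the poster
]

for step, phase in enumerate(SDLC_PHASES, start=1):
    print(f"{step}. {phase}")
```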
The time of death for VisiCalc isn’t important – 1984 seems to be a popular date. Their place in history is secure.
How do you go from “historic product” to “out of business” in 5 years? Well, VisiCalc as a product needed to grow to survive. Their success invited a lot of competition into the market – and they were unable or unwilling to change at the pace required.
This is NOT criticism – I’ll point at the large number of startups in ANY industry that get “acquired” by a larger entity mostly because “starting” a company requires a different set of skills than “running” and “growing” one.
Again, I’m not picking on the VisiCalc guys – this “first inventor goes broke” story is a common theme in technology – i.e. someone “invents” a new technology and someone else “implements” that technology better/cheaper/whatever to make big $$.
btw: the spreadsheet being the first “killer app” is why PC’s found their way into the “accounting” departments of the world first. Then when those machines started breaking, companies needed folks dedicated to fixing the information technology infrastructure – and being a “PC tech” became a viable profession.
The “I.T.” functionality stayed part of “accounting” for a few years. Eventually PCs became common in “not accounting” divisions. The role of “Chief Information Officer” and “I.T. departments” became common in the late 1980’s – the rest is history …
Finally
Ok, so I mentioned that SDLC can mean “system development life cycle.” This was the common usage when I first learned the term.
In 2022 “Software development life cycle” is in common usage – but that is probably because the software folks have been using the underlying concepts of the “System DLC” as part of the “software development” process since “software development” became a thing.
e.g. the “Software DLC” uses different vocabulary – but it is still the “System DLC.” If you feel strongly about it, I don’t feel strongly about it one way or the other – I could ALWAYS be wrong.
I’ve seen “development fads” come and go in the last 30 years. MOST of the fads revolve around the problems you get when multiple development teams are working on the same project.
Modern software development on a large scale requires time and planning. You have all of the normal “communication between teams” issues that ANY large project experiences. The unique problems with software tend to be found in the “debugging” process – which is a subject all its own.
The modern interweb infrastructure allows/requires things like “continuous integration” and “continuous deployment” (CI/CD).
If you remember “web 1.0” (static web pages) then you probably remember the “site under construction” graphic that was popular until it was pointed out that (non-abandoned) websites are ALWAYS “under construction” (oh and remember the idea of a “webmaster” position? one person responsible for the entire site? well, that changed fast as well)
ANYWAY – In 2022 CI/CD makes that “continuous construction” concept manageable.
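For the curious, here is a rough sketch of what the CI/CD idea boils down to: grab the latest changes, run the automated tests, and only deploy if everything passes. The specific commands (`git pull`, `pytest`, `./deploy.sh`) are placeholders I made up for illustration – real pipelines are normally described in a tool-specific config (Jenkins, GitHub Actions, etc.), not hand-rolled Python.

```python
# Rough sketch of the CI/CD concept - NOT a real pipeline definition.
# The shell commands below (git, pytest, deploy.sh) are placeholders;
# substitute whatever your project actually uses.
import subprocess
import sys

def run(cmd):
    """Run one pipeline step; stop the whole pipeline if it fails."""
    print(f"==> {cmd}")
    result = subprocess.run(cmd, shell=True)
    if result.returncode != 0:
        sys.exit(f"Step failed: {cmd}")

# Continuous Integration: every change gets built and tested
run("git pull")      # grab the latest changes
run("pytest")        # run the automated test suite

# Continuous Deployment: if the tests pass, ship it
run("./deploy.sh")   # placeholder deployment script
```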
Security
The transformation of SDLC from “system” to “software” isn’t a big deal – but the “youthful arrogance” referenced at the start involved someone who seemed to think the idea of creating ‘secure software’ was something that happened only recently.
Obviously if you “program” the computer by feeding in punch cards – then “security” kind of happens by controlling physical access to the computer.
When the “interweb” exploded in the 1990’s the tongue-in-cheek observation was that d.o.s. (the “disk operating system”) had never experienced a “remote exploit.”
The point being that d.o.s. had no networking capabilities – if you wanted to set up a “local area network” (LAN) you had to install additional software that would function as a “network re-director.”
IBM had come up with “netbios” (network basic input output system) in 1983 (for file and print sharing) — but it wasn’t “routable” between different LANs.
Novell had a nice little business going selling NetWare, a “network operating system” that ran on a proprietary protocol called IPX/SPX (it used the MAC address for unique addressing – it was nice).
THEN Microsoft included basic LAN functionality in Windows 3.11 (using an updated form of netbios called netbeui – “netbios Extended User Interface”) – and well, the folks at Novell probably weren’t concerned at the time, since their product had the largest installed base of any “n.o.s.” — BUT Microsoft in the 1990’s is its own story …
ANYWAY if you don’t have your computers networked together then “network security” isn’t an issue.
btw: The original design of the “interweb” was for redundancy and resilience NOT security – and we are still dealing with those issues in 2022.
A “software design” truism is that the sooner you find an error (“bug”) in the software the less expensive it is to fix. If you can deal with an issue in the “design” phase – then there is no “bug” to fix and the cost is $0. BUT if you discover a bug when you are shipping software – the cost to fix will probably be $HUGE (well, “non zero”).
fwiw: The same concept applies to “features” – e.g. at some point in the “design” phase the decision has to be made to “stop adding additional features” – maybe call this “feature lock” or “version lock” whatever.
e.g. the cost of adding additional functionality in the design phase is $0 — but if you try to add an additional feature half-way through development the cost will be $HUGE.
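To put some numbers on that truism: the multipliers below are purely illustrative assumptions, not measured data – the only point is the shape of the curve, which bends sharply upward the later a bug (or new feature) shows up.

```python
# Illustrative only - these multipliers are assumptions, not measured data.
# The point is the shape of the curve: the later the change, the more it costs.
BASE_COST = 100  # pretend dollars to handle an issue caught at design time

phase_multiplier = {
    "design":         0,    # caught on paper: nothing built yet, nothing to rework
    "implementation": 1,    # rewrite some code
    "testing":        5,    # rework the code plus re-test
    "release":        25,   # patch, re-test, redeploy, notify users
}

for phase, multiplier in phase_multiplier.items():
    print(f"{phase:15s} ~${BASE_COST * multiplier}")
```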
Oh, and making all those ‘design decisions’ is why “software architects”/engineers get paid the big $$.
Of course this implies that a “perfectly designed product” would never need to be patched. To get a “perfectly designed product” you would probably need “perfect designers” – and those are hard to find.
The workaround becomes bringing in additional “experts” during the design phase.
There is ALWAYS a trade-off between “convenience” and “security” and those decisions/compromises/acceptance of risk should obviously be made at “design” time. SO “software application security engineer” has become a thing.
Another software truism is that software is never “done” – it just gets “released.” Bugs will be found and patches will have to be released (which might cause other bugs, etc).
Remember that a 100% secure system is also going to be 100% unusable. ok? ’nuff said