I can still remember how happy I was the first time my friends and I managed to program a Pong-like game on our Texas Instruments calculators. It took us weeks to figure out the algorithm and months to render something decent, almost playable, with zeros and ones on the screen.
At that time, we had to understand low-level languages that young folks today cannot even name. We had no debugging tools, and the only way to understand why our application crashed was to try to reproduce the case… on paper.
Later on, I studied at the only university in France that taught both Cobol and Java. Their philosophy was that however important Java was becoming (it was the early 2000s), it was still essential to understand the core concepts of IT. They also made their students learn language theory, but that was just for fun. In my last year there, only six years after programming my first Pong game, we had to produce an OpenGL 3D first-person shooter. It was nothing we would ever use later in our lives, but understanding the key concepts of 3D and being able to apply complex math mattered more than padding a CV. The whole project took us a couple of weeks.
With today’s technologies, you can build a playable level of a game with the latest lighting and texturing techniques in just a couple of hours, thanks to the power, diversity and flexibility of the available tools.
The same applies to pretty much everything that requires coding. We went from writing eighty-character lines with no objects to declaring variables and method calls as boxes in a visual editor.
Nowadays, most people learn IT the “fast way”, that is, with tools that provide auto-completion, give coding advice and include drag-and-drop GUI builders. Creating an application has never been easier. A couple of clicks here and there, a Google search or two, and voilà!
But is the time saved worth the knowledge sacrificed? What is the big picture in the long term? What will happen to hardcore experts?
While programming has become that easy, debugging has become harder. With all that generated code and programming conventions not always respected, things can get quite hard and costly when someone other than the original developer has to read and fix the code. How many of us have had to fix code containing variable names like txtBox1, txtBox2…, or even worse, ex-girlfriends’ names? How many hours are spent yearly debugging simple arithmetic because operator precedence rules were ignored? And those are just the easy ones; the most common and harder-to-find mistakes include, but are not limited to: global variables accessed and modified from anywhere, methods dozens of lines long, exceptions caught but silently discarded…
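To make two of those mistakes concrete, here is a small illustrative sketch in Python (the function names are invented for the example):

```python
def average_wrong(a, b):
    # Precedence bug: division binds tighter than addition,
    # so this computes a + (b / 2), not (a + b) / 2.
    return a + b / 2

def average_right(a, b):
    # Parentheses make the intent explicit.
    return (a + b) / 2

def parse_port(text):
    # Anti-pattern: the exception is caught but silently discarded,
    # so a typo in a config file quietly turns into a default value.
    try:
        return int(text)
    except ValueError:
        return 0  # the error vanishes; nobody will know why the port is 0
```

Here `average_wrong(2, 4)` returns 4.0 instead of the intended 3.0, and `parse_port("eighty")` returns 0 with no trace of the original problem — exactly the kind of bug that costs hours to track down.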
There are tons of resources out there for developers to learn from, and many acronyms to discover or investigate more deeply, like TDD (test-driven development) and BDD (behavior-driven development), often mentioned as keys to high-quality software, or SOLID (single responsibility, open-closed, Liskov substitution, interface segregation and dependency inversion), which describes the “first five principles” of object-oriented programming and design, and so many more. Thanks to the internet, knowledge is only a click away; motivation, however, has to be found somewhere else.
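As a flavour of what TDD looks like in practice, here is a minimal, hypothetical red-green cycle in Python: the tests are written first and fail, then the production code is written to do only what the tests demand.

```python
import unittest

# Step 2 ("green"): the simplest production code that makes the tests pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 1 ("red"): these tests existed before fizzbuzz() did,
# and failed until the function above was written.
class TestFizzBuzz(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_plain_numbers(self):
        self.assertEqual(fizzbuzz(7), "7")
```

The point is less the code than the discipline: every behaviour exists because a test demanded it, which is why TDD is so often associated with high-quality software.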
In the long term, without proper training, programmers won’t learn much more about the fundamentals and might remain somewhat limited, their errors becoming someone else’s to fix and their lack of knowledge costing others time.
Most companies understand this and have found an easy solution: encouraging their employees, through various means, to develop their skills via seminars, conferences, classes and basically everything that can potentially make one better. This tendency started in countries that understand that programming can be a career path in itself, not merely a step on the staircase that leads to management. Unfortunately, training individuals can be costly and not everyone can afford it. That is where dedication and the drive to learn kick in: buying a book and reading it is affordable for most.
The other problem is that as time goes by, experts will become rarer and busier, leaving new programmers to themselves, in charge of their own fixes, eventually leading to chaos and the destruction of the world as we know it.
As early as 1995, Niklaus Wirth stated that “software is getting slower more rapidly than hardware becomes faster”. A variant by David May is even more explicit, drawing a corollary to Gordon Moore’s law: “Software efficiency halves every 18 months, compensating Moore’s law”.
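A back-of-the-envelope sketch of that corollary (assuming, purely for illustration, that hardware speed doubles and software efficiency halves on the same 18-month cycle):

```python
def net_speedup(years, hw_doubling_months=18, sw_halving_months=18):
    # Hardware gets 2x faster every hw_doubling_months;
    # software needs 2x more cycles every sw_halving_months.
    months = years * 12
    hardware_gain = 2 ** (months / hw_doubling_months)
    software_loss = 2 ** (months / sw_halving_months)
    return hardware_gain / software_loss
```

Under these assumptions, after nine years the hardware is 64 times faster, yet `net_speedup(9)` is exactly 1.0: the user perceives no improvement at all, which is precisely May’s point.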
All this suggests that knowledge is the key to being a good programmer, and that IT, like every science, keeps evolving — as should the people who work with it.