This is part 5 of an ongoing series detailing my journey from total noob to hobbyist coder. I share my thoughts as I learn the basics of programming. You can find the rest of the series here.
One of the challenging things about learning to code is that while you are still mastering the basics of fundamental languages, the whole ecosystem seems to be evolving at a breakneck pace. By the time I get a handle on CSS, everyone will scoff at folks who don’t use React. Why learn C or Python when I could be using Go? Why bother with Go when I could devote myself to Rust instead? If Ruby is the language that helps beginners fall in love with programming, why didn’t anyone tell me that at the start?
Charles Martin, a veteran software engineer and consultant, wrote a piece for our blog this week that touched on this same idea in a slightly different way:
Start by accepting that change is inevitable.
Any project, no matter how well planned, will result in something that is at least somewhat different than what was first envisioned. Development processes must accept this and be prepared to cope with it.
As a consequence of this, software is never finished, only abandoned.
We like to make a special, crisply-defined point at which a development project is “finished.” The reality, however, is that any fixed time at which we say “it’s done” is just an artificial dividing line. New features, revised features, and bug fixes will start to come in the moment the “finished” product is delivered. (Actually, there will be changes that are still needed, representing technical debt and deferred requirements, at the very moment the software is released.)
Terrifying! These realizations can, if you’re not careful, lead to a sort of paralysis, a Zeno’s Paradox of programming in which the start of any journey inevitably reveals the impossibility of an end.
I was chatting with Teresa Dietrich, our new chief product officer, about this conundrum. She has been working in technology for three decades and has seen plenty of trends come and go. “Every few years we have this brand new architecture and it’s going to change the world, right? The truth, however, is that you cannot remove complexity from a system.” Every system involves not just code, she explained, but the folks who write it, maintain it, and rely on it. “It’s people, process, and technology. What happens when a new software architecture comes along? You have to ask, where did you move the complexity?”
This way of thinking about the situation struck a chord with me. It’s almost a fundamental law of nature: complexity is a constant. Bringing a new programming language into the picture might solve some pain points or produce some gains in speed and cost. But now your code base has five languages, not four. How does that trickle down across the expertise of the hires you’ve made over the last few years? And what about the support engineers who sit on another continent, handling crises for customers in seven different spoken languages? “If we really decide a new coding language is a priority and is going to move the needle, can we step back and find another language to deprecate at the same time?” asked Teresa.
Our brains learn to recognize patterns over time, and that makes certain approaches seem more intuitive. Here’s a nice quote from an article by Sandy Maguire, which has been introducing me to if, and, or, and nand.
“While computer science has very little to do with snowmen, it has everything to do with patterns. The study of computer science, like mathematics, is one of overwhelming self-referentiality. The patterns that seemed so difficult and novel yesterday are today’s run-of-the-mill building blocks. Like a snowball rolling down a mountain, the student of computer science cannot help but gain momentum as these concepts begin to converge in one huge avalanche.”
There are lots of ways to express this universal truth, but I tend to favor dark humor. Let me know your own stories about code, complexity, and infinite choice in the comments.
Tags: bulletin, change management, go, rust, stack overflow, worst coder in the world
All languages were created equal, but some were created more === than others.
— Arne Mæhlum (@arnemahl) February 1, 2020
27 Comments
You’ve completely misunderstood how software development works if you think the existence of language Y means language X is no longer relevant. It isn’t a zero-sum game where one language takes the place of another. Are you not aware that people still use C, Fortran, and even Cobol? People even still use jQuery!
Some languages are still around because they are useful and can be tuned for every last bit of speed, like C and Assembly. Languages like this can also be used for bare-metal programming, which does still happen, and they avoid the memory overhead that more modern languages carry. Arduinos and similar microcontrollers are programmed in C and C++ because the compiled output is so light that a program can fit in 8k of space.
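For a concrete feel, here is a minimal bare-metal blink sketch in C, assuming an ATmega328P-style AVR part (register names vary by chip); built with avr-gcc -Os, it comes out to a few hundred bytes:

    #define F_CPU 16000000UL          /* assumed 16 MHz clock; adjust per board */
    #include <avr/io.h>
    #include <util/delay.h>

    int main(void) {
        DDRB |= (1 << DDB5);          /* set PB5 (the Uno's on-board LED pin) as output */
        for (;;) {
            PORTB ^= (1 << PB5);      /* toggle the LED */
            _delay_ms(500);           /* busy-wait half a second */
        }
    }

No runtime, no garbage collector, no heap; just direct register writes.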
Other languages exist simply because businesses are too stubborn or afraid to change, or can’t see that they are spending more money on the old tech than a rewrite would cost, which is why Fortran and Cobol persist. And they will continue to exist until those businesses wake up to find that there aren’t any programmers left to work on their ancient systems.
Yet other languages are in the background, running where you can’t see. There’s still a lot of C running behind modern code, and libraries like jQuery are what some other libraries and frameworks are built on. I’m not making a judgement call on whether this is good or not, I’m just saying that it happens. And often enough, people are unwilling to move away from the hidden dependencies, either because they like them or because they just don’t know the dependency exists.
Then there are the languages that have been replaced, like Pascal, Ada, B, COMTRAN, and hundreds of others. And there are probably dozens on the way out, like Perl, Ruby, and, yes, Cobol. Both Cobol and Fortran are losing the battle for existence. Not many people are learning them, and the people who do know them are either retiring or dying of old age. Those languages will eventually be replaced and relegated to the annals of history.
So what I’m saying is that some languages should be replaced, and many others already have been. The languages you and I currently know might not exist in 20-30 years, and that’s perfectly well and good. There are some languages I haven’t touched in close to 20 years, including Pascal, and that’s fine by me.
https://en.wikipedia.org/wiki/Timeline_of_programming_languages
Continuing to use FORTRAN or COBOL is not necessarily a sign of laziness. For the same reason you wouldn’t write a low-level tool in a fancy language with expansive libraries, it doesn’t make sense for a lot of the banking/finance industry to rewrite their programs.
The programs written in FORTRAN do their job quickly and correctly. To rewrite them would introduce unnecessary risk. It’s not like these applications are slow, or fail often (if they did, our economy would surely suffer). Replacing them with a new language to satiate the desires of a forward-thinking software developer is not that industry’s priority. Its priority is to keep the systems online and accurate.
“Anything that’s available is obsolete” is what I heard many years ago applied to technology, and it is true for programming languages also.
So you just have to decide which obsolete technology or language you are going to use.
I think LLVM is the tool that is turning language choice into a preference: Rust, Crystal (a Ruby-like language), Clang (the default C compiler on Mac), Emscripten (JavaScript), …
These languages all compile into the same type of code. Yes, I said code. That could be JavaScript (eventually it will come back), WebAssembly, x86 machine code, RISC-V, …
Why learn multiple languages? Well, how fine-grained does your control over memory need to be? Do you need type safety? Etc.
If you plan on building an OS, web server, or web browser engine, like Servo, V8, or Gecko, Rust has excellent syntax to assist with security.
If you want embedded microcontrollers, including Arduino: Clang.
For general web development: JavaScript or Crystal/Ruby.
Languages have features to assist with certain tasks: quick development, security, memory management for constrained systems. I think because a language’s purpose is rarely explained, it causes confusion, or a gotta-learn-’em-all Pokémon mentality.
Eventually they will become preferences for whatever best completes a job, including what a person or team knows best.
Low-code will not eliminate these. Try Node-RED or Apache NiFi. Something doesn’t work? Drag in an item that lets you write custom code to accomplish it.
Scary stuff. I can only hope that the brains of new programmers are larger than those of their predecessors and can cope with this always-increasing complexity, because mine sure as hell can’t.
I admit I’m a pensioner, so I only code for fun now, though that doesn’t always mean trivial stuff. My own experience has convinced me that adding a complex framework to an already complex language is only going to result in unmaintainable products, and sooner rather than later, at that. I’ve seen it happen too often.
I believe the answer is to reduce complexity, not increase it, and my way of doing this is with a better language – by which I mean a high-level DSL. They work for databases (SQL) and in spreadsheets (Excel macros) so why not elsewhere?
Since nobody else was interested I wrote my own DSL with which to build Web apps. Getting old has its problems but one considerable advantage is in not having to run with the herd in a futile attempt to keep up with the “conventional wisdom” (which is often less than wise). At least I can say “I did it my way”.
I agree! My first experience with coding was around 1988 (best I can remember), in high school. Before there was an i-anything, I typed gibberish into an Apple computer and hit ‘Return’. If I’d keyed each character correctly, I’d hear a few seconds of Jingle Bells. My last job in tech was around 2000. I worked in a medical clinic as they were switching from paper medical records to electronic, and my job was to teach the physicians, who stubbornly complained they didn’t have time to check emails more than once a week, to use the software. I became a real estate agent and went in a completely different direction. Now, at times, I wonder if I’ve become as stubborn as they’d been. When I read about programming or attempt to learn anything current, the gap of time and what I missed in the last 18 years feels as if I’d need another 18 years to catch up to today. It’s as if many parts were added to address certain problems on top of underlying parts that only partially worked, and nothing was ever defragmented because it took too long…
Hi Graham, I agree that complexity is the ultimate limit on what humans can get working. I often say that we can make things simpler and gain benefits, but I suspect that the deciders not only don’t share this preference, but actually don’t understand the parts, tools, lifecycle and so on.
Also, we have created incredibly complex things like aircraft, spacecraft, financial systems, governments and so on that have succeeded so far, and no one sees the denouement. Martin Fowler has a cute quote though: “We are writing tomorrow’s legacy code – today.”
Yes, a DSL or roll-your-own framework is an excellent means to making things simpler, more maintainable, and more reusable. But then you have to hire someone almost as clever as yourself to deal with it, and businesses see this as an unwarranted risk. Cheers! (I am not actually British)
35 years ago I interviewed for a programming position. The job was undesirable because they were using an in-house language. Frankly that’s a career ender. Fast forward and I create a new DSL about once a year. Productivity is high, migration is simple and inexpensive, and cross platform is trivial. Performance is mostly a hardware problem.
I’ve been coding since high school in 1970. I must know 25 languages. The first (Fortran IV, now called Fortran 66) took several years to even half master. But in the 1990s I reached a sort of implosion. Now a new language takes days to get up to speed in. My advice is to really learn a couple of languages and it will become a non-issue.
Instead of introducing complexity with the next big thing (look at web technologies), focus on patterns, or so-called algorithms.
Concrete knowledge becomes obsolete.
My favorite response to a similar question was “Same sh*t, different language.”
Which is why I stick with C, the solutions I build will perform and will compile. I always build for the lowest common denominator, because any advanced solution (worst of all some ‘modern language’) inevitably will break under its own weight.
It’s worked for me for years; scalability and reusability begin with eliminating slow, heavy, and complicated stuff. My codebase is coherent, has as few dependencies as possible, and is written in the simplest, most efficient form.
Always run on vanilla configurations, reduce dependence on third party libraries, and keep it stupid simple.
You have to appreciate the irony. All these new frameworks, libraries, and paradigms keep cropping up in order to “simplify” the process, and now the most stressful and complex part of development (for me at least) is trying to keep up with them all. How many times can this one wheel be reinvented?
Among all the “next big things” that prove to be passing fads are a few real gems whose potential is never realized because most developers never really get them. I’d even put OOP in this category. Someone described the unrealized potential of OOP as the difference between real OO thinking vs. merely writing procedural code with classes. Twenty years after OOP was the buzzword du jour, most developers are still writing procedural code with classes.
And what book(s) do we read to understand real OO thinking?
It’s called “Hype-Driven Development” (HDD), and it’s driven by people who do not learn the basics of their current programming language, fail to solve the problems in front of them, think they can evade this by switching to something different (most likely something more complex), run into new problems they don’t understand how to solve in their new framework or language, and fail again at a different spot.
My advice: really learn and understand your current system, learn to solve (and later prevent) problems in it by design, and get out of the HDD loop.
It’s better for your project, better for your heart, and better for your family life.
Before I left the codeface for a seat by the fire I was paid to write code in 17 different languages. No matter which I was currently working in, I always missed at least one feature of another.
I first heard the idea that adding a new language/technology/framework/what-have-you should be accompanied by the subtraction of another some time in the previous century. Management had finally got the message that all these different languages comprising a “system” also made it impossible to maintain.
The solution is to actively look for things to get rid of. This can be sold as elimination of technical debt.
“By the time I get a handle on CSS, everyone will scoff at folks who don’t use React…” That about sums up the entire post.
Do construction companies buy the latest and greatest tools every time they come out? Do airlines completely replace their airplanes every time Boeing comes out with a new one? Do chefs completely replace knives, pans, pots, etc., every time a new one comes out? Why do we sell used cars? Why do we need an older version of the iPhone? 99.9% of the world doesn’t work like this; why would a coding language be any different?
Hi, I was wondering if this debate is at least partially solved by Visual Studio .NET?
Microsoft says that no matter what language you choose, the same .NET intermediate code will be generated.
I find this very good news: I could choose an aging language like Visual Basic, yet the end result will be similar to C#.
What do you guys think, just marketing? 🙂
Xavier
Note: I’m a hobbyist and will go for the fastest development tool; I don’t really care if it completes in 3 seconds a task that C can do in 1 second. In the meantime, I don’t need 4 lines just to say “hello world” in a window.
Certainly one should avoid introducing unnecessary/artificial complexity whenever possible, but there will always be a certain baseline level of complexity that is inherent to the problem(s) your software is trying to solve.
Managing that inherent complexity is often a matter of minimizing the worst-case “complexity density” — i.e. rather than trying to put too much functionality into any single spot (where even if it works, trying to understand that code will destroy the mind of any coder who looks at it too closely), you want to spread the complexity out across multiple smaller modules, each of which does its own specific thing in a well-defined and easily-explainable manner. Once they are working, those simple modules can be assembled together to solve the higher-order problems (and then, if need be, that aggregation can itself be grouped together with other aggregations to solve problems at an even higher level, and so on).
That way the software engineers can “switch gears” to work at different levels of the hierarchy as necessary, rather than having to keep all aspects of the codebase in their heads at once, which would be a superhuman task for any non-trivial program.
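Here is a toy C sketch of that idea (hypothetical helpers, just to show the shape): each small function owns one well-defined piece of the complexity, and the higher-level routine is merely an assembly of the pieces.

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Each helper does one small, easily explainable thing. */
    static void trim_newline(char *s) {
        s[strcspn(s, "\n")] = '\0';
    }

    static void to_lower(char *s) {
        for (; *s; s++)
            *s = (char)tolower((unsigned char)*s);
    }

    static int is_blank(const char *s) {
        for (; *s; s++)
            if (!isspace((unsigned char)*s))
                return 0;
        return 1;
    }

    /* The higher-order routine just composes the pieces. */
    static int normalize_line(char *s) {
        trim_newline(s);
        if (is_blank(s))
            return 0;            /* nothing worth keeping */
        to_lower(s);
        return 1;
    }

    int main(void) {
        char line[256];
        while (fgets(line, sizeof line, stdin))
            if (normalize_line(line))
                puts(line);
        return 0;
    }

Nobody reading normalize_line has to hold the character-level details in their head, and nobody reading to_lower has to care about the pipeline above it.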
The only constant is change.
See, I knew I should have taken COBOL instead of Fortran when I had the chance…
I remember my first programming job. Someone asked me to set up a database, so, knowing what little C++ I knew, I looked up how to write files to disk and how to read them back, figured I’d have to come up with a file structure, and gave up soon afterwards. Today I’m learning Docker. OMG, what’s a database => give me a Nextcloud, don’t bother exposing the database, now how do I get Gmail to accept SMTP requests?
Programming languages are tools. Each one has different strengths and weaknesses, and each was typically created with a specific problem domain in mind. Perl, Fortran, and COBOL are all examples of this. I wouldn’t generally want to write a C program to replace a COBOL program, because COBOL is really designed to process flat files that contain different ‘record types’. While you can do that with C, or any language for that matter, COBOL is designed for it and makes it straightforward. Likewise, if I had a problem that required lots of complex pattern matching and transformation, I wouldn’t choose COBOL or FORTRAN, or even C. Instead, I’d probably go straight to Perl, which is designed to do this succinctly.
In short, choose the best tool for the job.
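To make that concrete, here is a hypothetical C sketch of reading one fixed-width record by hand, spelling out the offsets and copies that a COBOL record layout would declare in a few lines (the field widths here are invented for illustration):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical fixed-width layout: 1-char record type,
       10-char account id, 8-char amount field. */
    struct payment {
        char type;
        char account[11];
        char amount[9];
    };

    static void parse_payment(const char *line, struct payment *p) {
        p->type = line[0];
        memcpy(p->account, line + 1, 10);   /* slice by hand... */
        p->account[10] = '\0';
        memcpy(p->amount, line + 11, 8);    /* ...offset by offset */
        p->amount[8] = '\0';
    }

    int main(void) {
        const char *line = "P0000123456  125.00";
        struct payment p;
        parse_payment(line, &p);
        printf("type=%c account=%s amount=%s\n", p.type, p.account, p.amount);
        return 0;
    }

Workable, but every offset is a chance to be off by one; COBOL makes the layout itself the declaration.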
Based on the complexity of the problem, we need to make the right choice.
A problem that is complex for one person is easy for another; it depends on the skills we develop and apply through our experience and expertise.
When you first learn a language it feels complex; over time it becomes easy.
Complexity is not a constant: when an unknown becomes known, the complexity disappears.
Complexity is in our mindset; the only constant is change. From your article: “Start by accepting that change is inevitable.”
Reduce complexity by coding it the way you would have said it. I miss VB; no one has mentioned it in this thread, but there is something special about a programming language that doesn’t use semicolons and maintains a level of human readability. It’s not 1998, and all languages now have about the same capabilities (added for all you C++ snobs who are cringing at the thought of VB).
Nice conversation, thank you.