Requirements volatility is the core problem of software engineering
The reason we develop software is to meet the needs of some customer, client, user, or market. The goal of software engineering is to make that development predictable and cost-effective.
It’s now been more than 50 years since the first NATO Conference on Software Engineering, and in that time many different software engineering methodologies, processes, and models have been proposed to help software developers achieve that predictable and cost-effective process. But 50 years later, we still see the same kinds of problems we always have: late delivery, unsatisfactory results, and complete project failures.
Take a government contract I worked on years ago. It was undoubtedly the most successful project I’ve ever worked on, at least from the standpoint of the usual project management metrics: it was completed early, it was completed under budget, and it completed a scheduled month-long acceptance test in three days.
This project operated under some unusual constraints: the contract was denominated and paid in a foreign currency and was absolutely firm fixed-price, with no change management process in the contract at all. In fact, as part of the contract, the acceptance test was laid out as a series of observable, do-this and this-follows tests that could be checked off, yes or no, with very little room for dispute. Because of the terms of the contract, all the risk of any variation in requirements or in foreign exchange rates was on my company.
The process was absolutely, firmly, the classical waterfall, and we proceeded through the steps with confidence, until the final system was completed, delivered, and the acceptance test was, well, accepted.
After which I spent another 18 months with the system, modifying it until it actually satisfied the customer’s needs.
In the intervening year between the contract being signed and the software being delivered, reporting formats had changed, some components of the hardware platform had been superseded by new and better products, and regulatory changes had been made to which the system had to respond.
Requirements change. Every software engineering project will face this hard problem at some point.
With this in mind, all software development processes can be seen as different responses to this essential truth. The original (and naive) waterfall process simply assumed that you could start with a firm statement of the requirements to be met.
W. W. Royce is credited with first describing the waterfall in his paper “Managing the Development of Large Software Systems,” and the illustrations in hundreds of software engineering papers, textbooks, and articles are recognizably the diagrams that he created. But what’s often forgotten about Royce’s original paper is that he also says “[The] implementation [in the diagram] is risky and invites failure.”
Matching your process with your environment
Royce’s observation—that every development goes through recognizable stages, from identifying the requirements and proposed solution, through building the software, and then testing it to see if it satisfies those requirements—was a good one. In fact, every programmer is familiar with that, even in their first classroom assignments. But when your requirements change over the duration of the project, you’re guaranteed that you won’t be able to satisfy the customer even if you completely satisfy the original requirements.
There is really only one answer to this: you need to find a way to match the requirements-development-delivery cycle to the rate at which the requirements change. In the case of my government project, we did so artificially: there were no changes of any substance, so it was simple to build to the specification and acceptance test.
Royce’s original paper actually recognized the problem of changes during development. His paper describes an iterative model in which unexpected changes and design decisions that don’t work out are fed back through the development process.
Realism in software development
Once we accept the core uncertainty in all software development, that the requirements never stay the same over time, we can begin to do development in ways that can cope with the inevitable changes.
Start by accepting that change is inevitable.
Any project, no matter how well planned, will result in something that is at least somewhat different than what was first envisioned. Development processes must accept this and be prepared to cope with it.
As a consequence of this, software is never finished, only abandoned.
We like to make a special, crisply-defined point at which a development project is “finished.” The reality, however, is that any fixed time at which we say “it’s done” is just an artificial dividing line. New features, revised features, and bug fixes will start to come in the moment the “finished” product is delivered. (Actually, there will be changes that are still needed, representing technical debt and deferred requirements, at the very moment the software is released.) Those changes will continue as long as the software product is being used.
This means that no software product is ever exactly, perfectly satisfactory. Real software development is like shooting at a moving target—all the various random variations of aim, motion of the target, wind, and vibration ensure that while you may be close to the exact bullseye, you never ever achieve perfection.
Making our process fit the environment
Looked at in this light, software development could seem to be pretty depressing, even dismal. It sounds as if we’re saying that the whole notion of predictable, cost-effective development is chasing an impossible dream.
It’s not. We can be very effective developers as long as we keep the realities in mind.
The first reality is that while perfection is impossible, pragmatic success is quite possible. The Lean Startup movement has made the MVP—“minimum viable product”—the usual goal for startups. We need to extend this idea to all development, and recognize that every product is really an MVP: our best approximation of a solution for the current understanding of the problem.
The second reality is that we can’t really stop changes in requirements, so we need to work with the changes. This has been understood for a long time in actual software development—Parnas’s criterion for identifying modules is to build each module to hide a requirement that can change. At the same time, there have been repeated attempts to describe software development processes that deliver successive approximations: incremental development processes (I’ve called this “The Once and Future Methodology”).
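Parnas’s rule can be sketched in a few lines. This is a minimal illustration with invented names (none of it comes from the article): the volatile requirement (here, a report format that regulators might change) is hidden behind one module, so a change in that requirement touches only that module.

```python
# Minimal sketch of Parnas-style information hiding (illustrative,
# invented names). The volatile requirement -- the report format --
# lives in exactly one module.

class ReportFormatter:
    """Hides the decision that can change: what the report looks like."""

    def format(self, records):
        # Today's rule: one line per record, comma-separated.
        return "\n".join(f"{r['id']},{r['total']}" for r in records)


def monthly_report(records, formatter=None):
    """Callers depend on the stable interface, never on the format itself."""
    formatter = formatter or ReportFormatter()
    return formatter.format(records)


print(monthly_report([{"id": 1, "total": 90}, {"id": 2, "total": 120}]))
# prints:
# 1,90
# 2,120
```

When the reporting format changes (as it did on the government project above), only the formatting module is rewritten; the rest of the system never notices.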
Once we accept the necessity of incremental development, once we free ourselves from the notion of completing the perfect solution, we can accept changes with some calm confidence.
The third and final reality is that all schedules are really time-boxed schedules. We go into a development project unable to say exactly what the final product will be. Because of that, no early prediction of time to complete can be accurate, and all final deliveries will be partial deliveries.
Agile development to the rescue
The Agile Manifesto grew out of recognition of these facts. Regular delivery of working software is part of this recognition: a truly agile project has working partial implementations on a regular basis. Close relationships with the eventual customer ensure that as changes in requirements become manifest, they can be fit into the work plan.
In an agile project, ideally, there is a working partial implementation starting very early in a project, and observable progress is being made toward a satisfactory product from the first. Think of the target-shooting metaphor again—as we progress, we’re closer and closer to the center ring, the bullseye. We can be confident that, when time is up, the product will be at least close to the goal.
Don’t forget function growth.
Requirements volatility is the core problem of ALL engineering.
Unless you’re building a bridge. Or a house. Or a car or …
> Unless you’re building a bridge. Or a house. Or a car or …
Driven any land yacht over a Roman bridge lately? Used Wi-Fi in a house with large stone walls?
Actually, software is relatively easy to adapt to volatile requirements, e.g.:
Oh, that bridge you are building, it now needs to allow big ships up the river …
Didn’t we say, all these houses needed garages, you’ve built them too close together …
This new sports car you are launching, it now needs to be zero emissions and run off batteries …
You get the idea.
Yes, I do get the idea. If it’s software, no problem; we’ll just write some new code. If it’s a bridge, well, I guess you should’ve thought of that before we built it.
That’s my take: the perception that software is simple to change (which stems from users not understanding its inherent complexity, and from the fact that it is, up to a certain point) is what makes requirements volatile.
Well, at least they don’t want the bridge to rotate upon its axis…roundy-round.. 🙂
Nope, also for houses (renovations), cars (that’s why model years are “a thing”) and even bridges (extra lanes, bigger loads).
Is it that hard for all parties involved to understand the realities and accept some type of cyclic process?
Flexibility is expensive both cash and resources.
E.g., all those DOS games for SVGA and Sound Blaster: should they have been written with a very abstract concept of the screen and the sound?
The US Constitution tried to be flexible, but witness the endless arguments about “what the framers intended.”
Yes, in most cases that I experienced. If you tell a customer that his requirements will change, most would not believe you, and would fear that you say that only to pull more money out of them.
We’ve been beating this horse for at least 35 years. Scrum was introduced by that name in 1986, and earlier versions existed for a decade before that. The real challenge is getting management buy-in for this proven solution. Why do so many companies (and nearly all governments) still cling so tightly to waterfall? As far as governments go, it should be easy enough to convince voters to exert pressure – or would be if they could reason quantitatively. For example, waterfall is probably the #1 reason HealthCare.gov (just the site itself, not the health care program it supports) cost taxpayers at least $800 million, when one-thousandth of that budget would have been extremely generous.
After years of managing projects in the government, banking, entertainment, and lead-generation industries, I’ve found that one of the roots of the project-failure problem is that customers generally don’t really know what they need. They usually know they wish what they had worked faster, or looked prettier. Frequently they hear about some new toy at some conference, and come back insisting it has to be implemented later today, flawlessly.
What they don’t think about is whether that shiny new toy provides required functionality; whether it will work out of the box or will need serious customization (they aren’t willing to spend adequate time/effort to define); they don’t want to hear that it will take time to properly define the requirements; and they think user testing is something someone else (with more time on their hands) should do instead of them.
And then they are the first to complain their chosen new system doesn’t work as expected after T2P.
Which was all avoidable, if the customer defined their needs (instead of a preferred system) and allowed their IT Department to come up with limited menu of GOTS/COTS or custom-built solutions that matched the existing architecture better.
Agile was a band-aid, IMO, that introduces failure risk, because everyone wants everything “tweet” style these days: put requirements out in 140 characters or fewer, and even then not everyone can understand them. Then do your next tweet… and the next. Then they realize they forgot some important functionality that will require change requests impacting each previously implemented “tweet” cycle. Wash, rinse, repeat.
Waterfall may not be perfect, but at least it forces a 10,000-foot (end-to-end) look at the entirety of a project, and that view many times helps ferret out potential missed requirements before they negatively impact the project.
I also agree with Tony H.’s observations, especially in that most of these problems have been around for a long time, and Agile isn’t the magic fix everyone likes to think it is.
Unfortunately, it’s the project managers who too frequently end up bearing the brunt of the fall-out of failed projects…that’s why I personally chose the exit ramp on project management work. Life has enough stress without killing yourself to implement a “must-have” system that will probably be replaced again within the next 3-5 years anyway, after the next shiny new toy emerges at some conference.
I think expecting the customer to know what they want in a vaguely defined system when they have no understanding of programming or software is the height of folly. It’d be like if a team of space engineers expected you to give them all the technical implementations of how a space shuttle should work. You could probably suggest it has a rocket engine, a round shape, is capable of entering orbit with a payload, maybe carrying a crew, but you’re not going to know anything about fuel intermix ratios or how the valves inside every component work.
I find the easiest way to deal with requirements creep and feature creep is to give the requesting customer two things:
1) Recommendations on what to implement and why (you want X because it solves A, B, C problem and gives us N,M,O benefits)
2) Limiting requirements [or giving realistic feedback on what is actually possible] (whilst X can be done, it has a poorly defined scope, it’s obsolete, it costs half a million to build, no-one on the team specialises in it, etc)
By telling them the suggested course, you’re eliminating the cognitive burden of them trying to work out where to go. By telling them why a process might fail or is too excessive, you’re warning them away from pitfalls (too many people subscribe to the malicious client ‘trial and error’ where they purposefully say nothing and let the client’s idea fail). If the client is sensible, they will listen to the proposals and actually adopt them – you can win half your software battles this way.
If for some reason they override either one and you’ve warned/advised, when invariably it does go wrong, you will have the paper trail to back you up, and the client might learn from their mistake to trust your judgement in future and avoid pitfalls.
>customers generally don’t really know what they need
I agree that this is a major part of the problem, but I see waterfall merely as a way of papering over it. Waterfall allows customers to hand off a set of requirements and let the developers sort out the challenges and conflicts. Agile forces the customer to deal with the reality of conflicting and changing requirements, and they don’t like this even though they’d spend less and end up happier.
Very well said. May I print this out and stick it on the wall in our office?
Me: So how would you do this calculation manually? Are the price increments ignored, or just delayed until after the grace period?
User looks at me like I’m speaking Sanskrit. Through a sandbag.
Me: could you write it on the board?
User: I need a menuscreentransactiontable that …
Me: Does it go like [scribble] this graph or [scribble scribble] this graph?
User: What’s that along the bottom? It doesn’t look like a menuscreentransactiontable.
Me: Umm, time.
User: What’s that up the side?
Me: the price.
User: I need a menuscreentransactiontable …
Sad, but true to the bone… The main issue is incompetence being ignored among the people engaged in the initiative, and no clear idea of what is expected.
Some observations. First, if agile were “the answer”, there should be fewer projects that fail to meet client quality requirements. I’ve worked on quite a few agile projects (it seems to be the default nowadays) and I would argue that, if anything, project delivery performance is worse, not better.
Agile as a methodology does two things that developers like. First, it sidesteps the whole fixed-price / fixed-delivery thing that most customers would much prefer; second, rightly or not, it puts ownership of risk around change control completely with the business owner.
Agile seems to me, somewhat ironically to work best where either development cycles are really short (eg BI report development) or where requirements are more stable such as functional migration projects.
It is a recipe for disaster when you have a startup business paying a vendor to develop “agile” when the product owner either has no idea or wants to “see something and react”. Customer goes, “yeah, I asked for a picture, but I really wanted landscape, not portrait”. Then, “I like that shade of blue, but could it be a bit darker?” Vendor, in both cases: “yes, of course, we will write new stories”. Six million in and nothing to show for it.
Call me cynical, but I have a consulting practice where most of my revenue comes from picking up projects where MVP “solutions” get pushed into production and the customer thinks they are “done” and the developers know full well it’s not. Won’t scale. Lots of quick fixes that really needed to be tidied up but weren’t. Buggy code that somehow got through UAT. Badly written orchestrations, indexes or scripts that *ahem* work but run like dogs or chew up insane amounts of resources.
I have a simple test I use whenever I meet a potential client. I just ask to see how their automated test process is maintained. More often than not, there are no automated test tools available. What happens? Developers may do adequate functional testing (there’s that MVP thing again) then “smoke test” in whatever environment that passes and the story gets signed off. This is a disgraceful practice, but quite widespread in my experience. Nothing drives a client up the wall faster than things that were working last month suddenly stop working when an apparently unrelated release gets pushed out without decent regression testing.
This has big problems when it comes to UAT, for example. A window is agreed for UAT. Day 1, a bug is found. Yep, all good, we will get onto that, and sure enough, three days later the bug is fixed in the next release. The original bug gets tested, but the testers have no way of knowing that other functionality that has already been tested and signed off no longer performs the same way (or even works at all). What generally happens is that as the bow wave of open test cases gets bigger and the end of the UAT window gets closer, people start to prioritise some test cases up, and others move into a folder called “nice to do”, i.e., they become aspirational test requirements.
The few shops that do have automated test tools generally maintain their scripts to the extent to which the customer is willing to pay for it. So over time the utility decays as the functionality in the last few releases doesn’t get reflected in the scripts as we have been so busy with urgent dev work etc etc.
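The practice the commenter is describing, pinning signed-off behavior so a later release cannot silently break it, can be sketched with plain test functions. The function and its pricing rule below are my own invented example (loosely echoing the grace-period discussion earlier in the thread), not anything from the comment:

```python
# A hypothetical signed-off story: price increments are delayed until
# after a grace period. Once accepted, its behavior is pinned by
# regression tests that every subsequent release must still pass.

def grace_period_price(base, increment, days_since_start, grace_days=30):
    """Return the price, applying the increment only after the grace period."""
    if days_since_start <= grace_days:
        return base
    return base + increment


# Regression tests: re-run automatically on every release, so an
# "unrelated" change that alters this behavior fails immediately.
def test_no_increment_inside_grace_period():
    assert grace_period_price(100, 5, days_since_start=10) == 100


def test_increment_applies_after_grace_period():
    assert grace_period_price(100, 5, days_since_start=45) == 105
```

Run under a test runner such as pytest on every build; the point is exactly the one made above: functionality that was working last month cannot quietly stop working when an apparently unrelated release goes out.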
I might add that a lot of these issues were around long before agile.
I do project turnarounds for a living. Again, I like agile, as it forces ownership of functional requirements onto the product owner. Where the PO is effective and competent, this can be very powerful. However, in 30-odd years, the single most common root cause of project failure (not just technical projects) is absentee management. Managers love starting things, but often move on to the NBI (Next Big Issue) and don’t stay focussed on projects in flight. Absentee management is the kiss of death for any project, especially long projects. Rubbish change control is next on the list of how to murder a project.
Don’t know that Agile really fixes either of those issues, lol.
>project delivery performance is worse, not better
That depends how you measure delivery. With waterfall, the developer delivers something that meets some interpretation of the fixed requirements. With agile, the customer is forced to acknowledge that the requirements are not fixed and therefore the project is never really done. But what is delivered is much closer to their actual requirements at the time of delivery.
Excellent post. I would like read more about similar topics.
The waterfall is still not a method. Requirements volatility is why we use engineering development approaches. And those approaches have been incremental and iterative since engineering began. It’s actually called “The Engineering Method”.
When you build a house, that is “mobilization”; the “development” part is a different phase.
Being a scrum master for many years I hear lots of Agile talk. All that is described is very old traditional software engineering techniques with new marketing.
No plan survives contact with the enemy applies to software development too…
@Tony Harrison, thanks for your comment. It is very useful to hear from someone who’s been there. And compare to my current day-to-day. XD
>>the single most common root cause of project failure (not just technical projects) is absentee management
+ 1 for the whole comment
Breaking news from 1992
Fifty. See, e.g., https://smartbear.com/blog/develop/agile-the-once-and-future-methodology/
Let’s say I am an entry-level software engineer. What does this mean to me?
Can you give examples of how my work should have been different in each of the approaches you mention?
The article appears to me like a teaser for a paywall. It starts with a problem description and then simply says that agile development is the solution. Hmm.
But how and why?
I was forced to switch from waterfall to Scrum at Vodafone. Somehow the management was taught that agile development methods make software development faster, cheaper, more flexible, and better.
But they did not understand that agile development includes all related departments not only the programmers. How can we deliver sprints when there is nobody to test them? How can we discuss what to include in the next delivery, if the project owner is absent? How can we add time when there is nobody paying for it?
Later at QVC I had almost the same situation. The only difference was that the project owner was reachable to discuss changes. But all the other necessities were missing.
If someone says that agile is the solution, then please explain what that means. Otherwise managers will continue to make the same mistakes again.
Now I am working somewhere else. We work agile, without a specific name (like Scrum) and it goes a little different in each project, depending on the customer. But it works fine. So I saw both sides of the story.
The problem begins when people start using Agile as NOUN … so they can sell it.
Requirements are seldom stable (and if they are, they are probably not really fit for purpose); this is not news.
Choosing the appropriate methodology for a given project should be recognised as a considered decision. Don’t just walk into Agile or Waterfall or Yourdon or whatever; these decisions probably matter more than most realise, and should be made with due consideration for the nature of the project, the team, and the nature of the customer.
Agile is frankly NOT always the answer, and trying to make a customer work with it when the customer’s organisation is not structured in a way that suits it is a hiding to nothing. If the customer is demanding a CMM4 or SIL 3 or 4 product, then something a lot like waterfall is likely going to be easier than going for an Agile methodology, and their requirements process is probably so heavyweight that you are not going to be seeing that many changes.
Note that like Agile, the Waterfall method is really a whole class of methodologies, with varying feedback loops and decision points, talking about Waterfall as a single entity is like assuming that Agile means Scrum.
One thing that never seems to get the traction it should is that what really helps with software is having developers who have task-domain knowledge of whatever the software is going to be used for. A developer who has done whatever the software is being written for is one hell of a lot more useful than one who is purely a software dev, even if the pure software person writes better code. If nothing else, the developer with the task-domain knowledge will tend to be able to take a good punt on which bits of the spec are likely to be overly volatile, and which are unlikely to change.
Incidentally, if you think the specs on building projects don’t change (even while concrete is being poured), have I got news for you about the number one cause of cost overruns in building projects…
Requirements change because of bad planning. So somebody is guilty; somebody screwed it up big time. Obviously, sh*t happens, but we cannot be prepared for everything that happens; it’s futile.
And the Agile manifesto is a poor joke. It solves nothing because it is just wishful thinking.
If the manager/product owner did a poor job, then IT IS NOT OUR FAULT.
About waterfall: waterfall is part of a technical requirement. We don’t build a house without a basement; ergo, the waterfall is logical. If we split out the building of the foundation, then we are still doing waterfall.
Barry on the other hand thinks that “requirements” are just step one of the blame game.
The developers do it, so that later, when the project fails, they can finger-point at the people who didn’t give them the “requirements”.
In that context, I recommend either listening to
or watching Barry give a talk
Both about one hour, and very much worth the time.
I don’t agree that all software is constantly changing or else abandoned. It’s probably true of the majority of large scale systems and I have certainly worked on systems like that, but I have also worked on systems that were simple and focused on one or two tasks that remained in production for years and years without changing.
I would still tend to use an agile method of building the application because in the short term the initial requirements can evolve as the users get a feel for it. New requirements may be unearthed from hidden assumptions etc. But once into the maintenance phase continuing change is not always guaranteed. If there is constant change then something like scrum would be good. Some projects have a feast of changes separated by periods of famine with no changes, scrum doesn’t work so well for projects like this but I would still go agile with something like kanban. Then there are the projects that solve that one business problem really well and just remain in production without ever changing.
“Requirements volatility is the core problem of software engineering”
I don’t agree. My perception of the core problem of software engineering is that people lose sight of these facets:
* The reason we develop software is to meet the needs of some customer, client, user, or market.
* The goal of software engineering is to make that development predictable and *cost-effective*.
I’ve seen many methodologies, patterns and conventions come and go over many years. It appears to me that there are always people around who want to formalise software engineering, and these cycles develop whereby they have their way for a little while, throwing obstacles in the path of software engineers – and the goal appears to become one of “perfecting the development process” which has laudable aims but inevitably distracts people away from the real purpose of their work. I’ve met many software engineers who don’t seem to have any business acumen and they fortify themselves in their ivory towers making the development inflexible and costly.
Every so often, someone comes along and invents a better way of doing things. This has a dramatic effect on productivity, but it’s not too long before the “formalisers” start slowing things down. Look at how UML emerged, only for RUP to “formalise” it. So “user stories” emerged, but was then formalised with “eXtreme programming”, SCRUM, and so on. A loose interpretation of “interface and implementation” can end up being stalled by a month-long SOLID wankfest.
In essence, lots of really useful principles have been invented along the way to allow software development to be able to embrace flexible requirements, but there is a class of people in software engineering who always seem to care more about the PROCESS than they do about the PRODUCT. And that is the core problem in software engineering as I see it.
I was a professional software developer for over 30 years. I’m retired now.
The core problem of software engineering is actually human limitations: mental and physiological.
Requirements volatility has been the bane of my career for all of those decades. I remember trying to describe it to managers who did not know what software even is. Underline “is” again.
I remember an experienced manager with a Ph.D. who eventually asked to “see the software” that wasn’t done yet. The entire software staff was totally flummoxed. Eventually, we printed out the source code. We might as well have printed out a hex dump, for all he understood of it.
Those ignorant managers of decades ago were eventually replaced with smart, technically adept software-experienced people. But that never improved the customer. Perhaps if all the customers were smart, technically-adept and software-experienced, the problem would be mitigated. But that’s just a theory. Yet Another Software Development Method. I should hype it.
Requirements grow as hardware capability grows. And hardware has become quite huge.
Very well stated.
I too have made a career out of reviving almost-dead or failing projects. Often the programmers are blamed for not knowing how to do things, but I’ve never actually witnessed that in 30 years. Sure, there are some programmers who are not great, but it is actually “what” they are doing that is the issue, and the responsibility for that lies with management and process. Absentee managers or unskilled managers are the real problem. You have nicely articulated a number of the issues.
I used to be a big agile proponent; not so much now. Not because I think that agile is bad (although some parts of it are); it’s more that it is not the right path for all projects. Specifically, I too have seen the agile approach be the reason for failing startups, especially when they outsource their development. Often the founders are inexperienced and don’t know how to specify what they want, or are unable to understand the real cost of their lack of specificity as they churn through multiple re-dos and pivots. Very often, I see so-called agile used as an excuse for not really knowing what or how to reach the destination. Change control helps, sometimes a lot. However, it won’t save a project if there are dramatic changes, as this is just churn and you end up throwing away a lot of work.
Ultimately, if you are not clear about what you want and you can’t clearly communicate it and/or it keeps changing, how can you hope to succeed? In my experience, more often than not, it all rolls back to not properly understanding the problem that you are trying to solve. If you don’t have that, then the rest of it will be a waste of time until you either iterate into failure, or you happen to be one of the lucky few who get it right in an early iteration and end up with something good enough.
A problem I’m aware of but don’t know how to deal with is that the customer doesn’t differentiate between fundamental and peripheral requirements. For example:
– the system must protect against duplicate records
– it must be possible to export records to Excel
The first is going to be pretty fundamental to the design, but the second could be added at any time. By the time they tell you it would actually be desirable to have duplicate records under some circumstances, your database schema and everything around it has been built on the understanding that this won’t ever happen.
What approaches do people take to identify and work through these fundamental requirements with customers?