Preventing the Top Security Weaknesses Found in Stack Overflow Code Snippets
Last week, we told you about research that found a number of security vulnerabilities in C++ code snippets in Stack Overflow answers, and how some of those flaws had migrated into actual, live GitHub projects. Today, we’re following up on the top eight error types that research highlighted and suggesting ways to avoid making the same mistakes.
To get these answers, we checked the Common Weakness Enumeration (CWE) database for the numbers that Verdi et al. posted in their research. We’re only addressing the CWE numbers that refer to specific weaknesses, not to categories, as those require a little more information before you can begin to address them.
While the research specifically found these vulnerabilities in C++ snippets, the CWEs mentioned below can happen in a number of programming languages. Other programming languages may be more prone to other CWEs, so don’t take this as a security checklist.
Without further ado, if you’ve got problems, yo, I’ll solve them.
CWE-754 Improper Check for Unusual or Exceptional Conditions
This vulnerability stems from the assumption that certain events or circumstances, such as low-memory conditions or a lack of access to resources, will never happen.
In a given program, you may be able to control most of the flow of data and therefore assume that all function return values will be within expected values. But rare events can still happen. Even worse, sometimes a clever attacker can trigger those events and cause havoc on your system.
You may expose this weakness by assuming that memory will always be successfully allocated when needed, that files or strings will be of a specific size, or that database or input calls will return something other than NULL. In many cases, unexpected data will crash the process. Maybe your code can handle this gracefully. Or maybe the process will continue in an unexpected state, leaving the system vulnerable to an attacker’s machinations.
Recently, an Ethernet/serial RTU module exposed this weakness. While the vulnerability has likely been patched, previously, an attacker could initiate a denial of service attack by sending truncated SNMP packets to a specific UDP port or disconnect all connections with an unexpectedly high number of packets on another port.
Some languages may be able to catch these exceptions before they cause harm. But for the lower-level languages that don’t, you can do the following:
- Check your return values every time
- Assume all inputs are malicious and only accept certain values
- Plan for low-resource conditions
- Architect for failure
Exception and failure management aren’t a cure-all here. Overly verbose and specific exceptions can give attackers clues as to how to find other holes in your code.
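As a minimal sketch of the “check your return values” and “plan for low-resource conditions” advice, here’s a hypothetical helper (the `make_buffer` name is invented for illustration) that refuses to assume allocation always succeeds:

```cpp
#include <cstdio>
#include <new>

// Hypothetical helper: allocate a buffer without assuming success.
// std::nothrow turns a failed allocation into a nullptr return instead
// of a std::bad_alloc exception that the caller might forget to catch.
unsigned char* make_buffer(std::size_t n) {
    unsigned char* buf = new (std::nothrow) unsigned char[n];
    if (buf == nullptr) {
        std::fprintf(stderr, "allocation of %zu bytes failed\n", n);
    }
    return buf;  // caller must check for nullptr and fail closed
}
```

The caller still has to check the result and fail closed rather than limping along with a null buffer.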
CWE-20 Improper Input Validation
When software does not validate input properly, an attacker is able to craft the input in a form that is not expected by the rest of the application.
As the old saying goes, “garbage in, garbage out.” If your input is bad, it’s probably going to lead to some bad results. It’s sort of like that prank where you bite into what you think is one of those Lindor truffles but it’s actually a chocolate-covered Brussels sprout. Malicious attackers are going to try to slip you some input that’s way worse than Brussels sprouts.
This weakness shares a commonality with the previous one, in that a developer has assumed that nobody can access the inputs in their functions. The list of developers who were wrong about this is pretty long.
Some versions of MS Excel (all of which are ten years old or more) allowed arbitrary code to execute on a user’s system once they loaded a spreadsheet. Various objects embedded within that malicious spreadsheet accessed the VBA Performance Cache, which was not sanitized at runtime, so it behaved in unexpected ways.
Especially in networked applications with database-driven backends, software has a ton of vectors for untrusted input. You might think that you could catch this weakness every time just by sanitizing your inputs, but there’s a whole raft of other safety measures to take:
- Check input values on both the client and server
- For any input values combined from multiple sources, validate the combined value as well as the inputs
- Be careful about code that crosses languages, as happens in an interpreted language running on a native virtual machine
- Explicitly convert your input into the expected data type
- Ensure all data is in the canonical character coding scheme before validating
- Use static analysis
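To illustrate a couple of the points above, explicit conversion to the expected type and accepting only certain values, here’s a hypothetical validator (`parse_port` is an invented example) for a numeric input:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical validator: convert input to the expected type explicitly
// and accept only a narrow range, rather than trusting the raw string.
int parse_port(const std::string& input) {
    std::size_t pos = 0;
    int value;
    try {
        value = std::stoi(input, &pos);
    } catch (const std::exception&) {
        throw std::invalid_argument("not a number: " + input);
    }
    if (pos != input.size())         // reject trailing garbage like "80abc"
        throw std::invalid_argument("trailing characters in: " + input);
    if (value < 1 || value > 65535)  // only the valid port range is accepted
        throw std::out_of_range("port out of range: " + input);
    return value;
}
```

Rejecting trailing characters matters: `std::stoi` happily parses up to the first non-digit, so checking `pos` against the full input length closes that gap.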
CWE-252 Unchecked Return Value
The return value is not checked by a method or function, which may create an unexpected state.
You may have noticed a theme by now: check yourself before you wreck yourself. Or, more plainly, validate all data to make sure it is what you think it is. If you look at a function and think either that it can never fail or that it doesn’t matter if it fails, you’re exposing yourself to this vulnerability.
Similarly to the previous weaknesses, this one could leave a system in an unexpected state, causing crashes or other weirdness, including executing arbitrary code. That’s just what happened in a number of Linux implementations in 2007; an attacker could sneak crafted TLV packets into a BGP packet that would let them execute what they wanted. This was especially dangerous because BGP packets are usually passed by autonomous systems to share routing information.
Mitigating this is simple: check your return values, make sure they are within expected ranges, and throw exceptions if they aren’t.
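As a sketch, here’s what checking every return value looks like for even a mundane file write (`write_config` is a hypothetical example); note that `fwrite` can write fewer bytes than requested, and `fclose` can fail while flushing:

```cpp
#include <cstdio>
#include <cstring>

// Hypothetical sketch: even "can't fail" calls report errors, so check them.
bool write_config(const char* path, const char* contents) {
    std::FILE* f = std::fopen(path, "w");
    if (f == nullptr) return false;                      // open can fail
    std::size_t len = std::strlen(contents);
    bool ok = std::fwrite(contents, 1, len, f) == len;   // short writes happen
    ok = (std::fclose(f) == 0) && ok;                    // close flushes; check it too
    return ok;
}
```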
CWE-477 Use of Obsolete Function
The code uses deprecated or obsolete functions, which suggests that the code has not been actively reviewed or maintained.
As exploits are discovered and software practice changes, languages and software may deprecate some functionality as outdated. While you may be able to continue using deprecated functions, there can be security issues when you do. And some functions, like rand(), might be plain obsolete for serious code.
A more serious problem that this weakness can indicate is that the software in question isn’t being maintained or updated. Obsolete functions can linger if nobody keeps up with the changes in software and systems. In an unmaintained system, security flaws can snowball as the software ages. Questions sometimes get new answers, but for those code snippets posted on Stack Overflow years ago, nobody’s maintaining them (nor should they), so it’s on you to make sure everything there is still safe.
In the NUUO CMS in 2018, outdated software components left users open to arbitrary code execution. In complex software with lots of dependencies, it doesn’t even need to be your code that has the security weakness in order to open it up to threats.
Mitigation for this weakness is simple, even if the implementation is not: refactor your code to avoid deprecated or otherwise obsolete functions. If you need to use these functions, make sure that you understand the security implications and plan accordingly.
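For example, replacing the obsolete `rand()` in C++ might look like this sketch using the C++11 `<random>` facilities. One caveat: `std::mt19937` is fine for simulations but is not cryptographically secure, so for security-sensitive randomness you’d want a CSPRNG from your platform or crypto library:

```cpp
#include <random>

// Sketch: instead of the rand()/srand() pair, <random> provides a
// seeded engine plus a distribution with a well-defined range.
int roll_die() {
    static std::mt19937 engine{std::random_device{}()};  // seeded once
    std::uniform_int_distribution<int> dist(1, 6);       // inclusive range
    return dist(engine);
}
```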
CWE-789 Uncontrolled Memory Allocation
Memory is allocated based on invalid size, allowing arbitrary amounts of memory to be allocated.
Memory allocation is a pretty common function, especially within lower-level languages. But if code allocates a massive amount of memory, it can lead to system slowdowns or crashes. While a system failing is bad enough, on flexible cloud systems, overallocation can be extremely expensive.
This is a cousin to CWE-20, Improper Input Validation, as the input that needs to be validated is being supplied to memory allocation functions. Memory may be increasingly cheap, but it is still finite. If an attacker can tie up all the memory on your hardware, it can crash not only your program, but any other programs running on that system.
In 2008, IBM solidDB didn’t validate the field that set the amount of memory to allocate. So any incoming packet could crash the system just by supplying a large value in this field. As solidDB was embedded in a lot of networking and telecommunications equipment, this weakness could have wreaked a lot of havoc with the underlying infrastructure of the internet.
Mitigating this is the same as CWE-20 above, plus ensuring that you are not exceeding the memory limits of your system.
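A minimal sketch of that mitigation: validate any attacker-influenced size against an explicit policy limit before allocating. The `kMaxPayload` value and the `read_payload` name here are invented for illustration; the right cap depends on your system:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Assumed policy limit for this sketch: no payload larger than 1 MiB.
constexpr std::size_t kMaxPayload = 1 << 20;

// Hypothetical reader: reject out-of-range sizes before touching the allocator.
std::vector<unsigned char> read_payload(std::size_t claimed_size) {
    if (claimed_size == 0 || claimed_size > kMaxPayload)
        throw std::length_error("payload size outside allowed range");
    return std::vector<unsigned char>(claimed_size);  // now safe to allocate
}
```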
CWE-158 Improper Neutralization of Null Byte or Null Character
Input is received from an upstream component, but the software does not neutralize, or incorrectly neutralizes, null bytes before they are sent to a downstream component.
In C++, Strings are terminated by null characters. Most of the time, this allows a lot of flexibility, as Strings can be of any length. But a user can enter any number of characters as input, including null characters in the middle of Strings. In this case, the String will be truncated to that null.
Most of the time, this doesn’t matter. But if that String is combined into a longer String, say a URL or username, then an attacker may be able to get a program to give up more than it’s supposed to. While this question is about PHP, the technique is the same; a null character in input can cancel out anything appended to that String, including the file extension that would have made it a file name. With a little clever scripting, malicious agents could traverse the web directory and get whatever files they wanted.
This is what happened in a lot of early web servers. Attackers could tack a null at the end of a URL and list directories, read source code, and generally bypass all access restrictions. These source files would then give away a lot of extra information about the system that could be exploited.
In trying to catch these attacks, your code should assume that somebody’s going to slip a null into input. Here are a few ways to protect your code:
- Remove null characters within input
- Explicitly encode input with the canonical method
- Use a combination of whitelists and blacklists to designate valid values and block invalid ones
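The first bullet can be as small as this sketch, which strips embedded null characters before the input reaches any downstream component (`strip_nulls` is an invented name):

```cpp
#include <algorithm>
#include <string>

// Sketch: drop embedded null characters from untrusted input using the
// erase-remove idiom, so downstream C-style consumers see the full string.
std::string strip_nulls(std::string input) {
    input.erase(std::remove(input.begin(), input.end(), '\0'), input.end());
    return input;
}
```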
CWE-134 Use of Externally-Controlled Format String
The software uses a function that accepts a format string as an argument, but the format string originates from an external source.
Format functions, like printf(), take a format string and convert it, along with the remaining arguments, into a regular String. Format specifiers look like %x and consume an argument to the function, adding it to the output. However, if the format string is externally controlled—that is, taken as input or read from a file/data location—and not properly validated, an attacker can slip their own specifiers into the function.
Some of the parameters allow reading and writing from the memory stack or process memory. Allowing that sort of data from arbitrary user input is a deeply bad idea. Attackers could read information from memory or execute arbitrary code, depending on what else is in the format string.
In 2006, Dia, a GTK+ based diagramming tool, allowed denial of service attacks (and possibly code execution) when it tried to open a bmp file named with a bunch of %s in a row. The software would try to display a warning message with the name of the file, but because it was processing the format parameters, it displayed sensitive information instead.
The solution is pretty simple: make sure all Strings passed into format functions are static and within your control. Sometimes compiler warnings can indicate problematic areas, so don’t ignore them.
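A before-and-after sketch (`warn_bad_file` is an invented example): keep the format string a literal, and let user data fill only a `%s` slot:

```cpp
#include <cstdio>
#include <string>

// Sketch: the format string stays a static literal under our control;
// user-supplied text only ever fills a %s slot. The commented-out line
// shows the vulnerable pattern, where the user controls the format.
std::string warn_bad_file(const std::string& name_from_user) {
    char buf[256];
    // BAD: std::snprintf(buf, sizeof buf, name_from_user.c_str());
    std::snprintf(buf, sizeof buf, "could not open file: %s",
                  name_from_user.c_str());
    return buf;
}
```

With the safe version, an input of `%s%s%n` comes back verbatim in the message instead of being interpreted as format specifiers.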
CWE-476 Null Pointer Dereference
A NULL pointer dereference occurs when the application dereferences a pointer that it expects to be valid, but is NULL, typically causing a crash or exit.
I’ve gone through release notes when half of the issues being fixed were because of null pointer exceptions (or dereferences). While many null pointers can crash a system or cause unexpected program exits, sometimes they can leave the system in an unexpected state—strange, unreproducible quirks or pulling data from other memory addresses. It becomes a security issue when an attacker reliably finds an avenue to induce a null in a system.
While a lot of null pointers happen because of sloppy programming, in asynchronous systems, a null pointer can arise from a race condition. That’s where two async processes read and write the same variable. These can happen out of order, so when one process moves faster and reads the variable before the other process has a chance to set it, boom! Null pointer dereference.
In 2004, OpenSSL had a null pointer dereference that allowed attackers to craft a specific SSL/TLS handshake that crashed the service, effectively creating a denial of service attack.
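As a small sketch of guarding against this, here’s a hypothetical map lookup (`lookup` is an invented helper) that checks for the not-found case before touching the iterator:

```cpp
#include <string>
#include <unordered_map>

// Sketch: find() returns the end iterator on a miss; dereferencing it is
// undefined behavior, so guard first and hand back a fallback instead.
std::string lookup(const std::unordered_map<std::string, std::string>& m,
                   const std::string& key,
                   const std::string& fallback) {
    auto it = m.find(key);
    if (it == m.end()) return fallback;  // guard before dereferencing
    return it->second;
}
```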
By checking for these common errors in the code you copy from Stack Overflow, you can avoid a lot of serious security risks. Share some knowledge, check out some answers, and be safe out there.
@Ryan. Sigh! This is all great stuff, of course, and exactly the kind of thing security specialists have been saying for years.
However, in the real world it’s not enough to try to fix developer security by separate teaching about vulnerabilities. Developers have many other concerns apart from security, and unless they are made aware of potential problems at exactly the time they are working on the relevant code, most simply ‘do the sensible thing’ and copy the dodgy code or whatever. That’s not because they are lazy or evil; it’s because they are professionals with a deadline.
So if you want to fix Stack Overflow, please provide a way to mark security errors,
*** In the thread where they occur ***. Often later responses in a thread point out security errors, but sometimes that’s too late and developers miss them.
Research has found other ways to help developers to be alert to security problems at the right time, including suggestions how to use Stack Overflow: https://www.motivatingjenny.org
Marking security errors, fine, it’s probably appropriate, but I’m sure some of those “professionals” would then consider *any* code without such marking as security-vetted and undoubtedly thoroughly safe.
So, as many warnings as possible, too, I say.
I don’t like the idea.
Is it worth the effort? Imho comments and downvotes work well enough to filter out answers with poor code. You are basically trying to automate code review, which is imho not yet possible.
You could spend your energy elsewhere instead of adding a small (but annoying) static-analyzer layer, which requires answerers to fix problems their answers may not even focus on (think of a pseudo-code answer).
P.S.: I am not a Jon.
Then how about not impersonating a Jon?
I wish comments and downvotes did filter out answers with poor security code. There have been many academic studies, including the one Ryan cited in https://stackoverflow.blog/2019/11/26/copying-code-from-stack-overflow-you-might-be-spreading-security-vulnerabilities/ that found otherwise (and if you read that paper you’ll find it cites a lot of other similar results).
One possible solution might be a way to tag a comment or answer in a different colour or with a flag ‘Insecurity suspected’. It would require a lot of meta discussion to get the social mechanics right, of course.
Probably the most common vulnerability is a programmer who doesn’t know their tools. And unfortunately, we see that demonstrated here. “In C++, Strings are terminated by null characters”. That’s a C thing. C++ has the std::string type, which has an explicit size() member and therefore can contain null characters.
And if you start with a flawed premise, the remediation is also suspect. “Remove null characters within input”. Why only those? The example given is a web server. There you really should be whitelisting the allowed input, not blacklisting specific individual characters. Everything that’s not on the whitelist should give the exact same ‘403 Forbidden’ error without even looking at the file system. That way you can’t use the error to extract meta-information about the file system.
There are hundreds of these security vulnerabilities documented in the CWE and OWASP. Are you suggesting that contributors check all of them before posting answers to Stack Overflow?
A competent developer ought to be aware of them at all times.
You don’t need to know the CWE list by heart, of course; that classification system exists mostly just to categorize or tag things, with a lot of repetition or unnecessary generality. It would benefit everyone to browse through it every once in a while, but a security-minded, up-to-date developer would already be aware of at least most of them.
There’s no such thing as a “Linux implementation”. The bug was in a *tool* for Linux called tcpdump. It could very well have been for any Unix-like system, not just Linux.
Anyone who assumes that any code sample on SO (or indeed anywhere) can be used verbatim without any thought about the implications IMO gets what he deserves when things go south in his production code.
When I write a piece of sample code, I set out to show how to solve the problem at hand with minimal distractions. I assume others do the same thing.
So I omit null checks (unless the code is to demonstrate how to do a null check, obviously) and other things that any production level code should have and concentrate on the problem being demonstrated.
A stern warning somewhere easily accessible on the site (like in a FAQ) that code samples should never be taken as gospel and only serve as a starting point for your own code may be a good idea, but I doubt many people will read it.
Just as I doubt many people who get hit by the consequences of using code samples verbatim without thinking about the implications are going to read this blog.
Agreed. A good practice is to understand every line of code you write/copy. This actually may help one save time by avoiding obscure/confounding bugs as well during testing.
I agree fully. But I will say this – it seems a lot of questions on SO are from someone in the learning phase, so it would make sense to add notes with the code as to where security pitfalls may be hiding.
Even if it is not a student type question, the added notes would help future readers, like me…
Do the users of said production code get what they deserve?
Please stop having good ideas, you aren’t there yet
I think the responsible route is to warn the users on as many occasions as possible, as long as the warnings don’t get too much in the way.
For example, it’s probably a no-brainer to display a warning on copy events in `code` blocks (it’s not supported by every browser, but it would probably cover most users).
And maybe add to every one of those blocks a title attribute with another (short) warning, if it doesn’t impair accessibility.
And, if you’re looking for a more complex solution, maybe something like a small, clickable exclamation point icon on the top-right of every block, or maybe just the multi-line ones.
I found the vulnerabilities and coding weaknesses in the database to be vague and unhelpful. Both the descriptions and the examples. Adding a list of warnings to code will, for the most part, just result in ignored noise. Perhaps a list of lint warnings appropriate for the code base at the time the question is answered will serve just as well?
The first problem on the list was largely solved by checked exceptions. I think I’ve seen every argument ever made against checked exceptions, and all of them were based on ignorance of how checked exceptions are supposed to be used and/or how they work in practice. Recently, with all previous arguments thoroughly debunked, opponents of checked exceptions have retreated to saying they make things harder for compiler writers. I think the time is coming soon when everyone will see the light and finally accept that checked exceptions are literally the best thing since function parameters. Checked exceptions reduce cognitive load and improve code quality and reliability in ways that are highly analogous to using function parameters instead of global state.
Nice, after looking into (un)checked exceptions a bit, your strategy seems much like how in some languages you return `(:OK, result)` instead of just `result`, and `(:Error, result)` instead of an Exception. You can then pattern match on the ‘OK’ and ‘Error’ states. Meaning you have to handle them when you do something with the returned value. Plus you can easily add extra good & bad states by just naming the first item differently.
On the whole I agree, checked exceptions help a lot. I just wish we could use a generics-like mechanism to declare them more precisely: Iterator to declare which exceptions, if any, an iterator can throw.
Could you also write a script to check out the hook while my DJ revolves it?
From the title of the blog I had hoped that Stackoverflow would invest in tooling that would ‘cough quietly’ when you made one of these mistakes. Ah well, we can dream 😅.
Interesting and lively article, liked your work, thanks for sharing.