The perils of impersonation tooling
On Wednesday, July 15th, a bitcoin scam hit Twitter. Celebrities such as Elon Musk, Barack Obama, and Bill Gates appeared to tweet out a message that promised to return double the amount of bitcoin sent to a specific wallet. It wasn’t a spontaneous and simultaneous act of generosity; it was a scam.
At this point, anybody with a high-profile Twitter account was scrambling to change their password or check for data breaches. We were scrambling, too, though luckily, we weren’t affected. As it turned out, the attack had nothing to do with password breaches or account hacks. The cause is still being sorted out, but it looks like the culprit was internal Twitter tooling that allows employees to impersonate any user. (Some hackers are claiming that it was someone with internal access, possibly a Twitter employee.)
Tooling like this is pretty common for applications with many users. Administrator routes on services can allow internal company superusers to fix the unfixable. Imagine if you could no longer access your email account or your two-factor authentication device. What do you do then? You call the company and beg for help.
But as this attack shows, administrator routes that allow control over user accounts or their important details can be a massive security risk in themselves. There are a few ways an organization can increase the security around these endpoints, some of which can frustrate malicious attackers or minimize the damage they do, even attackers within the company. In this article, we’ll cover the best ways to deal with impersonation routes, and whether you really need them at all.
No routes is good routes
The single most effective way to secure impersonation tooling is to not have it at all.
No matter how locked down your admin routes are, they still offer an attack vector. By not exposing them in production code, you eliminate the possibility of attack. It can make the job of your customer support people a little more difficult, but this is the only approach that will stop every method of attack, including social engineering.
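As a rough illustration, many web frameworks let you register admin routes conditionally, so the impersonation endpoints simply never exist in a production deployment. Here’s a minimal sketch using Flask purely as an example; the blueprint, route, and environment-variable names are all hypothetical:

```python
import os
from flask import Flask, Blueprint

app = Flask(__name__)

# Hypothetical blueprint holding the impersonation/admin endpoints.
admin = Blueprint("admin", __name__, url_prefix="/internal/admin")

@admin.route("/impersonate/<user_id>", methods=["POST"])
def impersonate(user_id):
    # Switch the current session to the target user (implementation omitted).
    return {"impersonating": user_id}

# Register the admin blueprint only outside of production. In production the
# routes are never mounted, so there is nothing to attack.
if os.environ.get("APP_ENV", "production") != "production":
    app.register_blueprint(admin)
```

The point of the sketch isn’t the framework; it’s that the decision to expose the route happens at build or deploy time, not at request time, so a production attacker has nothing to probe.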
This is the same issue as government-mandated encryption backdoors; there’s no such thing as a backdoor for the “good guys” only. If it exists to provide access to you, it can be found by a motivated and malicious attacker. So your best bet is to not expose this sort of exploitable functionality at all.
Especially after a high profile exposure like this, it’s worth re-evaluating whether you need to have this sort of functionality available in production. If you really don’t need it, if you can find other ways to get the results that you need without an exploitable API route, then maybe it’s time to eliminate the thing. But if you do actually need this route, then it’s time to make it as secure as possible.
Best practices for secure routes
For a sensitive, exposed route, you’ll need to muster as much security as your budget and risk tolerance allow. But let’s assume that time and money are no object. Below are practices that will stop most malicious attacks, or at least help you track down root causes when they happen.
First of all, ensure that the routes are not accessible outside of the internal network or VPN. With so many networked applications being served from cloud providers and so many of us working remotely, VPNs are essential to keeping the traffic that manages your business secure. Impersonation routes, if you have them, absolutely fall into that essential category.
VPNs can be tricky to run well, especially if you have a lot of traffic moving through your internal systems. At the beginning of quarantine, we shared how we set up our VPN to keep sensitive data secure efficiently. A well-configured VPN can keep internal data and routes from becoming external. A poorly configured one can stall traffic and run up massive usage bills.
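On top of the network boundary itself, a belt-and-suspenders measure is to have the application reject any request to the sensitive prefix that doesn’t originate from your internal address space. A minimal sketch, again in Flask; the CIDR range and the path prefix are assumptions standing in for whatever your VPN actually hands out:

```python
import ipaddress
from flask import Flask, request, abort

app = Flask(__name__)

# Hypothetical internal range; substitute the subnet your VPN assigns.
INTERNAL_NET = ipaddress.ip_network("10.8.0.0/16")

@app.before_request
def require_internal_network():
    # Only guard the sensitive admin prefix; public routes are unaffected.
    if request.path.startswith("/internal/admin"):
        caller = ipaddress.ip_address(request.remote_addr)
        if caller not in INTERNAL_NET:
            abort(404)  # don't even acknowledge that the route exists
```

Returning a 404 instead of a 403 is a small design choice: an external scanner learns nothing about whether the route exists at all.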
Next, provide strong controls on who can access these endpoints and how. Authentication tokens and passwords are a minimum. A sensitive endpoint should require multi-factor authentication in order to be genuinely secure. Adding a phone number or using an authenticator app might not always cut it, as a determined attacker can use social engineering, man-in-the-browser attacks, and spearphishing to dupe a target.
Instead of these two-factor authentication methods that basically use another single-use password delivered to a specific device, you should use a device that is the password. These hardware tokens are USB dongles that contain a unique passkey that the user will never see. This is one step up from creating a password that you can’t remember; if you can’t read the password yourself, you can’t be tricked into sending it to a clever social engineer.
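The cryptographic verification itself is usually delegated to a WebAuthn/FIDO2 library; what matters at the route level is refusing to serve the request unless a recent hardware-key assertion is on record. A sketch of that gate, where the session key and the freshness window are purely hypothetical:

```python
import time
from functools import wraps
from flask import session, abort

# How long a hardware-key verification stays valid for admin actions.
MFA_MAX_AGE_SECONDS = 5 * 60

def require_hardware_mfa(view):
    """Refuse the request unless the session carries a fresh hardware-key
    (WebAuthn) verification. The 'webauthn_verified_at' timestamp is a
    hypothetical value your login flow would set after a successful
    assertion."""
    @wraps(view)
    def wrapped(*args, **kwargs):
        verified_at = session.get("webauthn_verified_at")
        if verified_at is None or time.time() - verified_at > MFA_MAX_AGE_SECONDS:
            abort(403)
        return view(*args, **kwargs)
    return wrapped
```

Keeping the window short means an attacker who hijacks an employee’s browser session still can’t reach the impersonation route without the physical key in hand.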
Along with controls on how a route provides access, your security program should limit who gets access. The fewer people who have access to sensitive resources, the less likely that resource is to be compromised. Customer service representatives may need to experience an issue as a specific user; developers may need to test features with real data. If you’ve thought it through and decided that the route that does this needs to be in prod, don’t give access to everyone. Designate one or two individuals (or however many are required to eliminate bottlenecks without opening the floodgates) who have these privileges.
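Whatever access-control system you use, the list of people allowed to impersonate should be short and explicit, not “everyone in the support role.” A sketch of that idea, with the allowlist, the example addresses, and the current-user helper all hypothetical:

```python
from functools import wraps
from flask import abort

# Deliberately short, explicit allowlist of employees who may impersonate.
IMPERSONATION_OPERATORS = {"alice@example.com", "bob@example.com"}

def require_impersonation_operator(get_current_user_email):
    """Decorator factory; get_current_user_email is whatever function your
    auth layer provides to identify the logged-in employee."""
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            if get_current_user_email() not in IMPERSONATION_OPERATORS:
                abort(403)
            return view(*args, **kwargs)
        return wrapped
    return decorator
```

An explicit, version-controlled allowlist also gives you an audit trail for when access was granted and by whom.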
Ultimately, no security method is foolproof. As Douglas Adams said, “A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.” Your final security measure on sensitive endpoints isn’t prevention related; it’s about tracking where a breach happened and letting your team run damage control. Good auditing, logging, and alerting should monitor every usage, query, and failure that involves a sensitive endpoint.
The premise is simple: if you can’t prevent the crime, the next best thing is to track down the culprit. In the case of the Twitter incident, the security recommendations above may or may not have stopped the hackers. However, an alert would have notified an on-call engineer about the strange behavior, logs would have recorded the specific actions, and auditing would have built a trail of the malicious behavior. The devil, as always, is in the details, and good practices here let your internal teams get the data they need to diagnose problems while ensuring that logs aren’t leaking sensitive information or running up excessive cloud storage costs.
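In practice, that means every call to the impersonation endpoint writes a structured audit record and fires an alert, whether the call succeeded or not. A sketch of what that could look like; the logger name, field names, and alerting hook are assumptions, not a prescription:

```python
import json
import logging
import time

audit_log = logging.getLogger("admin.audit")

def record_impersonation(actor, target_user, action, success):
    """Write a structured audit entry for every use of the impersonation
    route. Keep secrets (tokens, passwords, message contents) out of the
    entry so the log itself doesn't become a leak."""
    entry = {
        "ts": time.time(),
        "actor": actor,            # the employee using the route
        "target_user": target_user,
        "action": action,          # e.g. "session_start", "password_reset"
        "success": success,
    }
    audit_log.info(json.dumps(entry))
    notify_on_call(f"Impersonation route used by {actor} on {target_user}")

def notify_on_call(message):
    # Placeholder: wire this up to whatever paging or chat alerting your
    # team actually uses (PagerDuty, Opsgenie, a Slack webhook, etc.).
    audit_log.warning(message)
```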
If nothing else, last Wednesday’s Twitter incident shows that no one is immune from security kerfuffles. The harder you make it to change account details or otherwise take control of those accounts, the better—even for your internal employees.
Tags: api, security
3 Comments
Another thing I’d add: if you think you need impersonation accounts, you should also limit what functions are available while impersonating an account. For example, a support person should never need to write a tweet on someone else’s account. They should never need to view or create DMs.
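A minimal sketch of that kind of scoping, assuming a hypothetical impersonation flag on the session and made-up action names:

```python
# Actions an impersonated session is allowed to perform; everything else
# (posting tweets, reading or sending DMs) is denied outright.
IMPERSONATION_ALLOWED_ACTIONS = {"view_profile", "view_settings", "send_password_reset"}

def is_action_allowed(session, action):
    """Hypothetical check run before every action in a user session."""
    if session.get("impersonated_by") is not None:
        return action in IMPERSONATION_ALLOWED_ACTIONS
    return True  # normal sessions keep their usual permissions
```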
Problem #1: Relying on Twitter for anything credible or substantive.
Problem #2: See problem #1.
Interesting,
The only pitfall I find is the following paragraph:
—
The premise is simple: if you can’t prevent the crime, the next best thing is to track down the culprit. In the case of the Twitter incident, the security recommendations above may or may not have stopped the hackers.
Concretely, this part:
—
However, an alert would have notified an on-call engineer about the strange behavior, logs would have recorded the specific actions, and auditing would have built a trail of the malicious behavior. The devil, as always, is in the details, and good practices here let your internal teams get the data they need to diagnose problems while ensuring that logs aren’t leaking sensitive information or running up excessive cloud storage costs.
and especially this part:
—
However, an alert would have notified an on-call engineer about the strange behavior, logs would have recorded the specific actions, and auditing would have built a trail of the malicious behavior.
It seems the writer knows exactly how not only “public network” tracking but also employee and internal admin network tracking services are designed, built, and maintained.
(And… ah yes, at Twitter’s traffic scale, not a personal blog’s.)
Yes, what you suggest can be done, with thousands of dollars spent on a high-risk (the risk comes from its effectiveness) network traffic analysis system, with concrete efforts on feature detection, infrastructure design and redesign, and, why not, on feature-detection simulation and model development to get your “magic AI” network analysis system to predict what a hacker will do in your system. Naah.
I mean the “magic AI” that will do what you propose:
– “… an alert would have notified ”
triggered by the magicians
I mean that you’d need top security experts on call 24/7, yes, for your:
– “… on-call engineer about the strange behavior,”
capable of resolving everything at any time.
And what salary do you pay this crew? How long do you retain them? I’ll stop there for now.
– ” logs would have recorded the specific actions”
This clearly shows that an expert group of engineers is writing.
– “and auditing would have built a trail of the malicious behavior.”
And then what? I have one IP that does this, this, and that,
another 100 IPs doing this,
another 1000 IPs doing that.
Another nightmare. A lot of work for not even 20% precision, and lots of false positives to “keep my crew addicted to tooling.” Yeah.
—
The devil, as always, is in the details, and good practices here let your internal teams get the data they need to diagnose problems while ensuring that logs aren’t leaking sensitive information or running up excessive cloud storage costs.
—
Good that it was mentioned. I’d rather say “the devil is always in the money.”
Every information security engineer, and I mean a hands-on engineer, not a “security expert” writing posts, knows that IT security is not absolute. By definition, absolute security is absurd. That’s the beauty of software.
So what you expect is for Twitter to have a secure system, right? Absurd. The same goes for any system out there, with or without “VIP” accounts.
“Bytes are not VIP.” That’s another beauty of software.
Go have a coffee with the CEO and the CS(ecurity)O, and what you get will probably be very aligned with the idea I’m trying to express.
What I mean is that this post is on Stack Overflow.
It’s “kind of unkind” to post pieces that argue solutions, speaking in the name of the “grandmother of security,” and provide guidelines to whom?
to Twitter Security Engineers? Really?
(I’m not a Twitter Security Engineer)
From my point of view, to better understand where one stands, it helps to “keep the proper tone” when speaking about incidents of this dimension, and to stay away from “what should be done” or “what I would have done,” especially when the responsibility isn’t even within your indirect sight. (Ignorance has no modesty.)
It’s fine to write opinionated posts on a blog; that is what blogs are for.
It’s also fine to “keep your distance” when the technological complexity of the subject (the article, the thingy) is so vast that even prevention is out of reach for world-class security experts, and so to avoid paragraphs like the one I underlined, which read more like smart-assery than the speech of coherent writers.