How an interview code submission that wasn’t even submitted changed our process

In a previous role, I was an engineering manager for a well-known company for a particular tech stack. We were heavily involved in the community and allowed for remote hires, so we received a constant influx of applications for open roles. One way we sorted through them all was requiring coding tests for potential candidates. And as you can imagine, we got all ranges of results from massively impressive to wondering if this candidate was messing with us. But one truly stood out, and it taught me to think about what I am really looking for in these sorts of submissions.

First, a disclaimer.

I know the practice of asking people to code for free just to get an interview is not popular right now. The pros and cons of that practice are a different discussion altogether. At the time this story took place, it was more accepted, and we had a lot of success with it. It was also a time when most people didn’t have a bunch of public repos to peruse.

The submission process

We made a public repo on GitHub under our org account for the purpose of tests. The ask was simple, and the instructions were laid out in a README.

Instructions:
1. Fork this repo on GitHub
2. Create a program that can interactively play the game of Tic-Tac-Toe against a human player and never lose.
3. Commit early and often, with good messages.
4. Push your code back to GitHub and send us a pull request.
We are a [REDACTED*] shop, but it is not a requirement that you implement your program using that tech stack.

*Tech stack removed to protect the innocent.

That was all we asked; we intentionally left it open-ended. Some people tried to impress us with huge, elaborate apps that used different services and engines. We had one submission that worked through CLI just because “I was bored and wanted to try it.” We tried to keep an open mind when reviewing the candidates, and multiple people were required to weigh in before a final decision. If there were more approvals than rejections on the PR, we brought the person in to learn more.

We were not looking so much at the tech, the typos, the edge-case bugs they left, or even whether their tic-tac-toe engine truly never lost; we had many unanimous approvals for apps that sometimes lost. We wanted to see factors that tended to align with our team and workflows: How often did they check in? How were the commit messages? Were tests added, or were they even needed? How readable and organized was the project?

There was no hard list, but we found early on that these were better factors to consider than just whether the app won every time. We tried to be fair, but we often saw submissions that were not in good faith. Sometimes we saw apps that were straight copy-and-pastes from other sites (we began to recognize them after a while), complete with comments and credits from an entirely different author. But more often than not, if the app made sense and the code was readable, even if written in a very different style than we were used to, we were open to talking more and asking the applicant about it.

The one that stood out

After talking with a recruiter who introduced us to a particular candidate, it became clear that the coding test might be tough for this applicant. The person was very busy at work and was worried they couldn’t finish a code submission in time. I let them know there was no time limit and no minimum number of hours they had to put in; we wanted to make a decision in about two weeks, so anything before then would work for us if it worked for them. Everyone gave a thumbs-up, and we were hopeful for a good submission.

What more am I trying to learn?

At two weeks minus a day, we got a pull request and an email. We looked at the PR first, and, sadly, there was little to go on. The structure of the app was well organized, and we could see the path they would have taken (commits were frequent and came with good messages), but the meat of the app was missing. We couldn’t even run it, and we were pretty sure time had simply run out on them. I read the email, and they were very apologetic. They explained that they hadn’t had time due to work and personal issues, so their submission was incomplete. But then, over the next three paragraphs, they explained what they would have done.

  • They linked to articles on Minimax that they planned to use as inspiration. They wondered if Negamax might be faster and would have tried to find out. (A minimal sketch of the Minimax approach follows this list.)
  • They called out the parts they expected to be tough, based on experience, and noted what they would try if plan A failed.
  • They wrote how they would add tests for certain sections but not for others, and gave a quick explanation of what they called “test bloat” and why they tried to avoid it.
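
For context, and emphatically not the candidate’s code: here is a minimal sketch of what a Minimax tic-tac-toe engine can look like. The choice of Python, the board layout, and every name below are my own illustration, not anything from the submission.

    # A minimal Minimax sketch for tic-tac-toe (illustrative only).
    # The board is a list of 9 cells holding 'X', 'O', or None.

    WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
            (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
            (0, 4, 8), (2, 4, 6)]              # diagonals

    def winner(board):
        """Return 'X' or 'O' if someone has three in a row, else None."""
        for a, b, c in WINS:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, move) for the side to move.

        Scores are from X's point of view: +1 if X wins, -1 if O wins,
        0 for a draw. X maximizes, O minimizes. Every line is searched to
        the end, so an engine following the returned move can never lose.
        """
        w = winner(board)
        if w:
            return (1 if w == 'X' else -1), None
        moves = [i for i, cell in enumerate(board) if cell is None]
        if not moves:
            return 0, None  # board full: draw
        best = None
        for m in moves:
            board[m] = player                  # try the move...
            score, _ = minimax(board, 'O' if player == 'X' else 'X')
            board[m] = None                    # ...then undo it
            if (best is None
                    or (player == 'X' and score > best[0])
                    or (player == 'O' and score < best[0])):
                best = (score, m)
        return best

    if __name__ == '__main__':
        board = [None] * 9
        score, move = minimax(board, 'X')  # engine plays X on an empty board
        print(f"Engine opens at cell {move}; outcome with best play: {score}")

Negamax, which the candidate wondered about, is the same search folded into a single code path by negating the score at each level; on a game this small it mostly trims code rather than time. And to illustrate their “test bloat” point, here are a couple of hypothetical tests aimed only at logic that can actually be wrong, skipping the trivial glue:

    def test_winner_detects_rows_and_diagonals():
        assert winner(['X', 'X', 'X', None, None, None, None, None, None]) == 'X'
        assert winner(['O', None, None, None, 'O', None, None, None, 'O']) == 'O'
        assert winner([None] * 9) is None

    def test_engine_blocks_an_immediate_loss():
        # O threatens the top row at cell 2; the engine (X) must block there.
        board = ['O', 'O', None, None, 'X', None, None, None, None]
        score, move = minimax(board, 'X')
        assert move == 2 and score == 0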

The points were concise but still very clear. Normally, I would have replied with well wishes on their challenges and mentioned that I would reach out if we started another round. But after thinking about it, I wondered: what more am I hoping to learn?

We had a few early pieces of their code to see a little of their style, and their thought process showed how they would move forward and address pitfalls. Even the commit messages, as few as they were, showed clarity and consideration for the reader. I compared what this submission made clear against other, more finished examples and noticed that this submission with an explanation gave me just as much of a view into the candidate as others that had earned unanimous approvals. So I copied the candidate’s three paragraphs, added my thoughts and a link to the PR, and emailed it all to the pool of reviewers before I went to my next meeting. When I came back, I had three replies in the email chain that just said, “Ship it.”

The lesson

I thought about that submission a lot. Why did it work so well when it was such a big deviation from what we had planned? How did we get such a strong impression of the candidate with very little code? What am I really looking for in these code submissions? The takeaway wasn’t that they got the interview (they did great) or that they got an offer (they politely declined); it was the question itself: what are we really looking for in these tests?

It is a tough question. The change for us wasn’t immediate; it was a gradual widening of how submissions could be done. Do we even need code? How much code? Let’s try less. Let’s try none! Do we just skip to the phone screen and talk about how they would start?

We watched participation in the tests grow and saw how many great candidates came along with this more open-minded approach.

What we do now

Since then, I have moved away from coding tests that come before a conversation. There are now many avenues to see how people develop that don’t require them to spend a whole night coding just for a chance at an interview. Interviewing and screening will always be hard to do well; they take a lot of work and understanding. Now I try to take a moment before each step and remind myself what I am attempting to learn from the process I am working through, inside and outside of interviewing. There is always fluff that feels comfortable and obvious, but when we focus on what we are really trying to learn, and on the wide array of ways we can learn it, it becomes a lot easier to connect with people. And in the end, whether in hiring, coaching, or giving recognition, connecting with people is the goal for a manager.
