Building great tech teams is a really tough challenge, and a lot of engineers underestimate how hard it can be. Yet software is ultimately just the result of putting the right people in the room and keeping them motivated toward the goal. Building the team is tantamount to the entire mission of a startup, which is why VCs are so interested in the team during a fundraising process. You may have disruptive technology, early signs of traction, and a great brand, but no investor is going to give you the capital to pursue your ambitions if she doesn’t believe that you already have an awesome team in place.
I break down the process of building awesome teams into three phases: hiring, onboarding, and supporting growth. In this post, I primarily focus on that middle phase, onboarding, but I also talk a bit about what comes before and after.
Hiring
In the hiring phase of the process, you’re doing the hard work of sorting through a bunch of people who might be great, and trying to get to confident answers on who actually is great. I’ve written a bit about hiring and being hired before.
To me, the story of hiring is all about putting your values into action. Who you hire and who you don’t directly reflects what you actually value in your team members.
Since these values can sometimes be implicit, I recommend actually trying to write out what it is you value. As an example, here is an exercise I went through with an old team of mine: we tried to answer, as a team, which principles we agreed upon and how those translated into practices, the actual things we would do as a reflection of those principles.
With such a detailed picture of what our team was all about, it was pretty easy to talk about more tactical concerns like job descriptions and technical evaluation criteria. In particular, I think that getting really explicit about this stuff can make things like writing code tests and their passing criteria much simpler.
Onboarding
The next phase of this process is onboarding, and man, do people disagree about this one. Even my spell check tells me it should be spelled “on-boarding,” but I’m sticking to my guns. Here’s my 2¢.
Let’s start with what it is. To me, onboarding is the journey from zero to productive. In the beginning, a new employee can’t deliver any value, because he hasn’t even shown up to work yet! If you can get him sufficiently motivated and oriented to show up on day one, then you’re over a big hump. Hopefully, now you’ve got an engineer with some relevant technical skills physically in the office. Great!
Challenges
But that’s not remotely the end of the process. The number of technical topics a new engineer might need to learn to be useful on your team could include:
- The programming language/s you use
- The database/s you use
- The IDE/s you prefer
- Pervasively used programming techniques like functional or reactive programming
- Your build tooling
- Your continuous integration infrastructure
- Your deployment infrastructure and process
- Your source control usage (i.e. your git flow)
- Your cloud and/or cluster computing infrastructure
- Your monitoring and alerting systems
- Your solution for credentials and access control
That’s a lot of stuff. If you’re lucky, your new team member will come in already knowing some of it. They’ll probably know something about the language that you use and might even know a bit about your database. But in reality, if you don’t eventually address all of these topics, your new team member will end up feeling like “the new guy” for quite a while.
Shipping on Day One
One great solution I’ve seen is for people to focus on “shipping on day one.” This strategy tries to sweep as many of the above topics as possible into one busy day, executed in a pair with a current team member. Once a team member has shipped something, anything at all, then they’ve begun to conquer many of those challenging unknowns.
There are several variants on this strategy of shipping on day one. I once saw the guys at Wealthfront give a talk describing how they actually have job candidates merge to master and deploy to production during their interview! Those guys write some good tests.
At x.ai, we do something similar to this variant. We’ve found that our unique blend of cutting-edge AI research, principled functional programming, and a modern data architecture has its own challenges. So, we’ll often sit a candidate down with our actual code and pair program. Even though we do this during what is nominally the hiring process, if we hire the candidate, this is actually the beginning of the onboarding process. We are teaching that engineer how we put these tools together and for what reasons.
Fake Work
A further variant on the “shipping on day one” technique that I like is fake work. This is some piece of work that could be real work, but is not. Instead, it’s an intentionally designed exercise that helps a new team member get exposed to all of those technical bullet points from above. If you use two databases for two different types of data within your system, a good fake user story might involve reading from one and writing to the other.
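To make that concrete, here’s a minimal sketch of what such a fake user story might look like, written in Scala since that’s the sort of language a functional programming shop might use. The store names (ProfileStore, MetricsStore) and the in-memory implementations are hypothetical stand-ins invented for illustration; a real exercise would point at your team’s actual databases and client libraries, which is exactly what makes it useful onboarding material.

```scala
// A sketch of a fake-work user story, assuming a hypothetical setup where
// user profiles live in one store and aggregate metrics belong in another.
// Everything here is an illustrative stand-in, not a real system.

final case class UserProfile(userId: String, signupCountry: String)
final case class CountryCount(country: String, users: Int)

trait ProfileStore {
  def allProfiles(): List[UserProfile]
}

trait MetricsStore {
  def writeCounts(counts: List[CountryCount]): Unit
}

// In-memory stand-ins so the sketch runs with no infrastructure at all.
object InMemoryProfileStore extends ProfileStore {
  def allProfiles(): List[UserProfile] = List(
    UserProfile("u1", "US"),
    UserProfile("u2", "DE"),
    UserProfile("u3", "US")
  )
}

object InMemoryMetricsStore extends MetricsStore {
  def writeCounts(counts: List[CountryCount]): Unit =
    counts.foreach(c => println(s"wrote ${c.country} -> ${c.users}"))
}

object FakeWorkStory {
  def main(args: Array[String]): Unit = {
    // The story: read from the first store, aggregate, write to the second.
    val counts = InMemoryProfileStore
      .allProfiles()
      .groupBy(_.signupCountry)
      .map { case (country, profiles) => CountryCount(country, profiles.size) }
      .toList

    InMemoryMetricsStore.writeCounts(counts)
  }
}
```

The aggregation itself is beside the point; what matters is that completing the story forces the new engineer to touch both stores, the build, and the deployment path at least once, with a pair alongside.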
Let me be clear, though: when done properly, creating fake work for a new team member to work on is hard. I’m currently in the middle of writing a book on reactive machine learning, and one of the toughest requests from my publisher has been that I write exercises for readers to work through.
Writing exercises for technical book readers is hard for the same reasons that writing fake user stories for new engineers is hard. You need to make reasonable assumptions about their prior knowledge. You have to explain all of the background context required to navigate the problem. You have to decide where to cut off scope and find ways to minimize the distractions of incidental complexity. And often you’ll find that parts of your real-world system (machine learning or otherwise) are so painful to work with that you’d rather just change them than teach a new engineer how to live with those pain points.
The process of thinking through what new team members really need to know will also give you a chance to write up reusable materials that capture specific aspects of delivering software on your team. I generally prefer written documentation for this purpose to time-inefficient classroom-style sessions. Documentation can be read and re-read by a new engineer as he needs to, while still serving as backup materials supporting one-on-one pairing with an existing team member. All of the thought behind the knowledge a new engineer needs to amass gets captured in something like your wiki or other documentation tool.
Intentionally designed fake work also has the advantage of allowing you a context in which to introduce a whole bunch of other topics that you can almost guarantee your new team member has not seen before:
- Your specific problem domain
- The fundamental novelties of your system’s solution to the given problem
- The history of your application’s architecture and its evolution
- Known deficiencies in your application’s implementation
- The mechanics of your story execution process
- Social processes around how to get answers to questions
- Unique aspects of your software development process
- Expectations around operational ownership
A new team member needs to learn these topics to be productive, and she can only learn them from one of your more experienced team members. Of course, fake work isn’t the only way to pass on this knowledge. You could use real work as well.
Regardless of whether you use real or fake work to get new team members up to speed, you should execute roughly the same process: intentionally pick out a piece of work designed to teach important topics for being successful on your team and then ensure that a pair is available to work full-time with your new team member.
Measuring Progress
One dimension of onboarding that can be quite complicated is the amount of time it takes. This is a critical concern when planning an onboarding process. Much of the early work in onboarding should involve something like full-time pairing or at least constant access to a pair. Engineers aren’t cheap, and at rocket ship startups, like where I work, people are busy. Spending time on onboarding is a very real investment. So, people are rightly concerned about the possible costs.
How long should an onboarding process be?
As long as it takes.
But how do you know if this process should take two weeks or two quarters? Being a data-driven guy, I’d say you should try to gather some data. Specifically, someone, ideally a team lead or manager, should be responsible for regularly checking in with the new team member and discussing his progress towards specific learning objectives.
And just like those team values, those learning objectives should be explicit and shared. A new engineer should have a list of skills that he and his manager have agreed he’ll work on developing, and they should regularly record their discussions about his progress towards competency in those areas. Those learning areas could be general technical topics like dependency management, or they could be quite specific to your team, such as “that terrible old application that keeps breaking under load at 2AM.”
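As a sketch of what “gathering some data” could look like, here is one hypothetical way to record those check-ins as structured data rather than scattered notes. The objective names, the 1-to-5 competency scale, and the dates are all made up for illustration; the only real point is that objectives and progress are explicit enough to look at later.

```scala
import java.time.LocalDate

// Hypothetical structures for onboarding learning objectives and check-ins.
// The topics, the scale (1 = novice, 5 = can teach it), and the dates are
// illustrative; use whatever topics and scale your team actually agrees on.
final case class LearningObjective(topic: String, targetLevel: Int)
final case class CheckIn(date: LocalDate, topic: String, level: Int, notes: String)

object OnboardingProgress {
  val objectives = List(
    LearningObjective("deployment pipeline", targetLevel = 4),
    LearningObjective("that terrible old application", targetLevel = 3)
  )

  val checkIns = List(
    CheckIn(LocalDate.of(2016, 3, 1), "deployment pipeline", 2, "shipped first change with a pair"),
    CheckIn(LocalDate.of(2016, 3, 15), "deployment pipeline", 3, "deployed to staging solo")
  )

  // Which objectives haven't reached their target competency level yet?
  def stillRamping: List[String] =
    objectives.collect {
      case LearningObjective(topic, target)
          if checkIns.filter(_.topic == topic).map(_.level).maxOption.getOrElse(0) < target =>
        topic
    }

  def main(args: Array[String]): Unit =
    stillRamping.foreach(topic => println(s"still ramping on: $topic"))
}
```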
Note that this means that the new engineer’s manager should not be the new engineer’s primary pair. You want to establish some sort of a distinction between doing the actual work of onboarding and helping ensure that the onboarding process is successful.
The need for this distinction may not seem obvious, but it helps prevent some onboarding anti-patterns. In particular, some of the worst onboarding that I’ve ever seen has been done by some of the best engineers I’ve ever worked with. Technically excellent engineers are not always good at understanding the challenges that new engineers on the team face. Awesome engineers are often more focused on shipping than teaching, and new engineers end up politely confused and not yet productive in the ways that they want and need to be.
A manager can really help in these scenarios by talking to the customers of the new engineer’s work. None of this is meant to be in secret. This shouldn’t be some sort of closed-door process. If you choose to discuss these learning objectives as part of new team members’ OKRs, then this information will be public, by default.
The data driving this process should simply be a series of honest conversations like this:
Manager: Is the new guy awesome at delivering what you need yet?
Customer: No, not really.
Manager: Ok. Then, let’s figure out what we need to do to help him get there.
Everyone on the team should be aligned on the goal of getting each new team member up to speed and delivering at the same level as the old timers. Just as each of the old timers needed to learn how to be successful in their current roles, they should now all be aligned around supporting each new team member in his journey towards mastery of his new role.
A particular technique that can help with having blameless conversations about what’s working and what’s not in onboarding is a retrospective. This is a meeting where everyone suggests ideas for discussion about the good, bad, and other aspects of your onboarding process. The intent of this meeting is not to point the finger at people, but rather to be honest and constructive with each other about what you think your performance as a team in this area has been and how you’d like to improve it in the future.
Supporting Growth
I’ve spent most of this post on the topic of onboarding, but before I wrap up, I want to be clear about what it feeds into: the never-ending process of supporting your fellow team members in their pursuit of mastery. Great engineers want to become even better engineers. It is part of the (often implicit) contract that an employer makes when it hires an engineer: “We’re going to help you grow as a technical professional.”
I’ve written a fair bit about the importance of this aspect of an engineering career and some specific techniques that help with this activity. Helping engineers become the best that they can be is one of the crucial missions for a tech organization to take on. As I’ve said in other contexts, smart engineers who are in high demand should absolutely be picking between employers based on how dedicated those employers are to supporting engineers’ personal growth.
Whether you use study groups, communities of practice, technical mentoring, or any other technique to support engineers’ growth on your team, your actions should reflect the values that you established in your hiring process. If your organization hired well, you should now be part of a team of engineers who are capable of and interested in becoming awesome in all of the ways that they need to be. And if your team did a good job of onboarding, your teammates should have made it up the ramp to being participants in and contributors to all of the things that your team does to support each other’s growth.
Now is when the fun really starts. Now, you get to see the person who couldn’t get their build deployed just a few months ago save the day with an emergency hotfix, late at night, with no one around to help. Or maybe it’s the person who didn’t know any of the terminology of machine learning in his job interview who just shipped a new system of predictive microservices that self-validate their predictive capabilities.
Anyone who works in startups could tell you:
There’s something magical about seeing a rocket take off.
The story is the same whether that rocket ship is an individual engineer, a team, or even a whole startup. When you invest in people and really work to help them be successful, they can do amazing things.
The above post is based on my experiences at more tech startups than I’d like to admit, but it is primarily motivated by discussions with my teammates at x.ai around how to be awesome at onboarding. Thanks to everyone who shared their thoughts on this topic and gave feedback on this post.
If you want to see how we’re doing at hiring, onboarding, and supporting growth, because you too want to be an awesome engineer, I encourage you to check out our jobs page.
About the Author
This article was written by Jeff Smith, author of Machine Learning Systems @ManningBooks. Building AIs for fun and profit. Friend of animals.