How We Ship: Hootsuite's Former CTO

Our ultimate goal is to understand and share with you, at a nuts and bolts level, how great companies ship software to customers. To do that we're doing email interviews with engineering, product, and project managers at great companies to understand how they ship. If you'd like to share how your team ships software, get in touch!

Today's interview is with Simon Stanlake, who was CTO at Hootsuite in the early days. On Simon's watch, the product dev team grew from 8 to 125 people, and annual revenue went from $0 to $50 million. Simon shares the lessons he learned about shipping while growing the Hootsuite team, and the lessons he teaches to the young startups he works with now.

Codetree: Because shipping software is context dependent, can you let us know your team size and any other context you think might be important for readers?

Simon Stanlake: I now work with different teams of varying sizes - from 5 to 20 people. All are deploying web / mobile SaaS products. I also worked at Hootsuite, where we grew the team from 8 to 125 people with multiple sub-teams. I was fortunate enough to work with people who were open to experimentation and candid conversations about how things were going, so we spent a lot of time analyzing our performance and making incremental changes to our process. We got to see the pros and cons of a lot of different organizational structures and different approaches to shipping.

For this conversation, it’s probably most interesting to talk about the model I try to incrementally nudge teams towards in my advisory role. This model is largely drawn from the way we did things at Hootsuite, plus some things I've learned since I left.

The model is very context dependent - it’s the result of a value system applied to a set of conditions (business objectives, customer needs, market conditions, team size and makeup, etc.). I feel most of the practices are broadly adaptable, but the core component is constant introspection and adjustment - we constantly fine-tuned it to suit the present reality.

CT: How do you decide what to build next? What does that process look like?

SS: Tactically, I try to have a pruned and prioritized backlog ready at the task level (one day’s worth of work at most) that will keep engineering teams busy for at least a week ahead. Teams just pluck off the next most important thing. This is usually pretty easy - there’s always stuff to work on. The tricky part is making sure those tasks roll up into initiatives that are strategically important to the company. I use a higher level initiative (or Epic) backlog that tasks roll up into: one backlog for product initiatives, and one for technical initiatives. Tasks are either bugs or roll up into one of these two types of initiatives.
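
To make the structure concrete, here's a minimal TypeScript sketch of that two-backlog model. All the names (`Initiative`, `Task`, `nextTask`) are hypothetical, invented for this illustration rather than taken from any real Hootsuite system:

```typescript
// Hypothetical sketch of the two-backlog model: tasks are bugs or roll
// up into a product or technical initiative.

type InitiativeKind = "product" | "technical";

interface Initiative {
  id: string;
  kind: InitiativeKind;
  title: string;    // e.g. "Reduce Churn: onboarding revamp"
  priority: number; // lower = more important
}

interface Task {
  id: string;
  title: string;
  estimateDays: number;  // one day's worth of work at most
  isBug: boolean;
  initiativeId?: string; // absent for standalone bugs
  priority: number;
}

// Teams just pluck the next most important task off the backlog.
function nextTask(backlog: Task[]): Task | undefined {
  return [...backlog].sort((a, b) => a.priority - b.priority)[0];
}
```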

Determining the prioritization of strategic initiatives is done about one quarter in advance - most places I’ve worked are fast-paced enough that planning in detail further out than 3 months isn’t worth the effort - you just can’t anticipate what’s going to be important that far ahead.

I like to keep the prioritization process as close to "gut feel" as possible. I’ve seen lots of teams put together complicated algorithms and scoring mechanisms to try to calculate the most important thing to work on. My experience is that these are more effort than they’re worth - you get all the information you need to make a good decision in less time by having conversations with stakeholders and doing some thinking.

CT: Could you sketch out what shipping a new feature looks like for you end-to-end?

SS:

  • Start with a strategically important business objective (“Increase Revenue” or “Reduce Churn”).
  • Identify a success metric to track.
  • Somebody (typically Product leadership) proposes an initiative that will push the needle.
  • A high level concept is developed of what the finished feature might look like - e.g. a wireframe - something that is quick to make and allows everyone to understand what they’re building towards.
  • The team develops an MVP that answers the question “what’s the smallest thing we can ship that allows us to test our assumptions, and course correct if necessary?”
  • Designer / Product iteratively spec out the MVP, with feedback from dev.
  • Dev / Product break it into job stories; dev breaks these down further into technical tasks if necessary.
  • Create a backlog for the initiative.
  • The team then churns through the backlog until the MVP is shipped (2 weeks at most, hopefully much sooner).
  • The team then takes stock of what they’ve learned and adjusts course if necessary - continue along the path, make changes to the plan, or shut down if it turns out they’re dead wrong (a checkpoint sketched below).
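
As an illustration of that last checkpoint step, here is a hedged TypeScript sketch of deciding whether to continue, adjust, or shut down based on how the success metric moved. The `Checkpoint` shape and `courseCorrect` helper are invented for this example:

```typescript
// Hypothetical sketch of the post-MVP checkpoint: did the success
// metric move toward its target, and what should the team do next?

type Decision = "continue" | "adjust plan" | "shut down";

interface Checkpoint {
  metricName: string; // e.g. "weekly churn %"
  baseline: number;   // metric value before the MVP shipped
  observed: number;   // metric value after the MVP shipped
  target: number;     // value the initiative set out to reach
}

function courseCorrect(c: Checkpoint): Decision {
  const movedTowardTarget =
    Math.abs(c.target - c.observed) < Math.abs(c.target - c.baseline);
  if (movedTowardTarget) return "continue";
  // No movement suggests the plan needs changes; movement away from
  // the target suggests the assumptions were dead wrong.
  return c.observed === c.baseline ? "adjust plan" : "shut down";
}
```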

CT: Do you estimate how long features will take to build? If so what does that process look like? Do you track if features are behind or ahead of schedule? Do you track how long they took compared to how long you thought they would take?

SS: Yes and no - I try to estimate which initiatives can be completed in a quarter, but not when in the quarter. I also estimate what can be accomplished within one dev iteration (effectively a sprint - usually one or two weeks). I try to stay away from committing to a due date for a feature until we’re less than one iteration away from completion - the reason being that you should be in the mindset of pushing business objectives and generating learning, not shipping features. You will likely find that what you end up shipping looks a lot different from what you originally planned - so how can you possibly estimate?

That said, I always track what we expected to ship in a dev iteration against what was actually accomplished. Same at the quarterly level: what business objectives did we want to accomplish last quarter, and what did we actually achieve? Regular retrospectives are used to find causes for any discrepancies and to identify ways to improve throughput.
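
A minimal sketch of that planned-vs-shipped tracking, assuming tasks are tracked by id. The `Iteration` shape and `retroReport` helper are illustrative, not a real tool:

```typescript
// Illustrative sketch: record what each iteration planned vs. shipped,
// so retrospectives can dig into the discrepancies.

interface Iteration {
  name: string;      // e.g. "2016-W12"
  planned: string[]; // task ids expected to ship
  shipped: string[]; // task ids actually shipped
}

function retroReport(it: Iteration): { missed: string[]; completion: number } {
  const done = new Set(it.shipped);
  const missed = it.planned.filter((id) => !done.has(id));
  const completion =
    it.planned.length === 0 ? 1 : 1 - missed.length / it.planned.length;
  return { missed, completion };
}
```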

CT: Do you have a particular process for paying down technical debt and/or making significant architectural changes?

SS: Technical debt is always non-zero, so it needs constant attention. The trick is to apply the right level of effort, and to adjust as you go. Make sure the backlog is a combination of the technical and product roadmaps (and bugs). I do a quarterly review of the level of effort applied against each (e.g. 70/30 product/technical), and use this ratio as a guideline for backlog prioritization.
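
Here's a hedged sketch of that quarterly review, assuming completed work is tagged by kind and effort. `CompletedTask` and `effortSplit` are hypothetical names for this illustration:

```typescript
// Illustrative sketch of the quarterly effort review: compare the actual
// product/technical split of completed work against a guideline like 70/30.

interface CompletedTask {
  kind: "product" | "technical" | "bug";
  effortDays: number;
}

function effortSplit(done: CompletedTask[]): { product: number; technical: number } {
  const sum = (k: CompletedTask["kind"]) =>
    done.filter((t) => t.kind === k).reduce((acc, t) => acc + t.effortDays, 0);
  const product = sum("product");
  const technical = sum("technical");
  const total = product + technical || 1; // avoid division by zero
  return { product: product / total, technical: technical / total };
}

// If the guideline is 70/30 and effortSplit reports 55/45, the next
// quarter's backlog prioritization leans more heavily toward product work.
```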

It’s important to apply the same framework to technical debt / architectural changes as you do to product initiatives - think about the business objective, success metric, and MVP; commit to continuously shipping increments and measuring your success against your metrics. This is often difficult for technical initiatives - for example, how do you (effectively and easily) measure the business improvement made by introducing Vue.js? I ask teams to at least think about this and do their best, to drive home the concept that everything we do is about making the business successful.

CT: How do you deal with delivery dates and schedules?

SS: I'm on the "it ships when it’s ready" end of the spectrum - but you can protect yourself from the risks of this approach by committing to continuously shipping something at each dev iteration.

CT: How many people are in your product + dev team? What are their roles?

SS: Typical team looks like:

  • 3-5 developers
  • PM
  • Designer
  • QA developer
  • Operations dev

At Hootsuite we were at the scale where each team was (likely) working with their own infrastructure, so some team-level ops support was necessary. We experimented a lot with this - the best mix seemed to be a central ops team responsible for the building-block level of infrastructure (e.g. security, raw containers, monitoring and alerting) and distributed ops on each team who were experts in the initiatives the team was driving, and could coordinate with central ops to figure out the best way to support them.

CT: Do you organize people by feature, or by functional team, or some other way?

SS: I am on the feature-based-organization end of the spectrum. The main advantage here is that teams can develop what I call a “non-rational-attachment” to a feature or product component. Their emotional attachment to, and depth of knowledge in, what they’ve built can push them through obstacles in ways you don’t see in teams formed around skill specialization, which may work on many different parts of your product. There are trade-offs though - primarily the risk of information silos within your organization, difficulty synchronizing between teams, and of course non-rational-attachment can cause teams to go off the rails if not monitored.

CT: How do you know if a feature you’ve built was valuable to customers? What metrics do you use if any?

SS: “Value” can translate to many different metrics - if I had to pick one number to track as a proxy for value, I’d use churn. If customers who use your new feature are stickier, you’ve delivered value.
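
A hypothetical sketch of that check: compare churn between customers who adopted the new feature and those who didn't. All field names here are illustrative:

```typescript
// Hypothetical sketch of the churn-as-value check: are customers who
// adopted the new feature stickier than those who didn't?

interface Customer {
  usedFeature: boolean;
  churned: boolean; // churned during the measurement window
}

function churnRate(group: Customer[]): number {
  if (group.length === 0) return 0;
  return group.filter((c) => c.churned).length / group.length;
}

// Positive result = adopters churn less = feature likely delivered value.
function featureValueSignal(customers: Customer[]): number {
  const adopters = customers.filter((c) => c.usedFeature);
  const others = customers.filter((c) => !c.usedFeature);
  return churnRate(others) - churnRate(adopters);
}
```

Note that feature adopters tend to be engaged customers to begin with, so a careful analysis would control for that; this is only the directional signal described above.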

CT: Do you have a dedicated testing role? How important is automated testing in your process? What tools do you use for testing?

SS: Automated testing is critical, non-negotiable, foundational (how else can I say it?). There’s no feeling like deploying a change in a system that is well covered by reliable, performant automated testing - it allows you to spend more time in the advanced human part of your brain instead of the fight-or-flight reptilian part.

I aim to have QA Engineers on each team (as best we can - at Hootsuite we could never find enough). The primary role of the QA Engineer is maintaining the automated testing infrastructure. Part of this is mentoring the team on practices that enable automated testing, and working together to develop shared responsibility for creating tests and measuring how we’re doing. I feel leadership and communication are very important qualities for a QA Engineer. It's also a huge benefit to have a strong QA advocate on the development team. I’ve seen instances where the trust between dev and QA breaks down, and it’s not pretty. Things either grind to a halt or quality goes out the window.

CT: Does your team create functional specs for new features? If so what do they look like? What do you use to create them? What does the process look like for reviewing them?

SS: Not really - I'm definitely on the “working software over extensive documentation” end of the spectrum here. We certainly use tools like InVision to quickly try out ideas at low cost. I've also found Google Slides to be a useful mockup tool if you want to quickly hack something together.

CT: What about technical specs? What do they look like? Who reviews them? Who has final say on the spec?

SS: Same thing - by agreeing to work in very small iterations and get working software out as quickly as possible, you reduce the need for technical specs.

CT: What’s the one tool your team couldn’t live without, and why?

SS: I have to say GitHub - it sits at the center of everything we do in our process, from planning to development, review, testing, and deployment.

CT: What tools are important to you in your engineering process?

SS: In addition to GitHub, Slack for realtime communication, Jenkins for job management. Having a feature flagging system is absolutely invaluable.
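
Since flags come up repeatedly in this interview, here's a minimal hand-rolled feature flag sketch in TypeScript. The flag names and `isEnabled` helper are invented for illustration; real systems (homegrown or off-the-shelf) add targeting rules, persistence, auditing, and a UI:

```typescript
// A minimal feature flag sketch: flags default off, with an optional
// allowlist so specific groups (e.g. QA) can see a dark-launched feature.

type FlagStore = Record<string, { enabled: boolean; allowlist?: string[] }>;

const flags: FlagStore = {
  "new-composer": { enabled: false, allowlist: ["qa-team"] },
};

function isEnabled(flag: string, userGroup?: string): boolean {
  const f = flags[flag];
  if (!f) return false; // unknown flags default to off
  if (f.enabled) return true;
  return userGroup !== undefined && (f.allowlist ?? []).includes(userGroup);
}

// Unfinished work can merge to master and deploy "dark" behind the flag:
if (isEnabled("new-composer", "qa-team")) {
  // render the new composer
}
```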

CT: Do you triage issues? If so what does that process look like? How often do you do it?

SS: Depending on issue volume, usually a couple of days a week. Time-box it to 30 minutes. Try to get everyone present - usually at least the reporter, a developer, the PM, and customer service. Go through new issues and prioritize based on customer impact - either add them to the immediate backlog if they outrank current job stories, or put them in the icebox.
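
That routing rule is simple enough to sketch. The `Issue` shape and `triage` function below are hypothetical, just to make the decision explicit:

```typescript
// Illustrative triage rule: an issue jumps into the immediate backlog
// only if its customer impact outranks the job stories already there.

interface Issue {
  id: string;
  customerImpact: number; // higher = more customers affected / worse severity
}

function triage(issue: Issue, topBacklogImpact: number): "backlog" | "icebox" {
  return issue.customerImpact > topBacklogImpact ? "backlog" : "icebox";
}
```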

"Feature teams" have the advantage of developing what I call a “non-rational-attachment” to a feature. Their emotional attachment and depth of knowledge in what they’ve built can push them through obstacles in ways you don’t see if teams are formed based around skill specialization.

CT: What part of your current process gives you the most trouble? What is working the best?

SS: Something that I struggle with (it came up again today) is standups. They can be extremely effective for developing team cohesion, sharing knowledge, and avoiding collisions. On the other hand, effectively encouraging active participation (vs. just staring at the floor till it’s over) can be difficult. Also, I’ve found that no matter when they’re scheduled, you’re always ripping at least one person away from a task they're deep in thought on - a costly interruption. I’ve experimented with using Slack to post daily updates, which solves the interruption problem, but it does nothing for the active participation problem (does anyone even read them?) and in fact introduces a new one: by removing the face-to-face time, we experienced a drift in alignment and purpose - the team felt a decreased sense of engagement with the goals we were driving towards.

Going forward I would like to continue to experiment with asynchronous standups (e.g. via Slack) but find ways to encourage active ingestion of the content, and counteract the “drift” by creating other opportunities to come together and align around goals.

CT: What is something you do that you feel is different from what others are doing?

SS: We pride ourselves on our ability to have candid conversations, be open to experimentation, and hold ourselves accountable for incrementally improving. I feel a lot of teams treat retrospective time as just-another-meeting, but for me and the teams I was on, we really looked forward to it, really valued it, and took it very seriously.

CT: Where do you get feature ideas from?

SS: Typically from the product management role - they are the result of a synthesis of the business objectives and a lot of conversations / research with inside and outside stakeholders. That said, I’ve seen them come from everywhere in the organization. Everyone gets a chance to pitch during quarterly roadmap reviews.

CT: If you had to give your shipping process a name, what would it be?

SS: I like non-dogmatic-agile - I’m gonna steal it :) I’m firmly in the camp that agile is not a set of practices but is a mindset and value system that is optimized to lower risk and ultimately improve business outcomes. The actual practices that develop are a result of applying that mindset to your specific business problems.

CT: When you are building new features how do you handle bugs that get found before the feature ships? After it ships?

SS: Ideally we would be releasing a new feature to a user group we’ve identified as early adopters - in this case 100% polish isn’t a requirement, so some bugs can go out with the shipped feature. We would triage them and prioritize in the same way we do for regular bugs. If a bug is identified after a release, depending on the severity, we’d also have the option of toggling a feature flag to turn the feature off before too many people stumbled on it.

CT: What does your branching and merging process look like?

SS: I push for a mantra of “everyone merges to master once per day”. This requires a lot of discipline in how you break up your tasks, schedule code reviews and use feature flagging but it’s possible. If you can get there, it’s a huge boost to your adaptability.

The master branch is considered good-to-go - everything merged there is headed to the production environment ASAP - definitely the same day, if not sooner. Teams or individuals can create feature branches but need to merge to master (not just from master) regularly - this keeps other team members informed about what else is going on in the codebase and catches potential merge conflicts between teams/devs as early as possible.

CT: How is new code deployed?

SS:

  1. automated testing run locally
  2. PR created
  3. code review
  4. merge to master
  5. automated testing
  6. deploy to production w/ feature flag off
  7. turn on for QA in prod environment and manual QA
  8. incrementally turn on feature flag (sketched below)
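
For step 8, a common approach - and only an assumption about how Simon's teams did it - is deterministic percentage bucketing, so each user gets a stable answer as the rollout ramps up. `hashToBucket` and `inRollout` are illustrative names:

```typescript
// Hedged sketch of an incremental rollout: bucket users deterministically
// so the same user always gets the same answer at a given percentage.

function hashToBucket(userId: string): number {
  // Toy hash for illustration - real systems use a stronger hash.
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) % 100;
  return h; // 0..99
}

function inRollout(userId: string, rolloutPercent: number): boolean {
  return hashToBucket(userId) < rolloutPercent;
}

// Ramp: deploy with rolloutPercent = 0 (step 6), let QA in via an
// allowlist (step 7), then walk the percentage up 1 -> 10 -> 50 -> 100.
```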

CT: How often do you ship new code to customers?

SS: As often as possible. Teams I’ve worked with have shipped as many as 20 times per day.

CT: Do you use feature flagging? Staging servers?

SS: Yes and yes. Couldn’t live without feature flagging. I could live without staging - though it is useful as a basis for automated testing and for manual QA in some cases. I feel there’s a better option that utilizes containerized versions of the stack for automated testing (then throws them away) and more advanced feature flagging and test isolation in the real production environment. We always struggled with drift between staging and production and I constantly felt we could save time and uncover problems faster by going straight to production.

CT: Do you use Sprints? How long are they? How is that working out?

SS: Sometimes. For example, in a customer-success-focused team (i.e. bug bashers) it makes sense to do away with sprints, since your objective is to be highly adaptable and just focus on churning through tickets as fast as possible. You’re not generally driving towards a product objective - it’s the bugs that determine your direction. With feature teams, I’ve found that sprints are helpful in that they force a regular check-in on how the team is performing against its higher goal, and force a discussion on how the team can improve or whether it needs to adjust its roadmap based on what it has learned. It’s a function of the amount of planning you’re doing - a sprint gives you a cadence for checking in on the plan. In a bug-bashing team, you’re creating a new plan almost every day, so sprints become less helpful.

CT: Are you guys remote? If so, how’s that working out for you?

SS: I have worked with remote developers and had varying levels of success. I believe my role is unlocking the creative potential of the people on my team, and if working remotely is a key part of that equation for some team members, I feel I need to support it. There are side effects though which you need to be conscious of and work to counteract. Some general observations:

  • If possible, the team should be entirely co-located or entirely remote. A combination is possible; it just means that as a manager you have to be more creative in order to reduce feelings of isolation for remote team members, help keep them aligned with business objectives, and encourage local team members to “work out loud”.
  • There's increased risk with less experienced developers - it's hard to substitute for face-to-face time with junior team members.
  • It's even more important to keep your task size small - otherwise you risk drifting off course, and the team may start questioning commitment levels (“that guy hasn’t created a PR for two weeks… is he even doing anything?”).
  • If the team is experienced and has a clearly defined backlog, it can be awesome.

Simon is advising a handful of startups. If you're interested in learning more about how he can help you, get in touch with him on LinkedIn.


Can we improve this? Do you want to be interviewed, or want us to interview someone else? Get in touch here!
