How We Ship: AdStage

Our ultimate goal is to understand and share with you, at a nuts and bolts level, how great companies ship software to customers. To do that we're doing email interviews with engineering, product, and project managers at great companies to understand how they ship. If you'd like to share how your team ships software, get in touch!

Today's interview is with Gordon Worley, who's Head of Operations at AdStage. He shares how the development team at AdStage ships software. AdStage delivers a paid advertising management, automation, and analytics platform that companies depend on to optimize and understand their ad spend.

Codetree: Because shipping software is context dependent, can you let us know your team size and any other context you think might be important for readers?

GW: AdStage has about 30 employees split evenly between engineering and non-engineering roles. We deliver a paid advertising management, automation, and analytics platform that our users depend on to optimize and understand their ad spend. Our applications have JavaScript frontends that communicate with backend services via both webhooks and traditional XHR requests; those backend services are written in a mix of Ruby, Clojure, and Elixir. Because our customers depend on us to automate bid adjustments, ad rotations, and ad schedules, we need to be always on, highly available, and able to provide consistent behavior our customers can count on. We can't afford to ship broken code and fix it later, since by then we might have misspent millions of dollars in ad spend for our customers.

CT: How do you decide what to build next? What does that process look like?

GW: There are two main ways we decide what to build. The less common route is experimentation through internal hack days and allowing engineers time to develop "spikes" that are high risk, high reward. These sometimes produce obvious winners that we then carry through to production. Most of the time, though, we simply build what our customers tell us they want. Our VP of Product, Paul Wicker, organizes customer feedback sessions, collects feedback from customer success and sales, and talks constantly with engineering about what they think is necessary to maintain our products and deliver new features in a timely manner at a level of quality customers will be satisfied with. He converts this information into feature scores made up of many components, and the highest-scoring features become our pool of candidates. Finally, he meets with engineering and customer success in planning meetings to discuss and decide what features we'll work on. Decisions are generally made by consensus, and if we don't have buy-in from everyone involved in a feature, we'll usually keep exploring it to understand why until everyone agrees whether or not to work on it.

CT: Could you sketch out what shipping a new feature looks like for you end-to-end?

GW: As previously mentioned, product, customer success, and engineering collaborate to decide what features to build; from there the conversation switches to one of implementation. Sometimes this involves writing PRDs (product requirement documents) and breaking the work down into stories. Other times it results in a single "story" to build the feature, with the details worked out during development through constant interaction between product, customer success, and engineering, which reduces planning overhead and keeps us on the right track.

Once engineering and product feel a feature is done and ready to ship, it's handed over to our QA engineers. They test both that the new feature works as expected and that no existing functionality is broken by it. They do this with a mix of automated and manual testing, in addition to the automated tests engineering builds during development to verify the correctness of the code.

Engineering and QA work on feature branches in our Git repos, and when QA accepts a feature they merge the feature branch into master. At this point they will trigger a deployment through a custom-built release system. You can read more about how that system works in this Medium article.

CT: Do you estimate how long features will take to build? If so what does that process look like? Do you track if features are behind or ahead of schedule? Do you track how long they took compared to how long you thought they would take?

GW: Yes. We've tried many estimation techniques, but what we've settled on is reassessing progress daily and estimating the remaining days of work. We used to track how accurate our estimates were, but we no longer do; instead we use sentiment to notice when we're failing to estimate accurately. If we've been under- or overestimating a lot lately, we'll typically revise estimates up or down until they seem to be generally in alignment.

This system obviously requires a lot of trust between engineering and product: trust that engineers are working hard and that our product manager isn't making unjustified demands of engineering. Generally everyone knows and understands the urgency of a feature and the quality we're shooting for, and has empathy for the competing needs of employee happiness, time to market, the sales cycle, and of course customer satisfaction. This lets us worry less about estimates and more about everyone doing their best together to deliver a great customer experience.

CT: Do you have a particular process for paying down technical debt and/or making significant architectural changes?

GW: Mostly this happens whenever engineering makes the case that it's necessary. Since engineers know as well as everyone else in the business what our timelines and business needs are, they are just as capable as anyone of making educated decisions about when non-functional changes are needed. As always, decisions are made in consultation with product and customer success, and generally if there is a strong enough case for dealing with tech debt or making architectural changes we'll do it.

Many times these changes are done as "spikes" by single engineers who are given a week or more to work on making the change. User-facing feature development continues in the meantime, and when the spike is complete it's evaluated and, if everyone agrees, merged in. This sometimes involves a certain amount of rework on features depending on the nature of the change, but again everyone is pretty happy to do it since we wouldn't merge in the spike if everyone didn't agree it was necessary.

CT: How do you deal with delivery dates and schedules?

GW: Basically we ship when things are ready. We tried hard to set dates and reduce scope to hit them, but in the end we found all this did was get us to deliver shitty software that didn't do what anyone needed it to do. Our cycle is generally to build things out to completion, give them time to soak in production, and only then formally announce that products have shipped. We value quality highly and would rather delay than ship something we aren't proud of, because this is what our customers have told us they want, both with their words and their money.

CT: How many people are in your product + dev team? What are their roles?

GW: We do development in groups of 3-4 software engineers working on stories focused on related subsets of the product. We partition our stack in different ways at different times depending on our quarterly business goals. Each of these product development pods shares the same product manager, design engineers, QA engineers, and SREs. This means that engineers in the pods take on primary responsibility for their portion of the product and are supported by others with functional, cross-product responsibilities. Product engineers drive feature development, and take on more or less specialized work as they are able and as needed by people in other roles.

For example, as Head of SRE, how much I'm involved with a feature varies a lot based on its needs and the engineers working on it. Sometimes a feature needs new software installed on our servers or integration with external services that could require my expertise, but some engineers are familiar enough with these parts of our systems that they can mostly do the work themselves and may just need me to verify their work or answer questions. Others may be less familiar and need more help. Similarly, some engineers are better at design than others and so may need more or less help from our design engineers.

CT: Do you organize people by feature (e.g. cross functional feature teams made up of devs, PMs, designers etc.), or by functional team (e.g. front-end team, backend team, etc), or some other way?

GW: Both. See previous question, but we have pods of product engineers working on features who are supported by functional teams where specialization has proven necessary (design, QA, SRE, support).

CT: How do you know if a feature you’ve built was valuable to customers? What metrics do you use if any?

GW: We ask. This gets us some of the richest feedback about how customers interact with features because it includes their thoughts on their experience with the product. But, because that's time consuming, we also collect a lot of metrics via our product to understand user behavior. Using Mixpanel events, Fullstory, and custom logging and analysis of user transaction data in Sumo Logic, we have a pretty good idea of what our users are doing and we can use this to see trends. We can pretty easily see if users take advantage of new features and track adoption rates. This then feeds into customer success, who may offer training; marketing, who may highlight features; and product and engineering, who may modify a feature if the data suggests users are struggling with it in some way.
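For readers who want a concrete picture of what that kind of product instrumentation can look like, here is a minimal sketch of a frontend emitting a Mixpanel adoption event. The token, event name, properties, and helper functions below are illustrative assumptions, not AdStage's actual instrumentation.

```javascript
// Minimal sketch of frontend event tracking with mixpanel-browser.
// Token, event name, and properties are hypothetical examples.
import mixpanel from "mixpanel-browser";

mixpanel.init("YOUR_PROJECT_TOKEN"); // assumption: token comes from app config

// Tie events to the signed-in user so adoption can be tracked per account.
function identifyUser(userId, plan) {
  mixpanel.identify(userId);
  mixpanel.people.set({ plan });
}

// Fire an event whenever a user exercises a new feature.
function trackFeatureUsed(featureName, source) {
  mixpanel.track("Feature Used", {
    feature: featureName, // e.g. "automated-bid-rules" (hypothetical)
    source,               // where in the UI it was triggered
  });
}
```

Events like these are what make it possible to "pretty easily see if users take advantage of new features" and to watch adoption trends over time.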

CT: Do you have a dedicated testing role? How important is automated testing in your process? What tools do you use for testing?

GW: Yes. All of our code is covered by automated functional tests and we perform continuous integration with Solano to ensure we never ship code with known errors or regressions.

Our dedicated QA engineers then focus primarily on integration testing. They test new features on sandbox versions of our full system where they can do anything our customers might later do in production. They have a slightly different mindset than our product engineers and focus on how things might fail rather than how to make things work. They then manually test features in these sandbox environments and over time automate the tests they find themselves consistently performing to increase the speed at which they can confidently test new features.

This doesn't mean we never ship bugs, but most bugs that would have made it to production without them are caught and fixed before a user ever sees a feature.

CT: Does your team create functional specs for new features? If so what do they look like? What do you use to create them? What does the process look like for reviewing them?

GW: Sometimes. There's always a conversation and everyone knows what's expected, and for complicated features there's often a PRD laying out what the feature needs to be able to do and explaining the motivation behind it. These are just Google Docs. There's no review process per se, but they evolve as part of the continuing conversation during development.

CT: What about technical specs? What do they look like? Who reviews them? Who has final say on the spec?

GW: Not usually. Engineers working on features generally discuss this and may make some notes in the GitHub issue, but it's usually not explicitly documented. We only hire mid-level and senior engineers, so everyone is skilled enough to work out acceptable technical solutions to deliver features, and since everyone is in constant communication the design generally evolves with the project. There's no one with final say, but in cases of uncertainty we generally defer to whoever has the most experience with a particular technical area, with the understanding that we're always experimenting and may later do something different depending on the results during implementation.

CT: What tools are important to you in your engineering process?

GW: GitHub, Solano, ZenHub, Google Docs, Google Sheets, Aha!, Slack, and various custom-built tools for running sandbox review environments and deployment.

CT: Do you triage issues? If so what does that process look like? How often do you do it?

GW: Yes. When bugs or other emergent issues appear, they go first to a support engineer who talks with everyone involved in a feature to make sense of the issue and prioritize it appropriately. These issues are then reviewed every couple of weeks with customer success, product, and engineering to make sure nothing has slipped through the cracks or been misprioritized. Those issues are then generally dealt with by the engineering feature pod working on that particular part of the product.

CT: What part of your current process gives you the most trouble? What is working the best?

GW: I'd say our biggest challenge is figuring out how to scale. Our current methods work well so long as we can cut our org into slices small enough to communicate and reach consensus, but we estimate they will break down somewhere around 50 engineers, when the rows and columns of our current matrix organization become too big. So far steady growth in the engineering team hasn't forced the issue, but we know it will eventually crop up and have to be dealt with.

CT: What is something you do that you feel is different from what others are doing?

GW: I think the big thing that's different about our team from others I've seen is the level of trust, empathy, and skill. It's something we look for specifically when hiring: we're not willing to tolerate hot-shot engineers who can't work well in a team, nor are we willing to hold on to great people who aren't able to hold their own. Every one of our engineers could be a team lead at another company, and we take advantage of that to build a strong team that's able to work without heavy-handed coordination, because everyone knows how to align to our goals and cooperate without egos and politics getting in the way.

CT: Where do you get feature ideas from?

GW: Customers, other products, our own ideas about what customers might like, and playing around with what we think might be fun to build.

CT: If you had to give your shipping process a name, what would it be?

GW: Geez. I mean we don't really have a name for it. I'd be hesitant to call it some form of Agile even though it shares some of the goals of Agile and a little of the form. Whatever you might want to call it I think the distinguishing features are high trust, empathy, technical skill, alignment, and communication.

CT: When you are building new features how do you handle bugs that get found before the feature ships? After it ships?

GW: Bugs found by QA go right back to the engineers working on the feature. QA happens in parallel with development and often starts before a feature is complete, and even if an engineer has another project they're starting on, finishing an existing feature is the top priority.

CT: What does your branching and merging process look like?

GW: We have a shared development branch in each repo called "fix". Work generally branches off from there in feature branches that are then merged back into fix. Fix is ready to merge into master at any time but is kept separate so that high-priority bug fixes can go into master and be deployed immediately, whereas fix goes through some regression and smoke testing by QA before it is merged to master.
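As a rough illustration of that flow, the commands below sketch how a feature branch and a hotfix might move through this kind of setup. The branch names other than fix and master are hypothetical, and this is a sketch of the described policy rather than AdStage's actual tooling.

```bash
# Feature work: branch off the shared development branch ("fix")
git checkout fix && git pull
git checkout -b feature/new-dashboard        # hypothetical branch name

# ...develop, then merge back into fix once QA accepts the feature...
git checkout fix
git merge --no-ff feature/new-dashboard

# After QA's regression and smoke testing, fix is merged down to master
git checkout master
git merge --no-ff fix

# Urgent, high-priority bug fixes skip fix and go straight to master
git checkout master
git checkout -b hotfix/broken-bid-sync       # hypothetical branch name
```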

CT: How is new code deployed?

GW: After CI has passed, QA performs testing and if everything looks good the code is merged down and deployed to production.

CT: How often do you ship new code to customers?

GW: Code ships multiple times a day. Our main limitations on shipping right now are how quickly we can complete development and QA. We could theoretically put out about 10 deploys a day given our current system's lag times and verification steps.

CT: Do you use feature flagging? Staging servers?

GW: We sometimes use feature flagging, but it's mostly to let new features soak with internal and friendly users before we make them generally available. We use sandbox review environments and a staging environment. Staging is primarily restricted to use by SRE to test and verify deployments and system integration that can't be accurately simulated in the sandbox, with most QA happening on the review sandboxes.
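To make the soak-style flagging concrete, here is a minimal JavaScript sketch of a flag gate that limits a feature to internal and allow-listed users before general availability. The flag name, helper function, config shape, and internal-domain check are all illustrative assumptions rather than AdStage's actual implementation.

```javascript
// Hypothetical feature-flag gate for letting a feature "soak" with
// internal and friendly users before it becomes generally available.
const FLAGS = {
  // assumption: flags and rollout rules would come from a config service
  "new-reporting-dashboard": {
    enabled: true,
    internalOnly: true,
    allowedAccountIds: ["acct_123", "acct_456"], // friendly customers
  },
};

function isFeatureEnabled(flagName, user) {
  const flag = FLAGS[flagName];
  if (!flag || !flag.enabled) return false;

  // Internal users (assumption: identified by email domain) always see it.
  if (user.email.endsWith("@adstage.io")) return true;

  // Otherwise only explicitly allow-listed accounts get early access.
  if (flag.internalOnly) return flag.allowedAccountIds.includes(user.accountId);

  return true; // generally available
}

// Usage: render the new dashboard only when the flag allows it.
// if (isFeatureEnabled("new-reporting-dashboard", currentUser)) { ... }
```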

CT: Do you use Sprints? How long are they? How is that working out?

GW: Not really. We sometimes group work into two-week sprints that we expect to complete in that time, but this is primarily a planning convenience, not a commitment. We'll change things up whenever necessary.


Can we improve this? Do you want to be interviewed, or want us to interview someone else? Get in touch here!
