How We Ship: Allay

Our ultimate goal is to understand and share with you, at a nuts-and-bolts level, how great companies ship software to customers. To do that, we're conducting email interviews with engineering, product, and project managers at great companies to understand how they ship. If you'd like to share how your team ships software, get in touch!

Today's interview is with Andrew Draper, a co-founder and head of Product at Allay. He shares how the development team at Allay ships software. Allay uses your company's health data to recommend health insurance plans that provide major savings through lower rates and annual reimbursements.

Codetree: Because shipping software is context dependent, can you let us know your team size and any other context you think might be important for readers?

AD: Our product team is made up of 5 people: 4.5 devs and 1 designer. Because health benefits in the US have been largely untouched by the internet industry, a lot of the logic around our product is quite convoluted. Our biggest challenge is simplifying that down to an easily understood and easy-to-use product. We also have 3 different types of users (brokers, employers, and employees), which means 3 different levels of knowledge and 3 different (but related) types of tasks.

CT: How do you decide what to build next? What does that process look like?

AD: We plan in 2 week increments, 2-3 increments at a time. We also have a weekly product meeting that includes 3 people: a BD person, a designer/product manager and developer. That way we get all points of view and understanding of the pain we’re looking to solve.

Typically one of us will have had something brought to our attention via customer support or sales. This happens through direct questioning via email/phone, in-app messaging, or just observations of how people are using the product. This gives us a full picture of how people are evaluating us as a potential fit, and how customers who have already entrusted us with their business see us. We keep a Trello board for bigger-picture items, and during our weekly product meeting (depending on where things are at, we may only do this bi-weekly) we debate how best to prioritize and what can give us the biggest boost at a given time. We then split those bigger items into GitHub issues (which we primarily view in Codetree’s task board view).

We also have 1 dev dedicated to support, which tends to take up about 50% of their time (it used to be more, but thankfully we’ve been making some progress!). It keeps interruptions to a minimum across the team. Each support request/bug is assessed, and then either a GitHub issue is created and assigned to the appropriate person on the team, the dev on support jumps in and cleans it up immediately, or we bring in other people to help out (our least favorite option, given it’s the most disruptive).

CT: Could you sketch out what shipping a new feature looks like for you end-to-end?

AD: Generally a new feature’s lifespan looks like this:

  1. Gather information about the feature
  2. Create a spec as to what we want to accomplish and what the outcome should be
  3. Create mockups of the feature and how it fits into the rest of the product
  4. Gather feedback on the mockups
  5. Iterate on the mockups
  6. Hand off mockups to create a technical spec
  7. Break the technical spec into tasks
  8. Assign the work
  9. Create a feature branch
  10. Do the work
  11. Write tests
  12. Submit a pull request and assign a reviewer
  13. Test, review and either define remaining issues or ok the PR for release
  14. Merge to the develop branch
  15. Test on staging
  16. Merge to master and deploy

CT: Do you estimate how long features will take to build? If so what does that process look like? Do you track if features are behind or ahead of schedule? Do you track how long they took compared to how long you thought they would take?

AD: In a general sense yes (weeks/months), but not in a very granular manner (rarely an exact date). We’ve found that being terribly specific leads to missing deadlines more often than not. It works out best if we spend more time earlier in the process (steps 1-7) and focus on finding the quickest path to getting something into the hands of the person who’ll be using what we’re working on.

CT: Do you have a particular process for paying down technical debt and/or making significant architectural changes?

AD: This has been an ongoing issue for us over the last 2 years, as we’ve significantly pivoted while keeping the base of the product intact. In general we’ll schedule and prioritize it like any other feature, or we’ll schedule it in as we’re working on a related feature or in a related area of the product.

CT: How do you deal with delivery dates and schedules?

AD: We start by having a target of when we’d like the feature to be complete and released. We re-assess this on a weekly basis and will look at ways to reduce the scope to fit into that target or decide to push the target if we feel that’s the best course of action. We usually try to err on the side of getting the feature into the hands of users as early as possible while still delivering enough value that they’ll find it useful.

CT: How many people are in your product + dev team? What are their roles?

AD: There are 5 people on our product team currently: 1 designer/front-end dev, 2 front-end devs, and 2 back-end devs. We’ll often split PM duties between the designer/FE dev (me) and our lead dev/architect, who is also primarily a FE developer.

CT: Do you organize people by feature (e.g. cross functional feature teams made up of devs, PMs, designers etc.), or by functional team (e.g. front-end team, backend team, etc), or some other way?

AD: For the most part we’re a functional team, with a few exceptions now and again. I’ve worked in both types of teams, and each has its own advantages/disadvantages, mostly dependent on the type of product. Allay is a deeply technical product: there’s a ton of old-fashioned thinking we have to condense and evolve into something that can be done via software instead of spreadsheets and the back of napkins. We’ve found functional roles to be successful primarily because of the additional depth that focusing on backend or frontend can bring to a given task/challenge.

CT: How do you know if a feature you’ve built was valuable to customers? What metrics do you use if any?

AD: It’s dependent on what the feature is, but typically it’ll be through direct feedback or through engagement metrics that we’ve decided to track for that feature.

CT: Do you have a dedicated testing role? How important is automated testing in your process? What tools do you use for testing?

AD: No, each dev is responsible for testing their own code, writing tests, and creating a pull request. We’ll also release to a staging server for additional feedback/testing by other members of the team. Once we’re comfortable, a PR is created and assigned to someone else, who’ll do a code review to further ensure we’re not going to blow anything up when it’s deployed.

We actually have 2 staging environments: the first is for testing feature branches with staging data (affectionately named Crunk); the second is more of a true staging environment that’s a replica of our production environment and is a final sanity check before deploying.

CT: Does your team create functional specs for new features? If so what do they look like? What do you use to create them? What does the process look like for reviewing them?

AD: Our functional specs tend to be mockups that contain varying degrees of interactivity/clickability so we can better grasp how the feature will work before dealing with a technical spec/work order.

CT: What about technical specs? What do they look like? Who reviews them? Who has final say on the spec?

AD: It’s dependent on the feature, but generally as the person responsible for the mockups I’ll meet with our lead dev, go over the mockups and review the tech spec to ensure we’re both in agreement to move things forward.

CT: What tools are important to you in your engineering process?

AD:

  - GitHub (source control)
  - Trello (planning)
  - Codetree (project management/planning)
  - CircleCI (automated testing and deployment)
  - Sentry (error tracking)
  - Scout (server monitoring)
  - Slack (GitHub, CircleCI, and Sentry notifications, as well as team communication)
  - Heap Analytics (real-time user tracking)

CT: Do you triage issues? If so what does that process look like? How often do you do it?

AD: Yes we do. The process falls into 3 main categories: backlog, this week and doing. Backlog can be contributed to by anyone and doesn’t include a milestone. We have a Friday product meeting between BD, me and our lead dev where we set priorities for the coming week. The ‘this week’ category is populated in that meeting, with a goal of 2-3 issues per person. Each person is responsible for pulling the issues assigned to them under ‘this week’ into the ‘doing’ category as they’re working on each issue. Once complete, that issue is then moved into the ‘complete’ category.

CT: What part of your current process gives you the most trouble? What is working the best?

AD: Front-end testing is an ongoing source of trouble, sometimes more than others, along with understanding where we’re at in the progress of completing a feature (particularly larger features). On a smaller scale we’re typically fairly confident we’re working on the right thing but in a more macro sense there’s a constant worry that we’re not.

CT: What is something you do that you feel is different from what others are doing?

AD: Passionately avoiding the use of Jira. Early on I spent about a week looking for other software solutions and defining a process that was lightweight enough to not get in the way of getting things done while being detailed enough to provide confidence that we’re moving forward. For the most part it’s survived quite well with a few adjustments along the way as we’ve learned/adapted as a team.

CT: Where do you get feature ideas from?

AD: Sales call feedback, customer support requests and directly querying our customers.

CT: If you had to give your shipping process a name, what would it be?

AD: “Realistically business-focused”, meaning we’re logical, focusing on business needs and goals along with providing a base level of value to our customers as quickly as possible and iterating to provide more over time.

CT: When you are building new features how do you handle bugs that get found before the feature ships? After it ships?

AD: Both are handled largely the same way, by creating GitHub issues. A priority may be assigned directly by the person creating the ticket or after discussing it with someone else on the team. In rare cases we may discuss it as an entire team to ensure we’re setting expectations and timeboxing the problem efficiently.

CT: What does your branching and merging process look like?

AD: We use a slightly modified Gitflow process. Master mirrors our production environment, feature branches are created off of a develop branch and merged back into develop before being merged into master for final deployment.
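In plain git commands, the flow described above can be sketched like this, using a throwaway repo. The branch and file names here are hypothetical, and Allay's actual workflow also includes a PR, code review, and CI runs between the merge steps:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Example Dev"

echo "v1" > app.txt
git add app.txt
git commit -qm "initial release"
git branch -M master                        # master mirrors production

git branch develop                          # integration branch
git checkout -qb feature/reports develop    # feature branch cut from develop

echo "reports" > reports.txt
git add reports.txt
git commit -qm "add reports feature"

git checkout -q develop                     # PR review happens before this merge
git merge -q --no-ff -m "merge: reports feature" feature/reports

git checkout -q master                      # after staging sign-off
git merge -q --no-ff -m "release: reports feature" develop
```

The `--no-ff` merges keep an explicit merge commit for each feature and each release, which makes the history easy to audit against what shipped.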

CT: How is new code deployed?

AD: We have a pre-staging environment where all pushes to the develop branch are automatically deployed after running through automated testing on CircleCI. Once we’re happy with that (e.g. all tests have passed and we’ve verified that everything’s working) we’ll push to staging for one more check, often involving people outside the product team. After that a pull request will be created, tests run again, and if everything’s good the thumbs-up is given and we’ll deploy to production.
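A branch-gated pipeline like the one described might look roughly like this in CircleCI's classic `circle.yml` format. This is an illustrative sketch only: the test command and deploy script are placeholders, not Allay's actual configuration:

```yaml
# circle.yml (sketch): run the test suite on every push; if the push is to
# the develop branch and tests pass, auto-deploy to pre-staging.
test:
  override:
    - python manage.py test   # placeholder for the project's test command
deployment:
  pre_staging:
    branch: develop
    commands:
      - ./scripts/deploy.sh pre-staging   # hypothetical deploy script
```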

CT: How often do you ship new code to customers?

AD: As often as we can, multiple times a week, sometimes multiple times daily.

CT: Do you use feature flagging? Staging servers?

AD: Staging servers yes. Feature flagging not really, although we have built features for ourselves that we’ve then opened up to different types of users we support (e.g. super admin, brokers, employers, employees).

CT: Do you use Sprints? How long are they? How is that working out?

AD: Early on we were pretty strict about using sprints, but inevitably we’d simply be moving unfinished work from the last sprint into the new one, and it created more overhead than actual meaningful results.

CT: What's your tech stack?

AD: The Allay tech stack looks like this:

  - Angular 1.5 (one day we’ll go through the pain of upgrading to >2.0)
  - Python (Django)
  - Postgres database

Can we improve this? Do you want to be interviewed, or want us to interview someone else? Get in touch here!
