How We Ship: Codetree

Our ultimate goal is to understand and share with you, at a nuts-and-bolts level, how great companies ship software to customers. To do that, we're conducting email interviews with engineering, product, and project managers at great companies to understand how they ship. If you'd like to share how your team ships software, get in touch!

Today's interview is with Kareem Mayan, a Partner at Codetree. He outlines how the Codetree development team ships software. Codetree is a project management tool for software teams that use GitHub issues.


Codetree: Because shipping software is context dependent, can you let us know your team size and any other context you think might be important for readers?

Kareem Mayan: Our product/dev team consists of four people. Codetree is a fairly straightforward CRUD app. The largest bit of complexity involves the two-way sync with GitHub issues.

CT: How do you decide what to build next? What does that process look like?

KM: We plan work in monthly cycles at the beginning of the month. We'll first identify a business goal to drive the business forward (like "Increase signups" or "Increase Trial to Paid Conversion" or "Lower Churn").

We decide what to work on in two ways. The main way is to start with that business goal - recently it's been increasing new signups, increasing the trial-to-paid rate, or decreasing churn.

Once we agree on the goal, we pick issues from our list of Triaged issues that we think will help us achieve the business goal. This list is generated from two main sources:

  1. Conversations with customers via Intercom, Helpscout, our public GitHub Feedback repo, and phone calls. We try and have as many conversations as possible with customers throughout their time with us - before they buy, after they sign up for a trial (“why did you sign up?”), when they purchase (“why did you purchase?”), during their subscription with us (“how can we improve things?”), and after they cancel (“why did you cancel? How are you going to solve the project management problem going forward?”).
  2. Our product strategy. This encompasses our point of view on the problem, our Unique Selling Proposition (what makes us the #1 choice for a large set of customers vs the competition), and where we think the market is going.

We pick issues from our “product pillars” - themes that mostly fall out of #2 above that we think are worth investing in.

Where possible, we'll do some quick back-of-the-envelope math to figure out whether it's worth building.

The second way we pick what to work on is at the discretion of the dev on support. We try really hard not to be interrupt-driven, but we rotate support duties every week. So on occasion when a support request comes in, the dev who’s doing support (and who is thus interrupt-driven for the week) may just fix the issue to get a quick win for a customer. These are super satisfying!

CT: Could you sketch out what shipping a new feature looks like for you end-to-end?

KM: Here’s the process we use to take an idea through to launch:

  1. Pick someone to be the owner / product manager
  2. Owner creates a good spec
  3. Ticket the feature
  4. Divvy up the work
  5. Write (brief) technical spec
  6. Break feature down into work items
  7. Create a feature branch and write code
  8. Submit PR and do User Acceptance Testing (UAT)
  9. Test, review, and file bugs
  10. Fix bugs
  11. Merge to master and deploy
  12. Create metrics to show usage
  13. Bugs in production?

1. Pick someone to be the owner / product manager

We are all dev+product manager hybrids and have been both devs and PMs in the past for other companies. The owner owns the spec and is responsible for dragging the feature across the line.

2. Owner creates a good spec

Here's an example of the actual spec we used to build Epics.

The PM will spec the feature, usually with visuals. It depends on the feature, but the visuals could be anything from Balsamiq wireframes, to marked-up screenshots, to HTML and Javascript. Use good judgement - the visuals and the spec are a document to communicate what's required to the developers. They're not an end in themselves, but an artifact of shipping good software. In other words, don't waste a ton of time on the spec - do enough to communicate what needs to be built to the dev. In rare cases this could even be sketching out an approach on a whiteboard and having a conversation with the dev.

Specs should be first written in Google Docs. We use Google Docs because:

  1. It’s nice to have everything in one place
  2. It has great review tools (it’s easy to comment, suggest and merge / reject changes, and keep a running list of open feedback)
  3. The editing surface works super well - you can mix text and graphics and bullets without wanting to pull your hair out.

Specs for large features should be circulated to at least one other PM for feedback. The goal is to get maximum brain power on the feature.

Here's a template for a spec we use for large features:

Context
Provide business context on the feature. Answer things like what customer problem does it solve, what is the solution, why are we building it, etc. Anything to help the reader understand where the feature fits into the bigger picture.

Goals
Make explicit what we're trying to accomplish by building the feature. Could be things like "Feature parity" or "Increase visitors to the site by x%" or "Provide key workflow used by most customers". The numbers aren't key at this point, but filling out this section is a thought exercise as to what we want the end state to look like.

Scenarios
What customer scenarios does this feature pave?

Description
Describe the feature. Use screenshots, bulleted lists, etc. to describe how the feature should work. Add notes for the dev to consider during technical implementation if you can (e.g. "When doing the typeahead implementation for this, you may want to look at the typeahead implementation for [feature xyz] to see if you can re-use some code."). Be sure to highlight in yellow any open loops (decisions that need to be made).

Open Loops
This should be the place you put a list of decisions that need to be made, along with pros and cons. You'll have to work this list - getting feedback, having discussions, etc. - to get it to zero before you ticket the spec.

Closed Loops
This is the place you put Open Loops once decisions have been made. Be sure to document the decision so we can refer back to it.

3. Ticket the feature

The PM creates a "head issue" in Codetree (this is the "tracking" issue that shows the status of all the other issues) and assigns the "Epic" label if it's a large enough feature.

The dev, or more likely the PM, then creates "feature issues". Each feature issue should ideally be a slice of deliverable customer value. The head issue is made dependent on each feature issue using Codetree’s “needed-by” syntax. This lets us track the issues that need to be complete before the entire feature can ship.
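To make that concrete, here's a rough, purely hypothetical sketch; the issue numbers are invented and the exact wording of the "needed-by" references may differ from Codetree's real syntax:

```
#200 Epics (head issue, labeled "Epic")

Feature issues that must ship before #200 can close:
- #201 Create an epic from the issue list   (needed-by #200)
- #202 Show epic progress in the sidebar    (needed-by #200)
- #203 Filter the issue list by epic        (needed-by #200)
```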

4. Divvy up the work

If multiple people are working on the feature, we divvy up who’s working on which piece.

5. Write (brief) technical spec

We don’t always do this, but we will if we think the changes are tricky or we aren’t 100% sure of the approach. The purpose of a technical spec is to communicate how the dev is going to build the feature so we can get maximum IQ on the problem. A tech spec doesn’t need to be long, but it does need to lay out the technical approach you want to take for a brief async discussion.

This should be a lightweight doc. We don't care about the format, we care about the content. It could be a Slack conversation for a simple feature. For example, we're in the middle of building Epics, which is fairly complicated, but the tech spec is 2 pages (see the actual tech spec here). We don’t stand on ceremony and like to KISS.

6. Break feature down into work items

The dev should break down each feature into "work items" in the form of a checklist inside each feature’s issue.
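For example, a feature issue's work items might just be a standard GitHub task list; the items below are invented for illustration:

```
### Work items
- [ ] Add epic_id column and migration
- [ ] API endpoint to assign issues to an epic
- [ ] Epic progress component in the sidebar
- [ ] Update the issue filter to support epics
```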

If the dev sees a quicker path to finishing the feature, he should discuss the trade-offs with his PM. E.g. "Doing it the way you've specced will take a week, but if you do it this way instead it'll take a day". We rely heavily on good judgement to make these kinds of decisions.

7. Create a feature branch and write code

We use short-lived feature branches branched off of (and eventually merged back into) master. We’re moving towards using feature flags to hide code in production until it’s ready to launch to customers.

8. Submit PR and do User Acceptance Testing (UAT)

When the feature is ready for UAT, the dev should create a PR and assign the PM as a reviewer on the PR. He should also email / Slack the reviewer and confirm the reviewer has responded and is aware that the ball is in their court -- we don’t rely on the issue tracker to hand off work.

9. Test, review, and file bugs

When the PM has tested it and done any code review, he'll create "feature bugs" - these are bugs found in UAT.

  • Each bug should be a new issue that the feature issue is dependent on
  • These bugs are assigned to the dev for fixing. Note in some cases the PM and Dev may choose to punt on a non-showstopper bug so it doesn’t block the feature launch. In this case it’ll be triaged or assigned to a milestone to fix after the feature launches.
  • The reviewer should email/Slack the dev to let him know that he has new things to fix for the feature to be completed. He shouldn’t rely on the issue tracker to let the dev know.

10. Fix bugs

The dev should fix bugs.

  • Fixes should be made in the same feature branch
  • The dev should test his fixes before marking them as resolved for the PM to test
  • He should email / Slack the person who found the bug once it's resolved so they can also test the fix

11. Merge to master and deploy

Once a feature is ready and tested and all tests are passing in CircleCI, the PM will:

  1. Merge PR and push to master
  2. Make sure tests pass
  3. Test on master
  4. Deploy to staging immediately
  5. Smoke test staging
  6. Deploy to production
  7. Smoke test production

12. Create metrics to show usage

We use Mixpanel’s Autotrack feature so we can track any client-side events retroactively. This lessens the pressure to metric up front. If a feature has a direct tie to driving the business, we’ll metric it in Mixpanel. If it doesn’t, we’ll rely on Autotrack or fire server-side events and create a to-do to check usage a few weeks later.
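As a rough sketch of the server-side option, assuming the mixpanel-ruby gem (the event name, properties, and user object below are made up for illustration):

```ruby
# Gemfile: gem 'mixpanel-ruby'
require 'mixpanel-ruby'

tracker = Mixpanel::Tracker.new(ENV['MIXPANEL_TOKEN'])

# Fire a server-side event when a user creates their first epic.
# 'Epic Created' and the properties are hypothetical names.
tracker.track(user.id.to_s, 'Epic Created', {
  'plan'       => user.plan,
  'epic_count' => user.epics.count
})
```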

13. Bugs in production?

In general, we'd prefer to roll forward to a fixed version vs. roll backwards to a known-good version. Rolling backwards means there's broken code on master that prevents new deploys.

CT: Do you estimate how long features will take to build? If so what does that process look like? Do you track if features are behind or ahead of schedule? Do you track how long they took compared to how long you thought they would take?

KM: We don’t do a great job of estimating or scheduling. If a feature is a v1 we look for the quickest way to get it out there to get customer feedback.

CT: Do you have a particular process for paying down technical debt and/or making significant architectural changes?

KM: Debt or architectural changes get addressed in one of two ways:

  1. It gets prioritized and worked on like all other features
  2. It gets addressed when a dev is heads-down in a feature and there are quick wins or necessary changes that need to be made

CT: How do you deal with delivery dates and schedules?

KM: We generally ship when it’s ready, though we’re pretty aggressive about cutting scope to get something minimal but useful out there. We believe no feature survives first contact with the customer and try to be diligent about not overbuilding.

CT: How many people are in your product + dev team? What are their roles?

KM: We’re lucky to have three devs who can also wear the PM hat. We have one dedicated dev. We don’t have a dedicated designer so we repurpose existing building blocks or work with a contractor if necessary.

CT: Do you organize people by feature (e.g. cross functional feature teams made up of devs, PMs, designers etc.), or by functional team (e.g. front-end team, backend team, etc), or some other way?

KM: We’ve used both models in the past and evidence suggests better features come from cross-functional feature teams. Ownership is higher because the team is empowered to make the fixes necessary to solve the customer’s problem.

CT: How do you know if a feature you’ve built was valuable to customers? What metrics do you use if any?

KM: It depends on the feature but we’ll often look at engagement with the feature. A good starting place is “What percentage of people have used the feature within the last N days”.
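A back-of-the-envelope way to compute that, sketched here as a Rails console snippet with hypothetical User and Event models and column names:

```ruby
# What percentage of recently active users used the "epics" feature
# at least once in the last 30 days? (Models, columns, and event names are invented.)
window = 30.days.ago..Time.current

active_users  = User.where(last_seen_at: window).count
feature_users = Event.where(name: 'epic_created', created_at: window)
                     .distinct.count(:user_id)

pct = active_users.zero? ? 0 : (100.0 * feature_users / active_users).round(1)
puts "#{pct}% of active users used epics in the last 30 days"
```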

CT: Do you have a dedicated testing role? How important is automated testing in your process? What tools do you use for testing?

KM: We don’t have a dedicated testing role. The dev tests the initial version of the feature before filing a PR. The PM tests the feature and files bugs once the PR has been filed.

Automated testing is huge for preventing regressions. We use minitest for API-level tests. We’re still looking for a great SPA framework for integration / front end tests.
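For flavor, here is a minimal sketch of what an API-level minitest case might look like, assuming rack-test is available; the endpoint and the CodetreeApp constant are stand-ins, not their actual code:

```ruby
require 'minitest/autorun'
require 'rack/test'

class IssuesApiTest < Minitest::Test
  include Rack::Test::Methods

  # The Rack app under test; CodetreeApp is a made-up stand-in name.
  def app
    CodetreeApp
  end

  def test_listing_issues_returns_json_ok
    get '/api/issues'
    assert_equal 200, last_response.status
    assert_includes last_response.content_type, 'application/json'
  end
end
```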

CT: Does your team create functional specs for new features? If so what do they look like? What do you use to create them? What does the process look like for reviewing them?

KM: Absolutely. Garbage in, garbage out. See above.

CT: What about technical specs? What do they look like? Who reviews them? Who has final say on the spec?

KM: We do create them. They’re as long or as short as they need to be. The point is to sketch out the technical approach. They get reviewed by at least one other dev, more if the feature is complex. We want maximum brainpower on implementation approach before coding starts when the cost of change is low. Final say on the spec goes to the dev who’s doing the implementation.

CT: What tools are important to you in your engineering process?

KM: GitHub for source control. Codetree for project management and product planning. CircleCI for automated testing. We are experimenting with Twist as a way to have more thoughtful discussions than Slack, and it's working well. Honeybadger for exception notifications. Helpscout, Intercom, and our GitHub Feedback repo for funnelling customer requests into our process. Mixpanel for analytics. Google Docs for writing specs, and Balsamiq for wireframing.

CT: Do you triage issues? If so what does that process look like? How often do you do it?

KM: We do. New issues get added to Codetree with no Milestone. These are Untriaged issues. We Triage unmilestoned issues regularly. During Triage, an issue will be marked as:

  1. Closed, won’t fix
  2. Put into Backlog Milestone (we’ll pull from here during our planning phase)
  3. Put into current milestone (this really only happens for showstopper bugs)

We’ll also tag issues with things like “customer-request”, or “enhancement”, or “bug”, along with a priority (high / medium / low).

We triage on an as-needed basis.

CT: What part of your current process gives you the most trouble? What is working the best?

KM: Key problems:

  • Integration / front-end testing is a real pain. We live in fear of regressions, especially since our app is front-end heavy. Selenium has a lot of overhead for writing tests and is not great for JavaScript-heavy testing.
  • Understanding the status of in-flight work is problematic. E.g. how much work is left to do on a given set of features, and when it will launch.
  • Are we building the right things? We have a bunch of debt that we need to pay down and a bunch of new features to build to pave key customer scenarios. How far can we push it with the debt before it bites us?
  • It’s hard to measure whether a feature is really impacting retention or not.

What’s working well:

  • Our feature specs are tight
  • We have a tight loop with customers and think we have a good understanding of their problems
  • Our shipping process works well. This isn’t our first rodeo and we know how to reliably ship good quality software that solves customer problems.
  • We don’t overcomplicate our systems. We skew towards simple, low-maintenance systems, e.g. Boring Tech.

CT: What is something you do that you feel is different from what others are doing?

KM: We track customer requests so we can:

  1. Understand how many people want a feature
  2. Reach out to them when we start building it to really understand the key customer problem / scenario so we can solve the right problem.
  3. Close the loop with them by letting them know we solved their problem after we deploy a fix.

We just add a link to the Intercom conversation or Helpscout ticket to the GitHub issue. When the issue gets closed the Intercom integration re-opens all the linked up conversations so we can notify those customers, which is super nice. We’ll also obviously go through the Helpscout conversations and email each person.

Customers are often surprised and delighted that you’re a) paying attention, b) fixing their issues, and c) remembering and caring enough to email them, even if it’s been months since they first reported the issue.

CT: Where do you get feature ideas from?

KM: We have all been devs, and product / project / eng managers at various points throughout our careers, so we have a point of view on what the app should do. That said, we love talking to customers and solving their problems, so we gather feedback from customer conversations via Intercom, Helpscout, phone calls, and our public GitHub feedback repo.

CT: If you had to give your shipping process a name, what would it be?

KM: Pragmatic and focused on shipping.

CT: When you are building new features how do you handle bugs that get found before the feature ships? After it ships?

KM: We pretty much have a 1:1 mapping between issues and PRs. So when we find bugs in UAT, we add them as new issues in Codetree. We use the Codetree dependency syntax to keep track of whether the issue’s bugs have been fixed.

The fixes go into the same branch / PR as the initial issue, so when the bugs are marked as resolved the PM does UAT on the issue, making sure that both the bugs are fixed and the feature works as specced. If a bug is low priority we may punt on it instead of fixing it, for the sake of getting the feature out the door.

Bugs that we find after a feature ships are generally fixed right away if they’re showstoppers or impact many users. If they’re not, they go through our triage process.

CT: What does your branching and merging process look like?

KM: We branch off master into short-lived feature branches where coding happens. A PR gets filed when it’s ready to merge back into master. Testing happens on the feature branch, and the feature branch gets merged back into master when it’s ready to go to staging / prod. We’ve started using super simple feature flags (e.g. if user.admin then show new feature) to deploy partially-built features to production. It lets us deploy smaller changesets to prod while lowering the chances of regressions. It’s worked out really well so far.
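In Ruby terms, a flag that simple can just be a one-line helper. This is a sketch of the idea, not Codetree's actual code, and the helper and partial names are invented:

```ruby
# Deliberately simple feature flag: only admins see the half-built feature.
def show_epics?(user)
  user.admin?
end

# In a view:
#   <% if show_epics?(current_user) %>
#     ... new Epics UI ...
#   <% end %>
```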

CT: How is new code deployed?

KM: Pretty basic at this point - we use Heroku and just deploy master from the command line with git push heroku-production master.

CT: How often do you ship new code to customers?

KM: As often as possible, usually several times a week.

CT: Do you use feature flagging? Staging servers?

KM: Just started using feature flags - see above. We use a staging server to smoke test new features / bug fixes in a prod-like environment.

CT: Do you use Sprints? How long are they? How is that working out?

KM: We don’t really use “sprints” per se. There’s a lot of overhead in determining what work will fit into an arbitrary time box, and we’re not willing to pay that cost.

But we do plan work in a monthly cycle. We put our key “must do” issues into a monthly Milestone, and then add some smaller work / high impact issues too.


Can we improve this? Do you want to be interviewed, or want us to interview someone else? Get in touch here!
