How We Ship: Azavea
Our ultimate goal is to understand and share with you, at a nuts and bolts level, how great companies ship software to customers. To do that we're doing email interviews with engineering, product, and project managers at great companies to understand how they ship. If you'd like to share how your team ships software, get in touch!
Today's interview is with Hector Castro, who's VP of Engineering at Azavea. He shares how the development team at Azavea ships software. Azavea is a B Corporation that creates civic geospatial software and data analytics for the web.
Codetree: Because shipping software is context-dependent, can you let us know your team size and any other context you think might be important for readers?
Hector Castro: We have about 30 people split into 5-6 teams shipping software in a handful of different contexts. Some teams focus solely on professional services projects. Others, only product. Some work on a mixture of the two. Lastly, one of the teams works almost exclusively on a large open source library.
CT: How do you decide what to build next? What does that process look like?
HC: For projects, we usually try to do most of the work upfront in terms of project scope and milestones. We iterate on the project in two week sprints until the contract is over and the client is happy.
For products, we are generally pretty aggressive at trying to build features based on customer demand. We design a feature first, build it, test it, then ship it, on the same two-week sprint cycle. We're still learning how to do long-term roadmap planning better.
CT: Could you sketch out what shipping a new feature looks like for you end-to-end?
- We observe user behavior or get feedback on functionality from a user
- Product manager and tech lead try to determine how we can meet the need
- That work turns into GitHub issues
- Issues become tasks for sprints across relevant teams
- UI/UX team works on mockups; produces markup
- Development team builds mockups into the application
- Development and Operations teams determine backend requirements for the feature and implement them
- We connect the frontend with the backend (usually web API)
- We test feature in staging
- We deploy the feature to production
- We try to observe how the user uses the feature; we ask for feedback
CT: Do you estimate how long features will take to build? If so what does that process look like? Do you track if features are behind or ahead of schedule? Do you track how long they took compared to how long you thought they would take?
HC: Big features get split into smaller tasks, which get put into two-week sprints. Each task is assigned a point value relative to its risk and level of effort; points roughly map to fractions of a day. The estimate for a feature is the sum of the points for all of its tasks.
Per sprint, we track points estimated vs. points completed, which gives a coarse view of the gap between planned and actual time to complete. We don't yet have a good way of measuring this across multiple sprints.
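The point accounting described above is simple enough to sketch. This is a hypothetical illustration, not Azavea's actual tooling: `Task`, `feature_estimate`, and `sprint_gap` are made-up names, and the point values are invented.

```python
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    points: int  # relative risk/effort; points roughly map to fractions of a day
    done: bool = False


def feature_estimate(tasks):
    """A feature's estimate is the sum of the points of all its tasks."""
    return sum(t.points for t in tasks)


def sprint_gap(tasks):
    """Points estimated vs. points completed for one sprint."""
    estimated = sum(t.points for t in tasks)
    completed = sum(t.points for t in tasks if t.done)
    return estimated, completed


tasks = [
    Task("mockups", 2, done=True),
    Task("frontend", 5, done=True),
    Task("backend API", 8),
]
print(feature_estimate(tasks))  # 15
estimated, completed = sprint_gap(tasks)
print(estimated - completed)    # 8 points carried over to the next sprint
```

The per-sprint gap gives the coarse planned-vs-actual signal Hector mentions; the missing piece, as he notes, is aggregating it across sprints.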
CT: Do you have a particular process for paying down technical debt and/or making significant architectural changes?
HC: We hold quarterly bug bashes, and each sprint has an active support rotation where you either fix small issues or handle support requests (support volume is low right now). For professional services work, we try to work specific changes we'd like to make into future contract amendments, while being transparent about how budget and timeline affected decisions to date.
For architectural changes, we use Architecture Decision Records (ADRs). An ADR forces us to capture the context, the decision, and its consequences. It gets reviewed like a normal PR; if accepted, the tasks to make the change happen can go into the next sprint.
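A record like the one described above has three required parts: context, decision, and consequences. Below is a minimal sketch of such a template rendered from Python; the section layout follows a common ADR convention and is an assumption, not Azavea's actual format, and the example content is invented.

```python
# Minimal ADR skeleton: context, decision, consequences.
ADR_TEMPLATE = """\
# ADR {number}: {title}

## Status
Proposed

## Context
{context}

## Decision
{decision}

## Consequences
{consequences}
"""


def new_adr(number, title, context, decision, consequences):
    """Render a new ADR document ready to be committed and reviewed as a PR."""
    return ADR_TEMPLATE.format(
        number=number,
        title=title,
        context=context,
        decision=decision,
        consequences=consequences,
    )


adr = new_adr(
    1,
    "Adopt PostGIS for spatial queries",
    "The application needs efficient geospatial filtering.",
    "Use PostGIS on top of PostgreSQL.",
    "The team must learn spatial SQL; migrations get more complex.",
)
print(adr)
```

Because the record is just a text file, it can live in the repository and go through the same PR review flow as code, which is exactly the workflow described.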
CT: How do you deal with delivery dates and schedules?
HC: We generally lean more toward reducing scope or complexity vs. saying something will ship when it is ready.
CT: How many people are in your product + dev team? What are their roles?
HC: 8-10, depending on which product or project. Roles are:
- UI/UX designers
- Product/Project managers
CT: Do you organize people by feature (e.g. cross functional feature teams made up of devs, PMs, designers etc.), or by functional team (e.g. front-end team, backend team, etc), or some other way?
HC: The former, except for UI/UX and Operations; those two serve all of the PM + developer teams.
CT: How do you know if a feature you’ve built was valuable to customers? What metrics do you use if any?
HC: Usage data from UI analytics, backend application instrumentation, and ad-hoc queries against application logs. We also communicate directly with users through customer support channels. Additionally, for professional services, we often rely on the client to relay qualitative perspective from their users/partners/clients.
CT: Do you have a dedicated testing role? How important is automated testing in your process? What tools do you use for testing?
HC: No. Usually the burden of testing falls on PMs, developers, and operations.
Everything goes through a CI pipeline. Automated testing covers all aspects of the application, but we don't really do UI-level integration tests (the cost of test maintenance is too high). Load testing usually happens when problems arise, and there is little to no regression testing.
CT: Does your team create functional specs for new features? If so what do they look like? What do you use to create them? What does the process look like for reviewing them?
HC: Not really for products. For client projects, sometimes. Usually a collaborative process between client and tech lead + PM.
CT: What about technical specs? What do they look like? Who reviews them? Who has final say on the spec?
HC: Not really.
CT: What tools are important to you in your engineering process?
CT: Do you triage issues? If so what does that process look like? How often do you do it?
HC: Ideally, at the end of every sprint. Roughly half a day is set aside to plan work for the next sprint.
CT: What part of your current process gives you the most trouble? What is working the best?
HC: Most trouble: when the issue backlog grows, keeping older items in context when making decisions now (here, "old" can be as little as 1-3 weeks).
Working best: associating points with tasks and roughly completing those tasks within the estimated duration.
CT: What is something you do that you feel is different from what others are doing?
HC: As an organization, we work on multiple products and multiple projects. For a ~60 person company, that is a fair amount of context switching, along with very different decision-making processes.
CT: Where do you get feature ideas from?
HC: Ideally, users and deciphering user feedback.
CT: If you had to give your shipping process a name, what would it be?
HC: “Something that works pretty well for projects, and products we are still trying to get better at.”
CT: When you are building new features how do you handle bugs that get found before the feature ships? After it ships?
HC: Before the feature ships: if the bug is small, fix it right away; otherwise, create an issue and prioritize it. After it ships: create an issue, prioritize it, and determine whether it warrants a hotfix or can be part of the next release.
CT: What does your branching and merging process look like?
HC: Feature branches, which lead to PRs, which get reviewed by a colleague and then merged.
CT: How is new code deployed?
HC: PR branches are tested in CI; merges are tested and deployed to staging. Once in staging, features are tested before the task can be marked as verified. After sprint review, the issue for the task is closed.
Deployment of a feature is usually rolled up into releases at different cadences: some products ship multiple times a week, others once a sprint, others once a month.
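The path a task takes here is effectively a small state machine: PR under review, merged, deployed to staging, verified, then closed after sprint review. The sketch below models that flow; the state names and transitions are an illustrative assumption, not a description of Azavea's actual tooling.

```python
# Each state maps to the next stage in the pipeline described above.
PIPELINE = {
    "in review":  "merged",      # PR branch tested in CI, reviewed, merged
    "merged":     "on staging",  # merge commit tested and deployed to staging
    "on staging": "verified",    # feature tested in staging, task verified
    "verified":   "closed",      # issue closed after sprint review
}


def advance(state):
    """Move a task to its next pipeline state; terminal states stay put."""
    return PIPELINE.get(state, state)


state = "in review"
history = [state]
while state in PIPELINE:
    state = advance(state)
    history.append(state)
print(" -> ".join(history))
# in review -> merged -> on staging -> verified -> closed
```

Release cadence then sits on top of this: however fast or slow a product ships, each task walks the same verification path before it counts as done.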
CT: Do you use feature flagging? Staging servers?
CT: Do you use Sprints? How long are they? How is that working out?
HC: Two weeks. Presumably well; it's unclear what an effective alternative would be.
Can we improve this? Do you want to be interviewed, or want us to interview someone else? Get in touch here!