How We Ship: LemonStand
Our ultimate goal is to understand and share with you, at a nuts-and-bolts level, how great companies ship software to customers. To do that, we're doing email interviews with engineering, product, and project managers at great companies to understand how they ship. If you'd like to share how your team ships software, get in touch!
Today's interview is with Bruce Alderson, who is CTO at LemonStand. He shares how the LemonStand dev team ships software. LemonStand is a refreshingly customizable online retail platform for web developers, agencies, and fast-growing brands.
Codetree: Because shipping software is context-dependent, can you let us know your team size and any other context you think might be important for readers?
Bruce: We have 7-15 developers throughout the year, split into 2-3 teams, each focused on features, operations, developer support, or agency work. We ship a large-scale cloud retail platform, so we focus on scalability and security, in addition to usability and feature depth.
As our platform includes various development tools for building out online retail stores, we also provide a lot of developer support. Everyone at LemonStand contributes to support too, which keeps us all connected to how the platform is used and where it’s succeeding or failing to meet the needs of our customers.
CT: How do you decide what to build next? What does that process look like?
Bruce: We plan what to work on next differently, depending on what’s driving the project.
For features, we keep a number of lists of requests, dream features, and product directions that we review every quarter. We have a focused, prioritized list that includes customer stories and references, which drives our roadmap. We manage the master list carefully, as we believe it’s our path to success.
We also work closely with our customers and key strategic partners. We may choose to bump a roadmap project to the top of the list when the opportunity to work with someone great arises. There is something special about building out a feature with a partner, as it gets us feedback sooner, and results in a stronger feature.
CT: Could you sketch out what shipping a new feature looks like for you end-to-end?
Bruce: For features in our long term view, we love to collect stories and examples to help us think about how our platform should be. Both the early thinking and succinct summarization are vital in bringing the right features to our platform.
We also believe that ideas need to be demonstrated regularly. Once we decide to work on something, we pitch the idea and story to the team. Those early discussions are great for getting people thinking critically about a feature, which paves the way for seeing and thinking about design and implementation hurdles early.
Our basic process goes something like:
- Collect, detail, and summarize stories about features
- Filter and prioritize, and queue up for development
- Pitch the feature to the team
- Draft a design approach and possible mockups
- Pitch design approach to team, and iterate if needed
- GitHub issue(s) are opened with summaries of the story (and links)
- Detailed specifications are added where needed
- Specs are pitched and iterated on, which may result in rethinking design, asking questions about stories, or even re-considering the feature
- Features are implemented, with automated unit tests and manual checklists
- Features are merged to our master (staging) branch using PRs, the PRs are reviewed carefully (and gated by CI tests), and they make their way to our staging system
- Manual checklists are run, and features are reviewed against their stories
- We also demo the features in their full form and tweak as needed
- Sets of features are merged to Production and deployed
- We monitor and patch as needed
CT: Do you estimate how long features will take to build? If so what does that process look like? Do you track if features are behind or ahead of schedule? Do you track how long they took compared to how long you thought they would take?
Bruce: Yes, we use estimates as part of planning our projects. We use them to think about the cost and timing of projects, and then we use them later to measure the overall success. We have occasionally identified projects going past their estimate and used it as a trigger to reconsider our approach or timing. We also reflect on projects that have exceeded their budgets, and look for ways to improve our product planning and development processes. When a project goes past its estimate, we see it as an indicator that something in the development machine needs tuning, and we have found that tuning to be a valuable way to improve our pace of growth.
When we look at our roadmap project spreadsheet, we use rough estimates from the product team as a way to figure out how to get the best value for our current customers, prospects, and long term vision. We revisit our estimates with feedback from the developers once we have completed stories, product requirements, and a basic approach (usually before lower level specifications). And for our agency work, we will sometimes take this a bit further, estimating after outlining the lower level deliverables, just to reduce risk a bit more.
Estimates for work on features are first-order estimates, which are really just informed guesses. We generally use two methods for each estimate: one based on historic data about previous projects (e.g., this project is about half the size of that other project), and one based on splitting the work into parts and using standard sizes for each part. This isn't as complex as it sounds, and we keep a few spreadsheets around to take the guesswork out of it.
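The mechanics fit in a few lines. Here's a minimal sketch of those two methods in Python; the standard sizes, day counts, and relative-size numbers are invented for illustration, not LemonStand's actual figures:

```python
# Hypothetical sketch of the two first-order estimation methods described above.
# All sizes and numbers are invented for illustration.

# Method 1: scale a similar historic project
# (e.g. "this is about half the size of that other project").
def estimate_by_analogy(historic_actual_days: float, relative_size: float) -> float:
    return historic_actual_days * relative_size

# Method 2: split the work into parts and apply standard sizes.
STANDARD_SIZES = {"small": 0.5, "medium": 2.0, "large": 5.0}  # days per part

def estimate_by_parts(part_sizes: list) -> float:
    return sum(STANDARD_SIZES[size] for size in part_sizes)

if __name__ == "__main__":
    by_analogy = estimate_by_analogy(historic_actual_days=20, relative_size=0.5)
    by_parts = estimate_by_parts(["small", "medium", "medium", "large"])
    # When the two guesses disagree badly, the estimate deserves a second look.
    print(f"by analogy: {by_analogy:.1f} days, by parts: {by_parts:.1f} days")
```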
The great thing about estimating and tracking work is that while the methodology is imperfect, it pushes us to understand how we’re doing and where we need to improve. It influences both team and personal growth, as well as helping to set a pace for our work. Even just a bit of measurement and prediction is good for improving the cadence of releases.
CT: Do you have a particular process for paying down technical debt and/or making significant architectural changes?
Bruce: We like to be opportunistic when evolving our architecture. We have long-term plans that are part of our ongoing research and development, which we inject into projects based on need and timing. The most satisfying projects are the ones that both bring new or improved features to customers and improve the overall system. We believe that features inform architecture to some degree, as a system that doesn't make key features and behaviours easier is probably taking the wrong approach.
I'm not a huge fan of using the term technical debt, as debt in business implies clear terms, careful planning, and a structure around how you resolve the debt. Most of what we call technical debt is really just regret for shortcuts or missteps in approach; the rest is just the evolution of the product and improving features. Thinking of our mistakes as debt shifts the blame by legitimizing historic decisions and implying that we knew what we were doing when we made them. Instead, we try to use our mistakes to help us avoid weaker approaches in the future, and to improve our internal software development methodology.
We do have a process defined for paying down our regret that we call JEDI (which is to say: just effin' do it!). It's an emphatic approach to squashing bugs, improving UX, and making structural improvements that otherwise wouldn't get done. We allow everyone some time every week to fix what they deem to be important, and celebrate these wins with demos and a fun Slack scoring system (using the fab @PlusPlus Slack bot). There's huge satisfaction in fixing something that's bugging you, as it relieves those frustrations that build up around things that didn't quite get done right.
CT: How do you deal with delivery dates and schedules?
Bruce: We use both delivery dates and scope reduction to manage releases. We balance these factors against making sure a feature is actually useful (i.e., shipping when it's ready), as there is little value in most partial features. Occasionally we'll rework a feature into multiple phases mid-project if we find our understanding of its complexity has changed or other operational concerns have blown our timelines. We believe in a balance of flexibility and using time to drive our focus.
One of the important things to remember in software development is that the path is very difficult to predict early on. If a process is too rigid, it may not take advantage of timing or may not result in the best product possible. On the other hand, it’s important to manage costs carefully, especially time, as it’s easy for a project to lose focus.
We like to think about our product as a journey. There are waypoints. We set goals to get to those points. We push hard to do so, remembering that we have a long trip ahead of us. We tune the process to get us there faster, focusing on the end (and not the means). We find there are projects that need a slightly different process, as the path isn’t the same for every release. Realizing this allows us to tune our methodology as we go.
CT: How many people are in your product + dev team? What are their roles?
Bruce: We currently have 5 people dedicated to product design and development.
We try to keep our product development as flat as possible, so that we can pull the best from everyone's skills. That said, Danny (our CEO) and I organize and do a lot of the design, working with our developers, testers, and customers.
Danny collects and summarizes the stories from our customers and staff. He spends time digesting and dreaming about where we should go next, and getting all of that information into a form the rest of us can think clearly about.
I manage most of the architecture, high level designs, and mockups, though I love to share this work around the team to help developers learn and grow, as well as spreading the workload between our talented team members.
Ross (VP of Growth) plays a huge role in giving meaning and facts to back up our priorities, as without facts we’d be left guessing where our next steps should be. Everyone else contributes their product knowledge and design skills for specifications, feedback, and asking the hard questions (like: is this the right approach? are we missing something?).
CT: Do you organize people by feature, or by functional team, or some other way?
Bruce: We try to focus people based on projects, but we also like to vary what teams work on somewhat (as opportunity allows). Partly this is to round out individual and team growth, and partly it's based on when teams are ready for their next project.
While the teams and structure change over time, we do tend to end up with people who focus more on front-end or back-end work (as it fits their skills and preferences).
CT: How do you know if a feature you’ve built was valuable to customers? What metrics do you use if any?
Bruce: We use a few techniques to measure how features are working out for customers, including anonymous usage patterns, anonymous statistics, anonymous surveys, and various one-on-one surveys. Information from these methods helps us learn about the friction, fit, and performance of changes and tweaks to our platform.
Ross and his customer success team are pivotal in collecting references, feedback, and UX analysis. We find that having the success team own the measurement of user experience and happiness gives them the tools they need to provide amazing support, maintain our online documentation, and feed back into the design and development process. In short, it helps them understand where the product needs improvement, and it keeps the feature process in check.
CT: Do you have a dedicated testing role? How important is automated testing in your process? What tools do you use for testing?
Bruce: No. The weight of testing and quality is on the entire team. We use checklists, reviews, monitoring, and various automated tools. Different types of testing are commonly done by specific people: usability and acceptance testing is usually done by the PM and support lead, but it's certainly not limited to them. Everyone can report issues and stop a release if something fails to meet our standards.
Every pull request is gated by a CI pipeline of unit and code lint tests, in addition to careful code and UI review. We don’t have any automated UI tests (yet), so we use checklists that guide individuals through the key new features.
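The interview doesn't spell out the pipeline configuration, but a Travis-style gate along these lines might look roughly like the sketch below (the language and commands are placeholders, not LemonStand's actual setup):

```yaml
# Hypothetical .travis.yml-style gate for pull requests; stack and commands are placeholders.
language: php
php:
  - "7.2"
install:
  - composer install --no-interaction
script:
  - composer lint   # code lint checks
  - composer test   # unit tests
# A failing build blocks the merge; careful code and UI review happen alongside.
```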
CT: Does your team create functional specs for new features? If so what do they look like? What do you use to create them? What does the process look like for reviewing them?
Bruce: Yes, we try to summarize the driving story or request to kick off a sprint or longer project. We review these brief functional specs by pitching or kicking off the project, talking through the what and why (with some minor discussions on the how, if it’s important to the feature).
The feature summaries are part of the initial GitHub feature tickets, either as a checklist or a point-form summary of the stories.
Our functional spec reviews are collaborative. While Danny or I will have organized the information, everyone will usually have seen the various customer requests already, and been part of dreaming and talking about the feature before it's on the roadmap. We talk about the features as a group, and everyone is also invited to contribute in the ticket and via Slack.
CT: What about technical specs? What do they look like? Who reviews them? Who has final say on the spec?
Bruce: We write brief technical specifications for all projects, most sprints, and any bug fix that requires significant change. We all review the technical specifications pretty closely, but Danny and I occasionally nudge an approach closer to the underlying stories, mostly making sure all aspects of the stories are recognized in the specs.
We believe that it's important to think about how to solve problems before writing code, and that feedback on the process improves quality significantly. Specifications don't have to be long either; they just need to cover how to approach a problem, and specifically what needs to change. Even without a formal write-up, a responsible developer would be jotting this down before writing code anyway.
We write specifications in GitHub tickets, using Markdown, including diagrams (which can be drawn on paper and photographed, or made in something like Sketch or OmniGraffle). Specs include links to the user stories or customer tickets, and cover topics like the following (a sample skeleton follows the list):
- general approach
- schema changes
- architectural changes
- API and file changes
- key algorithms or protocols
- finalized UI mockups
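As a loose illustration of that outline, the skeleton of such a spec ticket might read like this (the feature and all of its details are hypothetical, not a real LemonStand spec):

```markdown
<!-- Hypothetical spec skeleton; the feature and every detail are invented -->
# Gift cards: technical spec

Stories: #123, #124 (links to the customer requests)

## General approach
Add gift cards as a product type and reuse the existing discount pipeline at checkout.

## Schema changes
New `gift_cards` table: code, balance, order_id, expires_at.

## Architectural changes
None expected.

## API and file changes
Admin endpoints for issuing and voiding cards; new storefront template variables.

## Key algorithms
Code generation: random codes from an unambiguous alphabet, checked for collisions.

## UI mockups
Links to the finalized Sketch mockups.
```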
We want to get agreement on these key design elements before spending time building them, as we can prevent early missteps and future regret by designing things as a team. Also, it’s helpful to have some history on why approaches were selected (as our future selves are pretty forgetful).
These internal specifications don't take long to write either (a few hours for a week-long sprint). Mockups are generally close to complete before specifications are written, though they may be updated as a result of something discovered when planning the detailed work.
CT: What’s the one tool your team couldn’t live without, and why?
Bruce: Slack. We're 100% a remote team, and Slack is the hub of our communication. I'm also pretty sure we couldn't live without Slack and GitHub's emoji support and ❤️.
CT: What tools are important to you in your engineering process?
Bruce: Our CI tools are key in our development setup. These include standard tools like Git/GitHub, Travis CI, and various build, lint, and other tools. These standard tools are great, as they're easy to set up, and they pay back in a huge way in terms of consistent quality checkpoints.
We actually use a number of low-tech tools for product management, including Google Sheets and Docs. We’ve looked into various PM tools, but haven’t taken the time to move all of our stories and priorities into one, mostly due to the time it would take to do so.
We track customer requests and support generally using Intercom, linking to Github tickets for things we’re taking action on. It’s a great tool for responsive support, though it does have some shortfalls for us (like lacking clean customer metadata and launchboard links).
For actual product design we use tools like Sketch. Occasionally we'll use InVision, though the extra time it takes to produce an interactive mockup is usually only worth it for things we show off externally, as internally we have a pretty good rapport between our developers and designers.
CT: Do you triage issues? If so what does that process look like? How often do you do it?
Bruce: Absolutely. We triage weekly and as-needed when important issues come up. We set priorities based on surface area, damage or risk, and timing. Our team is also very good at intuiting emergent cases, and we all pitch in when something unexpected and important needs to be solved.
We use a GitHub project board to triage issues. It's a great way to visualize priority and flow. We cheat a bit and use a few columns for emergent priorities (like needs discussion, 🔥, and so on). This helps guide the weekly triage, and helps us remember what state we're in.
CT: What part of your current process gives you the most trouble? What is working the best?
Bruce: We sometimes struggle to get team members to show off their work. That demo and pitch process is crucial for getting early feedback, for learning, and for recognizing great work. We find that when people are focused on a task, they have a harder time reflecting their work outward.
We have been very successful with two things: writing simple, clear, detailed specifications, and our automated testing, linting, and code review process, which has squashed a large number of the trivial bugs we were seeing 2-3 years ago. It feels great to look back and see how far our team and each individual has progressed.
CT: What is something you do that you feel is different from what others are doing?
Bruce: I feel like we may do a bit more upfront design than other software service teams. The big-A-Agile movement can often partition and spread design between sprints, which is great for exploring features, but less helpful when you already have a strong long-term vision.
CT: Where do you get feature ideas from?
Bruce: We actually get ideas from a number of sources:
- We all have previous (and current) experience running agencies, where we built online retail sites. We use our agency team as a way to prove out new designs and UIs, before putting them into production.
- Our customers are great; they work with us to define what their stores need, or what their fulfillment and marketing teams need. We get a lot of great feedback and requests, which Danny and Ross distill into ✨.
- We also do a bunch of research, not only about our competitors, but about what successful businesses are doing online. Retailers like Harry’s and MeUndies (for example) have proven out a number of vital ways of selling things online.
- We also love dreaming about how to build better online stores. It's just fun to think and talk about. Sometimes we'll just ask the question: what if…?
CT: If you had to give your shipping process a name what would it be (e.g. “Non-Dogmatic Agile”, “Waterfall”)?
Bruce: I’d call it Little-A-agile with a hint of non-Waterfall-waterfall in the design stages. There are benefits to iterating on stories and design before sprinting, and being flexible enough to see opportunities instead of getting muddled in the motions of process.
CT: When you are building new features how do you handle bugs that get found before the feature ships? After it ships?
Bruce: Any major issue found before shipping will delay a release. We have a large number of customers who rely on the platform for their revenue, and we will not ship knowingly broken features. After shipping, we take emergency response measures if something happens to go wrong, which pull the team outside of their normal working hours and approach (in other words: we treat issues after release very seriously). All other issues are triaged in our normal triage process, and will be remedied as their situation requires.
CT: What does your branching and merging process look like?
Bruce: We use a fairly typical dovetail approach where master is the staging branch, and production is the tested, installable, and running branch. Our goal is for master to always be sane and stable, and for production to always be perfect.
All features and fixes get their own branch off of master, and branches may be merged into each other (if they rely on each other). Feature branches include their originating issue # if they have one, and these branches are responsible for merging master changes in and testing before their final merge down. All branches are reviewed by someone else (or multiple team members), and must pass all unit tests.
We document releases or deploys in the master-to-production PRs, so we have an annotated history of the PRs contained in each release.
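Concretely, a feature's path under this model might look something like the following sketch (the branch name and issue number are hypothetical):

```bash
# Hypothetical walk-through of the branching model; issue #812 is invented.
git checkout master && git pull
git checkout -b 812-gift-cards   # feature branch, named after its originating issue

# ...commits, then a pull request against master for review and CI...

git merge master                 # the branch owner merges master changes in and
                                 # re-tests before the final merge

# Once reviewed and green, the PR lands in master (staging).
# A release is a reviewed PR from master into production.
```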
CT: How is new code deployed? E.g. PR branches are run through automated testing, manual UAT and then an automated process kicks them off to production.
Bruce: We have not fully automated deployments through the CI stack (yet), but our deploy is automated as a single command button per layer. All deployments occur only after reviews, and the automated and stage checklists pass. Also, deploys can be instantly rolled back if needed.
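The interview doesn't describe the deploy tooling itself, but the shape of a one-command deploy per layer with instant rollback can be sketched roughly like this (the ship.sh script and the layer names are invented):

```python
# Hypothetical sketch of a one-command deploy per layer with instant rollback.
# The ship.sh script and layer names are invented; this is not LemonStand's tooling.
import subprocess

releases = {"app": ["v41"], "storefront": ["v17"]}  # deployed releases per layer, newest last

def deploy(layer: str, release: str) -> None:
    """Ship a new release of one layer, keeping the previous one around for rollback."""
    subprocess.run(["./ship.sh", layer, release], check=True)
    releases[layer].append(release)

def rollback(layer: str) -> None:
    """Instantly point a layer back at its previous release."""
    releases[layer].pop()
    subprocess.run(["./ship.sh", layer, releases[layer][-1]], check=True)
```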
CT: How often do you ship new code to customers?
Bruce: We generally release changes weekly, but minor improvements may be more frequent, and major ones less frequent.
CT: Do you use feature flagging? Staging servers?
Bruce: We use both staging servers and a single beta opt-in flag.
Our feature flagging is super simple: we have a beta-feature flag that new features can get associated with, and a single toggle in our support dashboard for each customer store. This allows us to introduce customers to major new features by hand, and then roll the feature out more widely once tested. We can also disable beta features with a single migration for the next cycle of beta features.
We can also flag features for specific account plans (e.g., Enterprise Plan only), but recently we've flattened the majority of our features to be available on all of our service plans, to make it easier for prospective customers to choose between plans.
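To make the scheme concrete, here is a minimal sketch of what such a check might look like; the classes and field names are invented, not LemonStand's actual code:

```python
# Hypothetical sketch of the beta and plan flagging described above.
# The classes and field names are invented for illustration.
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class Store:
    plan: str                   # e.g. "starter", "professional"
    beta_opt_in: bool = False   # the single per-store toggle in the support dashboard

@dataclass
class Feature:
    beta: bool = False                # new features start life behind the beta flag
    plans: Optional[Set[str]] = None  # None means available on every plan

def is_enabled(feature: Feature, store: Store) -> bool:
    """A feature is visible when both its beta gate and plan gate pass."""
    if feature.beta and not store.beta_opt_in:
        return False
    if feature.plans is not None and store.plan not in feature.plans:
        return False
    return True

# e.g. is_enabled(Feature(beta=True), Store(plan="starter", beta_opt_in=True)) -> True
```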
CT: Do you use Sprints? How long are they? How is that working out?
Bruce: We aim to break projects into digestible chunks, with regular demos, and sane pacing. We try to keep project releases frequent, and aim for 1-2 week cycles if we can. For larger projects, we still use shorter internal releases, as larger work is much harder to review and QC.
I'm not a huge fan of calling every development cycle a sprint, as it suggests you're always running your team as fast as you can. I do think you can use sprints for special projects, but calling every chunk a sprint doesn't really mean much to us. For us, most of our projects are features that we craft carefully, keeping scope and timelines reasonable.
CT: Are you guys remote? If so, how’s that working out for you?
Bruce: Yes! We’re 100% remote now. We had tried various hybrid remote/downtown approaches in the past, but found that engaging remote workers really works best when everyone is remote. It also gets everyone a bunch of extra time every week, which is a great win.
Remote teams are a challenge, but it’s working well for us. We have made a habit of talking daily, by making up reasons to meet. We’ve found that we need some daily calls that are just watercooler chat, to keep us all connected with each other. We use other calls for triage, one-on-one coaching, feature demos and pitches, and planning our agency work.
Like the way LemonStand ships? They're looking for incredible people. Check out their careers page to learn more about LemonStand and open positions!
—
Can we improve this? Do you want to be interviewed, or want us to interview someone else? Get in touch here!