How GoSquared Ships Software
Our ultimate goal is to understand and share with you, at a nuts-and-bolts level, how great companies ship software to customers. To do that we're doing email interviews with engineering, product, and project managers at great companies to understand how they ship. If you'd like to share how your team ships software, get in touch!
Today's interview is with James Gill, who's CEO at GoSquared. He shares how the development team at GoSquared ships software. GoSquared builds software for small and medium-sized online businesses to help them grow.
Codetree: Because shipping software is context dependent, can you let us know your team size and any other context you think might be important for readers?
James Gill: At GoSquared, we’re a team of 7 people right now. We build software for small and medium-sized online businesses that helps them grow. Our platform brings together a bunch of tools that are usually in different places: CRM, live chat, and analytics. We’re in a highly competitive industry, and the most important thing for us is to move fast and use our size as a weapon against the bigger, slower players in our space.
CT: How do you decide what to build next? What does that process look like?
JG: Deciding what to build next is always a difficult challenge – every single product company I’ve ever spoken with, no matter the business, has struggled with this! For us, the process is always evolving, but right now here’s how things work: we continuously take feedback from our customers. For every request, no matter how small, we dig deeper (asking why as many times as required) and note it down along with who asked for it.
Customer requests are one input for our roadmap. The other key input is our own vision and direction – as the famous saying goes, customers don’t necessarily know what they want until you show them.
We come together as a team every quarter to surface all the feedback we’ve received and outline our direction for the upcoming quarter. Everyone on the team takes turns to present their thoughts on the past quarter and what we should tackle next. We then define a series of objectives / goals for the quarter, which trickles down into two-week sprints where we tackle the key product features and improvements. We don’t care who comes up with an idea, we care about whether it’s a good idea. Everything we do within the team is meant to encourage a mindset of “the best idea wins”, rather than “the loudest voice wins”.
CT: Could you sketch out what shipping a new feature looks like for you end-to-end?
JG: At the size we’re at we have a relatively minimal amount of process around shipping new features. We’re always trying to find the right balance of too much vs too little process in order to move as quickly as possible and build the best product we can.
Every new feature starts with a spec – we wrote about this a while ago here.
Within the spec, we have a section that we stole from Amazon: we write the blog post first. We do this so that everyone on the team can focus on the outcome of the feature, rather than just the technical implementation details. It becomes a lot easier to see when a feature isn’t complete if everyone knows what they’re working towards.
The spec is shared with the whole team – we use Dropbox Paper extensively for this – and people can drop feedback and questions in there that the product owner can then gather and use to iterate the spec before it’s then added to the backlog. At the next sprint meeting we’ll then decide which features will be tackled in the next sprint and break down the work and assign it to the relevant team members.
We ask ourselves “is what we have now better than what’s currently in front of customers?” If the answer is yes, then we ship.
CT: Do you estimate how long features will take to build? If so what does that process look like? Do you track if features are behind or ahead of schedule? Do you track how long they took compared to how long you thought they would take?
JG: We aren’t too strict on this, but we’re getting better at it – we estimate on a scale of 1, 2, 4, 8, and 16: 1 = a small task, likely achievable within an hour; 2 = a few hours; 4 = roughly half a day; 8 = a full day; 16 = more than a day, and likely needs to be broken down into smaller tasks. We review after every sprint to understand what we managed to complete, what fell behind, and where we slipped up. Again, we ask “why?” a lot to get to the root cause of issues here.
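For illustration, the scale might look like this in code – a hypothetical sketch of ours, not GoSquared’s actual tooling:

```typescript
// A hypothetical sketch of the estimation scale above (not GoSquared's tooling).
// Points double at each step; a 16 is a signal to split the task before it
// enters the sprint.

type EstimatePoints = 1 | 2 | 4 | 8 | 16;

const SCALE: Record<EstimatePoints, string> = {
  1: "small task, likely achievable within an hour",
  2: "a few hours",
  4: "roughly half a day",
  8: "a full day",
  16: "more than a day; break it down into smaller tasks",
};

// Anything estimated at 16 should be decomposed before being scheduled.
function needsBreakdown(points: EstimatePoints): boolean {
  return points >= 16;
}
```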
CT: Do you have a particular process for paying down technical debt and/or making significant architectural changes?
JG: Architectural changes are always driven by customer need – e.g. if we want to introduce improved or drastically new functionality, we’ll do everything we can to understand whether we can do it with our existing infrastructure or whether we need to rethink areas.
Technical debt is a slightly different topic, and it’s an area where we have a little more process. We often found technical debt would accrue on projects that lacked clear ownership, so a key thing we did was ensure every project had an owner who was responsible for its care and attention – owners could call on others on the team to assist at any time if they felt they needed it.
Another thing we’ve experimented with in the past is “bug fix Fridays” where we would take a Friday out every other week to address any issues and bugs that were less critical but still important.
CT: How do you deal with delivery dates and schedules?
JG: We have a pretty aggressive focus on shipping frequently – we know that most decisions are reversible and can be made faster by shipping and learning, rather than debating and waiting.
We ship many updates to production every single day, and anyone on the team can push to production. With larger features and product launches, we tend to set a date when we’re nearing completion, and then reduce scope to ensure we launch on time.
We know that when we fix issues quickly, we can turn a complaining customer into a loyal and happy one.
CT: How many people are in your product + dev team? What are their roles?
JG: Three full-time developers (all of whom are extremely versatile and can operate on any part of the stack), one hybrid PM / designer (me), and then everyone else on the team is very product-minded too so will assist with QA and testing. One of the benefits of building a product that we use ourselves is that we all have a deep understanding of the problems we’re solving and which areas of the product are great vs which need work.
CT: Do you organize people by feature (e.g. cross functional feature teams made up of devs, PMs, designers etc.), or by functional team (e.g. front-end team, backend team, etc), or some other way?
JG: Currently cross-functional but this is less relevant at our size right now.
CT: How do you know if a feature you’ve built was valuable to customers? What metrics do you use if any?
JG: With every feature, during the spec phase we outline what we’re trying to impact by building the feature. For example, it might be a product improvement intended to increase activation of a specific feature, in which case we’ll define the success metrics around that. Or it might be to increase revenue, so we’ll define a metric that is related to revenue in some way.
We used to be very bad at this, and we’ve been doing a lot of work internally to avoid a “ship it and forget it” mentality. Sprint retrospectives are also helpful for us to evaluate where we see early signs of success or failure in what we’ve shipped.
We also obsessively measure NPS (Net Promoter Score) of our customers, and review key product metrics by signup cohort so we can see more clearly when we’re making improvements.
CT: Do you have a dedicated testing role? How important is automated testing in your process? What tools do you use for testing?
JG: We don’t have a dedicated role but we run a series of automated tests whenever we deploy to production.
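As a rough illustration (our sketch, not GoSquared’s actual suite), a deploy-gating test in Mocha style – one of the tools James mentions below – might look like this; the module under test is hypothetical:

```typescript
// A sketch of a deploy-gating test in Mocha style (GoSquared mentions Mocha and
// Jest below). The parseTrackingEvent module is hypothetical, for illustration.
import { strict as assert } from "assert";
import { parseTrackingEvent } from "../src/tracking"; // hypothetical module

describe("tracking event parser", () => {
  it("extracts the event name and properties", () => {
    const event = parseTrackingEvent('{"name":"Signed Up","props":{"plan":"pro"}}');
    assert.equal(event.name, "Signed Up");
    assert.equal(event.props.plan, "pro");
  });

  it("rejects malformed payloads instead of crashing the pipeline", () => {
    assert.throws(() => parseTrackingEvent("not json"));
  });
});
```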
CT: What about technical specs? What do they look like? Who reviews them? Who has final say on the spec?
JG: Final say always ultimately falls back to the owner of the feature / improvement.
CT: What tools are important to you in your engineering process?
JG: We’ve built a lot of our workflow around GitHub – we use it for all code versioning, and we use GitHub Issues for all bug tracking. We use Codetree for managing our Issues, and we use Clubhouse for product planning and sprints.
We use Slack extensively – we have an #ops channel that everything pipes into, so the whole dev team can see what’s going out and whether any builds are failing. We use Jenkins for our CI, and we use a version of Deliver for deployments. We use Mocha and Jest for testing, and Fastlane for our native mobile apps.
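To make the #ops channel concrete, here’s a minimal sketch of piping deploy events into Slack via an incoming webhook. GoSquared hasn’t described their exact setup, so the env var and message shape below are our assumptions:

```typescript
// Sketch: piping deploy events into a Slack #ops channel via an incoming
// webhook. The env var and message shape are assumptions, not GoSquared's setup.
// Requires Node 18+ for the global fetch.

const OPS_WEBHOOK_URL = process.env.SLACK_OPS_WEBHOOK_URL!; // hypothetical env var

async function notifyOps(service: string, version: string, ok: boolean): Promise<void> {
  const text = ok
    ? `:rocket: Deployed ${service} ${version} to production`
    : `:rotating_light: Build failed for ${service} ${version}`;

  // Slack incoming webhooks accept a simple JSON payload with a "text" field.
  await fetch(OPS_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
}

// e.g. called from a Jenkins post-build step:
// await notifyOps("dashboard", "v2.41.0", true);
```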
CT: Do you triage issues? If so what does that process look like? How often do you do it?
JG: We have a manageable incoming issue volume, so we triage stuff on the fly as it comes in.
We categorise issues into:
- “Drop everything, fix this now.” (hopefully a rare event)
- “It will go onto this sprint, and I'll fix this as soon as I get to a convenient point.”
- “We'll add it to the backlog and prioritise it against everything else for the next sprint.”
We use our own platform (all our customer information is in GoSquared People), so when a customer reports an issue, we’ll understand their lifetime value, usage levels, and more, to help prioritise their query. We also know whenever we fix issues quickly, we can very rapidly turn a customer complaining into a customer who is incredibly loyal and happy – to the point where they’ll often share the experience publicly.
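As an illustration of that customer-aware triage, here’s a hypothetical sketch – the buckets mirror the three categories above, but the scoring thresholds are entirely ours:

```typescript
// A hypothetical sketch of customer-aware triage. GoSquared says they weigh
// lifetime value and usage levels; the thresholds below are invented for
// illustration only.

type Severity = "drop-everything" | "this-sprint" | "backlog";

interface Reporter {
  lifetimeValue: number;   // e.g. total revenue from this customer to date
  weeklyActiveUse: number; // e.g. sessions per week in the product
}

function triage(affectedUsers: number, reporter: Reporter): Severity {
  if (affectedUsers > 100 || reporter.lifetimeValue > 10_000) {
    return "drop-everything"; // hopefully a rare event
  }
  if (affectedUsers > 10 || reporter.weeklyActiveUse > 5) {
    return "this-sprint"; // fix at the next convenient point
  }
  return "backlog"; // prioritise against everything else next sprint
}
```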
These two Twitter threads from @petjb and @codetheory are good examples of how fixing issues quickly can make customers happy.
CT: What part of your current process gives you the most trouble? What is working the best?
JG: One of the hardest things is always figuring out where to spend our time so it’s most effective – deciding which feature to tackle next. There are always so many options, and so many exciting avenues. A lot of the time we’re having to say no to really, really good ideas in order to maintain focus.
CT: What is something you do that you feel is different from what others are doing?
JG: We have an incredibly small product team for the size and quality of our platform. We serve thousands of customers, and process incredibly large volumes of data with the platform we’re building – many of our customers think we have an engineering team of a hundred people. We’ve got to where we are by being fortunate enough to have an amazingly talented team, and by making some bets on technology that have paid off.
CT: If you had to give your shipping process a name, what would it be?
JG: Really struggling to answer this one. Probably “as agile as possible”.
CT: When you are building new features how do you handle bugs that get found before the feature ships? After it ships?
JG: We evaluate how many people will be affected by the issue and how critical the issue is for those people, and use that to decide whether it should hold up the feature from shipping or be fixed after the fact. We always ask ourselves “is what we have now better than what’s currently in front of customers?” If the answer is yes, then we ship, and fix after.
CT: What does your branching and merging process look like?
JG: We create branches for each feature, and we also have an alpha and beta branch for pushing new releases out to subsets of customers earlier. We pair up to review PRs before anything goes live.
CT: How is new code deployed?
JG: We have a bunch of different services all running differently (e.g. mobile apps vs frontend apps vs server-side running on instances, containers, or AWS Lambdas).
On the whole, we have tests that are run on every deploy, which can also be run locally during development, and for most services we have a staging deployment where we can examine how a service runs in a production-like environment.
CT: How often do you ship new code to customers?
JG: We ship updates to the product multiple times a day. We ship new functionality (the stuff customers will actually notice) at least a couple of times a week.
CT: Do you use feature flagging? Staging servers?
JG: We use feature flagging for rolling out functionality to subsets of users to gather feedback before rolling out to 100% of our user base. We haven’t needed to do this much, but as we continue to grow we’re finding ourselves doing it increasingly frequently to ensure updates and improvements are well received and well thought-through before hitting everyone. We pick up a lot of edge-cases by feature flagging and rolling out gradually.
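Here’s a minimal sketch of percentage-based rollout, assuming a stable user id. This is our illustration; GoSquared hasn’t described their flagging implementation:

```typescript
// A minimal sketch of percentage-based feature flagging, assuming a stable
// user id. This is an illustration, not GoSquared's implementation.
import { createHash } from "crypto";

// Hash the user id into a stable bucket from 0-99, so the same user keeps
// seeing the same variant as the rollout percentage ramps up.
function bucket(userId: string, flag: string): number {
  const digest = createHash("sha256").update(`${flag}:${userId}`).digest();
  return digest.readUInt32BE(0) % 100;
}

function isEnabled(userId: string, flag: string, rolloutPercent: number): boolean {
  return bucket(userId, flag) < rolloutPercent;
}

// Roll "new-dashboard" out to 10% of users, gather feedback, then ramp up.
if (isEnabled("user_123", "new-dashboard", 10)) {
  // render the new dashboard
}
```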
CT: Do you use Sprints? How long are they? How is that working out?
JG: We had one-week sprints until the start of Q3 – we’re now experimenting with two-week sprints that run Wednesday to Wednesday. The primary reason for this is to give ourselves more time to tackle more significant improvements, and also to ensure we spend more time shipping and less time planning / meeting.
CT: Thanks James, this has been a fantastic interview. Really appreciate it!
JG: This has been a really valuable set of questions to answer actually – we have been reviewing a lot of internal processes lately and this has really helped me articulate what we do and why :D
Want to work with James at GoSquared? They're hiring! James and GoSquared are also offering a 50% discount off the first month of their Live Chat tool. It's a great way to get direct, relevant feedback on the product you're building – compared to stuffy methods like email, you can get the real, raw feedback of users with Live Chat. Use code "codetreesquared" when you try GoSquared's Live Chat to get 50% off the first month.
—
Can we improve this? Do you want to be interviewed, or want us to interview someone else? Get in touch here!