Personal retrospectives

| categories: organization, tldr, remote, culture

As an expansion of my snippets habit, for the last 7 years I've written up a "yearly snippets" doc - a personal retrospective for the year.

I've used various formats - from "incident review" style through "4Ls" - but always with the intention of sharing it more widely than just with me and my manager. My colleagues have generally appreciated the perspective, and some have taken up the practice.

One of my favourite things about doing this is that it's for me. Yes, it's useful to my manager (and a handy way of "managing up"). I use it as input to the local performance and review system. But I'm the main audience and it's my practice. I hate the kinds of performance management systems we see in tech companies. Having something that "works around" them to my benefit is very freeing.

Anyhow, Stig and I wrote up some notes and guidance about the idea at $current:

The systems design interview

| categories: tldr, interviewing, advice

This post describes (roughly) the aims and format of typical systems design interviews, and gives guidance on how to approach them.

The intended audience is systems and software engineers who are less familiar with this style of interview.


The company

Many companies - large and small - include a "systems design" slot in their on-site interviews, where the candidate is asked to design a relatively large-scale system - typically involving multiple cooperating machines and services.

The aim of the company in these interviews is to gauge the candidate's experience with and intuition for the design of distributed systems.

It's also a good opportunity to see how well the candidate communicates ideas, and what working with them would be like at a design level.

The interviewer

The interviewer shares the company's aims, more or less, and needs to:

  • set the scene, laying out the systems design scenario to be covered;
  • guide the interview, clarifying things and giving hints where necessary;
  • steer the direction of the discussion according to the candidate's experience;
  • ask questions and dig into concrete elements of the design to make sure the candidate
    1. knows what they're talking about and
    2. can reason about and defend specific points, rather than talking "abstractly" throughout the interview;
  • assess the candidate's domain knowledge, problem-solving ability, communication style, and culture fit.

Taken together, this is a fairly tall order, so usually interviewers in this area have significant interviewing experience.

The candidate

Your aim is obviously to get an offer. ;o) In this case, that means you need to:

  • demonstrate good communication and problem-solving skills;
  • show that you can apply any knowledge you have that is relevant to the problem domain and/or navigate the unfamiliar parts;
  • dig into and solve design issues appropriate to your experience level.

Think of yourself throughout as a "future colleague" - taking a collaborative approach to the discussion.



Typically an interview like this comprises a single design question followed by a long discussion. Roughly speaking, you can expect things to go like this:

  1. The interviewer sets up the scenario.
  2. The interviewer gives some starting data to work from: user numbers, data sizes, that sort of thing. It's important to note that this will be incomplete. Do not try to work from the initial conditions in splendid isolation. Instead,
  3. You clarify the scope of the problem, asking for any further data you feel you might need to start on the design.
  4. You decompose the problem into sub-problems.
  5. You start tackling one, again asking for any data not already provided, or deriving new data from what you've been given.
  6. The interviewer digs into concrete details, asks for clarifications, and provides hints or further information where necessary.
  7. Go to step 4 and repeat until time runs out. :o)



The interviewer wants to understand how you think, so staring at the whiteboard in silence for 5 minutes is not very useful to them. If you usually work in silence, practise on paper before the interview, talking aloud through what you are doing.

If there's a whiteboard - and there most probably is - use it for everything. If there isn't, ask for paper. Write down design parameters given to you, or that you derive. Draw system diagrams, data flow diagrams, API specifications. It helps clarify your thinking, encourages useful decomposition of problems, and makes it easier for the interviewer to follow and assess you.

Ask lots of questions. You are not expected to fly through this solo, without interacting with your interviewer: treat them as a peer with whom you are working through the design. If you are making an assumption (that you're aware of), state it clearly and ask the interviewer if they'd like you to modify it.


Decomposition

Don't attack the problem as a monolith. As soon as you've clarified the initial design parameters with the interviewer, think about the "relatively independent" elements of a system that could meet the design goals and sketch them out on the board.

In large-scale designs, this often suggests appropriate (initial) division of the solution into services which can be designed independently, deriving their design parameters from those of the overall problem.
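As a small illustration of deriving per-service parameters from the overall ones, here's a sketch in Python. The workload split and every number are invented assumptions for the sake of the example, not from any real system:

```python
# Hypothetical read-heavy service (e.g. a URL shortener) decomposed into
# sub-services, each deriving its own load from the overall design parameters.
# All numbers and the service split are assumptions, for illustration only.

overall = {
    "requests_per_sec": 50_000,
    "read_fraction": 0.95,  # assumed read-heavy workload
}

read_path_qps = overall["requests_per_sec"] * overall["read_fraction"]
write_path_qps = overall["requests_per_sec"] * (1 - overall["read_fraction"])

# Each sub-service can now be designed independently against its own numbers.
services = {
    "redirect (read path)": read_path_qps,   # latency-sensitive, cacheable
    "create (write path)": write_path_qps,   # consistency-sensitive
    "analytics (async)": read_path_qps,      # consumes read events, may lag
}

for name, qps in services.items():
    print(f"{name}: ~{qps:,.0f} req/s")
```

Even a rough split like this lets you discuss each piece - caching for the read path, durability for the write path - without juggling the whole system at once.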

Having sketched something, look over it critically and discuss it with your interviewer: is there any major part of the original requirements that you haven't addressed? Anything more loosely or tightly coupled than it should be?

Then, you can ask your interviewer which element they'd like you to tackle first.


One of the things your interviewer will be looking for is how you handle concrete elements of the design. This often means estimating, for example, how many machines you are going to need for some sub-service given the design parameters, or what volume of data you expect to have in flight or at rest at any given time.

So you need to be ready to make estimates and do some simple arithmetic. If it's not something you've often done, try taking "latency numbers every programmer should know" and applying them to a problem or system you know well. See how the theory compares to your practical experience. :o)

You might enjoy working through "computers are fast" by Julia Evans and Kamal Marhubi to sharpen your intuitions.

Back-of-the-envelope calculation like this helps constrain the solution space and gives you a clearer idea of what might work. Note that an estimate can be quite broad and remain useful: your objective is to determine "is this even practical" rather than get (nearly) exact numbers.
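To make this concrete, here's what such a back-of-the-envelope sizing might look like, written out in Python. Every input number is an invented assumption for illustration - the point is the shape of the arithmetic, not the specific values:

```python
# Back-of-the-envelope sizing for a hypothetical image-upload service.
# All inputs are assumptions for illustration, not from any real system.

SECONDS_PER_DAY = 86_400

daily_active_users = 10_000_000
uploads_per_user_per_day = 2
avg_image_bytes = 500 * 1024  # assume ~500 KiB per image

# Request rate: daily average, with a rough 3x factor for peak traffic.
avg_uploads_per_sec = daily_active_users * uploads_per_user_per_day / SECONDS_PER_DAY
peak_uploads_per_sec = 3 * avg_uploads_per_sec

# Storage: bytes written per day, and per year.
bytes_per_day = daily_active_users * uploads_per_user_per_day * avg_image_bytes
bytes_per_year = bytes_per_day * 365

# Machines: assume one app server comfortably handles ~200 uploads/sec.
uploads_per_server = 200
servers_needed = -(-peak_uploads_per_sec // uploads_per_server)  # ceiling division

print(f"average uploads/sec: {avg_uploads_per_sec:,.0f}")
print(f"peak uploads/sec:    {peak_uploads_per_sec:,.0f}")
print(f"storage/year:        {bytes_per_year / 1e12:,.1f} TB")
print(f"app servers (peak):  {servers_needed:.0f}")
```

Notice how coarse the inputs are: what matters is landing in the right order of magnitude ("a handful of app servers, petabytes per year"), which immediately constrains the storage and serving designs that are worth discussing.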


There are some fundamental things we need to keep in mind when designing a system distributed across multiple machines, and becoming familiar with them should help you navigate the problem domain.

Jeff Hodges' notes on distributed systems for young bloods covers many of these and is worth internalizing. My own talk "Designing for Brobdingnag" also looks at building blocks and patterns in large-scale systems design and includes a set of further resources on the last page of notes.

This is a big area, but in order to tackle a large-scale design it will help to have at least some idea of your options for:


If you have access to someone who runs interviews like this (and doesn't have a conflict of interest), ask them to help you by running a mock interview & then writing up

  • feedback as though for an interview process and
  • specific notes you can use to improve.


Junior candidates sometimes get wigged out by these interviews and lock up. The failure mode here is expecting too much of yourself: the interviewer knows your experience and background (they've read your CV), so will already have a good idea of the level they can expect. It's their job to appropriately assess the discussion relative to your experience. So do your best, make a good stab at the problem (see "Decomposition" above), try to have a good time, and above all don't compare yourself to an "ideal, more experienced you".

Conversely, if you're a senior candidate with relevant domain knowledge, the interviewer is justified in expecting a lot from the discussion, and you may find them digging pretty deep. Don't be shy about going into excruciating design details if that's where the interviewer wants to go. They're trying to assess your limits, and to get into concrete specifics: an occasional problem with senior candidates is that they adeptly handwave these. :o)

Python in private repos

| categories: python, tldr, work

At $currentplace we work mostly in Python. We do everything we can to automate away busywork, so we like CI and the family of related tools and ideas. We've put quite a bit of work into a smooth build/test/deploy cycle, and along the way I've spent more time than I care to admit messing about with Python packaging.

Last month, a friend asked how we manage build dependencies, and my long answer eventually turned into a company blog post:

Developing and deploying Python in private repos

At Hosted Graphite, most of our deployed services are written in Python, and run across a large installation of Ubuntu Linux hosts.

Unfortunately, the Python packaging and deployment ecosystem is something of a tire fire, particularly if your code is in private Git repositories. There are quite a few ways to do it, and not many of them work well.

This post tells the story of what we have tried, where we are now, and what we recommend to programmers in a similar situation.


| categories: organization, tldr, remote, culture

A good habit I picked up at the last place was keeping "snippets" - short notes about what I worked on in a particular week. Something like this:

Weekly snippets: $project cleanup; meetings & troubleshooting.

"Short" week, in that I carried 2 days' work into it. All caught up (down?) now.

  • Removed daemonization code from $project - we expect to run it under supervisor everywhere. $link

  • Updated $otherproject to depend on that, plus Puppet changes to run under supervisor, deployed to $cluster.

  • Ran and wrote up a $topic meeting. $link

  • A frustrating day investigating $protocol issues in $cluster, much excluded but nothing pinpointed. Also some troubleshooting support during an incident on Thursday night - opened $task and updated $doc for followup.

  • Wrote up a plan for $new_project.

  • Discussed $speculative_project ideas with Eman & Charlie.

  • Copious Free Time: started working on Simple ESP Security Association Manager.

I write about this here because snippets are brilliant. I love having them to look back over - to find PRs I worked on, or docs I wrote, or to recall when things happened and what I was thinking at a particular time. Without them, I'd be at a loss to explain what I did in any given month, or would need to reconstruct it from old notes or emails. Writing a retrospective of my first year at the new place - do I get to call it new after a year? - pulled this into focus.

Some of my colleagues keep daily snippets. I find that weekly is about the right granularity for me: real nitty-gritty detail is in a notebook I maintain day to day. We've taken to just submitting them to a GitHub repo, and made a robot to notify us when something's committed so we can follow along. As a remote worker, I need to make it easy for the team to stay in touch with my work, and snippets certainly help.