Software companies typically have some kind of planning process that happens as part of any product or feature iteration. The planning process might be different depending on the type, scale or uncertainty of the iteration being planned.

At Echobox, if an iteration is small and fast enough we aim to minimise the process as much as possible (perhaps even no planning), but for larger changes some form of dedicated technical planning document will be created. The goal of such a document is to highlight significant architectural decisions, risks, costs, possible alternatives for discussion, etc. Planning should always be a net positive activity that makes the overall process faster. If it doesn’t, you spent too long planning, for example worrying about insignificant details too early.

A key benefit of efficient high-level technical planning is that it gives greater confidence in the scale of technical effort that might be required; not necessarily to the level of a delivery estimate, but at least enough to judge feasibility. This helps re-validate the expected value of any larger change before we start implementing.

The following is our cheat sheet of things that can get highlighted when reviewing planning for larger pieces of more mature work. Please note that planning for any type of experiment, POC or MVP should take a different approach/perspective. The points are ordered by the ease with which each issue can be spotted. If planning may contain considerations at the harder end of the spectrum, it’s important to get input from more experienced team members.

Ultimately everything boils down to “maximum impact for lowest cost/effort”, but looking under the hood that quickly gets complex. This cheat sheet was originally created for engineers unfamiliar with planning, to help get them up to speed more quickly. There are quite a few points here, but for any one piece of planning only a small handful should ever be relevant.

These are general rules of thumb; remain pragmatic at all times. There can always be exceptions where justifiable, e.g. in the interests of speed.

Easy

  1. Detailed technical planning has begun, or worse finished, while the product part of the spec is not yet complete and may still change. That being said, back-and-forth high-level technical discussions are always encouraged, at all stages, to ensure we avoid waterfall-like patterns.
  2. Proposed work is not well aligned to the squad objectives. This might just be a miscommunication in the planning, but it would need to be resolved.
  3. Proposed implementation, e.g. an API endpoint, is too specific and only considers a small number of (mentioned) use cases, i.e. normally from the explicit product stories. For little to no extra effort it can almost always be generalised to make it simpler and/or easier to iterate on in future.
  4. Proposed implementation doesn’t address all requested product requirements.
  5. Services, interfaces, data structures, methods etc. are named after their ‘use case’ rather than WHAT they do. For example, a POJO named ‘Newsletters’ when it’s actually a collection of generic emails, which could be named EmailList (or similar). The use case might have been newsletters, but the POJO contains emails, so newsletters shouldn’t have been mentioned (see the sketch after this list).
  6. Units have not been included in proposed variable names, e.g. timeCreated instead of unixTimeCreated! Units should be included everywhere, all the time, no exceptions.
  7. Planning doesn’t indicate that important documentation may need to be updated, e.g. customer-facing or service-level docs.
  8. Exposes implementation details in interfaces rather than using loosely coupled identifiers.
  9. Proposed implementation involves PII/Personal Data and has not considered GDPR.
  10. Planning contains ambiguous definitions, has outdated sections, or contradicts itself. This can happen if the product spec changed after planning started and it hasn’t been reviewed as a whole again, or if significant changes were made to the planning and old sections remain unedited.
  11. Planning hasn’t used the document templates we’ve created to try and keep our approach consistent.
  12. Comments and questions from previous review rounds remain unresolved.
  13. Date-times are being communicated as date strings when they should instead be numeric unix times. There are only a tiny number of cases where we’d want to use a date string, so if in doubt use a unix time (and also see point 6 above about units).
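
As a concrete illustration of points 5, 6, 8 and 13, here is a minimal sketch of a POJO that follows them. It is not real Echobox code; the class and field names (EmailList, unixTimeCreated, accountId) are hypothetical and chosen purely to show the conventions.

```java
import java.util.List;

// Hypothetical sketch only: illustrates points 5, 6, 8 and 13 from the list above.
public class EmailList {

  // Named after WHAT it holds (a list of emails), not the newsletter use case
  // it might first be built for (point 5).
  private final List<String> emailAddresses;

  // The unit is in the name and the value is a numeric unix time,
  // not a "timeCreated" date string (points 6 and 13).
  private final long unixTimeCreated;

  // A loosely coupled identifier rather than embedding the owning account
  // object and its implementation details (point 8).
  private final long accountId;

  public EmailList(List<String> emailAddresses, long unixTimeCreated, long accountId) {
    this.emailAddresses = emailAddresses;
    this.unixTimeCreated = unixTimeCreated;
    this.accountId = accountId;
  }

  public List<String> getEmailAddresses() {
    return emailAddresses;
  }

  public long getUnixTimeCreated() {
    return unixTimeCreated;
  }

  public long getAccountId() {
    return accountId;
  }
}
```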

Medium

  1. Implementation is over-engineered when the required scale/performance can’t yet be reasonably anticipated. For example, when we’re building an MVP, which is almost entirely assumptions, and we’re proposing to deploy an entirely new service to solve the problem. Keep it simple first!
  2. Proposed implementation can be simplified with no loss of product functionality.
  3. Proposed implementation is too large in technical effort to justify the customer/product value expected. If so, this is work we should put on hold immediately unless we can find an alternative approach. For example, large-scale refactoring where the rationale for doing so is not yet strong enough.
  4. Proposed implementation can be modified with no extra effort to make it more extensible for the future.
  5. Incorrect conventions have been applied. For example, using snake_case when it should be camelCase and vice versa, or inconsistent naming of the same object/class, or passing values as query parameters when the convention is to use path parameters and vice versa.
  6. Proposed implementation has too many untested assumptions for its size in technical effort so an additional POC/research needs to be done first.
  7. Proposed implementation would be hard/costly to iterate in future. Normally this means we need to make different technical choices to make future iterations easier.
  8. Proposed implementation would need to be entirely rebuilt if iterated in future. Preferably our iterations, from a technical perspective, always move in the general direction of a known end goal. Main exception here is if the reduction in technical effort now justifies a future rebuild once remaining assumptions are validated.
  9. Proposed implementation is very similar to other work recently completed or in progress by others which does not appear to have been referenced.
  10. Proposed implementation involves deployment complexities or risks that haven’t been considered, for example how easy will it be to roll back changes, or could users be logged out by this change?
  11. Proposed implementation hasn’t considered and/or mentioned likely infrastructure cost implications.
  12. Proposed implementation hasn’t considered the potential impacts on other squads, e.g. downtime causing significant cross squad disruption.
  13. Implementation doesn’t mention how to handle likely error cases/scenarios, for example a failing or hanging synchronous inter-service/HTTP call, or lost/corrupted events.
  14. Implementation introduces avoidable synchronous inter-service calls as part of handling synchronous incoming requests. We generally want to avoid this pattern (see the sketch after this list).
  15. Proposed changes don’t include suitable considerations towards what monitoring or metrics can be used to determine impact or success.
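
To make points 13 and 14 more concrete, here is a sketch contrasting blocking an incoming request on another service with publishing an event instead. The EnrichmentClient and EventPublisher interfaces, and all names, are invented for illustration; they don’t refer to real Echobox services or libraries.

```java
// Hypothetical sketch only: illustrates points 13 and 14 from the list above.

interface EnrichmentClient {
  String enrich(long emailListId); // a synchronous HTTP call to another service
}

interface EventPublisher {
  void publish(String topic, long emailListId); // e.g. a queue/stream producer
}

public class EmailListCreationHandler {

  private final EnrichmentClient enrichmentClient;
  private final EventPublisher eventPublisher;

  public EmailListCreationHandler(EnrichmentClient enrichmentClient,
                                  EventPublisher eventPublisher) {
    this.enrichmentClient = enrichmentClient;
    this.eventPublisher = eventPublisher;
  }

  // Risky pattern (point 14): the incoming request now blocks on a second
  // service. If that service hangs or fails, this request hangs or fails with
  // it, unless timeouts and fallbacks have been planned (point 13).
  public void handleCreateBlocking(long emailListId) {
    String enriched = enrichmentClient.enrich(emailListId);
    // ... persist the enriched data, then respond to the caller
  }

  // Generally preferred: respond quickly and publish an event so enrichment
  // happens asynchronously; a slow downstream service can no longer break
  // the incoming request.
  public void handleCreateAsync(long emailListId) {
    eventPublisher.publish("email-list-created", emailListId);
    // ... respond to the caller immediately; a consumer enriches later
  }
}
```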

Hard

  1. Proposed implementation contains ambiguous or unclear naming. For example, we’re adding a new field to Redshift reporting databases and we’ve proposed a field name of ‘data’.
  2. Proposed implementation can be GREATLY simplified for no or only a small loss in functionality. For example we can just solve the problem manually.
  3. Building product agnostic functionality in a product specific way, or in the wrong location/API.
  4. Proposed implementation can be significantly simplified by moving extraneous product requirements into future iterations.
  5. Proposed data structures have not sufficiently considered data normalisation, minification, costs of future extensions and compatibility with likely future use cases.
  6. Proposed implementation would lead to microservice anti-patterns, for example strongly coupled services, lack of cohesion, chatty interfaces etc.
  7. Proposed implementation is likely to introduce disproportionate support problems/challenges.
  8. Proposed implementation has understated or missed technical or product risks that could jeopardise the current or future iterations.
  9. Proposed implementation represents an iteration towards significant future complexity and/or functionality and that future state is not yet clear enough, aka we’re flying blind.
  10. Proposed implementation is merging considerations that are best kept separate. For example, extending an existing endpoint’s functionality in a way that will result in completely different responses, which is perhaps best implemented as a new endpoint (see the sketch after this list).
  11. Proposed implementation hasn’t sufficiently considered security vulnerabilities.
  12. Planning has only addressed what was requested when there are indications we’re possibly solving the product problem in the wrong way.
  13. Technical planning has possibly made an inefficient choice in relation to unspecified product requirements.
  14. Proposed implementation has considered GDPR (Easy) but has not suitably mitigated the risks associated with it.
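
To illustrate point 10, here is a hypothetical sketch contrasting an overloaded endpoint with two separate ones. The interface, method and type names are invented purely for illustration.

```java
// Hypothetical sketch only: illustrates point 10 from the list above.

// Merged approach: one endpoint whose response shape depends on a flag.
// Callers have to inspect the result to know what they actually received,
// and the two concerns can no longer evolve independently.
interface ReportsApiMerged {
  Object getReport(long reportId, boolean includeRawEvents);
}

// Separated approach: two endpoints, each with a single predictable response,
// which are simpler to document, version and iterate on independently.
interface ReportsApi {
  ReportSummary getReportSummary(long reportId);
  RawEventPage getReportRawEvents(long reportId, int page);
}

class ReportSummary {
  // aggregated metrics for a report, omitted for brevity
}

class RawEventPage {
  // a single page of raw events, omitted for brevity
}
```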

Very Hard

  1. Proposed implementation diverges from longer term technical roadmaps and visions, for example new feature/service decoupling, security first, simplifying existing complexity.
  2. Proposed implementation does not have the right degree of coupling, normally too much, across existing services and features.
  3. Proposed implementation will align badly with future resource considerations.

Very Very Hard

Too many considerations have been considered and we’ve now taken too long considering things rather than building, learning and iterating quickly 🙄 Solving this particular balance is outside the scope of this post but there are many great resources out there which try to help.

Thanks for reading!
