Don’t Succumb To “Facebook Envy”. Solve The Problem In Front Of You
by Jason Gorman, Dec. 1, 2017
Gorman says the key to future-proofing code is not to anticipate what you may want to do with it later and build in parts for just that purpose, but to make the code easy to change. This of course makes a ton of sense: you can't tell in which direction you will need to change the code, or you would simply write it that way now! So don't build speculatively toward any particular future: just apply general principles that make your code easy to change.
Part of that means making the code easy to understand, and reducing the connascence that links its parts together in fragile ways. Good code + good coders = adaptability. That beats trying to make the code 'robust' from the start by designing it to do all the things out of the box. Don't add features unless there is a known need: once something gets into the code, dependencies get built on top of it, and you can't remove it without complaints. So write as little as possible from the get-go.
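As a concrete (and hypothetical) sketch of what reducing connascence can mean in practice: replacing connascence of position with the weaker connascence of name, so that call sites are less fragile under change. The function names and the pricing rule here are invented for illustration.

```python
# Hypothetical example: weakening the connascence between caller and callee.

# Before: connascence of position. Every caller must know that the
# third argument is the discount and the fourth is the tax rate;
# reordering parameters silently breaks every call site.
def price_v1(base, quantity, discount, tax_rate):
    return base * quantity * (1 - discount) * (1 + tax_rate)

# After: keyword-only arguments turn this into connascence of name,
# which is weaker. Call sites stay correct even if the parameter
# list is reordered, and each call documents itself.
def price_v2(base, quantity, *, discount=0.0, tax_rate=0.0):
    return base * quantity * (1 - discount) * (1 + tax_rate)

total = price_v2(10.0, 3, discount=0.1, tax_rate=0.2)
print(round(total, 2))  # 32.4
```

The behaviour is identical; only the strength of the coupling between the chunks changes, which is exactly the kind of cheap-to-change property Gorman is after.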
Other good points that Gorman has: My Solution To The Dev Skills Crisis: Much Smaller Teams
The way to deal with the complexity of code is to break the functionality into appropriately sized chunks with weak interactions through well-defined, limited interfaces. A chunk needs to be small enough that a single developer or pair can comprehend and build it. Breaking the chunks down too far and distributing them over many people drives up the cost of coordinating the people; this is like trying to get 9 women to have 1 baby in 1 month.
On the other hand, if you assign too large a chunk to one person or set of people, the complexity will be too great to comprehend, and your developers will get bogged down. Adding new people to speed things up will not work, because they will get confused and make mistakes. The key point is that the boundaries between chunks need to be aligned with the domain of responsibility of a sufficiently cohesive group of developers (probably not more than two). If you have too many people on a chunk, you effectively start to blur responsibility for changes. This gets really bad with a large chunk, because then people need to understand the changes made in other parts of the chunk: the parts are all interlinked due to the lack of interface boundaries, and everyone has to comprehend a larger system just to make their own changes.
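A minimal sketch of what a "well-defined, limited interface" at a chunk boundary can look like. The `RateLimiter` example is entirely hypothetical; the point is that other chunks depend only on the small contract, so the owning pair can rework the internals freely.

```python
# Hypothetical sketch: a chunk exposes one small, well-defined interface.
from typing import Protocol


class RateLimiter(Protocol):
    """The entire contract between this chunk and the rest of the system."""

    def allow(self, client_id: str) -> bool: ...


class FixedWindowLimiter:
    """One pair's implementation. Its internals (the counting scheme,
    the data structure) can change without touching any other chunk,
    because everything else depends only on allow()."""

    def __init__(self, limit: int):
        self._limit = limit
        self._counts: dict[str, int] = {}

    def allow(self, client_id: str) -> bool:
        n = self._counts.get(client_id, 0)
        if n >= self._limit:
            return False
        self._counts[client_id] = n + 1
        return True


limiter: RateLimiter = FixedWindowLimiter(limit=2)
print([limiter.allow("a") for _ in range(3)])  # [True, True, False]
```

The interface is one method; that narrowness is what keeps the interaction between chunks weak, in Gorman's sense.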
Not Gorman’s work, but the ‘mythical man-month’ is evidently the assumption that the rate of progress scales linearly with the number of people working on a chunk of code, and that the effort required scales linearly with the size of the code. Both assumptions are obviously false.
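One standard way to see why progress can't scale linearly with headcount (this is Brooks's observation, not Gorman's): the number of pairwise communication paths among n people grows as n(n-1)/2, i.e. quadratically.

```python
# Pairwise communication channels among n people: n * (n - 1) / 2.
# Coordination overhead grows quadratically while hands grow linearly.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(n, channels(n))
# 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225
```

Going from 2 people to 50 multiplies the hands by 25 but the channels by 1225, which is why adding people to a late, tightly coupled chunk slows it down.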
Basically, Gorman is debunking a bunch of fads in software development management. This post touches on the subject of knowledge beyond the human scale, and also perverse incentives.
Methods like SAFe, LeSS and DAD are attempts to exert top-down control on highly complex adaptive organisations. As such, in my opinion and in the examples I’ve witnessed, they – at best – create the illusion of control. And illusions of control aren’t to be sniffed at. They’ve been keeping the management consulting industry in clover for decades.
The promise of scaled agile lies in telling managers what they want to hear: you can have greater control. You can have greater predictability. You can achieve economies of scale. Acknowledging the real risks puts you at a disadvantage when you’re bidding for business. That is, there is money to be made helping people stay in denial about unpredictability. Well, that’s nothing new: look at religion.
When we iterate our designs faster, testing our theories about what will work in shorter feedback loops, we converge on a working solution sooner. We learn our way to Building The Right Thing. … So ask your requirements analyst or product owner this question: ‘What’s your plan for testing these theories?’ I’ll wager a shiny penny they haven’t got one.
Another idea I’m getting from Gorman’s blog: the idea that user requirements are dumb. If you want to intelligently solve the user’s problem, you can’t expect them to explain it to you precisely, as if you were a computer yourself. You’ve got to grasp the concept, put yourself in their shoes. Because humans can’t communicate the way machines do: they communicate by inference, not by specification, due to bandwidth limits. In principle, you could use a ‘POV gun’ to inject someone with your perspective, but we aren’t there yet.
This is exactly the same problem as trying to teach AI how to solve problems in a way that humans would find acceptable: it’s going to fail unless the AI figures out how to read human minds via inference, because human communication just isn’t up to the task of transmitting that kind of information. Making your code easy to change is mandatory, because you are going to develop it iteratively rather than monolithically in one go. You need to test hypotheses about users’ requirements experimentally: you have to implement a thing in order to see if that’s what they wanted. Until users have an implementation in front of them, you can’t get at all the requirements. Evidence-based business.
Try as we might to build the right thing first time, by far the most valuable thing we can do for our customers is allow them to change their minds. Iterating is the ultimate requirements discipline. So much value lies in empirical feedback, as opposed to the untested hypotheses of requirements specifications.
Crafting code to minimise barriers to change helps us keep feedback cycles short, which maximises customer learning. And it helps us to maintain the pace of innovation for longer, effectively giving the customer more “throws of the dice” at the same price before the game is over.
It just so happens that things that make code harder to change also tend to make it less reliable (easier to break) – code that’s harder to understand, code that’s more complex, code that’s full of duplication, code that’s highly interdependent, code that can’t be re-tested quickly and cheaply, etc.
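A toy illustration (hypothetical names and rule) of the duplication point above: duplicated logic is easy to break, because a rule change must be found and applied in every copy, whereas a single shared function can be changed in one place and re-tested cheaply.

```python
# Hypothetical illustration: duplication makes code easier to break.

# Before: the "free shipping over 50" rule is written out twice.
# Changing the threshold in one place but not the other silently
# produces inconsistent behaviour.
def invoice_total_v1(subtotal):
    shipping = 0 if subtotal >= 50 else 5
    return subtotal + shipping

def receipt_line_v1(subtotal):
    shipping = 0 if subtotal >= 50 else 5   # second copy: easy to miss
    return f"shipping: {shipping}"

# After: one shared function. A threshold change happens in one place,
# and one cheap test covers every caller.
FREE_SHIPPING_THRESHOLD = 50

def shipping_cost(subtotal):
    return 0 if subtotal >= FREE_SHIPPING_THRESHOLD else 5

def invoice_total_v2(subtotal):
    return subtotal + shipping_cost(subtotal)

assert invoice_total_v2(49) == 54 and invoice_total_v2(50) == 50
```

The same property that makes the second version easier to change (one authoritative copy) is also what makes it harder to break, which is Gorman's point about change-friendliness and reliability going together.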
And it just so happens that writing code that’s easy to change – to a point (that most teams never reach) – is also typically quicker and cheaper.