This is something I came up with a while ago when I was working on a large integration/orchestration project for a now-defunct MVNO (mobile virtual network operator). It was a complex project with many external collaborators and endpoints into the system, and it was struggling to reliably deliver incremental functionality on time. I had a directive to use my judgment to essentially turn the project around.
I had just come off a project that had excellent pure unit test coverage, but not very good overall test coverage with actual collaborators. The integration points were much more difficult and painful than they needed to be. At the other extreme, I inherited a bunch of existing non-unit tests as part of the MVNO project. These were brittle, took a long time to run, and failed for a bunch of different reasons. As a result, they weren’t run very often, fell into neglect, and eventually became worthless. There was no continuous integration in place, so the feedback loop between breaking something and finding out what you broke was very loose.
To make matters just a little more dicey, I was working with another test-focused developer who had a strong preference for writing epic end-to-end tests, while I was trying to make the code more testable in isolation. I resented the fact that “his” tests were keeping me from running the tests as part of the CruiseControl build.
I came up with what I called the “Food Pyramid” as a way of balancing these different forces and points of view. I broke the one massive test project into three distinct assemblies with their own goals. The differences between the projects are best explained in a series of pictures:
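To make the split concrete, here’s a minimal sketch of what the three assemblies might contain; I’m assuming NUnit, and the fixture names and MVNO-flavored domain details are invented for illustration:

```csharp
using NUnit.Framework;

// UnitTests.dll -- isolated and fast, run on every build.
[TestFixture]
public class RatingCalculatorTests
{
    [Test]
    public void Discount_applies_to_qualifying_plans()
    {
        var calculator = new RatingCalculator();
        Assert.AreEqual(9.0m, calculator.Rate(10.0m, 0.10m));
    }
}

// IntegrationTests.dll -- one real collaborator at a time
// (a database, a partner endpoint), run a few times a day.
[TestFixture]
public class SubscriberRepositoryTests
{
    [Test]
    public void Round_trips_a_subscriber_through_the_real_database()
    {
        // Exercises actual persistence: slower, and needs environment setup.
    }
}

// EndToEndTests.dll -- the whole deployed system, run before releases.
[TestFixture]
public class ProvisioningTests
{
    [Test]
    public void New_subscriber_can_be_activated_and_billed()
    {
        // Drives the system through its external endpoints.
    }
}

public class RatingCalculator
{
    public decimal Rate(decimal baseRate, decimal discount)
    {
        return baseRate * (1m - discount);
    }
}
```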
So, did it work? Yes and no. We eventually had much smoother and more confident deployments thanks to the end-to-end tests. I also managed to refactor and add new code more quickly with fewer bugs thanks to my unit-test safety net. But, just as I was starting to get a handle on the project, the client filed for bankruptcy and stopped paying their vendors (which unfortunately included me). It seems that having a viable business model trumps all.
I have, however, used these and similar distinctions successfully on subsequent projects, and I know that at least one development organization I’ve explained the idea to has adopted the pyramid as part of their overall testing strategy.
I’ve been arguing (mostly unsuccessfully) for some time that Lean Software Development and Agile are two different intellectual frameworks for understanding the same underlying concepts. The artificial distinctions that people draw are often based on connotations of the words themselves, such as “agile is about fast while lean is about cheap”.
You can’t really talk about them as alternatives: if you are doing agile, you are doing lean, and vice versa. Agile was always meant as a very broad concept, a core set of values and principles shared by processes that look superficially different. You don’t do agile or lean; you do agile and lean. The only question is how explicitly you use ideas that draw directly from lean manufacturing.
One thought on drawing ideas directly from lean manufacturing: the ideas that work in both manufacturing and software (such as empowerment, incrementalism, and tight feedback loops) don’t work in software merely because they work in manufacturing. That’s a naïve mindset that can lead you to all sorts of bad conclusions. Some ideas work in both settings because of the nature of the people doing the work.
It’s easy to get ideas from manufacturing as it’s a human endeavor that’s more mature and has been very well studied and documented. An observant and mindful person could surely draw lessons from all sorts of places. It’s possible that someday we’ll all understand software development well enough that we won’t need to borrow concepts from other disciplines. It’s clear to me that we aren’t there yet, though.
The headline of the local Seattle newspaper yesterday caught my eye. It was about how surgeons at the UW (my alma mater) are now using aviation-inspired checklists to make sure they don’t, you know, leave stuff inside of people. This resonated with me, because I had resolved to start making checklists for photo gear packing after a near-fiasco last weekend.
I was taking wedding reception photos as a favor for an old friend. For the most part, I hate the entire idea of doing wedding photography. The risk/stress/reward/effort ratios don’t work out right. This one actually worked out really well, as the couple and wedding party were all great. They also had a “real” photographer working (who had more invested in one lens than I have in my entire setup, alas) which freed me up to take some more experimental pictures, such as HDR still lifes of the venue, controlled motion blur shots of people dancing, and grainy black and white candids (I love tmax 3200 because of the grain, not in spite of it).
I took special care to make sure that all of my camera and flash batteries were charged before the event, but I didn’t spend a lot of time packing the bag before I left. I just threw everything into the camera bag and ran out the door. When I got to the venue and started putting all of the things together, I found I was missing a small but vital piece, the caddy that holds two batteries and slides into my battery grip. Without it, my DSLR wouldn’t work. Panic!
After calling home 15 times, I finally got in touch with my wife, who graciously drove across town with a small piece of plastic so I could actually turn the camera on. Fortunately, I brought a film SLR as a backup, and had just finished my last roll when she pulled up.
If I had made a simple packing checklist, much pain and fear would have been avoided.
And while I’m not a fan of oppressive standards, heavyweight processes, or detailed artifacts in software development (my thinking is that if you’re constrained by your conventions, the best you’ll ever be is conventional), simple “have I forgotten this?” checklists are insanely valuable. Based on what I’ve seen, they’re also underused.
My first job in software was split between testing and support for a small company making technical graphics software. The testing department was pretty unstructured. We had reasonable automated test coverage (horrible by today’s standards, but OK for the time), but all manual testing was, “Hey Martin, I just fixed this bug, go poke at it!” and “We’re releasing a beta next week, test everything!” and “We’ve got a release candidate, get the people from sales and marketing to play with it!”
So, for no other benefit than my own confidence, I made some checklists, just to keep from forgetting to test specific features/permutations. Eventually the company adopted my checklists and handed them around when we did our “all-hands” pre-release testing. Just asking people (including me) to be mindful of the feature list while they were doing their exploratory testing helped us find many problems, as well as places where we could improve the user experience.
I saw this again a few days ago when I was looking at a web application that made a lot of pretty common security mistakes (no SSL for login, emailing passwords in cleartext, etc.). “These are all essentially solved problems,” I thought, “Shouldn’t there be a checklist for this sort of thing?”
Many experienced folks have a sort of mental checklist: things they know instinctively to look for, like my old mentor who would always enter “O’Brien” into name fields to catch inappropriately escaped SQL input. This is one of the reasons why domain knowledge is so valuable. How do we get people to capture and share this knowledge? Couldn’t a development group use a set of mature “don’t forget to think about thing X” checklists as a competitive advantage in design, development, and testing?
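On that note, the fix the “O’Brien” test is fishing for is parameterization rather than hand-escaping. Here’s a minimal ADO.NET sketch; the Customers table and its columns are invented for illustration:

```csharp
using System;
using System.Data.SqlClient;

public static class CustomerLookup
{
    public static void PrintCustomers(string connectionString, string lastName)
    {
        // Concatenating input breaks on names like O'Brien, and invites
        // injection:
        //   "SELECT ... WHERE LastName = '" + lastName + "'"
        // A parameter lets the driver handle quoting correctly.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT FirstName, LastName FROM Customers WHERE LastName = @lastName",
            connection))
        {
            command.Parameters.AddWithValue("@lastName", lastName);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} {1}", reader["FirstName"], reader["LastName"]);
                }
            }
        }
    }
}

// Usage: CustomerLookup.PrintCustomers(connectionString, "O'Brien");
```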
While writing this, I’m reminded of my favorite marketing professor at the UW. After spending a whole quarter discussing different theories, reading case studies, and pulling examples from real-life companies, on the last day of class she said (and I’m paraphrasing here, because that was some time ago):
In the real world, when you start in marketing professionally, it’s just checklists: Have I identified the market for my product or service? Have I explained my offering to someone who doesn’t understand it? Have I underpriced? Have I overpriced? Am I saying something stupid or offensive in this ad? How will my competitors react to this change? How will my customers react to this change? The answers are all easy, but bad marketers forget to ask the questions.
Sure, that doesn’t get you to greatness (or even guarantee goodness), but it at least keeps you from forgetting about the obvious.
Martin Fowler has been writing about Domain Specific Languages for some time now. He’s apparently even writing a book on the subject at the moment. For the longest time, I just ignored blog (and bliki) posts on the subject, as I read the title and immediately assumed that it was something that wouldn’t work for me. It was only when a friend said “no really, you need to read about this” that I actually got it.
The name is a particularly odd choice, as the three-letter acronym “DSL” is already in wide use as the name of a technology nobody really understands (Digital Subscriber Line). It reminds me of the time Microsoft rolled out the acronym “DNS” for Digital Nervous System. This isn’t quite that bad, but it’s still pretty confusing.
In the style of my previous decompression artifacts, I’ve created one for DSLs.
Once you get past the almost inevitable initial misunderstanding of what it actually is, creating a domain-specific, fluent, expressive interface for your domain objects is a really cool idea. It’s also something you can adopt incrementally. If you just change every void method to a “this” method, none of your existing clients will break, and people who aren’t down with DSLs don’t have to do anything differently.
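Here’s a minimal sketch of that incremental change; the OrderBuilder class and its methods are invented for illustration:

```csharp
// A plain builder whose methods used to return void.
public class OrderBuilder
{
    private string _customer;
    private decimal _total;

    // Returning 'this' instead of void enables chaining, and existing
    // callers that ignored the old void return still compile unchanged.
    public OrderBuilder ForCustomer(string customer)
    {
        _customer = customer;
        return this;
    }

    public OrderBuilder WithTotal(decimal total)
    {
        _total = total;
        return this;
    }

    public string Describe()
    {
        return _customer + ": " + _total;
    }
}

// Old style still works:
//   builder.ForCustomer("ACME");
//   builder.WithTotal(100m);
// New, fluent style:
//   builder.ForCustomer("ACME").WithTotal(100m);
```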
On a side note: I recently did a TDD presentation for a large-ish company in Portland that has been working with ThoughtWorks for a while. One of the attendees said to me, before I started, “Oh, someone said Martin was doing a presentation and I was expecting Martin Fowler.”
Ouch. Talk about living up to high expectations. I’ve been a big fan of Fowler’s work (and of ThoughtWorks in general) for a while now. There are actual new ideas in his writing, while I just organize, package, and give context to existing ideas.
While daunted, I think I managed to put on an OK presentation.
The other day I was discussing a process/dev workflow problem with one of my friends. I managed to get a basic understanding of what the problem was (team disagreement about the importance and sequence of sprint reviews, retrospectives, and planning) but we were both too pressed for time to brainstorm for a solution. He had a meeting to attend, and I had a demo to give.
The only advice I gave was, “The solution here is not process dogma. You can’t just fall back on the Scrum rulebook and say ‘we’re supposed to do it this way’, you have to get to the value of why you should do the previous review/retrospective before the next planning.”
The value, of course, is that you should be taking what you learned from the last sprint and using it to inform your actions in the next sprint. It’s Scrum’s larger-scale feedback loop (the small feedback loop is the daily meeting). Inside or outside of Scrum, feedback loops are important for making software well.
When I was thinking about it later, I realized that process dogma is never the solution. Software development is intellectual work, and to make a persuasive case with skeptical people, you have to do better than “because the book says so.”
Winston Churchill had a much faster and sharper wit than I have. Consider this* famous exchange:
Lady Astor: “If you were my husband, I would put poison in your coffee.”
Churchill: “If I were your husband, I would drink it.”
I, on the other hand, always think of the correct thing to say in an argument a few days later. It’s not just for ex-post-facto arguments, either. I’ve been doing some technical training sessions lately, and I try to have a more conversational style than just reading lots of bullet points from slides. I figure that if I don’t understand a topic well enough to speak about it extemporaneously, then I have no business talking about it.
In the last presentation I gave, I talked about the practice of refactoring and the concept of code smells. I showed examples of a few high-value smells and then offered this little wishy-washy disclaimer:
“When you encounter these code smells, it doesn’t mean that you have to change it, it just means that you should look at your code closely here, as you may have problems.”
It’s not so bad; it’s pretty much what everyone says about refactoring. What I should have said was this:
Code smells correlate with quality problems. Heavily commented code blocks aren’t necessarily bad, but lots of comments correlate very strongly with readability problems. You don’t fix that by deleting the comments; you fix it by making your code readable enough to stand without the “how does it work” comments.
Long methods aren’t necessarily bad, but long methods correlate very strongly with cohesion problems. It’s possible, and sometimes required, to have a long method that’s perfectly cohesive, but it’s outside the norm.
And, of course, you don’t treat the correlation itself; you fix the actual underlying problem. Breaking a long ProcessThings() method into three arbitrary methods called StepOne(), StepTwo(), and StepThree() doesn’t actually make the code any better.
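To illustrate the point, a quick sketch; the order-processing details are invented:

```csharp
using System.Collections.Generic;
using System.Linq;

public class OrderProcessor
{
    // Arbitrary split: the long method's tangle survives, spread across
    // three opaque names that only say when the code runs, not why.
    public void ProcessThings()
    {
        StepOne();
        StepTwo();
        StepThree();
    }

    private void StepOne() { /* first third of the old method */ }
    private void StepTwo() { /* second third */ }
    private void StepThree() { /* last third */ }

    // Cohesive extraction: each method has one job and a name that
    // describes what it does.
    public void ProcessOrders()
    {
        var pending = LoadPendingOrders();
        var shippable = pending.Where(IsShippable).ToList();
        ChargeAndShip(shippable);
    }

    private List<Order> LoadPendingOrders() { return new List<Order>(); }
    private bool IsShippable(Order order) { return true; }
    private void ChargeAndShip(List<Order> orders) { /* bill, then ship */ }
}

public class Order { }
```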
You see, that’s not wishy-washy at all, and it appeals to the distinction between correlation and causation. It’s not as funny as poisoned coffee, but it has some concreteness to it.
*After looking up the Astor/Churchill exchange, I found that it’s very possibly an apocryphal story. Oh well, it’s still funny.
In my post about guilt as a code smell, a comrade pointed out that it’s perfectly possible to test abstract classes in isolation: you just make a concrete derived class as part of the test (thanks, Craig!). Having a distinct concrete subtype for testing is something I’m already doing a lot with endo-testing, so it’s not even totally without precedent.
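For anyone who hasn’t seen the technique, here’s a minimal sketch; I’m assuming NUnit, and the report-generator domain is invented:

```csharp
using NUnit.Framework;

// The abstract class under test: a public template method that
// delegates "downwards" to an abstract step.
public abstract class ReportGenerator
{
    public string Generate(string title)
    {
        return "== " + title + " ==\n" + RenderBody();
    }

    protected abstract string RenderBody();
}

// A concrete subclass that exists only so the test can instantiate
// the abstract class's behavior.
public class StubReportGenerator : ReportGenerator
{
    protected override string RenderBody()
    {
        return "body";
    }
}

[TestFixture]
public class ReportGeneratorTests
{
    [Test]
    public void Template_method_wraps_the_subclass_body()
    {
        var generator = new StubReportGenerator();
        Assert.AreEqual("== Sales ==\nbody", generator.Generate("Sales"));
    }
}
```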
It does still bother me, though, and I’m not 100% sure why. Some thoughts:
Let’s say your abstract class is using the Template Method pattern, where a public method just delegates “downwards” to abstract methods with different overridden implementations. This is a perfectly good use of an abstract class, yet it seems kind of pointless to test in isolation, as you’re going to be testing each concrete implementation anyway, and those tests exercise the base class.
The scenario I had in the other post involved a different kind of abstract class, with no public methods, just protected ones. I was just eliminating redundancy by moving shared logic upwards in the tree. Testing here becomes trickier, as there’s no publicly exposed design surface. Should I just make one up? The right solution, for me, at that time, was not to test the abstract class, but to move that logic into a distinct service class so I could work with it more directly. It’s textbook “favor composition over inheritance” design patterns stuff.
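A minimal sketch of that move, with the message-handling names invented:

```csharp
// Before: shared logic hidden behind a protected method, reachable
// only through subclasses.
public abstract class MessageHandlerBase
{
    protected string NormalizePayload(string payload)
    {
        return payload.Trim().ToUpperInvariant();
    }
}

// After: the logic lives in a service class with a public surface,
// and handlers take it as a collaborator.
public class PayloadNormalizer
{
    public string Normalize(string payload)
    {
        return payload.Trim().ToUpperInvariant();
    }
}

public class SmsHandler
{
    private readonly PayloadNormalizer _normalizer;

    public SmsHandler(PayloadNormalizer normalizer)
    {
        _normalizer = normalizer;
    }

    public string Handle(string payload)
    {
        return _normalizer.Normalize(payload);
    }
}

// The logic is now directly testable without inventing a subclass:
//   new PayloadNormalizer().Normalize("  o'brien ")  // "O'BRIEN"
```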
In any case, I stand corrected. It’s absolutely possible to test an abstract class in isolation. It is impossible, however, to test a concrete subclass in isolation without also testing its abstract base class, which could make testing the abstract class in isolation kind of pointless.