When it comes to software project management, Agile has won. Oh, sure … most projects are still doing things in a more traditional way, using a waterfall process (or, in many cases, no recognizable process at all). But the best teams are using agile processes, and the thought leaders in software methodologies are focusing their work on agile processes.
But Agile is much more than just a way of running software projects; at its core, it’s a philosophy of how to approach any endeavor. Just take a look at the Agile Manifesto:
Manifesto for Agile Software Development
We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.
Software is mentioned, but most of the manifesto is applicable to any business endeavor.
Many teams have successfully used agile principles to improve the way they run projects. But can we use those principles to do more? Can we use agile principles to improve how we plan for projects? How we identify and choose projects? How we recruit and hire? How we run our entire department, or business, or enterprise?
Most discussion of agile has been focused on individual projects, or even smaller scales. For many teams, iterations or sprints represent the primary focus; a project is organized as a series of iterations, and little attention is given to managing the project as a whole.
One of the strengths of the agile mindset is that you can get good results that way, and a focus on iterations is not a bad thing! It’s always easier for people to focus on short-term goals, and if doing so yields good results, everybody wins.
But we can do better if we also use agile principles to improve our practices at the scale of the entire project, and even larger. Using agile at more strategic levels can help in two ways.
First, it can help us to avoid project failures. Even projects that are managed according to agile principles can fail, and when that happens, it’s often due to factors that lie beyond the short horizons that we focus on during iterations. To cite just one example, the project that gave birth to Extreme Programming, Chrysler’s C3 project, eventually failed. Kent Beck, the project’s longtime leader and the primary inventor of Extreme Programming, explained that over the course of the project, two crucial customer responsibilities diverged: responsibility for specifying project requirements was given to one customer representative, whereas responsibility for evaluating the project’s performance was given to another. The team failed to notice this risky situation, and ultimately the project was deemed to be a failure, even though the team had been successfully doing all that was asked of it. To avoid such situations, an agile project must periodically lift its gaze toward longer horizons and more distant connections, staying aware of the larger context in which the project exists.
There’s a second way that strategic application of agile principles can help. Our goal as software developers should not be merely to have gigs today; we want to have careers. Each project can be a success or failure by itself, but we should also measure it by the position it leaves us in—whether we are well placed to land the next project, and whether we’ve learned things that will make us even better at what we do.
In much of his writing about agile, Alistair Cockburn compares software development to a “cooperative game”—that is, a game where players collaborate to see how much they can achieve together, rather than competing to see who “wins”.
One particularly compelling example of a cooperative game is rock climbing. A big part of rock climbing is “setting up for the next win”. Experienced climbers plan sequences of moves so that, when they’ve traversed one section of the climb, they’re properly “set up” for the next section. That can involve the particular positions of hands and feet, so that the body is properly situated to reach for the next hand- or footholds. It can also involve ending up at the right place on the cliff face, so that there’s a feasible path onward.
Additionally, rock climbers follow planned progressions in their climbs, choosing new climbs that will expose them to new challenges and help them learn new skills. This is how climbers prepare for more challenging climbs to come.
So in several ways, part of the game of rock climbing is keeping the game going. That applies to our goal in software development. We want to succeed in the narrow senses of fulfilling all the requirements, keeping costs and timelines in check, and having delighted users. But we’d also like to succeed in much bigger ways:
Without sacrificing any of the short-term excellence our customers are paying for, we want to use each project to set up for the next win—by providing us the opportunity to play again, and by improving the way we do things so that we can take on bigger challenges and do an even better job.
That’s why we want to take agile from tactics to strategy.
Before we get into the details, though, it’s important to understand the fundamentals of Agile. Many people tend to think of Agile in terms of a particular process—XP, say, or perhaps Scrum, or some hybrid, in-house agile process. But those processes are just manifestations of agile principles in the context of software project management. At a deeper level, what’s Agile all about?
It’s well known that feedback is a crucial part of agile processes. Under scrutiny, agile processes begin to look like feedback engines. Our friend Venkat Subramaniam has referred to agile development as “feedback-driven development”.
You can see the emphasis on feedback in the opening statement of the manifesto:
We are uncovering better ways of developing software by doing it and helping others do it.
Instead of sitting back and thinking about how to do software development better, the Agile Manifesto emphasizes getting to work and paying attention to the results as a way of improving.
But not just any feedback will do. There are many kinds of feedback mechanisms in software processes, and some of them are far from agile. In some cases the feedback comes too late to help; in others, the cost of gathering the feedback overwhelms the process. (And all too often, both of those things are true.) How do we choose the right feedback mechanisms?
The next line of the manifesto gives us a clue:
Through this work we have come to value …
Value: we should focus on what’s valuable, and what provides greater value. Value decisions can be about ethics and morals, but in business contexts they’re also about what gives the most bang for the buck. We should assign greater value to feedback mechanisms that provide more for less cost.
Feedback in agile processes should be both timely and economical. The cost of the feedback mechanism should be matched to the size of the decisions being evaluated. What that means is that, for small-scale, tactical decisions, we need very cheap feedback mechanisms, because we’ll be engaging those mechanisms frequently. And for larger-scale strategic decisions, it’s reasonable for the feedback mechanisms to be a little more costly, because we’ll be looking at larger timeframes and reaping similar benefit from the lessons we learn.
As mentioned above, many teams now embrace agile at the tactical level. And that’s definitely the right place to begin. Starting with the idea of iterative development, it’s very common for teams to embrace practices like:
Those practices make iterations run well, and help the team keep up a steady rhythm of productivity. But it’s common to see teams that don’t really take a similarly agile approach to managing the project as a whole. Practices that contribute to project agility include:
Story point estimation and risk analysis can work together to contain risk on the project. Many teams don’t like to estimate stories until the iteration when they’re to be worked on. And there are some good arguments for that approach; often the team doesn’t know enough about some stories to give accurate estimates several iterations in advance. However, advance story point estimation helps to identify high-risk features or areas of uncertainty. And risk analysis builds on that, looking explicitly at project risks, helping the team to identify them early and plan for them. Risky areas of the project, by their very nature, threaten to take longer or require more effort than might be expected, so effective risk management involves addressing risks as early as possible in the project, leaving time to react and adjust.
At Relevance, we usually do early estimation of every story, and track that number as the “original estimate”. Later, when we are ready to schedule the story for the current iteration, we estimate the task again; that number is called the “planning estimate”. By that time we know more about the task itself, as well as the current state of the system, so the planning estimate is usually much more accurate. Some projects spend more time than others on original estimates, but we do track both numbers so that we can improve our estimating skills.
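The two-estimate idea can be sketched in a few lines of Ruby. This is purely illustrative (the `Story` struct and `estimate_drift` helper are made-up names, not part of any tool we use); the point is that once you record both numbers, you can measure how your early estimates drift from your later, better-informed ones.

```ruby
# Hypothetical sketch: track both estimates per story.
Story = Struct.new(:title, :original_estimate, :planning_estimate)

# Average ratio of planning to original estimates across stories.
# Values well above 1.0 suggest our early estimates run optimistic.
def estimate_drift(stories)
  ratios = stories.map { |s| s.planning_estimate.to_f / s.original_estimate }
  ratios.sum / ratios.size
end

stories = [
  Story.new("login form", 2, 3),
  Story.new("CSV export", 3, 3),
  Story.new("search",     5, 8),
]

estimate_drift(stories).round(2)  # => 1.37
```

A drift consistently above 1.0 is exactly the kind of feedback that improves the next round of original estimates.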
Story point estimation is a great example of how practices in agile processes support and reinforce one another, and how small changes in a practice can increase its value. Breaking features up into small tasks, or “stories”, is a key part of many agile processes. However, it’s sometimes difficult to know whether stories are sufficiently small to be well understood, or whether a story might be hiding poorly understood complexity.
Rough estimates in terms of “points” can help. A story that the team agrees is just one or two points is clearly small and simple, and almost certainly well understood; it might correspond to anywhere from one to four hours of work.1 But what about a story that seems like four points? Five? Ten?
There are problems lurking there. A four-point story will probably take, at minimum, half a day, and more likely an entire day or more. Day-long tasks are fairly big; a lot of things need to be done, and any one of them could turn out to be more difficult than expected.
Additionally, there is error in every estimate, and granularity matters. The difference between a one- and a two-point task is 100%, but the difference between a four- and a five-point task is just 25%. And when you get up to nine and ten points, the difference is 10% or so. Can you really estimate a two- or three-day task with that degree of precision?
To deal with these issues, many teams force larger estimates to be less precise. Some teams only allow using powers of two as estimate values (1, 2, 4, 8, etc.). At Relevance, we’ve settled on using Fibonacci numbers (1, 2, 3, 5, 8, 13, and so on). Either series will work, and both have important advantages. For one thing, there are few time-consuming arguments about one point here or there on large stories. Additionally, this system makes it difficult to hide risk. Larger stories tend to blow up to much larger point values (for example, because nobody’s quite comfortable calling it a five, and the next alternative is eight). That helps highlight the risk, and is an acknowledgement of the uncertainty inherent in large stories. Ideally, any story over five points will be split into multiples. Doing that might involve some risk analysis, or a conversation with customers to clarify the requirements, or perhaps a quick spike to explore the uncertain technical aspects of a story. That entire process is unusually effective risk management.
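The Fibonacci scale is simple enough to capture in a sketch. The helper names here (`snap_estimate`, `needs_split?`) are illustrative, but the behavior matches the practice described above: raw gut-feel estimates get forced up to the next allowed value, and anything over five points is flagged for splitting.

```ruby
# Allowed estimate values: the Fibonacci scale described above.
ESTIMATE_SCALE = [1, 2, 3, 5, 8, 13, 21]

# Round a raw estimate *up* to the next allowed value, so the
# uncertainty in larger stories shows up as a larger number.
def snap_estimate(raw)
  ESTIMATE_SCALE.find { |v| v >= raw } || ESTIMATE_SCALE.last
end

# Any story over five points should be split into smaller stories.
def needs_split?(points)
  points > 5
end

snap_estimate(4)               # => 5
snap_estimate(6)               # => 8
needs_split?(snap_estimate(6)) # => true
```

Notice how a "six-ish" story becomes an eight: the scale itself refuses to let a big story look precise.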
Story estimation is also essential for burndown tracking. Burndown charts are valuable tools for project managers, customers, and developers alike, but far too few agile projects employ them or other analytical tools to help gauge the progress of the entire project over the course of several iterations.
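The arithmetic behind a burndown chart is trivial, which makes it all the more surprising how few projects bother with it. A minimal sketch (the `burndown` helper is an illustrative name): given the committed total and the points completed each iteration, compute the remaining work after each iteration, ready to plot.

```ruby
# Remaining story points after each iteration, given the total
# committed points and points completed per iteration.
def burndown(total_points, completed_per_iteration)
  remaining = total_points
  completed_per_iteration.map { |done| remaining -= done }
end

burndown(40, [8, 10, 7, 9])  # => [32, 22, 15, 6]
```

Plot those numbers against iterations and the whole team can see, at a glance, whether the project is converging on done.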
The real focus of this article, though, is applying agile principles beyond even the project scale. We can use agile principles to build better teams and software businesses:
Here are the things we’ve been doing at Relevance. We’re still learning how to do them well—that’s a big part of what being agile is all about—but we’re convinced that this is what agile principles look like when applied strategically.
Pair programming has taught us something unintuitive: two people can be more productive working on a single task, guiding and checking each other as they work, than they can be working separately on two tasks. There are many reasons why pairing is good for productivity, but one crucial realization is that pairing helps eliminate mistakes that ordinarily wouldn’t be detected until later, when detecting and correcting them is more costly. Pair programming is a low-cost, immediate feedback mechanism.
It’s an interesting exercise to compare pair programming with “programming by committee”. If pair programming is good, wouldn’t group programming be better?
My colleague Stu Halloway frequently gives conference talks that include live coding, and makes jokes about taking pair programming to the next level, to “giant-room-full-of-people programming”. That turns out to work fairly well in Stu’s talks, but that contest is rigged: Stu carefully plans and rehearses those demos, guides the discussion with an experienced hand, and relies on the audience only to help him out of rare slipups.
True “group programming” doesn’t really work well at all. Just like any committee endeavor, more often than not the group gets bogged down in petty disagreements about inconsequential decisions, or becomes mired in “analysis paralysis”. In programming, one person making decisions is good, and two together are better—and beyond that things quickly fall apart.2
That lesson can apply equally well at the strategic level. I don’t mean that two people should sit side-by-side, hour after hour, day by day, thinking “strategic thoughts.” As discussed above, we should strive for economy by matching our practices to the scale of our decisions and feedback loops. For strategic practices and decisions, we should “pair” by seeking someone to plan with, bounce ideas off of, and muse about lessons learned.3 At Relevance, for example, we have “PM buddies”—each project manager has someone they consult with periodically to hold them accountable, help them stay focused on what’s important, suggest ideas for dealing with problems, and try to make sense of confusing patterns. PM buddy conversations happen weekly (more or less), and are incredibly valuable.
Part of any successful agile project, we believe, is choosing the right technologies for the project. We do believe agile principles can still help you be successful no matter what tech mix you’re working with. Nevertheless, quality technology that helps you to both move quickly and respond to change really amplifies your success.
That’s doubly true when taking a strategic view. In the past, many consultants and consulting firms have tried the opposite approach: they’ll look for a technology that is complicated and difficult to work with and teach, become expert in that technology, and then help to promote that technology, helping to create a market for their specialized knowledge. That strategy has worked well for some, but we think customers are wising up to that approach. Furthermore, it’s not a recipe for true success. We want customers that continue to work with us, and tell their friends about us, because they enjoy the working relationship we have and because we build software that rocks—not because they’re kind of stuck with us because their employees don’t understand (or want to understand) what we do.
We look for tools and technologies that are as agile as we want to be. That’s why we love low-ceremony languages like Ruby, Clojure, Smalltalk, and Haskell. That’s also why we like platforms like Rails and Seaside, and simple, agile project management tools like Mingle, Pivotal Tracker, and Lighthouse.
Some people prefer voice, some email, some IM; others strongly prefer face-to-face communication. Spend time figuring out what your customer’s style is and adapt to it! It’s worth spending time trying out new ways of communicating so that you can choose the right communication channel for different types of information.
Adding steps or other channels to the communication pipeline is a project risk, but in the real world it happens. We have learned many surprising things from asking project owners to walk through how they present the project to their stakeholders. One customer had an absentee stakeholder who never logged into Mingle, but received a printed (on paper!) summary of things going on in Mingle from the accessible stakeholder. This is not an isolated issue. Taking customer preferences into account, we place a much higher value on printing Mingle cards than we might for ourselves.
We also had a customer who used Mingle card transitions as the primary indicator of progress, while the team was pushing updates via email and updating Mingle infrequently. What happened there? The team had adapted to one customer (who preferred direct communication) and carried that same style of communication over to a subsequent project.
Now we understand that different customers have different preferences, and we try to have multiple options. We also spend time at the start of a new project figuring out what channels will work best for the particular mix of participants. Communication channels are important enough to deserve explicit attention.
Another traditional pattern in software development is to specialize, specialize, specialize. “Nobody hires generalists,” we’ve heard. And there’s some truth in that. But what we’ve noticed is that, while perhaps few teams go out in search of a generalist for a new position, they all want badly to keep any generalists they have on staff.
So specialize, yes, but look for opportunities to broaden your base of skills. Is your project tripping over a problem that’s outside your skill set? Spend at least a little time trying to solve it yourself, rather than waiting for someone else to save the day, or going in search of another expensive specialist. Over time you can build a broad base of general skills that will be applicable across many projects.
There’s a balance to be struck here: if there’s an expert on the team who can solve that specialized problem in a few minutes, your two days trying to figure it out yourself is just a waste of the project’s time. But you can ask to pair on the task with the expert, so that maybe you can handle simple things yourself in the future. (And as a team, you should foster a culture where experts want to share their expertise, not hoard it.)
One recent project at Relevance presented unusual challenges in the area of database query performance. We’ve all got experience with query optimization, but the sheer amount of data and the complexity of the queries in this case went beyond what we knew. We found ourselves balancing conflicting goals:
We ended up using a mix of billable hours and slack time to research and explore some solutions, and then consulted an outside expert to evaluate and critique the solution we’d identified. Since we were broadening our skills, it made sense to spend a few non-billable hours on the task in addition to the hours the customer paid for. Bringing in the outside expert to critique a proposed solution was much more effective than handing over the entire problem. Everyone was happy with the way the process worked.
Additionally, carefully weigh the future value of new skills. If something strikes you as an unusual problem that you may never encounter again in a software development career, then by all means seek out an expert to solve the problem in a hurry so you can get back to what you do best. But in most cases where we’ve heard “That’s not my job” or “I don’t do that stuff”, the problems are the kind that recur on project after project.
One of the reasons we love Rails is that it assumes that developers will be comfortable with all kinds of different tasks: design, HTML, CSS, programming, database design, SQL, query tuning and optimization, testing, automation, and deployment. Hyper-specialization may in some cases be good for an individual’s career, but more sensible specialization—a few core areas of expertise built on a broad foundation of general development skills—is even better, for the individual and for the team. Seek to build a team of generalizing specialists.
The “spike” (or “spike solution”) is a term used in agile processes for a very quick, exploratory experiment or prototype that is used to rapidly, cheaply determine whether some planned design or implementation strategy will work. Sometimes a spike is used to compare two or more possible strategies. Ordinarily a spike should take an hour or two, or at most half a day.
But the real point of a spike is not a particular time box; instead, the point is that the spike should take considerably less time than the full-fledged solution. The goal of a spike is to either prove or disprove an idea for a very small cost, to avoid spending the larger cost for a full implementation (and the plans that go with it) for an idea that ultimately will not work well.
The same idea can be applied to more strategic, process-oriented solutions. It’s often possible to try out a long-term idea—a new organization of a larger group into teams, for example, or a new project estimation strategy—more quickly, and at a lower cost, than would be required for the full-scale solution. Try the idea out for just one iteration, for example, with minimal overhead, and evaluate the result. Just as with a conventional spike, everyone should be aware of the limitations, and that the result of the spike will not have all of the characteristics of the full implementation. But even the spike can reflect the strengths or weaknesses of the idea, and potentially save a lot of money. (Spikes are most valuable, after all, when they show that an idea will not work.)
Customers aren’t nearly so valuable to your business as good partners. I think that’s obvious enough.
What doesn’t seem to be obvious is that your customers feel the same way about their vendors.
Don’t limit your role on a software project by what you assume your customer wants. Don’t passively accept requirements and implement them as closely as possible. Instead, learn your customer’s business. Think about, and ask questions about, why features are important and how they will be used. Suggest improvements to features, or entirely new features.
Suggest entirely new projects that you can do together—projects that will help their business, and (obviously) will help yours, if the customer pays you to develop them.
Unlikely, you say? You think they won’t take such suggestions from you? Perhaps that’s because you haven’t taken the partnership far enough. Have you convinced them they don’t need a requested feature yet? Have you explained why a simpler, less costly approach will have more benefits? Have you suggested that they might not really need an entire new project they’ve been thinking about? That’s the kind of thing a partner does.
This is a crucial point. As a partner, you have a stake in the benefit side of the cost/benefit analysis, as well as the cost side. A colleague tells this story: “One of our most disappointing projects hit all the features requested, at planned cost, while shipping real production releases every iteration! Imagine our chagrin when, at the end of the project, the customer performed a benefit analysis (using data we had never seen) and concluded that no customer software needed to be written.”
The bit in the Agile Manifesto about customer collaboration isn’t just a cliché you can use to avoid having to make a detailed, long-term plan. Your customers want you to help them not just by building software for them; they want you to help them understand what software can do, how it can help them—and also how it can’t help them, what its limitations are. We programmers aren’t the only ones who sometimes try to solve people problems with technology; our customers do it all the time. Your customers want you to move around, sit on the same side of the table with them, understand their businesses and their problems, and help to solve them. If you do that, they’ll keep coming back, and they’ll tell others.
It may seem like nonsense, but it’s the truth: not all waste is wasteful.
I won’t spend much time on this point, because it’s been done so well in Tom DeMarco’s wonderful book, Slack. But here it is in a nutshell: people and organizations need some slack—rest, relaxation, downtime, and diversion—to be at their most effective and productive. The most obvious reason that we need slack is so that we’ll have time left in our schedules to take advantage of new, unexpected opportunities, or to adapt to sudden changes. However, the importance of slack is even more basic than that. Slack allows us to be creative, to learn and explore new things, to have the physical and mental energy for bursts of sustained productivity.
At Relevance, we provide for slack time in two ways. First, every Friday is “open-source day”. We use Fridays to work on things that matter to us, and usually that means writing (or contributing to) open-source software projects that help us to work better—such as Castronaut, Tarantula, Micronaut, and Vasco. Some of our Friday time is spent exploring new technologies and tools.
Another way we ensure that we have slack time is by carefully avoiding “death march” schedules. We don’t “sprint” through iterations; we work at a sustainable pace that allows us to stay highly effective throughout the project.
Certain kinds of inefficiencies are deadly, and should be eliminated ruthlessly: busy work, things a well-programmed tool could do for us, things that require our energy and concentration without providing sufficient return. But other inefficiencies, such as having fun at work, pursuing our passions, learning broadly and exploring the nooks and crannies surrounding our work, constitute the slack that gives us room—and the spark—to move.
Lift your gaze beyond your current project, your current technology, and even beyond your current business model (and, for that matter, the business models of your customers and competitors). Look for trends that point to broad-based changes. Software development has changed a lot over the years, and it will change again.
Don’t, however, be naive about trends. Sometimes they represent once-and-for-all sea changes, but more often they’re just parts of larger cycles. For example, we’ve seen a gradual but almost universal turn away from desktop applications toward web apps, and it would be easy to believe that, with connectivity becoming more nearly universal, traditional desktop applications will all but disappear. Take a step back, however, and this looks like part of a cycle, or possibly a step up to a new equilibrium. Architecturally, web apps aren’t so different from how systems were built back in the days of time sharing, and all of a sudden there’s tremendous buzz around building rich apps for phones.
Furthermore, trends can be derailed by unexpected events. For that matter, not all trends are good trends; sometimes the industry might be making a wrong turn, and the best strategy is to find a way to ride it out, solidifying your experience with superior techniques while the industry tries something and eventually finds it wanting. (EJBs come to mind.)
Lift your gaze, spot the trends … and evaluate them carefully. You might lead your customers on a new and profitable road, or perhaps be their guide through some rocky territory. Or you might need to advise them to stay away.
Learning from mistakes doesn’t always just happen; sometimes you have to plan for it. People are so good at noticing things and learning from them at very small time scales that, during activities like pair programming, a lot of the learning happens without our noticing it. Over periods of days, weeks, or months, however, we seem to be equally good at forgetting mistakes, successes, and what we should be learning from them.
That’s why it’s important to have periodic retrospectives—run by someone who understands how to do it well. At Relevance, we do project retrospectives regularly throughout a project’s lifecycle. More important, perhaps, for strategic thinking as a company, we do company-wide retrospectives every two weeks, and more in-depth retrospectives twice a year. That might seem like a lot, and some of us thought it was overkill when the idea was first proposed—but we’ve all come to find these retrospectives immensely valuable.
A common practice on agile projects is to build just what you need for the requirements you’re implementing today; don’t build infrastructure or glue to support future requirements, because those requirements might change or even disappear altogether. “You Ain’t Gonna Need It” is the mantra.
But experienced developers, even while embracing that practice in general, have always been aware that there are occasional exceptions. When you’re building something very similar to what you’ve built several times before, your experience can tell you that there’s a very high likelihood that you will need something extra, and it will be cheaper to build it now. A few years ago I wrote about one example that I still strongly believe in: if your system needs a configuration language, it’s nearly always a good idea to go ahead and use a real, full-fledged programming language for that task right from the start. On the other hand, only a few systems need that kind of external configurability at all. YAGNI should still be the default position.
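Here is a minimal sketch of what "a real programming language as the configuration language" can look like in Ruby. The `Config` class and its `set` method are hypothetical names invented for this example, not a real library; the point is that because the config block is ordinary Ruby, it can compute, branch, and reference earlier settings with no special mini-language required.

```ruby
# Hypothetical: configuration expressed as a plain Ruby block.
class Config
  attr_reader :settings

  def initialize(&block)
    @settings = {}
    instance_eval(&block)  # run the config block in this instance
  end

  def set(key, value)
    @settings[key] = value
  end
end

config = Config.new do
  set :environment, "staging"
  set :workers, 2 * 4  # arithmetic for free
  # Settings can depend on earlier settings:
  set :log_level, settings[:environment] == "production" ? :warn : :debug
end

config.settings[:workers]    # => 8
config.settings[:log_level]  # => :debug
```

An INI file or custom config parser would need new syntax for each of those capabilities; the full language gives them all away for nothing.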
At the process level, projects tend to be much more alike than they are down at the level of code and tests. Our experience as project managers and participants tells us that some of the problems we encounter on current projects are quite likely to occur on future projects.
As a result, when we build things to help us solve our project’s problems—whether technical solutions or process elements—we consider taking the time to build a little more generality or communicate it to the larger team in such a way that it can be an ongoing, repeating part of our process. We tell the other project teams at the next retrospective, for example, or check the automation tool into a company-wide tools repository rather than the project-specific repo.
At the same time, though, we want to ensure that such things don’t become onions in the varnish. We take to heart Alistair Cockburn’s admonishments that all process elements and tools have costs of their own, and that they tend to outlive the problems that they were intended to solve, eventually becoming costly drags on efficiency. So we consider our processes during retrospectives, pruning them ruthlessly of procedures that cost more than they’re worth.
As technologists, most of us want to stay on the cutting edge, using tools and techniques that represent the best we currently know how to do. However, there are some real dangers in riding the wave too aggressively.
The first danger is that we might be bitten by hidden flaws. New things often have compelling advantages that are obvious, paired with serious problems that are much more subtle and might only become clear over time. Jumping too early to a new tool or technique, without sufficient experience, can leave you vulnerable when those flaws decide to bite.
The other danger is that of constant churn: being so busy changing that you never build the deep expertise that lets you move really quickly.
You absolutely should stay abreast of new technologies and techniques, evaluating them for inclusion in your toolset. And you probably should be more aggressive about adopting them than the average software team; stagnation is a bigger problem in our industry than excessive change. But be aware of the cost of change itself. If a new tool or framework is 15% better along some axis than the one you’re currently using, a wholesale switch is a very questionable proposition. There may be weaknesses you’re unaware of that offset the obvious gains. Furthermore, deep expertise counts for a lot more than a 15% benefit, so during the period when you’re learning the new tool, you’ll be moving more slowly.
This is part of what we use slack time for: exploring new ideas and technologies on play projects and internal tasks, trying to stay abreast of new things so that, when something clearly has a really substantial benefit over current practice (as Rails did a few years back), we’ll be ready.
Programmers are always pointing out that things “can’t be measured”—and often we’re right about that. In our field, especially as scales grow beyond tiny, controlled experiments and toy programs, there are way too many variables to know for sure what’s being measured. Additionally, we deal with nebulous things like “features” and “requirements” that don’t even have clear definitions, in many cases.
Tom Gilb is a software consultant who is most famous for stating what has become known as “Gilb’s Law”:
Anything you need to quantify can be measured in some way that is superior to not measuring it at all.
Gilb’s law points out that “that’s not measurable” is often a cop-out. There’s always some way to measure things. The measurement may be inaccurate or questionable in many ways, but it can still give us valuable information that we can’t get if we don’t even try to measure.
Here’s an example: when people have chronic pain, they report their pain level at each medical check-in, using a number from 1 to 10. This is totally subjective, yet it is more meaningful to the patient than any number of objective measures. Additionally, the way that number changes over time is much more important than the specific value on a particular day.
We can do the same thing in software. During retrospectives we ask every participant for a number of subjective, 1-to-10 metrics like that. It turns out that asking about happiness with the pace of the project and getting a chorus of “eight!” responses is a better indicator of project health than a beautifully predictive burndown chart.
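To make this concrete, here’s a minimal sketch (in Python, with invented names and made-up data) of how such subjective ratings might be recorded per retrospective and summarized. The point it illustrates is the one from the pain-scale example: watch the trend across iterations, not the absolute value on any given day.

```python
from statistics import mean

def summarize(ratings_by_iteration):
    """Return (iteration, mean rating) pairs for trend-watching.

    Each entry pairs an iteration label with the 1-to-10 answers
    the team gave to a question like "How happy are you with our pace?"
    """
    return [(label, round(mean(ratings), 1))
            for label, ratings in ratings_by_iteration]

# Hypothetical retrospective data for three iterations.
history = [
    ("iteration-1", [7, 8, 6, 7]),
    ("iteration-2", [6, 6, 5, 7]),
    ("iteration-3", [4, 5, 4, 5]),  # falling trend: a hypothesis to investigate
]

for label, avg in summarize(history):
    print(label, avg)
```

A downward drift like the one above doesn’t prove anything by itself, but it’s exactly the kind of signal worth raising at the next retrospective.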
That said, it’s important to recognize the limitations of such “measurements of the unmeasurable”. A few years ago, I blogged about some of the limitations and hazards hidden in Gilb’s Law. Later, Jason Yip formulated a brilliant corollary of his own:
Anything you need to quantify can be measured in some way that is inferior to not measuring it at all.
Taken by itself, “Yip’s Corollary” is so obvious as to be useless. But combined with Gilb’s Law, it is a vital dose of reality and wisdom. Clearly there must be something that’s better than nothing—but just picking something at random could easily be worse!
I think the lesson is clear: measure the unmeasurable, yes—but be very careful about what metrics you choose, and avoid using the results to draw firm conclusions. Instead, use your measurements to frame hypotheses that you then test through the other mechanisms we’ve discussed.
Finally, of course, none of this will matter unless you really want to succeed and make it work. None of it is difficult to understand. What’s difficult is actually doing it, day to day. Doing things consistently is hard. Reflecting while you’re working—thinking about what works well and what doesn’t, and how you can improve—is hard. Leading and participating in honest, open retrospectives is hard. Leaving slack time in your schedule when there’s money that could be made is hard. Being honest with your customers when doing so means giving up short-term revenue—or even (gasp!) admitting you made a mistake that cost them money—is hard.
You can succeed without doing the hard stuff, but the odds are against you. Similarly, you can fail while doing the hard stuff, but the odds are decidedly in your favor. By following agile principles, you’ll maximize your chance of success as individuals and as a team.
Is that what you want?
This article is a companion to a conference talk of the same name. To see it, try to catch one of Stuart Halloway’s appearances at a No Fluff, Just Stuff show this year. Alternatively, you can contact us to have one of us come speak to your group.
Comments and suggestions are most welcome.
Thanks to Stuart Halloway and Jess Martin for thoughtful comments on an earlier draft of this article.
The way teams think of points as corresponding to real units of time is itself a complex topic. We will discuss that more in our upcoming article about Iteration Zero.
This does not mean that having multiple pairs review a critical code path is a bad idea, though. One of the most interesting lessons of the conference talks is that occasionally somebody will approach the speaker in private after the talk and propose a better solution than anyone in the room came up with!
In fact, in some ways development is the least important thing to pair on! Development has unit tests, and leaves the clearest artifacts for review later. Business analysis, customer interaction, interface design, sales, marketing, and invoicing have nothing as good as unit tests for a single person to use, and leave more ambiguous artifacts for later review.