Three Experiments in Doing Less

By Dan Ward


Early one morning I was minding my own business in the shower (that font of all true insight) when a brilliant, exciting, rock-your-socks-off idea popped into my head. Unlike Archimedes, I resisted the urge to shout eureka and run down the middle of the street wearing less than a toga. Instead, I toweled off and made myself presentable before venturing out into the world to test this aquatically inspired concept, which can be summed up in seven words:

What if we do a little less?

Impressive, no? Ok, hang with me…

Like Doctor Who’s TARDIS, this unassuming little question is bigger on the inside than the outside. If we use it right, it can transport us to astonishing places and lead to incredible discoveries. This paper tells three brief stories where this question had a starring role. They all happened to me, and – minor spoiler alert – at least two have happy endings.

Before we get to the stories (patience, grasshopper!), let me point out this question is a question.

  • That is, it is not a prescription.
  • It is not a recommendation to do less.
  • It is not a call to simplify, minimize, or scale down.

It is a question (this bears repeating), and as such is open to a variety of answers, including “bad things would happen if we do less, so don’t do less.” Just something to keep in mind as we proceed. Another thing to keep in mind is that anyone can ask this question, from the most senior executive to the most junior intern. You don’t even have to know the answer in advance. In fact, it’s probably best if you don’t.


As the new program manager in charge of building an airborne radar system, I observed a phenomenon you may have encountered in your own work: my team was spending a lot of time in meetings. This was hardly the first time I had seen intelligent lifeforms confined to conference rooms for unconscionable durations, ostensibly for business purposes but without fulfilling any actual purpose.

Something about the situation smelled fishy, so I started to poke around a bit. Fortunately, I watch a lot of Sherlock Holmes television shows and my observation skills are nearly on par with John Watson himself. My thorough investigation led to an intriguing deduction: time spent in meetings is not always well spent.

I’m sure that shocks you exactly as much as it shocked me.

This is where my shower-inspired question comes into play. Rephrasing it slightly, I asked:

What if we spent less time in meetings?

My team and I immediately launched a little experiment to see if we could come up with a scientific answer. Our experimental condition involved (brace yourself) cutting the duration and frequency of our meetings in half. That turned out to be easier than I anticipated; the whole team was enthusiastic about giving it a try.

Now, this change would be pointless if we didn’t also analyze the results, so we set about taking some measurements. Our data collection method involved paying attention to things like morale, communication quality, and work quality. Because the team was small and co-located, we took an organic approach and discussed our observations directly with each other. In a larger, more geographically dispersed unit, we might have done a brief survey or two.

What did the data show?

  • Morale went up. A lot.
  • Communication improved. A lot.
  • People came to meetings on time. Amazing!
  • People were more engaged and more prepared. Nice!
  • People had more time to do the work. Wonderful!
  • The work got better. No kidding.
  • There was no downside. Science!

Of course, things could have easily gone the other way. Holding shorter, less frequent meetings could have worsened communication. People might have felt left out, rushed, or ignored. The quality of our work could have dropped. Doing less could have made things worse.

If the data had revealed a drop in performance, we could have changed the experimental conditions quickly and easily – extending the length of some meetings or adding another meeting, then continuing to monitor the results. The potential harm from this little experiment was constrained and temporary, which is something to keep in mind as you craft your own experiments.

In summary:

  • Our hypothesis was that less time in meetings would make things better.
  • Our experiment involved scaling back our meetings.
  • Our data collection involved paying attention to morale, communication, and work quality.
  • Our conclusion was that things got better.
  • Yay for science.

You can perform the exact same experiment on your project if your team seems to be spending too much time in meetings. Or you can use the same method to test a different hypothesis, which brings us to the next story.


A junior engineer asked me to review the latest draft of his magnum opus, an epic 125-page piece of fantasy literature known as a Test & Evaluation Master Plan. As I lit my pipe and settled in for a nice long read, I noticed something strange. This Test and Evaluation Master Plan didn’t say a thing about testing, evaluating, or master planning until page 26.

Um, someone is missing the point.

The entire document was wordy-wordy-wordy to the point of distraction-distraction-distraction. The first quarter consisted entirely of preamble and boilerplate, explaining the project’s entire history and pre-history and… well, you get the picture. I began to wonder if perhaps some of those pages might be oh-so-slightly unnecessary. Without even pausing to take another shower, I cleverly asked:

What if we made the document shorter?

Ironically, this experiment in “doing a little less” involved doing a little more work. We grabbed a red pen and edited the document down to the essentials, removing the preamble, deleting the boilerplate, and crossing out the redundant and irrelevant parts. Then we crossed out more redundant parts. And more. And more.

Disregarding the voluminous details requested by the over-engineered template, we focused on satisfying the document’s actual purpose:
describe our test and evaluation plans. Anything that did not serve this purpose had to go. What remained was a tightly focused 20-pager that represented our test plans accurately and completely.

I loved this new version, if love is not too strong a word to use when discussing an engineering document. However, my deep and abiding
affection for this beautiful work of art was entirely irrelevant because I was not the approval authority or signatory for the plan. A Higher
Authority would have the final word on whether this shorter version was any good.

I suggested my young friend present both the 125-page version and the 20-page version to The Approver and ask him to pick which one he would like to use. That would constitute an important data point, although the ultimate proof as to the quality of the work would come when the plan was put into practice.

Sadly, we never got to the ultimate proof. Our experiment was terminated early because… The Approver picked the longer version.
This was not the result either of us wanted, but it is an important part of this story. We were trying to figure out what would happen if we made the document shorter. The data showed an unexpected result: the shorter document was rejected.

Yay for science. I guess.

We knew this outcome was possible, but it was disappointing nevertheless. We were happy with the results of doing less, but someone
else had a different opinion. They insisted on doing more. Want to avoid this mismatch? Do the experiment with the stakeholders
and decision-makers, not to them. Invite external decision makers to participate in the experiment from the start. Rather than surprising anyone with a trimmed-down alternative, explain your plan in advance. That increases the likelihood that your story will have a happier ending than this one.


It was January when I assumed the program manager position on the airborne radar project from Story #1. We’d scheduled our first flight for May, a date that made the team nervous. Instead of an assembled radar pod with working software, we had a stack of parts and a rough idea of how everything fit together. There was an awful lot to do in a relatively short time. Possibly even too much to do.

The most obvious solution to the “lots to do, not enough time” problem involves a time machine (see Hermione’s Time-Turner in Prisoner of Azkaban). Unfortunately, neither Dumbledore nor J.K. Rowling was returning my owls.

Since magic was clearly no help, I pulled my deerstalker cap on tighter, grabbed a magnifying glass, and the game was once again afoot. A closer look at our test plan showed we intended to hit ten distinct test points on the first flight. That sounded like an opportunity to me, so I whipped out the question once again and asked:

“What if we only hit two test points on the first flight?”

A quick back-of-the-envelope analysis revealed an unexpected answer: scaling back our first-flight plans meant we could easily fly in April, a whole month ahead of schedule.

The early flight would focus on the most urgent, time-critical aspects of the design (airworthiness, etc.), and we would save the long-lead items for later. A more detailed assessment confirmed our first look, so we made the change.

Flying early allowed us to learn faster. We validated our technology and several key assumptions sooner than planned, and applied those learnings to our subsequent decisions. Our schedule simultaneously got shorter and less risky. Not bad.

Putting hardware and humans into the sky meant the stakes were higher than when we were just scheduling meetings and writing documents. These test flights were focused on achieving our program’s operational objectives and validating our technical capabilities.
Precisely because the stakes were so much higher, it was even more important to do it right. In this case, “do it right” meant getting airborne as soon as possible, learning quickly, and addressing the most critical test points first. It meant building firewalls between our tests and putting our eggs in multiple baskets, rather than trying to do everything at once.

Incidentally, the idea of simplifying our flight test agenda did not come entirely out of thin air. The specific inspiration came from a book titled Campaigns of Experimentation by Alberts and Hayes, in which they wrote:

“[Doing] too much in a single experiment greatly increases the probability that the experiment will… be of little value.”

Another source of inspiration is Eric Ries’s book The Lean Startup, which introduces a three-step feedback loop called Build – Measure – Learn. Just like John Boyd’s famous OODA loop, the sooner we get started and the less time we spend going through the loop, the better.

We avoided delays and were able to quickly make some key decisions that were originally scheduled for later. The savings allowed us to fly twice as many flights as originally planned, adding new test scenarios and pushing the envelope. We had $7M left over when the program ended. Doing a little less allowed us to do a lot more for a lot less.


The great aeronautical engineer William Bushnell Stout famously advised his colleagues to “Simplicate and add lightness.” One way to apply his advice in fields beyond aeronautics is to regularly ask and answer the question “What if we do a little less?”

Ask the question as specifically as you can and as often as you can, in as many places and ways as you can. Run little experiments to collect data, and learn as much as you can as quickly as you can.

Not sure where to start? Start anywhere. Ask the question about almost anything. It does not have to be a big deal or the program’s highest risk item. It can be as simple as your team’s meeting schedule or a single planning document.

As you establish the habit of asking and answering the question, it’ll get easier to apply. Encourage your team members to develop the same habit. Involve other stakeholders and partners (remember: with them, not to them).

You just might find that doing less is the secret to doing more.

