Disrupting Acquisition Blog

CAEs Must Provide Congress Their Naughty and Nice List

Dec 15, 2021 | NDAA, Policy

Ever ask a parent which child is their favorite and which they like the least? Silly question, right? Most parents would say, at least in front of their kids, “I love all my kids equally.” Well, Congress, which passed the NDAA in time for Christmas, is developing a naughty and nice list of DoD’s MDAPs and wants the acquisition executives’ inputs.

In Section 806 of the FY22 NDAA, Congress directs DoD’s Component Acquisition Executives (CAEs) to tell Congress each year which five major defense acquisition programs (MDAPs) they are most proud of and which five they are most disappointed in. CAEs will develop criteria to rank their programs based on their performance and describe how those criteria align with acquisition best practices. The CAEs will also be required to report to Congress for the next three years on the key factors, root causes, and get-well plans for each of the five lowest-performing MDAPs.

There are roughly 84 MDAPs and 6 programs that will grow into MDAPs in the future. The Navy has 40 MDAPs, the Air Force 28, the Army 20, and the other DoD Components two. It is unclear why Congress added this provision, as if DoD doesn’t already submit enough reports and struggling acquisition programs don’t already get ample increased attention. The bottom five programs would be at increased risk of OSD or Congress cutting their budgets, or of outright cancellation.

This reminds me of the performance management system Jack Welch established at GE. Employees and managers alike were ranked and grouped into the top 20% (“A players”), the middle 70% (“B players”), and the bottom 10% (“C players”). The A players received raises and promotions; the C players were considered for termination. Higher-performing B players were motivated to become A players, while lower-performing B players were motivated not to fall into the C tier. The system was copied at several other companies but was generally abandoned due to the negative consequences of a cut-throat environment and class-action lawsuits.
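Purely to illustrate the mechanics, here is a minimal sketch, using made-up program names and scores, of how a 20/70/10 “vitality curve” buckets a ranked list. Nothing in the NDAA prescribes this kind of forced distribution; the function and data below are assumptions for the example.

```python
# Hypothetical illustration of GE-style 20/70/10 stack ranking.
# Program names and scores are invented for the example.
def stack_rank(scores, top_pct=0.20, bottom_pct=0.10):
    """Sort items by score and split them into A/B/C tiers."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    a_cut = round(n * top_pct)           # top 20% -> A players
    c_cut = n - round(n * bottom_pct)    # bottom 10% -> C players
    return {"A": ranked[:a_cut], "B": ranked[a_cut:c_cut], "C": ranked[c_cut:]}

if __name__ == "__main__":
    notional_scores = {
        "Program Alpha": 92, "Program Bravo": 81, "Program Charlie": 77,
        "Program Delta": 70, "Program Echo": 66, "Program Foxtrot": 58,
        "Program Golf": 55, "Program Hotel": 49, "Program India": 41,
        "Program Juliet": 33,
    }
    for tier, programs in stack_rank(notional_scores).items():
        print(tier, [name for name, _ in programs])
```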

As a thought exercise, and as input for the CAEs, the following are the criteria I believe should be used to rate programs, along with the criteria that may regrettably creep into the ratings.

The top criterion in my mind should be: did or will the program deliver mission-impactful capabilities to the Warfighters at the speed of relevance? Combatant Commanders, particularly INDOPACOM, are screaming for more capabilities ASAP to address current and emerging threats across an increasing range of mission areas. Operational units across the Joint Force strain to meet current mission needs with systems that were designed and delivered decades ago and are increasingly vulnerable and unavailable.

For the MDAPs that have produced and delivered capabilities to date, are they having mission impact? Do they provide increased military performance to deter or defeat adversaries? Do they enable the retirement of legacy systems and manual processes? Do they provide a new range of military operations, reduce risk to Warfighters, and/or reduce operational costs? Or are they disappointing and short-lived because they failed to perform to expectations or because the operational environment and threats have evolved? The Zumwalt-class destroyers come to mind here.

The new Software Acquisition Pathway includes a Value Assessment that is conducted at least annually. The operational sponsor, with inputs from end users and other stakeholders, provides written feedback to the acquisition community on the mission performance outcomes and their satisfaction with the software delivered. This feedback is invaluable to shape the requirements, investments, and designs for future iterations. MDAPs may want to consider using value assessments for sponsor feedback.

For the MDAPs that are still in development, some of the performance criteria should be the following (a notional roll-up of these criteria is sketched after the list):

  • Time to Initial Operational Capability (IOC)/Full Operational Capability (FOC). How much time will it take to deliver capabilities to the Warfighter? Is the program operating with a sense of urgency that balances speed with rigor? Most MDAPs take over a decade from initiation (e.g., the Materiel Development Decision) to IOC, and many take over a decade to reach IOC even after the Milestone B decision authorizes development. Some officials say, “if you want it done right, you’re going to have to take the time.” While I agree we can’t develop and produce MDAPs overnight, we can no longer afford to take over a decade to deliver new major systems. There are vast opportunities to prudently streamline requirements, acquisition, contracting, and budget processes. We need to revisit the scope of MDAPs that seek to achieve three technological miracles during development in order to deliver “next-gen” systems. DoD needs to structure programs around accelerated development and production of mature technologies, with regular iterations as technologies mature and operations and threats evolve. DoD also needs to restructure MDAPs from monolithic programs into dynamic capability portfolios. Ideally, programs demonstrate prototypes, experiments, MVPs, and interim deliveries with end users early. This would be a subjective rating of relative speed based on each program’s scope and schedule.
  • Return on Investment – Potential Mission Impact. How much will the MDAP “move the needle” on operational performance, address critical gaps, and/or strengthen mission effects and kill chains, relative to the funding invested in the system?
  • Responsiveness and Scalability. Has the program effectively integrated a Modular Open Systems Approach (MOSA) to enable technology insertion from multiple companies? Or is the program vendor-locked into a closed, proprietary solution from a major defense contractor, which will lead to increased costs and weaker incentives for innovation? Is the MDAP designed to be responsive to changes in operations, threats, and technologies, which are guaranteed to evolve over the system’s expected operational lifespan?
  • Contractor Performance. Is the contractor developing and producing the complex system effectively or are there critical technical issues, delays, and cost overruns?
  • Operational Test Assessments. This one requires an evolution in thinking. Operational testers must rigorously test the performance of these major systems, yet we need to move beyond a pass/fail mindset based on requirements and test criteria established a decade earlier. CAEs should examine whether the program adopted “shift left” test and cybersecurity strategies to assess performance early and adjust. Operational commanders, acquisition decision authorities, and key stakeholders will make fielding and retrofit decisions based on the test results along with the current operational environment and threats.
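If a CAE wanted to turn criteria like these into a repeatable ranking, one notional approach is a simple weighted rubric. The sketch below is purely illustrative and entirely my own assumption: the criterion names, weights, and 0–100 scoring scale are invented for the example, and nothing in Sec. 806 or DoD policy specifies them.

```python
# Notional weighted rubric combining the criteria above into one score,
# then surfacing a "nice" top five and a "naughty" bottom five.
# Weights, criterion names, and scores are assumptions, not policy.
from dataclasses import dataclass

WEIGHTS = {
    "speed_to_ioc": 0.25,
    "mission_impact": 0.30,
    "mosa_responsiveness": 0.15,
    "contractor_performance": 0.15,
    "operational_test": 0.15,
}

@dataclass
class ProgramRating:
    name: str
    scores: dict  # criterion -> 0-100 rating from the CAE's assessment team

    def weighted_score(self) -> float:
        return sum(WEIGHTS[c] * self.scores.get(c, 0) for c in WEIGHTS)

def naughty_and_nice(ratings, n=5):
    """Return (top n, bottom n) programs by weighted score."""
    ranked = sorted(ratings, key=lambda r: r.weighted_score(), reverse=True)
    return ranked[:n], ranked[-n:]

# Example usage with notional data:
# ratings = [ProgramRating("Program Alpha", {"speed_to_ioc": 80, "mission_impact": 90,
#             "mosa_responsiveness": 70, "contractor_performance": 85, "operational_test": 75}), ...]
# nice, naughty = naughty_and_nice(ratings)
```

In practice, the hard part is not the arithmetic but agreeing on the criteria and keeping the scoring honest, which is exactly where the criteria below tend to creep in.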

Criteria That I Would Not Recommend, But Likely Will Due to Muscle Memory or Politics

  • Cost, Schedule, and Performance against the Acquisition Program Baseline (APB). This outmoded approach needs to be retired in DoD; it is a lazy way to assess programs. While measuring against an APB is easy for Pentagon headquarters staffs and GAO auditors, it doesn’t provide the right incentives or measures to shape programs. This is not to say that constraints don’t matter or that we shouldn’t hold program managers accountable for controlling costs and delivering on schedule. We should track schedules, costs, and technical/design maturity, yet it is a flawed management practice to assess programs primarily against estimates made years earlier. The same goes for EVMS and monthly funding execution rates. I will write another post about APBs soon.
  • Budget. Each Service identifies a priority set of programs to invest in and seeks Congressional support for these sizable investments. If a program is a top priority for the Service but isn’t performing from an acquisition perspective, the SAEs will be reluctant to put it in the bottom five for fear of it being cut in the next budget cycle. Similarly, their “favorite children” may make the nice list even if they don’t deserve it. Some SAEs may sub-optimize the criteria to enable these biases.
  • Politics. Suppose an MDAP landed on the naughty list based on objective criteria, in part due to poor contractor performance. What if the prime contractor were in the district of one of the Congressional Defense Committee chairmen, and the contractor were a major donor to the chairman’s campaigns? Would there be pressure to take it off the naughty list? This is Washington, where politics shape nearly every decision. Would there be political pressure to put an average program on the nice list based on the defense contractor lobbying Congress? During the Obama Administration, the then-USD(AT&L) and SAEs published an annual three-tiered list of the major defense contractors based on their contract performance.

Summary

It will be interesting to see how this reporting plays out and what incentives and behaviors it drives.

  • Will it force CAEs to think critically about what truly matters with acquisition program performance?
  • Will that drive different reviews, initiatives, and incentives for programs?
  • Or will it be just another paperwork exercise to check the box?
  • Will it be a bureaucratic and political nightmare to game the ratings to protect programs or offer up sacrificial lambs regardless of acquisition performance?

What will Congress do with this new naughty and nice list? Will nice programs get a little cash to buy more toys? Will naughty programs get a lump of coal in their stocking, or worse, a call from the GAO?  

What do you think should be the key criteria to rate acquisition program performance?



Disclaimer:  The opinions expressed here are those of the authors only and do not represent the positions of the MITRE Corporation or its sponsors.

""


 
