The Justin Fulcher Framework for Evaluating Public Sector AI Tools

As federal agencies pour resources into artificial intelligence, the question of how to evaluate those investments has become increasingly important. Justin Fulcher, who has worked on technology modernization from both the private and public sides of that equation, offers a framework grounded in operational experience rather than marketing.

Fulcher co-founded RingMD, building a telemedicine platform across Asia, before serving as a Senior Advisor to the Secretary of Defense. The thread connecting those experiences is a consistent focus on what makes technology work inside constrained, regulated environments and what causes it to stall.

Three Questions Worth Asking

Fulcher’s approach to evaluating AI in government centers on friction reduction. The first question is whether a given tool removes an existing obstacle from the agency’s core mission or introduces a new one. The second is whether implementation requires extensive retraining that may not be feasible at scale. The third is whether the tool generates compliance concerns that will create delays downstream.

Tools that fail on any of those dimensions face significant resistance in government environments, regardless of their technical sophistication. Fulcher has been explicit about this: technology adoption in regulated environments succeeds when it reduces existing friction rather than creating new complexity. That standard is more demanding than it might appear.

Durability Over Speed

One of the principles Fulcher returns to consistently is durability. AI deployments that generate early enthusiasm but create new administrative burden, require ongoing vendor support to function, or don’t survive leadership transitions are not successful modernization. They’re postponed failures.

“Serious work is defined less by certainty at the outset than by stewardship over time,” Fulcher has written. That perspective, prioritizing what holds up over what looks impressive initially, reflects lessons from building technology that had to perform in environments with limited infrastructure, diverse regulatory regimes, and constrained resources.

His work at the Defense Department, which significantly shortened software procurement timelines, reflects the same discipline. The reforms were designed to endure, not just to demonstrate momentum. For agencies evaluating AI today, Fulcher's framework suggests that durability, operational fit, and implementation discipline are the metrics that matter most.

 
