A Therapeutic (for the author) Rant About the Catatonic State of Vulnerability Remediation
I attended a proposers’ webinar last week for ARPA-H’s new UPGRADE (Universal Patching and Remediation for Autonomous Defense) program, an admirable attempt to automate vulnerability scanning and remediation in healthcare environments, some of the most vulnerable and critical networks in the corporate landscape. trackd is all about automating vulnerability remediation, so we thought our participation might make sense. I was disappointed – but unsurprised – to learn that a major element of the program is first building an “emulation” of a hospital’s entire network, effectively a gigantic sandbox. That virtual network would then be scanned for vulnerabilities and virtually patched, and only after an analysis of the results of the simulated patching would the actual network’s vulnerabilities be remediated.
So, ARPA-H, a government agency whose mission is to make “pivotal investments in breakthrough technology,” and specifically to avoid investments that yield merely incremental improvements, is spending substantial resources (total funding awards for the program have not been disclosed) to mimic exactly the process IT teams have been using for decades to placate their fears of patch-deployment disruptions. ARPA-H is building a bigger and better sandbox, not exactly the stuff of science fiction.
I really wish that Henry Ford had said “if I asked people what they wanted, they would have said a faster horse” (he probably does too), but since the sentiment is consistent with something he’s likely to have believed, let’s pretend he did. Indeed, it’s common for a new technology to speed up an existing, universally accepted process or make it more efficient. But true “breakthroughs” are achieved only when the new technology renders that existing, time-worn process obsolete. We traveled on the surface of the earth to reach California from the East Coast by stagecoach, then train, then car, each an improvement over the prior technology, until doing so by airplane changed the paradigm. The Pony Express was replaced by the more efficient modern Postal Service, but ultimately the process was the same: carry a physical piece of paper from one place to another. It wasn’t until the fax machine, and then email and texting, that the process of communicating in printed words and images at a distance was truly revolutionized. Uber and Lyft have definitely improved the experience of taking a cab from one location to another, but perhaps someday teleportation will render our current state of the art in human transportation an amusing anachronism.
Today, those in vulnerability management often create development environments (aka sandboxes) to test whether new patches will cause disruptions on their networks…just as they’ve been doing for three decades. Which leads to only one conclusion: ARPA-H is funding an effort to build a faster horse.
This is not to disparage ARPA-H (well, maybe a little), but rather to highlight a reality that’s killing the vulnerability remediation community: living like it’s the early 2000s. We don’t use dial-up internet or floppy disks anymore, so can someone please tell me why we treat vulnerability patching like we did when more than half of corporate America was still using Lotus Notes?
Like most legacy behaviors, this one made complete sense at the time it took root. Twenty years ago, patching broke stuff…a lot. So a cautious approach to deploying patches wasn’t just prudent, it was non-negotiable. Moreover, the international cyber-criminal community was a shell of its current incarnation, so prioritizing the risk of a disruption over that of a cyber compromise was unquestionably solid risk management.
But that was then.
Today, cyber criminals are ubiquitous, armed with tooling that demands far less sophistication of its operators, and equipped with hard-to-trace digital currency that makes monetizing a successful compromise easy. But even more importantly, patches don’t break stuff nearly as often anymore. Some estimates put the rate of patch rollbacks at less than 2% (our data at trackd is consistent with that number with respect to patch disruptions). So what we’ve seen over the last 20 years is a complete inversion of the risk analysis: the risk of network compromise now far outweighs the risk of a meaningful patch-induced disruption. Yet time to patch continues to be measured in months or quarters instead of minutes or hours, and there’s only one reason for that: a now-irrational fear of network disruption.
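To make that inversion concrete, here’s a minimal back-of-envelope sketch. Every number in it is an illustrative placeholder, not trackd data and not a published benchmark; the only figure borrowed from above is the ~2% rollback rate. Plug in your own environment’s estimates.

```python
# Back-of-envelope expected-loss comparison: patch promptly vs. delay for sandbox testing.
# All inputs are hypothetical placeholders for illustration only.

def expected_loss(probability: float, impact_usd: float) -> float:
    """Expected cost of an event: likelihood times impact."""
    return probability * impact_usd

# Hypothetical inputs -- substitute your own estimates.
p_disruption = 0.02          # ~2% of patches get rolled back (the estimate cited above)
disruption_cost = 50_000     # assumed cost of a patch-induced outage

p_compromise = 0.10          # assumed chance an unpatched flaw is exploited during a months-long delay
compromise_cost = 4_500_000  # assumed cost of a breach

cost_of_patching = expected_loss(p_disruption, disruption_cost)
cost_of_waiting = expected_loss(p_compromise, compromise_cost)

print(f"Expected loss, patch promptly:   ${cost_of_patching:,.0f}")
print(f"Expected loss, delay for months: ${cost_of_waiting:,.0f}")
```

With these made-up inputs, delaying is roughly 450 times more costly in expectation. Your numbers will differ, but the direction of the comparison is the point: the arithmetic flipped, and the process never did.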
It’s difficult to change a mindset founded on a decades-old process, but for all the new scanners, prioritization engines, automated patching vendors, and AI-driven (fill-in-the-blank), the vulnerability vendor community has made no progress in the past 20 years on the only statistic that matters in our game: it still takes months to patch vulnerabilities.
The ethos among IT practitioners has to change, and trackd is building the tools to give them the confidence to do so.