Most software teams struggle to answer a deceptively simple question: What’s actually in our software?
Not at a high level—but at the level where risk lives.
Modern software changes constantly. Over time, visibility fades. But teams keep shipping. And trust quietly shifts from something that’s verified to something that’s assumed.
That assumption is where software security problems tend to start.
SolarWinds is often treated as an example of a code-signing failure. But that interpretation misses the more important lesson.
The malicious code that compromised the Orion platform entered the build process long before the software was signed. Once it was part of the build, it moved through the pipeline like any other component—compiled, packaged, and ultimately distributed to customers at massive scale. Nothing about the final artifact appeared out of place.
Code signing did what it was designed to do. It verified the identity of the publisher and preserved the integrity of the software after release. The breakdown happened earlier, when trust was applied to a build whose contents were never fully understood or validated.
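To make that distinction concrete, here is a minimal sketch in Python, using the cryptography library and assuming an RSA publisher key and hypothetical file paths. Verification answers two questions, who signed the bytes and whether they changed since signing, and nothing more. It says nothing about what went into the build.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def verify_release_artifact(artifact_path, signature_path, publisher_pubkey_pem):
    """Confirm the artifact was signed by the publisher's key and has not
    been altered since signing. Note what is NOT checked here: nothing
    inspects the components inside the artifact or how they entered the build."""
    with open(publisher_pubkey_pem, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(artifact_path, "rb") as f:
        artifact = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, artifact, padding.PKCS1v15(), hashes.SHA256())
        return True   # identity and integrity confirmed
    except InvalidSignature:
        return False  # tampered after signing, or wrong publisher key
```

A compromised build passes this check just as cleanly as a legitimate one, which is exactly the gap the SolarWinds incident exposed.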
That distinction matters. Signed software carries an implicit promise of trust. When malicious or vulnerable code is already embedded upstream, that trust gets extended far beyond where it should. In the case of SolarWinds, compromised software moved quietly into thousands of organizations, including critical infrastructure and government environments, long before anyone knew there was a problem.
What made the attack so effective wasn’t just sophistication—it was invisibility. The malicious code blended in with legitimate components, survived the build process, and inherited trust simply by being part of the release. By the time the issue surfaced, the software was already deployed, installed, and running inside customer environments.
SolarWinds didn’t introduce a new kind of risk. It exposed what happens when organizations lack clear, enforceable visibility into what enters a software build. Once that visibility is lost, security teams are left responding after the fact, when trust has already been extended downstream.
When teams have clear visibility into what’s included in a software build, the conversation around risk changes in meaningful ways.
Risk stops being abstract. Vulnerabilities are no longer just entries on a scan report—they’re tied to specific components, usage paths, and exposure levels. That context makes it easier to decide what actually needs attention now and what can be handled through compensating controls or deferred to a later release.
Visibility also brings accountability. Teams can trace where components came from, how they entered the build, and who approved their use. That traceability strengthens governance without turning security into a bottleneck, because decisions are grounded in facts rather than assumptions.
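As a simplified illustration of what that component-level context looks like, the sketch below walks a CycloneDX-style JSON SBOM and ties findings to a specific component, version, and origin. The file name and the flagged package are hypothetical; the structure follows the standard CycloneDX fields.

```python
import json

def component_inventory(sbom_path, known_bad):
    """Report each component's name, version, and origin (purl) from a
    CycloneDX-style SBOM, flagging any that match a known-vulnerable list.
    This is what turns a generic scan finding into a statement about a
    specific component in a specific build."""
    with open(sbom_path) as f:
        sbom = json.load(f)

    findings = []
    for comp in sbom.get("components", []):
        name = comp.get("name")
        version = comp.get("version")
        findings.append({
            "component": f"{name}@{version}",
            "origin": comp.get("purl", "unknown origin"),
            "flagged": (name, version) in known_bad,
        })
    return findings

if __name__ == "__main__":
    # Hypothetical build SBOM and a hypothetical vulnerable dependency.
    report = component_inventory("build-sbom.json", known_bad={("acme-logger", "2.4.1")})
    for entry in report:
        print(entry)
```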
Perhaps most importantly, visibility supports defensible trust. When customers, partners, or regulators ask questions about software risk, organizations can point to evidence rather than intent. They can explain what’s in a release, what risks were identified, and how those risks were addressed before the software shipped.
This shift matters because trust is no longer implicit. Buyers expect transparency. Regulators expect documentation. Security teams expect fewer blind spots.
Visibility gives organizations a way to meet those expectations without slowing development to a crawl.
For many organizations, the challenge isn’t understanding why visibility matters. It’s making it consistent and enforceable across teams, builds, and pipelines.
That’s where platforms like DigiCert Software Trust Manager come in. Rather than treating visibility, vulnerability scanning, and signing as disconnected steps, Software Trust Manager brings them together into a single, governed workflow. Teams gain insight into what enters a build, evaluate risk before trust is applied, and carry that context forward into signing and release.
Centralization matters here. When visibility lives in the same system that controls signing, organizations can enforce policies instead of relying on documentation and good intentions. Decisions about who can sign, what can be signed, and when trust is applied are based on evidence from the build itself—not assumptions made downstream.
Software Trust Manager is designed to integrate directly into CI/CD pipelines, so these controls operate where development already happens. The goal is to make visibility part of the workflow, not an extra step added after the fact.
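To illustrate the idea of evidence-based signing decisions, here is a rough sketch of the kind of policy gate a governed workflow enforces before the signing step runs. This is not DigiCert's actual API; the field names and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BuildEvidence:
    """Facts gathered from the build itself (illustrative fields only)."""
    sbom_attached: bool
    critical_vulns_open: int
    requester: str

APPROVED_SIGNERS = {"release-bot", "build-team"}  # hypothetical policy data
MAX_CRITICAL_VULNS = 0

def signing_allowed(evidence: BuildEvidence) -> tuple[bool, str]:
    """Gate signing on evidence from the build rather than on downstream
    assumptions. Returns the decision and the reason for it."""
    if not evidence.sbom_attached:
        return False, "no SBOM attached: contents of the build are unknown"
    if evidence.critical_vulns_open > MAX_CRITICAL_VULNS:
        return False, f"{evidence.critical_vulns_open} unresolved critical vulnerabilities"
    if evidence.requester not in APPROVED_SIGNERS:
        return False, f"requester '{evidence.requester}' is not approved to sign"
    return True, "policy satisfied: release may be signed"

# A CI job would evaluate this before invoking the signing step.
allowed, reason = signing_allowed(BuildEvidence(True, 0, "release-bot"))
print(allowed, reason)
```

Evaluated in the pipeline, a check like this turns who can sign, what can be signed, and when trust is applied into enforced decisions rather than conventions.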
As software continues to move faster and reach further, visibility remains the first requirement for trust. Having a way to operationalize that visibility is what allows organizations to apply trust deliberately, at scale.
Get in touch to see how DigiCert Software Trust Manager supports visibility, governance, and signing at scale.