Why cyber underwriting needs more than a questionnaire. A joint perspective from the CISO and insurance leadership at Spektrum.
Most cyber insurance underwriting is still based on manual questionnaires and self-reported answers that fail to capture actual security posture, producing a view of risk that does not match reality. Security teams and business owners are asked to compress complex infrastructures into a series of binary statements, creating “truth gaps” wherever language and process admit nuance. Underwriters, in turn, are asked to make multi-million-dollar decisions based on unverifiable information. Moral hazard and adverse selection are present in all insurance underwriting, but the extent to which both can and should be curbed in cyber insurance is substantial. In effect, both sides are underserved.
We believe the future of cyber underwriting must be grounded in validated posture data, not just attestations. Cybersecurity systems are already capable of emitting verifiable, real-time signals that can be used to inform risk models, underwriting decisions, and claims assessments. What’s been missing is the infrastructure to turn those signals into trusted, portable, and privacy-preserving proof. That is exactly what Spektrum has built.
The limitations of today’s model
Every year, cybersecurity teams fill out multiple insurance applications, each asking some version of the same question: “Do you have X control in place?” Whether the control is MFA, endpoint protection, or disaster recovery, the answer is expected in binary terms: yes or no. The reality is rarely that simple.
For example, we might have MFA deployed across all SaaS applications, but not yet enforced on legacy systems. Or we may have endpoint protection rolled out to all production systems but still be onboarding it for internal tools. And what does it mean when the question is nonsensical, such as, “Do you have MFA on all networks?” These answers require context, clarity, and confidence. There’s no room for nuance in a 10-page form.
From an underwriting perspective, this introduces unnecessary uncertainty. Even if a submission comes from a reputable company with a well-known broker, we still cannot independently verify the security posture. We rely on static documents, vague attestations, and subjective interpretations of control strength. This leads to either pricing in additional risk (at the expense of the client) or offering coverage on faith (at the expense of the carrier). Neither outcome is optimal.
System-driven proof: replacing declarations with verified data
What we’ve introduced at Spektrum is a way to replace self-reported posture with cryptographically validated proof, derived directly from the systems that manage and secure an organization’s infrastructure.
This is not theoretical. Every modern security tool is capable of emitting data about its configuration, health, and coverage. We collect that data through secure integrations, normalize it, and convert it into Resilience Tokens: structured, tamper-proof indicators that reflect the presence and performance of a specific control or set of controls.
For example:
- A backup platform confirms that critical systems are covered with automated and scheduled daily backups, that those backups are encrypted and stored in an immutable format, and that they have been successfully tested for recovery in the last 90 days. That data is converted into a Backup Token.
- An identity provider confirms that MFA is enabled across all administrator accounts, using a hardware-based method. The email platform validates that MFA is required for all user access. Those criteria become an MFA Token.
- An endpoint detection system confirms that protection is deployed to 100% of production assets. That becomes an EDR Coverage Token.
Each of these tokens is cryptographically signed, time-bound, and continuously updated. They are privacy-preserving by design, built using zero-knowledge proofs and API-level validations that do not expose configuration details or raw logs.
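To make the shape of such a token concrete, here is a minimal sketch in Python. The field names, the symmetric HMAC signature, and the one-day TTL are illustrative assumptions for this article only; a production system would use asymmetric signatures and the zero-knowledge, API-level validations described above rather than a shared secret.

```python
import hashlib
import hmac
import json
import time

# Illustrative only: a real issuer would hold an asymmetric signing key.
SECRET = b"issuer-signing-key"

def issue_token(control: str, status: dict, ttl_seconds: int = 86400) -> dict:
    """Issue a signed, time-bound token attesting to a control's verified state."""
    now = int(time.time())
    payload = {
        "control": control,          # e.g. "mfa", "backup", "edr_coverage"
        "status": status,            # normalized indicators, never raw logs
        "issued_at": now,
        "expires_at": now + ttl_seconds,
    }
    # Canonical serialization so issuer and verifier sign identical bytes.
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_token(token: dict) -> bool:
    """Accept the token only if the signature matches and it has not expired."""
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False
    return time.time() < token["payload"]["expires_at"]

token = issue_token("mfa", {"admin_accounts_covered": True, "method": "hardware"})
```

Any tampering with the payload invalidates the signature, and the `expires_at` bound forces tokens to be reissued from fresh system data, which is what keeps them "continuously updated" rather than point-in-time attestations.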
What this changes for underwriters
The move to machine-based verification is not about surveillance. It’s about trust. Underwriters don’t need to see a client’s internal logs or know how their network is architected. They simply need to know whether the relevant controls are in place, operational, and verifiable.
Resilience Tokens give underwriters the ability to assess that posture in near real time, with consistency across clients. This eliminates the guesswork from underwriting and enables a level of transparency that’s not possible through static submissions. When a client shares their Resilience Passport with an underwriter, the underwriter gains immediate visibility into the actual state of the organization’s cyber maturity across core domains: identity, infrastructure, data protection, incident response, and recovery. This is achieved without oversharing or exposing sensitive data, and it obviates the need for audits.
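A passport review can be sketched as a simple aggregation over token validity per domain. The domain list mirrors the one named above; the passport schema and scoring below are illustrative assumptions, not Spektrum’s actual format.

```python
# Core domains named in the text; a passport is modeled here as a mapping
# from domain to that domain's most recent token-validity summary.
REQUIRED_DOMAINS = [
    "identity",
    "infrastructure",
    "data_protection",
    "incident_response",
    "recovery",
]

def assess_passport(passport: dict) -> dict:
    """Summarize which core domains carry a currently valid token."""
    coverage = {
        domain: bool(passport.get(domain, {}).get("valid"))
        for domain in REQUIRED_DOMAINS
    }
    gaps = [domain for domain, ok in coverage.items() if not ok]
    return {"coverage": coverage, "gaps": gaps}

# A client sharing tokens for only two of the five domains:
passport = {"identity": {"valid": True}, "recovery": {"valid": True}}
result = assess_passport(passport)
```

The underwriter sees only per-domain validity, never configurations or logs, which is the point: enough signal to price risk, nothing more.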
This approach accelerates quote generation, improves risk modeling and pricing precision, and significantly reduces the risk of disputes during the claims process. Because it is now possible to verify that all appropriate and required controls were active prior to an incident, there is no need for an extended post-breach investigation. That improves outcomes for both the insured and the insurer: claims adjudication speeds up, the insured is paid faster, and the insurer’s claims team saves time while releasing money tied up in reserves.
What this changes for security leaders
For security leaders, this model finally aligns insurance with how cybersecurity is actually implemented. We no longer have to force an assessment of our entire operating model into a yes/no questionnaire. Instead, we can let our systems do the talking.
We also gain internal clarity. Resilience Tokens are not only used for external validation; they also serve as internal indicators of where we are compliant and where we have gaps. The posture data that powers insurance submissions is the same data we use for internal audits, regulatory compliance, control validation, and executive reporting. This unification reduces overhead and creates a consistent, defensible narrative about risk management.
Importantly, we remain in control of what gets shared and when. Tokens are shared explicitly, not continuously. This ensures we meet our privacy, confidentiality, and compliance obligations while still enabling efficient insurance workflows.
A shared understanding of risk
This new model doesn’t eliminate the role of the broker, the underwriter, or the security team. It enhances their work by introducing a layer of trust and standardization that has been missing from cyber insurance since its inception.
By moving away from subjective declarations and toward verifiable posture, we create alignment. Security leaders no longer need to “speak insurance,” and underwriters no longer need to decode technical narratives. The system provides the translation.
This is how we build an insurance process that reflects proof of risk rather than assumptions and promises. It is how we move from friction to trust, and from static coverage to dynamic insurability.
Cyber risk is dynamic. Insurance decisions based on outdated forms and unverifiable data cannot keep pace. By anchoring underwriting in system-level truth, we improve outcomes for everyone involved. The future of cyber insurance is not just about asking better questions. It’s about being able to validate the answers, efficiently, securely, and at scale.