[Servercert-wg] Ballot SC39: Definition of Critical Vulnerability

Ryan Sleevi sleevi at google.com
Tue Dec 15 19:59:28 UTC 2020


On Tue, Dec 15, 2020 at 12:27 PM Neil Dunbar via Servercert-wg <
servercert-wg at cabforum.org> wrote:

> Ryan,
>
> Sorry I didn't have time to address this earlier.
>
> Some observations on the points below. Under the old definitions, over
> 2020, there were 3024 vulnerabilities [*] disclosed at CVSS 2.0 level 7.0
> or higher; under CVSS 3.0, there would be 7385 High or Critical severity
> vulnerabilities to address (i.e., >= 7.0). This represents a significant
> increase in the number of vulnerabilities which would need to be patched
> in the 96-hour window.
>
> Now - I know that not all of those vulnerabilities would actually be
> present in many CA systems, but it's reasonable to conclude that
> maintaining the 7.0 threshold for CVSS 3.0 would probably double the
> patching workload on CAs.
>
I don't think that's reasonable to conclude without more data.

> That is not necessarily a Bad Thing: but it is something which CAs would
> need to consider in adopting/rejecting this ballot.
>
Equally for root programs, when thinking about the security of their users,
and what they may require independently. There certainly is benefit from
CAs sharing concrete data, and I appreciate the analysis you provided so
far, but I don't think it's as compelling a case as suggested.

> In order to keep the number of vulnerabilities (more or less) the same,
> the equivalent CVSS v3.0 threshold would be 8.8: in 2020, there were 3065
> such vulnerabilities disclosed. Perhaps that would be the appropriate level
> to set the trigger for 96 hour patching? Agreed, it's diverging from what
> the CVSS 3.0 ranking calls "Critical", but should that be a significant
> concern? At least it would cover everything which is considered "Critical"
> by the rest of the world, and also includes a lot of "this is very, very
> serious" issues.
>
I don't think that's a reasonable goal either, for the reasons I addressed
previously. I also think it is misleading to suggest a "double the patching
workload" while again proposing 96-hour patching, which, as I highlighted
previously, has enough loopholes *today* that CAs can (and, inevitably, do
and have done, based on CA incidents) document their way out of it, without
any transparency until it actually shows up as an incident.

Do I think that's bad? Of course. Do I think we should fix it? Of course.
But that's a further argument in *support* of keeping it at 7.0, and then
reining that nonsense in, rather than allowing shenanigans, weakening
security on the basis that we don't want to encourage shenanigans, and
then not preventing shenanigans in the first place.

I understand that, at the core, there's a disagreement on the ordering. I
can understand and appreciate the perspective that "We shouldn't do
anything controversial now", but that's merely suggesting we should step
backwards to avoid stepping on any toes. More importantly, however, by
preserving the current combination (of unfortunate infinite flexibility,
while also capturing the goal of >= 7.0), we move forward in getting CAs to
maintain appropriate documentary evidence as to when/how/why they *don't*
respond in 96 hours. This is *critical* to do sooner rather than later,
because it allows us to have a more meaningful discussion about
removing/reducing that flexibility or bringing additional
transparency/accountability, grounded in actual data (e.g. "We saw the
number of patches increase 2x, but we deemed them not important because
x/y/z"), rather than simply throwing a dart at the wall and hoping we got
it right.

That approach is one of the few things that helps keep discussions
productive; CAs are afforded flexibility, but at the cost of documenting
it (see also CAA, EV validation sources, and the validation methods of
3.2.2.4).
While this is a marginal cost in the short-term, it allows us to have a
data-driven discussion for the long-term, without having to rely on CAs'
gut instincts or business interests. Absent that sort of data, we're back
to the situation where browsers have to directly impose requirements to
protect their users as part of their root program requirements, which
CAs inevitably feel is unfair, but which is entirely self-inflicted.

More concretely:
- I do support updating the URL
- I don't believe maintaining at 7.0 will result in double the *patching*
workload, or even double the *response* workload, but acknowledge it *may*
increase the degree of responsiveness/documentation CAs have to maintain
- I believe that, long term, we may want to trade certain areas of
flexibility afforded to CAs as a whole, by introducing more gradations in
normative requirements (e.g. a distinction between Critical and High, but
with fewer "get out of jail free" cards), and am receptive to data-driven
discussions about that
- I'm willing to trade a short-term increase in the use of "get out of jail
free" if it supports those goals of having a more equitable, more secure
long-term solution.
- I'm not willing to trade short-term security in the hopes of a long-term
solution.
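For reference, the CVSS v3.0 qualitative severity bands underlying the
threshold discussion above can be sketched as follows. This is a minimal
illustration of the published rating scale, not part of the ballot
language; the function name is my own.

```python
# CVSS v3.0 qualitative severity rating scale, per the FIRST CVSS v3.0
# specification: None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9,
# Critical 9.0-10.0. A 7.0 patching trigger therefore captures both the
# High and Critical bands, while the proposed 8.8 cut-off falls inside
# the High band rather than on a named boundary.

def cvss3_severity(score: float) -> str:
    """Map a CVSS v3.0 base score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS v3.0 base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# A >= 7.0 trigger fires for everything rated High or Critical:
assert cvss3_severity(7.0) == "High"
assert cvss3_severity(8.8) == "High"      # the proposed alternative cut-off
assert cvss3_severity(9.0) == "Critical"
```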