[Servercert-wg] Updating BR 6.1.1.3
Ryan Sleevi
sleevi at google.com
Mon Apr 13 08:23:03 MST 2020
On Sun, Apr 12, 2020 at 11:10 PM Corey Bonnell <CBonnell at securetrust.com>
wrote:
> As such, execution of the algorithm above on an ad-hoc basis is not
> feasible, so such lists must be pre-computed and the results presumably
> hashed and stored in some lookup table for efficiency. This contrasts
> sharply with the ROCA vulnerability check that you mentioned, as that can
> be done relatively quickly with no significant performance degradation to
> the CSR submission workflow.
>
I understand why you're making this comparison, but I don't think it's
germane to the expectation that CAs have a process to determine and reject
weak keys.
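To make that expectation concrete: the ROCA check itself is a handful of
modular reductions. A minimal sketch, assuming the published fingerprint
approach (Nemec et al., CCS 2017) of testing the modulus against the
subgroup generated by 65537 modulo a fixed list of small primes; the prime
list below is illustrative, not the full published set:

    # A vulnerable modulus n satisfies n mod p in <65537> mod p for
    # every small prime p in the fingerprint list; a single mismatch
    # proves the key is unaffected.
    SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97, 103,
                    107, 109, 127, 151, 157]

    def _subgroup_of_65537(p):
        # All residues reachable as 65537^k mod p.
        seen, x = set(), 1
        while x not in seen:
            seen.add(x)
            x = (x * 65537) % p
        return seen

    _FINGERPRINT = {p: _subgroup_of_65537(p) for p in SMALL_PRIMES}

    def looks_roca_weak(n: int) -> bool:
        # A match on every prime flags the key for rejection (with a
        # small false-positive rate); any mismatch clears it.
        return all(n % p in residues
                   for p, residues in _FINGERPRINT.items())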
This is like comparing and contrasting the "Any other method" form of
domain validation with one of the prescribed methods, and complaining that
the additional requirements "contrast sharply with the 'any other method',
which can be done relatively quickly with no significant effort by the
domain holder."
Yes, sometimes requirements for CAs mean they can't just take the lazy way
out, but I don't think that in any way changes the expectations. I'm
particularly keen to avoid an irrational demarcation between methods that
are 'easy' and methods that are 'hard', and the suggestion that 'hard'
methods somehow allow the CA to take shortcuts. This is especially
unreasonable when, as you note, a variety of means exist at the CA's
disposal to make these computations easy, including the most obvious:
precomputation.
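The lookup side of such a precomputation is itself trivial. A minimal
sketch, assuming (purely for illustration) a table of SHA-256 digests of
the DER-encoded SubjectPublicKeyInfo of each known-weak key:

    import hashlib

    # Hypothetical precomputed blocklist, built offline and loaded at
    # startup: SHA-256 digests of the DER-encoded SubjectPublicKeyInfo
    # of every known-weak key.
    WEAK_KEY_DIGESTS: set[bytes] = set()  # e.g., loaded from disk

    def is_known_weak(spki_der: bytes) -> bool:
        # O(1) membership test at CSR-submission time; all of the
        # expensive enumeration happened at table-build time.
        return hashlib.sha256(spki_der).digest() in WEAK_KEY_DIGESTS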
That "someone else" has done /some/ precomputations does not, cannot, and
should not absolve the CA of understanding the underlying root issue and
taking adequate steps. You might view this as the "63-bits vs 64-bits"
debate, but we're not even in the same ballpark here, when CAs have been
repeatedly shown, year after year, not to be taking even the most basic
measures.
> With all of this in mind, a CA faithfully following the SHOULDs in
> section 6.1.6 of the Baseline Requirements and Mozilla Policy, and
> allowing modulus lengths of 2048-4096, must compute and store the
> following number of tables:
>
> For modulus lengths 2048-3064 (128 key sizes in this range): 2^255 * 128 =
> 2^262
>
> For modulus lengths 3072-4096 (129 key sizes in this range): 2^63 * 129 ≈
> 2^70
>
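For what it's worth, the arithmetic above is internally consistent,
assuming modulus lengths at 8-bit granularity. A quick check:

    import math

    sizes_lo = range(2048, 3064 + 1, 8)          # 128 key sizes
    sizes_hi = range(3072, 4096 + 1, 8)          # 129 key sizes
    assert len(sizes_lo) == 128 and len(sizes_hi) == 129
    assert 2**255 * 128 == 2**262                # 128 = 2^7
    assert round(math.log2(2**63 * 129)) == 70   # ~2^70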
The assumption, however, is flawed: namely, that CAs must support all of
these sizes, and must support them with precomputed tables. The reality is
that there is a set of tradeoffs for how a CA can meet its obligations, and
suggesting that "foolish implementation A, which ignores all constraints"
is somehow proof that the expectation is unreasonable is not convincing.
Yes, we can construct strawman implementations that fail to make the
tradeoff.
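To make the tradeoff concrete, one obvious constraint (a hypothetical
policy, not a recommendation) is to simply not accept every 8-bit
increment:

    # Hypothetical policy: only accept a handful of fixed modulus
    # lengths, rather than all 257 sizes in 2048-4096; precomputed
    # tables are then needed for 3 key sizes, not 257.
    ACCEPTED_MODULUS_BITS = {2048, 3072, 4096}

    def acceptable_key_size(modulus_bits: int) -> bool:
        return modulus_bits in ACCEPTED_MODULUS_BITS

None of which is to prescribe a particular policy; the point is that the
tradeoffs are the CA's to make.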