[Servercert-wg] Updating BR 6.1.1.3

Ryan Sleevi sleevi at google.com
Fri Apr 17 11:17:13 MST 2020


Currently, we have in the BRs an expectation that you implement an
algorithm, which in pseudo-code is something like:

function isDebianWeak(key) {
  for architecture in (le32, le64, be32) {
    for pid in (0...32767) {
      if (key == debian_key(architecture, pid, length(key), exponent(key))) {
        return true;
      }
    }
  }
  return false;
}
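As a concrete sketch, the brute-force form might look like the following in Python. `debian_key` stands in for actually regenerating the OpenSSL key with the broken Debian PRNG for the given architecture/PID; since that isn't reproducible in a few lines, this sketch stubs it out with a tiny hypothetical table just so the loop structure runs:

```python
ARCHITECTURES = ("le32", "le64", "be32")
MAX_PID = 32767  # the broken PRNG was effectively seeded by the process ID

# Hypothetical demo data standing in for real regenerated weak keys.
_DEMO_WEAK_KEYS = {("le32", 1234, 2048, 65537): "demo-weak-key-bytes"}

def debian_key(architecture, pid, length, exponent):
    """Stand-in for deterministically regenerating the weak OpenSSL key.

    A real implementation would run the broken Debian PRNG for this
    architecture and PID and derive an RSA key of the given length and
    exponent. Here we fake it with a lookup so the sketch is runnable.
    """
    return _DEMO_WEAK_KEYS.get((architecture, pid, length, exponent))

def is_debian_weak(key, length, exponent):
    # Brute-force search over every architecture/PID combination.
    for architecture in ARCHITECTURES:
        for pid in range(MAX_PID + 1):
            if key == debian_key(architecture, pid, length, exponent):
                return True
    return False
```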

The effect of the proposal is to do the following

function isDebianWeak(key) {
  if (length(key) not in (1024, 2048, 4096)) {
    return false;  // This is a lie
  }
  for architecture in (le32, le64, be32) {
    for pid in (0...32767) {
      if (key == debian_keys[length(key)][architecture][pid]) {
        return true;
      }
    }
  }
  return false;
}
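In the precomputed form, the tables need not store full keys; fingerprints suffice. As I understand the openssl-blacklist package's format (an assumption worth verifying against the real files), it stores the last 20 hex characters of a SHA-1 over the modulus line. A lookup sketch under that assumption:

```python
import hashlib

def modulus_fingerprint(modulus_hex):
    """Fingerprint in the style used by openssl-blacklist: the last 20
    hex characters of SHA-1 over the line 'Modulus=<HEX>\n'. This format
    is an assumption about the package; check against the shipped files.
    """
    line = "Modulus=%s\n" % modulus_hex.upper()
    return hashlib.sha1(line.encode("ascii")).hexdigest()[-20:]

def is_blocklisted(modulus_hex, blocklist):
    # blocklist is a set of fingerprints, e.g. loaded from a
    # hypothetical /usr/share/openssl-blacklist/blacklist.RSA-2048 file.
    return modulus_fingerprint(modulus_hex) in blocklist

# Demo blocklist with one hypothetical weak modulus.
demo_blocklist = {modulus_fingerprint("C0FFEE")}
```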

On Fri, Apr 17, 2020 at 1:42 PM Corey Bonnell <CBonnell at securetrust.com>
wrote:

> As described in my previous message, implementation of the "algorithm"
> requires precomputation of an incredible number of tables, the vast
> majority of which must be generated on moribund hardware of questionable
> availability.
>

The algorithm itself doesn't require precomputation. A CA could absolutely
run this computation themselves in real time if they wanted. Does that mean
it will take longer to get a cert? Yes. But that's the difference between
what the BRs require and how a CA chooses to implement it.

A CA could simply "return true" for any key size they don't have a table
for - failing closed, rather than failing open - if it's too expensive to
precompute.
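That fail-closed guard is a one-line change. A minimal sketch, where `tables` maps each key length the CA has precomputed to its set of weak-key fingerprints (the names here are illustrative, not from any real implementation):

```python
def is_debian_weak_fail_closed(key, length, tables):
    """Treat any key length we have no precomputed table for as weak.

    Failing closed means an unknown size is rejected outright, rather
    than silently passed through ("failing open") just because checking
    it would be expensive.
    """
    if length not in tables:
        return True  # unknown size: refuse rather than guess
    return key in tables[length]
```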

But also, let's do the napkin math here. A presentation - from 2008, so 12
years ago - explored this:
https://trailofbits.files.wordpress.com/2008/07/hope-08-openssl.pdf - with
$8 of 2012 compute.


> Not strawmen, and not foolish. The allowed set of exponents (and other
> restrictions in section 6.1.6) are informed from NIST guidance, which
> explicitly allows for exponents to be generated randomly within the range
> of exponents recommended in section 6.1.6. I pointed to several open-source
> implementations that would require an astronomical number of tables such
> that even with parallelizing the effort, would take billions and billions
> of years to generate.
>

I'm fully supportive of CAs wanting to restrict the exponent to F4 to
reduce the work they may have to do.


>
> Additionally, the previous remediations accepted by the Root Programs for
> CAs not flagging Debian weak keys in the openssl-blacklist blocklists has
> been for these CAs to check for weak keys enumerated within that package.
> Nowhere in the associated discussions were the allowed set of modulus
> lengths, exponents, or platforms to be checked by the CA brought up. This
> precedent, coupled with the analysis in the previous message and above,
> would suggest that the current expectation is for CAs to check for those
> keys within the openssl-blacklist package.
>

We're discussing CAs who *didn't even* blocklist the known-compromised
keys. At a minimum, yes, they can and should have used available sources,
and I think we're in agreement that openssl-blacklist is a readily
available source.

I'm fully supportive of clarifying the requirements so that the algorithm
captured above is clearer to CAs. The computational complexity you allude
to is a business choice the CA makes, in terms of the set of algorithms and
parameters they accept and support. There's no BR requirement that a CA
accept any exponent beyond F4, for example, or that a CA support 2088-bit
keys or 3576-bit keys. Those are, ultimately, choices the CA makes, and
provided they implement the algorithm, they're fine.
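Those parameter choices amount to a front-door policy check before the weak-key algorithm ever runs. A minimal sketch - the accepted sets here are one CA's illustrative choices, not BR requirements:

```python
F4 = 65537  # the common RSA public exponent 2^16 + 1

# A CA's own policy choices, not BR mandates. Restricting the exponent
# to F4 and the lengths to a few common sizes bounds the set of
# weak-key tables the CA ever has to compute.
ACCEPTED_LENGTHS = {2048, 3072, 4096}
ACCEPTED_EXPONENTS = {F4}

def accepts_key_parameters(length, exponent):
    """Reject any key whose parameters fall outside the CA's policy."""
    return length in ACCEPTED_LENGTHS and exponent in ACCEPTED_EXPONENTS
```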

The suggestion to restrict to only the openssl-blacklist is the moral
equivalent of defining a 3.2.2.4 method that is

function validateDNS(domainName) {
  if (!endswith(domainName, ".com"))
    return true;  // Assume it validated OK if it's not a .com address
  // do the actual validation
}

Of course that would be problematic. If the complaint is "It's really hard
to validate domains that aren't .com with this method", the answer is "You
don't have to use this method or accept those domains". When it comes to
Debian weak keys, there's a business choice being made.

If your view is that the requirements of the algorithm above are too
difficult to implement, and thus it's easier / more cost-effective for a CA
to avoid accepting such parameters entirely, that still doesn't mean the
BRs prohibit those parameters. It just means there is a trade-off being
made by CAs here.