[cabf_validation] Draft Meeting Minutes from 16-December-2021
bwilson at mozilla.com
Tue Dec 21 21:25:03 UTC 2021
*Validation Subcommittee Meeting of Thursday, December 16, 2021*
*Roll Call:* Ben Wilson, Corey Bonnell, Tim Hollebeek, Wayne Thayer, Ryan
Dickson, Paul van Brouwershaven, Andrea Holland, Aneta Wojtczak, Brittany
Randall, Bruce Morton, Clint Wilson, Dimitris Zacharopoulos, Janet Hines,
Joanna Fox, Jose Guzman, Kati Davids, Kidd Freeman (OATI), Michael
Slaughter, Martijn Katerbarg, Natalia Kotliarsky, Pekka Lahtiharju, Rebecca
Kelley, Tobias Josefowitz, Trev Ponds-White, Tyler Myers, and Iñigo
*Antitrust Statement:* Read by Tim.
*Agenda* was reviewed.
*Next Call:* Call of December 30, 2021, is cancelled. Next call is
January 13, 2022.
*Vulnerabilities of Distributed Domain Validation*
We discussed the recent research paper (
https://dl.acm.org/doi/pdf/10.1145/3460120.3484815) introduced by Wayne
Thayer to the mailing list on multi-perspective DNS validation to prevent
certificate misissuance due to DNS hijacking. Let’s Encrypt has implemented
multi-perspective domain validation. We might want to invite the authors of
this recent paper to present to us.
Wayne explained that the researchers performed downgrade attacks. A CA
queries DNS from multiple vantage points, but the attacker degrades the DNS
responses at some of them until the system is forced down to using only
one, and then attacks that remaining point with cache poisoning, route
hijacking, etc.
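The downgrade Wayne describes can be illustrated as a quorum check: a minimal sketch, not Let's Encrypt's actual implementation, with hypothetical vantage-point names and quorum size.

```python
def multi_perspective_validate(lookups, quorum=3):
    """Accept a DNS-based validation result only if enough
    independent vantage points answered and all of them agree."""
    answers = [v for v in lookups.values() if v is not None]
    # Fail closed: if an attacker degrades responses at some
    # perspectives, the check must not silently fall back to
    # a single point of view.
    if len(answers) < quorum:
        return None
    # All remaining perspectives must return the same value.
    return answers[0] if len(set(answers)) == 1 else None
```

With all three perspectives agreeing the value is accepted; knocking out one perspective (the downgrade) or poisoning one answer makes the check fail closed.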
Wayne asked whether the Validation Subcommittee was the right group to
tackle this issue with requirements aimed at closing down some of the
vulnerabilities. He noted that the research paper hints at ways that the
attacks could be mitigated to some extent.
Paul said it might be something for the NetSec group to address because a
couple of the recommendations were RPKI and DNSSEC. Many DNS resolvers
automatically select the best-performing authoritative DNS server. By
overloading the legitimate DNS servers with queries, an attacker could
cause the attacker's DNS server to become the best-performing one. One
mitigation strategy is to select DNS servers at random so that even poorly
performing ones are queried. Paul said that NetSec might be more
appropriate because things like RPKI are a best practice for anyone
operating a network. Cloudflare and others have had success in implementing
RPKI to prevent these kinds of attacks. It would help if CAs were required
to run their own infrastructure behind a network that is protected by RPKI.
Random selection for DNS lookups might be a NetSec issue, too.
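The contrast between performance-based and random nameserver selection can be sketched as follows; the server records and field names are hypothetical, for illustration only.

```python
import random

def select_latency_based(servers):
    # Typical resolver behavior: always query the best performer.
    # An attacker who overloads the legitimate authoritative
    # servers can make their own server the "best performing" one.
    return min(servers, key=lambda s: s["rtt_ms"])

def select_random(servers):
    # Mitigation discussed on the call: choose uniformly at random,
    # so even poorly performing authoritative servers get queried
    # and the attacker cannot steer selection via performance alone.
    return random.choice(servers)
```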
Tim noted that these methods may not be specific to CAs only, and that it
could just be considered good practice. Tim said that NetSec is
traditionally focused on protecting CA systems from external attacks, but
technical details of validation would be better addressed in the Validation
Subcommittee, and that is why we previously discussed multi-perspective
validation. Paul agreed that it would still be within the Validation
Subcommittee’s purview, and we could come forward with technical
requirements.
Tim noted that, as he recalled without reviewing the earlier discussions,
when multi-perspective validation was discussed previously there may have
been hesitation over practicality and scale that kept people from moving
forward with it as a requirement.
Dimitris recalled that we discussed requiring RPKI, which basically accepts
only signed route objects, and whether the CA needs to have its own
infrastructure for this and must follow this method, or whether upstream
service providers could be required by the CA to implement this. A CA that
does one or the other has a better chance of mitigating these types of
attacks.
Paul said it depends: if you use an upstream ISP, the ISP should use RPKI
not just for you as the CA but because it is an ISP, yet some ISPs might
say that they don't have that capability. The other challenge is that even
if you protect yourself as an authoritative source (so that no one can
impersonate you by intercepting and proxying responses), it is really about
the subscriber target you are checking; if they are not doing RPKI as well,
then the attack could still happen on their side. It might be good to make
these things long-term requirements. If it were a requirement for the ISPs
of CAs, it could add pressure on ISPs to implement it, because a CA could
say, "if you are not going to do this, then we need to go somewhere else."
Tim stated that it would be good to find out who is doing it, and that the
CA market might not be big enough to push ISPs in any significant way.
Paul said you can use RIPE Stat (https://stat.ripe.net/): enter your IP
address to see whether it is announced with an RPKI record, so it's really
easy to see whether your network is protected. More and more, RPKI is
becoming standard practice.
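Such a check can also be scripted against RIPEstat's public Data API; the endpoint path and parameter names below are taken from RIPEstat's published documentation and should be verified before use, and the AS/prefix values are only examples.

```python
def ripestat_rpki_check_url(resource, prefix):
    """Build a RIPEstat Data API URL that reports the RPKI
    validation status of a prefix against an origin AS.
    (Endpoint shape assumed from the RIPEstat Data API docs;
    confirm before relying on it.)"""
    return ("https://stat.ripe.net/data/rpki-validation/data.json"
            f"?resource={resource}&prefix={prefix}")
```

Fetching the resulting URL returns JSON whose status field indicates whether a valid ROA covers the announcement.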
Tobias asked how RPKI would mitigate these attacks.
Paul responded that the research paper describes how the attacker would
capture packets, re-encode them, proxy them, etc.
Tobias said that everybody needs to use it, though.
Paul said yes, but the attack was also implemented on the CA side.
Tobias said that if RPKI were deployed, then we wouldn’t need
multi-perspective validation to begin with.
Paul said the challenge would be how to get subscribers to implement RPKI,
but CAs need to step up and begin implementing it themselves, like
Cloudflare has done.
Dimitris said it’s partially mitigated unless everybody implements RPKI.
Wayne noted that we should not focus only on RPKI because it is just one of
the mitigations, and multi-perspective validation is better than the
single-point validation CAs perform today. There are things we could do
here, such as improving authoritative name server selection, and things
that could be done here or in NetSec to move from single-point to
multi-point validation. It is a matter of resources and time as to the
progress we can make on addressing these issues.
Tim said that looking at this in depth will be more helpful than just
having various people saying that CAs should be doing various things. Tim
will see if we can speak with the authors. Paul agreed that it would be
useful and that we should also be looking at which solutions are easier for
CAs to implement in the short term. We should look at the effort CAs would
need to expend to get us where we want to be (e.g., an analysis of the
costs, burdens, and benefits).
*Ballot SC-52 - Time Intervals for CRLs*
See discussion at
We discussed whether we should express time frames in terms of seconds in
the ballot.
Dimitris said he wanted a few more days to consult with his team before we
started a vote. Tim said that would be fine.
Wayne asked when the ballot discussion period would expire and whether we
needed to refresh the ballot or start voting now because we don’t want to
be voting during the holidays.
Tim said he would work on a refresh and put it out on January 3, 2022.
*CNAME delegation to the CA*
Discussions about CNAME delegation have previously occurred. They have
revolved around the purpose of the CA going through the effort of
retrieving a token that it put there, and whether that tests that the
customer still has the delegation in place at the time of issuance.
Wayne said he liked Tim's proposal, but his concern is that it changes the
requirement from having the applicant demonstrate control at the time of
issuance to demonstrating it once, at the time of initial issuance, and
leaving the delegation in place until it is revoked. His concern is with
renewals: where a CNAME has been delegated to a CA, the CA is going to
check that the CNAME is still there, but the applicant is no longer
involved. That differs from the other methods, where we require the
applicant's involvement.
Tim agreed that it is qualitatively different and that we should discuss
whether it is good and if there are concerns.
Wayne said that it makes automation of certificate issuance easier, as long
as it's secure. A random value is only valid for 30 days, so why can this
delegation remain in place indefinitely?
Tim said, “If there are problems with the method, the fact that the CA is
involved, … if there is anything going wrong, then there may be problems
with some of the CDN providers ... “ Do we need to revisit that, and if so,
what security analysis do we need to do?
Wayne said that the distinction, from the CA’s perspective, is that if the
CDN is requesting the certificate, they are the applicant, legal issues
aside. From a CA’s perspective, the applicant is demonstrating control,
but when the CA is doing it, that is not the applicant demonstrating
control. But what about Google, Apple, etc. getting certificates from
themselves? What if the organization is both the CA and the Applicant?
Tim said that is a known problem with the BRs today, similar to the
question of whether a CA can generate the keys if it is also the applicant.
Wayne said the problem is the argument that if you are the CA, then you
can’t also be the applicant.
Tim said we should fix these inconsistencies.
Corey noted that the requirement that the CA provide test websites with
certificates raises a similar question.
Wayne asked, in the case where the CA is not allowed to do something,
because it is validation, can the CA claim an exception because it is doing
it as the applicant?
Tim said the requirements contradict themselves on what to do when the CA
is the applicant.
Wayne asked whether this is a “fail-closed” situation?
Tim wondered whether CAs would have to revoke the certificates on their
test websites, and Wayne said it depends on what validation methods were
used.
Ben said that if we go too far down with the analysis, it gets to the point
of diminishing returns.
Tim said we should amend the requirements to address, “when the CA is the
Applicant” and specify that when it is doing CA things, it follows CA
requirements and when it is doing applicant tasks, it follows applicant
requirements, and when there is a contradiction, resolve it to say that the
CA need do nothing beyond what it was otherwise doing.
Wayne said that would solve the problem here, for the most part. A
clarification that any of these validation methods must be tied to a
particular customer account would also work.
Tim said it would add clarity, and one could argue that the "MAY" language
is useful because it indicates that the behavior is allowed by the CA. It
clears up the ambiguity of whether the customer is allowed to delegate to
the CA or not.
Dimitris asked how this account-focused part would work with ACME.
Tim said he thought this would work better with ACME because you already
have the account separation.
Corey said that the applicant would create a one-time CNAME for the ACME
challenge subdomain, pointing to the CA. If you're using a particular ACME
client, such as Certbot, it can handle the DNS and set up the DNS challenge
on your behalf. You could have the client skip that part, delegate it to
the CA, and have the CA publish the challenge. There is limited benefit, he
guessed.
Tim – The ACME protocol could be upgraded to handle this better. Because if
the ACME protocol allowed you to specify that the CNAME delegation was
already in place, then people could use that instead of the new URL.
Certbot would need to be updated to be aware of this method of validation.
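The delegation being discussed can be made concrete. Under RFC 8555, the DNS-01 TXT value is the unpadded base64url SHA-256 digest of the key authorization (token plus account-key thumbprint); with delegation, _acme-challenge.<domain> is a CNAME into a CA-controlled zone where the CA publishes that TXT record. A minimal sketch, with hypothetical zone names:

```python
import base64
import hashlib

# Hypothetical one-time delegation the subscriber adds to their zone:
#   _acme-challenge.example.com. CNAME example.com.validation.ca.example.
# The CA then publishes the DNS-01 TXT value in its own zone.

def dns01_txt_value(token, account_key_thumbprint):
    """RFC 8555 section 8.4: the TXT record contains the unpadded
    base64url encoding of SHA-256(token "." key thumbprint)."""
    key_authorization = f"{token}.{account_key_thumbprint}".encode("ascii")
    digest = hashlib.sha256(key_authorization).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

Because the CNAME follows DNS resolution transparently, the CA's lookup of _acme-challenge.example.com lands in the CA's own zone, which is exactly why the applicant's ongoing involvement at renewal is in question.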
Wayne suggested that the ballot be circulated for discussion.
Tim will propose a ballot. Ben and Clint will look at the draft ballot and
let Tim know whether they’ll be able to endorse.
*Certificate Profile Suggestions*
See Corey’s email -
Ryan Dickson said that he had argued previously to keep the requirement for
an AKI in self-signed root certificates. As he understands it, Windows,
MacOS, and maybe Adobe use AKI and SKI as part of path building. Chrome is
looking to move away from platform verifiers, so the concern for Chrome is
not particularly long-lived, and others have not voiced interest in keeping
the requirement, so he was fine with going ahead and removing it from the
draft ballot as Corey had suggested.
Ryan D. said that a second issue is cross-certificates requiring EKUs. What
is being changed? Isn't this more of a disambiguation? BR section
18.104.22.168 says the EKU MAY be present for what we are re-phrasing as
affiliated cross-certificates. "EKU MAY be present" is equivalent to either
presence or absence of the EKU. We need to be mindful of affiliated
cross-certificates and non-affiliated cross-certificates. For affiliated
cross-certificates, the proposed language is a "SHOULD." Functionally, he
does not see it as different; the only change is that for affiliated CAs we
are going from "MAY" to "SHOULD." For unaffiliated cross-certificates, the
current language says, "For all other Subordinate CA Certificates,
including Technically Constrained Subordinate CA Certificates: This
extension MUST be present," etc. Functionally, he does not see a
difference. How do other people interpret this?
Tim said that the European regulators and auditors follow DigiCert’s
approach to “shoulds” and require explanations for departures.
Clint said that CAs should justify departures from any “shoulds”, but it’s
still different from a “must”.
Tim acknowledged that the transition from a "may" to a "should" is a small
change.
Ryan D. said we’re trying to jam a lot into these profiles. The more we can
simplify, the better.
Tim said that on the next call we should be prepared to discuss what items
we can just ship because there aren’t any issues, and which are the ones
that need more discussion.
Dimitris said he thought that we had previously agreed to post the
profiles ballot without any normative changes and then have a version 2. It
is much easier to evaluate these kinds of changes in two steps. Some
auditors read RFC 2119 a little more strictly.
Trev agreed with Dimitris that we had previously discussed the transition
from “mays” to “shoulds” and that “shoulds” were considered best practices
and could eventually become “musts” and we should treat “shoulds” this way.
Ryan D asked whether we should remove the “shoulds”?
Tim said that some people in some standards groups take that approach. It
would be great if we could modify RFC 2119 to explain in greater detail
what “should” means.
Trev suggested that next year we look at the “shoulds”.