[cabfpub] Minutes from CA/Browser Forum Face-to-Face meeting March 22-23, 2017

Kirk Hall Kirk.Hall at entrustdatacard.com
Tue May 16 16:34:27 UTC 2017


Minutes from CA/Browser Forum Face-to-Face meeting March 22-23, 2017
Day 1 - Wednesday, March 22
Attendees: NOTE: The following list is of attendees who were present during at least one of the three days: Rick Andrews, Symantec; Ryan Sleevi, Google; Steve Medin, Symantec; Alex Wight, Cisco; JP Hamilton, Cisco; Jos Purvis, Cisco; Bruce Morton, Entrust Datacard; Tim Hollebeek, Trustwave; Eric Mill, FPKI (GSA); Deb Cooley, FPKI (DOD); LaChelle Levan, FPKI (GSA); Geoff Keating, Apple; Kirk Hall, Entrust Datacard; Chris Bailey, Entrust Datacard; Dean Coclin, Symantec; Gervase Markham, Mozilla; Tyler Myers, GoDaddy; Wayne Thayer, GoDaddy; Curt Spann, Apple; Chi Hickey, FPKI (GSA); Zhang Yi, CFCA; Franck Leroy, Certinomis (Docapost); Jeff Ward, BDO / WebTrust Chair; Don Sheehy, CPA Canada / WebTrust; Frank Corday, Trustwave; Moudrick Dadashov, SSC; Atsushi Inaba, GlobalSign; Arno Fiedler, D-TRUST GmbH; Cornelia Enke, SwissSign AG; Doug Beattie, GlobalSign; Li-Chun Chen, Chunghwa Telecom Co. Ltd.; Ben Wilson, DigiCert; Richard Wang, WoSign; Wei Yicai, GDCA; Ou Jingan, GDCA; Masakazu Asano, GlobalSign; Tarah Wheeler, Symantec; J.C. Jones, Mozilla; Feng Lin, CFCA; Jeremy Rowley, DigiCert; Robin Alden, Comodo; Peter Bowen, Amazon; Xiaosheng Tan, Qihoo 360; Zhihui Liang, Qihoo 360; Fotis Loukos, SSL.com; Leo Grove, SSL.com; Chris Kemmerer, SSL.com; Jody Cloutier, Microsoft; Andrew Whalley, Google; Phillip Hallam-Baker, Comodo (portions by telephone); Ryan Hurst, Google.
Working Group Reports
Policy Review Working Group
Note Taker: Ben Wilson
The Policy Review Working Group was formed to advise the Forum on comparisons and consistency between the CA/B Forum guidelines and industry technical standards such as RFC 3647 and NISTIR 7924.
Yesterday, the Policy Review WG reviewed Li-Chun's proposal to modify section 7.1.4.2.2 of the Baseline Requirements. In Taiwan, there is a pre-existing naming scheme operated by the government that does not include localityName or stateOrProvinceName. The current Baseline Requirements say that for OV certificates you must have one of these fields. A possible solution proposed during the Working Group meeting was to provide a carve-out for Taiwan, provided that the entity that is the subject of the certificate is registered in the government database; the official name of that database would be provided in the ballot.
The Working Group has also reviewed the use of the term "CA" in the Baseline Requirements. There are instances where it is used inconsistently and ambiguously in the BRs, and the group has been working on trying to clarify that, with a more recent focus on the term "Root CA". One approach may be to reduce the number of times we use the term "Root CA" and put the focus more on CA key pairs. Following the meeting yesterday, there has been continued discussion on the WG list about trying to be more consistent with RFC 5280 and other requirements and some of us are now reviewing those documents.
Other topics discussed yesterday included: "who or what signs?" (is it the CA, the private key, the certificate, or something else?); delegated third parties; affiliates; and subject identity information, including a discussion of adding descriptors and the question of when the organizationalUnit (OU) field can be used - in other words, is it mis-issuance if a CA puts text like "Domain Control Validation" in the OU field without other subject information in the DN?
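For illustration only (this sketch is not from the minutes): using the pyca/cryptography library, what an OU-only subject DN of the kind being debated looks like, together with a simple check for it. The helper name and the choice of O/L/ST as the "other subject information" are assumptions made for the example.

```python
# Hypothetical illustration of the Working Group discussion above: a subject DN
# that carries OU text such as "Domain Control Validation" but no organization,
# locality, or state, and a simple check that flags that pattern.
from cryptography import x509
from cryptography.x509.oid import NameOID

ou_only_subject = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, u"example.com"),
    x509.NameAttribute(NameOID.ORGANIZATIONAL_UNIT_NAME, u"Domain Control Validation"),
])

def has_ou_without_identity(subject: x509.Name) -> bool:
    """Return True if the subject has an OU but none of O, L, or ST."""
    def present(oid):
        return bool(subject.get_attributes_for_oid(oid))
    return present(NameOID.ORGANIZATIONAL_UNIT_NAME) and not any(
        present(oid) for oid in (NameOID.ORGANIZATION_NAME,
                                 NameOID.LOCALITY_NAME,
                                 NameOID.STATE_OR_PROVINCE_NAME))

print(has_ou_without_identity(ou_only_subject))  # True for the sample DN above
```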
Governance Change Working Group
Note Taker: Dean Coclin
The working group reviewed the final version of the charter, which had been posted for public input here: https://cabforum.org/current-work/governance-working-group/
We also reviewed the draft charter of working groups document that Virginia and Andrew had drafted. The next step is to take these and draft new bylaws. The group will endeavor to have something ready for the Berlin meeting.
Validation Working Group
Note Taker: Kirk
Jeremy noted that the Validation Working Group had held teleconferences every two weeks, and were working on a number of proposed changes to the Baseline Requirements and the EV Guidelines. He discussed several of the recent draft ballots, and said they would be introduced as ballots over the coming weeks.
Browser News
Apple
Note Taker: Chris Kemmerer (csk)
Presented by Curt Spann (CS)

  *   CA-issued SHA-1 certificates (Safari/WebKit) will be disallowed via a security update "fairly soon" (see also: https://support.apple.com/en-us/HT207459)
  *   A reminder: Apple will deprecate ALL SHA-1 signed certs later in 2017 (though enterprise roots can still be SHA-1 - see link above).
  *   A project for later this year: Apple will attempt to reduce the number of roots per CA provider held in the store.
Q from Rick Andrews: How will this removal/reduction of roots be implemented?
A (CS): Definitely will be reviewed first.
Q from (Dean Coclin?): Why is Apple taking this step?
A (CS): As a general management and security issue, to get a sense of *why* these are in the store.
Observation by Jos Purvis: Sounds like there is no set goal or specific number of roots to remove per CA?
Response (CS): More getting a 'lay of the land' for why these roots are in the store.
Contribution by Gerv Markham: Gerv made an offer of the CCADB to help manage Apple's root store, to general mirth.
Q from Dean Coclin: Are there dates for this root pruning process?
A (CS): No firm date for this - look for more information in late 2017
Later followup Qs from Peter Bowen (after other browser presentations): What are Apple's thoughts regarding 1) reduced certificate lifespan and 2) certificate transparency?
A (CS): 1) We're in favor of Mozilla's drive toward future reduced lifespan, no announcements or timeline at present. 2) CT: looking into the matter, also no announcements or timeline.
Followup response from Apple's Geoff Keating: Regarding cert lifespan, one year may be 'the WORST of all possible worlds' - too short for comfortable manual implementation, too long for efficient automation, best number may be shorter or longer.
Google
Note Taker: Peter Bowen

  *   Started experimenting with AIA fetching on Chrome on Android
     *   Does not include WebView
  *   Stop looking at common name in Chrome 58
     *   Enterprise policy can enable for non-public certs
  *   TLS 1.3 is in testing
  *   Phasing out trust of WoSign and StartCom certificates across the board
     *   Chrome 56 used a date-based check
     *   Chrome 57 has a whitelist, with full distrust by mid-2017
  *   Certificate viewer UI moved to the developer tools
     *   Chrome knows this is more difficult and is looking at ways to make it more accessible
  *   Chrome 57 includes basic support for Roughtime
     *   Used to improve SSL interstitials to help indicate to users that device clock might be wrong
  *   Chrome 57 rolled out "form not secure"
     *   Password or credit card fields on HTTP (not HTTPS) highlight that "this is not a secure form"
  *   Chrome 57 marks data: scheme URLs as not secure
  *   Google recently held CT policy days in Mountain View to discuss log operation and inclusion policy
     *   Ryan will be sending update
Q from Dean
When is Chrome going to start marking non-HTTPS sites as "non secure"?
A from Andrew Whalley
That is the intended end state, but the date is based on when the percentage of insecure page loads is low enough to mark them affirmatively insecure
Q from Wayne
Updates on CT for all certificates policy?
A from Ryan Sleevi
No updates at this time.
Q from Peter Bowen
Root program policy status?
A from Andrew Whalley
"We're hiring"
Microsoft
Note Taker: Doug Beattie
1. eIDAS best practices document is being developed:

  *   State-of-the-art document for technical best practices. eIDAS will reference this within their documents.
  *   Joint document by all root programs (Google, Apple, Mozilla) and will include rules, aspirations, etc.
  *   Target April release
2. Disable trust for WoSign and StartCom (for new certs, not old SSL certs) on April 25.

  *   Will get more aggressive about enforcing audits for all CAs, because we can turn off TLS trust until the CA resolves the problem.
3. SHA-1: Edge and IE will stop trust on May 9th. Backed off the February date.
4. June Program Requirements update. Will send out for review beforehand. Nothing major planned.
5. Common CA database

  *   MS and Mozilla both using it
  *   Reduce overhead for processing audit records.
  *   Submit documents, validate them, and then update the database. People are not needed to review each audit. Need to standardize on the format.
  *   Upload and parse audit attestation letters and validate content (in Azure)
  *   Want to standardize the audit letter
  *   Beta test in May/June.
  *   WebTrust is putting new standards in place for the content of these letters, which will help reduce failures.
6. Cert viewer in Edge: There is a task to do this, but no schedule
7. No plan to deploy a MS CT log server.
Mozilla
Note Taker: Rick Andrews
1. Policy 2.4 Shipped
We shipped Mozilla Root Store Policy 2.4, which makes our documented practice conform much more closely with reality. This update, which came after a long period of little change, was confined to "urgent" or "uncontroversial" changes. So it is not the end of the improvements in the policy.
Version 2.4.1 will be a reorganization of the document. This is released as a separate version to make the diffs of changes for later versions simpler. There is no intent to make any normative requirements changes in this update. We are currently discussing this version in the mozilla.dev.security.policy forum. Please help review the document to make sure that is the case. You have this week and next week to do this; I intend to ship at the end of the month.
2.5 will then start to tackle some of the bigger issues. These might include:

  *   Which parts of the policy apply to certificates issued under Technically Constrained Sub CAs;
  *   Which parts of the policy apply to S/MIME;
  *   SHA-1 deprecation timeline for S/MIME;
  *   Tighten up requirements around audit declarations and their scope and clarity.
It's difficult to have S/MIME-related conversations in the CAB Forum at the moment so we are using our CA Communication, about which more later, to try and begin a dialogue with CAs on that topic.
2. SHA-1 Support
Recently, coincidentally in the same week that the SHAttered result was published, we disabled SHA-1 support for all our users for public roots in Firefox 52 and later, including Firefox 52 ESR. "Disabled" means showing an overridable "Untrusted Connection" dialog. For now, it can still be re-enabled by setting the preference security.pki.sha1_enforcement_level to the value 4 (to allow SHA-1 certificates issued before 2016) or 1 (to allow all SHA-1 certificates). You can also set it to 0 in order to block all SHA-1 certificates (including those from non-public roots). We do not have any current plans to remove this preference, but we may at some point in the future.
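As an aside (not part of Mozilla's report), a minimal sketch of how a site operator might check whether a server still presents a SHA-1-signed leaf certificate, using Python's standard ssl module with the pyca/cryptography library (version 3.1 or later assumed); the function name is made up for the example.

```python
# Hypothetical helper, for illustration only: report the signature hash
# algorithm of the leaf certificate a server presents.
import ssl
from cryptography import x509

def leaf_signature_hash(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))    # fetch the leaf cert (PEM)
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    return cert.signature_hash_algorithm.name         # e.g. "sha1" or "sha256"

print(leaf_signature_hash("example.com"))
```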
3. Security UI
We have been working on displaying negative indicators when HTTPS is not in use, starting with high-risk situations. As of Firefox 51, Firefox shows a struck-through lock icon in the URL bar when the user visits a page containing a password element over HTTP. And as of Firefox 52, they also get an in-context drop-down warning. We have a bug open to add the ability to show a negative indicator for all non-HTTPS sites. At first, it will be disabled by default; we expect to enable it once the measured HTTPS percentage is high enough that the risk of warning fatigue is acceptable.
4. Queue for Public Discussion
The discussions are taking even longer than usual, because there are currently no resources to do the detailed review of the CP/CPS/audit statements that Ryan and Andrew were kindly doing previously. We are trying to figure out how to get the discussion process moving forward again, and apologize for the delay.
5. Revoked Intermediate Certs
We have everything in place to start an automated sync from CCADB to OneCRL; our engineer expects to be turning that on very soon. After that, certs marked as revoked in CCADB should be on the list very quickly, without the need for manual intervention.
6. CA Communication
Will be out shortly. The questions to be asked include these topics:

  *   Domain validation according to the ten approved methods (from Ballot 169)
  *   Yearly CP/CPS updates including version number revision
  *   Audit statement requirements
  *   Information about RAs who do domain validation
  *   Problem reporting mechanism
  *   CAA identifier list (concern about that getting out of sync)
  *   SHA-1 and S/MIME input sought
Look in your inbox by the beginning of April.
7. Root Store Community
We are attempting to get the "Mozilla CA Community" rebranded as the "Common CA Database", as other root stores come on board. We are working out how we can use data from crt.sh to add flags to the Common CA Database that will indicate if we need to check a record. Based on those flags (and double-checking the record) the Common CA Database will send email to the CA's primary POC and CC their email alias to let them know which of their records they need to update. This will flag:

  *   Disclosure Incomplete
  *   Unconstrained id-kp-serverAuth Trust
  *   Disclosed, but with Errors
  *   Disclosed (as Not Revoked), but Revoked via CRL
  *   Unknown to crt.sh or Incorrectly Encoded
This month, we are rolling out the new process for providing annual updates. Mozilla's next audit reminder emails (sent by CCADB on the 3rd Tuesday of each month) will point CAs to this new process for providing their updates. Note that in addition to audit statements, test websites (valid, expired, revoked) will also be required/collected/tested (as per the BRs). We've added Audit Archiving to CCADB -- so audits will be loaded into CCADB for permanent storage as audit archive records.
URLs related to the above:

  *   HTTP Password UI Demo: http://http-password.badssl.com/
  *   Negative indicator for all HTTP sites bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1310447
  *   Firefox release schedule: https://wiki.mozilla.org/RapidRelease/Calendar
  *   Expect Staple: https://docs.google.com/document/d/1aISglJIIwglcOAhqNfK-2vtQl-_dWAapc-VLDh-9-BE/edit
  *   Annual Update process: https://wiki.mozilla.org/CA:CommonCADatabase#How_To_Provide_Annual_Updates
WebPKI Futures
There will be a session on how each participant would like to see the WebPKI evolve in the next three to five years. This is a chance to give CAs a heads-up about our future thinking, and was suggested in response to the perceived out-of-the-blueness of Ryan's certificate lifetime ballot, where he claimed "we've been talking about this for 3 years".
In the next 3-5 years, Mozilla would like to see the following in the WebPKI:

  *   95%+ of websites available over HTTPS with PFS ciphersuites
  *   HTTPS being the default mode of browsers; HTTP connections specifically marked as untrusted
  *   Full publication for all issued publicly-trusted certificates, whether that's via CT, a related mechanism, or something else entirely
  *   Certificate lifetimes reduced to 13 months within 3 years, and 3-6 months in 5 years
  *   Much greater use of automation for certificate replacement, using standard protocols
  *   CAA fully deployed and implemented
  *   Multiple hash functions widely supported
  *   Better elliptic curves widely supported
  *   Initial support for post-quantum crypto algorithms
  *   No more reliance on live OCSP checks -- basically all certificates have MUST_STAPLE or are short-lived
Questions: What about CT? Mozilla is somewhat late in the process of engaging in CT discussions. While that process is working its way through, we're not advancing the client policy discussions. We're not coming up with independent CT policy at this moment.
Ryan: The feedback that was provided by Mozilla wasn't related to scalability.
Gerv: It was my understanding that it was.
Ryan: it was related to how quickly you can go from detection to enforcement. A 24-hour MMD allows a cert to go unlogged for that long.
Gerv: We commented on what's needed to take CT beyond countersignatures into not needing to trust the logs at all.
Ryan H: Mozilla has presented an alternate view of the near-term plan that has browsers doing audits of logs. Discussions are ongoing about how to get to the point of not needing to trust logs. In the near-term, what we have is still viable, but we're discussing how to get to not needing to trust logs.
Ryan: More discussion will be held at IETF next week.
Gerv: By the way, Richard Barnes is leaving Mozilla, so that may lead to some hiccups in Mozilla's continued pursuit of this topic.
Peter: Mozilla has spoken up about reducing cert lifetimes. Can you say more about that?
Gerv: We haven't yet had a discussion about follow-up to Ballot 193. My own opinion is that it's unfortunate that we didn't complete a proper discussion of whether the CABF agrees to further reduction. People don't seem keen to have that discussion. I'd like to see further reductions, and if the CABF agrees, we need to signal that well in advance to give folks time to prepare. I don't think 13-month certs require automation, but they are helped by it. I think 13 months should be the next step.
Ryan: As another proponent of reduced cert lifetimes, we're having a careful evaluation of the trust practices. For example, SHA-1 exceptions were allowed but not trusted. Independent of the forum discussions on issuance, we're trying to determine what's the appropriate level of trust?
Peter: Curt (Apple) what's your opinion on cert lifetimes?
Curt: I think we have similar thoughts. We'd like to reduce it, but we don't want to reduce it too fast and cause heartburn. As far as CT, I can't comment. We've mentioned before that we're looking into it. No announcements on timelines.
Geoff: We think that reduction to a year might be the worst of all possible worlds. That doesn't give you flexibility, and it's too short to be comfortable for those who are updating manually. It could be that the right number is much smaller or larger.
Qihoo 360
Note Taker: Zhihui Liang
1. Company introduction:

  *   Three-layer rocket model; top player in the antivirus and browser markets; many internet entry-related tools: gaming, startup page, search engine.
  *   Security research on operating systems and browsers.
  *   "Master of Pwn" at Pwn2Own.
2. Exclusive findings on self-signed certificates on Chinese websites

  *   18.3% of the top 10,000 websites use self-signed certificates.
  *   127 of the top 417 .gov websites use self-signed certificates.
  *   Some of the biggest state-owned banking organizations in China use self-signed certificates.
3. 360 browser root program

  *   Both the beta and stable versions were released in March 2017.
  *   Our users will get a security update by April 2017.
  *   Certificate errors are intercepted by default; traffic is redirected to a red stop page.
  *   Some high-traffic websites, like 12306.cn, will be verified by us; these sites are authenticated strictly by host name and digital signature, and there will be no warning or stop page.
  *   Top gov/edu websites are authenticated by us; when a user visits these sites the browser will show a yellow warning bar at the top of the page.
  *   The 360 browser will show a different padlock for EV/OV/IV/DV certificates.
4. Remediation Plan for WoSign & StartCom, update

  *   Separation of management and legal structure was done by Nov 29, 2016; now 100% owned by Qihoo 360.
  *   Separation of operations was done by Dec 1, 2016.
  *   Separation of systems is a work in progress; StartCom will have a key ceremony witnessed by PwC.
5. Q&A session
Q
Is there any need for CAs to apply to be included in the 360 root program?
A
We reserved the right to stop trusting the CA. If we find the CA has done a bad thing, its roots will be revoked.
Q
How is the five star safety level calculated?
A
EV/OV/IV/DV certs have been verified by the CA; we also have a website security department, and they have an algorithm to calculate it. Self-signed certificates will not get five stars in the browser.
Q
Why is a self-signed certificate website common in China?
A
In China, state-owned websites think they have the right to sign certificates, and everybody should trust them.
Q
If a website is on the white list, does that mean you never show an error for a self-signed certificate?
A
Yes, but we will check the fingerprint, if the fingerprint changes, we will block it as well.
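For illustration only (this is not 360's code, just a sketch of the general approach described in the answer): pin a site by the SHA-256 fingerprint of the certificate it serves and treat any change as a blocking condition. The pinned value and helper name are placeholders.

```python
# Hypothetical sketch of fingerprint pinning as described above.
import hashlib
import ssl

PINNED_SHA256 = "..."  # placeholder: hex SHA-256 of the expected DER certificate

def fingerprint_matches(host: str, port: int = 443) -> bool:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest() == PINNED_SHA256  # block the site if False
```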
Q
Is there any way to contact 360 to change the fingerprint?
A
Small websites can contact us by email. For the big ones, we don't think they will do it.
Q
Is the white list tied to your root program?
A
White listing is a way to minimize impact on self-signed websites with great traffic.
Q
This plan is for future versions or the current version?
A
White listing and the red stop page have already gone online in versions 8.1 and 8.2. A different UI for the padlock will go online in May.
Q
Is there any security standard for whitelisting? Will 1024 bit and SHA-1 certificates be white listed?
A
For content, we have a net shell security department monitoring the website for those sites. For weak cryptography certificates, we will look into it in the next quarter.
Q
Any plans for supporting CT logging?
A
We have no plans for supporting CT logs yet.
Q
Is the self-signed certificate whitelisting only for China?
A
Yes, it's only for Chinese websites.
Q
Which version of Chromium is the 360 browser based on?
A
We have two versions, one based on Chrome 45, the other on Chrome 55.
Q
Do you have confidence about the audit for the new infrastructure?
A
We have chosen PWC as our auditor, we will try our best to do the implementation.
Cisco Root Program
Note Taker: Peter Bowen
Jos Purvis, Chief Worry Officer
<slides available>

  *   http://www.cisco.com/security/pki/trs/ios.p7b
  *   http://www.cisco.com/security/pki/trs/ios_union.p7b
  *   http://www.cisco.com/security/pki/trs/ios_core.p7b
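For context only (not shown at the meeting), a minimal sketch of consuming one of the published bundles above with the pyca/cryptography PKCS#7 loaders (version 3.1 or later assumed); whether a given .p7b file is PEM or DER encoded is an assumption here, so the sketch tries both.

```python
# Hypothetical sketch: download a published trust bundle and list its subjects.
import urllib.request
from cryptography.hazmat.primitives.serialization import pkcs7

url = "http://www.cisco.com/security/pki/trs/ios.p7b"
data = urllib.request.urlopen(url).read()
try:
    certs = pkcs7.load_pem_pkcs7_certificates(data)   # if the bundle is PEM encoded
except ValueError:
    certs = pkcs7.load_der_pkcs7_certificates(data)   # fall back to DER
for cert in certs:
    print(cert.subject.rfc4514_string())
```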
Q
Why move away from trusting others' trust?
A
Not a browser; used for other non-HTTP purposes. For example, the trust store is used for SMTP and SSL VPN.
Q
Does using CCADB mean automatic trust for CCADB CAs?
A
Not going to pull CAs out
Q
Plans to make use of revocation info in CCADB?
A
Not at present
Q
Requirement for specific protocols? (SCEP, etc)
A
We want SCEP to die with fire
Q
Are you continuing to update intersection/union store?
A
Yes
Q
What about IPsec products? EST?
A
No default trust store. Cisco IOS 15 can pull the bundle.
Q
Mozilla has a public process for root program. Would you consider running as publicly as Mozilla?
A
We like transparency, but need to work with counsel to define our policy.
WebTrust Update
Note Taker: Kirk
WebTrust for CA Update
Jeff Ward and Don Sheehy provided an update on WebTrust for CA issues.
1. Current Status of Completed Projects - Changes since last Forum Face-to-Face meeting:
* WebTrust Principles and Criteria for Certification Authorities - Extended Validation SSL - Version 1.6 is effective for audit periods commencing on or after January 1, 2017. For audit periods commencing prior to January 1, 2017, auditors will use version 1.4.5.
* WebTrust Principles and Criteria for Certification Authorities - SSL Baseline with Network Security - Version 2.2 is effective for audit periods commencing on or after December 1, 2016.
* WebTrust Principles and Criteria for Certification Authorities - Extended Validation Code Signing - Version 1.4 is effective for audit periods commencing on or after January 1, 2017. For audit periods commencing prior to January 1, 2017, auditors will use version 1.1.
* WebTrust for Certification Authorities - Publicly Trusted Code Signing Certificates - Version 1.0 (new as of February 2017) is effective for audit periods commencing on or after February 1, 2017.
2. Current Status of Ongoing Projects
* WebTrust for CA (Principles and Criteria for Certification Authorities) 2.1 - current version 2.0 is being updated with some minor changes. These include:

  *   Introduction section is being updated to reflect digital certificates, CAs, browsers, etc., and to be less e-commerce centric
  *   Disclosures no longer refer to WebTrust Version 1 - ideally browsers want conformance to RFC 3647 but have not yet mandated it. Some major CAs still have not moved from RFC 2527.
  *   Updating for technological advances.
  *   Will adopt the CABF's view of Sub CAs, Intermediate CAs, and Issuing CAs.
* Practitioner Audit Reports - working with AICPA to release post-May 2017 reporting under SSAE 18. Canada and international reports are undergoing minor updates to approved versions under CSAE 3000.
* WebTrust - RA - revised drafting underway; will need CABF comments (got a good start in the working group meeting).
* Practitioner guidance for auditors is under development, covering public and private CAs. Draft expected later this year.
3. Some new and old issues
* Issues in Network Security leading to qualifications
* Cloud questions continuing to surface, as well as other third-party involvement, creating confusion and inconsistency on audit scope
* The attest/assurance standards are changing in the US and Canada
4. Audit reporting issues
* Consistency in reporting could be an issue
* As part of the reporting templates developed, WebTrust will provide a sample report that discusses each section of the audit report to provide guidance to the browsers [what they should be looking for, etc.]
* Possible creation of a transmittal letter?
* Publicly available qualified reports
5. Audit reporting issues - questions posed
SSAE No. 18 was released earlier this year. What impact will this have on WebTrust audits in 2017? The new standard should not impact the WebTrust examinations/audits. The reports themselves may look different or new words will be used, but the content will be more or less the same. There will be some additional requirements for the auditor, such as obtaining an understanding of internal audit or compliance groups as well as reviewing any audit reports issued. For the WebTrust examination/audit you should not need to do anything different or create any new documents. The standard change will have the biggest impact on Service Organization Type 1 reports.
6. Changes at CPA Canada (current information)
* CPA Canada staff members:

  *   Gord Beal, Bryan Walker, Kaylynn Pippo, Lori Anastacio
* Consultant to CPA Canada: Don Sheehy (Vice-Chair)
* Task Force Members and Technical Support Volunteers:

  *   Jeff Ward (BDO) - Chair; Daniel Adam (Deloitte); Chris Czajczyc (Deloitte); Tim Crawford (BDO); Reema Anand (KPMG); Zain Shabbir (KPMG); David Roque (EY); Donoghue Clarke (EY)
* Reporting Structure/Roles

  *   Gord Beal - WebTrust coordinates Guidance and Support activities of CPA Canada
  *   Bryan Walker - seal system responsibility, licensing advisor
  *   Don Sheehy - Task Force and CABF
  *   Jeff Ward - Chair of the WebTrust Task Force and primary contact
  *   All Task Force members provide WebTrust services to clients
  *   Volunteers are supported by additional technical associates and CPA Canada liaison but report to CPA Canada
ETSI and eIDAS Update:
Note Taker: Connie

  *   Slides were confirmed by ETSI members last week
  *   The audit scheme is based on ISO 17065 for conformity assessment; audit bodies must be accredited within the framework of EA or IA
  *   ETSI provides precise audit criteria
  *   For the full range of detailed norms and legal requirements, refer to Arno's presentation
  *   The latest standards, released last week, can be found at http://www.etsi.org/standards-search
  *   New ESI activities
     *   AdES signature validation services
     *   Signature creation services
     *   Registered e-delivery services formats and CPs
     *   Long-term (signature) preservation
     *   Workshops in Washington and Tokyo
  *   Discussion about the trustworthiness of audit reports; Jody says there are CAs issuing SHA-1 certificates with a clean audit report
     *   Arno answers that the algorithms defined in ETSI TS 119 312 are mandatory
  *   ACABC will give a presentation at the next CABF F2F in Berlin
  *   Discussion about key transparency for certificates and EU data protection requirements
Guest Speaker: Eric Mill, GSA 18F
Note Taker: Eric Mill
Eric Mill from the General Services Administration gave an invited talk titled "Security and the enterprise, as seen from inside the U.S. Government". Eric introduced GSA's Technology Transformation Service and its 18F team and some of its work.
Eric discussed the U.S. government's work on making HTTPS and HSTS the default for publicly accessible services, including the White House's formal HTTPS policy for the executive branch, GSA's and DHS' collaboration in scanning government services to support this policy, and the progress seen so far from these efforts. Eric described a new recent supporting initiative from GSA, where the .gov domain registry (a program of GSA) will begin submitting newly registered executive branch domains to supporting web browsers to be preloaded as HSTS-only.
Eric shared a series of issues in the U.S. government that inhibit strong enterprise security, which include a resistance to information sharing between agencies and with the public, and an overwhelming emphasis on compliance without an equivalently strong grasp of engineering fundamentals. Eric described the government as theoretically intending to use compliance as a starting point for security, but in practice overlaying such intense layers of compliance that it effectively becomes a stopping point.
Eric concluded by stating that the biggest issues facing enterprise security, as observed in his work in the U.S. government, were a lack of automation of operational processes, as well as a lack of technical expertise (particularly software engineering) in places of authority and key operational/policy positions. Eric encouraged the audience to favor automation in their own systems and in those of their customers, to elevate technical expertise within their organizations, and to take advantage of the Forum to achieve these goals.
Process for Adoption of Post-SHA-2 Algorithms
Note Taker: Gerv
Speaker: Phil Hallam-Baker
We need to be a bit more agile than we were with the SHA-1/SHA-2 transition
The old system didn't really work:

  *   SHA-2 proposed by NIST in 2001
  *   SHA-1 known to be shaky even at this time
  *   Only implemented in browsers in 2005
  *   Transition completed in 2017
  *   CAs and browsers were waiting for each other
And now NIST is no longer an authoritative source; USG not seeking to lead, Snowden, Dual EC
There needs to be someone who holds the speaking stick and drives the process, and CAB Forum is the best choice
What do we need?

  *   One default algorithm for each required function (signature, digest)
  *   One backup algorithm deployed as "hot standby"
IoT means we need a 10-year planning horizon
Proposes that the CAB Forum take on an endorsement role - not developing or writing standards, but endorsing those of NIST or the IRTF
Current priority: SHA-3, because we only have one hash algorithm (SHA-2), and it's based on the same design as SHA-1 and MD5
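As a side note (not part of the presentation), a minimal sketch of the kind of algorithm agility being argued for, using Python's hashlib, which already exposes both SHA-2 and SHA-3 family digests; the function name is made up for the example.

```python
# Hypothetical sketch: keep the digest algorithm a parameter so callers do not
# change when a deployment moves from SHA-2 to SHA-3 (or a future standby).
import hashlib

def digest(data: bytes, algorithm: str = "sha256") -> str:
    return hashlib.new(algorithm, data).hexdigest()

msg = b"to be signed"
print(digest(msg, "sha256"))    # current default (SHA-2 family)
print(digest(msg, "sha3_256"))  # hot-standby candidate (SHA-3 family)
```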
Gerv: need 2 actively-used algorithms, not one in use and one "hot spare". (Ryan H endorsed this view.)
Ryan H: are we really the right people to be doing the picking? We aren't cryptographers.
PHB: Yes, if we consult the right cryptographers.
Ryan H: Post-quantum crypto is not yet advanced enough for us to pick an algorithm.
PHB: Yes, we can't do anything useful here at the moment.
Tim H: We do have some info and opinions about who the experts are. But there is fanboyism. CAB Forum _could_ identify what our specific needs are, and communicate those better. Both NIST and X9F1 have open processes for this.
Ryan S: Don't agree with the premise that we need to consider IoT. We need to consider what the set of acceptable algorithms is for issuance. What will browsers implement? We need to determine what algorithms meet the minimum security goals of the forum. Allow the normal compatibility cycle to push convergence. Also not sure who the market is, as WebPKI serves many constituencies.
Gerv: Are we actually amplifiers rather than endorsers? We amplify noises made by groups of cryptographers.
pzb: We shouldn't get ahead of "running code", in the IETF sense. Need to hear from TLS implementers - browsers, Cisco, Amazon...
Andrew W: Implementation gap is not just browsers and CAs, it's HSMs, FIPS, CA software, etc. So practically, if we are talking to vendors of these products, ask them what their plans are for this.
Ryan H: We are a huge voting bloc for HSM vendors. HSM vendors would do something if everyone in this room jumped up and down. More is not better, though. The group could have a philosophy and principles saying that you need something that works for embedded devices, something that's post-quantum, but not necessarily make any choices, instead relying on CFRG or similar.
Chris B: It would help, when talking to HSM vendors, to be able to point back to something as official.
Kirk: Suggest emailing a group 3x a year asking for things we need to know, recommendations, etc. A formal enquiry process.
PHB: Some of the things like OIDs need to be cut, and only in one place. But when PHB asks for them to be cut, the question is "who wants to use them"?
Future of Web PKI (Part 1)
No Minutes.
Day 2 - Thursday, March 23
Attendees: See list for Day 1.
SHA-1 Deprecation and Exception Process - Lessons Learned
Ryan Sleevi presenting. Note Taker: Robin Alden
It's 2017. We have deprecated SHA-1. Everything was OK, and it was a successful transition which should be the model for all future transitions! (not!)
It could have been worse.
Who thinks it went too fast?
Jeremy Rowley: Lack of communication early on in the timeline.
Who thinks it went too slow? (several hands)
Bruce Morton: Started too late.
Who thinks it was the wrong way to handle these deprecations? This is probably not a model we should use for every deprecation.
Chris Bailey: A lot was right. I agree about timing issues.
A lot of CAs send out communications and customers don't read them.
Google UI change? - The pain of the lower validity period made customers start to take notice.
Ryan: I think we'll come around to that.
Getting a sense of the room. We know there's pain.
We (browsers) are not entirely pleased with how it went.
Apple - will drop SHA-1 soon. Microsoft - nearly there.
CAs - SHA-1 exception process was painful.
OCSP / SHA1 - not sorted out in time.
Site operators and users were unhappy because of the issues getting SHA-1 certs, and the user experience.
We should take a look at what went wrong, what went well, what we got lucky on, and what we should do differently.
Yes this was an algorithm shift, but it could be a changed certificate profile next time - the transition process is the same. This is not just about Algorithms, it is more a forum process question.
We are familiar with post-mortems. No blame, just analysis.
What should we do differently?
Bruce: we did 1024 to 2048 and that was successful. Ryan: Browsers didn't turn off 1024
Peter Bowen: As a consumer of certs, one thing that's different with SHA-2 from the 1024/2048 change: it turned out that a lot of devices just didn't support SHA-2.
(Peter gave an Amazon-internal example)
Let's say that RSA becomes factorable; we might have the same problem going to ECDSA certs because some things don't support it (although many do). We didn't notice things broke until we started testing them.
Ryan: Agreed, no one knew all the things that would break.
Peter: We got lucky because we were running a private CA (for his internal example). The more devices that use the same trust store, the bigger the problem gets, e.g. payment terminals. Alex: They lacked agility. Peter: Some devices need 25-year certs with an unchangeable protocol. OK, but then they can't also use the same trusted public roots.
Ryan: A number of CAs pulled roots to support legacy certs. A fairly nasty option. E.g. Cloudflare. Tarah Wheeler: E.g. medical devices. Longevity of device usage is often tied to the lowest tech capability of the organizations that support them. These same places also get burdened with the education needed to operate and update them. If it's a question of money, you CAN'T get them to do it. That is the actual long tail of why it's hard to get some devices updated. The people that are least able to understand the security are the ones who need it the most.
Ryan: We didn't know what the edge cases were. On MS Windows, when Chrome tried to turn off SHA-1 we hit edge cases.
E.g. google.com. It used a certificate from the Symantec GeoRoot (CA) which had been cross-signed by the Equifax root - a SHA-1 cross-signature from a root with a 1024-bit RSA key. The Equifax root was not actively used. The GeoRoot is not shipped with Windows 7 RTM; it is downloaded on demand through the AuthRoot update.
Because it was not part of the baked-in binary, when we turned off SHA-1, users got a message that there's a SHA-1 cert for google.com when on a Windows 7 RTM client with no auto-updates applied. We only saw this problem well after the product launched. This example speaks to a lack of knowledge of what the PKI looks like and what the edge cases are.
Rick Andrews: There was a perception that CAs weren't pushing customers to deploy SHA2 certs. As long as there are browsers that don't support SHA2 - the choice for the customer is "Would you like a modern certificate that doesn't work everywhere, or a SHA-1 certificate that works everywhere but may be weak." It is a no-brainer for most customers. They want the certificate that works. When almost all clients support both, it is easy to offer the new algorithm. The choice is seen as forward looking and strongest vs legacy.
Ryan: Using that as an example of what went wrong, is the problem that you know it's not going to work?
Rick: The problem is that there is empirical evidence that there are a few pockets where it won't work.
Ryan: For the SHA-1/SHA-2 transition we had a SHOULD in the BRs. We don't know where it's not going to work. Millions of devices?
Rick: It is true of browsers and of other devices.
Peter: Why would you want to break any customer if you don't have to? We knew SHA-1 was weak for a long time. There is a cost-benefit tradeoff. We knew we were cutting a long tail off. We knew there was a tail of SHA-1 only kit out there. The economic reality is that for customers with an e-commerce platform that sells goods, you have to do that balancing. TLS is not for long term protection. I need to understand how long I am protecting data for. 50 yrs, or to 2030, or a few hours. There are different trade-offs to make in each case.
Gerv: It strikes me that the best way to avoid the long tail is to get the core libraries to implement new algorithms 15 years before it becomes essential.
Ryan: Lets keep looking at what went wrong (first).
Gerv: Something that went wrong: Crypto library providers were not sufficiently proactive in implementing new algorithms. A system that deals with non-updating devices is better than one that doesn't. SHA-2 came with WinXP SP2. SHA-1 was defined in 2001(?). Windows XP SP2 was 2009(?). It would have been nice to have it earlier.
Ryan: Summary: Devices did not support SHA2. Chrome was 'lucky' because they could backport SHA-2 to Windows XP because it was pluggable.
More examples of edge cases: StartCom had an intermediate, StartCom G2. Eddy prepared two versions, one SHA-1 and one SHA-2, to give customers the choice. The issue was an edge case: with every platform's chain-building library, you didn't know what path it was going to build.
DigiCert had a similar problem on Mac OS.
Summary: We don't know what chain the products are going to build. Tough to rely on undefined behaviour.
Another example: legacy root removals. You don't know what the path is, so you don't know the effect of a legacy root removal. Red Hat, Ubuntu, etc., are still shipping 1024-bit roots because OpenSSL, up to 1.0.2, still prefers the longest chain possible. This is a problem for future deprecations.
Ryan: trying to figure out all the ways.
PZB: What went wrong and why, and what do we do in the future? We need to be looking at what we want to change. Great that Phil spoke yesterday about hash algorithms. EdDSA - nothing supports it yet! How do we mitigate this going forward?
Ryan: Equally applicable to introduction of shortlived certs, or for a change in the profile of OCSP or CRL.
Deb: We deprecated SHA1 in March 2006. In the late 90's, SHA to SHA-1 was an easy transition. SHA-1 to SHA-2 has been horrendous. Someone comes and says 'people are dying' - we say 'OK'. We have teeth, but not many. We can advertise when things should be deprecated. Have a hard time making it happen.
Gerv: This is what we were discussing yesterday. We can amplify other signals. If NIST are making noises, we can amplify them.
Ryan: I don't think amplification went well. We laid out the plan in 2013. Some CAs had reseller APIs that could not issue SHA-2. For a number of CAs the hierarchy was not in place, and the APIs didn't offer the option. There were hoops that subscribers had to jump through, e.g. get SHA-1 first, then generate a SHA-2 CSR (which was apparently hard). Examples on Eric's SHAAAAAAAAA site. Amplification started 2012-14. Maybe that's how long it takes, or maybe we should be doing more.
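As an aside (not from the minutes), a minimal sketch of the step described above as "apparently hard": generating a CSR whose signature uses SHA-256, with the pyca/cryptography library. The subject name is a placeholder and a recent library version is assumed.

```python
# Hypothetical sketch: create an RSA key and a SHA-256-signed CSR.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.com")]))
    .sign(key, hashes.SHA256())  # the hash algorithm used for the CSR signature
)
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```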
Gerv: Maybe we should have requirements for 'required' algorithms - not just 'permitted'.
Ryan: What are we going to do differently? Where are the most pain points, where are the levers to lean on?
Wayne: What are the three things that went worst?
Ryan: 1. One of the biggest is that the ability to find out what is not going to work was incredibly difficult. If you want SHA-2, you can't get SHA-2. If you deploy it, how do you get feedback? How do we explain WHY and what we are trying to do? Can we sign SHA-1 OCSP responses? Can we sign SHA-1 CRLs? 2. Another big problem is that there's a lack of information and education as to why things are changing.
3. (Biased to the browser side.) One of the biggest challenges is that understanding path-building is a black art. Many CAs probably don't know the paths that are going to be built. Payment gateway vs. browser with updates vs. browsers without updates. There is concern with the proposal to put EKUs into intermediates - what will it break?
Jeremy: Some CAs didn't have APIs, for example. Would it help to make a list of the causes?
Ryan: Yes. We're overrunning, but I'd love to figure out, for example, issuing SHA-2 by default. Because it will break things: it increases support costs and user frustration. Is there a world where CAs issue the new algorithm by default? How do we push the ecosystem forward?
Jeremy: we turned it on by default in 2013. Partners and resellers kept pushing SHA1 by default through their APIs. Pushed the deadline to get them to change their code.
Ryan: So you were pushing SHA-2 - but resellers etc. hard-coded SHA-1. Jeremy: Yes. We emailed them - no one reads the emails.
Ryan: Not something we'd put in the URL bar. We have the amazing crawling engine. We can motivate people through their advertising.
Kirk: Path building - Do we need to go back to the standard to shore that up?
Alex: Shortening cert lifetimes will do nothing but help. The shorter the lifetime, the quicker people discover what breaks. There's a lot CAs could do to push new tech to their customers. Push the new. Pain points in different platforms point to algorithm agility. Is there an approach or framework we can think about to have library writers build in agility? SHA-2 was a first big transition. Learn from it. Algorithm agility - library agility.
Deb: Did you ever consider giving both a SHA-1 and a SHA-2 cert? Ryan: Some CAs charged more (or again) for the SHA-2. Some CAs required manual contact or a support call to get the new one. Deb: So if we required both... Ryan: Look at CloudFlare - SHA-1/SHA-2 cert switching to serve a cert the client could handle.
Gerv: Alex said libraries are like concrete. Independent of the idea of a transition from SHA-2 to SHA-3, Heartbleed shows that set-in-concrete libraries won't work. It would be nice to tell people they can't deploy OpenSSL in a device that has to work for 5 years without updates.
PZB: We basically have one kind of cert: server auth, pretty much usable anywhere. Cisco - yesterday's presentation about their root program was interesting. They found some trust list and shipped it. Then they say to CAs - we need a cert because it's the only thing a Furby supports. Is there a way we can separate off people who like updating from those who won't? How do we isolate the risk from this stuff? Common Names - there will be customers who can't take that. How do we continue to support them?
Ryan: Fully sensitive to the reality of 'can't issue' because of this. So many devices have baked-in root stores. Developers may be 3-person teams; it may be a medical device with limited resources to develop. It may be a mom-and-pop shop that can't take the cost. Don't issue vs. don't trust. The trajectory of modern crypto is that stuff will change. I can't tell you what to do with your product (although I can shame you on Twitter). How do we allow terrible nasty things to be issued where needed, while protecting the web?
Deb: talking about shortening cert times, CA and root CA cert lifetimes are also too long.
Gerv: The fact that this forum was instrumental in pushing the payment industry to better crypto was an upside.
Ryan: What we did cost a lot of time and forced people towards less optimal solutions.
Gerv: I wonder if we can use the SHA-1 experience to build a dependency graph: if we fulfill all these tasks, then we can switch. The fewer things we address, the more pain there will be. That means we can analyze progress and push the right levers.
Ryan: I would push back - one of the big things is the unknowns and the lack of involvement of some parties. You can call them fools for not updating, but they're not here.
Geoff: Talking about incomplete knowledge. MD5-to-SHA-1 didn't go so badly. Before MS made their announcement, SHA-1 was strictly better than SHA-2 in every way (as a cert owner). With MD5-to-SHA-1, you probably wanted MD5 unless you were talking to a government or other edge cases where you needed SHA-1. So there were some SHA-1 certs out there all the time. That helped flush out ecosystem problems.
Ryan: I challenge that. The MD5 transition was terrible. We know it was broken in 2006, but it wasn't turned off until 2016. It broke every school in America.
BlueCoat - if you wanted to get the SHA-1 version, you had to update your support contract (and pay the back support costs). We thought - surely turning off MD5 will be easy, software supports SHA-2 - but everything was really painful.
Geoff: Apple has not yet announced when it will turn off SHA-1 in everything - because it's scary down in the details.
Ryan: OpenSSL will still accept MD5 as 'secure'!
Should we have a workgroup? I don't know what the best medium is going to be, but we need to figure out something.
Chris Kemmerer: Organizationally, the CABF has working groups - this sounds like a WG task.
The Role and Relationship of the Forum
Note Taker: Andrew
Are the documents we produce legal documents or technical standards, and how does that influence how we handle participation and how we manage things?
Do we provide feedback or guidance to these documents? If they are legal documents then it's up to the courts to interpret, we just provide the framework and the words, and it's up to auditors to work out what we meant.
Or should we have a process more like a technical standard where you file errata to say "I don't know what this means"
Are we slightly different from the world at large because we assume that people here have good intentions?
Most of the organisations assume good intent. Is there a consistency of technical skill and knowledge to define specific phrases, and interpret them in an appropriate context? E.g. "a certificate signs a certificate" - what does that actually mean, who does that signing?
Also a question, when talking about a document like the BRs, we're talking about trusting issuance. The BRs set prohibitions on what you can issue. You should not issue anything that looks like this. Are we trying to say this is the best, or the minimum? How do we gain feedback? Is the discussion about what root programs are going to require something that should be discussed in the forum? Or is that something that root programs announce in the forum.
Question of the future of the web PKI, what are we trying to do here? Is this meant for browsers to declare proclamations and to deconflict those things with other browsers? Is it for CAs to say we don't think this should be issued, this should not be trusted. In the past year there have been a variety of conflicts in approach and information shared and how we should use that information, so what are we trying to do?
The original intent behind the forum was to stop things getting worse.
Ok, we stopped it getting worse, should we try to make it better? Is this the venue for that?
Deconflicting browsers is one of the top values of the forum. There is a growing number of root stores. We've seen the root programs that are out there become more formalised, e.g. Microsoft doing their contracts a while ago. We also work with the PCI Security Council, and although it's a very different industry it has a similar dynamic, as it's a whole load of brands who all want to be able to use the same payment terminal, so they are trying to create standards that all the payment brands will accept. The brands are the only voting members, but there's a second tier made up of acquirers, merchants, etc. who do a first round of "is this the direction we should go" and then send it to the brands for approval. Slightly different model, but there, as here, the value is coordination.
PCI is a great example of a different power dynamic or a different relationship and it gets to the point of what is the PCI council trying to do and what is the forum trying to do. Trying to decide on the "thou shalt not issue" points, stop doing this terrible thing, and then there's the question of "is this thing not good enough any more", and how do we have that discussion. It's separate from not allowed to, just that it's not good enough. SHA1 was a good example of communicating intent and then we implemented the UI change. That wasn't a forum vote on the UI, we just saw it as the right direction for our product.
Earlier in the presentation, there was a question of whether we could provide guidance, if it's hard for people to understand the requirements or they need clarification. Perhaps others have a position of not interpreting things because they want to stay out of trouble. But that shouldn't stop us proposing ballots to clarify things, and maybe provide guidance on how things should be interpreted. That could be done on the public list so people could search for it there, rather than having to create a new section of the website dedicated to supplemental advice etc., which would create a lot of overhead.
I heard a director at NIST say the CA/B Forum was a model of how industry collaboration could work and a spectacular example of how things could be done the right way. And I love that because it says we can be flexible, and can set our own standards and govern ourselves. There are regulatory questions as to whether we're a technical or a legal body. And keeping things free-form can help us to maintain a lot of the purpose. Collaboration and cooperation is a great way to go.
On the idea of the role, when somebody says there's a bad thing - do we all agree it's a bad thing? Is the role to assess the risk and agree what's bad? Or is the idea to come up with standards that address risks that are defined elsewhere? At the moment I think we do a bit of both. Does that cause slowness?
Not sure if it causes slowness but it definitely causes friction. The BRs are a big list of "thou shalt nots", and to get a thou shalt not there has to be some agreement. Is a consensus ballot on the forum a sign that "this is good" or just "this isn't that bad"? I think that touches on ballot 193. Should we put out ballots to see how many people hate it or should we send out a mail to the list to give people a heads up that something is going to be a program requirement in three months? How should we approach those discussions? And then there's the specter of things like eIDAS.
One of the reasons the forum was founded was to encourage collaboration between browsers' root programs. Even if something is definitely going to be a root program requirement, it's good to see if it's something the forum wants to adopt and take a first shot at. But it's nice to get all the browsers to give their input so we don't get inconsistent policies. And putting things forward as a ballot first isn't something that the forum has traditionally done.
Though a ballot that's put out with the assumption that not all of it is realistic is a good way to shake out what parts are unrealistic and why. E.g. x months vs y months. With many proposals there are lots of knobs and dials. Like doing CAA with walking or not.
There's the way that Mozilla has done it, using their own public list to propose something and give an FYI to the forum - is that a good approach? Should it go to the forum first and then to Mozilla's list? Mozilla likes to do things publicly, and within the forum the most important list is the questions list, which can't be searched publicly. And that's why one of my first replies to a question is "can I post this question or reply over to the public list so it can be made searchable?" Should we make the questions list public?
Yes, we should probably make the questions list public. And I do think we are responsible for providing guidance, it's not like this is a legislative session where we can't. And there's a designated responder.
One recent example concerned questions about CSRs. What is the process we use to find the answer? The correct process, at least in the bylaws, is that somebody sends a question to the questions list, there's a proposed response, and the designated responder sends the response after some period. If we say that's a legal document, that's not consistent. But if it's a technical document then we should be tracking errata and do what other bodies do, which is maintain an archive of ambiguity:answer, and we'll fix that after we've banged out the details.
Yes, we should maintain that archive.
Thank you for telling us that things are confusing, and good luck!
So what are the next steps? Should there be a bylaw change?
Before moving on to that, one point to make - might be shared by several CAs. When you see a question, and the CSR is a good example, where we look at the question and think "I have an opinion on this, but if I'm wrong, I could be toast". So I'm not going to say anything at all and see what the answers turns out to be and then I can assess the impact to me. When something comes up where there's a big area of ambiguity nobody wants to put their foot in it and admit that what they are not doing might not be compliant. So many something like, when there is an area of ambiguity identified and it has been resolved, there's some period, maybe 90 days, where everybody has a chance to fix the problem if they are doing things differently. Can do that through the balloting process, but that seems painful. Could do it through the errata process, as part of declaring an errata.
This is a Mozilla-type thing, e.g. we found a policy violation X. Nobody but one party interpreted it that way, so that interpretation goes against the common understanding. Assuming good intent, how did you come to that conclusion when everybody else reached another? That's an open challenge, since we're talking about compliance, on how to figure this out. When the areas of uncertainty are with auditing, can auditors talk about it? It's one thing when they are unsure and can bring it up with the task force.
Can we solve the technical document vs. legal document issue?
Why are we discussing legal vs technical documents when the topic is the role of the forum?
Let's back up for a second. Is the forum a standards-defining organisation? Standards-defining organisations do things like provide guidance on the documents they produce, and they process errata. Or are we just a group of people, and the standards-defining bodies are ETSI and CPA Canada, so what we say doesn't matter and we just produce guidance to those bodies? Or are we creating things that become part of contract law, such as the BRs being appended to a legal document? If so, then our respective legal teams would have an opinion on whether we could have an opinion on, e.g., whether FIPS should be turned on. Are we going down the standards path or the legal path?
All technical standards documents end up being a legal document to somebody somewhere. There's a contract saying you need to build a bridge according to some ISO standard, and eventually somebody will end up in court arguing they can do this or that according to the standard, and it's for the court to decide. Another interpretation of the forum is just the root stores telling CAs what to do with a unified voice. And if you look at it that way things become even stranger, because different browsers have very different ways of telling CAs. The concept of the CA/Browser Forum was always to provide a way for browsers to create requirements and a way for CAs to say that's impossible. So it's not really legal or technical, but it's much closer to technical. Did that help?
Wrapping up: There's some consensus that there should be a way to collect feedback on confusion, which can provide amnesty. Maybe CAs could launder questions through their auditor, who could pass them to CPA Canada, who could raise them to the forum, to provide a degree of anonymity. Can we define a process to collect and publish this, and a process for when only one CA says "no, this is totally OK" and 99 CAs say otherwise? The open questions are whether we need to formalise any of that consensus, and whether, if the forum adopts X, it should mean that anything less than X is terrible or that X is the best we can be doing.
Mozilla Proposal: Forbidding Delegation of Validation to Third Parties
Note Taker: Jos
Gerv offered that Mozilla is considering a new program requirement that 3.2.2.4 and 3.2.2.5 validations must be done by a CA or by an RA who is an affiliate of the CA, and would like to know if this would be a problem for CAs. He clarified that this means 3.2.2.4 (domain validation) and 3.2.2.5 (IP ownership validation) using one of the ten (or, later, seven) "blessed" methods of validation, and using the Baseline Requirements definition of an "affiliate", including the definition of "common control". It would cover every end-entity certificate issued by an issuing CA that is controlled by a particular authority in the Mozilla store: that is, the controller of that immediately-issuing CA, or one of its affiliates, would be required to do the validation. Gerv was asked about creating this as a ballot, and responded that he'd like to, as other root store programs seem to think this might be a wise idea, and asked if there would be objection to doing this at the forum level as opposed to the Mozilla program level. Later discussion settled firmly on the idea of doing this as a ballot, which Gerv, Peter Bowen, and others will collaborate to put before the forum in the near future.
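For reference, a minimal, purely illustrative sketch of one of those methods (the "agreed-upon change to website" check): the CA gives the applicant a random value, the applicant publishes it under /.well-known/pki-validation/ on the domain, and the CA fetches it. The file name and function names are assumptions for illustration, not text from the Baseline Requirements.

    import secrets
    import urllib.request

    def issue_challenge() -> str:
        """CA side: generate a random value to hand to the applicant."""
        return secrets.token_urlsafe(32)

    def check_http_challenge(domain: str, expected_token: str) -> bool:
        """CA side: confirm the applicant published the token under the domain being validated."""
        url = f"http://{domain}/.well-known/pki-validation/cabf-demo.txt"  # file name is hypothetical
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return expected_token in resp.read().decode("utf-8", "replace")
        except OSError:
            return False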
Kirk asked what problem was being solved with this change. Gerv replied that CAs are outsourcing domain validation to companies that were doing a bad job. Peter raised the objection that there are frequent cases where an organization owns a base domain (e.g. 'example.com') and then has an agreement with the CA to designate someone internal to do validation below that level. After some discussion, the consensus seemed to land on the CA or its affiliate being required to validate that "example.com" is an entity in 3.2.2.4 terms, and then what happens below that tree is part of the relationship between a CA and its customer, with the need for that "example.com" validation to be re-done every 825 days as per ballot 193. [Considerable discussion about this 825-day change occurred, which led to the creation of ballots 194 and 197...]
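A small sketch of the 825-day reuse window described above; the cutoff follows the Ballot 193 discussion, and the function name is illustrative only.

    from datetime import datetime, timedelta, timezone
    from typing import Optional

    MAX_VALIDATION_REUSE = timedelta(days=825)

    def validation_still_usable(validated_at: datetime, now: Optional[datetime] = None) -> bool:
        """True if a prior validation of the base domain may still be relied on.

        Both datetimes are assumed to be timezone-aware (UTC).
        """
        now = now or datetime.now(timezone.utc)
        return now - validated_at <= MAX_VALIDATION_REUSE

    # A validation of "example.com" performed 800 days ago could still be reused;
    # one performed 900 days ago would have to be re-done by the CA or its affiliate.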
Kirk asked how this would play into the rules around RAs, and asked about how audits of RAs would apply here and whether those would be considered sufficient to cover this activity without the need to stop it. Gerv pointed out that the length of time it took to establish 3.2.2.4 (two years) was an indicator that this function is hard, and that given the number of edge cases identified in that work, getting domain validation right is extremely difficult and a core CA competency: domain validation is the sine qua non of a CA. Ryan added that while disclosure and audit are crucial, the review of audits and recent events have clearly shown a gap where third parties are performing these functions but for which no disclosure exists or their technical competency is in question, and for which either the audit letter does not disclose issues or is insufficient to reveal whether issues exist or not.
Robin asked whether the fact that this is coming up indicates a larger problem with our reliance on audits across a service contract between organizations. Ryan and Gerv both agreed with this, and both felt that this was a systemic issue that needed fixing, but was too large a problem to tackle completely at the moment. Forbidding third-party domain validation was, Ryan felt, the low-hanging fruit that could be dealt with now, as it is not acceptable to wait three years while poor domain validation slips in under the door.
Peter made a counter-proposal of requiring that any time a CA delegated this particular function, they had to issue an RA certificate of sorts that defined the entity and the scope the entity was permitted to validate. Ryan felt that while this met the goal of transparency, it created a lot more work for root store operators in reviewing yet more audit reports, versus simply forbidding it as a simpler and more immediate approach.
Gerv added here that his goal beyond transparency was reducing the number of things doing domain validation in the world, without constraining the number of CAs. If CAs aren't generally delegating out their domain validation (as judged by the lack of any concrete objections from the floor), forbidding it made sense by defining domain validation as part of the expected core responsibilities of a CA. He clarified that this did NOT mean constraining competition, but instead just the number of validation implementations: that is, the ideal solution would be if CAs came together and created a limited number of well-audited methods or libraries for performing this function that they could then choose from, much like crypto libraries.
Robin asked what would happen with a CA that outsourced all of its functions to a third party like a service provider, such as a "white-label CA service", so the CA only wrote the management assertion letter but let a service provider do everything else. The general agreement was that the wording of the ballot should account for this but not prevent it, but there was no agreement on a specific solution to it.
Chris asked the WebTrust<https://www.cabforum.org/wiki/WebTrust> auditors whether auditing validation practices of RAs would fall under the scope of the WebTrust<https://www.cabforum.org/wiki/WebTrust> for RA standard being developed. Jeffrey clarified that the WebTrust<https://www.cabforum.org/wiki/WebTrust> for RA standard would be based on the standards criteria developed by the Forum, and that if an RA were doing validation work, that work would be covered by the WebTrust<https://www.cabforum.org/wiki/WebTrust> for RA audit.
Code of Conduct
Note Taker: Wayne

  *   Current code of conduct displayed - https://cabforum.org/wiki/ProfessionalConduct
  *   Tarah asked if it should be called "Code of Conduct" or "Civil Discourse" as it is on the wiki
  *   Show of hands for "Code of Conduct" - many hands
  *   Gerv said the current doc isn't really a COC. Tarah agreed that we don't really have one
  *   Tarah asked if there were any objections to establishing a COC - no objections
  *   Tarah suggested that someone needs to go away and draft one
  *   Peter suggested there are many existing ones
  *   Tarah clarified that she'd take an existing one
  *   Peter suggested we review one
  *   Gerv said there are two buckets - some are driven by a political agenda and include a set of values; others just describe what we will do - Gerv recommends the latter
  *   Tarah says the Ubuntu COC is often adopted by other orgs and it's a good one
  *   Gerv said Ubuntu's COC does contain a values statement
  *   Peter displayed the WHATWG COC - https://wiki.whatwg.org/wiki/Code_of_Conduct
  *   Peter - what happens if COC is violated?
  *   Tarah - We'd have a discussion of consequences on mailing list
  *   Tarah - should we review and pick one?
  *   Virginia - we should use it as a template, but modify it to our own needs
  *   Tarah - anyone in favor? Many hands raised
Patent Advisory Group update
Note Taker: Kirk Hall
Kirk presented the following summary of the progress of the Patent Advisory Group ("PAG") that was created in response to the Exclusion Notices filed for Ballot 182 (domain validation methods).
By way of background, Kirk noted that the Forum operates subject to an Intellectual Property Rights Policy ("IPRP") that is similar to the intellectual property rights policy of other self-regulatory organizations (SROs) such as W3C. The current IPRP version is Version 1.2. One goal of the Forum is to seek to issue Guidelines that can be implemented on a Royalty-Free (RF) basis subject to the conditions of the IPRP.
Under IPRP Sec. 7, Forum members may file Exclusion Notices concerning defined Essential Claims arising from patents and patent applications that are necessarily infringed by implementation of any Normative Requirement in a Final Guideline or Final Maintenance Guideline. Members may file Exclusion Notices during a 30- or 60-day Review Period following a Forum ballot. If Exclusion Notices are filed during a Review Period (as well as under other circumstances stated in the IPRP), a PAG may be formed to "resolve the conflict" between the Exclusion Notices and the Forum's goal of issuing Guidelines that can be implemented on a RF basis. See IPRP for more information.
The Forum's Ballot 182 proposed certain domain validation methods be added to Baseline Requirements (BR) Section 3.2.2.4, and generated three Exclusion Notices (including amendments) during the Review Period as shown in the table below. In response, this Ballot 182 PAG was formed, and has been meeting from time to time since January 2017.
Next, Kirk summarized the progress of the PAG meetings that have occurred. During the course of these PAG meetings, it was noted that other SROs such as W3C pass many guidelines, but exclusion notices are rarely filed, even though W3C members hold many patents relevant to the guidelines. Because they uniformly intend to grant RF licenses ("RFLs"), they take no action and allow a RFL to be granted automatically under the terms of the W3C IPR Agreement, on which the Forum's IPRP was modelled.
By March 2017 there was consensus in the PAG that members who intend to grant a RFL for Essential Claims patents and patent applications ("IP") they hold that are encompassed in a Forum Final Guideline or Final Maintenance Guideline probably should not file Exclusion Notices indicating a willingness to grant a RFL. Under the IPRP, Exclusion Notices are limited to cases where a member does not want to grant an IPRP Section 5.1 RFL. That is, under IPRP Sec. 4.2, Exclusion Notices should only be used when the member has IP for an Essential Claim and the member is not willing to license the IP at all, or is willing to license the IP but wants to charge a royalty. The three members who filed Exclusion Notices for Ballot 182 each wanted to grant a RFL and did not want to charge a royalty. Accordingly, the PAG concluded it was probably appropriate for the three members to withdraw the Exclusion Notices they previously filed.
The PAG noted that intellectual property rights agreements for SROs are generally intended to work this way - members are encouraged to grant RFLs for their IP, and the assumption is that they will do so in accordance with the requirements stated for a RFL in the applicable IPR Agreement. The PAG recognized that IPR Agreements such as the Forum's IPRP allow a RFL to include certain terms of the IP holder's choosing (see IPRP Sec. 5.1 for examples), but there is generally no need to reduce a member's intended RFL to writing at the time the Forum creates a new Final Guideline or Final Maintenance Guideline, and there is not even a need for the member to disclose its IP at that time if the member intends to grant a RFL. Instead, the member can wait until an issue arises where the exact terms of the RFL that has been granted by the member need to be known (for example, in the event of litigation between two members over some matter), at which time the member holding the IP may reduce the RFL it has already granted to writing in any form desired, so long as the RFL complies with all the provisions of IPRA Sec. 5.1.
The PAG also discussed whether there was value at the present time in deciding whether a member who filed an Exclusion Notice should not have done so under IPRA Sec. 4.2 because the member's "Contribution" prevented the member from claiming exclusion for its IP. The consensus was that there was no need or value in making that determination at the present time if the member intended to grant a RFL that complies with IPRA Sec. 5.1.
By March 2017, the three members who had filed Exclusion Notices during the Ballot 182 Review Period had all decided to withdraw their Exclusion Notices. See the Withdrawals for their specific terms. The dates of the Withdrawals of the Exclusion Notices are shown below.
Symantec - Exclusion Notice filed: 23 December 2016; withdrawn: 15 March 2017
GoDaddy<https://www.cabforum.org/wiki/GoDaddy> - Exclusion Notice filed: 23 December 2016, revised 13 February 2017; withdrawn: 21 March 2017
GlobalSign<https://www.cabforum.org/wiki/GlobalSign> - Exclusion Notice filed: 16 December 2016, revised 25 January 2017 and 23 February 2017; withdrawn: 17 March 2017
Finally, Kirk noted the PAG will be meeting again next week, and summarized the likely PAG conclusion. Based on the narrative above, and because there are no longer any Exclusion Notices pending in connection with the Forum's Ballot 182, there is no longer any "conflict" for the PAG to resolve and no other action for the PAG to take. Accordingly, the PAG will likely dissolve without reaching any Conclusion. However, it is likely the PAG will attach the Exclusion Notices and Withdrawals to a short PAG Conclusion announcing the PAG has dissolved and post these to the Public list and to the Forum public website, along with bundled PAG Minutes covering the discussions. In this way, the public and also other CAs who are not members of the Forum will have access to the information.
Future of Revocation
Note Taker: Ben Wilson
This was a presentation by Robin Alden of Comodo.
The presentation is available on Google Docs<https://docs.google.com/presentation/d/1PtO3EyxaRhyA8YJnkcQ2VI4AmUdzphi3iZy3PgxxAUY/edit?usp=sharing>
Revocation today is generally a binary status - valid or revoked. Revocation policy is set by the CA within the confines of the Baseline Requirements. The information is also published, as required by RFC 5280. However, as a practical matter, it isn't binary because it's not provided within the control of a single party. There are several conditions that are relevant to the final determination of whether a certificate is good or bad. One example is whether the site is a phishing/malicious site. Other examples are shown on Slide 5: systemic vulnerability (e.g. Heartbleed), browser request, site owner/subscriber request, etc.
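To make the "binary status" point concrete, here is a rough sketch of what a relying party gets back from an OCSP responder today, using the Python cryptography and requests packages. The responder URL and certificate inputs are placeholders, not anything from the presentation.

    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    def ocsp_status(cert_pem: bytes, issuer_pem: bytes, responder_url: str) -> str:
        cert = x509.load_pem_x509_certificate(cert_pem)
        issuer = x509.load_pem_x509_certificate(issuer_pem)
        req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
        http = requests.post(
            responder_url,
            data=req.public_bytes(serialization.Encoding.DER),
            headers={"Content-Type": "application/ocsp-request"},
            timeout=10,
        )
        resp = ocsp.load_der_ocsp_response(http.content)
        # All the responder can say about the certificate itself is GOOD,
        # REVOKED, or UNKNOWN - nothing about phishing, key hygiene, etc.
        return resp.certificate_status.name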
A CA may revoke a certificate, yet a browser may choose not to display the updated certificate status. A browser is at the end of this chain, so it may be considered a relying party. Once a CA has revoked a certificate, it has had its chance to say what it has to say about that certificate. Perhaps there should be a central clearinghouse for this sort of information and mechanisms for publishing it to relying parties and browsers. A Certificate Status Clearing House could be established with policies that disclose/require certain response times and other metrics/standards. These would be auditable criteria. Economies of scale would dictate that there aren't 60 clearing houses. The Certificate Status Clearing House could be a "split" in the revocation mechanisms and provide non-binary information about certificate status.
Discussion: This could be seen as similar to Certificate Transparency in reverse. This discussion helps review the purpose of revocation. Microsoft has a contract with CAs that gives it more leverage to demand revocation. Google likes OCSP stapling because it allows sites to control how revocation is handled. Revocation is not the same as SmartScreen (Microsoft) or Safe Browsing (Google).
Robin is interested in identifying mechanisms that push this information forward to relying parties so that they can make choices based on better information. A technological challenge is providing this information in a format that can be transmitted and processed efficiently. Reason codes might be a mechanism, but they don't line up with reality. For instance, a certificate may have been issued by mistake, and "cessation of operation" may or may not be the right revocation reason. The CRL reason code can be changed on a subsequent CRL, but the "hold" reason code isn't allowed. Better use of reason codes could be an answer, but the reason codes may not be robust enough for the situations presented.
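For reference, the RFC 5280 reason-code vocabulary being alluded to, as mirrored by the Python cryptography package; the event-to-code mapping below is only an illustration of the poor fit described above, not anything adopted by the Forum.

    from cryptography.x509 import ReasonFlags

    # The complete RFC 5280 CRLReason set:
    print([reason.name for reason in ReasonFlags])

    # Real-world revocation triggers rarely map cleanly onto a single code:
    EXAMPLE_MAPPING = {
        "certificate issued by mistake": ReasonFlags.cessation_of_operation,  # may or may not fit
        "private key posted publicly": ReasonFlags.key_compromise,
        "subscriber simply asked for revocation": ReasonFlags.unspecified,
    }
    # Note: certificate_hold exists in RFC 5280, but as noted above the "hold"
    # reason code isn't allowed here.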
With reliably small CRLs, they could be hosted on CDNs and could be incorporated into Safe Browsing. Also, the Clearing House model presents a denial-of-service / scalability issue.
Future of Web PKI (Part 2)
Network Security Document
Note Taker: Bruce Morton

  *   Network security document has issues
  *   CAB Forum members do not have expertise in this area; as such, it was recommended not to fix the Network Security document, but to replace it
  *   One suggested replacement document was "The CIS Critical Security Controls for Effective Cyber Defense". The focus for this document was on the 20 principles and their descriptions, but not the detailed control descriptions.
  *   Other documents in this area should be researched and recommended as options
  *   A working group would be created to address fixing or replacing the Network Security document
Future of Web PKI (Part 3)
No Minutes
Discuss F2F Meeting 41 in Berlin, Germany and future meeting volunteers
Note Taker: Dean Coclin
Outlined plan for future meetings. Arno noted that he will post plans for the Berlin meeting, with hotel recommendations, to the wiki. Li-Chun will also post info for the Fall meeting in Taipei. Amazon volunteered to host a 2018 meeting. Dean will coordinate schedules for 2018 and report on a future call.

