[cabfpub] Public key pinning (Was: Notes of meeting)

Ryan Sleevi sleevi at google.com
Thu Jul 12 11:16:47 MST 2012


On Thu, Jul 12, 2012 at 10:56 AM, Rick Andrews
<Rick_Andrews at symantec.com> wrote:
>> Is path-building really non-deterministic?
>>
>> There can be multiple routes, but my (perhaps naive) understanding is
>> that multiple paths don't happen in the common website-on-the-internet
>> case, at least not for end-entity certs or the intermediates directly
>> above them. But perhaps someone can tell me I'm wrong.
>
> Symantec has been issuing SSL certs with dual paths for some time
> now. We do it to migrate to 2048-bit roots without shutting out
> older browsers that don't contain those roots. I would say the chain
> is deterministic; the leg leading to the 2048-bit root is shorter
> than the leg leading to the 1024-bit root, and that's what causes IE
> (at least) to favor the short chain over the long chain if it has
> both roots.
>
> -Rick

Thanks Rick,

I don't think you're alone in that sort of deployment - I believe a
number of CAs have done so during root migrations, as well as for
inter-organizational cross-signing for compatibility with older
systems (e.g., Windows 95 support is something I still see
advertised).

However, such schemes unfortunately lead to non-determinism, though I
don't think this is the fault of the CAs, but rather of limitations
in the APIs exposed to developers and, effectively, to browsers.

For example, OS X will prefer the first non-expired issuer it
encounters in the intersection of user-supplied certs and keychain
search paths. Because it does not (currently) perform back-tracking
during path discovery, this may lead it to the longer chain with the
smaller root, which will then successfully validate.
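
To make this concrete, here's a rough sketch against the Security
framework of this era (error handling elided; the hostname and the
cert array are placeholders) that prints whichever chain the system
settled on:

    #include <Security/Security.h>

    /* Evaluate a server cert chain and print the path OS X chose.
       Which issuer wins depends on keychain search order; there is
       no back-tracking if that choice leads down the longer chain. */
    void PrintBuiltChain(CFArrayRef certs /* leaf first */) {
        SecPolicyRef policy = SecPolicyCreateSSL(true, CFSTR("example.com"));
        SecTrustRef trust = NULL;
        SecTrustResultType result;

        SecTrustCreateWithCertificates(certs, policy, &trust);
        SecTrustEvaluate(trust, &result);

        for (CFIndex i = 0; i < SecTrustGetCertificateCount(trust); i++) {
            SecCertificateRef cert = SecTrustGetCertificateAtIndex(trust, i);
            CFStringRef summary = SecCertificateCopySubjectSummary(cert);
            CFShow(summary);  /* subject of each cert in the built chain */
            CFRelease(summary);
        }

        CFRelease(trust);
        CFRelease(policy);
    }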

NSS, operating in 'legacy' mode, behaves similarly, although there
are additional heuristics beyond expiration and a bit more sorting.
However, for NSS, path discovery draws upon all certificates
currently loaded in memory, in addition to the trust store, so a user
may build a path to the 1024-bit root or to the 2048-bit root within
the same browsing session, depending upon which sites were visited
first. It also does not backtrack.
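
As a rough illustration using the legacy issuer-lookup call (assume
NSS is already initialized; error handling elided):

    #include <cert.h>
    #include <prtime.h>
    #include <stdio.h>

    /* Walk the issuer chain NSS's legacy engine would pick for
       'cert'. CERT_FindCertIssuer consults whatever certificates
       happen to be loaded - including temp certs from previously
       visited sites - so the printed chain can differ from one
       session to the next. */
    void print_legacy_chain(CERTCertificate *cert) {
        PRTime now = PR_Now();
        CERTCertificate *c = CERT_DupCertificate(cert);
        while (c && !c->isRoot) {
            printf("%s\n", c->subjectName);
            CERTCertificate *issuer =
                CERT_FindCertIssuer(c, now, certUsageSSLServer);
            CERT_DestroyCertificate(c);
            c = issuer;
        }
        if (c) {
            printf("%s (root)\n", c->subjectName);
            CERT_DestroyCertificate(c);
        }
    }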

Similarly, under libpkix (the non-legacy mode), NSS performs path
discovery and validation in a single pass, and only backtracks when a
valid path cannot be found. It does not (yet) perform any sorting
before exploring nodes in the path graph. Thus, if NSS still has the
1024-bit root (or if it was cross-signed), it may still end up
preferring the path leading to the 1024-bit root.
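
For comparison, a minimal libpkix invocation looks something like the
following (revocation and other in-params omitted for brevity); the
returned chain is simply the first valid path found:

    #include <cert.h>

    /* Validate 'cert' via libpkix and fetch the chain it built.
       Because no sorting precedes path exploration, the chain in
       out[0] is whichever valid path was discovered first. */
    SECStatus verify_pkix(CERTCertificate *cert) {
        CERTValInParam in[1];
        CERTValOutParam out[2];

        in[0].type = cert_pi_end;

        out[0].type = cert_po_certList;
        out[0].value.pointer.chain = NULL;
        out[1].type = cert_po_end;

        SECStatus rv = CERT_PKIXVerifyCert(cert, certificateUsageSSLServer,
                                           in, out, NULL);
        if (rv == SECSuccess && out[0].value.pointer.chain) {
            /* ... inspect which root the chain terminates at ... */
            CERT_DestroyCertList(out[0].value.pointer.chain);
        }
        return rv;
    }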

For OpenSSL, path discovery is based on a hash of the normalized
subject name, with candidate issuers typically stored on the
filesystem as individual files (the OpenSSL CApath directory, often
/etc/ssl/certs or the like). OpenSSL does not prioritize or sort - it
simply works through each matching subject-name hash until it finds a
valid cert. When re-creating this directory (with c_rehash or the
like), the order in which these hash links are created may vary from
run to run, or may differ from system to system.
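
A short sketch of that lookup in action (the CApath below is
illustrative):

    #include <openssl/x509.h>
    #include <openssl/x509_vfy.h>
    #include <stdio.h>

    /* Verify 'leaf' against a hashed CApath directory and print the
       chain OpenSSL settled on. When two candidate issuers share a
       subject-name hash, the winner depends on file suffix order
       (<hash>.0, <hash>.1, ...) - hence the non-determinism described
       above. */
    int print_openssl_chain(X509 *leaf, STACK_OF(X509) *untrusted) {
        X509_STORE *store = X509_STORE_new();
        X509_STORE_load_locations(store, NULL, "/etc/ssl/certs");

        X509_STORE_CTX *ctx = X509_STORE_CTX_new();
        X509_STORE_CTX_init(ctx, store, leaf, untrusted);

        int ok = X509_verify_cert(ctx);
        if (ok == 1) {
            STACK_OF(X509) *chain = X509_STORE_CTX_get1_chain(ctx);
            for (int i = 0; i < sk_X509_num(chain); i++) {
                char buf[256];
                X509_NAME_oneline(X509_get_subject_name(
                    sk_X509_value(chain, i)), buf, sizeof(buf));
                printf("%d: %s\n", i, buf);
            }
            sk_X509_pop_free(chain, X509_free);
        }
        X509_STORE_CTX_free(ctx);
        X509_STORE_free(store);
        return ok;
    }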

These are the sorts of problems we deal with when trying to find the
One True Path. Microsoft's path-building approach, which first
performs a 'cheap' discovery of viable paths and then re-prioritizes
them based on ranking criteria (shortest path, expiration policies,
etc.), is certainly one way to bring some degree of determinism to
the algorithm, but such solutions are not uniformly implemented. I'm
fairly certain there are a number of CAs here who can attest to
support incidents that have unfortunately arisen from this fact.
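
The shape of that two-phase idea, reduced to a toy comparator (the
struct and the ordering here are illustrative, not CryptoAPI's actual
scoring):

    #include <stdlib.h>
    #include <time.h>

    /* Phase 1 (not shown) enumerates every viable path. Phase 2
       sorts the candidates before validating them in order, falling
       back to the next candidate on failure. */
    typedef struct {
        int length;       /* number of certs in the path   */
        time_t expires;   /* earliest notAfter in the path */
    } CandidatePath;

    static int compare_paths(const void *a, const void *b) {
        const CandidatePath *p = a, *q = b;
        if (p->length != q->length)
            return p->length - q->length;      /* prefer shorter paths  */
        if (p->expires != q->expires)          /* then later expiration */
            return (p->expires > q->expires) ? -1 : 1;
        return 0;
    }

    /* Usage: qsort(paths, n, sizeof(CandidatePath), compare_paths); */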

Cheers,
Ryan

