[cabf_netsec] Minutes of NetSec meeting 2020-04-30

Neil Dunbar ndunbar at trustcorsystems.com
Fri May 1 05:02:10 MST 2020


All,

As promised, the meeting minutes for review and correction are attached 
to this email.

Those have got to be the earliest minutes I've produced - let's hope I 
can sustain that window of production!

Regards,

Neil


-------------- next part --------------
Minutes for NetSec Meeting 2020-04-30

Attendees

Neil Dunbar (TrustCor) [Chair]
Tim Crawford (BDO)
Wendy Brown (FPKI)
Mariusz Kondratowicz (Opera)
Corey Rasmussen (OATI)
Clint Wilson (Apple)
Taconis Lewis (Protiviti)
Daniela Hood (GoDaddy)
Dustin Hollenback (Microsoft)
Bruce Morton (Entrust Datacard)
David Kluge (Google)
Janet Hines (SecureTrust)
Ben Wilson (Mozilla)
Tobias Josefowitz (Opera)
Trevoli Ponds-White (Amazon)
Aaron Poulsen (DigiCert)

1. Review Agenda

The Agenda was agreed.

2. Meeting Minutes

The minutes of the previous meeting were approved.

3. Pain Points Subteam Update

David introduced the update by focussing on the access management ballot.
The terminology around accounts seems to be the primary problem - what is an
"account" and what is "access".

There seems to be consensus that the 24 hour deadline for deactivation
may not be necessary, and is not within the primary purpose of the ballot.
David thought that a 24 hour deadline might be hard for some CAs to enforce
for what might not be an emergency situation.

Trev commented that the issue wasn't the availability of staff to enforce a
24 hour deadline - most CAs have such capability - it's that a 3 month review
then seems to mandate a 1 day removal of access; so why is this suddenly an
emergency?

David observed that such a line of argument could extend (as an example) to a
critical vulnerability in the OS detected 3 years after release: would that mean
no fixing window applied? Trev replied that
the situation was different because the vulnerability describes a risk which
becomes more serious by virtue of its having been disclosed. In the case of a
stale account which once was approved, there has been no increase in risk.

David disagreed - saying that this removal was not the approved method for
offboarding an employee account which is no longer required (which is a different
control requiring a 24 hour turndown). He thought that this 3 monthly review
was there to check whether the normal 24 hour turndown process was working as
expected. Thus, if a failure of process was observed, a 24 hour remediation
would be expected.

Trev then asked if this concern only applied to people. David said that this
was the discussion required - is it human accounts only, or does it apply to
service level accounts? In his opinion, it did not matter whether it was a human
or service account. Trev considered the scenario of a service which was performing
a valid service, but which is no longer needed: what is the urgency in removing
this account? Perhaps, Trev suggested, there should be an assessment within 24
hours over whether the service account should be removed. She considered that
this was a different scenario to the risk of a human account needing removal,
versus the removal of a service account thought to be unnecessary, but which
could cause breakage if simply removed.

Neil asked Trev to clarify why she considered the scenarios to differ in risk.
Trev replied that a human can take any number of actions, whereas a service is
limited to the expression of the code which it runs.

Tobi posited the notion that a trusted user could credential the service account
such that the user could (improperly) use the service account as an access vector
to the system. Neil added that he thought that if a service account was allowed
to persist beyond need, the monitoring on that service would have dissipated in
profile, and as such the old service account could be a vector for attack. Trev
thought that was a different concern - an example of shared credentialling, which
is a separate security control.

Tobi said that this could be a malicious attack from a disgruntled employee leaving
a backdoor into a system.

Trev said that in these discussions we have covered many scenarios which require
controls, but we should focus on the activities which require removal within 24
hours, rather than the existence of an account per se.

David said that he understood the example Trev had given. He added, though, that
there are other, similar scenarios which do pose a significant threat: for example,
where a different team running a service lost authorisation to perform the task.
Thus there were different models, some posing limited risk, some posing urgent
risk.

Trev replied that this was the point: that it is the use of credentials to access
accounts which should be monitored, and that in the case of a service account which
is not supposed to be used by people, that activity is the one which should trigger
an alarm, which should perhaps be addressed within 24 hours. This would apply
whether the service was necessary or not.
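Trev's suggestion - alarming on interactive use of an account designated
service-only, whatever the service's status - could be sketched as follows.
The account names and log format here are hypothetical, purely for illustration:

```python
# Sketch: flag interactive logins by accounts designated service-only.
# SERVICE_ACCOUNTS and the "<user> <login-type>" log format are assumptions
# made for this example, not anything specified by the NSRs or the ballot.

SERVICE_ACCOUNTS = {"ca-signer", "ocsp-responder"}  # accounts never used by people

def interactive_login_alerts(log_lines):
    """Return alert strings for any interactive login by a service account.

    Each log line is assumed to be "<user> <login-type>", where login-type
    is e.g. 'tty' or 'ssh' (interactive) versus 'cron' (non-interactive).
    """
    alerts = []
    for line in log_lines:
        user, login_type = line.split()
        if user in SERVICE_ACCOUNTS and login_type in ("tty", "ssh"):
            alerts.append(
                f"ALERT: interactive login by service account {user} via {login_type}"
            )
    return alerts
```

On this model the alarm fires on the *activity* (a person using the service
credential), not on the mere existence of the account.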

David commented saying that he would not know what to monitor for in that scenario.
He asked if we should then try and specify various remediation windows depending
on service; such a structure was possible - but does it make for good rule making?
Was it not easier to specify 24 hours, even if it was too strict, since at least
it is an easily enforceable boundary?

Trev thought that it might prove less strict, since it incentivised CAs to let
old accounts live on until the three month review period ended and then perform
cleanup, instead of being more responsive earlier on.

Ben asked how we could get consensus toward a ballot. Neil said that the intent of the ballot is
to encourage people to move towards continuous monitoring, while preserving the old
regime of periodic review.

Tobi then said that in Ballot SC29 we removed the option to continue periodic review.
Perhaps this was a case where we should do the same? 

Neil replied that in his opinion, there had to be some sort of acceptable window for
remediation - leaving it open ended seemed wrong. Trev answered that a CA should be
responding to any sort of alarms within a 24 hour window. She added that within 24
hours, an assessment of action to be taken should be in place.

Some discussion followed on whether this ballot would actually cause an alarm to
be generated, and whether the requirement to respond could result in an open-ended
commitment to deactivate the account, rather than a definite window. Trev
concluded with some suggestions around generating an alarm which requires remediation
in line with the CA's existing security policy; this would be preferable to a hard
24 hour deadline - an engineer could spend unrushed time establishing that the
removal of an account would not cause further breakage, while still being under
the time limits of the CA's existing policies.

Tobi added that the engineer would also be responsible for figuring out the cause
of the account's persistence. Trev thought that need is easy to determine for a
human account, but less so for a service account.

David asked if Trev would like to propose some text. Neil suggested some text (in the
Google document). Neil said that there seemed consensus that stale account detection
was a security incident, and no security policy worth the name could allow such
incidents to persist indefinitely.

Neil was uncomfortable with the text as was, since it appeared to allow indefinite
remediation time. He also asked if Trev could modify the text to cover the expected
response time. He thought we were close to the text on the ballot.

4. Threat Modelling Subteam Update

Mariusz observed that the Threat Modelling team had also considered the risks of
unnecessary accounts. He was confused in that the CAs appeared to have the option
to leave accounts in existence for 3 months or to review continuously; so which
one was primary?

Neil replied that the ballot was still a stepping stone; perhaps the three month
window could go away if continuous monitoring was more established.

Tim said that he didn't think the review period could be removed even if continuous
monitoring was in place. Neil thought that if continuous monitoring alerted on an
account which was no longer supposed to be in place (by configuration) then that
didn't need a three month review for necessity.

Mariusz thought that the purpose of the three month review was to evaluate if the
continuous monitoring was functioning as expected. Aaron agreed, saying that you
still wanted to ensure that your monitoring controls were current and measuring
what was expected.

Trev thought that the continuous monitoring was there to ensure that the three
month review was less onerous. Neil suggested that perhaps continuous monitoring
could be imposed as a requirement, with a 3 monthly review to see if it was
operating as designed. Trev thought this would be a step too far for too many
operations.

David observed that it goes back to the notion of "source of truth" for configuration;
which we had started on, but not finished. Since the configuration database was
likely to make account provisioning less volatile, it also makes sense to have a
review of the configuration database. He considered that the risk of having a
bad configuration database was lower than leaving the configuration up to the
individual actions of system administrators. Even a six month review on a configuration
system with continuous monitoring would be an improvement on per-system configuration
reviews, and also less work.

Neil agreed, and David added that since the submissions to a configuration database
were likely to be in a version control system, in practice it was much less likely
to have accounts still on systems which did not need to be there, or which had
privileges which were no longer needed.
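With a configuration database of the kind David describes as the source of
truth, stale-account detection reduces to a set comparison between desired and
actual state; a minimal sketch (all names illustrative):

```python
# Sketch: compare the accounts a version-controlled configuration database says
# should exist against those actually present on a host. The function and
# argument names are illustrative; real systems would also compare privileges.

def account_drift(desired, actual):
    """Return (unexpected, missing) account sets.

    'unexpected' accounts exist on the host but not in configuration - the
    stale-account case under discussion; 'missing' is the reverse.
    """
    desired, actual = set(desired), set(actual)
    return actual - desired, desired - actual
```

A periodic review then checks the configuration database itself, rather than
auditing each system's accounts individually.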

Mariusz then presented the Threat Modelling team's definition of "account" [An
account is a mapping between credentials and a set of privileges to a certain
system or systems]. He added that it is not clear (from the NSRs) what
is actually accessed via the term "account". It was unclear whether uncredentialled
access was permissible (though clearly undesirable). The team felt that this
needed to be clarified, along with the term "system account", which seems to
provide no clear meaning. He suggested that the term "system account" could be
replaced by simply "account".

Neil thought that no distinction needed to be made between user and service accounts
from the perspective of determining business need.

Trev asked if we hadn't agreed to use the term "credential" to replace "account".

Neil replied that he was unsure whether it fitted the scenario where a Unix account
(e.g. in /etc/passwd or similar), not intended for anyone to adopt as a login, was
still around beyond need. He gave the example of the Unix account being used to run
a cron job.

Mariusz and Trev thought that this was not covered by the ballot text. Neil accepted the
observation, given that the Unix account would have been provisioned by someone
who was properly authenticated. Mariusz in particular said that the term "account" only
really applied to an alteration of trust boundaries, which would not be the case for
a pre-configured system operation.

Neil still observed that many system administrators would consider these entries
in /etc/passwd to be an "account". Trev thought these cases should also be covered by
the NSRs as well as the authenticated accounts under discussion.
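The /etc/passwd entries Neil mentions can be distinguished mechanically from
login-capable accounts by their shell field; a minimal sketch, assuming the
conventional no-login shells:

```python
# Sketch: classify /etc/passwd-style entries by whether their shell permits
# interactive login. Illustrative only - the shell set is a common convention,
# not something defined by the NSRs.

NOLOGIN_SHELLS = {"/usr/sbin/nologin", "/sbin/nologin", "/bin/false"}

def login_capable(passwd_lines):
    """Return names of accounts whose shell allows interactive login.

    Each line is the standard seven-field colon-separated passwd format:
    name:passwd:uid:gid:gecos:home:shell
    """
    capable = []
    for line in passwd_lines:
        fields = line.strip().split(":")
        name, shell = fields[0], fields[6]
        if shell not in NOLOGIN_SHELLS:
            capable.append(name)
    return capable
```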

Neil was content with the definition; but added that any voters on a ballot should
be clear on what the scope of the terms of the ballot was, and that unclear definitions
were unfair on voters. Clint added he thought that cron job accounts would not be
covered by the ballot, since it was focussed on privilege escalation between
trust boundaries. Neil was happy with this but added that he thought the ballot needed
some terms and definitions so that everyone knew what was in scope.

Mariusz then talked about shared credentials - adding that there was probably insufficient
time to discuss all of the matters. It seems to be a significant risk from an accountability
perspective, but is not well covered in the NSRs.

There are cases, Mariusz continued, where a shared credential account is required because of
equipment architecture and firmware requirements (he gave the example of networking
components); this seems to require a ballot to carefully control the usage of such
credentials. There were several risks which had been identified via the threat models
that the team had developed, and not all of the risks were properly covered by the NSRs.

[Trev shared the document URI in the chat]

Neil agreed that there is a gap, and while time for discussion did not exist in the meeting,
it did need addressing both in the subteam and NetSec group as a whole.

5. Document Structuring Team Update

Ben had a meeting conflict, so the DS update was provided by Tim.

Tim said that there is work ongoing to address the controls around offline equipment (since
they have different threats to online equipment). No ballot is ready yet, but it is
still under discussion.

6. Ballot SC28 Update

Neil asked if anything needed to be done to SC28. Tim said that there had been
cleanups to the text, and asked if people were OK with the changes.

Neil said that he would prepare a new redline based on the changes for 2020-05-01 or
2020-05-04.

7. Ballot SC29 Update

Ballot SC29 is now out for voting and voting will end on 2020-05-07. Neil added that
the implementation date had been pushed back to November 1, 2020, because the moratorium
on ballots had extended the discussion period more than was expected.

8. Other Ballot Work

There were no other ballots under discussion.

9. Any Other Business

Neil asked for contributors to add any topics they want to see discussed for the F2F
virtual meeting; and that he would reach out in May to prepare a discussion document.

10. Adjourn

The meeting was adjourned and will reconvene on 2020-05-14 at the regular time.

