Tutorial: The FAIR Framework
Setting the scene for this tutorial day, Security Forum chairman
Mike Jerbic explained that we are interested in establishing a common
understanding among our members of what it means to standardize a risk
analysis framework. This tutorial was arranged to address that question,
and to give Security Forum members training sufficient to make this
judgment competently and then contribute to developing a relevant
standard.
Ian then introduced FAIR (Factor Analysis of Information Risk),
presenting a summary (see slides) of the
discussion in the previous meeting (Paris, April 2007) on the FAIR White
Paper, as background to the discussions to date. He then introduced Alex
Hutton (Risk Management Insight), who conducted the 1-day tutorial
on FAIR.
Alex described the approach that RMI's FAIR takes to analyzing risk, using his
RMI Introduction
slides. Key points were that the taxonomy for any risk analysis scheme
must mirror reality and be complete. Classical risk is characterized by
historical events which indicate the "probability" of their
recurring - priors create probables. Security risk, by contrast, is
characterized by uncertainty.
Next, Alex explained the basic risk assessment steps (see
slides), showing how the FAIR analysis scheme comprises
10 steps in 4 stages, culminating in a final step in which we can derive
and articulate the risk that we are analyzing. This is articulated as
the probable frequency and probable magnitude of future loss.
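As a purely illustrative aside on what that articulation can look like in quantitative terms, the following minimal sketch (not part of the tutorial material) samples hypothetical ranges for Loss Event Frequency and Probable Loss Magnitude to produce a probable annualized loss figure; the ranges, and the choice of uniform sampling, are assumptions made here for simplicity rather than FAIR's own calibration method:

```python
import random

# Hypothetical calibrated ranges (illustrative assumptions only):
LEF_RANGE = (0.1, 2.0)           # Loss Event Frequency, events per year
PLM_RANGE = (50_000, 400_000)    # Probable Loss Magnitude, $ per event

def probable_annual_loss(trials=10_000):
    """Monte Carlo over the two ranges, articulating risk as the
    probable frequency and probable magnitude of future loss."""
    losses = sorted(random.uniform(*LEF_RANGE) * random.uniform(*PLM_RANGE)
                    for _ in range(trials))
    return {"median": losses[trials // 2], "p90": losses[int(trials * 0.9)]}

# Prints the median and 90th-percentile simulated annual loss.
print(probable_annual_loss())
```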
He went on to explain how FAIR is a taxonomy of risk (see
slides), and the components in the FAIR taxonomy and their
relationships. He followed this with a review of the nature of the
considerations that risk analysis brings into focus (see
slides):
- Recognize measurement theory considerations
- Express how the FAIR framework drives more objectivity into risk
analysis
- Distinguish between subjectivity and objectivity
- Derive vulnerability
- Recall how control strength evolves over time
- Recognize and describe the risk landscape components including those characteristics
that are important when thinking about a threat
- Recognize the validity and usefulness of quantitative ranges
- List and distinguish among the six types of loss
- Articulate risk as a function of Loss Event Frequency and Probable
Loss Magnitude
- Describe the loss severity levels
- Explain the challenges associated with data analysis
This was followed by the concepts of profiles for threat communities, and risk
modifiers (see slides).
The tutorial continued with several use-case
exercises in which, for each use-case example, the attendees
conducted a risk analysis using the FAIR 10-step 4-stage process to
derive and articulate the risk. These exercises served well to stimulate
many questions, ranging from the validity of the assumptions required to
complete each step, to the validity of the theory and usage of the math
underlying risk analysis. The outcome was that at the end of the
tutorial all attendees had experienced the concepts, taxonomy, and
process that FAIR uses to derive and articulate risk.
Workshop: Risk Management
The workshop began with a general discussion in which members shared
their assessments from attending the previous day's FAIR tutorial. The
following summarizes the main points:
- A generally accepted standard for an information risk taxonomy and
framework would be a major contribution towards standardizing risk
analysis and management practices.
- For any risk analysis standard we develop to be useful, we need to
evaluate the likelihood that it will be adopted by at least a critical
mass of organizations. The greatest barrier to any effort to develop
a standard is that many major organizations have their own risk
analysis schemes, and while these have many commonalities in their
components, they do have widely different ways of relating them and
assessing the risks associated with them. Before embarking on
developing a standard in this space we should consider how confident
we are that we can make an acceptable impact with it. As an example,
BS5750 (ISO 9000) was successful in the UK because the UK Department of
Trade & Industry paid companies to certify to it. Such an
expensive sponsorship is not easily achieved today - the BS5750
sponsorship cost the DTI much more than it anticipated.
- A significant feature of FAIR is that it seems to be based on a
sound taxonomy which reduces variations in risk evaluations. We must
avoid undermining this by ill-judged efforts to accommodate features
from other risk analysis schemes. Equally we should understand the
best features of other schemes and include them where appropriate.
- An ultimate goal for the Security Forum would be to deliver a
top-class risk analysis framework which was then accepted as the
basis for an ISO standard. We see value in at least developing a
standard for the FAIR taxonomy. The relationships in the FAIR
taxonomy are a particularly high-value feature.
- RMI is not tied to the FAIR taxonomy terminology; it is tied to
the logic of the risk analysis breakdown. FAIR's probability
assessments require that you be able to state your measurement
assumptions; that you apply common sense at each stage (logical,
no biases); and that you demonstrate consistency (results
stand the test of time). RMI is gathering data on these features.
- Probability theory includes the concepts of "state of
nature" versus "state of knowledge". State of nature
has to do with things we can measure. State of knowledge is about
knowing from experience what to expect. In risk assessments we can
apply both of these approaches when assigning metrics and making
logical deductions during our risk analysis process.
- FAIR is too specific to be a standard, though it can be a base for
a standard. The ENISA Working Group work on risk assessment and risk management
is a useful reference to consider in this regard.
- RMI would like FAIR to be the basis for an open standard on risk
analysis. The RMI value-add is to develop tools that support FAIR's
probability-methodology approach.
- The threat agents summary in FAIR is a useful component in itself,
and could form the basis for a catalog for academic study and
assembly of threat event data.
- ISO is already well down the road on a Risk Vocabulary (N5358), so
if we proceed with developing a standard for risk analysis we should
engage with that work as soon as possible. We will make contact with
the relevant ISO WG (JTC1 SC27, TMB WG) to follow up on this
item.
Members then received a presentation on Trends in Risk Assessment, Analysis, and Compliance,
from Jim Hietala, Compliance Marketing. Jim presented results from his
surveys on challenges/issues with organizations being able to meet
compliance requirements, how they are managing multiple compliance
mandates, how automation has helped their compliance processes, what
software tools have been most useful, and what new facility would be of
greatest help. His survey information, which included illustrative quotes
from survey respondents, also indicated that most businesses
anticipate that meeting compliance requirements will become an increasing
burden over the next 12-18 months. Highlights were:
- Cost of compliance is a board-level concern.
- There are many opportunities for improvement in compliance and risk management processes.
- Organizations are automating manual processes, moving from Excel and Word
documents to web-based tools with central storage and reporting.
- Risk and compliance hot spots include PII (Personally Identifiable
Information) proliferation, and assessing and managing risk from outsourcers.
- Senior management perceives assessing/managing risk as more critical than compliance initiatives.
- The Financial Services Roundtable/BITS has created the Shared
Assessments Program to establish financial services industry
standards aligned with ISO 17799 for SIG (Standardized Information
Gathering) and AUP (Agreed-Upon Procedures).
- Trends are that regulations on PII will have an increasingly big
impact on business compliance requirements, and tools that simultaneously assess
risk and determine compliance based on the absence or existence of
controls are a goal for many businesses.
- Three different approaches to tools:
- Technical security tools (point solutions), including vulnerability, configuration, log management,
NAC, access, and entitlements; e.g., McAfee, Symantec, BigFix,
ArcSight, Elemental, and many more
- Process-oriented, assessment-based software tools; e.g., Avior Computing (vendor risk and information privacy assessments), Symantec/4Front,
ControlPath, Archer
- Blending assessment and technical security data; e.g., Agiliance
Jim concluded that the highest priority risk/compliance problems
today are:
- Proliferation of privacy regulations impacting IT security,
creating need for privacy assessment frameworks and automated tools
- Assessment and management of outsourcing risks, requiring vendor risk assessment tools and common standard
- Lack of common standards for risk assessment and analysis
- Mapping of controls among compliance regulations, which is a
significant source of pain for end-users and vendors; the Center for Internet Security is leading an effort on this
- Ambiguity and subjectivity of standards and regulations, where
interpretation is left to each organization and its auditors;
breaking them down into discrete, measurable control components is a “subjective art” that each vendor practices differently
while seeking compensating controls; a big problem in this area is who decides
what counts as a compensating control
Jim Hietala's presentation was followed by a second presentation
on Risk Management and INFORM, from Dr. Jeremy Ward, Symantec.
Information on the INFORM (INFOrmation assurance Risk Model) tool is
available here.
Jeremy first reviewed what businesses today are looking for to
help them with their risk management. He equated information risk with
business value, and explained that managing risk means getting across the
components and controls involved in risk management. He built up a context
in which two silos - a Business Organization (concerned with Competitors,
Conditions, and Compliance) and IT Services (concerned with Access,
Accountability, and Availability) - each carry information security risk,
and each require
information security risk management. He also illustrated factors which
drive up risk and management controls which drive down risk, and
followed this with a flow diagram illustrating how risk can be managed
through controls on threats, vulnerabilities, and the impact of loss,
concluding that any remaining uncontrolled risks are unmanaged and
therefore unacceptable. Then, reviewing the controls spectrum in the
context of his two silos, Jeremy highlighted risk analysis as the bridging
control mechanism between the business and technical sides of
organizations, and asserted that this crucial bridge is often the part
that breaks down. He then introduced the INFORM tool, which provides a
methodology to deliver this bridge, in the context of a pre-sales
facilitated service which Symantec uses to measure IT security and operational efficiency risk in
a client's business, using benchmarks against third-party research, Symantec data, and international standards (ISO 17799,
ITIL, FISMA), to produce a common set of prioritized solutions for risk impact reduction.
Having explained the INFORM approach, he illustrated its process using
screenshots of its assessments for a use-case example, covering
Valuation Capture, Risk Exposure Assessment, Threat Assessment,
Vulnerability Assessment, Legal & Regulatory Impact, Information
Loss, Cost of Security Incidents, arriving at Total Risk Exposure, and
following on with solutions involving ISO 17799 benchmarking, Current
Solution Implementation, and a Prioritized Solutions List with Cost
Calculation. One member suggested that the cost calculation could be
improved by adding a cost/benefit assessment. A key feature of INFORM is
that it gives a structured process and audit trail for how you arrived
at your analysis and assessment of risk.
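Purely to fix ideas on that suggestion, a minimal sketch of such a cost/benefit assessment might rank candidate solutions by estimated risk-exposure reduction per unit cost; all solution names and figures below are hypothetical assumptions, and this is not Symantec's actual INFORM calculation:

```python
# Hypothetical candidate solutions with estimated cost and the risk
# exposure reduction each would buy (figures are illustrative only).
solutions = [
    {"name": "Encrypt backup media", "cost": 40_000, "reduction": 150_000},
    {"name": "Automate patching",    "cost": 25_000, "reduction": 60_000},
    {"name": "Deploy DLP tooling",   "cost": 90_000, "reduction": 120_000},
]

# Rank by benefit/cost ratio, highest first.
for s in sorted(solutions, key=lambda s: s["reduction"] / s["cost"],
                reverse=True):
    print(f"{s['name']}: benefit/cost = {s['reduction'] / s['cost']:.2f}")
```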
Following the FAIR tutorial and discussion on it, plus the
informative presentations from Jim Hietala and Jeremy Ward, members and
presenters reviewed the value-add case for proceeding with a project to
develop an Open Group standard for risk analysis. Discussion on this
began with the standard questions:
- WHAT do we envisage the deliverable will look like?
- WHY will it add value to what already exists in the industry?
- WHO would need to contribute and be included in the development to
ensure it is a success?
after which we can consider the HOW and WHEN questions. Existing
standards in this space include NIST SP 800-30 and AS/NZS 4360 - what is
different about our proposals for a risk analysis standard and what is
the value-add we think this would represent? FAIR does seem to offer a
more quantitative analysis approach (in this context the upcoming
Metricon conference in early August was recommended as worth attending),
so combined with its logical taxonomy including its inter-relationships,
we see it as offering a basis for developing a standard framework which
will represent a significant step forward in producing more precise and
repeatable results. We do lack involvement from the audit community -
The Open Group's now-closed Active Loss Prevention initiative was
mentioned in this context. Alex noted that RMI is involved with ISACA,
and may be able to bring them in as a collaborative partner if we
proceed with FAIR. Perhaps business lawyers may also be value-add
contributors? Banking and insurance representatives are the
highest-priority targets for adopting a risk analysis standard, so we
will need to try to engage their interest.
Discussion concluded that we are thinking in terms of developing a
relatively lightweight standardized process for risk analysis, because
most existing risk analysis processes lack precision in their logical
breakdown of the risk model components and are qualitatively subjective
in assessing the risk levels of specific components. A well-structured
quantitative methodology will definitely represent a value-add step
forward for the industry. Our approach could well be based on FAIR, but
it should embrace a wider constituency and starting point. A critical
success factor is providing an effective risk analysis framework which
addresses one of the greatest frustrations that many businesses have:
most existing risk analysis processes/methodologies push the user into
identifying all their assets – be they high-value or low-value – and the
possible controls that can be applied to each, getting them bogged down
in excruciating detail, when their real priority is to identify and
manage the high-value risks in their business, where investing in
effective controls will yield the greatest return on investment. Another
relevant point here is that tools typically fall short of business
requirements in that they analyze vulnerabilities or threats but do not
combine analysis of vulnerabilities and threats with their impacts.
Alex undertook to draft a proposal for our next step in refining our
objectives, with the aim of producing a first draft within four weeks. Five
other members undertook to review and return feedback on this draft
within 14 days of receipt. In this way we will aim to make significant
progress with establishing a project plan for developing a clearly
defined set of deliverables.
Identity Management Forum
ISO JTC1 SC27 WG5: new set of standards including Biometrics,
Privacy, and an Identity Management Framework. Ian presented a summary
report on the discussion from our previous meeting in Paris (April
2007) on these ISO draft documents. He explained that we have ISO
Category C Liaison status with ISO SC27 which allows us a formal channel
to submit comments but we do not have voting rights in ISO. He also
explained that our submission of the Paris comments had been too late
for inclusion in their scheduled June 2007 drafts. Accordingly he will
update these comments to match a review of the new June 2007 ISO draft
documents. Ian will make the three new ISO June 2007 drafts available to
Identity Management Forum and Security Forum members, for their further
review. The closing date for submission of our comments to ISO
SC27 is September 1 2007, so he will need to receive members' comments
by August 27 if they are to be sure of inclusion in our formal
submission to ISO SC27.
ITU-T SG17 Project on
interoperability/interworking, common data models, discovery, privacy, and governance.
We have no update on the progress in this project. Ian will follow up
with his Nortel contacts who were leading this activity, to explore how
we may contribute and leverage their work.
Common Core Identifiers submission to ISO standards work on
identifiers. The CCI deliverables were published by The Open Group
shortly after the April 2007 conference. Ian is engaged in offering this
work as a highly relevant submission into the ISO JTC1 SC27
working group which is developing a Standard for Identifiers. He had no
progress to report in the Austin meeting, and will follow up and report
back.
Presentation on Identity/Authentication Repository: Vikram
Dhawan (Lexis Nexis). Lexis Nexis is a leading
provider of information and services solutions, including its flagship
web-based research services, to a wide range of professionals in the
legal, risk management, corporate, government, law enforcement,
accounting, and academic markets. Its core business is human identities
and providing services which authenticate those identities, so
unsurprisingly their main customers are financial institutions, the
legal profession, and governments who are increasingly moving towards
e-Government and the e-Citizen. Vik gave a fascinating impromptu
presentation on Lexis Nexis (LEgal NEwspapers) - how their data
collectors gather identity information from available public records –
phone directories, birth records, credit bureaux, driver license
records, newspapers, court records, property registers, etc. – then run
this information through their highly developed fabrication system which
reconciles all the input records and links all the scattered pieces of
information on one person into one coherent identity information file.
Identity documents from Lexis Nexis are accepted as authoritative. They
share the US market and some global markets with competitors like
Westlaw, eFunds, and ChoicePoint. The FCRA (Fair Credit Reporting Act)
in the US provides a regulatory check which allows individuals with a
direct interest in checking their files to verify that the information
held on them in these repositories is correct and to demand correction
where proven error exists.
Clearly the authenticated, digitally stored
identity information in repositories held by companies like Lexis Nexis
is a significant resource for businesses which need high-strength
authentication.
XDAS Project Update
Ian introduced the sec-das
web site and presented the XDAS Requirements version 3 draft, summarizing the
collected requirements up to June 26. He also recalled the outcomes
from the initial XDAS project conference call on July 18, which
resulted in the Novell project leader John Calcote proposing two sets of
updates almost immediately afterwards - one on updating the XDAS Record
Format, and another on updating the XDAS taxonomy. The engagement in
that conference call with the CEE (Common Event Expression) project
being led by Mitre, including the welcome participation of the CEE
project leader Bill Heinbockel and CEE project member Anton Chuvakin,
will we hope ensure alignment between the XDAS and CEE event
format and taxonomy, so that events in XDAS and in CEE will be
interoperable even though XDAS is solely interested in security events
while CEE is interested in all events. Ian will add this requirement to the
XDAS Requirements draft.
In the Austin XDAS review, two more requirements were proposed:
- Translate the XDAS Record Format to XML (see the illustrative sketch after this list)
- Producers of events should be able to contribute events via RSS
feed, and contributors should also be able to subscribe to that
feed; the Web 2.0 interface to channels enables addition of this
functionality
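As a rough illustration of the first proposal, a translated record might look like the following; the field names and taxonomy code here are assumptions made for illustration, not the actual XDAS Record Format:

```python
import xml.etree.ElementTree as ET

# Build a hypothetical XDAS-style audit event as XML (illustrative only).
event = ET.Element("event")
ET.SubElement(event, "timestamp").text = "2007-07-23T14:02:11Z"
ET.SubElement(event, "initiator").text = "uid=jdoe"
ET.SubElement(event, "target").text = "/etc/shadow"
ET.SubElement(event, "action").text = "ACCOUNT_MODIFY"  # assumed taxonomy code
ET.SubElement(event, "outcome").text = "DENIED"

print(ET.tostring(event, encoding="unicode"))
# <event><timestamp>2007-07-23T14:02:11Z</timestamp>...</event>
```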
A further comment was that we need to broaden participation to
include auditors if we are to achieve buy-in from the IT Audit community
– ISACA certainly, and perhaps COBIT too – particularly if we are
planning to define a taxonomy for security-related events. We also need
to engage more vendor participants.
The discussion then focused on the proposals from John Calcote in
sec-das emails dated July 19, on Record Format and on XDAS taxonomy.
Ian will capture the detailed comments in responses embedded in John's
two emails.
Finally, the members turned to Anton's response to John's proposal on
updating the XDAS taxonomy. The view was that Anton's July 23 email to
sec-das exemplifies why we need a defined way to extract significant
security events. If CEE anticipates 150k different kinds of event it
clearly will be hard to extract the significant ones. If CEE is then
intended to become simply a syntactic container into which you can put
anything you want, then what good will it be to any consumer of its
contents? Ian will put this point in an email response on sec-das.
SOA-Security Project (Joint Meeting of Security Forum with SOA Working
Group)
The objective of this project is to evaluate requirements and potential solutions for securing SOA
environments. Ian gave a brief two-slide introduction
to the progress achieved to date since the project started in January
2007. The current focus is to examine real use-case SOA deployments to
demonstrate and characterize where and how security requirements in SOA
environments differ from those for non-SOA environments and, from
analysis of these use-cases, to go on to identify:
- How the SOA-specific security requirements can be met
- Where existing security standards need to be extended, or new ones
created
- How to capture our findings in a best practice guide on securing
SOA environments
In this meeting the focus was on receiving two presentations on
use-cases for assuring security in SOA environments.
In the first presentation, Ron Williams (IBM Austin) gave a presentation
on two models for web services security: run-time security and mediated (proxy) security.
In the Proxy Model, security is applied at specific (“intercepted”) layers of the network stack: user-centric security sits at the application layers (above TCP in TCP/IP; Layer 7 in OSI), while
traffic-centric security spans Layers 2-7 (OSI), or MAC to Application (TCP/IP). In
the Composition Model, aggregate services are located in a Service Composition Architecture (AAA/CIA).
Ron considered security as a service in the Proxy Model, and also in terms of the traditional AAA (Authentication, Access Control, Audit) security model, in which XACML provides the mechanisms needed for access control.
He then reviewed the XACML 2.0 Model, and explored the nature of the
Policy Administration, Policy Decision, and Policy Information points in
this model. He closed by considering his experience of the best features
(XACML ecstasy) and frustrations (XACML agony). Ron's closing slide
raised key questions on XACML:
- How to optimize for different Decision Requests: Access Control List “style” (resource-centric) versus Capabilities List style (user-centric)
- What about the XACML entitlements (capabilities?) model?
Work continues to get experience-backed answers to these and other
questions (a toy illustration of the two decision-request styles appears
below). His closing comments were "Full Speed Ahead! (but
where’re we going?)", and "Clearly some of us have a lot to learn".
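The following sketch is a simplified stand-in for an XACML policy decision point, not real XACML; it shows how the same policy store can be queried resource-first (ACL style) or subject-first (capabilities style):

```python
# Simplified policy store of (subject, action, resource) permit triples;
# a toy stand-in for an XACML policy set, for illustration only.
POLICY = {
    ("alice", "read",  "reportA"),
    ("alice", "write", "reportA"),
    ("bob",   "read",  "reportA"),
    ("alice", "read",  "reportB"),
}

def acl_style(resource):
    """Resource-centric query: who may do what to this resource?"""
    return {(s, a) for (s, a, r) in POLICY if r == resource}

def capabilities_style(subject):
    """User-centric query: what may this subject do, and to what?"""
    return {(a, r) for (s, a, r) in POLICY if s == subject}

print(acl_style("reportA"))         # ACL-"style" decision request
print(capabilities_style("alice"))  # capabilities-list-style request
```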
In the second presentation, Paul Ashley (IBM Australia) described his
experience in securing SOAs, using a government example use-case. The high-level requirements in this use-case were to allow citizens (millions) and employees (thousands) access to government and citizen information from the Internet via a web-based portal application, allowing users to personalize their access, and enforcing access control so that users can only access information that pertains to them. The design was mandated to use a Service Oriented Architecture approach.
It also had to implement Business Process Management (BPM), and allow inter-government interaction via secured web services, all in a highly available environment.
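As a minimal sketch of just the last of those requirements (the actual system's design is not described at this level, and the record fields here are hypothetical), enforcing that users see only information pertaining to them reduces to an ownership check on each request:

```python
# Hypothetical record-ownership map (illustrative assumption only).
RECORD_OWNERS = {"rec-1001": "citizen-42", "rec-1002": "citizen-99"}

def can_access(user_id: str, record_id: str) -> bool:
    """Permit access only when the record pertains to the requesting user."""
    return RECORD_OWNERS.get(record_id) == user_id

assert can_access("citizen-42", "rec-1001")
assert not can_access("citizen-42", "rec-1002")
```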
Paul described an end-to-end diagram which showed the application
components, and then built up incrementally on this diagram to show what
security he applied where and how, adding in turn:
- IT infrastructure security
- User authentication and authorization
- Web services security
- Service provider authorization and user registries
- Service authorization
- Identity propagation and mapping
- Security auditing and reporting
- SSL/WS-Security
- Identity Management
- Availability
Paul closed by noting that IBM's Redbook Understanding SOA Security Design and
Implementation (February 2007) is freely available for download, and is IBM's most downloaded Redbook to date.
He is currently working on a team that is updating this Redbook, so he
recommended that members look for the new edition in Fall 2007.
Both presentations provoked significant discussion, as a result of
which there was no time in the 90-minute session for Fred Etemadieh to
give a third presentation of slides by Wan-Yen Hsu (HP) on a financial
services use-case. This will be taken up in our series of two-weekly SOA-Security
conference calls. All members, but particularly those in the Security
Forum and SOA Working Group, are welcome to participate in this joint project,
especially through the two-weekly conference calls. The soa-sec email list
has been set up for this purpose, and either Ian Dobson (Security Forum)
or Chris Harding (SOA Working Group) will be pleased to add respective members to
that list.
Closing Review
Security Forum members validated our existing list of projects:
- Risk Analysis (FAIR)
- SOA and Security
- XDAS Update
- Identity Management - in ISO JTC1 SC27 WG5, and on Common Core
Identifiers exploitation in the SC27 Identifiers WG
- Jericho Forum liaison/links