PLENARY

Introduction to Enterprise Information Management
Allen Brown, President and CEO of The Open Group
Allen welcomed all present to this conference. He cited Peter Drucker's lead on the information revolution - we have had a lot of the "T" in IT but not much of the "I". The theme of this plenary is enterprise information management.
Opening Address

Professor Alan K. McAdams, Cornell University's Johnson Graduate School of Management
Alan said he is normally in front of hostile audiences so he has not adapted his slides for friendly ones. He has a serious message - we need to trump the Japanese. He regrets he cannot leave copies of his slides because they are not yet approved for release, but they are available from the OIDA web site at www.oida.org. He has served on the IEEE Committee on Communications and Information Policy since 1982.
The challenge is that we have a knowledge-based economy. Our prosperity depends on availability of information, and for this we need to have the very best telecommunications infrastructure - the best in the world if we (in the USA, but this applies in any knowledge economy) aim to be pre-eminent; i.e., the market leader.
What we need is a pre-eminent telecoms infrastructure. The OIDA Workshop Report says that the US telecoms infrastructure currently ranks 20th in the world; Korea is No. 1. In the US, most broadband is now at the limit of ADSL technology - offering 3 Mbit/s, which is nowhere near competitive with the No. 1 position. In Korea, 70%-80% of the population has broadband, running on average at around 45 Mbit/s, with the promise of reaching 100 Mbit/s soon. The Japanese are close behind. How did this happen? It seems that government ownership and competitive innovation from private companies drove the national telecoms industry in Korea and Japan to improve. Private innovation and the absence of monopoly forced the telecoms providers to improve. Compare this to the USA, where regional monopolies have held market share and so stifled competitiveness, thereby damping down innovation. The US players are moribund and no longer have the financial resources to modernize. More importantly, the US Government has not intervened to break the US regional telecoms monopolies and thereby stimulate the major investment and modernization that is needed.
The key to breaking this situation - in the USA and anywhere else - is end-user ownership of their local networks. Accepting the challenge, Alan listed some new alternatives: the federal government has built the GIG-BE, running at 40 Gbit/s, because it has killer-applications that it cannot get through the commercial infrastructure; there is the National LambdaRail - again running at 40 Gbit/s because its members have killer-applications that they cannot get through the commercial networks - and Cornell is part of this. We have multi-state research organizations - NEREN (North East Research & Education Network) putting a ring around New York, into PA, to Cleveland, and connecting with the Ohio network. It is end-users who are doing this over advanced fibre networks, using Ethernet over fibre infrastructure, capable of Gigabit speeds.
This future has begun. Boeing, Intel, Oracle, Universities are doing this because this facility is not available commercially. User-owned big broadband is possible. The users have real interest in making it work well, yet they remain open to all users who want to join - there is no monopoly, or regulation. Once this takes off these fast networks will avalanche. These networks provide opportunity for reciprocal peering between each other, that can ultimately permit such networks to coalesce into multiple, open, end-to-end, regional, and national infrastructures. The only way the US can bring this availability nationally is to aim ahead of the present expectations. In Japan today, they have fibre networks going into most homes, and they aim to achieve gigabit speeds by 2005.
The US competitive advantage is based on having knowledge goods and knowledge workers. The US must begin now to create a world-preeminent telecom infrastructure. This must become a national priority, designating an institution to guarantee funding. The FCC has a major role here to facilitate the planning, build-out, and efficient operation of new networks. It should also prevent anti-competitive campaigns that aim to block the build-out of end-user-owned networks. This can be done under a business model that has a 20-30-year payback rather than a current telco 3.5-year payback. Federal, State, and Local Governments must support this to make sure it happens. This is not unfair competition: the present telecom monopolies should not argue that others should not do what they themselves are not prepared to do under a business model that does not require a penal 3.5-year payback.
Alan quoted Susan Kalla and Scott Cleland - business analysts - from last week, in support of his argument that the big incumbents have mostly avoided the VoIP world, and they must get back into contention with providing what business and citizens know is possible, or lose business too.
Alan says we have to trump the Japanese only because the Japanese are doing things so well: fibre to the home provided by major electric supply companies, phone-to-phone telephony, Internet connectivity interworking directly with the PSTN, and movement toward full IP technology - so they will soon be able to operate independently of the PSTN. He noted the strong influence of the Japanese Government's initiative in creating its HATS (Harmonization of Advanced Telecommunications Systems) organization, which is harmonizing VoIP in Japan. They know where they are going - and all this while the US telecoms infrastructure runs at 3.5 Mbit/s. This is an abomination. The US nation cannot accept this. It must modernize if it is to maintain its pre-eminence and productivity.
Q: Jack Fujeida - Where does Alan obtain these great figures and analysis
about Japan?
A: From Google - the information is all there to be found.
Q: Walter Stahlecker - Government is allowed to pick off information via
switches in telecom networks; how to square this if the networks are privately owned?
A: We have to decide what is the best way to fight terrorism. Is it by destroying our
economy by fighting inevitable progress in our telecoms base? The day of the current
telephone is passing quickly, and we should accept that and move on.
Q: Chris Greenslade - You have talked about what the US can do, but the US
adopts its own standards. Would you advocate that the US should adopt international
standards, so we can get interoperability?
A: Yes - Alan said he already advances adoption of the Ethernet standard - which is
international. Currently, Canada (Alberta supernet) and France (Actia) are doing it right
with Canadian technology - in France, the French Government has invested 3 billion Euros
to advance fibre optic networks, in competition with France Telecom.
Q: Bill Estrem - We have a huge installed base, so how can Internet2 and
Abilene 2 move in the right direction?
A: Alan said he feels strongly about Abilene 2, which was specifically not done to
innovate - it was done to let the telcos off the hook. However, Alan predicted they will
not prevail. If we innovate the way we must in order to move into the necessary
big-broadband world, we will (almost by accident) get to where we need to be.
Keynote: Managing the Adaptive Enterprise

Mark Potts, CTO, Strategy & Technology, Software Global Business Unit, Hewlett-Packard
Mark said HP is dynamically linking business and IT. In this session he aimed to explain how to manage IT as a business in support of an adaptive enterprise, how HP's management architecture provides this, and to illustrate this by examples.
The CIO challenges he hears are to run IT as a service delivery business, to transform traditional IT departments into service providers, to turn management data into usable business intelligence, and to ensure availability and performance of key business operations. The lag time for IT to change to keep pace with changing business needs is increasing. The challenge is to enable IT to adapt quickly, and to reverse this trend so that change can be more rapidly deployed. The key to the solution of this problem is to move from high maintenance costs to innovation - to integrate people, process, and technology to run as an IT business, linking IT and business, and shifting IT investment from maintenance to innovation.
An enterprise delivers services to customers. The business process model is key to aligning delivery with strategy - this is an interactive activity between the process and the strategy. It involves managing IT as a service delivery system. HP breaks this down into a conceptual architecture involving delivery of business services, applications services, and infrastructure services. The more we can remove people from the operations and automate the adaptability of IT systems, the greater will be the efficiency improvements. Mark illustrated his point using the example of service delivery control to set up new mailbox services for new arrivals to the company.
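By way of illustration, here is a minimal Python sketch of the kind of model-driven provisioning the mailbox example implies - a request flowing through the business, application, and infrastructure service layers with no operator in the loop. All names, layers, and defaults below are hypothetical, not HP's implementation.

```python
# Hypothetical sketch of automated service-delivery control, in the spirit of
# the new-arrival mailbox example: a provisioning workflow driven by a service
# model rather than by manual operator steps. All names are illustrative.

from dataclasses import dataclass


@dataclass
class ServiceRequest:
    employee: str
    service: str          # e.g., "mailbox"
    quota_mb: int = 250   # illustrative default sized by policy, not by hand


def pick_least_loaded_store() -> str:
    # A real implementation would query live capacity data; this is a stand-in.
    return "mailstore-02"


def provision(request: ServiceRequest) -> dict:
    """Walk the conceptual layers: business service -> application service ->
    infrastructure service, with no human in the loop."""
    # 1. Business layer: validate the request against entitlement policy.
    if request.service != "mailbox":
        raise ValueError(f"No automation defined for {request.service!r}")

    # 2. Application layer: choose a mail store with spare capacity (stubbed).
    mail_store = pick_least_loaded_store()

    # 3. Infrastructure layer: allocate storage and create the account (stubbed).
    return {
        "employee": request.employee,
        "store": mail_store,
        "quota_mb": request.quota_mb,
        "status": "provisioned",
    }


if __name__ == "__main__":
    print(provision(ServiceRequest(employee="new.hire@example.com", service="mailbox")))
```

The design point is that the more such steps are captured in a model and executed automatically, the less the lag between a business change and its deployment in IT.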
How to achieve this? There are some key management design principles that must be adopted. Mark itemized these as modularity, integration, standardization, and simplification. He went on to describe the attributes of each, in successive slides.
Mark then described the use of models across the service lifecycle, and associated it with a technical perspective to automate and manage delivery of IT as a set of business services. He closed by inviting the question: why adopt the HP solution? And provided the answer - because it is open, evolutionary, and agile.
Q: Carl Bunje - Virtualization is complex, but in a multi-heterogeneous environment we
have several flavors; e.g., on-demand from IBM. Standards are critical here to helping the
customer to cope with a multiplicity of different systems. He is concerned that there will
be a battle in the standards arena, with different approaches that won't help the
customer. So how can customers influence this situation to ensure the outcome is a benefit
to the customer?
A: Mark replied that the HP solution is about managing what you have; i.e., it expects to live in a
heterogeneous environment. Yes, standards are important. They emerge from many different
sources - DMTF, The Open Group, OASIS, W3C, and many more. However, there is an implied
understanding across the IT industry over which source has the high ground on particular
areas of IT - management, architecture, protocols, languages, etc. It is important to hear
the customer voice in each of these standards arenas, so that this point stays visible and
reinforced as a prime customer requirement.
David Lounsbury introduced this Net-Centric Operations session, in which we received three presentations from Network-Centric Operation (NCO) players.
International Industry Consortium for Network-Centric Operations

Stephen James, Director of Strategic Programs, EMC
Stephen noted that we are in the midst of a change in the theory of warfare - the question is no longer who the enemy is, but how they might exploit weaknesses to do harm. The Department of Defense, the Department of Homeland Security, NATO, and many national Ministries of Defense mandate knowing who is friendly and who is the enemy. Keeping pace in today's fast-changing IT world requires readiness to embrace and adapt to the changes, not resist change. In this scenario, only those who embrace change are likely to succeed. Transformation through adopting operations that use a network-centric approach (NCO) is a major goal for the US DoD and its federal affiliates. It includes creating an intellectual infrastructure, and understanding that critical to successful decision-making is gathering information, transforming it into knowledge through intelligent analysis, and delivering it to decision-makers in the right form so that informed decisions result. Essential to this are adoption of the right standards and operational efficiencies in order to support the operational concept of NCO. Interoperability between information systems is crucial, and this has to be enabled through migration towards open-architecture communications and sensor systems.
Why an industry consortium? It is an industry contribution to achieving NCO. Its objectives are to reduce time-to-market of the IT solutions needed for success, through developing a GIG (Global Information Grid) that will expedite delivery of information. Stephen presented the NCOC vision and primary tenets, and its value proposition.
Member organizations in the NCOC include DoD, DHS, NATO, MoDs, the International Law Enforcement Community, and US State & Provincial and Local Governments. New enterprises of all sizes, and think-tanks and academic institutions, are interested. The NCOC is open to all interested contributors, and is undertaking work based on relevant industry standards and practices.
International Industry Consortium for Network Centric Operations

Marshall (Tip) Slater, Deputy Director Strategic Architecture, Boeing
Tip noted that the slides they are using are NCOC-approved, not personal views. His part of this three-part presentation addressed the operations side of the NCOC. The primary objective is to help customers achieve interoperability. They aim to achieve this through the use of commercial and defense best practices, and by using open standards, processes, and principles. They have an Advisory Council, spanning NCOC activities, to ensure that the customer perspective and influence are kept at high visibility. They also have an Affiliations Council to represent the positions, views, and work programs of the affiliated organizations. The deliverables they are looking at are assessments of pertinent architectures and mandated open-systems standards, and how to bring these into a common view. They also have a Reference Model concept which they are using to characterize their goals: achieving interoperability, using open standards, evolving reusable solutions, identifying gaps in open-standards availability, and checking that their approaches are scalable and cost-effective. They have an education outreach program aimed at furthering understanding of NCO and its essential role in future defense capability.
Tip noted that the NCOC is not a replacement of or competitor to other Government Forums; neither is it closed to non-traditional industry partners. It needs support through memberships in its Advisory Council, and it looks for guidance from and interaction with customer standards organizations. The NCOC will formally commence its role in September 2004.
Joint Integrated Open Architectures

Douglas Barton, Director, Network-Centric Programs & Technology, Lockheed Martin Integrated Systems & Solutions
Doug noted that the US Dept of Defense (DoD) is in a process of transformation. The heart of this transformation is interoperability. The DoD has embodied the requirements in its DoD 5000 and its CJCSI 3170 documents. They have focussed on the effects that they want to happen in a battle scenario, and worked backwards to identify what is needed to fulfill those effects.
Doug outlined the DoD Enterprise Architecture, which defines what they expect solutions to conform to in architecture terms, backed by strategic, operational, and tactical use cases. This maps to an FEAF acquisition-guidance definition of what Network-Centric means. NCES is the enterprise services definition and platform (Common Operating Environment - COE) for network operations, together with a DoD information management and interoperability data strategy. The whole construct defines the environment in which they will be able to deliver the required capability.
A further DoD directive is the over-arching joint integrated architectures from the Navy (FORCEnet), Air Force (C2 Constellation), and Army (LandWarNet), which are partnering to evolve joint architectures that will guide program development. These three reference architectures in the DoD are converging on a multi-service NESI (Net-Centric Enterprise Solutions for Interoperability) architecture. They are also driving convergence to a common program platform, to leverage the DoD program portfolio. In the NCOC, architecture drives net-centricity and program development.
The NCO industry forum is being established by the AFEI and is open to all who have a contribution to make.
The way ahead is to continue to evolve joint architectures and convergence, manage the information, converge on reference architecture (NESI), and partner in more joint demonstration and experimentation of proposed solutions to prove the concepts in operational terms. Their approach is that evolutionary steps can get them to revolutionary net-centric capability.
Q: There is so much similarity in requirements, but it seems there are two tracks - one
says we need a consortium because we don't want to get into competition without customers,
and the other says a consortium is not the right way to move forward. Lockheed Martin is
not part of NCOC at the moment. What is the situation?
A: Doug - Yes, Lockheed Martin is not currently part of the NCOC. Their approach is that NDIA is the
right way forward, not creating a new reference model. However, they do see that NCOC can
help to achieve the goal.
A: Tip - Boeing sees that the issue here is the level of detail that is needed from lower
down the chain to make the network-centric outcomes more robust. They see that more detail
is needed and believe that this detail will come through the NCOC work.
Q: Andras Szakal - How will the NCOC do what is not already being done, and how will
NCOC coalesce the diverse activities that are already underway in various other areas?
A: Tip - We are finding that working at the lower level brings engineers together over a
longer period of time, and that time is needed to gain a common language, because there
are so many diverse cultures out there and each needs to accept a process to merge their
terminologies into a common vocabulary. Only through this process will we achieve a common
understanding and a more robust agreement on the common components in the network-centric
operational infrastructure.
A: Doug - Would like to see the efforts converge.
A: Steve - These efforts have come through a market economy, and the government customer
will be the beneficiary of gaining a confluence between these two approaches, so he looks
forward to that confluence emerging.
Q: Joe Gwinn - What process will be used to make sure that whatever reference model
emerges from this NCOC work will become accepted/adopted as a standard by the
industry-at-large rather than giggled at?
A: Tip - NCOC does not want to create new standards; rather it wants to adopt existing
standards. The idea of the NCOC reference model is to enable a view of how the existing
standards fit and interoperate, and where there are gaps waiting to be filled.
A: Doug - The best way to make it work is to say: here is the reference model - then
require conformance if a supplier wants to play. We can look back at earlier uses of
reference models aimed at promoting interoperability and convergence - the Joint Technical
Architecture for the US armed forces, the DARPA work, the COE drive to standardization -
all we can conclude is that standards are necessary but not sufficient, and that use of
architectural design patterns has facilitated consistency in using the right building
blocks to produce coherent solutions. The nice thing about a reference model is that if
you come up with a good reference model the paperwork effort in doing it improves
understanding and costs a great deal less than developing the software. So, he believes
the DoD is on a good path here, he hopes to get it right, he agrees the NDIA effort is at
a higher level and the lower level detail is important, and he thinks it's up to
enterprises to conform if they want to play in the US DoD space.
Q: Glenn Logan - In the NCOC environment, what are the biggest challenges in leveraging
commercial products and technologies?
A: Doug - There isn't one - we build very high-assurance systems on critical defense
operations, and they work well. However, the commercial marketplace tunes to the demands
of its customers, and if lax requirements are allowed then lax implementations will surely happen. For
example, federated identity - if you call a service which calls another service, etc.,
then the composite service must be less precise. True multi-level security will always be
a problem.
A: Steve - From an organizational perspective, elimination of stovepipes of information is
a major challenge.
A: Tip - Information assurance immediately comes to mind in this question. This often
comes down to culture, within DoD and within industry, so he sees this is probably the
biggest inhibitor.
Q: Joe Bergmann - Doug mentioned that someone is working standards for identity
management - which standards?
A: Doug - SAML (OASIS) is one, federated identity (Liberty Alliance) is another. In
theory, if you look at the Global Information Grid (GIG) it has five initiatives - GIG-BE,
JTRS (Joint Tactical Radio System - theatre RF air-to-ground comms), TCA (Transformational
Communications Architecture - satellite programs), NCES as your enterprise infrastructure, and the information assurance part of
that is the responsibility of DISA. There are also funded programs addressing those
issues. Have I seen the material fruit from these programs - no! Most of the interesting
work is coming from commercial efforts like SAML and Liberty Alliance's federated
identity. Identity management and federated identity are really the big problems - every
time he goes to a DoD theatre to drop in a system he has to bring everything with him ...
DNS, network time protocols, email, directory. He would like to cut all that out, because
it should all be there and it's a waste of money and effort to have to install it, check
it, maintain it - every time, and all taking ten times as many people to do as should be
necessary. So he would desperately like to save the costs and time this involves, by
having the NCES concept and so focus the spend on more useful future work.
A: Joe Bergmann - He would like to point out that some of these issues are being addressed in The
Open Group Real-Time & Embedded Systems Forum.
Q: Joe Gwinn - Having worked on NCES a while ago, the biggest problem was they were
trying to pick products, not standards, and many of these turned out not to be the right
selections. How will NCOC address this problem?
A: Doug - There is certainly the product versus standards argument, where
what matters most is a solution that works. He believes in biological diversity translated
into IT products diversity to mitigate the potential for viruses to disrupt operational
continuity. This also preserves competition.
Allen - So, the biggest requirement that is emerging is that we all want all these efforts to converge.
Doing the Timewarp - Again

Stephen T. Whitlock, IT Security Architect, The Boeing Company
Steve said this presentation has a background of living through several environments - client-server, DCE, CORBA, and now Web Services, etc. - all in the name of achieving efficient business-to-business partnering. Relationships are getting more dynamic and more global. The next plane Boeing will produce will be designed, not just manufactured, by different companies in several countries. We used to have this perimeter which blocked everything, so everything outside was bad and everything inside was good. In reality of course there are bad things inside and good things outside. The perimeter is fading away in many cases, especially with home-workers and mobile workforces. It used to be that data was protected inside secured repositories, whereas now we are moving to embedding security in the data itself.
About a year ago, Boeing came up with five strategies to enable extended business operations while securing enterprise information assets. Steve presented each of these in diagrammatic form to illustrate their deployed technologies, operational impacts, and interactions, and outlined the challenges they have encountered - among them end-to-end encryption, which thwarts perimeter-based defenses.
Steve then presented their policy-driven security service architecture. He also showed a slide describing policy decision and enforcement in more detail - and illustrated the protection systems for their applications or servers, clients or devices, and networks. Supporting services are cryptographic and audit and assessment. He hopes that much of this security infrastructure will be invisible to the end-users, taken care of by underlying automation - provided they keep their SecureBadge in safe undamaged order.
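As an illustration of the policy decision/enforcement split in such an architecture, here is a minimal Python sketch; the rule set, attribute names, and resources are invented for the example, not Boeing's actual services, and a real deployment would evaluate signed assertions rather than a Python dict.

```python
# Minimal sketch of a policy-decision/policy-enforcement split. The rules
# and attributes are hypothetical, for illustration only.

def policy_decision(subject: dict, resource: str, action: str) -> bool:
    """Policy Decision Point: evaluates subject attributes against policy rules."""
    rules = [
        # (resource prefix, action, required group)
        ("design/", "read", "engineering"),
        ("design/", "write", "engineering"),
        ("hr/", "read", "hr"),
    ]
    for prefix, allowed_action, required_group in rules:
        if resource.startswith(prefix) and action == allowed_action:
            return required_group in subject.get("groups", [])
    return False  # default-deny


def enforce(subject: dict, resource: str, action: str) -> str:
    """Policy Enforcement Point: sits in front of the application or server
    and asks the PDP before letting a request through."""
    if not policy_decision(subject, resource, action):
        raise PermissionError(f"{subject['id']} may not {action} {resource}")
    return f"{action} on {resource} permitted"


print(enforce({"id": "alice", "groups": ["engineering"]}, "design/wing.cad", "read"))
```

Separating decision from enforcement is what lets the policy change centrally while enforcement points stay embedded near applications, clients, and networks.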
Q: Bill Estrem - Does your system have a means of doing delegation and impersonation
for external users so an application can act on their behalf within the environment?
A: He would like to do delegation without impersonation, but does not know how to do that
yet. There are very few commercial products that use SPKI, and with X.509 you can't do it.
Q: Andras Szakal - The AUTH initiative from the US Govt - how might you use that?
A: It looked for a long time as though the US Govt would only accept certificates, but guys at
NIST and the NSA seem to agree that using assertions is cheaper and easier.
Q: Claudia Boldman - On your Challenge slide you had end-to-end encryption and how it
thwarts your other security - how can this be reconciled?
A: Cryptographically it can't; our reverse proxy encrypts with an SSL-lite protocol and
decrypts at the perimeter, then re-encrypts or sends in the clear, but it's a
person-in-the-middle attack against a transaction going between a person outside and the
server. So there are two camps to this, and he is on the side of encrypt everything and do
a better job of anti-virus on the host. The anti-virus people in his organization don't
like that, but he thinks they are fighting a losing battle. He does not believe that
perimeter services can keep pace with checking for viruses and malicious XML code at the
perimeter - while it is working at present and keeps working by adding more computing
power, it will not scale to handling the expected increases in the volume of data, so he
is trying to put the protection in the client separately - assuming the environment is always
hostile - and then put protection around the servers. The only thing he wants to protect the
network infrastructure against is denial of service; apart from that, it is plumbing.
Developing a Local Health Information Infrastructure (LHII)

Elliot Stone, Executive Director and CEO, Massachusetts Health Data Consortium
Elliot focused on inter-enterprise connectivity in the healthcare industry. The long-term goal is that within ten years all US citizens will be on a digital healthcare register and have access to their information in a confidential environment. The perception is that healthcare organizations under-utilize IT, and that its increased use can improve quality and reduce medical errors.
Consumers understand the value, but they believe it is already happening when in reality it is not. The vision in the US is that a NHII will emerge from a network of LHIIs, and that real-time clinical information from ALL providers for ALL patients will be available at the point of service. Elliot listed the critical success factors - all patient-safety initiatives - for achieving this.

The MHDC was founded in 1978. It is working with MA-SHARE (Simplifying Healthcare Among Regional Entities) on the killer-application that will make MedsInfo-ED a reality - integrating personal medication information into medical workflow at the point of treatment. It is a patient-safety initiative to automate communication of medication history. Elliot showed the vision for how MedsInfo-ED fits into a community clinical connectivity strategy, and then showed the architecture they have devised for it. Their approach identifies and accesses data sources - it does not hold any actual medical data. Clearly privacy and security are major concerns.
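A minimal sketch of that design point - a locator that indexes where records live rather than holding clinical data - might look like the following; the class, identifiers, and sources are hypothetical, not the MedsInfo-ED implementation.

```python
# Sketch of the architectural point above: the service locates records,
# it does not store clinical data. Identifiers and sources are invented.

from collections import defaultdict


class RecordLocator:
    """Maps a patient identifier to the data sources that hold records,
    never to the records themselves."""

    def __init__(self):
        self._sources = defaultdict(set)

    def register(self, patient_id: str, source: str):
        self._sources[patient_id].add(source)

    def locate(self, patient_id: str) -> list:
        # Returns pointers only; the caller must fetch data from each
        # source under that source's own access controls and audit trail.
        return sorted(self._sources.get(patient_id, set()))


locator = RecordLocator()
locator.register("patient-0042", "pharmacy-network-A")
locator.register("patient-0042", "hospital-B-emergency-dept")
print(locator.locate("patient-0042"))
```

Keeping only pointers in the shared infrastructure limits the privacy exposure to the index itself, which is one reason such architectures are favoured where confidentiality is paramount.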
Q: Stefan - In Germany there is a national SmartCard scheme that is well advanced -
perhaps we should collaborate to share experience.
A: Most of the US projects will be done on a local voluntary basis, and each LHII will be
free to decide whether they use SmartCards or not. However, it is always worth benefiting
from the experience of others who have done similar implementations.
Establishing an Enterprise Information Strategy

Michael Brown, MD, CIO, Harvard University Health Services
Mike explained that his enterprise covers provision of health care to the population of Harvard University. This includes managing and operating with outside agencies who normally provide healthcare services - including insurance cover - to the transient student and visiting academic population of the University, as well as its more stable administration staff. Communication is critical to avoid errors in medical diagnosis or treatment, and to improve efficiency. The information security role is getting much attention - foremost at present from the Institute of Medicine, the MHDC, the Leapfrog Group, the Health IT Czar (David Brailer), and the highly inquisitive public interest promoted in the general press.
Why is more not happening? They want electronic medical records (EMR). However, standards for creating, processing, and managing these are not sufficient, the legacy medical model is paper-based, the costs of moving to EMR are a major part of the unknown, and privacy and security concerns abound. Added to this, the healthcare industry does not yet have unique patient identifiers. So the Harvard University Health Service (HUHS) strategy is to follow the standards that currently do exist, prioritize their progress on EMR with due regard to stability/costs/benefits/efficiencies, support progress where it sees advantage, recognize limits, and accept the fact that the transition may be difficult but it must be undertaken.
Issues high on their agenda include getting interoperability across the healthcare industry, data feeds, use of outside systems, a sound policy for exchanging information over email, and scanning-in of paper-based records. HUHS is excited by the information systems developments in healthcare, and wishes to be a wise leader in this arena.
Q: Mike Lambert - What are the challenges to the patient in communicating by email?
A: This depends on educating the patient in use of the email and delivering the
information they want.
Information Technology Strategy - Case Study

Scott Ogawa, Chief Technology Officer, Children's Hospital Boston
Scott explained that the ISD (Information Services Department) of their Children's Hospital is customer-centric, performance-driven, and focused on solutions with quality measurements and feedback. Scott listed how ISD believes it fulfills its commitment to his hospital's user community. He reviewed how his ISD had decided it must revise its IT strategy and adopt a major change in approach.
They developed guiding principles based on this strategy, and identified four main focus areas.
They have several project areas that have been running since 2002, and they monitor the effectiveness of these projects. The satisfaction survey results between 2002 and 2004 show dramatic improvements. There is still much to tackle, but they have much to be pleased with, and this encourages them to continue in becoming better still.
So Many Good Ideas, So Little Cooperation: The Technical Politics of Spam Control

Dr Nathaniel Borenstein, Distinguished Engineer, IBM Lotus Division; President, Computer Professionals for Social Responsibility
Nathaniel noted that no-one disagrees that Spam is bad. The problem is where to begin. Email is complex, and controlling Spam will require multiple control operations. He showed a sequence of slides that illustrated the common perceptions of how email works, and all the places where problems can be introduced. Even when considering compliance policing, all of us are targets, having to prove we are not spammers ourselves.
Spammers ignore any architectural rules (of course), so the good guys (police) and vigilantes also do so. There are no silver bullets - Nathaniel characterized the nature of several proposals that do not pass the tests for effective solutions. He noted that to be effective, multiple combined approaches must be well engineered so they work well together. Everyone needs a gatekeeper that has fine judgement, but we can live without perfection - in fact we may prefer imperfection, because over-zealous Spam filters often prevent us receiving emails that we really want to get.
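For illustration, a toy statistical filter of the kind such gatekeepers use is sketched below; this is the textbook naive-Bayes technique with invented training data, not a proposal from the talk.

```python
# Toy illustration of statistical filtering - a "gatekeeper with judgement".
# Textbook naive-Bayes scoring; the training corpora are invented.

import math
from collections import Counter

spam_words = Counter("free offer credit card free winner offer".split())
ham_words = Counter("meeting agenda attached minutes review agenda".split())
n_spam, n_ham = sum(spam_words.values()), sum(ham_words.values())
vocab = set(spam_words) | set(ham_words)


def spam_score(message: str) -> float:
    """Log-odds that the message is spam, with add-one smoothing."""
    log_odds = 0.0
    for word in message.lower().split():
        p_spam = (spam_words[word] + 1) / (n_spam + len(vocab))
        p_ham = (ham_words[word] + 1) / (n_ham + len(vocab))
        log_odds += math.log(p_spam / p_ham)
    return log_odds


print(spam_score("free credit offer"))   # positive: looks like spam
print(spam_score("agenda for meeting"))  # negative: looks legitimate
```

The smoothing term is what keeps a never-seen word from producing a zero probability, and the threshold chosen on the log-odds is exactly the trade-off between blocking spam and blocking wanted mail.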
Filtering is a coping strategy. Nathaniel listed several alternatives to filtering - though these are not really alternatives, since they can be used together with filtering. He then turned to practical steps toward cooperation over agreeing the best solutions.
So we have to plan for the long haul towards best solutions, and collaboration to decide them is to be encouraged. Nathaniel reviewed the points to consider in planning next steps.
Longer term, measures can include personal cryptographic signatures, payment mechanisms (this tends to be complex), collaborative filtering (information sharing), establishing adoption of global standards and legal measures across different jurisdictions, and more automated legal enforcement.
Reasonable goals should be kept in mind. We should remember how parasites and diseases co-evolve with victims. The lessons from history suggest that in 1000 years, AIDS may become like measles. Spam can only be reduced, not eliminated absolutely.
Nathaniel closed with some modest predictions, and advice to the victims (all of us) of Spam.
Q: Joe Gwinn - It seems that most Spam involves trying to sell me
something, so why should I worry too much?
A: Sales emails are almost legitimate. The serious Spam problem is those emails that trick
you into giving your credit card information - and lead to financial fraud.
S/MIME Gateway Certification Launch

Allen Brown and Elliot Stone
Mike explained briefly what a trademark is, and what trademark ownership means - conformance to the trademark is backed by international trademark law. He then explained how the Messaging Forum has worked to produce a Secure Messaging certification - The Open Group S/MIME Gateway Certified - through collaboration with the IETF in producing an agreed specification for an S/MIME Gateway, from which The Open Group produced the required Conformance Documents to define the requirements that conformant products must satisfy.
We are pleased to announce that two vendors have submitted products that have demonstrated conformance to the S/MIME Gateway certification requirements; certificate awards were presented by Allen Brown and Elliot Stone. Two other vendors have declared intent to complete submission of products that they expect will satisfy the conformance requirements.
Keynote: Essential Information Flows for Enterprise Management

Richard C. Sturm, President, Enterprise Management Associates
Rick did a level-set on what Enterprise Management is: the remote monitoring and control of computing and communications resources to meet the needs of the business, covering performance, availability, and security. We can compare this to the ISO FCAPS definition - Fault, Configuration, Accounting, Performance, Security.
Management requirements start with having to know what you've got - an inventory - and the resources needed to deliver the required level of service. You have to know the objectives you're aiming to achieve - and these must take in the user's expectations, the business requirements, and the service-level objectives. You also need to know the status of "what is" - this requires instrumentation to measure availability, performance, and security, to interpret the results, and to report them in the most usable form. Boundaries exist at the edges of every object. For enterprise management the boundaries exist between managed objects and managed software, between management tools, and between the tools and human operators.
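A small sketch can make these three ingredients - inventory, objectives, and measured status - concrete; the resources, thresholds, and measurements below are invented for the example.

```python
# Illustration of the three ingredients above: an inventory of managed
# resources, stated service-level objectives, and measured status.
# All names, thresholds, and measurements are invented.

inventory = {
    "web-frontend": {"availability_pct": 99.95, "response_ms": 180},
    "order-db": {"availability_pct": 99.80, "response_ms": 420},
}

objectives = {
    "availability_pct": 99.9,  # minimum acceptable availability
    "response_ms": 250,        # maximum acceptable response time
}


def evaluate(inventory: dict, slo: dict) -> dict:
    """Report, per managed object, which objectives are being missed."""
    report = {}
    for name, measured in inventory.items():
        misses = []
        if measured["availability_pct"] < slo["availability_pct"]:
            misses.append("availability")
        if measured["response_ms"] > slo["response_ms"]:
            misses.append("response time")
        report[name] = misses or ["meeting objectives"]
    return report


print(evaluate(inventory, objectives))
# {'web-frontend': ['meeting objectives'], 'order-db': ['availability', 'response time']}
```

In practice each of the three dictionaries lives in a different tool, which is exactly where the boundary problems Rick describes next come from.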
Information flows exist everywhere - within the objects themselves, between managed objects, with and between external objects, etc. Do boundaries matter? Well, they tend to act as barriers to the flow of information. Heterogeneous environments require efficient information flows. New IT initiatives - Adaptive Enterprise, Autonomic Computing, Grid Computing, On-Demand Computing, Utility Data Center - all require a boundaryless environment. We need to recognize there are numerous boundaries and most of them inhibit information flows. Rick gave examples. We also have to cope with the fact that most organizations use multiple products to address management challenges, and the mix of tools introduces multiple challenges, like the lack of a common data model, data not synchronized, data not in a single shared repository, only limited data is exposed, tools have to function in semi-isolation, etc. All this adds to higher costs of management, impedes deployment of new IT technologies, and makes effective management of services much more difficult.
How can we overcome the problems that boundaries present? Boundaries don't have to be significant barriers if we can manage to render them less and less visible. Strategies to achieve this include using standard information exchanges, a shared data model, a common data-store infrastructure, a service-oriented architecture, and the like. Architectures for good enterprise management need to facilitate good information flows. We must demand management tools that are open - standards-compliant, with open interfaces. Single-vendor sourcing of management tools is not the way forward.
Q: Prof Alan McAdams - You mentioned BSM - what does it stand for?
A: Business Systems Management, or perhaps more commonly - Business Services Management.
Allen - The clear message from Rick's presentation is that boundaryless does not mean no boundaries, but that they need to be permeable to enable effective information flows. We need to get serious about following an open standards strategy to make our systems more manageable.
Boundaryless Storage Management

Larry Krantz, Senior Technologist, Office of the CTO, EMC², and Chairman Emeritus of SNIA
Larry observed that, in the storage view of Moore's Law, while connectivity options are multiplying as systems grow and technology improvements advance at a similar pace, storage requirements are doubling every 12 months - and with the emergence of external storage and optical networks this pace is showing signs of increasing to doubling every 8-9 months.
In the mid-90s the Enterprise Storage scheme arrived - making storage easier to manage and resources easily re-deployed. Then we moved to the storage area network (SAN). Why networked storage? Because it offered consolidation of resources, availability and scalability, sophisticated data protection and copying, increased capacity utilization, separation of the relationship between servers and storage, logical and dynamic allocation of capacity, and improved performance.
The business drivers for networked storage clearly emerge as consolidation for protection of company assets, reduced indirect and overall costs, management for continuous operations, and improved ROI. Larry quoted International Data Corp - "Storage consolidation technologies have passed the early-adopter stage, and have proven to be a viable strategy".
But networked storage does not completely solve the problem. The storage devices still need to be managed, as do the storage networks, the complex interaction of multiple devices, the mapping of storage to servers, sophisticated automation/policy for allocating and tracking storage, and consequent complexities in setting sensible service-level agreements.
Why is storage management important? Because management costs for storage are increasing to the point where they are much greater than the acquisition costs for storage devices. There are three levels of networked storage management - server, networked components, and storage components - which adds to the complexity of deciding the best approach to storage management in heterogeneous environments. The only way to reduce this complexity is to adopt open standards for the core functions that we all need - these core functions are not market differentiators, so there is no real value in keeping them secret; it is better to move up the food chain to provide real product-feature differentiators. The SNIA (Storage Networking Industry Association) shared storage model aims to provide for this. They now have a standard protocol for agents that monitor storage interfaces, giving a single consistent view. Larry showed how this is implemented in the OpenPegasus CIMOM story.
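Under the covers this standard protocol is CIM/WBEM, so a management client can get that single consistent view by querying the CIM Object Manager. A hedged sketch using the pywbem client library follows; the host, credentials, and namespace are placeholders, and a given vendor's SMI-S provider may expose these classes in its own namespace.

```python
# Hedged sketch of talking to an SMI-S provider through its CIM Object
# Manager (such as OpenPegasus) using the pywbem client library.
# The CIMOM address, credentials, and namespace below are placeholders;
# CIM_StorageVolume and its properties follow the CIM schema.

import pywbem

conn = pywbem.WBEMConnection(
    "https://smi-provider.example.com:5989",  # placeholder CIMOM address
    creds=("monitor", "secret"),              # placeholder credentials
    default_namespace="root/cimv2",
)

# One consistent view, whatever the vendor: enumerate storage volumes and
# compute each volume's raw capacity from block count and block size.
for volume in conn.EnumerateInstances("CIM_StorageVolume"):
    capacity_gb = (volume["NumberOfBlocks"] * volume["BlockSize"]) / 1e9
    print(volume["ElementName"], f"{capacity_gb:.1f} GB")
```

The point of the standard is precisely that this loop does not change when the array behind the CIMOM does.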
The Storage Management Initiative (SMI) is more than a specification - it is an environment for a common educational understanding of the whole approach to deliver an open standards-based solution. SMI-S version 1.0 is now published - a very important first step. The strategic vision is that SMI-S will become an ANSI Standard later this year, and by 2005 it will become an ISO Standard. Larry also presented SNIA's intended SMI-S technology roadmap. He compared their long-term objectives to the likely future evolution of the aeroplane, where rather than doing the flying, the pilot becomes the monitor to check that the automated flying manager (computer) is doing everything right.
Allen noted that Larry had set the level for SMI-S interoperability such that he believes there is plenty of room for vendors to provide significant feature-rich product differentiation.
Intelligent Control in the Dynamic Management of Application Service Levels

Tom Bishop, Chief Technology Officer, VIEO Corporation
Tom noted that this presentation dovetails very well with Rick Sturm's message. He introduced his view of intelligent control systems (ICS) - measure, analyze, control, all performed in the context of a defined business policy. He presented the goals of an ICS, to actively manage the application infrastructure to maintain critical service levels within the user-specified policy, and to optimize the use of resources without compromising this or introducing additional risk to the IT infrastructure.
Tom took a small data center as a model to simulate all the components and illustrate how the theory for dynamic control of application service levels works. Within this scenario, he presented the high-level math for calculating the response times in this small predictive model. By statically changing the amount of resource available to different service requirements he showed how he can change the response times but not be sure of achieving the required service levels in any part. However, by viewing the problem as a repeating optimization problem, he can dynamically allow the resulting calculation to turn the control knobs so as to balance the response times to keep within the required service levels.
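A minimal sketch of this repeating optimization, using the standard M/M/1 queueing approximation R = 1/(mu - lambda) for response time, is shown below; the workloads, capacities, and step size are invented for illustration, not VIEO's model.

```python
# Sketch of a repeating optimization that "turns the control knobs":
# capacity is shifted from the fastest service class to the slowest until
# response times even out. Uses the M/M/1 approximation R = 1/(mu - lambda).
# All workloads and capacities are invented; a real controller would
# re-solve this continuously from live measurements.

def response_time(capacity: float, arrival_rate: float) -> float:
    """M/M/1 mean response time; infinite if the class is saturated."""
    return float("inf") if capacity <= arrival_rate else 1.0 / (capacity - arrival_rate)


def rebalance(arrivals: dict, total_capacity: float, steps: int = 200) -> dict:
    """Iteratively move a small slice of capacity from the fastest class
    to the slowest one, converging toward balanced response times."""
    share = {svc: total_capacity / len(arrivals) for svc in arrivals}
    for _ in range(steps):
        times = {svc: response_time(share[svc], lam) for svc, lam in arrivals.items()}
        slowest = max(times, key=times.get)
        fastest = min(times, key=times.get)
        share[slowest] += 0.01 * total_capacity
        share[fastest] -= 0.01 * total_capacity
    return share


# Two service classes with unequal load sharing 10 units of capacity.
arrivals = {"orders": 6.0, "reports": 2.0}
shares = rebalance(arrivals, total_capacity=10.0)
for svc, cap in shares.items():
    print(svc, round(cap, 2), "->", round(response_time(cap, arrivals[svc]), 3))
```

With a static split the heavily loaded class saturates; the repeated re-solve drives the allocation toward the point where both classes meet the same response-time level, which is the essence of the approach Tom described.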
What does this mean for us? It proves that dynamic, policy-driven balancing of application service levels is achievable.
However, this result will create another problem - how to manage policy across disparate resources in the managed application environment? These problems need industry effort to solve in an open standards way. They are being addressed in the Applications Quality Resource Management (AQRM) Forum, and Tom (the Chairman of this Forum) invited members to join them to work in this arena.
Q: OMG is also working in this area - are there any plans to cooperate with them?
A: We have not yet taken steps to do so but clearly this is an appropriate move.
Allen pointed out that OMG is an organization we work very closely with, and he cited the collaboration we already have on OMG's MDA and TOGAF's ADM.
Enterprise Content Management: Beyond the Repository

Mike Ball, Director & General Manager, ApplicationXtender Product Business Unit, Documentum (a Division of EMC)
Mike observed that we are all inundated with information. There are information silos across all organizations, so content management is very important. 80% of information arrives unstructured and requires analysis to slot it into the right file(s). The solution to managing this problem is enterprise content management (ECM) - involving a lifecycle of content management that covers create/capture, manage, deliver, and ultimately archive or destroy. Mike noted the severe penalties that can be applied to business executives if they fail to comply with regulations on the control, management, and handling of information, and cited the maximum 20-year imprisonment penalty that can be imposed for extreme failure to comply with the Sarbanes-Oxley regulations.
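As a sketch of how that create/capture, manage, deliver, archive-or-destroy lifecycle can be driven by policy, the following expresses retention as a simple policy table; the document classes and retention periods are invented for illustration - real policies come from regulation and counsel.

```python
# Sketch of a policy-driven content lifecycle: active content is managed
# and delivered; once its retention period elapses it is archived or
# destroyed per its class. Classes and periods are invented examples.

from datetime import date, timedelta

RETENTION = {
    "financial-record": timedelta(days=7 * 365),  # keep 7 years, then archive
    "routine-email": timedelta(days=365),         # keep 1 year, then destroy
}

FINAL_STAGE = {
    "financial-record": "archive",
    "routine-email": "destroy",
}


def lifecycle_stage(doc_class: str, created: date, today: date) -> str:
    """Where a document sits in its lifecycle under the policy table."""
    age = today - created
    if age < RETENTION[doc_class]:
        return "manage/deliver"       # active content, under management
    return FINAL_STAGE[doc_class]     # retention elapsed: archive or destroy


print(lifecycle_stage("routine-email", date(2003, 6, 1), date(2004, 8, 3)))     # destroy
print(lifecycle_stage("financial-record", date(2003, 6, 1), date(2004, 8, 3)))  # manage/deliver
```

Encoding the rules as data rather than ad-hoc practice is what makes compliance demonstrable when an auditor asks why a document was, or was not, destroyed.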
So, one business driver for performing good ECM is compliance. However, a well-run business will be much more interested in doing good ECM so as to achieve efficiency, consistency, good customer service, secure archiving/retrieval, consolidation, and thereby be more profitable as well as enjoying a high reputation. Recent surveys show ECM as being the highest priority in business CIO's top-ten issues.
Information Lifecycle Management (ILM) is a strategy of pro-active, business-centric management of information. It needs a unified approach, and is policy-based. ILM aligns service levels with business requirements. Mike considered data protection, which in his world involves backup and replication - and these two are becoming closely intertwined. We have already mentioned that compliance management is a major requirement. Email is becoming a big business risk, so it needs a business strategy for managing it.
Summarizing, ECM meets the requirements of ILM at the lowest cost of ownership.