Introduction
Allen Brown, President of the Open Group
Allen began the conference by welcoming
the delegates to what promised to be a remarkable series of presentations on the
key topic of Web Services. He introduced the keynote speaker, Tim
Berners-Lee.
Tim began his presentation with an overview of the technical
history of Web Services - in particular the distinction between Web services and the conventional
Web.
Web Services compared with the Web itself
He described how fundamentally the Web began as a means of communicating
information: the
ability to take a URI and
obtain the information that it refers to. It is globally unambiguous and, unlike a database system, infinitely scalable. However, browsing does not of
itself achieve any change - that is a very important property of the Web; it
simply allows information to be obtained, and it is resilient to situations
where that information does not exist.
Web Services, on the other hand, do effect change. We want, for instance, to put in an order, to enter into a commitment. Typically, in doing so, we are not just providing a URI but also a lot of other parameters - information about ourselves, for instance - and there are many ways of doing this at the moment. At the same time, at present, we are to some extent losing something - the property of the Web that enables us to obtain information directly from a URI - because the information we entered, and the properties of the business transaction, are no longer directly accessible to us.
Remote Procedure Call History
Tim then moved on to compare Web Services with the original concept of RPCs: a programmer who was used to designing a system that ran on one computer could now design a distributed system in the same way, and functions that were located on a remote system would be connected automatically. The stub would pretend to be the required procedure on the local machine, but would talk to the real machine. The concept was that a remote call was just like a local call, and all the complexity - the location of the server, managing retransmissions, providing security - was hidden. There was also the bonus of inter-language calling, which could be used to mix languages on the same machine.
He summarized the limitations of the RPC model: at times, all the things that the RPC model hides need to be made visible. An application may need to know, or to specify, the location of the server; to be aware of network problems; to manage security - in fact it can be argued that security is an application problem. A further limitation is that with the strict interface model of RPCs, evolution is limited.
So why the drive for XML-based RPCs?
Tim then moved on to consider the reasons for developing an RPC mechanism that uses XML. The first reason is that people have XML and HTTP expertise and tools and want to exploit them in remote processing - an easy start, because HTTP headers and XML are easy to use and encode. Also, XML namespaces provide a different sort of flexibility: when you send an XML message, vocabularies from different namespaces can be mixed in the same document, so that information from different sources can be cut and pasted. That sort of system allows systems to evolve gracefully, although potentially it allows systems of uncontrolled complexity to be built.
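A small sketch of the mixing he described (the element names and namespace URIs are invented for illustration): an invoice vocabulary and a shipping vocabulary can coexist in one message, each qualified by its own namespace, so a receiver that understands only one vocabulary can still process its own part of the document.

    <!-- Hypothetical message mixing two vocabularies via XML namespaces -->
    <order xmlns:inv="http://example.com/ns/invoice"
           xmlns:ship="http://example.com/ns/shipping">
      <inv:amount currency="USD">129.95</inv:amount>   <!-- invoicing vocabulary -->
      <ship:address>                                   <!-- shipping vocabulary -->
        <ship:street>123 Main Street</ship:street>
        <ship:city>Boston</ship:city>
      </ship:address>
    </order>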
Before moving on he reflected on the fact that the term 'Web Services' can
apply to a very wide range of transactions - from mainframes to cellphones;
within an organization and between organizations; some will be put together
casually, others will be carefully designed and controlled. So we should
be careful of generalizations - all generalizations are bad :-)
Differences between RPC and Web Services
Tim then discussed the differences between RPCs and Web Services. The
assumption in much of the discussion of Web Services is that they are spanning
company boundaries, and therefore trust boundaries. Whereas in RPCs there
was often an understanding of what was happening on the other side of the call,
in XML there is none. All that is visible is that an XML message has
crossed the gap, and unless XML encryption is used, the contents of transactions
are visible. In passing he mentioned one of the current attractions - and
dangers - of using HTTP: that it would more readily reach through a firewall.
He went on to differentiate between two kinds of Web Service: safe and unsafe. Safe operations are akin to Web browsing, while 'unsafe' ones make a commitment. The former should be implemented as Web pages, using HTTP GET; but it is essential that GET should never be used for making a commitment.
Simple Object Access Protocol
Tim went on to discuss the two styles of SOAP:
RPC style and document style. RPC style treats the message body as parameters
and typically uses a procedure calling paradigm, with systems designed accordingly;
in document style the message body is processed as a standard XML document,
and the processes operate on a peer-to-peer basis and would expect to use
an XML toolset. At the moment both styles are used and developing,
and he expressed the view that both would continue.
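The distinction shows up directly in the message body. As a rough sketch (SOAP 1.1 envelopes; the service names and namespaces are invented), an RPC-style body encodes a call and its parameters, while a document-style body carries an agreed XML document to be processed as a whole:

    <!-- RPC style: the body element names a procedure, its children are parameters -->
    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
      <env:Body>
        <m:GetQuote xmlns:m="http://example.com/stock">
          <m:symbol>IBM</m:symbol>
        </m:GetQuote>
      </env:Body>
    </env:Envelope>

    <!-- Document style: the body is simply an XML document, e.g. a purchase order -->
    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
      <env:Body>
        <po:PurchaseOrder xmlns:po="http://example.com/po">
          <po:Item partNo="872-AA" quantity="2"/>
        </po:PurchaseOrder>
      </env:Body>
    </env:Envelope>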
The Semantic Web
He then went on to express the view that as these things become more complex, and many Web Services are composed together to provide a solution, it will become necessary to do some modeling of the real-world situation. The Semantic Web is about modeling at the reality level rather than the document level - adding to basic Web Services the semantics of what the various objects mean, and documenting relationships between them. All the putting together of Web Services is going to be handled using RDF, which allows the semantics of messages to be documented. WSDL would ideally have been an RDF application, and will in the future be linked to it.
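As a minimal sketch of what documenting the semantics of a message in RDF might look like (the vocabulary here is invented; a real description would draw on an agreed ontology), a service and the real-world effect of its messages can be stated as machine-readable triples:

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:biz="http://example.com/ns/business#">
      <!-- The ordering service accepts purchase orders, and a purchase
           order results in a real-world commitment -->
      <rdf:Description rdf:about="http://example.com/services/ordering">
        <biz:acceptsMessage rdf:resource="http://example.com/ns/po#PurchaseOrder"/>
        <biz:resultsIn rdf:resource="http://example.com/ns/business#Commitment"/>
      </rdf:Description>
    </rdf:RDF>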
This approach provides:
- Much more application-awareness than a simple RPC.
- A choice about whether security is provided by the system or by the application.
- Retention of messages, as an essential approach to security and auditing.
- XML signature and encryption.
- Key management (XKMS, SAML, etc.).
On the subject of Composition, Tim mentioned two layers of description: the Web Services Description Language at the lower layer (syntax, binding), and the DAML Services ontology at the upper layer; he felt that getting these together would be really interesting.
On the longer-term question of the relationship between the Semantic Web and Web Services, Tim concluded that applications would send Semantic Web messages between Web Services. However, the W3C has not insisted that the two groups working on these developments should use each other's technologies.
Questions

Q: Alan Peltzman, DISA: Do you have any thoughts on the SNMP problem and software vulnerabilities - exploitative behavior that's used on the Web - and how that can be handled generically? Is there a need for a code of ethics?

A: TB-L: I'm not an expert on the software engineering aspects of this, but there is an assumption that internet packets are coded according to the internet standards. There's an undertaking when you use the net that users will not cheat and send packets more frequently than they should; misuse is fraud, and should be treated that way.

Q: Bob Blakley, IBM, asked about security: In the RPC model it was easy to figure out where RPC packets should be encrypted, and the easy things were done - access control lists corresponded to applications and people. If we get to the Semantic Web we will have got to a dangerous place from a security point of view, because the failing of security systems has been the mismatch between syntax and semantics. Having failed at the syntax level, how can we succeed at the semantic level?

A: TB-L: It will have to be done. There has been a tendency to put security at the lowest possible level, but beginning with semantics makes more sense, rather than trying to bolt on security. Semantic rules will help because they are expressed in business terms.

Q: Ron Schmeltzer, ZapThink: You haven't mentioned Service Oriented Architecture, and technologies such as WSDL and UDDI. Is this just an XML view of RPC?

A: TB-L: I haven't mentioned the UDDI area of the world, but frankly what I want to do is to broaden our horizons. I feel that when it comes to analyzing the Web Services that are out there, it will not scale unless it's Web-like. Web Services are web-like because they are infinitely scalable; different applications will be publishing meta-data about their Web Services.

Q: Geoff McClelland, Center for Open Systems: Would you care to comment on the last sentence on your outline: 'Importance of Open Source philosophies'?

A: TB-L: It's really important. Every standards group is put together because organizations realize that it is better to talk than not. There is a common good in making an interoperable specification, and the whole explosion of the Web would not have happened had there not been specifications that were completely open, royalty-free, and patent-free. With Web services, and all the new developments like voice browsing, each is going to be foundational. It is always possible that someone will try to claim ownership, to establish patents, in which case the potential market growth won't happen. It is really important that the base standards are freely available, and W3C is working hard to encourage companies that have used patents in the past not to do so in the future.
Mark Forman
Associate Director of Information Technology and E-government, Office of Management and Budget
Mark began by explaining that he would be giving a perspective from a user point of view. He explained that in the coming fiscal year the US government would be spending almost $53bn on IT, but that in the past it had not gained proportionate benefits from this spend - it is his job to change this. He explained his view that Web services provide a big part of the future. Traditionally government has been a couple of years behind industry, and a big part of his job is to change that.
Although the US government actually provides very few on-line services to its
citizens - this is provided by state and local government - it is very dependent
on the Web for its operation. He illustrated this by the experience of the
Department of the Interior, which last November was prevented by the courts from
using the Web - at the same time as the White House wasn't receiving mail because of
the Anthrax scare.
The Presidential Initiative
The goal of President Bush is that the Government should move from
being agency-centered to being citizen-centered, and Mark introduced the vision:
'an order of magnitude improvement in the federal government’s value to the
citizen; with decisions in minutes or hours, not weeks or months'.
He defined E-Government as 'the use of digital technologies to transform
government operations in order to improve effectiveness, efficiency, and service
delivery'.
The principles by which these aims are to be achieved are:
- E-Government is an integral component of the President’s Management
Agenda
- Operations are to be market-based, results-oriented, and citizen-centered
- IT use should be simplified & unified
The issue is not one of making information available on the Web - there are
thousands of Federal Web sites - the task is basically one of Enterprise
Resource management, and the problem is illustrated by the way in
which departments approach ERP: each does its own, but in fact nobody is
actually looking at the total enterprise.
Mark explained that his approach was to look at leveraging some basic principles, and the fundamental one is
'Simplify and Unify'.
Simplify and Unify
Simplify: at the moment, if a small business needs to do business with the
Federal Government it needs to hire a lawyer or an accountant, so simplification
is key.
Unify: In the 1990s different arms of the Federal Government were able to
gather vast resources of information which they did not share, and as a result organizations
and individuals were required to submit the same information many times.
Mark described how the Government, like any business, has to focus on key customer
segments:
- in the citizen arena, leveraging one-stop approaches;
- in the business arena, consolidating redundant reporting requirements (at the
moment US business spends 7.7bn staff hours sending information to the
Government);
- similarly in the intergovernmental arena, when working with the
states, there is a need to automate the interchange of information
- finally, focusing on employees, there is a need to take eBusiness tools
and techniques and to bring them into Federal Government.
Mark explained that the Administration's citizen-centered e-strategy integrates with Web trends, and gave some examples of the sorts of initiatives that are being taken to implement the 'Simplify and Unify' strategy.
- Government to Citizen - allowing, for instance, tax returns to be submitted over the Web.
- Government to Business - leveraging the Web so that businesses can find out which regulations they have to comply with. 42 million people last year needed to get answers to this question, and there has been a dramatic switch from use of paper to companies obtaining information over the Web.
- Government to Government - Geospatial Information is becoming the backbone of many intergovernmental activities, and e-Authentication is at the heart of many initiatives.
Key Trends
Mark then went on to summarize the key technology trends that the Federal Government is tracking in support of this strategy:
- Increasing broadband content and transactional interoperability between government, industry, and individuals.
- Seeking to identify commodity transaction components that facilitate increasingly agile integration: shared services and online transactions will together drive business process integration.
- Looking for Web services to become business services.
- Service delivery models have to focus on lowering transaction costs and empowering customers.
- An increased focus on privacy and security goes with increased information sharing.
Web Services
He introduced the working definition of web services for the federal government:
- A web-accessible automated transaction that is integrated into one or more business processes - the things that allow the Government to build business functionality.
- Generally invoked through web service interfaces (e.g. SOAP, WSDL, UDDI), with interoperability, business process or workflow management, and functional service components built around common objects.
A Web Service is not a complete solution but a component that contributes to
the construction of a solution.
Mark went on to identify two key opportunities that arise through the use of Web
Services: accelerating cycle time, and enterprise modernization. If Web
services can be coupled and scaled in a realistic manner there is a real
opportunity.
He went on to detail some of the duplication of work and of projects that
arise because different departments are doing the same thing - in grants
management, for instance, $150m is being spent on 12 different projects when in
reality the need is probably for around $50m on 3-4 projects. The task is
fundamentally one of accounts payable, and if there were a good Web Service for
that function it could be leveraged in a very effective way.
Workforce Trends
Mark referred to the fact that over the next 5-6 years there will be a 50-70% turnover in Government
workforce. The new group of people coming in will be knowledge workers,
expecting to use the new technology. At the moment people in government
think of doing business on paper - of 6 weeks to write a letter and get a response.
That has to change.
He went on to outline some samples of the sort of Web Services opportunities
that exist, such as
- Services to Citizens
- Disaster Management: Location of Assets, Predictive Modeling Results,
and Availability of Hospital Beds
- Support Delivery of Services
- Strategic Planning: Access to Capability, Decision Support, Data
Availability & Analysis
- Internal Operations and Infrastructure
- Financial Management: Debt Collection, Payment Processing, Collection
& Reporting
He saw the following fundamentals for success in applying Web Services:
- Identify common functions, interdependencies, and interrelationships, and evaluate barriers to information sharing.
- Implement them in a way that addresses both the opportunities and risks of a “networked” environment - security becomes a key element.
- Leverage technologies to achieve the benefits of interoperability while protecting societal values of privacy, civil liberties, intellectual property rights, etc.
Achieving the Goal
The Government is working to develop a component-based enterprise architecture that addresses the business lines, data,
information, and technology necessary to meet government missions.
Each quarter, each Cabinet-level Department and Agency is rated according to
the progress it has made, and these ratings are reviewed by the
President. Each department is expected to have a modernization blueprint.
Mark then turned to the burning question of how to quickly leverage Web Services, and highlighted the following issues that need to be resolved:
- Who creates and owns a UDDI server and enlists WSDL?
- What shared web-accessible transaction components are available, and from whom (e.g., search, epay, PKI-related services, patching services, edge services, ...)?
- Which agency has a business model and can write the business case? At present agencies are struggling to create the business case.
- Supply- or demand-driven? Are Web Services provided when there is enough demand, or ahead of demand in order to achieve change?
Questions

Q: Ron Schmeltzer, ZapThink: A recent report suggested that XML is not appropriate for the Federal Government; do you have any comment?

A: MF: I think they said they thought the Government didn't think it was appropriate. When I came into this job we were justifying duplicating XML data, and I've not allowed that.

Q: Allen Brown, The Open Group: It seems as though the Federal Government is behind Industry in trying to break down the stovepipes of information; how much can you learn from what Industry has done?

A: MF: We leverage best practice heavily - the experiences that organizations have had in facing these same challenges. The challenges are well understood and documented, but there is a chasm between people who understand the business and those who understand the technology; we have to bridge that chasm.
Alan Leidner
Assistant Commissioner and Director of Citywide GIS, New York City DoITT
Alan began his presentation by asking the audience to imagine that they were walking through Central Park and, in a remote corner, found someone unconscious; it was not clear whether they had been the subject of an attack, or whether they were ill, or had experienced an accident. In response to a 911 call the operator said 'where are you?'. We would have had no idea. Maybe the cell phone had the forthcoming facility to give a location in latitude and longitude; but how would the operator, and the emergency services, be able to interpret that information? At present they could not.
A second scenario was of a massive explosion in mid-town Manhattan. Hundreds of responders from many agencies rush to the scene. Information is immediately needed about the infrastructure: gas, electricity, subway; structural information about buildings in the area, and floor plans; and information about the storage of chemical and nuclear agents in the vicinity. Without this information lives may be lost. How can this information be collected quickly and made available to everyone who needs it?
Both these examples are geographic issues, and also Web issues, and Web-based
geographic systems are increasingly needed.
Alan pointed out that almost all data has a geographic component - at the simplest level, an address field. In New York, as in many other places, there are large databases with a geographic component which are unconnected, and there is no way of relating the data in one to the data in another. The challenge is to find a standard way of putting together these sources of information and then providing the results to everyone who needs (and has a right to) it. He illustrated the difficulty of doing this by showing three representations of the same area of New York which did not align with each other.
Alan went on to describe how the first step in New York had been to
take aerial photographs of the entire City of New York such that each pixel of
the photograph corresponds to an area on the ground one foot square. To
this have been added 30 layers of data that depict the geography of the city -
the buildings, streets and services. The same exercise is being conducted
in cities and areas throughout the US.
Investment so far had totaled $50m, with another $25-50m in planning. The need, therefore, is not for a way of gathering data - it is available in vast quantities; the need is to make it available and capable of exploitation. Alan went on to describe some applications of the information to date:
- In NY the Police Department has used spatial information related to the occurrence of crime to adjust its strategies, leading to a dramatic fall in crime - the murder rate went from over 2000 to under 1000 - and the saving of these lives justifies on its own the entire investment in GIS that the city has made in the last 20 years.
- Another application has been the battle against West Nile fever: the city tracks where birds and small mammals are dying, and spraying programs are tailored precisely to the need.
- All the data that has been gathered is going on the Web, and a new service - 'My Neighborhood' - is giving 8 layers of information to the public, enabling them, for example, to discover in the event of a hurricane whether they are in an evacuation zone, and to find a route to safety.
- A further application is to coordinate the capital projects of the many agencies operating in New York, to avoid the scenario where one agency lays a street and the following week another comes and digs it up.
- Following the attack on September 11th there was an urgent need to respond, not just to the emergency services but to the public, to let them know what was happening. An emergency service was set up with 20 GIS workstations and 6 plotters, and 50 GIS operators working 24x7, with 25 workstations in clusters around the town. In the entire response to this emergency, and the subsequent Anthrax scare, the IT staff who provided the information were almost entirely GIS-trained.
Finally, Alan mentioned some of the data types needing to be integrated in
order to provide the total service:
- Remote Sensing
- Infrastructure and Building Interiors
- Digital Pictures and 3D
- Aerial Photography
- Inspections Data
- Dozens of Other Databases
- On the Scene Data

and some of the technologies needed to provide spatial data integration support:

- Mobile Data Centers with Telecom
- Universal GPS in hand-held devices
- Pervasive High Speed Wireless
- Ultra High Speed Satellite or Fiber
- Data From Anywhere To Anywhere
- On the Fly, Real Time Integration
Questions

Q: Walter Stahlecker, HP: How do you manage security, given that the information you have would be so valuable to so many people?

A: AL: We realize that we have a powerful data tool, and we are very careful about the information that is available publicly.

Q: Barry Smith, Mitre: This is an amazing accomplishment - what are you doing to sustain this data?

A: AL: We overfly the city regularly, and we keep records so that if we need to go back in history we can.

Q: How could you reorganize the data against some other scale than geography?

A: AL: Applications like social security number or name occur to me, and they have been done.

Q: Ed Lake, Interserve: Can you integrate cell phone outages - where phones will or won't work?

A: AL: That's probably been figured out by the cell phone companies.

Q: Tim Berners-Lee, W3C: You had a slide with 3 images superimposed which showed that some of the scaling was inaccurate. How can you resolve that?

A: AL: We don't have a good solution - only an expensive one. The only way is to use an algorithm, with hand correction, to resolve the differences. It cost us $1m.

Q: Carl Bunje, Boeing: You talked about location-based correlation of data from multiple sources. How are you integrating access to this data - for instance, City Finance being able to project taxes?

A: AL: Our use of the Web is in its pre-infancy. Under the Bloomberg administration that sort of application will come to the fore.
Simon Pugh
Vice President, Standards & Infrastructure, MasterCard International
Simon introduced his presentation by explaining that he intended to look at e-Commerce Trends
as they affect MasterCard, specifically what is being done in the area of Web Services.
eCommerce
now represents 3-4% of MasterCard's business, and this is becoming an extremely
important new channel. Inhibitors to buying on the Internet are the length
of time involved, and concern over security and privacy. MasterCard's
e-Business Strategy is to take the use of the credit card into new areas, both
in terms of transaction size and type.
One reason for the success of credit cards has been the concept of a guaranteed payment, with benefits to the consumer and the merchant. Today that guarantee is offered in the physical space but is not extended to internet transactions, mainly because the technology is not available to ensure it. SET (Secure Electronic Transaction) was developed a few years ago, but was found to be undeliverable.
Web Services
Simon then turned to the use of Web Services within MasterCard.
Interest exists in using web services, but there is concern that the standards are still in development, certainly from a security point of view. Other issues are a perceived lack of stability, the need for vendor-neutral tools, and the costs of technology change.
There is however potential for the use of Web Services, and one current
application is the
ATM Locator, which enables customers to locate the nearest MasterCard ATM from a
device such as a WAP phone or a PDA.
MasterCard is also working on a Secure Payment Application, an initiative for internet transactions
which provides a guaranteed payment to the merchant. The consumer is authenticated for the transaction by the issuer,
and a token (UCAF – Universal Cardholder Authentication Field) is provided to the
merchant, who then submits the populated UCAF to receive the payment guarantee.
This application will be XML based, and is addressed to the URL of the
authentication mechanism of the customer's bank.
With mobile transactions the situation becomes more complex, and MasterCard is looking to build an Authentication Web Service.
In summary, Simon concluded that there are enormous opportunities for Web
Services, and at the same time a need for improved standards, especially
covering security.
Questions

Q: Timothy Chien, Oracle: Regarding internal projects' use of Web Services - has there been any movement in that direction? Also, what is your view of SAML?

A: SP: No internal projects yet; my colleagues working in that area have the same issues as we do. Regarding SAML, there is no use at the moment - though MasterCard is a founding member of the Liberty Alliance, whose work is based on SAML. We have our feet in a number of camps; we haven't necessarily found a single answer.

Q: Walter Stahlecker, HP: How will Web Services impact fraud?

A: SP: Overall, fraud is higher on the internet than in face-to-face contact, but excluding 'adult' and gambling sites, fraud rates are about the same.
Ted began his presentation by considering the parallel between Web Services
and the growth of rail travel: rail beds helped transportation, but standard rail beds revolutionized
it. He explained that he intended to cover three topics:
- What are Web services - and why now?
- What’s the impact of Web services?
- How will Web services evolve?
What are Web services - and why now?
He defined Web services as 'standard software interfaces to data and to business
logic that provide a simple channel for business services, that make the Internet
smarter'.
- Standard transport - TCP/IP and HTTP provide global email and access to the Web. A lot of very important things were achieved using those technologies.
- Middleware standards, such as XML, SOAP and WSDL, provide a lot more - more intelligence, more communication.
- Standardizing business processes, using standards such as BPML and ebXML, is a lot harder, because we have to agree what a purchase order means. In practice this agreement comes about for one of two reasons: either the Government says you must, or one company dominates the channel.
He went on to describe the impact of standards, which reaches far beyond the item being standardized. The 'standard' 2x4 lumber in the US influences the design of saws, nail guns, architectural drawings, and much else. The five-line musical staff allows everyone to compose and play the same music. XML, SOAP, WSDL, and UDDI have the potential to have the same impact.
What’s the impact of Web services?
Web Services are a tiny technology but they have a huge potential impact, because they dramatically lower the barriers to software communication - the result is new software networks which unlock value: they access the inaccessible, connect the unconnected, and relate the unrelated.
How will Web services evolve?
Ted explained that he anticipates three waves of adoption: opportunistic
integration; business Web Services; new Software Networks. The real
opportunity is to know things that others need to know; to be aware of the
status first; to be able to access hundreds of uncoordinated services on the
basis of geography, or a calendar; and with a SOAP client on Set Top Boxes the
market becomes enormous.
New software networks tap untapped value, and Web services affect the entire executive suite: providing the CIO with control, the COO with productivity, the CFO with cost replacement, the CMO with influence, and the CEO with strategy.
In summary: Web services make the Internet smarter.
Steve Nunn
Steve began by referring to the remark that Tim Berners-Lee had made in the morning session, about the effect on the growth of the market for Web Services of the way in which the industry treats IPRs and patents. We absolutely have to avoid the standards in the Web Services area being encumbered by Intellectual Property Rights.
Steve referred to all the activity going on in developing products and
standards for Web Services, and said that from his perspective this issue is the
single most significant factor affecting the growth of the market.
All of us involved in the creation of Web Services have an opportunity and a duty to treat them as enablers to the widespread adoption of these technologies.
With this introduction he handed over to Andrew Updegrove, explaining that Andrew advises on the setting up and running of consortia, in particular on topics such as IPR, and that he had recently testified to the Department of Justice and the Trade Commission on the subject.
Andrew Updegrove
Andrew had subtitled his presentation 'Intellectual Property Rights policies - A Call to (Lay Down)
Arms'.
He began by pointing out that there is nothing inherently different about Web
Services from the IPR point of view; the bad news, however, is that there is
more turmoil on this subject than at any time in the standards movement, and
regrettably little progress is being made.
He asked the delegates to consider the following quote:
"The biggest challenge that newcomers to the consortium world must face
is grasping the fact that standard setting is about giving away rights in some
technology in order to make money on other technology. This means that those who would form a consortium must enter into a
sort of 'through the looking glass' world where patents are
impediments rather than tools, royalties are unwanted encumbrances, licenses
exist for the sake of disclaiming rights, and competitors are as welcome as
partners."
He summarized a few legal disputes in the last six years that have caused
companies to adopt entrenched positions on IPR policies, and indicated that
consortia are spending increasing amounts of time debating the issue. As a
way out of this situation he suggested a return to basics, and argued that
standards are about enabling safe purchasing decisions, opening new markets,
creating opportunity and value. Standards, he argued:
- are meant to be owned in common, so that they can be trusted by everyone;
- are meant to be open to all - some consortia have been formed recently that have been selective in accepting membership applications;
- are about giving away in order to get;
- only become valuable if they are adopted;
- only get adopted if they are credibly non-proprietary, and are easy to license.
In order to benefit from the standardization process, IPR owners need to be flexible, to get out of the 'proprietary' box, and to realize that in creating a standard, a different set of rules applies than simplistic and narrow commercial values.
The current situation is made worse, he suggested, when companies are
represented in meetings by IPR lawyers rather than by managers with a genuine
business interest. It is also made worse by the fact that each consortium
has its own policy, creating confusion and inhibiting the standards process.
He suggested a set of steps to improve the situation:
- to come up with a "standard" standards IPR policy (compromise
once, agree often).
- to think of our representatives as employees of a consortium when they
attend a meeting; some companies insist that they own the IPR to
ideas suggested by their staff in a standards meeting.
- to make a judgment when we join a consortium that the standards ends justify the IPR policy means, and then live with it. IBM has recently taken a major step in this direction.
- to focus on what's to be gained and not on what (often only theoretically)
could be lost.
- to remember that if a standard is worth creating, it is usually worth
creating royalty free; occasionally this might not be possible, but it is
the most desirable aim, and in the context of the Web, this is the only
possible approach.
- to remember that standards are too important to game; too often companies
try to distort the standards process to their own ends, with the result that
the standard, if any emerges, is not trusted in the marketplace.
- to remember that if we can't police ourselves, then, in the US, the government will do
it for us.
Finally, Andrew asked how we might go about achieving these aims, and argued
for an IPR constitutional convention 'to bring us back together
and let us get back to work'. He invited delegates to continue the
discussion at www.consortium.org.
Questions

Q: Bob Blakley, IBM: The patent process is clearly broken, but it seems to me that standards are improved by disagreement between people with different points of view. Surely the problem is government policies that have polluted the original intention of patent policies by allowing patents to be granted for unworthy submissions.

A: AU: We don't run the patent department, but we do run our own consortia. Within the present patent system it is possible to have a policy that protects members' rights; the challenge is for us to adopt it.

Q: Chris Greenslade, Fortuna Consultants: You talked about getting together to sort this out, but if any one country is happy for its people to exploit patents unreasonably, we will never get worldwide agreement.

A: AU: There hasn't been any meaningful effort to address the international issue.
Patrick Gannon
President and CEO, OASIS
Patrick introduced his presentation by explaining that he intended to look at
recent technology trends, to look at the various types of Web Services and the
development of eBusiness Standards; to consider the role of standards, and
finally to consider OASIS Initiatives.
He suggested that we are experiencing an eBusiness Tidal Wave, in which the
use of internet-based technologies is becoming an integral part of our business
processes.
He compared computing in the 20th century with the current information age,
in which the consumer is empowered by information access, and in which
businesses win by being open and by leveraging new mechanisms to drive their own costs down.
In particular he argued that this change is being driven by the XML technology from
W3C, enabling componentized applications to deliver solutions irrespective of
the platform on which they operate or the software in which they are
written. In recent years HTTP has been the protocol of choice, but what is
needed is a way to link data together, not just to send it to browsers.
He described two types of Web Services:
- Remote Procedure Call (RPC)-based, for supporting simple Web Services, supported by the Simple Object Access Protocol (SOAP), Web Services Description Language (WSDL), and Universal Description, Discovery and Integration (UDDI).
- Conversational or message-based Web Services, for supporting loosely coupled asynchronous models - a key requirement for Enterprise-class Web Services - supported by ebXML Messaging Services, ebXML Collaboration-Protocol Profile, ebXML Registry, and the Business Transaction Protocol.
He explained how the concepts for ebXML grew out of original work done in
forums such as EDIFACT, and suggested that it provides a standard way to
exchange business messages, conduct trading relationships, communicate data in common terms,
and to define and register business processes.
The main ebXML concepts (illustrated by the message sketch after this list) are:
- Business Processes – defined as models, expressed in XML
- Business Messages – expressed in XML
- Trading Partner Agreement – specifies parameters for businesses to
interface with each other – expressed in XML
- Business Service Interface – implements trading partner agreement –
expressed in XML
- Transport and Routing Layer – moves the actual XML data between trading
partners
- Registry/Repository - provides a “container” for process models,
vocabularies, and partner profiles.
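The sketch below shows how several of these concepts surface in a single message, following the general shape of an ebXML Messaging Services (ebMS 2.0) header; the party, CPA, and service identifiers are invented. The SOAP Header carries an eb:MessageHeader naming the trading partners, the agreement governing the exchange, and the business service and action being invoked.

    <env:Envelope
        xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"
        xmlns:eb="http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd">
      <env:Header>
        <eb:MessageHeader env:mustUnderstand="1" eb:version="2.0">
          <eb:From><eb:PartyId>urn:example:buyer</eb:PartyId></eb:From>
          <eb:To><eb:PartyId>urn:example:seller</eb:PartyId></eb:To>
          <eb:CPAId>urn:example:cpa-001</eb:CPAId>         <!-- trading partner agreement -->
          <eb:ConversationId>order-20020722-01</eb:ConversationId>
          <eb:Service>urn:example:ordering</eb:Service>     <!-- business process -->
          <eb:Action>SubmitPurchaseOrder</eb:Action>
          <eb:MessageData>
            <eb:MessageId>20020722-01@buyer.example.com</eb:MessageId>
            <eb:Timestamp>2002-07-22T10:00:00Z</eb:Timestamp>
          </eb:MessageData>
        </eb:MessageHeader>
      </env:Header>
      <env:Body>
        <!-- business message payload, expressed in XML -->
      </env:Body>
    </env:Envelope>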
For those with an interest in following up the subject further, Patrick
offered the following links:
Steve Bratt
Chief Operating Officer, World Wide Web Consortium (W3C)
Steve introduced his presentation by stating his objective: to motivate an understanding of the W3C development framework, the history and organization of Web Services, the W3C's mission and accomplishments, and its future plans.
He defined W3C's Mission as 'to lead the Web to its full potential', and its
role as providing the vision, design, standards and guidelines, mainly focused
on the core infrastructure of the Web.
W3C has:
- Close to 500 Members, worldwide.
- An Advisory Board, and a Technical Architecture Group, chaired by Tim
Berners-Lee, which advises on issues which span working groups.
- A team of about 60 technical staff, providing a vendor neutral view.
- Liaisons with 27 standards organizations - he supported the emphasis of
the day on interworking and cooperation between consortia.
- A strong Process that provides a governing framework towards goals and design principles,
providing vendor neutrality.
- A demonstrated record of successful productivity - there are 47 W3C Recommendations, and the number is growing.
He emphasized that there is no "rubber stamping" and illustrated
this by the fact that WSDL and SOAP were submissions from other
organizations, but were validated in W3C Working Groups.
Specifications should be royalty free and openly available.
He gave a brief history of Web Services at W3C:

Dec 1999: Mail list created for XML protocol discussions
Feb 2000: W3C provides an interim plan for XML protocol
May 2000: SOAP 1.1 becomes acknowledged submission
Sep 2000: Creation of the XML Protocol Working Group
Mar 2001: WSDL becomes acknowledged submission
Mar 2001: www-ws@w3.org created for Web Services discussions
Apr 2001: Workshop on Web Services in San Jose, CA
Jan 2002: Web Services Activity formed, with public lists, meeting records, and editors' copies, chartered to produce royalty-free, open specifications
Steve gave his own definition of Web Services:
A Web service is a software application identified by a URI, whose interfaces and binding are capable of being defined, described and discovered by XML artifacts, and which supports direct interactions with other software applications using XML-based messages via internet-based protocols.
Steve went on to describe the work of the four groups with responsibilities
for Web Services.
The Architecture Working Group has 76 participants representing 52 Member organizations,
with the goal of defining the Web services architecture.
The XML Protocol Working Group has 62 participants representing 34 Member organizations. Its goal is to develop a messaging protocol allowing machine-to-machine interactions. This group is working towards SOAP Version 1.2, and is chartered to design an envelope encapsulating XML data, a convention for RPC (Remote Procedure Call) applications, a data serialization mechanism, and a mechanism for using HTTP as a transport.
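For illustration, the chartered deliverables fit together in an envelope that looks roughly like this (the namespace URI shown is the one eventually used by the SOAP 1.2 Recommendation; Working Drafts of the period used interim URIs):

    <env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
      <env:Header>
        <!-- optional blocks of processing information, e.g. routing or security -->
      </env:Header>
      <env:Body>
        <!-- encapsulated XML data, or a call following the RPC convention -->
      </env:Body>
    </env:Envelope>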
Steve gave the current status of the SOAP/XML documents as:
- SOAP 1.2, Part 1: Messaging Framework: Last Call Working Draft
- processing model, envelope, Protocol Binding Framework
- SOAP 1.2, Part 2: Adjuncts: Last Call Working Draft
- data model, encoding, RPC Convention, binding and Feature Description
Convention, Message Exchange Pattern (SOAP/MEP), HTTP Binding
- XML Protocol Requirements: Working Draft
- XML Protocol Usage Scenarios: Working Draft
- XML Protocol Abstract Model: Working Draft
- SOAP 1.2, Part 0: Primer: Last Call Working Draft
- SOAP 1.2 Specification Assertions and Test Collection: Last Call Working
Draft
- SOAP 1.2 Email Binding: Note
The Web Services Description Working Group has 50 participants representing 33 Member organizations, with the goal of defining a format for describing the interface types and structures of messages. It has responsibility for the Message Exchange Pattern (WSDL/MEP), protocol bindings, and the application of WSDL. It has produced a first Working Draft of Web Services Description Language (WSDL) 1.2.
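A rough sketch of what such a description contains, using WSDL 1.1 syntax since 1.2 was then only a first Working Draft (the service name, operation, and URIs are invented): the messages define structure, the portType defines the abstract interface, and the binding and service tie that interface to SOAP and to a concrete endpoint.

    <definitions name="Quote" targetNamespace="http://example.com/quote"
                 xmlns="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                 xmlns:tns="http://example.com/quote"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <message name="GetQuoteInput">
        <part name="symbol" type="xsd:string"/>
      </message>
      <message name="GetQuoteOutput">
        <part name="price" type="xsd:float"/>
      </message>
      <portType name="QuotePortType">
        <operation name="GetQuote">
          <input message="tns:GetQuoteInput"/>
          <output message="tns:GetQuoteOutput"/>
        </operation>
      </portType>
      <binding name="QuoteSoapBinding" type="tns:QuotePortType">
        <soap:binding style="rpc"
                      transport="http://schemas.xmlsoap.org/soap/http"/>
        <operation name="GetQuote">
          <soap:operation soapAction="http://example.com/quote/GetQuote"/>
          <input>
            <soap:body use="encoded" namespace="http://example.com/quote"
                       encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
          </input>
          <output>
            <soap:body use="encoded" namespace="http://example.com/quote"
                       encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
          </output>
        </operation>
      </binding>
      <service name="QuoteService">
        <port name="QuotePort" binding="tns:QuoteSoapBinding">
          <soap:address location="http://example.com/services/quote"/>
        </port>
      </service>
    </definitions>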
The Web Services Coordination Group is made up of the Chairs of the Web Services Working Groups, and is responsible for coordinating deliverables, schedules, and dependencies with other W3C groups and external organizations.
Finally, Steve reported on some key next steps, including:
- Architecture: completion of the architecture definition and use cases
- XML Protocol: SOAP 1.2 to Recommendation - this should happen late this
year or early next.
- WSDL: address issues found against WSDL 1.1, design a binding to SOAP Version 1.2,
and develop an RDF mapping.
Winston introduced the DMTF Mission:
To lead the development of management standards for distributed desktop,
network, enterprise and Internet environments.
DMTF Technologies include:
- Desktop Management Interface (DMI)
- Common Information Model (CIM)
- Directory Enabled Network (DEN)
- Web-Based Enterprise Management (WBEM)
- System Management BIOS (SMBIOS)
- Alert Standard Format (ASF)
DMTF developed the Common Information Model (CIM) in 1996 to provide a common way to share management information enterprise-wide; the CIM specification provides the language and methodology for describing management data, and the CIM schema provides models that various implementations can use to describe management data in a standard format.
Winston saw WBEM as the forefather of Web services. It was developed as a set of management and Internet standard technologies, harnessing the power of the Web to integrate the management of networked systems and devices. It is built on the CIM model, and integrates existing standards such as SNMP, DMI, and CMIP. The Open Group's Pegasus product is an Open Source implementation of WBEM.
Current WBEM activities include updating WBEM to address/include emerging standards, such as SOAP.
DMTF is collaborating with OASIS to sponsor a new management protocol
technical committee, and is developing open industry standard management protocols to provide a
Web-based mechanism to monitor and control managed elements in a
distributed environment.
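To make the web-based mechanism concrete: in WBEM's existing CIM-XML protocol, a management request is a CIM operation encoded in XML and carried over HTTP. A minimal sketch of a request enumerating the instance names of a CIM class follows (the namespace and class are chosen for illustration):

    <CIM CIMVERSION="2.0" DTDVERSION="2.0">
      <MESSAGE ID="1001" PROTOCOLVERSION="1.0">
        <SIMPLEREQ>
          <IMETHODCALL NAME="EnumerateInstanceNames">
            <LOCALNAMESPACEPATH>
              <NAMESPACE NAME="root"/>
              <NAMESPACE NAME="cimv2"/>
            </LOCALNAMESPACEPATH>
            <IPARAMVALUE NAME="ClassName">
              <CLASSNAME NAME="CIM_OperatingSystem"/>
            </IPARAMVALUE>
          </IMETHODCALL>
        </SIMPLEREQ>
      </MESSAGE>
    </CIM>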
He argued that work on standardization should go beyond printing a document, and needs to include active involvement in implementation, with such aspects as compliance testing and certification.
Winston went on to describe the work on CIM Certification, which is laying
the groundwork for compliance and certification. The DMTF selected The Open Group to develop a CIM compliance and certification
program, with the aim of a program launch this fall. True interoperability will be achieved when vendors can objectively
demonstrate compliance.
In summary, DMTF aims to achieve holistic management, to emphasize the importance of Web services infrastructure and the
interdependence of its components, and to cooperate and collaborate to develop and promote compliance.
Richard began by asking the question: will Web Services magically solve the problems of heterogeneity and semantic information sharing?
He described the aim of Web Services: it should be as easy to connect systems together as it is to connect power points. There are six different standards for power points, but interoperability is a trivial problem because adapters are easily available. He argued that it will not be possible to achieve complete standardization, but that we should be able to provide the 'adapters' that will make the problem trivial.
He made the point that heterogeneity is a permanent state: there is a diversity of programming languages, operating systems, and networks, but OMG's Model Driven Architecture (MDA) initiative is aimed at recognizing heterogeneity and expecting it.
The aim is to have replaceable components which can hide the diversity and
provide interfaces that are totally independent of the underlying technologies.
The Model Driven Architecture has UML, MOF and CWM at its heart, using Web
Services, CORBA, Java, .NET and XMI/XML to support a range of services.
He described the process of modeling and developing an application using the MDA, in three phases:
- Start with a Platform-Independent Model (PIM) representing business functionality and behavior, undistorted by technology details.
- Map the PIM to specific middleware technologies via OMG standard mappings - an MDA tool applies a standard mapping to generate a Platform-Specific Model (PSM) from the PIM. Code is partially automatic, partially hand-written.
- Map the PSM to application interfaces, code, GUI descriptors, SQL queries, etc. - the MDA tool generates all or most of the implementation code for the deployment technology selected by the developer.
Carl Reed, PhD
Executive Director, OGC Specification Program
Carl introduced the Vision of The OpenGIS Consortium as 'A world in which everyone benefits from geographic information and services
made available across any network, application, or platform'.
OGC envisions the full integration of geospatial data and geoprocessing
resources into mainstream computing and the widespread use of interoperable, commercial geoprocessing software throughout the information infrastructure.
The OGC was founded in 1994, when a number of organizations started digitizing maps and hundreds of systems were developed with no interoperability or ability to exchange data. The OGC was formed to identify ways of making spatial data interoperable.
Web Services are key to the OGC and to its members because:
It is the task of Geography ... to present the known world as one and continuous, to describe its nature
and position ... (Geographike Uphegesis, Ptolemy)
This is exactly what we are trying to do today. Carl provided some very
interesting statistics: there are something like 50 million map images a day transmitted on the Internet,
and 85 to 90% of all enterprise data stores have a location attribute.
Application complexity varies dramatically: from simple map images to routing, driving
directions, change detection, dynamic discovery, and dynamic visualization.
More and more web sites, portals, and web map applications are making use of geospatial data, and all the trends indicate that we are facing the emergence of a Geospatially Enabled Web of data and services, in which a virtual global geodata repository will be available anytime, anywhere, to anyone. However, in order to achieve that, the problem of relating different representations of spatial data needs to be resolved. Before 1999 the OGC spent its time getting agreement on terminology and on systems of measurement.
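Part of the OGC's answer is a standard XML encoding for spatial data. A minimal sketch in the style of the Geography Markup Language (GML) follows, with the feature type and coordinates invented for the example, showing how a geometry property can be attached to an application-defined feature:

    <Road xmlns="http://example.com/roads"
          xmlns:gml="http://www.opengis.net/gml">
      <name>Broadway</name>
      <gml:centerLineOf>
        <gml:LineString srsName="EPSG:4326">
          <!-- coordinate pairs along the road's center line -->
          <gml:coordinates>-73.99,40.73 -73.98,40.75</gml:coordinates>
        </gml:LineString>
      </gml:centerLineOf>
    </Road>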
In 1999 the Web Mapping Test Bed created a breakthrough in providing for
interoperability. It demonstrated to 200 people that it was possible to
achieve interoperability across the Web, and reshaped the further work of the
OGC.
Carl went on to describe the work currently under way on the OGC Web Services 1.2
specification, including
- Sensor Web capabilities - gathering information from mobile sensors - in airplanes,
for instance, and static ones.
- Vector Data Feature production
- Imagery production
- Common Architecture Enhancements
- In all cases, there is a need to deal with registries (for data, services, etc.), catalogs, and multi-source, multi-vendor environments.
OGC Web Services for the Mobile Domain: Location Services Test bed - these
services are required for the telecommunications industry; services such as
providing for mobile phone users:
- A navigation Service (Routing)
- Find “nearest” Service
- Yellow/White Pages
- Geocoding Service
- Get Device Location
Work is currently under way on the Critical Infrastructure Protection Initiative. The premise of this initiative is that timely, accurate information, easily accessed and capable of being shared across federal, state, and local political jurisdictions, is fundamental to the decision-making capability of those tasked with the homeland security mission. (FGDC Report on Homeland Security and GIS)
Semantics are critical to the success of this initiative - Carl used the
example of a road, which may have many different names, and different databases
may characterize it differently. Interpreting this information is a matter
of semantics, and he presented two quotes to summarize the point:
"The Semantic Web is an extension of the current web in which
information is given well-defined meaning, better enabling computers and people
to work in cooperation." (Tim Berners-Lee, James Hendler, Ora Lassila, The
Semantic Web, Scientific American, May 2001)
“It is daunting to contemplate the many semantic possibilities that occur when Information Communities begin to try to link up in the Spatial Web.” (Jeff Harrison, Exec. Dir., OGC Interoperability Program)
The key issue (and goal) of ubiquitous geospatial computing is "serendipitous interoperability," interoperability under
"unchoreographed"
conditions, i.e., data and services which weren't necessarily designed to work together (such
as ones built for different purposes, by different vendors, at a different time,
etc.) should be able to discover each others' functionality and be able to
take advantage of it. Being able to "understand" other spatial data
and services, and reason about their services/functionality is necessary, since full-blown
ubiquitous computing scenarios will involve dozens if not hundreds of data sources and services, and a priori standardizing of the usage scenarios is an
unmanageable task. (Derived from paragraph in the W3C “Web Ontology Language Working Draft”,
2002)
www.opengis.org
George Siegle
Past Chairman, OAG
George introduced the OAG as 'A not-for-profit Industry Consortium
to promote interoperability among Business Software Applications and
to create and/or endorse one or more standards for easier business software
interoperability'
The OAG began in 1994 when a large group of ERP vendors realized that they
had a common requirement for large modules within ERP to exchange data.
The organization has come a long way since then, and its membership has grown.
Its goal is 'everywhere to everywhere integration'. George emphasized
that all OAG specifications are royalty free and freely available on their Web
site.
He provided a list of the OAG's deliverables:
- Applications architecture
- Collaboration definitions in UML
- XML Messages defined in prose
- XML Messages expressed in XSD
- Data Dictionary
- Using industry standard frameworks to avoid duplication
George used the example of a Sales Order Processing application to illustrate the processes and messages that are defined in describing a collaboration.
What are Web Services? George provided the OAG's definition (the fifth of the day!): a network-based, technology-neutral protocol and associated functions accessible through standard XML messaging.
He also illustrated the technology standards they use:
- SOAP provides the platform-independent envelope
- WSDL provides the platform-independent connection
- UDDI provides platform-independent definition
- XML provides platform-independent business language definition
Moving to the subject of OAGIS XML, he suggested that OAGIS XML is the best business language for Web Services because it is technology-neutral, can be used with SOAP now, and can also be used for B2B, trading hubs, and internal integration (A2A).
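For a flavor of the format, the sketch below follows the ApplicationArea/DataArea pattern of an OAGIS Business Object Document, in which a verb (Process) is paired with a noun (PurchaseOrder); the element names and content are approximate and invented for illustration rather than taken from the OAGIS schemas.

    <ProcessPurchaseOrder xmlns="http://www.openapplications.org/oagis">
      <ApplicationArea>
        <Sender>
          <Component>Purchasing</Component>    <!-- sending application -->
        </Sender>
        <CreationDateTime>2002-07-22T10:00:00Z</CreationDateTime>
      </ApplicationArea>
      <DataArea>
        <Process/>                             <!-- the verb -->
        <PurchaseOrder>                        <!-- the noun -->
          <!-- order header and line items, as defined by the OAGIS data dictionary -->
        </PurchaseOrder>
      </DataArea>
    </ProcessPurchaseOrder>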
He summarized as follows:
- Web Services standardizes the plugs
- OAGIS standardizes the current
- Terrific synergy as we evolve
All the afternoon speakers then assembled for a panel.
Question 1
Jason Bloomberg ZapThink: How is OASIS working to converge with the W3C Web
Services Standards?
Gannon: One of the first activities between us is to focus on where we are
today, beginning with security. OASIS is participating in the W3C Web
Services Architecture Group.
Soley: OASIS and OMG have joined in an interoperability pledge (with The Open
Group); we see this as part of a general trend between consortia to work
together.
Bratt: Also, our joint members play an important part in ensuring coherence between our work.
Question 2
Stephen Cook, University of Reading: When I go to software engineering conferences, people worry that organizations are building unmaintainable applications because they haven't been built in a disciplined way. Couldn't Web Services make that situation much worse?
Bumpus: There are things going on in the DMTF in defining how you instrument applications and other services to be manageable. Hardware has done a pretty good job of enabling itself to be manageable. From the software point of view this is still a challenge.
Gannon: We need also to look at semantic level standards to achieve semantic
integration.
Reed: A lot of our larger user members are pursuing the concept of
off-the-shelf standards software to achieve a stable framework against standards
and interface specifications. Senior management needs to institute good
configuration management.
Soley: I believe that applications should be designed, and these designs
should be written down.
Question 3
Janet Daly, W3C: One of the challenges that each organization faces concerns member initiatives. I wonder if the panel can identify areas where members should come together to solve obstacles.
Brown: We have people from other organizations who come to us and confuse us
because they contradict each other.
Reed: From an experience base, we in the OGC deal at a very formal level with ISO/TC211 and the OMA. We have the same issue, where a single company's representatives to different consortia don't talk to each other.
Bratt: One thing we've done this year is to have joint meetings with other
consortia to discuss technical work and relationships. This raises
potential conflicts and overlaps in a very valuable way.
Gannon: We have to recognize that as in many other situations, different
consortia are able to meet different needs within a single organization.
Bumpus: We have a series of liaison relationships between several organizations.
I think today has done a lot to improve the dialogue.
Question 4
Ronald Fons: What role do emerging countries play in your organizations - countries such as India, China, and the former Soviet Union? How would you bring these countries out into the global community?
Bumpus: The DMTF has an academic alliance program, and nearly all are outside
the US. A number of them are in China and India. It makes phone
conferences more complicated but it's been a good boost.
Bratt: Within a couple of years Chinese will be the dominant language on the Web. One thing we've done is to open local offices in places like Hong Kong, Korea, and Morocco.
Gannon: One thing we have a chance to do is to keep our specifications from
becoming regionalized. This is something that is faced by Web Services
when working across country boundaries.
Reed: We are concerned about this from a geospatial point of view; we are
working with the UN to put together a plan for geospatial technology across the
different parts of their organization to the benefit of their member
countries. We are also interested in a State Department initiative to help
countries in Africa.
Soley: Nearly all our members are in the US, Western Europe and Japan.
There are increasing numbers from Australia and Korea. We have a
program of seminars and conferences in Eastern Europe, South America and in
China. We're not finding the response that we would like.
Question 5
Bob Blakely, IBM: I observe a high infant mortality rate for consortia products.
I wonder to what extent this is due to not listening to members, or not working
together?
Soley: This relates to the interesting question of why companies work in
consortia. In my experience, failed specifications arise because they are the
brainchild of a single company or even an individual.
Brown: In many cases this is because consumers won't buy something if it's
not available, and suppliers won't provide something if it's not in the market.
Reed: Interoperability challenges cause vendors to provide products.
The ideal is that everyone works together in developing the requirements, the standards,
and the products - involving the entire value chain.
Soley: In OMG and other organizations specifications can't be approved until
there are live implementations.
Allen introduced his talk by saying that he wanted to take the opportunity to
build on some of the themes of the day, and to reflect on the role of the Open
Group in building for the future.
He began by referring to the role of customers in steering the Open Group
towards its vision of Boundaryless Information Flow. The day had been a
remarkable expression of the meeting of many forums - customers, suppliers,
consortia, standards experts. Although speakers represented many different
interests there had been total agreement on the problem that needs to be
solved. Everyone needs to share information to do their jobs, and to
protect security and privacy at the same time. Everyone in the room has
the job of meeting customer needs and challenges, and The Open Group is no
exception. We all need to work together to ensure that customers can buy
with confidence, so that the market can grow.
Customer Problem Statement
Allen went on to review the goals and objectives that customer members of The
Open Group had revealed.
"I could run my business better if I could gain operational efficiencies
improving
the many different business processes of the enterprise
both internal, and
spanning the key interactions with vendors, customers, and partners using
integrated information, and integrated access to that information."
Our IT infrastructure needs to support us in this goal by enabling us to
access and integrate information. This need changes constantly as teams
form and disperse.
Who would have thought that applications written 30-40 years ago
would still be in use? If we had known then of the requirements for
integration that we have now, would we have done anything differently?
Allen described how technologies can of themselves create inflexible
boundaries. These may be:
- Infrastructural - organization of the interconnecting and underlying facilities
- Structural - system growth is limited by the “strength” or scalability of its structure
- Architectural - differently architected technologies often don’t “fit” with each other
- Semantic - different ways of representing the same thing
Within large organizations, departments not only create silos of information,
but develop cultures of their own which, whatever their benefits, can reinforce the
barriers that exist between them. Increasingly, organizations are
succeeding in getting teams from several departments working together, and
breaking down the cultural barriers. We can talk about application engineering
and business process reengineering, but this is about needs that we hadn't thought
of last week, or last year.
The goal has to be not about removing boundaries but about making them permeable. The term
'The Boundaryless Organization' was coined by Jack Welch, the Chairman of GE, which succeeded in
becoming one. Others have adopted the concept if not the name.
Boundaryless Information Flow
Allen progressed to consider the goal - the Ideal Boundaryless Organization.
In a perfect world the technology would support the business flow in a secure, reliable
and timely manner.
We have to recognize that we live in a global, interconnected world;
geographic and organization boundaries need to be overcome, and therefore The Open Group Vision
is of 'Boundaryless Information Flow
achieved through global interoperability
in a secure, reliable and timely manner'.
We are not going to remove all boundaries; what we are going to do is to have appropriate boundaries, with gatekeeper
functions where they are needed.
To remove the barriers there are many things that need to be done - understanding
business information needs and then working through all the processes involved
in liberating information. Fortunately, as we've heard today, a lot of standard technologies are
available, but we need to link together technologies and open standards with
best business practices.
The Role of Web Services
The Open Group has adopted the definition of Web Services currently used by W3C:
A software application identified by a URI, whose interfaces and binding are
capable of being defined, described and discovered by XML artefacts and supports
direct interactions with other software applications using XML based messages
via internet-based protocols.
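As a rough illustration of this definition - a service identified by a URI, invoked
with XML-based messages over an internet protocol - the sketch below posts a SOAP
request over HTTP. The endpoint URI, SOAPAction value, and message vocabulary are
hypothetical, chosen purely for illustration.

# Minimal sketch, not a definitive implementation: invoking a
# hypothetical Web Service by POSTing an XML (SOAP) message over HTTP.
import urllib.request

SERVICE_URI = "http://example.com/services/StockQuote"  # hypothetical

soap_request = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/stockquote">
      <Symbol>MHP</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    SERVICE_URI,
    data=soap_request.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/stockquote/GetQuote",
    },
)
# The response is itself an XML message from the service.
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))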
Web Services provide the potential for communicating and exchanging information between
applications, and for gaining access to information confined within applications. They help
to support cross-functional and cross-industry teams - but they have challenges of their own.
Allen asked how we can work together to grow the market for Web Services, and
proposed that a key approach is for the organizations represented in the room to
work together in a boundaryless way - to adopt, in meeting the market challenge,
the flexibility, openness, and collaboration that organizations in industry are adopting.
Our Role in the Process
Allen then moved on to suggest the contributions that each sector represented
in the meeting could make.
Consortia
- Position and align - he described recent cooperative activities between
The Open Group, OMG and DMTF.
- Advance your area of expertise - The Open Group brings skills in business
process scenarios, in integration, in certification and testing ... but
there are other areas in which other consortia have specialist skills.
- Collaborate whenever it adds value
- Take what others can offer and integrate it
Customers
- Collaborate and communicate requirements - individual organizations are
never big enough, but together we can have a real impact.
- Give feedback on new approaches
- Develop best practice statements and guides
- Demand certified products - otherwise they won't be delivered.
- Enable your organization to learn from the experiences of others and to make the right investment bets
Vendors
- Collaborate responsibly - work in the spirit in which consortia were first
created.
- Avoid further fragmentation of consortia - the proliferation is
fragmenting the market. Vendors would benefit from growing the market
rather than from scrapping over a share of a shrinking one.
- Lower unnecessary barriers
- Make products that interoperate
The Open Group is committed to its Mission: to drive the creation of Boundaryless
Information Flow achieved by:
- Working with customers to capture, understand and address current and
emerging requirements, establish policies and share best practices;
- Working with vendors, consortia and standards bodies to develop
consensus and facilitate interoperability, to evolve and integrate open
specifications and open source technologies;
- Offering a comprehensive set of services to enhance the operational
efficiency of consortia; and
- Developing and operating the industry’s premier certification service
and encouraging procurement of certified products.
Allen closed by describing his vision of consortia working together in a Boundaryless
Environment, with teams of people coming together to solve business needs in a
flexible way.
Jim King, Senior Vice President and CIO, Technology and New Media, Information &
Media Services, McGraw-Hill
Jim began his presentation by talking about information and its context. From
the point of view of McGraw-Hill, its digital assets include text, video, image,
audio, data, form, and application objects, and they aspire to integrate their
content assets into their customers' consumption methods.
He explained his belief that transparency and interoperability are largely
the same thing. The aim is that the flow of information, content delivered within a
domain context, and integration into an application services context, should be
transparent to the user. An end user should have easy access to the information
that they need within their context.
Strictly speaking, Boundaryless Information Flow is not a reasonable aim.
Security, privacy, procedural necessities, contractual bindings, and copyright
management are examples of boundaries which must be maintained, but:
- those boundaries should be transparent within the context of a user’s
need for information
- those boundaries should be managed to promote interoperability across
services
- brokers will be necessary to facilitate information flow
and where people are providing these services, they will expect to be paid.
Jim then moved on to consider the required standards efforts. He felt that
the media industry needs to see interaction in several areas including digital
rights (ownership of the content), business rights management (associated
contracts for the content), and Web Service transaction management.
The vision at McGraw-Hill is to build a federated service-based architecture.
He described use cases within Standard & Poor's and McGraw-Hill Construction.
He concluded by arguing that it is critical that organizations such as The
Open Group, W3C, OMG, and others - as well as the key vendors - work together to
enable this transparency.