Service Oriented Architecture


Service-oriented architecture From Wikipedia, the free encyclopedia


Service Oriented Architecture (SOA) is an evolution of distributed computing and modular programming. SOA provides a modularity of logic that can be presented as a service to a client (client as in client-server architecture) while at the same time acting as a client to other services. Relative to earlier attempts to promote software reuse through modular functions or classes, SOA's atomic-level objects are 100 to 1,000 times larger, and they are associated by an application designer or engineer using orchestration. In orchestration, relatively large chunks of software functionality (services) are associated in a non-hierarchical arrangement (in contrast to class hierarchies) by a software engineer or process engineer using a special software tool which contains an exhaustive list of all the services currently available and their characteristics.

Underlying and enabling all of this is metadata sufficient to describe not only the characteristics of these services but also the data that drives them. XML has been used extensively in SOA to wrap data in a nearly exhaustive description container. Analogously, the services themselves are typically described by WSDL, and the communications protocols by SOAP. Whether these description languages are the best possible for the job, and whether they will remain the favorites going forward, is at present an open question. What is certain is that SOA depends utterly on data and services described by metadata that meets two criteria: the metadata must be in a form which software systems can consume in order to configure themselves dynamically and maintain coherence and integrity, and in a form which system designers can understand and use to manage that metadata.

The goal of SOA is to allow fairly large chunks of functionality to be strung together to form ad-hoc applications built almost entirely from existing software services. The larger the chunks, the fewer the interface points required to implement any given set of functionality; however, very large chunks of functionality may not be granular enough to be easily reused. Since each interface brings with it some processing overhead, there is a performance consideration in choosing the granularity of services. The great promise of SOA is that, in this world, the marginal cost of creating the nth application approaches zero, because all of the software it requires already exists to satisfy the requirements of other applications; only orchestration is needed to produce a new application.

SOA services are loosely coupled, in contrast, for example, to the functions that a linker binds together to form an executable, a DLL, or an assembly. SOA services also run in "safe" wrappers (such as the .NET environment) which manage memory allocation and reclamation, allow ad-hoc and late binding, and permit some degree of indeterminate data typing. Increasingly, there are also third-party software companies offering software services for a fee, so many SOA systems may come to be composed of services only some of which were created in-house. This has the potential to spread costs over many customers and customer uses, and it promotes standardization both within and across industries. The travel industry, in particular, now has a well-defined and documented set of services and data, sufficient to allow any reasonably competent software engineer to create travel agency software using entirely off-the-shelf software services. Other industries, such as the finance industry, are also making significant progress in this direction.

There is no widely agreed-upon definition of SOA other than its literal translation: it is an architecture that relies on service-orientation as its fundamental design principle.[1] In an SOA environment, independent services can be accessed without knowledge of their underlying platform implementation.[2] These concepts can be applied to business, software, and other types of producer/consumer systems.
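As a minimal consumer-side sketch of this description-driven style, the following Java fragment uses the JAX-WS API (bundled with older JDKs, otherwise a separate dependency) to invoke an operation on a SOAP service without generated stubs. The namespace, endpoint URL, and operation name are hypothetical; a real client would normally be driven by the provider's WSDL.

```java
// Minimal sketch, assuming a hypothetical quote service; not a definitive implementation.
import java.io.StringReader;
import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;
import javax.xml.ws.soap.SOAPBinding;

public class QuoteClient {
    public static void main(String[] args) {
        QName serviceName = new QName("http://example.com/quotes", "QuoteService");
        QName portName = new QName("http://example.com/quotes", "QuotePort");

        // Describe where and how to reach the service; normally this comes from its WSDL.
        Service service = Service.create(serviceName);
        service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING,
                        "http://example.com/quotes/endpoint");

        // Dispatch works at the XML payload level, so the caller needs no generated classes.
        Dispatch<Source> dispatch =
                service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

        Source request = new StreamSource(new StringReader(
                "<getQuote xmlns='http://example.com/quotes'><symbol>ACME</symbol></getQuote>"));
        Source response = dispatch.invoke(request);   // SOAP plumbing handled by JAX-WS
        System.out.println("Received a response: " + (response != null));
    }
}
```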

Contents

1 Service-oriented architecture
2 Requirements for a SOA
3 Web services approach to a service-oriented architecture
4 Other SOA Concepts
5 SOA definitions
6 Why SOA?
7 SOA principles
8 Service-oriented design and development
  8.1 Service contract
9 SOA and web service protocols
10 SOA, Web 2.0, and mashups
11 SOA 2.0 or Advanced SOA
12 What are the challenges faced in SOA adoption?
13 Criticisms of SOA
14 SOA and Business Architecture
15 SOA and network management architecture
16 Jargon
17 Literature
  17.1 Books, non-technical
  17.2 Books, technical
  17.3 Articles/Papers, non-technical
  17.4 Articles/Papers, technical
  17.5 Standards
18 References
19 See also
20 External links



Service-oriented architecture

A service-oriented architecture (SOA) is a collection of services that communicate with each other, for example by passing data from one service to another or by coordinating an activity between one or more services. Companies have long sought to integrate existing systems in order to implement information technology (IT) support for business processes that cover the entire business value chain. A variety of designs are used, ranging from rigid point-to-point electronic data interchange (EDI) interactions to Web auctions. By using the Internet, companies can make their IT systems available to internal departments or external customers, but the interactions are often inflexible and lack a standardized architecture.

Because of this increasing demand for technologies that support connecting and sharing resources and data, a flexible, standardized architecture is needed. SOA is such an architecture: it unifies business processes by structuring large applications as building blocks, that is, small modular functional units or services, to be used by different groups of people inside and outside the company. Each building block can play one of three roles: service provider, service broker, or service requestor. See "Web services approach to a service-oriented architecture" below to learn more about these roles.

Requirements for a SOA

In order to use a SOA efficiently, the following requirements must be met:

• Interoperability between different systems and programming languages. The most important basis for simple integration between applications on different platforms is a communication protocol that is available for most systems and programming languages.
• Clear and unambiguous description language. To use a service offered by a provider, it is not enough to be able to reach the provider's system; the syntax of the service interface must also be clearly defined in a platform-independent fashion.
• Retrieval of the service. To allow convenient integration at design time or even at system run time, a search mechanism is required to retrieve suitable services. Services should be classified in computer-accessible hierarchies or taxonomies, based on what the services in each category do and how they can be invoked.

Web services approach to a service-oriented architecture

Web services can implement a service-oriented architecture. A major focus of Web services is to make functional building blocks accessible over standard Internet protocols, independent of platforms and programming languages. These services can be new applications or simply wrappers around existing legacy systems that make them network-enabled. A service can rely on another service to achieve its goals. Each SOA building block can play one or more of three roles:

• Service provider. The service provider creates a Web service and possibly publishes its interface and access information to the service registry. Each provider must decide which services to expose, how to make trade-offs between security and easy availability, how to price the services or, if they are free, how to exploit them for other value. The provider also has to decide in which category the service should be listed for a given broker service, and what sort of trading partner agreements are required to use the service.
• Service broker. The service broker, also known as the service registry, is responsible for making the Web service interface and implementation access information available to any potential service requestor. The implementer of the broker decides its scope: public brokers are available through the Internet, while private brokers are only accessible to a limited audience, for example users of a company intranet. The amount of offered information must also be decided. Some brokers specialize in breadth of listings; others offer high levels of trust in the listed services. Some cover a broad landscape of services while others focus on an industry, and some brokers catalog other brokers. Depending on the business model, brokers can attempt to maximize look-up requests, number of listings, or accuracy of the listings. The Universal Description Discovery and Integration (UDDI) specification defines a way to publish and discover information about Web services.
• Service requestor. The service requestor, or Web service client, locates entries in the broker registry using various find operations and then binds to the service provider in order to invoke one of its Web services.
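To make the provider role concrete, here is a small, hedged sketch using the standard Java JAX-WS API. The class, operation, and URL are invented for illustration; a broker or registry entry (for example a UDDI listing) would point at the WSDL this publishes.

```java
// Minimal sketch of the service provider role; names and endpoint are hypothetical.
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class FareQuoteService {
    @WebMethod
    public double quoteFare(String origin, String destination) {
        // Real logic would price the itinerary; a constant keeps the sketch self-contained.
        return 199.0;
    }

    public static void main(String[] args) {
        // Publishing exposes the service over HTTP and makes its WSDL available at ?wsdl,
        // which a registry listing could then reference.
        Endpoint.publish("http://localhost:8080/fares", new FareQuoteService());
        System.out.println("Service published; WSDL at http://localhost:8080/fares?wsdl");
    }
}
```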

Other SOA Concepts

SOA is not tied to a specific technology. It may be implemented using a wide range of technologies, including SOAP, RPC, DCOM, CORBA, Web Services or WCF. SOA can be implemented using one or more of these protocols and might, for example, use a file-system mechanism to communicate data conforming to a defined interface specification between processes conforming to the SOA concept. The key is independent services with defined interfaces that can be called to perform their tasks in a standard way, without the service having foreknowledge of the calling application and without the application having, or needing, knowledge of how the service actually performs its tasks.
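A plain-Java sketch of that key idea follows. The interface and class names are illustrative only; the same calling code would work whether the implementation is local, a remote SOAP or REST endpoint, or a wrapped legacy system.

```java
// Illustrative sketch: the caller depends only on a declared interface, not on how or
// where the work is done. None of these names come from the article.
public class InterfaceOnlyCaller {

    // The "defined interface" the calling application programs against.
    interface CreditCheckService {
        boolean isCreditworthy(String customerId);
    }

    // One possible implementation; a remote service behind the same interface would do as well.
    static class InMemoryCreditCheck implements CreditCheckService {
        public boolean isCreditworthy(String customerId) {
            return !customerId.isEmpty();   // stand-in for real business rules
        }
    }

    static void processOrder(CreditCheckService creditCheck, String customerId) {
        // The application knows nothing about the implementation behind the interface.
        if (creditCheck.isCreditworthy(customerId)) {
            System.out.println("Order accepted for " + customerId);
        } else {
            System.out.println("Order rejected for " + customerId);
        }
    }

    public static void main(String[] args) {
        processOrder(new InMemoryCreditCheck(), "C-1001");
    }
}
```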

[Figure: Elements of SOA. From Dirk Krafzig, Karl Banke, and Dirk Slama, Enterprise SOA, Prentice Hall, 2005.]

[Figure: SOA Meta Model. The Linthicum Group, 2007.]

SOA can also be regarded as a style of information systems architecture that enables the creation of applications built by combining loosely coupled and interoperable services[citation needed]. These services inter-operate based on a formal definition (or contract, e.g., WSDL) that is independent of the underlying platform and programming language. The interface definition hides the implementation of the language-specific service. SOA-based systems can therefore be independent of development technologies and platforms (such as Java, .NET, etc.). Services written in C# running on .NET platforms and services written in Java running on Java EE platforms, for example, can both be consumed by a common composite application (or client). Applications running on either platform can also consume services running on the other as Web services, which facilitates reuse. Many COBOL legacy systems can also be wrapped by a managed environment and presented as a software service, which has allowed the useful life of many core legacy systems to be extended indefinitely, no matter what language they were originally written in. SOA can support integration and consolidation activities within complex enterprise systems, but SOA does not specify or provide a methodology or framework for documenting capabilities or services.

High-level languages such as BPEL, and specifications such as WS-CDL and WS-Coordination, extend the service concept by providing a method of defining and supporting the orchestration of fine-grained services into more coarse-grained business services, which in turn can be incorporated into workflows and business processes implemented in composite applications or portals[citation needed]. The use of Service Component Architecture (SCA) to implement SOA is a current area of research.
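BPEL processes are themselves written in XML, but the composition idea can be sketched in plain Java: a coarse-grained business operation assembled from two fine-grained services. Every name below is hypothetical.

```java
// Conceptual sketch only: a coarse-grained "place order" operation composed from two
// fine-grained services, mirroring what a BPEL sequence of two invoke activities expresses.
public class OrderOrchestration {

    interface InventoryService { boolean reserve(String sku, int quantity); }
    interface PaymentService   { boolean charge(String customerId, double amount); }

    static class PlaceOrderService {
        private final InventoryService inventory;
        private final PaymentService payments;

        PlaceOrderService(InventoryService inventory, PaymentService payments) {
            this.inventory = inventory;
            this.payments = payments;
        }

        // The coarse-grained business operation sequences the fine-grained calls.
        boolean placeOrder(String customerId, String sku, int quantity, double amount) {
            if (!inventory.reserve(sku, quantity)) {
                return false;
            }
            return payments.charge(customerId, amount);
        }
    }

    public static void main(String[] args) {
        PlaceOrderService orders = new PlaceOrderService(
                (sku, qty) -> true,            // stub inventory service
                (customer, amount) -> true);   // stub payment service
        System.out.println("Order placed: " + orders.placeOrder("C-1", "SKU-9", 2, 49.95));
    }
}
```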

SOA definitions

SOA is a design for linking business and computational resources (principally organizations, applications and data) on demand to achieve the desired results for service consumers, which can be end users or other services. OASIS (the Organization for the Advancement of Structured Information Standards) defines SOA as follows:

A paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce desired effects consistent with measurable preconditions and expectations.

There are multiple definitions of SOA; the OASIS group and the Open Group have each created formal definitions with depth, which can be applied to both the technology and business domains.

• Open Group SOA Definition (SOA-Definition)[3]
• OASIS SOA Reference Model (SOA-RM)[4]
• What Is Service-Oriented Architecture? (XML.com)
• What is Service-Oriented Architecture? (Javaworld.com)
• Webopedia definition
• TechEncyclopedia definition
• Object Management Group (OMG) SOA Special Interest Group definition
• WhatIs.com definition
• SearchWebServices.com: numerous SOA definitions by industry experts

Though many definitions of SOA limit themselves to technology or just to web services, this framing is predominantly pushed by technology vendors; in 2003 they talked only of web services, while by 2006 the talk was of events and process engines.[citation needed]

Why SOA?

The main drivers for SOA adoption are that it links computational resources and promotes their reuse. Enterprise architects believe that SOA can help businesses respond more quickly and cost-effectively to changing market conditions[5]. This style of architecture promotes reuse at the macro (service) level rather than the micro (object) level. It can also simplify interconnection to, and usage of, existing IT (legacy) assets. The SOA Practitioners Guide: Why Services-Oriented Architecture? provides a high-level summary of SOA.

In some respects, SOA can be considered an architectural evolution rather than a revolution, capturing many of the best practices of previous software architectures. In communications systems, for example, there has been little development of solutions that use truly static bindings to talk to other equipment in the network. By formally embracing a SOA approach, such systems are better positioned to stress the importance of well-defined, highly interoperable interfaces.[citation needed] Some have questioned whether SOA is just a revival of modular programming (1970s), event-oriented design (1980s), or interface/component-based design (1990s)[citation needed].

SOA promotes the goal of separating users (consumers) from the service implementations. Services can therefore be run on various distributed platforms and be accessed across networks. This can also maximize reuse of services[citation needed].

SOA principles

The following guiding principles define the ground rules for development, maintenance, and usage of the SOA:[6]

• Reuse, granularity, modularity, composability, componentization, and interoperability
• Compliance to standards (both common and industry-specific)
• Services identification and categorization, provisioning and delivery, and monitoring and tracking

The following specific architectural principles for design and service definition focus on themes that influence the intrinsic behaviour of a system and the style of its design:

• Service encapsulation - Many existing web services are consolidated for use under the SOA architecture; in many cases such services were not originally planned to be part of a SOA.
• Service loose coupling - Services maintain a relationship that minimizes dependencies and only requires that they maintain an awareness of each other.
• Service contract - Services adhere to a communications agreement, as defined collectively by one or more service description documents.
• Service abstraction - Beyond what is described in the service contract, services hide their logic from the outside world.
• Service reusability - Logic is divided into services with the intention of promoting reuse.
• Service composability - Collections of services can be coordinated and assembled to form composite services.
• Service autonomy - Services have control over the logic they encapsulate.
• Service optimization - All else being equal, high-quality services are generally considered preferable to low-quality ones.
• Service discoverability - Services are designed to be outwardly descriptive so that they can be found and assessed via available discovery mechanisms.[7]

In addition, the following factors should be taken into account when defining a SOA implementation:

• SOA Reference Architecture - SOA Practitioners Guide Part 2: SOA Reference Architecture covers the SOA Reference Architecture, which provides a worked design of an enterprise-wide SOA implementation with detailed architecture diagrams, component descriptions, detailed requirements, design patterns, opinions about standards, patterns on regulation compliance, standards templates, etc.
• Life cycle management - SOA Practitioners Guide Part 3: Introduction to Services Lifecycle introduces the services lifecycle and provides a detailed process for services management through the service lifecycle, from inception through to retirement or repurposing of the services. It also contains an appendix that includes organization and governance best practices, templates, comments on key SOA standards, and recommended links for more information.
• Efficient use of system resources
• Service maturity and performance
• EAI (Enterprise Application Integration)

Service-oriented design and development

The modelling and design methodology for SOA applications has become known as service-oriented analysis and design (SOAD)[8]. SOAD is a design methodology for developing highly agile systems in a consumer/producer model that abstracts implementation from process, such that a service provider can be modified or changed without affecting the consumer.

Service contract

A service contract needs to have the following components:

• Header
  o Name - Name of the service. Should indicate in general terms what it does, but not be the only definition.
  o Version - The version of this service contract.
  o Owner - The person/team in charge of the service.
  o RACI
    - Responsible - The role, person or team responsible for the deliverables of this contract/service, across all versions of the contract.
    - Accountable - The ultimate decision maker in terms of this contract/service.
    - Consulted - Who must be consulted before action is taken on this contract/service. This is two-way communication; these people have an impact on the decision and/or its execution.
    - Informed - Who must be informed that a decision or action is being taken. This is one-way communication; these people are impacted by the decision or its execution but have no control over the action.
  o Type - The type of the service, used to distinguish the layer in which it resides. Different implementations will have different service types. Examples include: Presentation, Process, Business, Data, Integration.
• Functional
  o Functional Requirement (from the requirements document) - Indicates, in specific bulleted items, exactly what functionality this service accomplishes. The language should allow test cases to prove that the functionality is accomplished.
  o Service Operations - Methods, actions, etc. Must be defined in terms of what part of the functionality they provide.
  o Invocation - The invocation means of the service, including the URL, interface, etc. There may be multiple invocation paths for the same service; the same functionality might be offered to internal and external clients, each with a different invocation means and interface. Examples: SOAP, REST, events, triggers.
• Non-Functional
  o Security Constraints - Defines who can execute this service, in terms of roles or individual partners, etc., and which invocation mechanism they can invoke.
  o Quality of Service - Determines the allowable failure rate.
  o Transactional - Whether the service is capable of acting as part of a larger transaction and, if so, how that is controlled.
  o Service Level Agreement - Determines the amount of latency the service is allowed in performing its actions.
  o Semantics - Dictates or defines the meaning of terms used in the description and interfaces of the service.
  o Process - Describes the process, if any, of the contracted service.
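Purely as an illustration, the contract fields above could be captured as structured metadata that tooling validates or publishes. None of the types in this sketch come from a standard API; they simply mirror the list.

```java
// Hypothetical data structure mirroring the contract components above; not a standard API.
import java.util.List;

public class ServiceContract {
    enum ServiceType { PRESENTATION, PROCESS, BUSINESS, DATA, INTEGRATION }

    // Header
    String name;
    String version;
    String owner;
    String responsible, accountable, consulted, informed;   // RACI
    ServiceType type;

    // Functional
    List<String> functionalRequirements;   // bulleted, testable statements
    List<String> operations;               // methods/actions offered
    List<String> invocationPaths;          // e.g. SOAP endpoint URL, REST URI, event topic

    // Non-functional
    List<String> securityRoles;            // who may invoke, and via which mechanism
    double allowableFailureRate;           // quality of service
    boolean transactional;
    long maxLatencyMillis;                 // service level agreement
    String semantics;
    String processDescription;
}
```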

SOA and web service protocols

SOA may be built on Web services standards (e.g., using SOAP) that have gained broad industry acceptance. These standards (also referred to as web service specifications) also provide greater interoperability and some protection from lock-in to proprietary vendor software. One can, however, implement SOA using any service-based technology, such as Jini. Service-oriented architecture is often defined as services exposed using the Web Services Protocol Stack[citation needed]. The base level of web services standards relevant to SOA includes the following:

• XML - a markup language for describing data in message payloads in a document format
• HTTP (or HTTPS) - a request/response protocol between clients and servers used to transfer or convey information
• SOAP - a protocol for exchanging XML-based messages over a computer network, normally using HTTP
• XACML - a markup language for expressing access control rules and policies
• Web Services Description Language (WSDL) - an XML-based service description that describes the public interface, protocol bindings and message formats required to interact with a web service
• Universal Description, Discovery, and Integration (UDDI) - an XML-based registry to publish service descriptions (WSDL) and allow their discovery
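As a small, hedged illustration of the XML and SOAP layers in that stack, the Java sketch below uses the standard SAAJ API to build a SOAP 1.1 envelope and print it. The namespace and operation are made up, and sending the message (for example with a SOAPConnection over HTTP) is omitted.

```java
// Builds and prints a SOAP envelope; the payload names are hypothetical.
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBodyElement;
import javax.xml.soap.SOAPMessage;

public class SoapEnvelopeSketch {
    public static void main(String[] args) throws Exception {
        SOAPMessage message = MessageFactory.newInstance().createMessage();

        // The body carries the XML payload an operation such as "getQuote" would consume.
        SOAPBodyElement operation = message.getSOAPBody()
                .addBodyElement(new QName("http://example.com/quotes", "getQuote", "q"));
        operation.addChildElement("symbol").addTextNode("ACME");

        message.saveChanges();
        message.writeTo(System.out);   // the envelope that would travel over HTTP via SOAP
    }
}
```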

Note, however, that a system does not necessarily need to use any or all of these standards to be "service-oriented." For example, some service-oriented systems have been implemented using CORBA, Jini and REST.

SOA, Web 2.0, and mashups

Web 2.0 refers to a "second generation" of web sites, primarily distinguished by the ability of visitors to contribute information for collaboration and sharing. Web 2.0 applications use Web services and may include Ajax, Flash, Silverlight or JavaFX user interfaces, Web syndication, blogs, and wikis. While there are no set standards for Web 2.0, it is characterised by building on the existing web server architecture and using services. Web 2.0 can therefore be regarded as displaying some SOA characteristics[9].

Mashups are also regarded by some as Web 2.0 applications. The term "enterprise mashup" has been coined to describe Web applications that combine content from more than one source into an integrated experience. These share many of the characteristics of service-oriented business applications (SOBAs), which are applications composed of services in a declarative manner. There is ongoing debate about "the collision of Web 2.0, mashups, and SOA", with some stating that Web 2.0 applications are a realisation of SOA composite and business applications.[10]

SOA 2.0 or Advanced SOA

Amid much negative reaction, Oracle is taking up SOA 2.0 as "the next-generation version of SOA", combining service-oriented architecture and event-driven architecture and categorizing the first iteration of SOA as client-server driven[11]. Even though Oracle indicates that Gartner is coining a new term, Gartner analysts state that they call this advanced SOA and that it is 'whimsically' referred to as SOA 2.0.[12] Most of the pure-play middleware vendors (e.g., webMethods and TIBCO Software) have had SOA 2.0 attributes for years. SOA 2.0 can therefore be regarded as "more marketing noise than anything else"[13] and as product evangelism rather than a new "way of doing things". Some industry commentators have criticized attaching a version number to an application architecture design approach, while others have stated that the "next generation" label should apply to the evolution of SOA techniques from IT optimization to business development[14].

What are the challenges faced in SOA adoption?

One obvious and common challenge is managing services metadata[citation needed]. SOA-based environments can include many services which exchange messages to perform tasks; depending on the design, a single application may generate millions of messages. Managing and providing information on how services interact is a complicated task.

Another challenge is providing appropriate levels of security. A security model built into an application may no longer be appropriate when the capabilities of the application are exposed as services that can be used by other applications; that is, application-managed security is not the right model for securing services. A number of new technologies and standards are emerging to provide more appropriate models for security in SOA. See the SOA Security entry for more information.

As SOA and the WS-* specifications are constantly being expanded, updated and refined, there is a shortage of skilled people to work on SOA-based systems, including the integration of services and the construction of services infrastructure.

Interoperability is another important aspect of SOA implementations. The WS-I organization has developed the Basic Profile (BP) and Basic Security Profile (BSP) to enforce compatibility, and has designed testing tools to help assess whether web services conform to WS-I profile guidelines. Additionally, another charter has been established to work on the Reliable Secure Profile.

There is significant vendor hype concerning SOA that can create expectations that may not be fulfilled. Product stacks are still evolving as early adopters test the development and runtime products with real-world problems. SOA does not guarantee reduced IT costs, improved systems agility or faster time to market; successful SOA implementations may realise some or all of these benefits depending on the quality and relevance of the system architecture and design[15].

See also: WS-MetadataExchange, OWL-S

Criticisms of SOA

Some criticisms of SOA are based on the assumption that SOA is just another term for Web services. For example, some critics claim that SOA adds XML layers, introducing XML parsing and composition; in the absence of native or binary forms of remote procedure call (RPC), applications could run slower and require more processing power, increasing costs. Most implementations do incur these overheads, but SOA can be implemented using technologies (for example, Java Business Integration (JBI)) which do not depend on remote procedure calls or translation through XML. At the same time, emerging open-source XML parsing technologies, such as VTD-XML, and various XML-compatible binary formats (http://vtdxml.sf.net/persistence.html) promise to significantly improve SOA performance.

Stateful services require both the consumer and the provider to share the same consumer-specific context, which is either included in or referenced by the messages exchanged between the provider and the consumer. The drawback of this constraint is that it could reduce the overall scalability of the service provider, because the provider might need to remember the shared context for each consumer. It also increases the coupling between a service provider and a consumer and makes switching service providers more difficult.

Another concern is that WS-* standards and products are still evolving (e.g., transaction, security), so SOA can introduce new risks unless properly managed and estimated with additional budget and contingency for proof-of-concept work.

An informal survey by Network Computing (November 2006) placed SOA as the most despised buzzword. Some critics feel SOA is merely an obvious evolution of currently well-deployed architectures (open interfaces, etc.). A SOA, being an architecture, is only the first stage of representing the system components that interconnect for the benefit of the business; at this level a SOA is just an evolution of an existing architecture and business functions. SOAs are normally associated with interconnecting back-end transactional systems that are accessed via web services.

The real issue with any IT "architecture" is how one defines the information management model and the operations around it that deal with information privacy, reflect the business's products and services, enable services to be delivered to customers, allow for self care, preferences and entitlements, and at the same time embrace identity management and agility. On this last point, system modification (agility) is a critical issue which is normally omitted from IT system design. Many systems, including SOAs, hard-code the operations, goods and services of the organisation, thus restricting their online service and business agility in the global marketplace.

Adopting SOAs is therefore just the first (diagrammatic) step in defining a real business system. The next step in the design process is the definition of a Service Delivery Platform (SDP) and its implementation. It is in the SDP design phase that one defines the business information models, identity management, products, content, devices, and the end-user service characteristics, as well as how agile the system is, so that it can deal with the evolution of the business and its customers.

SOA and Business Architecture

One area where SOA has been gaining ground is in its power as a mechanism for defining business services and operating models, thus providing a structure for IT to deliver against actual business requirements and adapt in a similar way to the business. The purpose of using SOA as a business mapping tool is to ensure that the services created properly represent the business view and are not just what technologists think the business services should be.

At the heart of SOA planning is the process of defining architectures for the use of information in support of the business, and the plan for implementing those architectures (Enterprise Architecture Planning, by Steven Spewak and Steven Hill). Enterprise business architecture should always represent the highest and most dominant architecture. Every service should be created with the intent of bringing value to the business in some way, and each must be traceable back to the business architecture.

Within this area, SOMA (Service-Oriented Modelling and Architecture) was announced by IBM as the first SOA-related methodology in 2004. Since then, efforts have been made to move towards greater standardization and the involvement of business objectives, particularly within the OASIS standards group and specifically the SOA Adoption Blueprints group. All of these approaches take a fundamentally structured approach to SOA, focusing on the Services and Architecture elements and leaving implementation to the more technically focused standards.

SOA and network management architecture

The principles of SOA are currently being applied to the field of network management. Examples of service-oriented network management architectures are the TS 188 001 NGN Management OSS Architecture from ETSI, and the recently published M.3060 Principles for the Management of Next Generation Networks recommendation from the ITU-T. Tools for managing SOA infrastructure include:

• Symantec APM
• HyPerformix IPS Performance Optimizer
• HP Management Software / Mercury SOA Manager
• IBM Tivoli Framework
• Tidal Software Intersperse

Jargon

SOA is an architectural style rather than a product. Several vendors offer products which can form the basis of, or enable, SOA, particularly Enterprise Service Bus (ESB) products. ESBs provide infrastructure that can be purchased, implemented and leveraged for SOA-based systems[citation needed]. SOA also relies heavily on metadata design and management, so metadata design and management products are likewise critical to implementing SOA. See the list of SOA-related products for an overview and ideas.

Distributed computing From Wikipedia, the free encyclopedia


Distributed computing is a method of computer processing in which different parts of a program run simultaneously on two or more computers that communicate with each other over a network. Distributed computing is a type of segmented or parallel computing, but the latter term is most commonly used to refer to processing in which different parts of a program run simultaneously on two or more processors that are part of the same computer. While both types of processing require that a program be segmented (divided into sections that can run simultaneously), distributed computing also requires that the division of the program take into account the different environments on which the different sections will be running; for example, two computers are likely to have different file systems and different hardware components.

An example of distributed computing is BOINC, a framework in which large problems can be divided into many small problems which are distributed to many computers; later, the small results are reassembled into a larger solution.

Distributed computing is a natural result of the use of networks to allow computers to communicate efficiently. But distributed computing is distinct from computer networking or fragmented computing; the latter refers to two or more computers interacting with each other, but not, typically, sharing the processing of a single program. The World Wide Web is an example of a network, but not an example of distributed computing.

There are numerous technologies and standards used to construct distributed computations, including some which are specially designed and optimized for that purpose, such as Remote Procedure Calls (RPC), Remote Method Invocation (RMI) and .NET Remoting.

Contents

1 Organization
2 Goals and advantages
  2.1 Openness
3 Drawbacks and disadvantages
4 Architecture
5 Concurrency
  5.1 Multiprocessor systems
  5.2 Multicore systems
  5.3 Multicomputer systems
  5.4 Computing taxonomies
  5.5 Computer clusters
  5.6 Grid computing
6 Languages
7 Examples
  7.1 Projects
8 See also
9 References
10 Further reading
11 External links

Organization

Organizing the interaction between the computers is of prime importance. In order to use the widest possible range and types of computers, the protocol or communication channel should not contain or require any information that may not be understood by certain machines. Special care must also be taken that messages are indeed delivered correctly and that invalid messages, which could otherwise bring down the system and perhaps the rest of the network, are rejected.

Another important factor is the ability to send software to another computer in a portable way, so that it may execute and interact with the existing network. This may not always be possible or practical when using differing hardware and resources, in which case other methods must be used, such as cross-compiling or manually porting the software.

Goals and advantages

There are many different types of distributed computing systems and many challenges to overcome in successfully designing one. The main goal of a distributed computing system is to connect users and resources in a transparent, open, and scalable way. Ideally this arrangement is drastically more fault-tolerant and more powerful than many combinations of stand-alone computer systems.

Openness

Openness is the property of distributed systems such that each subsystem is continually open to interaction with other systems (see references). Web services protocols are standards which enable distributed systems to be extended and scaled. In general, an open system that scales has an advantage over a perfectly closed and self-contained system. Consequently, open distributed systems are required to meet the following challenges:

• Monotonicity - Once something is published in an open system, it cannot be taken back.
• Pluralism - Different subsystems of an open distributed system include heterogeneous, overlapping and possibly conflicting information. There is no central arbiter of truth in open distributed systems.
• Unbounded nondeterminism - Asynchronously, different subsystems can come up and go down, and communication links can come in and go out between subsystems of an open distributed system. Therefore, the time that it will take to complete an operation cannot be bounded in advance (see unbounded nondeterminism).

Drawbacks and disadvantages

See also: Fallacies of Distributed Computing

If not planned properly, a distributed system can decrease the overall reliability of computations if the unavailability of a node can cause disruption of the other nodes. Leslie Lamport famously quipped: "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable."[1] Troubleshooting and diagnosing problems in a distributed system can also become more difficult, because the analysis may require connecting to remote nodes or inspecting communication between nodes.

Many types of computation are not well suited to distributed environments, typically owing to the amount of network communication or synchronization that would be required between nodes. If bandwidth, latency, or communication requirements are too significant, the benefits of distributed computing may be negated and the performance may be worse than in a non-distributed environment.

Distributed computing projects may generate data that is proprietary to private industry, even though the process of generating that data involves the resources of volunteers. This may result in controversy as private industry profits from data generated with the aid of volunteers. In addition, some distributed computing projects, such as biology projects that aim to develop thousands or millions of "candidate molecules" for solving various medical problems, may create vast amounts of raw data. This raw data may be useless by itself without refinement or without testing of candidate results in real-world experiments. Such refinement and experimentation may be so expensive and time-consuming that it may literally take decades to sift through the data; until the data is refined, no benefits can be acquired from the computing work.

Other projects suffer from lack of planning on behalf of their well-meaning originators. These poorly planned projects may not generate palpable results, or may not generate data that ultimately results in finished, innovative scientific papers. Sensing that a project may not be generating useful data, the project managers may decide to terminate it abruptly without definitive results, wasting the electricity and computing resources used in the project. Volunteers may feel disappointed and abused by such outcomes. There is an obvious opportunity cost in devoting time and energy to a project that ultimately is useless, when that computing power could have been devoted to a better-planned distributed computing project generating useful, concrete results.

Another problem with distributed computing projects is that they may devote resources to problems that may not ultimately be soluble, or to problems that are best pursued later, when desktop computing power becomes fast enough to make pursuit of such solutions practical. Some distributed computing projects may also attempt to use computers to find solutions by number-crunching mathematical or physical models. With such projects there is the risk that the model may not be designed well enough to efficiently generate concrete solutions. The effectiveness of a distributed computing project is therefore determined largely by the sophistication of the project's creators.

Architecture

Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system. Distributed programming typically falls into one of several basic architectures or categories: client-server, 3-tier architecture, N-tier architecture, distributed objects, loose coupling, or tight coupling.

• Client-server - Smart client code contacts the server for data, then formats and displays it to the user. Input at the client is committed back to the server when it represents a permanent change.
• 3-tier architecture - Three-tier systems move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are 3-tier.
• N-tier architecture - N-tier refers typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.
• Tightly coupled (clustered) - Refers typically to a set of highly integrated machines that run the same process in parallel, subdividing the task into parts that are handled individually by each machine and then put back together to form the final result.
• Peer-to-peer - An architecture with no special machine or machines that provide a service or manage the network resources. Instead, all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and as servers.
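The subdivide-and-recombine pattern described for the tightly coupled (clustered) category can be sketched in Java; here local threads stand in for the machines of a cluster, and the work unit is a trivial sum, so this is a simplified local analogue rather than a distributed implementation.

```java
// Conceptual sketch: split one job into independent work units and merge the partial results.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DivideAndReassemble {
    public static void main(String[] args) throws Exception {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int workers = 4;
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Long>> partials = new ArrayList<>();

        // Divide: each work unit covers a disjoint slice of the input.
        int chunk = data.length / workers;
        for (int w = 0; w < workers; w++) {
            final int from = w * chunk;
            final int to = (w == workers - 1) ? data.length : from + chunk;
            partials.add(pool.submit((Callable<Long>) () -> {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            }));
        }

        // Reassemble: combine the partial results into the overall answer.
        long total = 0;
        for (Future<Long> partial : partials) total += partial.get();
        pool.shutdown();
        System.out.println("Total = " + total);
    }
}
```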

Concurrency

Distributed computing implements a kind of concurrency. It is so tightly interrelated with concurrent programming that the two are sometimes not taught as distinct subjects[2].

Multiprocessor systems

A multiprocessor system is simply a computer that has more than one CPU on its motherboard. If the operating system is built to take advantage of this, it can run different processes (or different threads belonging to the same process) on different CPUs.

Multicore systems

Intel CPUs from the late Pentium 4 era (Northwood and Prescott cores) employed a technology called Hyper-Threading that allowed more than one thread (usually two) to run on the same CPU. The more recent Sun UltraSPARC T1, AMD Athlon 64 X2, AMD Athlon FX, AMD Opteron, Intel Pentium D, Intel Core, Intel Core 2 and Intel Xeon processors feature multiple processor cores to further increase the number of concurrent threads they can run.

Multicomputer systems

A multicomputer may be considered either a loosely coupled NUMA computer or a tightly coupled cluster. Multicomputers are commonly used when strong compute power is required in an environment with restricted physical space or electrical power. Common suppliers include Mercury Computer Systems, CSPI, and SKY Computers. Common uses include 3D medical imaging devices and mobile radar.

Computing taxonomies

The types of distributed systems are based on Flynn's taxonomy of systems: single instruction, single data (SISD); single instruction, multiple data (SIMD); multiple instruction, single data (MISD); and multiple instruction, multiple data (MIMD). Other taxonomies and architectures are described at Computer architecture and in Category:Computer architecture.

Computer clusters

Main article: Cluster computing

A cluster consists of multiple stand-alone machines acting in parallel across a local high-speed network. Distributed computing differs from cluster computing in that computers in a distributed computing environment are typically not exclusively running "group" tasks, whereas clustered computers are usually much more tightly coupled. Distributed computing also often consists of machines which are widely separated geographically.

Grid computing

Main article: Grid computing

A grid uses the resources of many separate computers connected by a network (usually the Internet) to solve large-scale computation problems. Most use idle time on many thousands of computers throughout the world. Such arrangements permit handling of data that would otherwise require the power of expensive supercomputers or would have been impossible to analyze.

Languages

Nearly any programming language that has access to the full hardware of the system could handle distributed programming given enough time and code. Remote procedure calls distribute operating system commands over a network connection. Systems like CORBA, Microsoft DCOM, Java RMI and others try to map object-oriented design to the network (illustrated in the sketch after the list below). Loosely coupled systems communicate through intermediate documents that are typically human readable (e.g. XML, HTML, SGML, X.500, and EDI).

Languages specifically tailored for distributed programming are:

• Ada programming language [3]
• Alef programming language
• E programming language
• Erlang programming language
• Limbo programming language
• Oz programming language
• ZPL (programming language)
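The following hedged Java RMI sketch shows that object-to-network mapping: an ordinary-looking interface whose method calls actually travel over the network. The service name and interface are invented, and server and client are collapsed into one process for brevity.

```java
// RMI sketch; in practice the server and client would run in separate JVMs.
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiSketch {

    public interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    public static class GreeterImpl implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    // Server side: export the object and register it under a well-known name.
    static void startServer() throws Exception {
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new GreeterImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("greeter", stub);
    }

    // Client side: look the object up and call it as if it were local.
    static void callService() throws Exception {
        Registry registry = LocateRegistry.getRegistry("localhost", 1099);
        Greeter greeter = (Greeter) registry.lookup("greeter");
        System.out.println(greeter.greet("distributed world"));
    }

    public static void main(String[] args) throws Exception {
        startServer();
        callService();
    }
}
```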

Examples

Projects

Main article: List of distributed computing projects

A variety of distributed computing projects have grown up in recent years. Many are run on a volunteer basis and involve users donating their unused computational power to work on interesting computational problems. Examples of such projects include the Stanford University Chemistry Department Folding@home project, which is focused on simulations of protein folding to find disease cures; World Community Grid, an effort to create the world's largest public computing grid to tackle scientific research projects that benefit humanity, run and funded by IBM; SETI@home, which is focused on analyzing radio-telescope data to find evidence of intelligent signals from space, hosted by the Space Sciences Laboratory at the University of California, Berkeley; and distributed.net, which is focused on breaking various cryptographic ciphers.[4]

Distributed computing projects also often involve competition with other distributed systems. This competition may be for prestige, or it may be a matter of enticing users to donate processing power to a specific project. For example, stat races are a measure of the work a distributed computing project has been able to compute over the past day or week. This has been found to be so important in practice that virtually all distributed computing projects offer online statistical analyses of their performance, updated at least daily if not in real time.

Universal Description Discovery and Integration From Wikipedia, the free encyclopedia

Universal Description, Discovery and Integration (UDDI) is a platform-independent, XML-based registry for businesses worldwide to list themselves on the Internet. UDDI is an open industry initiative, sponsored by OASIS, enabling businesses to publish service listings, discover each other, and define how the services or software applications interact over the Internet. A UDDI business registration consists of three components:

• White Pages - address, contact, and known identifiers;
• Yellow Pages - industrial categorizations based on standard taxonomies;
• Green Pages - technical information about services exposed by the business.

UDDI is one of the core Web services standards[1]. It is designed to be interrogated by SOAP messages and to provide access to Web Services Description Language (WSDL) documents describing the protocol bindings and message formats required to interact with the web services listed in its directory.

UDDI was written in August 2000, at a time when the authors had a vision of a world in which consumers of Web services would be linked up with providers through a public or private dynamic brokerage system. In this vision, anyone needing a service, such as credit card authentication, would go to their service broker and select one supporting the desired SOAP or other service interface and meeting other criteria. In such a world, the publicly operated UDDI node or broker would be critical for everyone: for the consumer, public or open brokers would only return services listed for public discovery by others, while for a service producer, getting a good placement in the brokerage, by relying on metadata of authoritative index categories, would be critical for effective placement.

UDDI was integrated into the Web Services Interoperability (WS-I) standard as a central pillar of web services infrastructure. By the end of 2005, it was on the agenda for use by more than seventy percent of the Fortune 500 companies in either a public or private implementation, particularly among enterprises that seek to optimize software or service reuse. Many of these enterprises subscribe to some form of service-oriented architecture (SOA), or use server programs or database software licensed by some of the professed founders of UDDI.org and OASIS. The UDDI specifications supported a publicly accessible Universal Business Registry in which a naming system was built around the UDDI-driven service broker. IBM, Microsoft and SAP announced they were closing their public UDDI nodes in January 2006.[2]

Some assert that the most common place that a UDDI system can be found is inside a company where it is used to dynamically bind client systems to implementations. They would say that much of the search metadata permitted in UDDI is not used for this relatively simple role. However, the core of the trade infrastructure under UDDI, when deployed in the Universal Business Registries (now being disabled), has made all the information available to any client application, regardless of heterogeneous computing domains.

Contents

1 UDDI Data Model
  1.1 UDDI Data Types
2 UDDI Nodes & Registry
  2.1 UDDI Nodes
  2.2 UDDI Registry
3 Implementations
  3.1 UDDI clients
  3.2 UDDI servers
4 See also
5 External links
6 References



UDDI Data Model

UDDI Data Types

• businessEntity: the top-level structure, describing a business or other entity for which information is being registered
• businessService: a description of a set of services, which may contain one or more bindingTemplates
• bindingTemplate: the information necessary to invoke specific services, which may encompass bindings to one or more protocols, such as HTTP or SMTP
• tModel: a technical "fingerprint" for a given service, which may also function as a namespace to identify other entities, including other tModels
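This is not a UDDI API, just a plain-Java illustration of how those four types nest inside a registration.

```java
// Hypothetical classes mirroring the UDDI data types; not part of any UDDI library.
// A businessEntity owns businessServices, each service owns bindingTemplates, and a
// bindingTemplate points at the tModels that identify the technical interface it implements.
import java.util.List;

public class UddiDataModelSketch {
    static class TModel          { String key; String name; }                 // technical "fingerprint"
    static class BindingTemplate { String accessPoint; List<TModel> tModels; }
    static class BusinessService { String name; List<BindingTemplate> bindings; }
    static class BusinessEntity  { String name; List<BusinessService> services; }
}
```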

UDDI Nodes & Registry

UDDI Nodes

UDDI nodes are servers which support the UDDI specification and belong to a UDDI registry.

UDDI Registry

UDDI registries are collections of one or more nodes.

Service component architecture From Wikipedia, the free encyclopedia

Service Component Architecture (SCA) is a relatively new initiative advocated by major vendors of Java EE technology. Its proponents claim it is more natively suited for the delivery of applications that conform with the principles of service-oriented architecture. As such, SCA components are supposedly more technologically agnostic.

Contents

1 Partners
2 Supporters
3 Definition
4 Further Analysis
5 SCA artifacts
6 Transition to a Standards Body
7 Footnotes
8 See also
9 External links

Partners

Partner vendors include:

• the original members: BEA Systems, IBM, IONA Technologies, Oracle Corporation, SAP AG, Sybase, Xcalia and Zend Technologies;
• the additional members announced on July 26, 2006: Cape Clear, Interface21, Primeton Technologies, Progress Software, Red Hat, Rogue Wave Software, Software AG, Sun Microsystems and TIBCO Software;[1]
• Siemens AG, who joined the collaboration of companies working on the technology on September 18, 2006.

Supporters

In addition to the partners above, the SCA community has a significant set of formal supporters.[2] The Supporters Program remains open for any interested vendor, ISV, customer or user of the SCA technology to contribute to its evolution.

Definition

On March 21, 2007 the OSOA Collaboration released the V1.0 level of the specifications[3]. The specifications state that an application designed with SCA should have the following advantages:

• Decoupling of application business logic from the details of its invoked service calls.
• Target services in a multitude of languages including C++, Java, COBOL, and PHP, as well as XML, BPEL, and XSLT.
• The ability to work seamlessly with various communications constructs including one-way, asynchronous, call-return, and notification.
• The ability to "bind" to legacy components or services, accessed normally by technologies such as Web Services, EJB, JMS, JCA, RMI and others.
• The ability to declare (outside of business logic) quality-of-service requirements, such as security, transactions and the use of reliable messaging.
• Data can be represented in Service Data Objects.

The value proposition of SCA, therefore, is to offer the flexibility for true composite applications, flexibly incorporating reusable components in an SOA programming style. The overhead of business-logic programmer concerns regarding platforms, infrastructure, plumbing, policies and protocols is removed, enabling a high degree of programmer productivity.

Further Analysis

Gartner Group has published a short brief concluding that the Service Data Objects (SDO) technology included with SCA will enjoy more rapid adoption due to its maturity.[4]

Advantages:

• caters for all existing Java platform technologies and C++
• less technology dependence: does not have to rely on the Java programming language, nor on XML
• uses SDO, which is the only industry standard for data access in SOA
• lack of support by Microsoft encourages potential users to broaden their spectrum of SOA solutions to an increasing number of vendors

Disadvantages:

• lack of support by Microsoft reduces the relevancy of SCA for a large number of potential users
• the specification does not address performance of SOA applications, which continues to be a detractor of adoption

SCA is said to provide interoperability through an approach called "activation". This is the method that provides the highest degree of component autonomy, compared with the older "mediation" approach (e.g. JBI) or the "invocation" method used in JCA, as explained by an architect at SAP[1].

SCA artifacts

The SCA Assembly Model consists of a series of artifacts, which are defined by elements contained in XML files. An SCA runtime may have other, non-standard representations of the artifacts represented by these XML files, and may allow the configuration of systems to be modified dynamically; however, the XML files define the portable representation of the SCA artifacts.

The basic artifact is the Composite, which is the unit of deployment for SCA and which holds Services that can be accessed remotely. A composite contains one or more Components, which contain the business function provided by the module. Components offer their function as services, which can either be used by other components within the same module or be made available for use outside the module through Entry Points. Components may also depend on services provided by other components; these dependencies are called References. References can either be linked to services provided by other components in the same module, or be linked to services provided outside the module, which can be provided by other modules. References to services provided outside the module, including services provided by other modules, are defined by External Services in the module. Also contained in the module are the linkages between references and services, represented by Wires.

A Component consists of a configured Implementation, where an implementation is the piece of program code implementing business functions. The component configures the implementation with specific values for the settable Properties declared by the implementation. The component can also configure the implementation with wiring of the references declared by the implementation to specific target services.

Composites are deployed within an SCA System. An SCA System represents a set of services providing an area of business functionality that is controlled by a single organization. For example, for the accounts department in a business, the SCA System might cover all finance-related function, and it might contain a series of modules dealing with specific areas of accounting, with one for customer accounts and another dealing with accounts payable. To help build and configure the SCA System, composites can be used as component implementations, in the same way as Java classes or BPEL processes. In other words, SCA allows a hierarchy of composites that is arbitrarily deep; such a nested model is termed recursive.
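A hedged sketch of a component implementation follows, assuming the SCA Java annotations (org.osoa.sca.annotations) published with the V1.0 OSOA specifications and available in implementations such as Apache Tuscany. The interfaces, property, and reference are hypothetical, and the composite XML file that would wire the reference is not shown.

```java
// Sketch under stated assumptions: an SCA component exposing a service, consuming a
// reference, and taking a configurable property. Names are invented for illustration.
import org.osoa.sca.annotations.Property;
import org.osoa.sca.annotations.Reference;
import org.osoa.sca.annotations.Service;

// The business interface this component offers as a service.
interface AccountService {
    double getBalance(String accountId);
}

// A fine-grained service this component depends on; the composite's wire decides what satisfies it.
interface ExchangeRateService {
    double rateFor(String currency);
}

@Service(AccountService.class)
public class AccountComponent implements AccountService {

    @Reference
    protected ExchangeRateService exchangeRates;   // resolved by the composite's wiring

    @Property
    protected String reportingCurrency = "USD";    // configurable per component instance

    public double getBalance(String accountId) {
        double balanceInUsd = 100.0;               // stand-in for a real account lookup
        return balanceInUsd * exchangeRates.rateFor(reportingCurrency);
    }
}
```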

The capture and expression of non-functional requirements, such as security, is an important aspect of service definition and has an impact on SCA throughout the lifecycle of components and compositions. SCA provides the Policy Framework to support the specification of constraints, capabilities and Quality of Service (QoS) expectations, from component design through to concrete deployment.
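
As a rough sketch of how such a constraint might be attached to a component, the fragment below uses the @Requires intent annotation described in the SCA Java annotation specifications; the intent name "confidentiality" is one of the commonly cited SCA security intents. The service shown is hypothetical, and the exact annotation package and intent naming vary between specification versions and runtimes.

    import org.osoa.sca.annotations.Remotable;
    import org.osoa.sca.annotations.Requires;
    import org.osoa.sca.annotations.Service;

    // Hypothetical payment service used only for illustration.
    @Remotable
    interface PaymentService {
        void pay(String accountId, double amount);
    }

    // The "confidentiality" intent tells the deployer and runtime that whatever
    // binding is chosen for this service must provide message confidentiality
    // (for example, transport- or message-level encryption). The intent is
    // abstract; the concrete policy is resolved at deployment time.
    @Service(PaymentService.class)
    @Requires("confidentiality")
    public class PaymentServiceImpl implements PaymentService {
        public void pay(String accountId, double amount) {
            // business logic elided in this sketch
        }
    }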

Transition to a Standards Body

After several years of incubation under an informal industry collaboration, early (V1.0) implementations of the specification are now coming to market. The collaboration has now indicated that formal industry standardization is the appropriate next step and announced its intentions in March 2007. The chosen Standards Development Organization is OASIS, and a new OASIS Open CSA Member Section has been established [5]. Charters for six new Technical Committees (TCs) have been submitted to OASIS [6], and a Call for Participation for Technical Committee members has been issued within the OASIS organization. The Technical Committees will start their work in September 2007. Participation in these OASIS SCA TCs remains open to all companies, non-profit groups, governments, academic institutions, and individuals. Archives of the work will be accessible to both members and non-members, and OASIS will offer a mechanism for public comment [7].

OASIS (organization) From Wikipedia, the free encyclopedia

The Organization for the Advancement of Structured Information Standards (OASIS) is a global consortium that drives the development, convergence and adoption of e-business and web service standards. Members of the consortium decide how and what work is undertaken through an open, democratic process. Technical work is happening in the following categories: Web Services, e-Commerce, Security, Law & Government, Supply Chain, Computing Management, Application Focus, Document-Centric, XML Processing, Conformance/Interop, and Industry Domains.


History

OASIS was first formed as SGML Open in 1993, a trade association of SGML tool vendors cooperating to promote the adoption of SGML, mainly through educational activities, though some technical activity was also pursued, including an update of the CALS Table Model specification and specifications for fragment interchange and entity management.

In 1998, with the movement of the high-tech industry to XML, SGML Open changed its emphasis from SGML to XML and changed its name to OASIS Open to be inclusive of XML and any future structured-information standards. The focus of the consortium's activities also moved from promoting adoption (as XML was attracting plenty of attention on its own) to developing technical specifications. In July 2000 a new technical committee process was approved; with its adoption, the way technical committees were created, operated, and progressed their work was regularized. At the adoption of the process there were five technical committees; by 2004 there were nearly 70.

During 1999 OASIS was approached by UN/CEFACT, the committee of the United Nations dealing with standards for business, to jointly develop a new set of specifications for electronic business. The joint initiative, called ebXML, first met in November 1999 and was chartered for a three-year period. At the final meeting under the original charter, in Vienna, UN/CEFACT and OASIS agreed to divide the remaining work between the two organizations and to coordinate its completion through a coordinating committee. In 2004 OASIS submitted its completed ebXML specifications to ISO TC154, where they were approved as ISO 15000.

Specific standards under development by OASIS technical committees

• CAP - Common Alerting Protocol, an XML-based data format for exchanging public warnings and emergency alerts between alerting technologies.
• CIQ - Customer Information Quality, a set of XML specifications for defining, representing, interoperating with and managing party information (e.g. name, address).
• DocBook - a markup language for technical documentation. It was originally intended for authoring technical documents related to computer hardware and software, but it can be used for any other sort of documentation.
• DITA - Darwin Information Typing Architecture, a modular and extensible XML-based language for topic-based information, such as online help, documentation, and training.
• OpenDocument - OASIS Open Document Format for Office Applications, an open document file format for saving office documents such as spreadsheets, memos, charts, and presentations.
• SAML - Security Assertion Markup Language, a standard XML-based framework for the secure exchange of authentication and authorization information.
• SPML - Service Provisioning Markup Language, a standard XML-based protocol for the integration and interoperation of service provisioning requests.
• UBL - Universal Business Language, an international effort to define a royalty-free library of standard electronic XML business documents. Since February 2005, all invoices to the Danish government have had to be in UBL electronic format.
• WSDM - Web Services Distributed Management.
• XRI - eXtensible Resource Identifier, a URI-compatible scheme and resolution protocol for abstract identifiers used to identify and share resources across domains and applications.
• XDI - XRI Data Interchange, a standard for sharing, linking, and synchronizing data ("dataweb") across multiple domains and applications using XML documents, eXtensible Resource Identifiers (XRIs), and a new method of distributed data control called a link contract.

Patent disclosure controversy

Like many bodies producing open standards, OASIS has a patent disclosure policy requiring participants to disclose any intent to apply for software patents on technologies under consideration in a standard. Like the W3C, which requires participants to offer royalty-free licenses to anyone using the resulting standard, OASIS offers a similar Royalty Free on Limited Terms mode, along with a Royalty Free on RAND Terms mode and a RAND (reasonable and non-discriminatory) mode for its committees [1]. Controversy has arisen because this licensing allows the publication of standards that require licensing-fee payments to patent holders, which would effectively eliminate the possibility of free/open source implementations of those standards. Further, contributors could initially offer royalty-free use of their patent and later impose per-unit fees, after the standard becomes accepted. Supporters of OASIS point out that this could occur anyway, since an agreement would not be binding on non-participants, and that stricter requirements would discourage contributions from potential participants. Supporters further argue that IBM and Microsoft shifting standardization efforts from the W3C to OASIS is evidence that this is already occurring.
