Database Management System by Simon (BUBT)


 

Definition

DBMS stands for "Database Management System." In short, a DBMS is a database program. Technically speaking, it is a software system that uses a standard method of cataloging, retrieving, and running queries on data. The DBMS manages incoming data, organizes it, and provides ways for the data to be modified or extracted by users or other programs. Some DBMS examples include MySQL, PostgreSQL, Microsoft Access, SQL Server, FileMaker, Oracle, dBASE, Clipper, and FoxPro. Since there are so many database management systems available, it is important for there to be a way for them to communicate with each other. For this reason, most database software comes with an Open Database Connectivity (ODBC) driver that allows the database to integrate with other databases. For example, common SQL statements such as SELECT and INSERT are translated from a program's proprietary syntax into a syntax other databases can understand.

A database management system is the system in which related data is stored in an efficient and compact manner. "Efficient" means that the data which is stored in the DBMS can be accessed quickly, and "compact" means that the data takes up very little space in the computer's memory. The phrase "related data" means that the data stored pertains to a particular topic.

Specialized databases have existed for scientific, imaging, document storage and like uses. Functionality drawn from such applications has begun appearing in mainstream DBMSs as well. However, the main focus, at least when aimed at the commercial data processing market, is still on descriptive attributes on repetitive record structures. Thus, the DBMSs of today roll together frequently needed services or features of attribute management. By externalizing such functionality to the DBMS, applications effectively share code with each other and are relieved of much internal complexity. Features commonly offered by database management systems include:

Query ability: Querying is the process of requesting attribute information from various perspectives and combinations of factors. Example: "How many 2-door cars in Texas are green?" A database query language and report writer allow users to interactively interrogate the database, analyze its data and update it according to the user's privileges on data.
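For instance, the example question above might be posed in SQL roughly as follows (a minimal sketch; the cars table and its columns are hypothetical, not from the original text):

    SELECT COUNT(*)
    FROM cars                -- hypothetical table of car records
    WHERE doors = 2
      AND state = 'Texas'
      AND color = 'green';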

Backup and replication: Copies of attributes need to be made regularly in case primary disks or other equipment fails. A periodic copy of attributes may also be created for a distant organization that cannot readily access the original. DBMSs usually provide utilities to facilitate the process of extracting and disseminating attribute sets. When data is replicated between database servers, so that the information remains consistent throughout the database system and users cannot tell or even know which server in the DBMS they are using, the system is said to exhibit replication transparency.

 



 

 

Rule enforcement: Often one wants to apply rules to attributes so that the attributes are clean and reliable. For example, we may have a rule that says each car can have only one engine associated with it (identified by Engine Number). If somebody tries to associate a second engine with a given car, we want the DBMS to deny such a request and display an error message. However, with changes in the model specification such as, in this example, hybrid gas-electric cars, rules may need to change. Ideally such rules should be able to be added and removed as needed without significant data layout redesign.
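Sketched in SQL, such a rule might be declared as follows (the table and column names are illustrative, not from the original text):

    -- Each car record carries exactly one engine number, and the UNIQUE
    -- constraint prevents the same engine being associated with two cars.
    CREATE TABLE cars (
        car_id         INTEGER PRIMARY KEY,
        engine_number  VARCHAR(20) NOT NULL UNIQUE
    );

When the model changes (for example, hybrid gas-electric cars with two engines), the rule could be relaxed by moving the association into a separate car_engines table, without redesigning the rest of the layout.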

Security: For security reasons, it is desirable to limit who can see or change specific attributes or groups of attributes. This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements.
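In SQL terms, role-based privilege management might look like this sketch (the role, table, column and user names are made up for illustration):

    -- Collect privileges into a role, then grant the role to individuals.
    CREATE ROLE claims_clerk;
    GRANT SELECT, UPDATE (status) ON claims TO claims_clerk;
    GRANT claims_clerk TO alice, bob;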

Computation: Common computations requested on attributes are counting, summing, averaging, sorting, grouping, cross-referencing, and so on. Rather than have each computer application implement these from scratch, they can rely on the DBMS to supply such calculations.
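For example, a single SQL statement can delegate several of these computations to the DBMS at once (a sketch; the employees table is hypothetical):

    SELECT department,
           COUNT(*)    AS headcount,   -- counting
           SUM(salary) AS total_pay,   -- summing
           AVG(salary) AS average_pay  -- averaging
    FROM employees
    GROUP BY department                -- grouping
    ORDER BY department;               -- sorting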

Change and access logging: This describes who accessed which attributes, what was changed, and when it was changed. Logging services allow this by keeping a record of access occurrences and changes.

 

Automated optimization: For frequently occurring usage patterns or requests, some DBMSs can adjust themselves to improve the speed of those interactions. In some cases the DBMS will merely provide tools to monitor performance, allowing a human expert to make the necessary adjustments after reviewing the statistics collected.

A Database Management System (DBMS) is a set of computer programs that controls the creation, maintenance, and the use of a database. It allows organizations to place control of database development in the hands of database administrators (DBAs) and other specialists. A DBMS is a system software package that helps the use of an integrated collection of data records and files known as databases. It allows different user application programs to easily access the same database. DBMSs may use any of a variety of database models, such as the network model or relational model. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way. Instead of having to write computer programs to extract information, users can ask simple questions in a query language. Thus, many DBMS packages provide fourth-generation programming languages (4GLs) and other application development features. A DBMS helps to specify the logical organization for a database and to access and use the information within a database. It provides facilities for controlling data access, enforcing data integrity, managing concurrency, and restoring the database from backups. A DBMS also provides the ability to logically present database information to users.



 

History

Databases have been in use since the earliest days of electronic computing. Unlike modern systems which can be applied to widely different databases and needs, the vast majority of older systems were tightly linked to the custom databases in order to gain speed at the expense of flexibility. Originally DBMSs were found only in large organizations with the computer hardware needed to support large data sets.

1960s Navigational DBMS

As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s there were a number of such systems in commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the "Database Task Group" within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971 they delivered their standard, which generally became known as the "Codasyl approach", and soon a number of commercial products based on this approach were made available.

The Codasyl approach was based on the "manual" navigation of a linked data set which was formed into a large network. When the database was first opened, the program was handed back a link to the first record in the database, which also contained pointers to other pieces of data. To find any particular record the programmer had to step through these pointers one at a time until the required record was returned. Simple queries like "find all the people in India" required the program to walk the entire data set and collect the matching results one by one. There was, essentially, no concept of "find" or "search". This may sound like a serious limitation today, but in an era when most data was stored on magnetic tape such operations were too expensive to contemplate anyway.

IBM also had their own DBMS system in 1968, known as IMS. IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to Codasyl, but used a strict hierarchy for its model of data navigation instead of Codasyl's network model. Both concepts later became known as navigational databases due to the way data was accessed, and Bachman's 1973 Turing Award presentation was The Programmer as Navigator. IMS is classified as a hierarchical database. IDMS and Cincom's TOTAL database are classified as network databases.

1970s Relational DBMS  



 

Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the Codasyl approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks. In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in Codasyl, Codd's idea was to use a "table" of fixed-length records. A linked-list system would be very inefficient when storing "sparse" databases where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a series of normalized tables, with optional elements being moved out of the main table to where they would take up room only if needed.

In the relational model, related records are linked together with a "key".

For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach all of these data would be placed in a single record, and unused items would simply not be placed in the database. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.

Linking the information back together is the key to this system. In the relational model, some bit of information was used as a "key", uniquely defining a particular record. When information was being collected about a user, information stored in the optional (or related) tables would be found by searching for this key. For instance, if the login name of a user is unique, addresses and phone numbers for that user would be recorded with the login name as its key. This "re-linking" of related data back into a single collection is something that traditional computer languages are not designed for.
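A sketch of this normalization in SQL, with the login name serving as the key (the exact layout is illustrative, not from the original text):

    CREATE TABLE users (
        login      VARCHAR(30) PRIMARY KEY,  -- the unique "key"
        full_name  VARCHAR(100)
    );

    CREATE TABLE addresses (
        login    VARCHAR(30) REFERENCES users (login),
        address  VARCHAR(200)
    );

    CREATE TABLE phone_numbers (
        login  VARCHAR(30) REFERENCES users (login),
        phone  VARCHAR(20)
    );

    -- "Re-linking" the related data back into a single collection:
    SELECT u.full_name, a.address, p.phone
    FROM users u
    LEFT JOIN addresses a     ON a.login = u.login
    LEFT JOIN phone_numbers p ON p.login = u.login;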



 

Just as the navigational approach would require programs to loop in order to collect records, the relational approach would require loops to collect information about any one record. Codd's solution to the necessary looping was a set-oriented language, a suggestion that would later spawn the ubiquitous SQL. Using a branch of mathematics known as tuple calculus, he demonstrated that such a system could support all the operations of normal databases (inserting, updating etc.) as well as providing a simple system for finding and returning sets of data in a single operation.

Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project, using student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. During this time, a number of people had moved "through" the group; perhaps as many as 30 people worked on the project, about five at a time. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. QUEL was in fact relational, having been based on Codd's own Alpha language, but has since been corrupted to follow SQL, thus violating much the same concepts of the relational model as SQL itself.

IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell did MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. All other DBMS implementations usually called relational are actually SQL DBMSs.

In 1968, the University of Michigan began development of the Micro DBMS. It was used to manage very large data sets by the US Department of Labor, the Environmental Protection Agency and researchers from the University of Alberta, the University of Michigan and Wayne State University. It ran on mainframe computers using the Michigan Terminal System. The system remained in production until 1996.

End 1970s SQL DBMS

IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language, SQL, had been added. Codd's ideas were establishing themselves as both workable and superior to Codasyl, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (DB2).

Many of the people involved with INGRES became convinced of the future commercial success of such systems, and formed their own companies to commercialize the work but with an SQL interface. Sybase, Informix, NonStop SQL and eventually Ingres itself were all being sold as offshoots to the original INGRES product in the 1980s. Even Microsoft SQL Server is actually a re-built version of Sybase, and thus, INGRES. Only Larry Ellison's Oracle started from a different chain, based on IBM's papers on System R, and beat IBM to market when the first version was released in 1978.

Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).

In Sweden, Codd's paper was also read and Mimer SQL was developed from the mid-70s at Uppsala University. In 1984, this project was consolidated into an independent enterprise. In the early 1980s, Mimer introduced transaction handling for high robustness in applications, an idea that was subsequently implemented on most other DBMSs.

1980s Object Oriented Databases

The 1980s, along with a rise in object oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relationships between data to be in relation to objects and their attributes and not to individual fields.

Another big game changer for databases in the 1980s was the focus on increasing reliability and access speeds. In 1989, two professors from the University of Wisconsin at Madison published an article at an ACM associated conference outlining their methods on increasing database performance. The idea was to replicate specific important and often queried information, and store it in a smaller temporary database that linked these key features back to the main database. This meant that a query could search the smaller database much quicker, rather than search the entire dataset. This eventually led to the practice of indexing, which is used by almost every operating system from Windows to the system that operates Apple iPod devices.

Current Trends

In 1998, database management was in need of a new style of databases to solve current database management problems. Researchers realized that the old trends of database management were becoming too complex and there was a need for automated configuration and management. Surajit Chaudhuri, Gerhard Weikum and Michael Stonebraker were the pioneers that dramatically affected the thought of database management systems. They believed that database management needed a more modular approach and that there were too many specifications needed for users. Since this new development process of database management there are more possibilities. Database management is no longer limited to "monolithic entities". Many solutions have been developed to satisfy the individual needs of users. The development of numerous database options has created flexibility in database management.

There are several ways database management has affected the field of technology. Because organizations' demand for directory services has grown as they expand in size, businesses use directory services that provide prompted searches for company information. Mobile devices are able to store more than just the contact information of users, and can cache and display a large amount of information on smaller displays. Search engine queries are able to locate data within the World Wide Web. Retailers have also benefited from the developments with data warehousing, recording customer transactions. Online transactions have become tremendously popular for e-business. Consumers and businesses are able to make payments securely through some company websites. These current developments would not have been possible without the evolution of database management. Even with the progress of database management, there is a demonstrated need for new development as specifications and needs change.



 

As the speeds of consumer internet connectivity increase, and as data availability and computing become more ubiquitous, databases are seeing a migration to web services. Web-based languages such as XML and PHP are used to process databases. These languages allow databases to live in "the cloud." As with products such as Google's Gmail, Microsoft's Office 2010, and Carbonite's online backup services, many services are beginning to move to web based services due to increasing internet reliability, data storage efficiency, and the lack of a need for dedicated IT staff to manage the hardware. Faculty at the Rochester Institute of Technology published a paper regarding the use of databases in the cloud, and state that their university plans to add cloud-based database computing to their information technology (IT) curriculum to "keep [their] technology curriculum at the forefront of technology."

Components

DBMS Engine accepts logical requests from various other DBMS subsystems, converts them into physical equivalents, and actually accesses the database and data dictionary as they exist on a storage device.



Data Definition Subsystem helps the user create and maintain the data dictionary and define the structure of the files in a database.

 

 



 



Data Manipulation Subsystem helps the user to add, change, and delete information in a database and query it for valuable information. Software tools within the data manipulation subsystem are most often the primary interface between the user and the information contained in a database. It allows the user to specify its logical information requirements.

Application Generation Subsystem contains facilities to help users develop transaction-intensive applications. It usually requires that the user perform a detailed series of tasks to process a transaction. It facilitates easy-to-use data entry screens, programming languages, and interfaces.

Data Administration Subsystem helps users manage the overall database environment by providing facilities for backup and recovery, security management, query optimization, concurrency control, and change management.

[Figure: the four DBMS subsystems (Data Definition, Data Manipulation, Application Generation, Data Administration) arranged around the Database.]

The components of the DBMS are discussed in detail in the following:



 

Data Definition Component

Data Definition Component creates and maintains the data dictionary and the structure of the database. The data definition component includes the data dictionary.

Data dictionary:

The data dictionary is an important part of the DBMS because users can consult the dictionary to determine the different types of database information.
The table below displays an example of logical field properties found in a database.
A data dictionary is a file that stores definitions of information types, identifies the primary and foreign keys, and maintains the relationships among the tables.

 

Data dictionary essentially defines the logical properties of the information that the database contains.

Logical property   Example
Field name         Name of the field, such as Customer ID or Product ID
Type               Alphanumeric, numeric, date, time, currency, etc.
Form               Each phone number must have the area code: (XXX) XXX-XXXX
Default value      The default value for the area code is (303)
Validation rule    A discount cannot exceed 100%
Entry rule         The field must have a valid entry; no blanks are allowed
Duplicate rule     Duplicate information is not allowed

Logical properties displayed in the table vary depending on the types of information.



 

A typical address field can accept numbers, letters, and special characters (a relational integrity constraint). The validation rule requiring that a discount can't exceed 100% is a business-critical integrity constraint.
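Translated into SQL column definitions, these logical properties might be declared roughly like this (a sketch; the customers table and its columns are invented to mirror the table above):

    CREATE TABLE customers (
        customer_id  VARCHAR(10) NOT NULL UNIQUE,  -- entry rule and duplicate rule
        area_code    CHAR(5) DEFAULT '(303)',      -- default value
        phone        CHAR(8),                      -- form: XXX-XXXX
        discount     NUMERIC(5,2)
                     CHECK (discount <= 100)       -- validation rule
    );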

Data Manipulation Component

Data Manipulation Component allows users to create, read, update, and delete information in a database. A DBMS contains several data manipulation tools:

View: allows users to see, change, sort, and query the database content
Report generator: users can define report formats
Query-by-example (QBE): users can graphically design the answers to specific questions
Structured query language (SQL): a query language
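For instance, the view and SQL tools in the list above might be combined as follows (a sketch; the table, column and view names are hypothetical):

    -- A view presenting a subset of the database content...
    CREATE VIEW colorado_customers AS
        SELECT customer_id, phone
        FROM customers
        WHERE area_code = '(303)';

    -- ...which users can then sort and query like an ordinary table.
    SELECT * FROM colorado_customers ORDER BY customer_id;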

[Figure: a QBE graphical query, with call-outs explaining the fields in the query.]

Application Generation & Data Administration Components

The Application Generation subsystem includes tools for creating visually appealing and easy-to-use applications. The Data Administration subsystem provides tools for managing the overall database environment by providing facilities for backup, recovery, security and performance. IT specialists primarily use these components.

 



 

Modeling language

A modeling language is a data modeling language used to define the schema of each database hosted in the DBMS, according to the DBMS database model. Database management systems (DBMS) are designed to use one of five database structures to provide simple access to information stored in databases. The five database structures are:

the hierarchical model,
the network model,
the relational model,
the multidimensional model, and
the object model.

Inverted lists and other methods are also used. A given database management system may provide one or more of the five models. The optimal structure depends on the natural organization of the application's data, and on the application's requirements, which include transaction rate (speed), reliability, maintainability, scalability, and cost.

The Hierarchical structure was used in early mainframe DBMS. Records' relationships form a treelike model. This structure is simple but nonflexible because the relationship is confined to a one-to-many relationship. IBM's IMS system and the RDM Mobile are examples of a hierarchical database system with multiple hierarchies over the same data. RDM Mobile is a newly designed embedded database for a mobile computer system. The hierarchical structure is used primarily today for storing geographic information and file systems.


The Network structure consists of more complex relationships. Unlike the hierarchical structure, it can relate to many records and accesses them by following one of several paths. In other words, this structure allows for many-to-many relationships.

The Relational structure is the most commonly used today. It is used by mainframe, midrange and microcomputer systems. It uses two-dimensional rows and columns to store data. The tables of records can be connected by common key values. While working for IBM, E.F. Codd designed this structure in 1970. The model is not easy for the end user to run queries with because it may require a complex combination of many tables.

The Multidimensional structure is similar to the relational model. The dimensions of the cube-like model have data relating to elements in each cell. This structure gives a spreadsheet-like view of data. This structure is easy to maintain because records are stored as fundamental attributes, in the same way they are viewed, and the structure is easy to understand. Its high performance has made it the most popular database structure when it comes to enabling online analytical processing (OLAP).

The Object oriented structure has the ability to handle graphics, pictures, voice and text, types of data, without difficulty, unlike the other database structures. This structure is popular for multimedia Web-based applications. It was designed to work with object-oriented programming languages such as Java.

The dominant model in use today is the ad hoc one embedded in SQL, despite the objections of purists who believe this model is a corruption of the relational model since it violates several fundamental principles for the sake of practicality and performance. Many DBMSs also support the Open Database Connectivity API that supports a standard way for programmers to access the DBMS.

Before the database management approach, organizations relied on file processing systems to organize, store, and process data files. End users criticized file processing because the data is stored in many different files, each organized in a different way. Each file was specialized to be used with a specific application. File processing was bulky, costly and nonflexible when it came to supplying needed data accurately and promptly. Data redundancy is an issue with the file processing system because the independent data files produce duplicate data, so when updates were needed each separate file would need to be updated. Another issue is the lack of data integration. The data is dependent on other data to organize and store it. Lastly, there was not any consistency or standardization of the data in a file processing system, which makes maintenance difficult. For these reasons, the database management approach was produced.

 


Database Models

A database model is the theoretical foundation of a database and fundamentally determines in which manner data can be stored, organized and manipulated in a database system. It thereby defines the infrastructure offered by a particular database system. The most popular example of a database model is the relational model.

 


A database model is a theory or specification describing how a database is structured and used. Several such models have been suggested. Common models include:

Hierarchical model
Network model
Relational model
Entity-relationship model
Object-relational model
Object model

A data model is not just a way of structuring data: it also defines a set of operations that can be performed on the data. The relational model, for example, defines operations such as select, project, and join. Although these operations may not be explicit in a particular query language, they provide the foundation on which a query language is built.

Various techniques are used to model data structure. Most database systems are built around one particular data model, although it is increasingly common for products to offer support for more than one model. For any one logical model various physical implementations may be possible, and most products will offer the user some level of control in tuning the physical implementation, since the choices that are made have a significant effect on performance. An example of this is the relational model: all serious implementations of the relational model allow the creation of indexes which provide fast access to rows in a table if the values of certain columns are known.
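For instance (a sketch; the table and column names are hypothetical):

    -- Physical tuning without changing the logical model: an index lets the
    -- DBMS find rows quickly when the customer_id value is known.
    CREATE INDEX idx_orders_customer ON orders (customer_id);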


Flat File Database

A flat file database describes any of various means to encode a database model (most commonly a table) as a single file (such as .txt or .ini).

A "flat file" is a plain text or mixed text and binary file which usually contains one record per line or 'physical' record (example on disc or tape) tape).. Within such a record, the single fields can be separated by delimiters, e.g. commas, or have a fixed length. In the latter case, padding may be needed to achieve this length. Extra formatting may be needed to avoid delimiter collision. There are no structural relationships between the records. Typical examples of flat files are /etc/passw  /etc/passwd  d and /etc/grou  /etc/group p on Unix-like operating systems. Another example of a flat file is a name-and-address list with the fields Name, Address, and Phone  Number . It is possible to write out by hand, on a sheet of paper, a list of names, addresses, and phone numbers; this is a flat file database. This can also be done with any typewriter or word processor. processor.   Many pieces of computer software are designed to implement flat file databases.

 


History

The first uses of computing machines were implementations of simple databases. Herman Hollerith conceived the idea that census data could be represented by holes punched in paper cards and tabulated by machine. He sold his concept to the US Census Bureau; thus, the Census of 1890 was the first ever computerized database, consisting, in essence, of thousands of boxes full of punched cards. Hollerith's enterprise grew into computer giant IBM, which dominated the data processing market for most of the 20th century. IBM's fixed-length field, 80-column punch cards became the ubiquitous means of inputting electronic data until the 1970s.

In the 1980s, configurable flat-file database computer applications were popular on DOS and the Macintosh. These programs were designed to make it easy for individuals to design and use their own databases, and were almost on par with word processors and spreadsheets in popularity. Examples of flat-file database products were early versions of FileMaker and the shareware PCFile. Some of these offered limited relational capabilities, allowing some data to be shared between files.

Contemporary implementations

Faircom's C-tree is an example of a modern enterprise-level solution, and spreadsheet software is often used for this purpose, but aside from that there are very few programs available today that would allow a novice to create and use a general-purpose flat file database. This functionality is implemented in Microsoft Works (available only for some versions of Windows) and AppleWorks, sometimes named ClarisWorks (available for both Macintosh and Windows platforms). Over time, products like Borland's Paradox and Microsoft's Access started offering some relational capabilities, as well as built-in programming languages. Database Management Systems (DBMS) like MySQL or Oracle generally require programmers to build applications.

Flat file databases are still used internally by many computer applications to store configuration data. Many applications allow users to store and retrieve their own information from flat files using a pre-defined set of fields. Examples are programs to manage collections of books or appointments. Some small "contact" (name-and-address) database implementations essentially use flat files.

XML is now a popular format for storing data in plain text files, but as XML allows very complex nested data structures to be represented and contains the definition of the data, it is very different from the flat-file model.

Terms

"Flat file database" may be defined very narrowly, or more broadly. The narrower interpretation is correct in database theory; the broader covers the term as generally used.

Strictly, a flat file database should consist of nothing but data and, if records vary in length, delimiters. More broadly, the term refers to any database which exists in a single file in the form of rows and columns, with no relationships or links between records and fields except the table structure.

Terms used to describe different aspects of a database and its tools differ from one implementation to the next, but the concepts remain the same. FileMaker uses the term "Find", while MySQL uses the term "Query"; but the concept is the same. FileMaker "files" are equivalent to MySQL "tables", and so forth. To avoid confusing the reader, one consistent set of terms is used throughout this article. However, the basic terms "record" and "field" are used in nearly every flat file database implementation.

Example Database

The following example illustrates the basic elements of a flat-file database. The data arrangement consists of a series of columns and rows organized into a tabular format. This specific example uses only one table. The columns include: name (a person's name, second column); team (the name of an athletic team supported by the person, third column); and a numeric unique ID (used to uniquely identify records, first column). Here is an example textual representation of the described data:

Id  name   team
1   Amy    Blues
2   Bob    Reds
3   Chuck  Blues
4   Dick   Blues
5   Ethel  Reds
6   Fred   Blues
7   Gilly  Blues
8   Hank   Reds

This type of data representation is quite standard for a flat-file database, although there are some additional considerations that are not readily apparent from the text:

Data types: each column in a database table such as the one above is ordinarily restricted to a specific data type. Such restrictions are usually established by convention, but not formally indicated unless the data is transferred to a relational database system.

Separated columns: in the above example, individual columns are separated using whitespace characters. This is also called indentation or "fixed-width" data formatting. Another common convention is to separate columns using one or more delimiter characters. There are many different conventions for depicting data such as that above in text (see e.g. comma-separated values, delimiter-separated values, markup language, programming language). Using delimiters incurs some overhead in locating them every time they are processed (unlike fixed-width formatting), which may have some performance implications. However, use of character delimiters (especially commas) is also a crude form of data compression which may assist overall performance by reducing data volumes, especially for data transmission purposes. Use of character delimiters which include a length component (declarative notation) is comparatively rare but vastly reduces the overhead associated with locating the extent of each field.

Relational algebra: each row or record in the above table meets the standard definition of a tuple under relational algebra (the above example depicts a series of 3-tuples). Additionally, the first row specifies the field names that are associated with the values of each row.

Database management system: since the formal operations possible with a text file are usually more limited than desired, the text in the above example would ordinarily represent an intermediary state of the data prior to being transferred into a database management system.
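That transfer might look like the following for the example data above (an SQL sketch; the table name is invented):

    CREATE TABLE supporters (
        id    INTEGER PRIMARY KEY,  -- the numeric unique ID column
        name  VARCHAR(30),          -- the person's name
        team  VARCHAR(30)           -- the supported team
    );

    INSERT INTO supporters (id, name, team) VALUES
        (1, 'Amy',   'Blues'),
        (2, 'Bob',   'Reds'),
        (3, 'Chuck', 'Blues');
    -- ...and so on for the remaining rows.

Once loaded, the data types and column names are formally declared, and the full set of relational operations becomes available.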

Practical Implementations

GNU Recutils, a set of tools and libraries to access human-editable, text-based databases called recfiles
SQLite, a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine
Berkeley DB, a robust flat file database for critical applications which supports ACID transactions

 


Borland Reflex
TextDB, a file-based database designed to handle high loads (last updated: 2002)
Mimesis, an FFDB written in PHP4 that uses multiple files and a heap method of storage
MySQL CSV, a storage engine for MySQL 5.x
TheIntegrationEngineer, on how delimited and fixed-position or fixed-width files compare and are used
Flat File Checker, an open source data validation application for flat files
HXTT JDBC CSV/Text, an HXTT JDBC flat file database tool
UNIXODBC Text Driver, the text driver included with UNIXODBC
CsvJdbc, a JDBC driver for CSV files
J-Stels CSV, a JDBC driver for CSV files

Hierarchical database model 

A hierarchical data model is a data model in which the data is organized into a tree-like structure. The structure allows repeating information using parent/child relationships: each parent can have many children but each child has only one parent (also known as a 1:many ratio). All attributes of a specific record are listed under an entity type.

Example of a Hierarchical Model.

In a database, an entity type is the equivalent of a table; each individual record is represented as a row and an attribute as a column. Entity types are related to each other using 1:N mapping, also known as one-to-many relationships. This model is recognized as the first database model, created by IBM in the 1960s. The most recognized and used hierarchical databases are IMS, developed by IBM, and the Windows Registry by Microsoft.

History

The hierarchical data model lost traction as Codd's relational model became the de facto standard used by virtually all mainstream database management systems. A relational database implementation of a hierarchical model was first discussed in publication form in 1992 (see also nested set model). Hierarchical data organization schemes resurfaced with the advent of XML in the late 1990s.

Examples of hierarchical data represented as relational tables An organization could store employee information in a table that contains attributes/columns such as employee number, first name, last name, and Department number. The organization provides each employee with computer hardware as needed, but computer equipment may only be used by the employee to which it is assigned. The organization could store the computer hardware information in a separate table that includes each part's serial number, type, and the employee that uses it. The tables might look like this:

Employee table:

EmpNo  First Name  Last Name   Dept. Num
100    Sally       Baker       10-L
101    Jack        Douglas     10-L
102    Sarah       Schultz     20-B
103    David       Drachmeier  20-B

Computer table:

Serial Num   Type      User EmpNo
3009734-4    Computer  100
3-23-283742  Monitor   100
2-22-723423  Monitor   100
232342       Printer   100

In this model, the employee data table represents the "parent" part of the hierarchy, while the computer table represents the "child" part of the hierarchy. As shown, each employee may possess several pieces of computer equipment, but each individual piece of computer equipment may have only one employee owner.


Consider the following structure:

EmpNo  Designation     ReportsTo
10     Director
20     Senior Manager  10
30     Typist          20
40     Programmer      20

In this, the "child" is the same type as the "parent". The hierarchy stating EmpNo 10 is boss of 20, and 30 and 40 each report to 20 is represented by the "ReportsTo" column. In Relational database terms, the ReportsTo column is a foreign key referencing the EmpNo column. If the "child" data type were different, it would be in a different table, but there would still be a foreign key referencing the EmpNo column of the employees table. This simple model is commonly known as the adjacency list model, and was introduced by Dr. Edgar F. Codd after initial criticisms surfaced that the relational model could not model hierarchical data.
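In SQL, the adjacency list model sketched above could be declared and queried like this (illustrative; the column names follow the table above):

    CREATE TABLE employees (
        EmpNo        INTEGER PRIMARY KEY,
        Designation  VARCHAR(30),
        ReportsTo    INTEGER REFERENCES employees (EmpNo)  -- self-referencing foreign key
    );

    -- Everyone who reports directly to the Senior Manager (EmpNo 20):
    SELECT EmpNo, Designation
    FROM employees
    WHERE ReportsTo = 20;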

 


Network Database Model

The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, is not restricted to being a hierarchy or lattice. The network model's original inventor was Charles Bachman, and it was developed into a standard specification published in 1969 by the CODASYL Consortium.

Where the hierarchical database model structures data as a tree of records, with each record having one parent record and many children, the network model allows each record to have multiple parent and child records, forming a generalized graph structure. This property applies at two levels: the schema is a generalized graph of record types connected by relationship types (called "set types" in CODASYL), and the database itself is a generalized graph of record occurrences connected by relationships (CODASYL "sets"). Cycles are permitted at both levels.  


The chief argument in favour of the network model, in comparison to the hierarchic model, was that it allowed a more natural modeling of relationships between entities. Although the model was widely implemented and used, it failed to become dominant for two main reasons. Firstly, IBM chose to stick to the hierarchical model with semi-network extensions in their established products such as IMS and DL/I. Secondly, it was eventually displaced by the relational model, which offered a higher-level, more declarative interface. Until the early 1980s the performance benefits of the low-level navigational interfaces offered by hierarchical and network databases were persuasive for many large-scale applications, but as hardware became faster, the extra productivity and flexibility of the relational model led to the gradual obsolescence of the network model in corporate enterprise usage.

Some Well-known Database Systems using the Network Model

Digital Equipment Corporation DBMS-10
Digital Equipment Corporation DBMS-20
Digital Equipment Corporation VAX DBMS
Honeywell IDS (Integrated Data Store)
IDMS (Integrated Database Management System)
RDM Embedded
RDM Server
TurboIMAGE
Univac DMS-1100

History

In 1969, the Conference on Data Systems Languages (CODASYL) established the first specification of the network database model. This was followed by a second publication in 1971, which became the basis for most implementations. Subsequent work continued into the early 1980s, culminating in an ISO specification, but this had little influence on products.

Relational Database Model

A relational database matches data by using common characteristics found within the data set. The resulting groups of data are organized and are much easier for many people to understand.

 


For example, a data set containing all the real-estate transactions in a town can be grouped by the year the transaction occurred; or it can be grouped by the sale price of the transaction; or it can be grouped by the buyer's last name; and so on. Such a grouping uses the relational model (a technical term for this is schema). Hence, such a database is called a "relational database." The software used to do this grouping is called a relational database management system (RDBMS). The term "relational database" often refers to this type of software. Relational databases are currently the predominant choice in storing financial records, medical records, manufacturing and logistical information, personnel data and much more.

Its central idea was to describe a database as a collection of predicates over a finite set of predicate variables, describing constraints on the possible values and combinations of values. The content of the database at any given time is a finite (logical) model of the database, i.e. a set of relations, one per predicate variable, such that all predicates are satisfied. A request for information from the database (a database query) is also a predicate.

 


In the relational model, related records are linked together with a "key".

The purpose of the relational model is to provide a declarative method for specifying data and queries: we directly state what information the database contains and what information we want from it, and let the database management system software take care of describing data structures for storing the data and retrieval procedures for getting queries answered.

IBM's original implementation of Codd's ideas was System R. There have been several commercial and open source products based on Codd's ideas, including IBM's DB2, Oracle Database, Microsoft SQL Server, PostgreSQL, MySQL, and many others. Most of these use the SQL data definition and query language. A table in an SQL database schema corresponds to a predicate variable; the contents of a table to a relation; key constraints, other constraints, and SQL queries correspond to predicates. However, it must be noted that SQL databases, including DB2, deviate from the relational model in many details; Codd fiercely argued against deviations that compromise the original principles.

Alternatives to the relational model

Other models are the hierarchical model and network model. Some systems using these older architectures are still in use today in data centers with high data volume needs or where existing systems are so complex and abstract it would be cost prohibitive to migrate to systems employing the relational model; also of note are newer object-oriented databases.


A recent development is the Object-Relation type-Object model, which is based on the assumption that any fact can be expressed in the form of one or more binary relationships. The model is used in Object Role Modeling (ORM), RDF/Notation 3 (N3) and in Gellish English.

The relational model was the first database model to be described in formal mathematical terms. Hierarchical and network databases existed before relational databases, but their specifications were relatively informal. After the relational model was defined, there were many attempts to compare and contrast the different models, and this led to the emergence of more rigorous descriptions of the earlier models; though the procedural nature of the data manipulation interfaces for hierarchical and network databases limited the scope for formalization.

Implementation

There have been several attempts to produce a true implementation of the relational database model as originally defined by Codd and explained by Date, Darwen and others, but none have been popular successes so far. Rel is one of the more recent attempts to do this.

History

The relational model was invented by E.F. (Ted) Codd as a general model of data, and subsequently maintained and developed by Chris Date and Hugh Darwen among others. In The Third Manifesto (first published in 1995) Date and Darwen show how the relational model can accommodate certain desired object-oriented features.

Controversies

Codd himself, some years after publication of his 1970 model, proposed a three-valued logic (True, False, Missing or NULL) version of it to deal with missing information, and in his The Relational Model for Database Management Version 2 (1990) he went a step further with a four-valued logic (True, False, Missing but Applicable, Missing but Inapplicable) version. But these have never been implemented, presumably because of attending complexity. SQL's NULL construct was intended to be part of a three-valued logic system, but fell short of that due to logical errors in the standard and in its implementations.

Terminology

The term relational database was originally defined and coined by Edgar Codd at IBM Almaden Research Center in 1970.

 


Relational database theory uses a set of mathematical terms, which are roughly equivalent to SQL database terminology. The table below summarizes some of the most important relational database terms and their SQL database equivalents.

Relational term        SQL equivalent
relation, base relvar  table
derived relvar         view, query result, result set
tuple                  row
attribute              column

Relational Model Topics

The model  


The fundamental assumption of the relational model is that all data is represented as mathematical n-ary relations, an n-ary relation being a subset of the Cartesian product of n domains. In the mathematical model, reasoning about such data is done in two-valued predicate logic, meaning there are two possible evaluations for each proposition: either true or false (and in particular no third value such as unknown, or not applicable, either of which are often associated with the concept of NULL). Some think two-valued logic is an important part of the relational model, while others think a system that uses a form of three-valued logic can still be considered relational. Data are operated upon by means of a relational calculus or relational algebra, these being equivalent in expressive power.

The relational model of data permits the database designer to create a consistent, logical representation of information. Consistency is achieved by including declared constraints in the database design, which is usually referred to as the logical schema. The theory includes a process of database normalization whereby a design with certain desirable properties can be selected from a set of logically equivalent alternatives. The access plans and other implementation and operation details are handled by the DBMS engine, and are not reflected in the logical model. This contrasts with common practice for SQL DBMSs in which performance tuning often requires changes to the logical model.

The basic relational building block is the domain or data type, usually abbreviated nowadays to type. A tuple is an ordered set of attribute values. An attribute is an ordered pair of attribute name and type name. An attribute value is a specific valid value for the type of the attribute. This can be either a scalar value or a more complex type.

A relation consists of a heading and a body. A heading is a set of attributes. A body (of an n-ary relation) is a set of n-tuples. The heading of the relation is also the heading of each of its tuples.

A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of unique, non-duplicated items, although some DBMSs impose an order to their data. In mathematics, a tuple has an order, and allows for duplication. E.F. Codd originally defined tuples using this mathematical definition. Later, it was one of E.F. Codd's great insights that using attribute names instead of an ordering would be so much more convenient (in general) in a computer language based on relations. This insight is still being used today. Though the concept has changed, the name "tuple" has not. An immediate and important consequence of this distinguishing feature is that in the relational model the Cartesian product becomes commutative.

A table is an accepted visual representation of a relation; a tuple is similar to the concept of row, but note that in the database language SQL the columns and the rows of a table are ordered.

A relvar is a named variable of some specific relation type, to which at all times some relation of that type is assigned, though the relation may contain zero tuples.

The basic principle of the relational model is the Information Principle: all information is represented by data values in relations. In accordance with this Principle, a relational database is a set of relvars and the result of every query is presented as a relation.


The consistency of a relational database is enforced, not by rules built into the applications that use it, but rather by constraints, declared as part of the logical schema and enforced by the DBMS for all applications. In general, constraints are expressed using relational comparison operators, of which just one, "is subset of" (⊆), is theoretically sufficient. In practice, several useful shorthands are expected to be available, of which the most important are candidate key (really, superkey) and foreign key constraints.
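Declared in SQL, the two most important shorthands might look like this (a sketch with invented table names):

    CREATE TABLE suppliers (
        supplier_id  INTEGER PRIMARY KEY          -- candidate key constraint
    );

    CREATE TABLE shipments (
        shipment_id  INTEGER PRIMARY KEY,         -- candidate key constraint
        supplier_id  INTEGER NOT NULL
                     REFERENCES suppliers (supplier_id)  -- foreign key constraint
    );

The foreign key is, in effect, the "is subset of" comparison: the set of supplier_id values appearing in shipments must be a subset of those appearing in suppliers.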

To fully appreciate the relational model of data it is essential to understand the intended interpretation of a relation.

The body of a relation is sometimes called its extension. This is because it is to be interpreted as a representation of the extension of some predicate, this being the set of true propositions that can be formed by replacing each free variable in that predicate by a name (a term that designates something).

There is a one-to-one correspondence between the free variables of the predicate and the attribute names of the relation heading. Each tuple of the relation body provides attribute values to instantiate the predicate by substituting each of its free variables. The result is a proposition that is deemed, on account of the appearance of the tuple in the relation body, to be true. Contrariwise, every tuple whose heading conforms to that of the relation but which does not appear in the body is deemed to be false. This assumption is known as the closed world assumption: it is often violated in practical databases, where the absence of a tuple might mean that the truth of the corresponding proposition is unknown. For example, the absence of the tuple ('John', 'Spanish') from a table of language skills cannot necessarily be taken as evidence that John does not speak Spanish. For a formal exposition of these ideas, see the section Set-theoretic Formulation, below.

Application to databases

A type as used in a typical relational database might be the set of integers, the set of character strings, the set of dates, or the two boolean values true and false, and so on. The corresponding type names for these types might be the strings "int", "char", "date", "boolean", etc. It is important to understand, though, that relational theory does not dictate what types are to be supported; indeed, nowadays provisions are expected to be available for user-defined types in addition to the built-in ones provided by the system.

Attribute is the term used in the theory for what is commonly referred to as a column. Similarly, table is commonly used in place of the theoretical term relation (though in SQL the term is by no means synonymous with relation). A table data structure is specified as a list of column definitions, each of which specifies a unique column name and the type of the values that are permitted for that column. An attribute value is the entry in a specific column and row, such as "John Doe" or "35".

A tuple is basically the same thing as a row, except in an SQL DBMS, where the column values in a row are ordered. (Tuples are not ordered; instead, each attribute value is identified solely by the attribute name and never by its ordinal position within the tuple.) An attribute name might be "name" or "age".


A relation is a table structure definition (a set of column definitions) along with the data appearing in that structure. The structure definition is the heading and the data appearing in it is the body, a set of rows. A database relvar (relation variable) is commonly known as a base table. The heading of its assigned value at any time is as specified in the table declaration, and its body is that most recently assigned to it by invoking some update operator (typically, INSERT, UPDATE, or DELETE). The heading and body of the table resulting from evaluation of some query are determined by the definitions of the operators used in the expression of that query. (Note that in SQL the heading is not always a set of column definitions as described above, because it is possible for a column to have no name and for two or more columns to have the same name. Also, the body is not always a set of rows because in SQL it is possible for the same row to appear more than once in the same body.)

SQL and the relational model

SQL, initially pushed as the standard language for relational databases, deviates from the relational model in several places. The current ISO SQL standard doesn't mention the relational model or use relational terms or concepts. However, it is possible to create a database conforming to the relational model using SQL if one does not use certain SQL features. The following deviations from the relational model have been noted in SQL. Note that few database servers implement the entire SQL standard and in particular do not allow some of these deviations. Whereas NULL is ubiquitous, for example, allowing duplicate column names within a table or anonymous columns is uncommon.

Duplicate rows

The same row can appear more than once in an SQL table. The same tuple cannot appear more than once in a relation.

Anonymous columns

A column in an SQL table can be unnamed and thus unable to be referenced in expressions. The relational model requires every attribute to be named and referenceable.

Duplicate column names

Two or more columns of the same SQL table can have the same name and therefore cannot be referenced, on account of the obvious ambiguity. The relational model requires every attribute to be referenceable.
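Hedged sketches of the first three deviations, using a hypothetical table t (most, but not all, SQL implementations accept each of these):

CREATE TABLE t (x INTEGER);

-- Duplicate rows: after these inserts the row (1) appears twice in t.
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (1);

-- Anonymous column: the derived column x + 1 has no name unless aliased.
SELECT x + 1 FROM t;

-- Duplicate column names: both result columns are called x.
SELECT a.x, b.x FROM t AS a, t AS b;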

Column order significance

 


The order of columns in an SQL table is defined and significant, one consequence being that SQL's implementations of Cartesian product and union are both noncommutative. The relational model requires there to be no significance to any ordering of the attributes of a relation.

Views without CHECK OPTION

Updates to a view defined without CHECK OPTION can be accepted, but the resulting update to the database does not necessarily have the expressed effect on its target. For example, an invocation of INSERT can be accepted but the inserted rows might not all appear in the view, or an invocation of UPDATE can result in rows disappearing from the view. The relational model requires updates to a view to have the same effect as if the view were a base relvar.
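A minimal SQL sketch of the problem and its remedy (table and view names are hypothetical):

CREATE TABLE Car (id INTEGER PRIMARY KEY, doors INTEGER);

CREATE VIEW TwoDoorCar AS
  SELECT id, doors FROM Car WHERE doors = 2;

-- Accepted without CHECK OPTION, yet the inserted row never appears
-- in the view it was inserted through:
INSERT INTO TwoDoorCar VALUES (1, 4);

-- Declared WITH CHECK OPTION, the DBMS rejects such updates instead:
CREATE VIEW TwoDoorCarChecked AS
  SELECT id, doors FROM Car WHERE doors = 2
  WITH CHECK OPTION;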

Columnless tables unrecognized

SQL requires every table to have at least one column, but there are two relations of degree zero (of cardinality one and zero) and they are needed to represent extensions of predicates that contain no free variables.

NULL

This special mark can appear instead of a value wherever a value can appear in SQL, in particular in place of a column value in some row. The deviation from the relational model arises from the fact that the implementation of this ad hoc concept in SQL involves the use of three-valued logic, under which the comparison of NULL with itself does not yield true but instead yields the third truth value, unknown; similarly the comparison of NULL with something other than itself does not yield false but instead yields unknown. It is because of this behaviour in comparisons that NULL is described as a mark rather than a value. The relational model depends on the law of excluded middle, under which anything that is not true is false and anything that is not false is true; it also requires every tuple in a relation body to have a value for every attribute of that relation. This particular deviation is disputed by some, if only because E.F. Codd himself eventually advocated the use of special marks and a 4-valued logic, but this was based on his observation that there are two distinct reasons why one might want to use a special mark in place of a value, which led opponents of the use of such logics to discover more distinct reasons: at least 19 have been noted, which would require a 21-valued logic. SQL itself uses NULL for several purposes other than to represent "value unknown". For example, the sum of the empty set is NULL, meaning zero; the average of the empty set is NULL, meaning undefined; and NULL appearing in the result of a LEFT JOIN can mean "no value because there is no matching row in the right-hand operand".
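A short sketch of this three-valued behaviour (the table r and column x are hypothetical):

-- NULL = NULL evaluates to unknown, not true, so this returns no rows
-- even if r contains rows in which x is NULL:
SELECT * FROM r WHERE x = NULL;

-- The IS NULL predicate must be used instead:
SELECT * FROM r WHERE x IS NULL;

-- The sum of the empty set is NULL rather than zero:
SELECT SUM(x) FROM r WHERE 1 = 0;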

Relational operations

Users (or programs) request data from a relational database by sending it a query that is written in a special language, usually a dialect of SQL. Although SQL was originally intended for end-users, it is much more common for SQL queries to be embedded into software that provides an easier user interface. Many web sites, such as Wikipedia, perform SQL queries when generating pages. In response to a query, the database returns a result set, which is just a list of rows containing the answers. The simplest query is just to return all the rows from a table, but more often, the rows are filtered in some way to return just the answer wanted. Often, data from multiple tables are combined into one, by doing a join. Conceptually, this is done by taking all possible combinations of rows (the Cartesian product), and then filtering out everything except the answer. In practice, relational database management systems rewrite ("optimize") queries to perform faster, using a variety of techniques. There are a number of relational operations in addition to join. These include project (the process of eliminating some of the columns), restrict (the process of eliminating some of the rows), union (a way of combining two tables with similar structures), difference (which lists the rows in one table that are not found in the other), intersect (which lists the rows found in both tables), and product (mentioned above, which combines each row of one table with each row of the other). Depending on which other sources you consult, there are a number of other operators, many of which can be defined in terms of those listed above. These include semi-join, outer operators such as outer join and outer union, and various forms of division. Then there are operators to rename columns, and summarizing or aggregating operators, and if you permit relation values as attributes (RVA, relation-valued attribute), then operators such as group and ungroup. The SELECT statement in SQL serves to handle all of these except for the group and ungroup operators. The flexibility of relational databases allows programmers to write queries that were not anticipated by the database designers. As a result, relational databases can be used by multiple applications in ways the original designers did not foresee, which is especially important for databases that might be used for a long time (perhaps several decades). This has made the idea and implementation of relational databases very popular with businesses.
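A hedged sketch of how the SELECT statement expresses several of these operators (tables s and t are hypothetical and assumed to have similar structures where required):

-- Restrict (eliminate rows) and project (eliminate columns):
SELECT a, b FROM s WHERE a > 10;

-- Join: conceptually the Cartesian product of s and t filtered by a condition:
SELECT * FROM s JOIN t ON s.a = t.a;

-- Union, difference and intersection of two similarly structured tables:
SELECT a, b FROM s UNION     SELECT a, b FROM t;
SELECT a, b FROM s EXCEPT    SELECT a, b FROM t;
SELECT a, b FROM s INTERSECT SELECT a, b FROM t;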

Examples

Database

An idealized, very simple example of a description of some relvars (relation variables) and their attributes:

 

Customer (Customer ID, Tax ID, Name, Address, City, State, Zip, Phone)
Order (Order No, Customer ID, Invoice No, Date Placed, Date Promised, Terms, Status)
Order Line (Order No, Order Line No, Product Code, Qty)
Invoice (Invoice No, Customer ID, Order No, Date, Status)
Invoice Line (Invoice No, Invoice Line No, Product Code, Qty Shipped)
Product (Product Code, Product Description)

 


In this design we have six relvars: Customer, Order, Order Line, Invoice, Invoice Line and Product. The bold, underlined attributes are candidate keys. The non-bold, underlined attributes are foreign keys. Usually one candidate key is arbitrarily chosen to be called the primary key and used in preference over the other candidate keys, which are then called alternate keys. A candidate key is a unique identifier enforcing that no tuple will be duplicated; duplication would make the relation into something else, namely a bag, by violating the basic definition of a set. Both foreign keys and superkeys (which include candidate keys) can be composite, that is, can be composed of several attributes. Below is a tabular depiction of a relation of our example Customer relvar; a relation can be thought of as a value that can be attributed to a relvar.
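A hedged sketch of how two of these relvars and their key constraints might be declared in SQL (the column types are assumptions, and Order is renamed Orders here because ORDER is a reserved word):

CREATE TABLE Customer (
  CustomerID BIGINT PRIMARY KEY,      -- candidate key chosen as primary key
  TaxID      VARCHAR(20) UNIQUE,      -- an alternate key
  Name       VARCHAR(50),
  Address    VARCHAR(100)
);

CREATE TABLE Orders (
  OrderNo    BIGINT PRIMARY KEY,
  CustomerID BIGINT REFERENCES Customer (CustomerID)  -- foreign key
);

With these constraints in place, the DBMS itself rejects a duplicate Customer ID, or an Order whose Customer ID matches no customer.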

Customer relation

Customer ID   Tax ID        Name         Address            [More fields...]
======================================================================
1234567890    555-5512222   Munmun       323 Broadway       ...
2223344556    555-5523232   Wile E.      1200 Main Street   ...
3334445563    555-5533323   Ekta         871 1st Street     ...
4232342432    555-5325523   E. F. Codd   123 It Way         ...

If we attempted to insert a new customer with the ID 1234567890, this would violate the design of the relvar since Customer ID is a primary key and we already have a customer 1234567890. The DBMS must reject a transaction such as this that would render the database inconsistent by a violation of an integrity constraint. Foreign keys are integrity constraints enforcing that the value of the attribute set is drawn from a candidate key in another relation. For example, in the Order relation the attribute Customer ID is a foreign key. A join is the operation that draws on information from several relations at once. By joining relvars from the example above we could query the database for all of the Customers, Orders, and Invoices. If we only wanted the tuples for a specific customer, we would specify this using a restriction condition.

 


If we wanted to retrieve all of the Orders for Customer 1234567890, we could query the database to return every row in the Order table with Customer ID 1234567890 and join the Order table to the Order Line table based on Order No. There is a flaw in our database design above. The Invoice relvar contains an Order No attribute. So, each tuple in the Invoice relvar will have one Order No, which implies that there is precisely one Order for each Invoice. But in reality an invoice can be created against many orders, or indeed for no particular order. Additionally the Order relvar contains an Invoice No attribute, implying that each Order has a corresponding Invoice. But again this is not always true in the real world. An order is sometimes paid through several invoices, and sometimes paid without an invoice. In other words, there can be many Invoices per Order and many Orders per Invoice. This is a many-to-many relationship between Order and Invoice (also called a non-specific relationship). To represent this relationship in the database a new relvar should be introduced whose role is to specify the correspondence between Orders and Invoices:

OrderInvoice (Order No, Invoice No)

Now, the Order relvar has a one-to-many relationship to the OrderInvoice table, as does the Invoice relvar. If we want to retrieve every Invoice for a particular Order, we can query for all orders where Order No in the Order relation equals the Order No in OrderInvoice, and where Invoice No in OrderInvoice equals the Invoice No in the Invoice relation.
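Assuming the hypothetical declarations sketched earlier, with OrderInvoice and Invoice declared analogously, the query might read:

SELECT i.*
FROM Orders o
JOIN OrderInvoice oi ON oi.OrderNo  = o.OrderNo
JOIN Invoice i       ON i.InvoiceNo = oi.InvoiceNo
WHERE o.OrderNo = 1001;   -- 1001 is an illustrative order number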

Set-Theoretic Formulation

Basic notions in the relational model are relation names and attribute names. We will represent these as strings such as "Person" and "name" and we will usually use the variables r, s, t and a, b, c to range over them. Another basic notion is the set of atomic values that contains values such as numbers and strings. Our first definition concerns the notion of tuple, which formalizes the notion of row or record in a table:

Tuple

A tuple is a partial function from attribute names to atomic values.

Header

A header is a finite set of attribute names.

Projection

The projection of a tuple $t$ on a finite set of attributes $A$ is $t[A] = \{ (a, v) : (a, v) \in t,\; a \in A \}$.

The next definition defines relation, which formalizes the contents of a table as it is defined in the relational model.


Relation

A relation is a tuple $(H, B)$ with $H$, the header, and $B$, the body, a set of tuples that all have the domain $H$. Such a relation closely corresponds to what is usually called the extension of a predicate in first-order logic, except that here we identify the places in the predicate with attribute names. Usually in the relational model a database schema is said to consist of a set of relation names, the headers that are associated with these names, and the constraints that should hold for every instance of the database schema.

Relation universe

A relation universe $U$ over a header $H$ is a non-empty set of relations with header $H$.

Relation schema

A relation schema $(H, C)$ consists of a header $H$ and a predicate $C(R)$ that is defined for all relations $R$ with header $H$. A relation satisfies a relation schema $(H, C)$ if it has header $H$ and satisfies $C$.

Key constraints and functional dependencies

One of the simplest and most important types of relation constraints is the key constraint. It tells us that in every instance of a certain relational schema the tuples can be identified by their values for certain attributes.

Superkey

A superkey is written as a finite set of attribute names. A superkey $K$ holds in a relation $(H, B)$ if $K \subseteq H$ and there exist no two distinct tuples $t_1, t_2 \in B$ such that $t_1[K] = t_2[K]$. A superkey holds in a relation universe $U$ if it holds in all relations in $U$.

Theorem: A superkey $K$ holds in a relation universe $U$ over $H$ if and only if $K \subseteq H$ and $K \rightarrow H$ holds in $U$.

Candidate key

A superkey $K$ holds as a candidate key for a relation universe $U$ if it holds as a superkey for $U$ and there is no proper subset of $K$ that also holds as a superkey for $U$.

Functional dependency


A functional dependency (FD for short) is written as $X \rightarrow Y$ for $X, Y$ finite sets of attribute names. A functional dependency $X \rightarrow Y$ holds in a relation $(H, B)$ if $X, Y \subseteq H$ and for all tuples $t_1, t_2 \in B$: $t_1[X] = t_2[X]$ implies $t_1[Y] = t_2[Y]$. A functional dependency $X \rightarrow Y$ holds in a relation universe $U$ if it holds in all relations in $U$.

Trivial functional dependency

A functional dependency $X \rightarrow Y$ is trivial under a header $H$ if it holds in all relation universes over $H$.

Theorem: An FD $X \rightarrow Y$ is trivial under a header $H$ if and only if $Y \subseteq X$.

Closure

Armstrong's axioms: The closure of a set of FDs $S$ under a header $H$, written as $S^+$, is the smallest superset of $S$ such that:

$Y \subseteq X \subseteq H \;\Rightarrow\; X \rightarrow Y \in S^+$ (reflexivity)
$X \rightarrow Y \in S^+ \wedge Y \rightarrow Z \in S^+ \;\Rightarrow\; X \rightarrow Z \in S^+$ (transitivity)
$X \rightarrow Y \in S^+ \wedge Z \subseteq H \;\Rightarrow\; (X \cup Z) \rightarrow (Y \cup Z) \in S^+$ (augmentation)

Theorem: Armstrong's axioms are sound and complete; given a header $H$ and a set $S$ of FDs that only contain subsets of $H$, $X \rightarrow Y \in S^+$ if and only if $X \rightarrow Y$ holds in all relation universes over $H$ in which all FDs in $S$ hold.

The completion of a finite set of attributes $X$ under a finite set of FDs $S$, written as $X^+$, is the smallest superset of $X$ such that:

$Y \rightarrow Z \in S \wedge Y \subseteq X^+ \;\Rightarrow\; Z \subseteq X^+$

The completion of an attribute set can be used to compute whether a certain dependency is in the closure of a set of FDs.


Theorem: Given a set $S$ of FDs, $X \rightarrow Y \in S^+$ if and only if $Y \subseteq X^+$.
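A small worked example of the completion: let $H = \{A, B, C\}$ and $S = \{A \rightarrow B,\; B \rightarrow C\}$. Starting from $X = \{A\}$: the FD $A \rightarrow B$ adds $B$, and then $B \rightarrow C$ adds $C$, so $\{A\}^+ = \{A, B, C\}$. Since $C \in \{A\}^+$, the theorem gives $A \rightarrow C \in S^+$; and since $\{A\}^+ = H$, the set $\{A\}$ is a superkey in every relation universe over $H$ in which $S$ holds.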

Irreducible cover 

An irreducible cover of a set $S$ of FDs is a set $T$ of FDs such that:

$S^+ = T^+$
there exists no $U \subset T$ such that $S^+ = U^+$
for every $X \rightarrow Y \in T$, the right-hand side $Y$ is a singleton set, and
for every $X \rightarrow Y \in T$ there is no proper subset $Z \subset X$ such that $Z \rightarrow Y \in S^+$.

Object-Oriented Database Model

An object database (also object-oriented database) is a database model in which information is represented in the form of objects as used in object-oriented programming. Object databases are a niche field within the broader DBMS market, which is dominated by relational database management systems (RDBMS). Object databases have been considered since the early 1980s and 1990s, but they have made little impact on mainstream commercial data processing, though there is some usage in specialized areas.

 


When database capabilities are combined with object-oriented (OO) programming language capabilities, the result is an object database management system (ODBMS). Today's trend in programming languages is to utilize objects, thereby making OODBMS ideal for OO programmers because they can develop products, store them as objects, and replicate or modify existing objects to make new objects within the OODBMS. Information today includes not only data but video, audio, graphs, and photos, which are considered complex data types. Relational DBMS aren't natively capable of supporting these complex data types. By being integrated with the programming language, the programmer can maintain consistency within one environment because both the OODBMS and the programming language will use the same model of representation. Relational DBMS projects using complex data types would have to be divided into two separate tasks: the database model and the application. As the usage of web-based technology increases with the implementation of intranets and extranets, companies have a vested interest in OODBMS to display their complex data. Using a DBMS that has been specifically designed to store data as objects gives an advantage to those companies that are geared towards multimedia presentation or organizations that utilize computer-aided design (CAD). Some object-oriented databases are designed to work well with object-oriented programming languages such as Ruby, Python, Perl, Java, C#, Visual Basic .NET, C++, Objective-C and Smalltalk; others have their own programming languages. ODBMSs use exactly the same model as object-oriented programming languages.

History

Object database management systems grew out of research during the early to mid-1970s into having intrinsic database management support for graph-structured objects. The term "object-oriented database system" first appeared around 1985. Notable research projects included Encore-Ob/Server (Brown University), EXODUS (University of Wisconsin – Madison), IRIS (Hewlett-Packard), ODE (Bell Labs), ORION (Microelectronics and Computer Technology Corporation or MCC), Vodak (GMD-IPSI), and Zeitgeist (Texas Instruments). The ORION project had more


published papers than any of the other efforts. Won Kim of MCC compiled the best of those papers in a book published by The MIT Press. Early commercial products included Gemstone (Servio Logic, name changed to GemStone Systems), Gbase (Graphael), and Vbase (Ontologic). The early to mid-1990s saw additional commercial products enter the market. These included ITASCA (Itasca Systems), Jasmine (Fujitsu, marketed by Computer Associates), Matisse (Matisse Software), Objectivity/DB (Objectivity, Inc.), ObjectStore (Progress Software, acquired from eXcelon, which was originally Object Design), ONTOS (Ontos, Inc., name changed from Ontologic), O2 (O2 Technology, merged with several companies, acquired by Informix, which was in turn acquired by IBM), POET (now FastObjects from Versant, which acquired Poet Software), Versant Object Database (Versant Corporation), VOSS (Logic Arts) and JADE (Jade Software Corporation). Some of these products remain on the market and have been joined by new open source and commercial products such as InterSystems CACHÉ (see the product listings below). Object database management systems added the concept of persistence to object programming languages. The early commercial products were integrated with various languages: GemStone (Smalltalk), Gbase (LISP), Vbase (COP) and VOSS (Virtual Object Storage System for Smalltalk). For much of the 1990s, C++ dominated the commercial object database management market. Vendors added Java in the late 1990s and, more recently, C#. Starting in 2004, object databases have seen a second growth period when open source object databases emerged that were widely affordable and easy to use, because they are entirely written in OOP languages like Smalltalk, Java or C#, such as db4o (db4objects), DTS/S1 from Obsidian Dynamics and Perst (McObject), available under dual open source and commercial licensing.

Time Line

1985 – Term Object Database first introduced
1988 – Versant Corporation started (as Object Sciences Corp); Objectivity, Inc. founded
Early 1990's – Gemstone (Smalltalk), GBase (LISP), VBase (O2 – ONTOS – INFORMIX), Objectivity/DB launched
Mid 1990's – Versant Object Database, ObjectStore, Poet, Jade, Matisse
2000's – Cache', db4o project started by Carl Rosenberger, ObjectDB for Java
2001 – IBM acquires Informix (Illustra), integrates with DB2; db4o shipped to first pilot customer
2004 – db4o's commercial launch as db4objects, Inc.
2008 – db4o acquired by Versant Corporation

Adoption of object databases

Object databases based on persistent programming acquired a niche in application areas such as engineering and spatial databases, telecommunications, and scientific areas such as high energy physics and molecular biology. They have made little impact on mainstream commercial data processing, though there is some usage in specialized areas of financial services.[6] It is also worth noting that object databases held the record for the world's largest database (being the first to hold over 1000 terabytes, at Stanford Linear Accelerator Center)[7] and the highest ingest rate ever recorded for a commercial database, at over one terabyte per hour. Another group of object databases focuses on embedded use in devices, packaged software, and real-time systems.

Most object databases also offer some kind of query language, allowing objects to be found using a declarative programming approach. It is in the area of object query languages, and the integration of the query and navigational interfaces, that the biggest differences between products are found. An attempt at standardization was made by the ODMG with the Object Query Language, OQL. Access to data can be faster because joins are often not needed (as in a tabular implementation of a relational database). This is because an object can be retrieved directly without a search, by following pointers. (It could, however, be argued that "joining" is a higher-level abstraction of pointer following.) Another area of variation between products is in the way that the schema of a database is defined. A general characteristic, however, is that the programming language and the database schema use the same type definitions.

 


Multimedia applications are facilitated because the class methods associated with the data are responsible for its correct interpretation. Many object databases, for example VOSS, offer support for versioning. An object can be viewed as the set of all its versions. Also, object versions can be treated as objects in their own right. Some object databases also provide systematic support for triggers and constraints, which are the basis of active databases. The efficiency of such a database is also greatly improved in areas which demand massive amounts of data about one item. For example, a banking institution could get the user's account information and provide them efficiently with extensive information such as transactions and account entries. The Big O notation for such a database paradigm drops from O(n) to O(1), greatly increasing efficiency in these specific cases.

The Object Data Management Group (ODMG) was a consortium of object database and object-relational mapping vendors, members of the academic community, and interested parties. Its goal was to create a set of specifications that would allow for portable applications that store objects in database management systems. It published several versions of its specification. The last release was ODMG 3.0. By 2001, most of the major object database and object-relational mapping vendors claimed conformance to the ODMG Java Language Binding. Compliance to the other components of the specification was mixed. In 2001, the ODMG Java Language Binding was submitted to the Java Community Process as a basis for the Java Data Objects specification. The ODMG member companies then decided to concentrate their efforts on the Java Data Objects specification. As a result, the ODMG disbanded in 2001. Many object database ideas were also absorbed into SQL:1999 and have been implemented in varying degrees in object-relational database products. In 2005 Cook, Rai, and Rosenberger proposed to drop all standardization efforts to introduce additional object-oriented query APIs and instead use the OO programming language itself, i.e., Java and .NET, to express queries. As a result, Native Queries emerged. Similarly, Microsoft announced Language Integrated Query (LINQ) and DLINQ, an implementation of LINQ, in September 2005, to provide close, language-integrated database query capabilities with its programming languages C# and VB.NET 9. In February 2006, the Object Management Group (OMG) announced that they had been granted the right to develop new specifications based on the ODMG 3.0 specification and the formation of the Object Database Technology Working Group (ODBT WG). The ODBT WG plans to create a set of standards that incorporates advances in object database technology (e.g., replication), data management (e.g., spatial indexing), and data formats (e.g., XML) and to include new features into these standards that support domains where object databases are being adopted (e.g., real-time systems). In January 2007 the World Wide Web Consortium gave final recommendation status to the XQuery language. XQuery has enabled a new class of applications that manage hierarchical data built around the XRX web application architecture and that also provide many of the advantages of object databases. In addition, XRX applications benefit by transporting XML directly to client applications such as XForms without changing data structures.

Advantages & Disadvantages

The main benefit of creating a database with objects as data is speed. OODBMS are faster than relational DBMS because data isn't stored in relational rows and columns but as objects. Objects have a many-to-many relationship and are accessed by the use of pointers. Pointers are linked to objects to establish relationships. Another benefit of OODBMS is that it can be programmed with small procedural differences without affecting the entire system. This is most helpful for those organizations that have data relationships that aren't entirely clear or need to change these relations to satisfy the new business requirements. Benchmarks between OODBMSs and RDBMSs have shown that an OODBMS can be clearly superior for certain kinds of tasks. The main reason for this is that many operations are performed using navigational rather than declarative interfaces, and navigational access to data is usually implemented very efficiently by following pointers. Compared to relational databases, another major advantage of OODBMSs is that they do not need any object-relational mapping layer and object marshaling to map the application object model to the database object model. In RDBMS, this mapping is also a source of the impedance mismatch, which does not occur when using OODBMS. Avoiding this layer also improves performance and saves effort for implementation and maintenance. Critics of navigational database-based technologies like ODBMS suggest that pointer-based techniques are optimized for very specific "search routes" or viewpoints; for general-purpose queries on the same information, pointer-based techniques will tend to be slower and more difficult to formulate than relational. Thus, navigation appears to simplify specific known uses at the expense of general, unforeseen, and varied future uses. However, with suitable language support, direct object references may be maintained in addition to normalised, indexed aggregations, allowing both kinds of access; furthermore, a persistent language may index aggregations on whatever its content elements return from a call to some arbitrary object access method, rather than only on attribute value, which allows a query to 'drill down' into complex data structures. Other things that work against ODBMS seem to be the lack of interoperability with a great number of tools/features that are taken for granted in the SQL world, including but not limited to industry standard connectivity, reporting tools, OLAP tools, and backup and recovery standards. Additionally, object databases lack a formal mathematical foundation, unlike the relational model, and this in turn leads to weaknesses in their query support. However, this objection is offset by the fact that some ODBMSs fully support SQL in addition to navigational access, e.g. Objectivity/SQL++, Matisse, and InterSystems CACHÉ. Effective use may require compromises to keep both paradigms in sync. In fact there is an intrinsic tension between the notion of encapsulation, which hides data and makes it available only through a published set of interface methods, and the assumption underlying much database technology, which is that data should be accessible to queries based on data content rather than predefined access paths. Database-centric thinking tends to view the world through a declarative and attribute-driven viewpoint, while OOP tends to view the world through a


behavioral viewpoint, maintaining entity-identity independently of changing attributes. This is one of the many impedance mismatch issues surrounding OOP and databases. Although some commentators have written off object database technology as a failure, the essential arguments in its favor remain valid, and attempts to integrate database functionality more closely into object programming languages continue in both the research and the industrial communities.

Dimensional Model

The dimensional model is a specialized adaptation of the relational model used to represent data in data warehouses in a way that data can be easily summarized using OLAP queries. In the dimensional model, a database consists of a single large table of facts that are described using dimensions and measures. A dimension provides the context of a fact (such as who participated, when and where it happened, and its type) and is used in queries to group related facts together. Dimensions tend to be discrete and are often hierarchical; for example, the location might include the building, state, and country. A measure is a quantity describing the fact, such as revenue. It is important that measures can be meaningfully aggregated; for example, the revenue from different locations can be added together. In an OLAP query, dimensions are chosen and the facts are grouped and added together to create a summary. The dimensional model is often implemented on top of the relational model using a star schema, consisting of one table containing the facts and surrounding tables containing the dimensions. Particularly complicated dimensions might be represented using multiple tables, resulting in a snowflake schema. A data warehouse can contain multiple star schemas that share dimension tables, allowing them to be used together. Coming up with a standard set of dimensions is an important part of dimensional modeling.
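A minimal hedged sketch of a star schema and an OLAP-style summary query (all names are hypothetical):

-- Dimension tables surround a single fact table.
CREATE TABLE DimLocation (
  LocationID INTEGER PRIMARY KEY,
  Building   VARCHAR(50),
  State      VARCHAR(30),
  Country    VARCHAR(30)
);

CREATE TABLE DimDate (
  DateID INTEGER PRIMARY KEY,
  Day    DATE,
  Month  INTEGER,
  Year   INTEGER
);

CREATE TABLE FactSales (
  LocationID INTEGER REFERENCES DimLocation (LocationID),
  DateID     INTEGER REFERENCES DimDate (DateID),
  Revenue    DECIMAL(12,2)   -- the measure
);

-- Choose dimensions, group the facts, and aggregate the measure:
SELECT l.Country, d.Year, SUM(f.Revenue) AS TotalRevenue
FROM FactSales f
JOIN DimLocation l ON l.LocationID = f.LocationID
JOIN DimDate d     ON d.DateID     = f.DateID
GROUP BY l.Country, d.Year;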

Database Normalization

In relational database design, normalization is the process of organizing data to minimize redundancy, or the process of decomposing relations with anomalies to produce smaller, well-structured relations. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships. Edgar F. Codd, the inventor of the relational model, introduced the concept of normalization and what we now know as the First Normal Form (1NF) in 1970. Codd went on to define the Second Normal Form (2NF) and Third Normal Form (3NF) in 1971, and Codd and Raymond F. Boyce defined the Boyce-Codd Normal Form (BCNF) in 1974. Higher normal forms were defined by other theorists in subsequent years, the most recent being the Sixth Normal Form (6NF), introduced by Chris Date, Hugh Darwen, and Nikos Lorentzos in 2002. Informally, a relational database table (the computerized representation of a relation) is often described as "normalized" if it is in the Third Normal Form. Most 3NF tables are free of insertion, update, and deletion anomalies, i.e. in most cases 3NF tables adhere to BCNF, 4NF, and 5NF (but typically not 6NF). A standard piece of database design guidance is that the designer should create a fully normalized design; selective denormalization can subsequently be performed for performance reasons. However, some modeling disciplines, such as the dimensional modeling approach to data warehouse design, explicitly recommend non-normalized designs, i.e. designs that in large part do not adhere to 3NF.

Objectives of Normalization

A basic objective of the first normal form defined by Codd in 1970 was to permit data to be queried and manipulated using a "universal data sub-language" grounded in first-order logic. (SQL is an example of such a data sub-language, albeit one that Codd regarded as seriously flawed.) The objectives of normalization beyond 1NF (First Normal Form) were stated as follows by Codd:

1. To free the collection of relations from undesirable insertion, update and deletion dependencies;
2. To reduce the need for restructuring the collection of relations as new types of data are introduced, and thus increase the life span of application programs;
3. To make the relational model more informative to users;
4. To make the collection of relations neutral to the query statistics, where these statistics are liable to change as time goes by.
— E.F. Codd, "Further Normalization of the Data Base Relational Model"

The sections below give details of each of these objectives.

Free the database of modification anomalies  


An update anomaly. Employee 519 is shown as having different addresses on different records.

An insertion anomaly. Until the new faculty member, Dr. Newsome, is assigned to teach at least one course, his details cannot be recorded.

A deletion anomaly. All information about Dr. Giddens is lost when he temporarily ceases to be assigned to any courses.

When an attempt is made to modify (update, insert into, or delete from) a table, undesired side-effects may follow. Not all tables can suffer from these side-effects; rather, the side-effects can only arise in tables that have not been sufficiently normalized. An insufficiently normalized table might have one or more of the following characteristics: The same information can be expressed on multiple rows; therefore updates to the table may result in logical inconsistencies. For example, each record in an "Employees' Skills" table might contain an Employee ID, Employee Address, and Skill; thus a change of address for a particular employee will potentially need to be applied to multiple records (one for each of his skills). If the update is not carried through successfully — if, that is, the employee's address is updated on some records but not others — then the table is left in an inconsistent state. Specifically, the table provides conflicting answers to the question of what this particular employee's address is. This phenomenon is known as an update anomaly.
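A hedged sketch of this update anomaly (the table and data are hypothetical):

-- The address is repeated once per skill, so it must be changed
-- on every matching row.
CREATE TABLE EmployeeSkills (
  EmployeeID INTEGER,
  Address    VARCHAR(100),
  Skill      VARCHAR(50)
);

-- An application that updates these rows one at a time and fails
-- partway through leaves employee 519 with conflicting addresses:
UPDATE EmployeeSkills
SET Address = '17 New Street'
WHERE EmployeeID = 519;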

 


There are circumstances in which certain facts cannot be recorded at all. For example, each record in a "Faculty and Their Courses" table might contain a Faculty ID, Faculty Name, Faculty Hire Date, and Course Code — thus we can record the details of any faculty member who teaches at least one course, but we cannot record the details of a newly hired faculty member who has not yet been assigned to teach any courses. This phenomenon is known as an insertion anomaly. There are circumstances in which the deletion of data representing certain facts necessitates the deletion of data representing completely different facts. The "Faculty and Their Courses" table described in the previous example suffers from this type of anomaly, for if a faculty member temporarily ceases to be assigned to any courses, we must delete the last of the records on which that faculty member appears, effectively also deleting the faculty member. This phenomenon is known as a deletion anomaly.

Minimize redesign when extending the database structure

When a fully normalized database structure is extended to allow it to accommodate new types of data, the pre-existing aspects of the database structure can remain largely or entirely unchanged. As a result, applications interacting with the database are minimally affected.

Make the data model more informative to users

Normalized tables, and the relationship between one normalized table and another, mirror real-world concepts and their interrelationships.

Avoid bias towards any particular pattern of querying

Normalized tables are suitable for general-purpose querying. This means any queries against these tables, including future queries whose details cannot be anticipated, are supported. In contrast, tables that are not normalized lend themselves to some types of queries, but not others. For example, consider an online bookseller whose customers maintain wishlists of books they'd like to have. For the obvious, anticipated query -- what books does this customer want? -- it's enough to store the customer's wishlist in the table as, say, a homogeneous string of authors and titles. With this design, though, the database can answer only that one single query. It cannot by itself  answer interesting but unanticipated queries: What is the most-wished-for book? Which customers are interested in WWII espionage? How does Lord Byron stack up against his contemporary poets? Answers to these questions must come from special adaptive tools completely separate from the database. One tool might be software written especially to handle such queries. This special adaptive software has just one single purpose: in effect to normalize the non-normalized field. Unforeseen queries can be answered trivially, and entirely within the database framework, with a normalized table.  


Example

Querying and manipulating the data within an unnormalized data structure, such as the following non-1NF representation of customers' credit card transactions, involves more complexity than is really necessary:

Customer   Transactions
Jones      Tr. ID   Date          Amount
           12890    14-Oct-2003   −87
           12904    15-Oct-2003   −50
Wilkins    Tr. ID   Date          Amount
           12898    14-Oct-2003   −21
Stevens    Tr. ID   Date          Amount
           12907    15-Oct-2003   −18
           14920    20-Nov-2003   −70
           15003    27-Nov-2003   −60

To each customer there corresponds a repeating group of transactions. The automated evaluation of any query relating to customers' transactions therefore would broadly involve two stages:

1. Unpacking one or more customers' groups of transactions, allowing the individual transactions in a group to be examined, and
2. Deriving a query result based on the results of the first stage.

46 

 

For example, in order to find out the monetary sum of all transactions that occurred in October 2003 for all customers, the system would have to know that it must first unpack the Transactions   group of each customer, then sum the  Amounts of all transactions thus obtained where the Date of  the transaction falls in October 2003. One of Codd's important insights was that this structural complexity could always be removed completely, leading to much greater power and flexibility in the way queries could be formulated (by  users  (by users above and and  applications) applications (by the  the DBMS) DBMS). The normalized equivalent of the structure would look) and like evaluated this: Customer  Tr. ID 

Date 

Amount 

Jones 

12890  14-Oct-2003  −87 

Jones 

12904  15-Oct-2003  −50 

Wilkins 

12898  14-Oct-2003  −21 

Stevens 

12907  15-Oct-2003  −18 

Stevens 

14920  20-Nov-2003   −70 

Stevens 

15003  27-Nov-2003   −60 

Now each row represents an individual credit card transaction, and the DBMS can obtain the answer of interest, simply by finding all rows with a Date falling in October, and summing their Amounts. The data structure places all of the values on an equal footing, exposing each to the DBMS directly, so each can potentially participate directly in queries; whereas in the previous situation some values were embedded in lower-level structures that had to be handled specially. Accordingly, the normalized design lends itself to general-purpose query processing, whereas the unnormalized design does not.
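On the normalized structure the October 2003 question becomes a single query; a hedged sketch, assuming the table is called CustomerTransactions and the date column TrDate (both names hypothetical):

SELECT SUM(Amount) AS OctoberTotal
FROM CustomerTransactions
WHERE TrDate >= DATE '2003-10-01'
  AND TrDate <  DATE '2003-11-01';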

Database Storage

Database tables/indexes are typically stored on hard disk in one of many forms: ordered/unordered flat files, ISAM, heaps, hash buckets or B+ trees. Each form has its own advantages and disadvantages, discussed in this topic. The most commonly used are B+ trees and ISAM.


Unordered

Unordered storage typically stores the records in the order they are inserted. While having good insertion efficiency (O(1)), it may seem that it would have inefficient retrieval times (O(n)), but this is usually not the case, as most databases use indexes on the primary keys, resulting in retrieval times of O(log n), or O(1) for keys that are the same as database row offsets within the database file storage system.

Ordered

Ordered storage typically stores the records in order and may have to rearrange or increase the file size when a record is inserted; this is very inefficient. However, it is better for retrieval, as the records are pre-sorted, leading to a complexity of O(log n).

Structured Files

Heaps

The simplest and most basic method:
 insert efficient, records added at end of file – 'chronological' order
 retrieval inefficient, as searching has to be linear
 deletion – deleted records are marked
 requires periodic reorganization if the file is very volatile

Advantages
 good for bulk loading data
 good for relatively small relations, as indexing overheads are avoided
 good when retrievals involve a large proportion of the records

Disadvantages
 not efficient for selective retrieval using key values, especially if large
 sorting may be time-consuming
 not suitable for 'volatile' tables

Hash buckets

 


 Hash functions calculate the address of the page in which the record is to be stored based on one or more fields in the record
 Hashing functions are chosen to ensure that addresses are spread evenly across the address space
 'occupancy' is generally 40% – 60% of total file size
 a unique address is not guaranteed, so collision detection and collision resolution mechanisms are required: open addressing, chained/unchained overflow

Pros and cons
 efficient for exact matches on a key field
 not suitable for range retrieval, which requires sequential storage
 calculates where the record is stored based on fields in the record
 hash functions ensure an even spread of data
 collisions are possible, so collision detection and resolution is required

B+ trees

These are the most used in practice:
 the time taken to access any tuple is the same, because the same number of nodes is searched
 the index is a full index, so the data file does not have to be ordered

Pros and cons
 versatile data structure – sequential as well as random access
 access is fast
 supports exact, range, part key and pattern matches efficiently
 'volatile' files are handled efficiently because the index is dynamic – it expands and contracts as the table grows and shrinks
 less well suited to relatively stable files – in this case, ISAM is more efficient
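In most SQL DBMSs the default index structure is a B+ tree, so a sketch like the following (names hypothetical) would typically create one; the exact-match and range queries below can then both be answered through the index:

CREATE TABLE Account (
  AccountNo BIGINT,
  Balance   DECIMAL(12,2)
);

CREATE INDEX idx_account_no ON Account (AccountNo);  -- usually a B+ tree

SELECT * FROM Account WHERE AccountNo = 12345;                 -- exact match
SELECT * FROM Account WHERE AccountNo BETWEEN 10000 AND 20000; -- range match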

Distributed database management system  


A distributed database management system (DDBMS) is a software system that permits the management of a distributed database and makes the distribution transparent to the users. A distributed database is a collection of multiple, logically interrelated databases distributed over a computer network. Sometimes "distributed database system" is used to refer jointly to the distributed database and the distributed DBMS. A distributed database management system is software for managing databases stored on multiple computers in a network. A distributed database is a set of databases stored on multiple computers that typically appears to applications as a single database. Consequently, an application can simultaneously access and modify the data in several databases in a network. DDBMS are specially developed for heterogeneous database platforms, focusing mainly on heterogeneous database management systems (HDBMS).

Federated database system

A federated database system is a type of meta-database management system (DBMS) which transparently integrates multiple autonomous database systems into a single federated database. The constituent databases are interconnected via a computer network, and may be geographically decentralized. Since the constituent database systems remain autonomous, a federated database system is a contrastable alternative to the (sometimes daunting) task of merging together several disparate databases. A federated database, or virtual database, is the fully integrated, logical composite of all constituent databases in a federated database system. McLeod and Heimbigner[1] were among the first to define a federated database system, as one which "define[s] the architecture and interconnect[s] databases that minimize central authority yet support partial sharing and coordination among database systems". Through data abstraction, federated database systems can provide a uniform user interface, enabling users and clients to store and retrieve data in multiple noncontiguous databases with a single query, even if the constituent databases are heterogeneous. To this end, a federated database system must be able to decompose the query into subqueries for submission to the relevant constituent DBMSs, after which the system must composite the result sets of the subqueries. Because various database management systems employ different query languages, federated database systems can apply wrappers to the subqueries to translate them into the appropriate query languages. Among other surveys, a federated database has been defined as a collection of cooperating component systems which are autonomous and are possibly heterogeneous. The three important components of an FDBS, as pointed out in those surveys, are autonomy, heterogeneity and distribution. Another dimension which has also been considered is the networking environment, e.g., many DBSs over a LAN or many DBSs over a WAN, and the update-related functions of participating DBSs (e.g., no updates, nonatomic transitions, atomic updates). Master data management or MDM is a relatively new term for similar practices.


FDBS Architecture 

A DBMS can be classified as either centralized or distributed. A centralized system manages a single database, while a distributed system manages multiple databases. A component DBS in an FDBS may be centralized or distributed. A multiple DBS (MDBS) can be classified into two types depending on the autonomy of the component DBS: federated and nonfederated. A nonfederated database system is an integration of component DBMS that are not autonomous. A federated database system consists of component DBS that are autonomous yet participate in a federation to allow partial and controlled sharing of their data. Federated architectures differ based on levels of integration with the component database systems and the extent of services offered by the federation. An FDBS can be categorized as a loosely or tightly coupled system. Loosely coupled systems require component databases to construct their own federated schema. A user will typically access other component database systems by using a multidatabase language, but this removes any level of location transparency, forcing the user to have direct knowledge of the federated schema. A user imports the data they require from other component databases and integrates it with their own to form a federated schema. Tightly coupled systems consist of component systems that use independent processes to construct and publicize an integrated federated schema. Multiple DBS, of which FDBS are a specific type, can be characterized along three dimensions: distribution, heterogeneity and autonomy. Another characterization could be based on the dimension of networking, e.g., single databases or multiple databases in a LAN or WAN.

Distribution

Distribution of data in an FDBS is due to the existence of a multiple DBS before an FDBS is built. Data can be distributed among multiple DBs, which could be stored in a single computer or multiple computers. These computers could be geographically located in different places but interconnected by a network. The benefits of data distribution include increased availability and reliability as well as improved access times.

Heterogeneity

Heterogeneities in databases arise due to several factors. Some of them occur due to differences in structures, semantics of data, the constraints supported or the query language. Differences in structure occur when two data models provide different primitives, such as object-oriented (OO) models that support specialization and inheritance and relational models that do not. Differences due to constraints occur when two models support two different constraints. For example, the set type in a CODASYL schema may be partially modeled as a referential integrity constraint in a relational schema. CODASYL supports insertion and retention that are not captured by referential integrity alone. The query language supported by a DBMS can also contribute to heterogeneity between other component DBMSs. For example, differences in query languages with the same data models, or different versions of query languages, could contribute heterogeneity. Semantic heterogeneities arise when there is a disagreement about the meaning, interpretation or intended use of data. At the schema and data level, some of the possible classifications of heterogeneities that occur are:

 Naming conflicts, e.g. databases using different names to represent the same concept.
 Domain conflicts or data representation conflicts, e.g. databases using different values to represent the same concept.
 Precision conflicts, e.g. databases using the same data values from domains of different cardinalities for the same data.
 Metadata conflicts, e.g. the same concepts are represented at schema level and instance level.
 Data conflicts, e.g. missing attributes.
 Schema conflicts, e.g. table versus table conflict, which includes naming conflicts, data conflicts etc.

In creating a federated schema, one has to resolve such heterogeneities before integrating the component DB schemas.

Dealing with incompatible data types or query syntax is not the only obstacle to a concrete implementation of an FDBS. In systems that are not planned top-down, a generic problem lies in matching semantically equivalent, but differently named, parts from different schemas (= data models) (tables, attributes). A pairwise mapping between n attributes would result in n(n − 1)/2 mapping rules (given equivalence mappings) - a number that quickly gets too large for practical purposes. A common way out is to provide a global schema that comprises the relevant parts of all member schemas and provide mappings in the form of database views. Two principal solutions can be realized, depending on the direction of the mapping:

1. Global as View (GaV): the global schema is defined in terms of the underlying schemas
2. Local as View (LaV): the local schemas are defined in terms of the global schema

Both are explained in more detail in the article Data Integration. Alternate approaches to the schema matching problem and a classification of the same are explained in more detail in the article Schema Matching.
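A hedged sketch of the Global-as-View direction using ordinary SQL views (the schema and column names are hypothetical; real federation products use their own mechanisms):

-- The global schema is defined in terms of the member schemas.
-- db1.client and db2.customer are semantically equivalent but
-- differently named parts of two component databases.
CREATE VIEW GlobalCustomer (CustomerID, Name) AS
  SELECT client_id, client_name FROM db1.client
  UNION ALL
  SELECT cust_no, cust_name FROM db2.customer;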

Autonomy  


Fundamental to the difference between an MDBS and an FDBS is the concept of autonomy. It is important to understand the aspects of autonomy for component databases and how they can be addressed when a component DBS participates in an FDBS. There are four kinds of autonomy addressed:

 Design autonomy, which refers to the ability to choose its own design irrespective of data, query language or conceptualization, and the functionality of the system implementation. Heterogeneities in an FDBS are primarily due to design autonomy.
 Communication autonomy, which refers to the general operation of the DBMS to communicate with other DBMS or not.
 Execution autonomy, which allows a component DBMS to control the operations requested by local and external operations.
 Association autonomy, which gives a component DBS the power to disassociate itself from a federation, which means an FDBS can operate independently of any single DBS.

The ANSI/X3/SPARC Study Group outlined a three-level data description architecture, the components of which are the conceptual schema, internal schema and external schema of databases. The three-level architecture is however inadequate for describing the architectures of an FDBS. It was therefore extended to support the three dimensions of the FDBS, namely distribution, autonomy and heterogeneity. The five-level schema architecture is explained below.

Five Level Schema Architecture for FDBSs

The five-level schema architecture includes the following:

 Local Schema: the conceptual schema expressed in the native data model of the component DBMS.
 Component Schema: derived by translating the local schema into a model called the canonical data model or common data model. Component schemas are useful when semantics missed in the local schema are incorporated in the component. They help in the integration of data for a tightly coupled FDBS.
 Export Schema: represents a subset of a component schema that is available to the FDBS. It may include access control information regarding its use by a specific federation user. The export schema helps in managing the flow of control of data.
 Federated Schema: an integration of multiple export schemas. It includes information on data distribution that is generated when integrating export schemas.
 External Schema: defines a schema for a user/application or a class of users/applications.

 


While accurately representing the state of the art in data integration, the Five Level Schema Architecture above does suffer from a major drawback, namely its IT-imposed look and feel. Modern data users demand control over how data is presented; their needs are somewhat in conflict with such bottom-up approaches to data integration.

 


References

1. Codd, E.F. (1970). "A Relational Model of Data for Large Shared Data Banks". In: Communications of the ACM 13 (6): 377-387.
2. Development of an object-oriented DBMS; Portland, Oregon, United States; Pages: 472-482; 1986; ISBN 0-89791-204-7
3. Performance enhancement through replication in an object-oriented DBMS; Pages 325-336; ISBN 0-89791-317-5
4. Seltzer, M. (2008, July). Beyond Relational Databases. Communications of the ACM, 51(7), 52-58. Retrieved July 6, 2009, from Business Source Complete database.
5. Databases in the cloud: a work in progress; October 2009; ISBN 978-1-60558-765-3
6. itl.nist.gov (1993). Integration Definition for Information Modeling (IDEF1X). 21 December 1993.
7. Codd, E.F. (June 1970). "A Relational Model of Data for Large Shared Data Banks". Communications of the ACM 13 (6): 377-387. doi:10.1145/362384.362685. http://www.acm.org/classics/nov95/toc.html.
8. Codd, E.F. "Further Normalization of the Data Base Relational Model." (Presented at Courant Computer Science Symposia Series 6, "Data Base Systems," New York City, May 24th-25th, 1971.) IBM Research Report RJ909 (August 31st, 1971). Republished in Randall J. Rustin (ed.), Data Base Systems: Courant Computer Science Symposia Series 6. Prentice-Hall, 1972.
9. Codd, E. F. "Recent Investigations into Relational Data Base Systems." IBM Research Report RJ1385 (April 23rd, 1974). Republished in Proc. 1974 Congress (Stockholm, Sweden, 1974). New York, N.Y.: North-Holland (1974).
10. C.J. Date, Hugh Darwen, Nikos Lorentzos. Temporal Data and the Relational Model. Morgan Kaufmann (2002), p. 176
11. C.J. Date. An Introduction to Database Systems. Addison-Wesley (1999), p. 290
12. Chris Date, for example, writes: "I believe firmly that anything less than a fully normalized design is strongly contraindicated ... [Y]ou should "denormalize" only as a last resort. That is, you should back off from a fully normalized design only if all other strategies for improving performance have somehow failed to meet requirements." Date, C.J. Database in Depth: Relational Theory for Practitioners. O'Reilly (2005), p. 152.
13. Ralph Kimball, for example, writes: "The use of normalized modeling in the data warehouse presentation area defeats the whole purpose of data warehousing, namely, intuitive and high-performance retrieval of data." Kimball, Ralph. The Data Warehouse Toolkit, 2nd Ed. Wiley Computer Publishing (2002), p. 11.
14. "The adoption of a relational model of data ... permits the development of a universal data sub-language based on an applied predicate calculus. A first-order predicate calculus suffices if the collection of relations is in first normal form. Such a language would provide a yardstick of linguistic power for all other proposed data languages, and would itself be a strong candidate for embedding (with appropriate syntactic modification) in a variety of host languages (programming, command- or problem-oriented)." Codd, "A Relational Model of Data for Large Shared Data Banks", p. 381
15. Codd, E.F. Chapter 23, "Serious Flaws in SQL", in The Relational Model for Database Management: Version 2. Addison-Wesley (1990), p. 371-389
16. Codd, E.F. "Further Normalization of the Data Base Relational Model", p. 34
17. Date, C. J. "What First Normal Form Really Means" in Date on Database: Writings 2000-2006 (Springer-Verlag, 2006), pp. 127-128.
18. Codd, E.F. "Further Normalization of the Data Base Relational Model." (Presented at Courant Computer Science Symposia Series 6, "Data Base Systems," New York City, May 24-25, 1971.) IBM Research Report RJ909 (August 31st, 1971). Republished in Randall J. Rustin (ed.), Data Base Systems: Courant Computer Science Symposia Series 6. Prentice-Hall, 1972.
19. Codd, E.F. "Further Normalization of the Data Base Relational Model." (Presented at Courant Computer Science Symposia Series 6, "Data Base Systems," New York City, May 24-25, 1971.) IBM Research Report RJ909 (August 31, 1971). Republished in Randall J. Rustin (ed.), Data Base Systems: Courant Computer Science Symposia Series 6. Prentice-Hall, 1972.
20. Zaniolo, Carlo. "A New Normal Form for the Design of Relational Database Schemata." ACM Transactions on Database Systems 7(3), September 1982.
21. Codd, E. F. "Recent Investigations into Relational Data Base Systems." IBM Research Report RJ1385 (April 23, 1974). Republished in Proc. 1974 Congress (Stockholm, Sweden, 1974). New York, N.Y.: North-Holland (1974).
22. Fagin, Ronald (September 1977). "Multivalued Dependencies and a New Normal Form for Relational Databases". ACM Transactions on Database Systems 2 (1): 267. doi:10.1145/320557.320571. http://www.almaden.ibm.com/cs/people/fagin/tods77.pdf.
23. Ronald Fagin. "Normal Forms and Relational Database Operators". ACM SIGMOD International Conference on Management of Data, May 31-June 1, 1979, Boston, Mass. Also IBM Research Report RJ2471, Feb. 1979.
24. Ronald Fagin (1981). A Normal Form for Relational Databases That Is Based on Domains and Keys, ACM Transactions on Database Systems, vol. 6, pp. 387-415

 
