
All About Databases Info, for you


Hi,
I noticed there were a lot of questions in all of the database topics, so I went around the internet using different sources and gathered some information that will answer your questions and help you understand databases. I hope this helps, and if you have any other questions which aren't answered below, please send me a message. Thank you.

In computing, a database can be defined as a structured collection of records or data that is stored in a computer so that a program can consult it to answer queries. The records retrieved in answer to queries become information that can be used to make decisions. The computer program used to manage and query a database is known as a database management system (DBMS). The properties and design of database systems are included in the study of information science.

The term "database" originated within the computing discipline. Although its meaning has been broadened by popular use, even to include non-electronic databases, this article is about computer databases. Database-like records have been in existence since well before the Industrial Revolution in the form of ledgers, sales receipts and other business-related collections of data.

The central concept of a database is that of a collection of records, or pieces of information. Typically, for a given database, there is a structural description of the type of facts held in that database: this description is known as a schema. The schema describes the objects that are represented in the database, and the relationships among them. There are a number of different ways of organizing a schema, that is, of modeling the database structure: these are known as database models (or data models). The model in most common use today is the relational model, which in layman's terms represents all information in the form of multiple related tables each consisting of rows and columns (the true definition uses mathematical terminology). This model represents relationships by the use of values common to more than one table. Other models such as the hierarchical model and the network model use a more explicit representation of relationships.

The term database refers to the collection of related records, and the software should be referred to as the database management system or DBMS. When the context is unambiguous, however, many database administrators and programmers use the term database to cover both meanings.

Many professionals consider a collection of data to constitute a database only if it has certain properties: for example, if the data is managed to ensure its integrity and quality, if it allows shared access by a community of users, if it has a schema, or if it supports a query language. However, there is no definition of these properties that is universally agreed upon.

Database management systems are usually categorized according to the data model that they support: relational, object-relational, network, and so on. The data model will tend to determine the query languages that are available to access the database. A great deal of the internal engineering of a DBMS, however, is independent of the data model, and is concerned with managing factors such as performance, concurrency, integrity, and recovery from hardware failures. In these areas there are large differences between products.

Contents
1 History
2 Database models
2.1 Flat model
2.2 Hierarchical model
2.3 Relational model
2.3.1 Relational operations
2.3.2 Normal Forms
2.4 Object database models
2.5 Post-relational database models
3 Database internals
3.1 Storage and Physical Database Design
3.1.1 Indexing
3.2 Transactions and concurrency
3.3 Replication
3.4 Security
4 Applications of databases
5 Database development platforms
6 Notes
7 References



History
The earliest known use of the term 'data bases' was in November 1963, when the System Development Corporation sponsored a symposium under the title Development and Management of a Computer-centered Data Base[1]. Database as a single word became common in Europe in the early 1970s and by the end of the decade it was being used in major American newspapers. (Databank, a comparable term, had been used in the Washington Post newspaper as early as 1966.)

The first database management systems were developed in the 1960s. A pioneer in the field was Charles Bachman. Bachman's early papers show that his aim was to make more effective use of the new direct access storage devices becoming available: until then, data processing had been based on punched cards and magnetic tape, so that serial processing was the dominant activity. Two key data models arose at this time: CODASYL developed the network model based on Bachman's ideas, and (apparently independently) the hierarchical model was used in a system developed by North American Rockwell, later adopted by IBM as the cornerstone of their IMS product. While IMS along with the CODASYL IDMS were the big, high-visibility databases developed in the 1960s, several others were also born in that decade, some of which have a significant installed base today. Two worthy of mention are the PICK and MUMPS databases, with the former developed originally as an operating system with an embedded database and the latter as a programming language and database for the development of data-based software.

The relational model was proposed by E. F. Codd in 1970. He criticized existing models for confusing the abstract description of information structure with descriptions of physical access mechanisms. For a long while, however, the relational model remained of academic interest only. While CODASYL network-model products (IDMS) and hierarchical products (IMS) were conceived as practical engineering solutions taking account of the technology as it existed at the time, the relational model took a much more theoretical perspective, arguing (correctly) that hardware and software technology would catch up in time. Among the first implementations were Michael Stonebraker's Ingres at Berkeley and the System R project at IBM. Both of these were research prototypes, announced during 1976. The first commercial products, Oracle and DB2, did not appear until around 1980. The first successful database product for microcomputers was dBASE, for the CP/M and PC-DOS/MS-DOS operating systems.

During the 1980s, research activity focused on distributed database systems and database machines, but these developments had little effect on the market. Another important theoretical idea was the Functional Data Model, but apart from some specialized applications in genetics, molecular biology, and fraud investigation, the world took little notice.

In the 1990s, attention shifted to object-oriented databases. These had some success in fields where it was necessary to handle more complex data than relational systems could easily cope with, such as spatial databases, engineering data (including software engineering repositories), and multimedia data. Some of these ideas were adopted by the relational vendors, who integrated new features into their products as a result. The 1990s also saw the spread of Open Source databases, such as PostgreSQL and MySQL.

In the 2000s, the fashionable area for innovation is the XML database. As with object databases, this has spawned a new collection of startup companies, but at the same time the key ideas are being integrated into the established relational products. XML databases aim to remove the traditional divide between documents and data, allowing all of an organization's information resources to be held in one place, whether they are highly structured or not.


Database models
Various techniques are used to model data structure.

Most database systems are built around one particular data model, although it is increasingly common for products to offer support for more than one model. For any one logical model various physical implementations may be possible, and most products will offer the user some level of control in tuning the physical implementation, since the choices that are made have a significant effect on performance. An example is the relational model: all serious implementations of the relational model allow the creation of indexes which provide fast access to rows in a table if the values of certain columns are known.


Flat model
This may not strictly qualify as a data model, as defined above. The flat (or table) model consists of a single, two-dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another.


Hierarchical model
In a hierarchical model, data is organized into a tree-like structure, implying a single upward link in each record to describe the nesting, and a sort field to keep the records in a particular order in each same-level list.
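To make this concrete (my own sketch, not part of the original article), the hierarchical idea is often emulated in SQL with a self-referencing table; all names here are invented:

    -- Hypothetical sketch: each record carries a single upward link
    -- to its parent, and a sort field orders each same-level list.
    CREATE TABLE org_unit (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES org_unit(id),  -- upward link (NULL at the root)
        name      VARCHAR(100),
        sort_key  INTEGER                           -- ordering among siblings
    );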


Relational model
Three key terms are used extensively in relational database models: relations, attributes, and domains. A relation is a table with columns and rows. The named columns of the relation are called attributes, and the domain is the set of values the attributes are allowed to take.

The basic data structure of the relational model is the table, where information about a particular entity (say, an employee) is represented in columns and rows (also called tuples). Thus, the "relation" in "relational database" refers to the various tables in the database; a relation is a set of tuples. The columns enumerate the various attributes of the entity (the employee's name, address or phone number, for example), and a row is an actual instance of the entity (a specific employee) that is represented by the relation. As a result, each tuple of the employee table represents various attributes of a single employee.

All relations (and, thus, tables) in a relational database have to adhere to some basic rules to qualify as relations. First, the ordering of columns is immaterial in a table. Second, there can't be identical tuples or rows in a table. And third, each tuple will contain a single value for each of its attributes.

A relational database contains multiple tables, each similar to the one in the "flat" database model. One of the strengths of the relational model is that, in principle, any value occurring in two different records (belonging to the same table or to different tables) implies a relationship between those two records. Yet, in order to enforce explicit integrity constraints, relationships between records in tables can also be defined explicitly, by identifying or non-identifying parent-child relationships characterized by assigning cardinality (1:1, (0)1:M, M:M). Tables can also have a designated single attribute or a set of attributes that can act as a "key", which can be used to uniquely identify each tuple in the table.

A key that can be used to uniquely identify a row in a table is called a primary key. Keys are commonly used to join or combine data from two or more tables. For example, an Employee table may contain a column named Location which contains a value that matches the key of a Location table. Keys are also critical in the creation of indices, which facilitate fast retrieval of data from large tables. Any column can be a key, or multiple columns can be grouped together into a compound key. It is not necessary to define all the keys in advance; a column can be used as a key even if it was not originally intended to be one.
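A minimal sketch of the Employee/Location example above in standard SQL; the column names and types are assumptions added for illustration:

    CREATE TABLE Location (
        LocationId INTEGER PRIMARY KEY,   -- primary key: uniquely identifies a row
        City       VARCHAR(50)
    );

    CREATE TABLE Employee (
        EmployeeId INTEGER PRIMARY KEY,
        Name       VARCHAR(100),
        Location   INTEGER REFERENCES Location(LocationId)  -- matches the Location table's key
    );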


Relational operations
Users (or programs) request data from a relational database by sending it a query that is written in a special language, usually a dialect of SQL. Although SQL was originally intended for end-users, it is much more common for SQL queries to be embedded into software that provides an easier user interface. Many web sites, such as Wikipedia, perform SQL queries when generating pages.

In response to a query, the database returns a result set, which is just a list of rows containing the answers. The simplest query is just to return all the rows from a table, but more often, the rows are filtered in some way to return just the answer wanted. Often, data from multiple tables are combined into one, by doing a join. There are a number of relational operations in addition to join.
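Using the hypothetical Employee and Location tables sketched above, the simplest query, a filtered query, and a join might look like this:

    -- Return all the rows from a table.
    SELECT * FROM Employee;

    -- Filter the rows to return just the answer wanted.
    SELECT Name FROM Employee WHERE Location = 3;

    -- Combine data from two tables by doing a join.
    SELECT e.Name, l.City
    FROM Employee e
    JOIN Location l ON e.Location = l.LocationId;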


Normal Forms
Relations are classified based upon the types of anomalies to which they're vulnerable. A database that's in the first normal form is vulnerable to all types of anomalies, while a database that's in the domain/key normal form has no modification anomalies. Normal forms are hierarchical in nature. That is, the lowest level is the first normal form, and the database cannot meet the requirements for higher level normal forms without first having met all the requirements of the lesser normal form.
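As a quick illustration (my own example, not from the article): a table that packs several phone numbers into one column is not even in first normal form; moving the repeating group into its own table repairs that particular anomaly:

    -- Not in 1NF: Phones holds a repeating group such as '555-0101, 555-0102'.
    CREATE TABLE EmployeeContact (
        EmployeeId INTEGER PRIMARY KEY,
        Name       VARCHAR(100),
        Phones     VARCHAR(200)
    );

    -- In 1NF: each attribute holds a single atomic value,
    -- one row per employee/phone combination.
    CREATE TABLE EmployeePhone (
        EmployeeId INTEGER,
        Phone      VARCHAR(20),
        PRIMARY KEY (EmployeeId, Phone)
    );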


Object database models
In recent years, the object-oriented paradigm has been applied to database technology, creating a new programming model known as object databases. These databases attempt to bring the database world and the application programming world closer together, in particular by ensuring that the database uses the same type system as the application program. This aims to avoid the overhead (sometimes referred to as the impedance mismatch) of converting information between its representation in the database (for example as rows in tables) and its representation in the application program (typically as objects). At the same time, object databases attempt to introduce the key ideas of object programming, such as encapsulation and polymorphism, into the world of databases.

A variety of ways have been tried for storing objects in a database. Some products have approached the problem from the application programming end, by making the objects manipulated by the program persistent. This also typically requires the addition of some kind of query language, since conventional programming languages do not have the ability to find objects based on their information content. Others have attacked the problem from the database end, by defining an object-oriented data model for the database, and defining a database programming language that allows full programming capabilities as well as traditional query facilities.


Post-relational database models
Several products have been identified as post-relational because the data model incorporates relations but is not constrained by the Information Principle, which requires that all information be represented by data values in relations. Products using a post-relational data model typically employ a model that actually pre-dates the relational model; such a model might be described as a directed graph with trees on the nodes.

Examples of models that could be classified as post-relational are PICK (also known as MultiValue) and MUMPS.


Database internals

Storage and Physical Database Design

Database tables and indexes are typically stored in memory or on hard disk in one of a number of forms: ordered or unordered flat files, ISAM, heaps, hash buckets, or B+ trees. Each of these has its own advantages and disadvantages; the most commonly used are B+ trees and ISAM.

Other important design choices relate to the clustering of data by category (such as grouping data by month or location), creating pre-computed views known as materialized views, and partitioning data by range or hash. Memory management and storage topology can also be important design choices for database designers. Just as normalization is used to reduce storage requirements and improve the extensibility of the database, denormalization is conversely used to reduce join complexity and query execution time. [2]
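The syntax for these physical-design features varies considerably between products; the sketch below uses Oracle-style syntax with invented table names, so treat it as illustrative rather than portable:

    -- Partition data by range (here, by month).
    CREATE TABLE sales (
        sale_date DATE,
        region    VARCHAR2(30),
        amount    NUMBER
    )
    PARTITION BY RANGE (sale_date) (
        PARTITION p_jan VALUES LESS THAN (DATE '2007-02-01'),
        PARTITION p_feb VALUES LESS THAN (DATE '2007-03-01')
    );

    -- A pre-computed, stored view of an aggregate query.
    CREATE MATERIALIZED VIEW monthly_sales AS
    SELECT TRUNC(sale_date, 'MM') AS month, region, SUM(amount) AS total
    FROM sales
    GROUP BY TRUNC(sale_date, 'MM'), region;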


Indexing
All of these databases can take advantage of indexing to increase their speed, and this technology has advanced tremendously since its early uses in the 1960s and 1970s. The most common kind of index is a sorted list of the contents of some particular table column, with pointers to the rows associated with each value. An index allows a set of table rows matching some criterion to be located quickly. Indexes themselves are typically stored in the various forms of data structure mentioned above (such as B-trees, hashes, and linked lists), and the database designer usually chooses the technique best suited to the type of index required.

Relational DBMSs have the advantage that indexes can be created or dropped without changing the existing applications that make use of them. The database chooses among many different strategies based on which one it estimates will run the fastest. In other words, indexes are transparent to the application or end user querying the database; while they affect performance, any SQL command will run with or without the indexes existing in the database.

Relational DBMSs use many different algorithms to compute the result of an SQL statement. The RDBMS produces a plan of how to execute the query, generated by analyzing the estimated run times of the different algorithms and selecting the quickest. Some of the key algorithms that deal with joins are the nested-loop join, the sort-merge join, and the hash join. Which of these is chosen depends on whether an index exists, what type it is, and its cardinality.
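For example (table and column names assumed from the earlier sketches), an index can be added or dropped without touching the application's SQL, and most products will show the chosen plan through a command along the lines of EXPLAIN, though the exact spelling differs by vendor:

    -- Build a sorted structure over one column; no query has to change.
    CREATE INDEX idx_employee_name ON Employee (Name);

    -- This statement runs with or without the index; the optimizer
    -- simply may now prefer an index lookup to a full table scan.
    SELECT * FROM Employee WHERE Name = 'Smith';

    -- Many products expose the chosen plan (EXPLAIN PLAN FOR ... in Oracle):
    EXPLAIN SELECT * FROM Employee WHERE Name = 'Smith';

    -- Dropping the index changes performance, not correctness.
    DROP INDEX idx_employee_name;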


Transactions and concurrency
In addition to their data model, most practical databases ("transactional databases") attempt to enforce database transactions. Ideally, the database software should enforce the ACID rules, summarized here:

Atomicity: Either all the tasks in a transaction must be done, or none of them. The transaction must be completed, or else it must be undone (rolled back).
Consistency: Every transaction must preserve the integrity constraints — the declared consistency rules — of the database. It cannot place the data in a contradictory state.
Isolation: Two simultaneous transactions cannot interfere with one another. Intermediate results within a transaction are not visible to other transactions.
Durability: Completed transactions cannot be aborted later or their results discarded. They must persist through (for instance) restarts of the DBMS after crashes.
A cascading rollback occurs in database systems when a transaction (T1) causes a failure and a rollback must be performed. Other transactions dependent on T1's actions must also be rolled back due to T1's failure, thus causing a cascading effect.
In practice, many DBMSs allow most of these rules to be selectively relaxed for better performance.
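A minimal sketch of atomicity, using an invented accounts table: either both updates take effect together, or a rollback undoes them both.

    START TRANSACTION;  -- BEGIN in some products
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
    -- If anything went wrong, ROLLBACK here would undo both updates.
    COMMIT;             -- make both changes permanent (durable)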

Concurrency control is a method used to ensure that transactions are executed in a safe manner and follow the ACID rules. The DBMS must be able to ensure that only serializable, recoverable schedules are allowed, and that no actions of committed transactions are lost while undoing aborted transactions.


Replication
Replication of databases is closely related to transactions. If a database can log its individual actions, it is possible to create a duplicate of the data in real time. The duplicate can be used to improve performance or availability of the whole database system. Common replication concepts include:

Master/Slave Replication: All write requests are performed on the master and then replicated to the slaves.
Quorum: The result of Read and Write requests are calculated by querying a "majority" of replicas.
Multimaster: Two or more replicas sync each other via a transaction identifier.
Parallel synchronous replication of databases enables transactions to be replicated on multiple servers simultaneously, which provides a method for backup and security as well as data availability. The first parallel synchronous replication systems were deployed by Parallel Computers Technology, Inc. (for SQL Server databases) using patented technology developed by a team of parallel computing specialists.
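As one hedged, concrete example of the master/slave pattern: classic MySQL replication points a slave at the master's binary log; the host name and log coordinates below are placeholders:

    -- Run on the slave (classic MySQL syntax; values are placeholders):
    CHANGE MASTER TO
        MASTER_HOST     = 'master.example.com',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = '...',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS  = 4;
    START SLAVE;  -- begin applying writes logged on the master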


Security
Database security is the system, processes, and procedures that protect a database from unintended activity.

In the United Kingdom, legislation protecting the public from unauthorized disclosure of personal information held in databases falls under the Office of the Information Commissioner. United Kingdom based organizations holding personal data in electronic format (databases, for example) are required to register with the Information Commissioner.


Applications of databases
Databases are used in many applications, spanning virtually the entire range of computer software. Databases are the preferred method of storage for large multiuser applications, where coordination between many users is needed. Even individual users find them convenient, and many electronic mail programs and personal organizers are based on standard database technology. Software database drivers are available for most database platforms so that application software can use a common application programming interface (API) to retrieve the information stored in a database. Two commonly used database APIs are JDBC and ODBC.


Database development platforms
4D
Alpha Five
Apache Derby (Java, also known as IBM Cloudscape and Sun Java DB)
Berkeley DB
dBASE
FileMaker
Firebird
HSQLDB (Java)
IBM DB2
Informix
Ingres
InterBase
MaxDB (formerly SAP DB)
Microsoft Access
Microsoft SQL Server (derived from Sybase)
MySQL
Oracle
Paradox
PostgreSQL
Sybase
Visual FoxPro
TrackVia

Notes
[1] Swanson, Kenneth (1963-11-08). Development and Management of a Computer-Centered Database. dtic.mil. Retrieved 2007-07-20.
[2] S. Lightstone, T. Teorey, T. Nadeau, Physical Database Design: the database professional's guide to exploiting indexes, views, storage, and more, Morgan Kaufmann Press, 2007. ISBN 0123693896.


References
C. J. Date, An Introduction to Database Systems, Eighth Edition, Addison Wesley, 2003.
J. Gray, A. Reuter, Transaction Processing: Concepts and Techniques, 1st edition, Morgan Kaufmann Publishers, 1992.
David M. Kroenke, Database Processing: Fundamentals, Design, and Implementation, Prentice-Hall, 1997, pp. 130-144.
S. Lightstone, T. Teorey, T. Nadeau, Physical Database Design: the database professional's guide to exploiting indexes, views, storage, and more, Morgan Kaufmann Press, 2007. ISBN 0123693896.
T. Teorey, S. Lightstone, T. Nadeau, Database Modeling & Design: Logical Design, 4th edition, Morgan Kaufmann Press, 2005. ISBN 0-12-685352-5.
J. Shih, "Why Synchronous Parallel Transaction Replication is Hard, But Inevitable?", white paper, 2007.



Not all databases were created equal. There are what I call "pocket databases" (MS Access, MySQL, PostgreSQL) and then there are enterprise-strength databases (Oracle, DB2 Universal Database, SQL Server, Informix, Sybase).

One of the first companies that got involved in developing database software was IBM, and the main driver for doing this was to sell more mainframes, not to provide reliable, well-performing information systems to their customers. That, and the fact that they had fallen in love with their own database product (DB2/360 for mainframe), led to IBM falling behind the growth curve of their own customers' data volumes. Instead of taking a hard look at the emerging performance problems their customers were beginning to experience, going back to the drawing board, and designing from the ground up a truly scalable database product based on a radically new architecture, they added gizmos and gimmicks to the already existing code base.

Oracle came along and capitalized on this opportunity by inventing the concept of rollback segments. Not many people realize this, but having these rollback segments was and continues to be the reason for Oracle's unparalleled success. Rollback segments are basically a bunch of files that store the before and after images of data when you issue so-called DML statements (update, insert, delete). Rollback segments are the hidden reason why the same application will scale to terabytes when running on Oracle, and will screech to a halt as it reaches the 500 GB mark when running on pretty much any other database software.

I know this from first-hand experience, being a technical SAP professional. I have personally seen many SAP systems, both R/3 (for OLTP / online transaction processing) and BW (Business Information Warehouse, for OLAP / online analytical processing), and I have noticed a pattern of the worst performance problems always happening on SAP systems which were sitting on non-Oracle databases. For the sake of disclosure, I do not work for Oracle, never have, never will, and I do not hold shares in their stock. But if it sounds like I'm bashing IBM and their DB2 UDB product, I'm not going to apologize. If a company is considering purchasing DB2 UDB, they might as well get the same level of performance and scalability by going with MySQL and saving a bunch of money in the process.

The worst performance problem I have ever seen was when issuing a 3-way table join in a pretty big DB2 UDB database, where the 3 tables involved were around 2 GB in size each. Not only would this SELECT statement take forever, but we managed to bring the server to its knees just by running this 3-way join (the server being a powerful RS/6000 machine running AIX). The way we ended up solving this problem was to break the 3-way join down into 2 separate SELECTs, passing a result set from one to the other. That was ugly and against all database design best practices, but the results were spectacular. The end users came to us and asked, "What did you guys do? Did you turbo-charge the database?" and our answer was "No, but next time you have a nearly unsolvable database performance problem, get Redwood Shores on the phone."

OK, enough of my soapbox, I'm calm now.
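The staging trick described above might be sketched roughly as follows; the table and column names are invented, and the CREATE TABLE ... AS form shown is DB2-flavored (other products spell it differently):

    -- Instead of one 3-way join over t1, t2 and t3,
    -- stage the first join's result set ...
    CREATE TABLE stage AS (
        SELECT t1.k3, t1.a, t2.b
        FROM t1 JOIN t2 ON t1.k2 = t2.k2
    ) WITH DATA;

    -- ... then pass the staged result into the second join.
    SELECT s.a, s.b, t3.c
    FROM stage s JOIN t3 ON s.k3 = t3.k3;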


Quote:
3-way table join in a pretty big DB2 UDB database, where the 3 tables involved were around 2 GB in size each. Not only would this SELECT statement take forever, but we managed to bring the server to its knees just by running this 3-way join (the server being a powerful RS/6000 machine running AIX).

Another point worth mentioning, when running on (really) big AIX machines, is a feature of Oracle named "parallel query". If you alter your table to "parallel (degree 12)", Oracle starts a coordinator for the query, which fires off a number of slave processes (in this example 2*12 = 24 processes), each process being in charge of executing a part of the query. So, if you have a huge database split across twelve disks, each process reads one disk, and the system reads all the disks concurrently instead of sequentially, taking full benefit of the disk throughput of a powerful system. This parallel query capability is a feature only Oracle has, and it is very useful for boosting batch processes on huge Unix systems.
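In Oracle syntax the degree-12 example looks roughly like this (table name invented):

    -- Mark the table for parallel access with twelve query slaves:
    ALTER TABLE big_table PARALLEL (DEGREE 12);

    -- Subsequent full scans can be split among the slaves; a hint can
    -- also request parallelism for a single statement:
    SELECT /*+ PARALLEL(big_table, 12) */ COUNT(*) FROM big_table;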

