August 4, 2009
What is Database Normalization?
Database normalization is a systematic way of ensuring that a database structure is suitable for general-purpose querying; it is the process of efficiently organizing data in a database. The process was first introduced in 1970 by E.F. Codd and has since been extended to higher normal forms. The two goals of normalization are eliminating redundant data and ensuring that data dependencies make sense. Meeting these goals reduces the amount of space a database consumes and ensures that the data is logically stored.

A series of guidelines ensures that a database is normalized. These guidelines are referred to as normal forms and are numbered from one (the lowest form) through five, although the fifth form is rarely seen in practice. The normal forms of relational database theory provide criteria for determining a table's degree of vulnerability to logical inconsistencies and anomalies: the higher the normal form, the less vulnerable the table is to them.

First normal form (1NF) sets the most basic rules for an organized database: eliminate duplicative columns from the same table, create a separate table for each group of related data, and identify each row with a unique column or set of columns, known as the primary key. The basic objective of first normal form, as defined by Codd, was to permit data to be queried and manipulated using a "universal data sub-language" such as SQL. One of Codd's important insights was that this structural complexity could always be removed, leading to much greater power and flexibility in the way queries could be formulated and evaluated.

Second normal form (2NF) further addresses the removal of duplicative data. This is done by meeting all the requirements of first normal form, removing subsets of data that apply to multiple rows of a table and placing them in separate tables, and creating relationships between the new tables and their...
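The 1NF and 2NF steps described above can be sketched with a small, self-contained example. The schema below is hypothetical (all table and column names are invented for illustration): repeating "phone" columns are replaced by one row per phone to satisfy 1NF, and a product name that depends on only part of a composite key is moved into its own table to satisfy 2NF.

```python
import sqlite3

# In-memory database so the sketch is fully self-contained.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 1NF: instead of duplicative columns like phone1/phone2 on a single
# customers table, each phone number becomes its own row, identified
# by a primary key.
cur.execute("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    )
""")
cur.execute("""
    CREATE TABLE customer_phones (
        customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
        phone       TEXT NOT NULL,
        PRIMARY KEY (customer_id, phone)
    )
""")

# 2NF: in an order_items table keyed by (order_id, product_id), the
# product's name would depend on only product_id -- a subset of the
# key -- and be repeated on every row. It is moved to its own table,
# related back by product_id.
cur.execute("""
    CREATE TABLE products (
        product_id   INTEGER PRIMARY KEY,
        product_name TEXT NOT NULL
    )
""")
cur.execute("""
    CREATE TABLE order_items (
        order_id   INTEGER NOT NULL,
        product_id INTEGER NOT NULL REFERENCES products(product_id),
        quantity   INTEGER NOT NULL,
        PRIMARY KEY (order_id, product_id)
    )
""")

# Sample data, then a query across the new relationship.
cur.execute("INSERT INTO customers VALUES (1, 'Ada')")
cur.executemany("INSERT INTO customer_phones VALUES (?, ?)",
                [(1, '555-0100'), (1, '555-0101')])
cur.execute("INSERT INTO products VALUES (10, 'Widget')")
cur.execute("INSERT INTO order_items VALUES (100, 10, 3)")

rows = cur.execute("""
    SELECT p.product_name, oi.quantity
    FROM order_items oi JOIN products p USING (product_id)
""").fetchall()
print(rows)  # [('Widget', 3)]
```

The payoff of the decomposition is visible in the last query: the product name is stored once and joined in where needed, rather than copied onto every order row.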