Normalization

In creating a database, normalization is the process of organizing it into tables in such a way that the results of using the database are always unambiguous and as intended. Normalization often results in the creation of additional tables and may duplicate some data within the database, chiefly the key values that link the new tables together. This controlled duplication is not redundancy, which is unnecessary duplication. Normalization is typically a refinement process applied after the initial exercise of identifying the data objects that belong in the database, identifying their relationships, and defining the tables required and the columns within each table.

A simple example of normalizing data might consist of a table showing:

Customer   Item purchased   Purchase price
Thomas     Shirt            $40
Maria      Tennis shoes     $35
Evelyn     Shirt            $40
Pajaro     Trousers         $25

If this table is used to keep track of item prices and you delete a customer, you may also delete a price: removing Pajaro's row above removes the only record that trousers cost $25. Normalizing the data means recognizing this problem and solving it by dividing the table into two tables, one recording each customer and the product they bought, the other recording each product and its price. Additions or deletions to either table then no longer affect the other.
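
A minimal sketch of this deletion anomaly, using Python's built-in sqlite3 module; the table and column names are hypothetical, chosen to match the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One flat table that mixes customer facts with product pricing.
cur.execute("CREATE TABLE purchases (customer TEXT, item TEXT, price INTEGER)")
cur.executemany(
    "INSERT INTO purchases VALUES (?, ?, ?)",
    [("Thomas", "Shirt", 40), ("Maria", "Tennis shoes", 35),
     ("Evelyn", "Shirt", 40), ("Pajaro", "Trousers", 25)],
)

# Pajaro is the only customer who bought trousers, so deleting the
# customer also deletes the only record of what trousers cost.
cur.execute("DELETE FROM purchases WHERE customer = 'Pajaro'")
print(cur.execute(
    "SELECT price FROM purchases WHERE item = 'Trousers'").fetchone())  # None
```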

Several degrees of normalization have been defined for relational database tables; they include:

First normal form (1NF). This is the "basic" level of normalization and generally corresponds to the definition of any database, namely (see the sketch after this list):

  • It contains two-dimensional tables with rows and columns.
  • Each column corresponds to a sub-object or an attribute of the object represented by the entire table.
  • Each row represents a unique instance of the object described by the table and must differ in some way from every other row (that is, no duplicate rows are possible).
  • All entries in any column must be of the same kind. For example, in the column labeled "Customer," only customer names or numbers are permitted.
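
As a quick sketch of these atomicity rules (with hypothetical data), a column that packs several values into one field breaks 1NF; repeating the row with one value each restores it:

```python
# Not in 1NF: the second column holds two values in one field.
not_1nf = [("Thomas", "Shirt, Socks")]

# In 1NF: every entry is a single atomic value, one fact per row.
in_1nf = [("Thomas", "Shirt"),
          ("Thomas", "Socks")]
```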

Second normal form (2NF). At this level of normalization, every column that is not part of a key must be a function of the table's entire key, not of just part of it. For example, in a table whose key is customer ID plus product sold and whose third column is the price of the product when sold, the price is a function of both the customer ID (a customer may be entitled to a discount) and the specific product, so the table is in 2NF.
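
A sketch of that three-column table, assuming SQLite and a hypothetical composite key of customer ID and product; the price column depends on the whole key, as 2NF requires:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# price is a function of the entire (customer_id, product) key:
# the customer's discount and the product together determine it.
conn.execute("""
    CREATE TABLE sales (
        customer_id INTEGER,
        product     TEXT,
        price       INTEGER,
        PRIMARY KEY (customer_id, product)
    )
""")
```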

Third normal form (3NF). At the second normal form, modification anomalies are still possible because a row can carry a fact that is really about something else; deleting the row then deletes that fact too. For example, using the customer table just cited, removing a row describing a customer purchase (because of a return, perhaps) also removes the fact that the product has a certain price. In the third normal form, this table would be divided into two tables so that product pricing is tracked separately.
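
Continuing the earlier sqlite3 sketch (names still hypothetical), the split design tracks pricing separately, so removing a purchase no longer removes a price:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Pricing lives in its own table, keyed by product.
cur.execute("CREATE TABLE products (item TEXT PRIMARY KEY, price INTEGER)")
# Purchases record only who bought what.
cur.execute(
    "CREATE TABLE purchases (customer TEXT, item TEXT REFERENCES products(item))")

cur.execute("INSERT INTO products VALUES ('Trousers', 25)")
cur.execute("INSERT INTO purchases VALUES ('Pajaro', 'Trousers')")

# A return deletes the purchase row, but the price survives.
cur.execute("DELETE FROM purchases WHERE customer = 'Pajaro'")
print(cur.execute(
    "SELECT price FROM products WHERE item = 'Trousers'").fetchone())  # (25,)
```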

Domain/key normal form (DKNF). A key uniquely identifies each row in a table. A domain is the set of permissible values for an attribute. By enforcing key and domain restrictions, the database is assured of being free of modification anomalies. DKNF is the normalization level most designers aim to achieve.
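
As a rough illustration of key and domain restrictions (again assuming SQLite and hypothetical names), a PRIMARY KEY enforces the key restriction and a CHECK clause enforces a domain restriction; the database then rejects rows that would violate either:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Key restriction: each item uniquely identifies a row.
# Domain restriction: price must be greater than zero.
conn.execute("""
    CREATE TABLE products (
        item  TEXT PRIMARY KEY,
        price INTEGER CHECK (price > 0)
    )
""")

conn.execute("INSERT INTO products VALUES ('Shirt', 40)")      # accepted
try:
    conn.execute("INSERT INTO products VALUES ('Socks', -5)")  # outside the domain
except sqlite3.IntegrityError as err:
    print(err)  # CHECK constraint failed
```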
