Importing Multiple Relational Database Tables into HDFS or Hive with Sqoop 1.4.4

Guiding questions:

1. Which Sqoop tool performs a multi-table import?

2. What three conditions must a multi-table import satisfy?

3. How do you import into a specific HDFS directory? How do you import into a specific Hive database?

1. Introduction

Sometimes we need to import several tables from a relational database into HDFS or Hive at once. For this, Sqoop provides another tool: sqoop-import-all-tables. Each table's data is stored in its own directory on HDFS, named after the table.

Before using a multi-table import, the following three conditions must all be met:

1. Each table must have a single-column primary key;

2. You must import all of each table's rows, not a subset;

3. You must use the default split column, and the import cannot impose any conditions via a WHERE clause.

The --table, --split-by, --columns, and --where arguments are not valid with sqoop-import-all-tables. The --exclude-tables argument can be used to exclude specific tables from the import; an example follows below. Otherwise, usage is much the same as a single-table import.
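
For instance, a minimal sketch of excluding one table while importing the rest, reusing the connection settings from section 3 below (excluding vmLog here is just an illustration):

[hadoopUser@secondmgt ~]$ sqoop-import-all-tables --connect jdbc:mysql://secondmgt:3306/spice --username hive --password hive --as-textfile --warehouse-dir /output/ --exclude-tables vmLog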

2. The relational tables

My spice database in MySQL contains the following four tables:

mysql> show tables;
+-----------------+
| Tables_in_spice |
+-----------------+
| servers         |
| users           |
| vmLog           |
| vms             |
+-----------------+
4 rows in set (0.00 sec)
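
Since condition 1 requires a single-column primary key, it is worth checking each table's primary key before running the import. One way to do this in MySQL (shown for the servers table; the others can be checked the same way):

mysql> SHOW KEYS FROM servers WHERE Key_name = 'PRIMARY';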

3. Importing multiple tables into HDFS

[hadoopUser@secondmgt ~]$ sqoop-import-all-tables --connect jdbc:mysql://secondmgt:3306/spice --username hive --password hive --as-textfile --warehouse-dir /output/
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
15/01/19 20:21:15 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/01/19 20:21:15 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
15/01/19 20:21:15 INFO tool.CodeGenTool: Beginning code generation
15/01/19 20:21:15 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `servers` AS t LIMIT 1
15/01/19 20:21:15 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `servers` AS t LIMIT 1
15/01/19 20:21:15 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoopUser/cloud/hadoop/programs/hadoop-2.2.0
Note: /tmp/sqoop-hadoopUser/compile/0bdbced5e58f170e1670516db3339f91/servers.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/01/19 20:21:16 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoopUser/compile/0bdbced5e58f170e1670516db3339f91/servers.jar
15/01/19 20:21:16 WARN manager.MySQLManager: It looks like you are importing from mysql.
15/01/19 20:21:16 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
15/01/19 20:21:16 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
15/01/19 20:21:16 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
15/01/19 20:21:16 INFO mapreduce.ImportJobBase: Beginning import of servers
15/01/19 20:21:16 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoopUser/cloud/hadoop/programs/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoopUser/cloud/hbase/hbase-0.96.2-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/01/19 20:21:17 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/01/19 20:21:17 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/01/19 20:21:17 INFO client.RMProxy: Connecting to ResourceManager at secondmgt/192.168.2.133:8032
15/01/19 20:21:18 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`src_id`), MAX(`src_id`) FROM `servers`
15/01/19 20:21:18 INFO mapreduce.JobSubmitter: number of splits:3
15/01/19 20:21:19 INFO Configuration.deprecation: mapred.job.classpath.files is deprecated. Instead, use mapreduce.job.classpath.files
15/01/19 20:21:19 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
15/01/19 20:21:19 INFO Configuration.deprecation: mapred.cache.files.filesizes is deprecated. Instead, use mapreduce.job.cache.files.filesizes
15/01/19 20:21:19 INFO Configuration.deprecation: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
15/01/19 20:21:19 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
15/01/19 20:21:19 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
15/01/19 20:21:19 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
15/01/19 20:21:19 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
15/01/19 20:21:19 INFO Configuration.deprecation: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
15/01/19 20:21:19 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
15/01/19 20:21:19 INFO Configuration.deprecation: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
15/01/19 20:21:19 INFO Configuration.deprecation: mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
15/01/19 20:21:19 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
15/01/19 20:21:19 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
15/01/19 20:21:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1421373857783_0035
15/01/19 20:21:19 INFO impl.YarnClientImpl: Submitted application application_1421373857783_0035 to ResourceManager at secondmgt/192.168.2.133:8032
15/01/19 20:21:19 INFO mapreduce.Job: The url to track the job: http://secondmgt:8088/proxy/application_1421373857783_0035/
15/01/19 20:21:19 INFO mapreduce.Job: Running job: job_1421373857783_0035
15/01/19 20:21:32 INFO mapreduce.Job: Job job_1421373857783_0035 running in uber mode : false
15/01/19 20:21:32 INFO mapreduce.Job:  map 0% reduce 0%
15/01/19 20:21:43 INFO mapreduce.Job:  map 33% reduce 0%
15/01/19 20:21:46 INFO mapreduce.Job:  map 67% reduce 0%
15/01/19 20:21:48 INFO mapreduce.Job:  map 100% reduce 0%
15/01/19 20:21:48 INFO mapreduce.Job: Job job_1421373857783_0035 completed successfully
15/01/19 20:21:48 INFO mapreduce.Job: Counters: 27
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=275913
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=319
                HDFS: Number of bytes written=39
                HDFS: Number of read operations=12
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=6
        Job Counters
                Launched map tasks=3
                Other local map tasks=3
                Total time spent by all maps in occupied slots (ms)=127888
                Total time spent by all reduces in occupied slots (ms)=0
        Map-Reduce Framework
                Map input records=3
                Map output records=3
                Input split bytes=319
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=201
                CPU time spent (ms)=7470
                Physical memory (bytes) snapshot=439209984
                Virtual memory (bytes) snapshot=2636304384
                Total committed heap usage (bytes)=252706816
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=39
15/01/19 20:21:48 INFO mapreduce.ImportJobBase: Transferred 39 bytes in 30.8769 seconds (1.2631 bytes/sec)
15/01/19 20:21:48 INFO mapreduce.ImportJobBase: Retrieved 3 records.
15/01/19 20:21:48 INFO tool.CodeGenTool: Beginning code generation
15/01/19 20:21:48 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `users` AS t LIMIT 1
15/01/19 20:21:48 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoopUser/cloud/hadoop/programs/hadoop-2.2.0
Note: /tmp/sqoop-hadoopUser/compile/0bdbced5e58f170e1670516db3339f91/users.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/01/19 20:21:48 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoopUser/compile/0bdbced5e58f170e1670516db3339f91/users.jar
15/01/19 20:21:48 INFO mapreduce.ImportJobBase: Beginning import of users
15/01/19 20:21:48 INFO client.RMProxy: Connecting to ResourceManager at secondmgt/192.168.2.133:8032
15/01/19 20:21:49 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `users`
15/01/19 20:21:49 INFO mapreduce.JobSubmitter: number of splits:4
15/01/19 20:21:49 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1421373857783_0036
15/01/19 20:21:49 INFO impl.YarnClientImpl: Submitted application application_1421373857783_0036 to ResourceManager at secondmgt/192.168.2.133:8032
15/01/19 20:21:49 INFO mapreduce.Job: The url to track the job: http://secondmgt:8088/proxy/application_1421373857783_0036/
15/01/19 20:21:49 INFO mapreduce.Job: Running job: job_1421373857783_0036
15/01/19 20:22:02 INFO mapreduce.Job: Job job_1421373857783_0036 running in uber mode : false
15/01/19 20:22:02 INFO mapreduce.Job:  map 0% reduce 0%
15/01/19 20:22:13 INFO mapreduce.Job:  map 25% reduce 0%
15/01/19 20:22:18 INFO mapreduce.Job:  map 75% reduce 0%
15/01/19 20:22:23 INFO mapreduce.Job:  map 100% reduce 0%
15/01/19 20:22:23 INFO mapreduce.Job: Job job_1421373857783_0036 completed successfully
15/01/19 20:22:23 INFO mapreduce.Job: Counters: 27
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=368040
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=401
                HDFS: Number of bytes written=521
                HDFS: Number of read operations=16
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=8
        Job Counters
                Launched map tasks=4
                Other local map tasks=4
                Total time spent by all maps in occupied slots (ms)=175152
                Total time spent by all reduces in occupied slots (ms)=0
        Map-Reduce Framework
                Map input records=13
                Map output records=13
                Input split bytes=401
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=257
                CPU time spent (ms)=10250
                Physical memory (bytes) snapshot=627642368
                Virtual memory (bytes) snapshot=3547209728
                Total committed heap usage (bytes)=335544320
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=521
15/01/19 20:22:23 INFO mapreduce.ImportJobBase: Transferred 521 bytes in 34.6285 seconds (15.0454 bytes/sec)
15/01/19 20:22:23 INFO mapreduce.ImportJobBase: Retrieved 13 records.
15/01/19 20:22:23 INFO tool.CodeGenTool: Beginning code generation
15/01/19 20:22:23 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `vmLog` AS t LIMIT 1
15/01/19 20:22:23 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoopUser/cloud/hadoop/programs/hadoop-2.2.0
Note: /tmp/sqoop-hadoopUser/compile/0bdbced5e58f170e1670516db3339f91/vmLog.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/01/19 20:22:23 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoopUser/compile/0bdbced5e58f170e1670516db3339f91/vmLog.jar
15/01/19 20:22:23 INFO mapreduce.ImportJobBase: Beginning import of vmLog
15/01/19 20:22:23 INFO client.RMProxy: Connecting to ResourceManager at secondmgt/192.168.2.133:8032
15/01/19 20:22:24 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `vmLog`
15/01/19 20:22:24 INFO mapreduce.JobSubmitter: number of splits:4
15/01/19 20:22:24 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1421373857783_0037
15/01/19 20:22:24 INFO impl.YarnClientImpl: Submitted application application_1421373857783_0037 to ResourceManager at secondmgt/192.168.2.133:8032
15/01/19 20:22:24 INFO mapreduce.Job: The url to track the job: http://secondmgt:8088/proxy/application_1421373857783_0037/
15/01/19 20:22:24 INFO mapreduce.Job: Running job: job_1421373857783_0037
15/01/19 20:22:37 INFO mapreduce.Job: Job job_1421373857783_0037 running in uber mode : false
15/01/19 20:22:37 INFO mapreduce.Job:  map 0% reduce 0%
15/01/19 20:22:47 INFO mapreduce.Job:  map 25% reduce 0%
15/01/19 20:22:52 INFO mapreduce.Job:  map 75% reduce 0%
15/01/19 20:22:58 INFO mapreduce.Job:  map 100% reduce 0%
15/01/19 20:22:58 INFO mapreduce.Job: Job job_1421373857783_0037 completed successfully
15/01/19 20:22:59 INFO mapreduce.Job: Counters: 27
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=367872
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=398
                HDFS: Number of bytes written=635
                HDFS: Number of read operations=16
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=8
        Job Counters
                Launched map tasks=4
                Other local map tasks=4
                Total time spent by all maps in occupied slots (ms)=171552
                Total time spent by all reduces in occupied slots (ms)=0
        Map-Reduce Framework
                Map input records=23
                Map output records=23
                Input split bytes=398
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=182
                CPU time spent (ms)=10480
                Physical memory (bytes) snapshot=588107776
                Virtual memory (bytes) snapshot=3523424256
                Total committed heap usage (bytes)=337641472
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=635
15/01/19 20:22:59 INFO mapreduce.ImportJobBase: Transferred 635 bytes in 35.147 seconds (18.067 bytes/sec)
15/01/19 20:22:59 INFO mapreduce.ImportJobBase: Retrieved 23 records.
15/01/19 20:22:59 INFO tool.CodeGenTool: Beginning code generation
15/01/19 20:22:59 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `vms` AS t LIMIT 1
15/01/19 20:22:59 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoopUser/cloud/hadoop/programs/hadoop-2.2.0
Note: /tmp/sqoop-hadoopUser/compile/0bdbced5e58f170e1670516db3339f91/vms.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/01/19 20:22:59 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoopUser/compile/0bdbced5e58f170e1670516db3339f91/vms.jar
15/01/19 20:22:59 INFO mapreduce.ImportJobBase: Beginning import of vms
15/01/19 20:22:59 INFO client.RMProxy: Connecting to ResourceManager at secondmgt/192.168.2.133:8032
15/01/19 20:23:00 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `vms`
15/01/19 20:23:00 INFO mapreduce.JobSubmitter: number of splits:4
15/01/19 20:23:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1421373857783_0038
15/01/19 20:23:00 INFO impl.YarnClientImpl: Submitted application application_1421373857783_0038 to ResourceManager at secondmgt/192.168.2.133:8032
15/01/19 20:23:00 INFO mapreduce.Job: The url to track the job: http://secondmgt:8088/proxy/application_1421373857783_0038/
15/01/19 20:23:00 INFO mapreduce.Job: Running job: job_1421373857783_0038
15/01/19 20:23:13 INFO mapreduce.Job: Job job_1421373857783_0038 running in uber mode : false
15/01/19 20:23:13 INFO mapreduce.Job:  map 0% reduce 0%
15/01/19 20:23:24 INFO mapreduce.Job:  map 25% reduce 0%
15/01/19 20:23:28 INFO mapreduce.Job:  map 75% reduce 0%
15/01/19 20:23:34 INFO mapreduce.Job:  map 100% reduce 0%
15/01/19 20:23:35 INFO mapreduce.Job: Job job_1421373857783_0038 completed successfully
15/01/19 20:23:35 INFO mapreduce.Job: Counters: 27
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=367932
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=401
                HDFS: Number of bytes written=240
                HDFS: Number of read operations=16
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=8
        Job Counters
                Launched map tasks=4
                Other local map tasks=4
                Total time spent by all maps in occupied slots (ms)=168328
                Total time spent by all reduces in occupied slots (ms)=0
        Map-Reduce Framework
                Map input records=8
                Map output records=8
                Input split bytes=401
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=210
                CPU time spent (ms)=10990
                Physical memory (bytes) snapshot=600018944
                Virtual memory (bytes) snapshot=3536568320
                Total committed heap usage (bytes)=335544320
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=240
15/01/19 20:23:35 INFO mapreduce.ImportJobBase: Transferred 240 bytes in 35.9131 seconds (6.6828 bytes/sec)
15/01/19 20:23:35 INFO mapreduce.ImportJobBase: Retrieved 8 records.

As the import log shows, a multi-table import is really just a series of single-table imports run one after another.
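
Each sub-import runs as its own MapReduce job with Sqoop's default of four map tasks (servers was split only three ways because it contains just three rows). If you want less parallelism, the generic -m/--num-mappers argument should work here as well; a sketch, assuming the same connection settings:

[hadoopUser@secondmgt ~]$ sqoop-import-all-tables --connect jdbc:mysql://secondmgt:3306/spice --username hive --password hive --as-textfile --warehouse-dir /output/ -m 1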

View the import results:

[hadoopUser@secondmgt ~]$ hadoop fs -ls /output/
Found 4 items
drwxr-xr-x   - hadoopUser supergroup          0 2015-01-19 20:21 /output/servers
drwxr-xr-x   - hadoopUser supergroup          0 2015-01-19 20:22 /output/users
drwxr-xr-x   - hadoopUser supergroup          0 2015-01-19 20:22 /output/vmLog
drwxr-xr-x   - hadoopUser supergroup          0 2015-01-19 20:23 /output/vms
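
Each directory holds the usual part files written by the map tasks; to spot-check the imported rows for a table, something like the following should work:

[hadoopUser@secondmgt ~]$ hadoop fs -cat /output/servers/part-m-*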

4. Importing multiple tables into Hive

Next we import the same four tables into Hive:

[hadoopUser@secondmgt ~]$ sqoop-import-all-tables --connect jdbc:mysql://secondmgt:3306/spice --username hive --password hive --hive-import --as-textfile --create-hive-table

View the results:

hive> show tables;
OK
servers
users
vmlog
vms
Time taken: 0.022 seconds, Fetched: 4 row(s)
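
Note that Hive folds table names to lower case, which is why vmLog appears as vmlog. To verify the data itself, you can query any of the imported tables directly, for example:

hive> select * from users limit 5;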

By default, the tables are imported into the default database. To import into a specific Hive database instead, use the --hive-database argument, as follows:

[hadoopUser@secondmgt ~]$ sqoop-import-all-tables --connect jdbc:mysql://secondmgt:3306/spice --username hive --password hive --hive-import --hive-database test --as-textfile --create-hive-table
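
As far as I know, Sqoop will not create the target Hive database for you, so make sure it exists before running the import:

hive> CREATE DATABASE IF NOT EXISTS test;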