Migrating data between MySQL and the distributed database TiDB, using the mydumper tool set.
The migration takes two steps:
Step 1: dump the data to the local machine. Make sure there is enough free disk space locally (a pre-check sketch follows the script below).
import os
import sys
import datetime
import subprocess

# Source database and table to dump
src_db1 = 'test1'
src_table1 = 'table1'

# Timestamped output directory, e.g. /home/coconut/backup/test1/table1/<YYYYmmdd_HHMM>
dump_time1 = datetime.datetime.now().strftime("%Y%m%d_%H%M")
file_path1 = '/home/coconut/backup/%s/%s/%s' % (src_db1, src_table1, dump_time1)
os.system("mkdir -p %s" % (file_path1))

dict1 = {
    'host'      : "mysql1.yourcompany1.com",
    'user'      : "reader1",
    'password'  : '108749512d78aa131a8eeb8d1c067ba3',
    'database'  : src_db1,
    'table'     : src_table1,
    'outputdir' : file_path1,
}

# 8 dump threads (-t 8), 64 MB chunk files (-F 64), skip the shared read lock (-k)
dump_command = ("mydumper -h %(host)s -P 3306 -u %(user)s -p %(password)s "
                "-t 8 -F 64 -k -B %(database)s -T %(table)s -o %(outputdir)s") % dict1

p = subprocess.Popen(dump_command, shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, close_fds=True)
output, err = p.communicate()
print(datetime.datetime.now())
if p.returncode != 0:
    print(err)
    err_str1 = "\n%s\n%s\n%s" % (dump_command, output, err)
    sys.stderr.write(err_str1)
    sys.exit(1)
else:
    print('dump done')
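The dump lands on local disk, so it is worth confirming there is enough free space before kicking off mydumper. The snippet below is a minimal pre-check sketch, not part of the original script: it reuses file_path1 from above, relies on shutil.disk_usage (Python 3.3+), and the 20 GB threshold is an arbitrary example value.

import sys
import shutil

# Abort early if the dump directory's filesystem has too little free space;
# the 20 GB threshold is only an example, size it to the table being dumped
required_bytes = 20 * 1024 ** 3
free_bytes = shutil.disk_usage(file_path1).free
if free_bytes < required_bytes:
    sys.stderr.write("not enough free disk space under %s\n" % file_path1)
    sys.exit(1)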
Step 2: restore the data into the distributed TiDB cluster (a row-count sanity check follows the script below).
import sys
import datetime
import subprocess

# Target TiDB database; file_path1 is the mydumper output directory from step 1
dst_db1 = 'test1'

dict2 = {
    'host'      : "tidb1.yourcompany1.com",
    'user'      : "write1",
    'password'  : 'd1b27b715aa04694926a8c8539668193',
    'outputdir' : file_path1,
    'database'  : dst_db1,
}

# Load with a single thread (-t 1) and 2 queries per transaction (-q 2),
# which keeps write transactions small on the TiDB side
restore_command = ("myloader -h %(host)s -P 3306 -u %(user)s -p %(password)s "
                   "-t 1 -q 2 -B %(database)s -d %(outputdir)s") % dict2
print(restore_command)

p = subprocess.Popen(restore_command, shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, close_fds=True)
output, err = p.communicate()
print(datetime.datetime.now())
if p.returncode != 0:
    print(err)
    err_str1 = "\n%s\n%s\n%s" % (restore_command, output, err)
    sys.stderr.write(err_str1)
    sys.exit(1)
else:
    print('restore done')
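Once myloader returns, a quick sanity check is to compare row counts between the source table in MySQL and the restored table in TiDB (TiDB speaks the MySQL protocol, so the same client works for both). The sketch below is illustrative and not from the original post: it shells out to the mysql command-line client, reuses the placeholder hosts and credentials from the scripts above, and a full COUNT(*) may be slow on very large tables.

import subprocess

def count_rows(host, user, password, db, table):
    # Run SELECT COUNT(*) through the mysql CLI and return the result as an int
    cmd = ("mysql -h %s -P 3306 -u %s -p%s -N -B "
           "-e 'SELECT COUNT(*) FROM %s.%s'") % (host, user, password, db, table)
    return int(subprocess.check_output(cmd, shell=True).strip())

src_rows = count_rows("mysql1.yourcompany1.com", "reader1",
                      "108749512d78aa131a8eeb8d1c067ba3", src_db1, src_table1)
dst_rows = count_rows("tidb1.yourcompany1.com", "write1",
                      "d1b27b715aa04694926a8c8539668193", dst_db1, src_table1)
print("source rows: %d, tidb rows: %d" % (src_rows, dst_rows))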
==============================================
Appendix 1: Installing mydumper
1. Switch to the root user:
$ su root
2. Update the package lists
# apt-get update
Hit:1 http://mirrors.tuna.tsinghua.edu.cn/debian buster InRelease
Get:2 http://mirrors.tuna.tsinghua.edu.cn/debian buster-updates InRelease [49.3 kB]
Get:3 http://security.debian.org/debian-security buster/updates InRelease [39.1 kB]
Reading package lists... Done
E: Release file for http://mirrors.tuna.tsinghua.edu.cn/debian/dists/buster-updates/InRelease is not valid yet (invalid for another 14d 10h 53min 10s). Updates for this repository will not be applied.
E: Release file for http://security.debian.org/debian-security/dists/buster/updates/InRelease is not valid yet (invalid for another 14d 16h 6min 53s). Updates for this repository will not be applied.
3. Search for the mydumper package
# apt-cache search mydumper
mydumper - High-performance MySQL backup tool
mydumper-doc - High-performance MySQL backup tool - documentation
4. Install it
# apt-get install mydumper
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libmariadb3 mariadb-common mysql-common
Suggested packages:
mydumper-doc
The following NEW packages will be installed:
libmariadb3 mariadb-common mydumper mysql-common
0 upgraded, 4 newly installed, 0 to remove and 2 not upgraded.
Need to get 254 kB of archives.
After this operation, 786 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirrors.tuna.tsinghua.edu.cn/debian buster/main amd64 mysql-common all 5.8+1.0.5 [7,324 B]
Get:2 http://mirrors.tuna.tsinghua.edu.cn/debian buster/main amd64 mariadb-common all 1:10.3.17-0+deb10u1 [31.6 kB]
Get:3 http://mirrors.tuna.tsinghua.edu.cn/debian buster/main amd64 libmariadb3 amd64 1:10.3.17-0+deb10u1 [170 kB]
Get:4 http://mirrors.tuna.tsinghua.edu.cn/debian buster/main amd64 mydumper amd64 0.9.5-1 [44.6 kB]
Fetched 254 kB in 1s (452 kB/s)
Selecting previously unselected package mysql-common.
(Reading database ... 56346 files and directories currently installed.)
Preparing to unpack .../mysql-common_5.8+1.0.5_all.deb ...
Unpacking mysql-common (5.8+1.0.5) ...
Selecting previously unselected package mariadb-common.
Preparing to unpack .../mariadb-common_1%3a10.3.17-0+deb10u1_all.deb ...
Unpacking mariadb-common (1:10.3.17-0+deb10u1) ...
Selecting previously unselected package libmariadb3:amd64.
Preparing to unpack .../libmariadb3_1%3a10.3.17-0+deb10u1_amd64.deb ...
Unpacking libmariadb3:amd64 (1:10.3.17-0+deb10u1) ...
Selecting previously unselected package mydumper.
Preparing to unpack .../mydumper_0.9.5-1_amd64.deb ...
Unpacking mydumper (0.9.5-1) ...
Setting up mysql-common (5.8+1.0.5) ...
update-alternatives: using /etc/mysql/my.cnf.fallback to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Setting up mariadb-common (1:10.3.17-0+deb10u1) ...
update-alternatives: using /etc/mysql/mariadb.cnf to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Setting up libmariadb3:amd64 (1:10.3.17-0+deb10u1) ...
Setting up mydumper (0.9.5-1) ...
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for libc-bin (2.28-10) ...
5. Verify that the installation works:
# mydumper --help
Usage:
mydumper [OPTION…] multi-threaded MySQL dumping
Help Options:
-?, --help Show help options
Application Options:
-B, --database Database to dump
-T, --tables-list Comma delimited table list to dump (does not exclude regex option)
-O, --omit-from-file File containing a list of database.table entries to skip, one per line (skips before applying regex option)
-o, --outputdir Directory to output files to
-s, --statement-size Attempted size of INSERT statement in bytes, default 1000000
-r, --rows Try to split tables into chunks of this many rows. This option turns off --chunk-filesize
-F, --chunk-filesize Split tables into chunks of this output file size. This value is in MB
-c, --compress Compress output files
-e, --build-empty-files Build dump files even if no data available from table
-x, --regex Regular expression for 'db.table' matching
-i, --ignore-engines Comma delimited list of storage engines to ignore
-N, --insert-ignore Dump rows with INSERT IGNORE
-m, --no-schemas Do not dump table schemas with the data
-d, --no-data Do not dump table data
-G, --triggers Dump triggers
-E, --events Dump events
-R, --routines Dump stored procedures and functions
-W, --no-views Do not dump VIEWs
-k, --no-locks Do not execute the temporary shared read lock. WARNING: This will cause inconsistent backups
--no-backup-locks Do not use Percona backup locks
--less-locking Minimize locking time on InnoDB tables.
-l, --long-query-guard Set long query timer in seconds, default 60
-K, --kill-long-queries Kill long running queries (instead of aborting)
-D, --daemon Enable daemon mode
-I, --snapshot-interval Interval between each dump snapshot (in minutes), requires --daemon, default 60
-L, --logfile Log file name to use, by default stdout is used
--tz-utc SET TIME_ZONE='+00:00' at top of dump to allow dumping of TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different timezones, defaults to on use --skip-tz-utc to disable.
--skip-tz-utc
--use-savepoints Use savepoints to reduce metadata locking issues, needs SUPER privilege
--success-on-1146 Not increment error count and Warning instead of Critical in case of table doesn't exist
--lock-all-tables Use LOCK TABLE for all, instead of FTWRL
-U, --updated-since Use Update_time to dump only tables updated in the last U days
--trx-consistency-only Transactional consistency only
--complete-insert Use complete INSERT statements that include column names
-h, --host The host to connect to
-u, --user Username with the necessary privileges
-p, --password User password
-a, --ask-password Prompt For User password
-P, --port TCP/IP port to connect to
-S, --socket UNIX domain socket file to use for connection
-t, --threads Number of threads to use, default 4
-C, --compress-protocol Use compression on the MySQL connection
-V, --version Show the program version and exit
-v, --verbose Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
--defaults-file Use a specific defaults file