auto-extending data file ./ibdata1 is of a different size

160315 17:08:19 mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data
160315 17:08:19 [Warning] Using unique option prefix thread_cache instead of thread_cache_size is deprecated and will be removed in a future release. Please use the full name instead.
160315 17:08:19 InnoDB: The InnoDB memory heap is disabled
160315 17:08:19 InnoDB: Mutexes and rw_locks use GCC atomic builtins
160315 17:08:19 InnoDB: Compressed tables use zlib 1.2.3
160315 17:08:19 InnoDB: Initializing buffer pool, size = 4.0G
160315 17:08:19 InnoDB: Completed initialization of buffer pool
InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
InnoDB: 1152 pages (rounded down to MB) than specified in the .cnf file:
InnoDB: initial 65536 pages, max 0 (relevant if non-zero) pages!
160315 17:08:19 InnoDB: Could not open or create data files.
160315 17:08:19 InnoDB: If you tried to add new data files, and it failed here,
160315 17:08:19 InnoDB: you should now edit innodb_data_file_path in my.cnf back
160315 17:08:19 InnoDB: to what it was, and remove the new ibdata files InnoDB created
160315 17:08:19 InnoDB: in this failed attempt. InnoDB only wrote those files full of
160315 17:08:19 InnoDB: zeros, but did not yet use them in any way. But be careful: do not
160315 17:08:19 InnoDB: remove old data files which contain your precious data!
160315 17:08:19 [ERROR] Plugin 'InnoDB' init function returned error.
160315 17:08:19 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
160315 17:08:19 [ERROR] Unknown/unsupported storage engine: InnoDB
160315 17:08:19 [ERROR] Aborting

160315 17:08:19 [Note] /application/mysql5.5.38/bin/mysqld: Shutdown complete

160315 17:08:19 mysqld_safe mysqld from pid file /usr/local/mysql/data/Mysql.master.xxjy.com.pid ended
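The mismatch is plain arithmetic: with InnoDB's default 16 KB page size, the ibdata1 on disk holds 1152 pages (18 MB), while the innodb_data_file_path in my.cnf asks for an initial 65536 pages (1 GB). A quick sanity check (page counts come straight from the log; the 16 KB page size is the MySQL 5.5 default):

```shell
PAGE_SIZE=16384     # InnoDB default page size in MySQL 5.5
ACTUAL_PAGES=1152   # size of ./ibdata1 reported in the error log
SPEC_PAGES=65536    # "initial 65536 pages" from innodb_data_file_path

ACTUAL_MB=$((ACTUAL_PAGES * PAGE_SIZE / 1024 / 1024))
SPEC_MB=$((SPEC_PAGES * PAGE_SIZE / 1024 / 1024))
echo "on disk:    ${ACTUAL_MB} MB"    # 18 MB
echo "configured: ${SPEC_MB} MB"      # 1024 MB
```

InnoDB refuses to start because the existing file does not match what the configuration declares, so either the file or the configuration has to change.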

Solution: remove ibdata1 and move the redo logs (ib_logfile0 / ib_logfile1) out of the data directory so InnoDB recreates them at the configured size:

rm ibdata1

mv ib_logfile0 /root/tools/
mv ib_logfile1 /root/tools/

Then restart the service.
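The steps above can be sketched as a small script. Scratch directories stand in for the real paths (DATADIR=/usr/local/mysql/data, BACKUP=/root/tools in the post), and the restart is left as a comment. Be warned: removing ibdata1 discards all InnoDB tables, so this is only safe on a fresh instance or one whose data is already backed up.

```shell
#!/bin/sh
set -e
# Scratch directories simulate the real datadir and backup location.
DATADIR=$(mktemp -d)
BACKUP=$(mktemp -d)
touch "$DATADIR/ibdata1" "$DATADIR/ib_logfile0" "$DATADIR/ib_logfile1"

# Move -- don't rm -- so the files can be restored if anything goes wrong.
mv "$DATADIR/ibdata1" "$DATADIR/ib_logfile0" "$DATADIR/ib_logfile1" "$BACKUP/"

ls "$BACKUP"
# On the real server, restart mysqld so InnoDB recreates ibdata1 and the
# redo logs at the sizes configured in my.cnf:
#   /etc/init.d/mysqld restart
```

Moving instead of deleting is exactly what the log message advises: do not remove old data files that may contain your data until the new instance is confirmed working.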

Date: 2024-10-16 01:11:45

auto-extending data file ./ibdata1 is of a different size auto-extending data file ./ibdata1 is of a different size的相关文章

Method and apparatus for encoding data to be self-describing by storing tag records describing said data terminated by a self-referential record

A computer-implemented method and apparatus in a computer system of processing data generated by a first application program in a second application program during runtime. During runtime, the first application program generates a record including a

Failed to collect certificates from /data/app/vmdl201020547.tmp/base.apk: META-INF/CERT.SF indicates /data/app/vmdl201020547.tmp/base.apk is signed using APK Signature Scheme v2, but no such signature

错误信息: 12-26 11:08:44.809 1501-1535/system_process E/PackageInstaller: Commit of session 201020547 failed: Failed to collect certificates from /data/app/vmdl201020547.tmp/base.apk: META-INF/CERT.SF indicates /data/app/vmdl201020547.tmp/base.apk is sig

css中元素的auto属性值是什么意思,比如margin:0 auto表示什么?

auto 你可以理解为一种 自动/自适应 的概念 比如 现在项目需要一个宽度为960px的整体布局居中 根据用户浏览器大小不同你将需要使用margin:0 auto;来实现. 无论用户浏览器宽度为多少.960px的定位宽度永远居中. css中的auto是自动适应的意思,而在css中auto往往都是默认值. 正如margin:0 auto,意思就是上下边距为0,左右边距为auto,就是自动适应.但是,如果要使用他的话,就必须给标签配上指定的宽度,如下:<div class="center&q

解决VTune错误The Data Cannot be displayed, there is no viewpoint available for data

错误信息: Error Cannot display data The data cannot be displayed: there is no viewpoint application for the data 错误出现情景: 在对程序做Hardware Event-based Sampling Analysis 0分析时,出现上述错误. 错误解决方法: 删掉现有的Hardware Event-based Sampling Analysis 0,重建一个,并如下设置: (1)添加事件 CP

ls you no 2&gt;&1 1&gt;&2|egrep \* &gt;file 和 (ls you no 2&gt;&1) 1&gt;&2|egrep \* &gt;file 执行结果不一样

1.ls you no 2>&1 1>&2|egrep \* >file2.(ls you no 2>&1) 1>&2|egrep \* >file 谁shell比较熟悉,这个脚本,为什么执行结果不一样? 回答群友问题 1.ls you no 2>&1 1>&2|egrep \* >file这句第一个2>&1把标准输出的管道复制给了2,所以1和2都走的标准输出,后面1>&2时由

hive对于lzo文件处理异常Caused by: java.io.IOException: Compressed length 842086665 exceeds max block size 67108864 (probably corrupt file)

hive查询lzo数据格式文件的表时,抛 Caused by: java.io.IOException: Compressed length 842086665 exceeds max block size 67108864 (probably corrupt file) 这类异常,如图: 这是由于lzo文件数过多,hive执行时默认是不会自动先合并lzo等压缩文件再计算,需要设置hive对应的参数,告诉它在执行计算之前,先合并较多的压缩文件 在执行hive的sql之前需要加上 set hive

为data磁盘组删除当中一个盘(asm external data盘组中有两块盘)

删除磁盘,注意,假设删掉磁盘之后.数据在剩余磁盘中.是否有足够空间存储.假设空间不够.删除工作不会成功. 检查空间够不够: select a.GROUP_NUMBER,a.DISK_NUMBER,a.NAME ,decode(sign(a.FREE_MB-d.COLD_USED_MB/ 2),1 ,'Y',- 1,'N' ,'N') from v$asm_diskgroup d,v$asm_disk a where a.GROUP_NUMBER = d.GROUP_NUMBER and a.GR

为data磁盘组删除其中一个盘(asm external data盘组中有两块盘)

删除磁盘,注意,如果删掉磁盘之后,数据在剩余磁盘中,是否有足够空间存储.如果空间不够,删除工作不会成功. 检查空间够不够: select a.GROUP_NUMBER,a.DISK_NUMBER,a.NAME ,decode(sign(a.FREE_MB-d.COLD_USED_MB/ 2),1 ,'Y',- 1,'N' ,'N') from v$asm_diskgroup d,v$asm_disk a where a.GROUP_NUMBER = d.GROUP_NUMBER and a.GR

R 语言中 data table 的相关,内存高效的 增量式 data frame

面对的是这样一个问题,不断读入一行一行数据,append到data frame上,如果用dataframe,  rbind() ,可以发现数据大的时候效率明显变低. 原因是 每次bind 都是一次重新整个数据集的重新拷贝 这个链接有人测试了各种方案,似乎给出了最优方案 http://stackoverflow.com/questions/11486369/growing-a-data-frame-in-a-memory-efficient-manner library(data.table) d

判断input file 里面的图片是否为空,并把file图片文件显示在另一个地方

var eleFile = document.querySelector('#file_1'); eleFile.addEventListener('change', function() { var file = this.files[0]; // 确认选择的文件是图片 if(file.type.indexOf("image") == 0) { var reader = new FileReader(); reader.readAsDataURL(file); reader.onlo