Fixing a replication slave that keeps crashing and auto-restarting (mysqld got signal 11)

1. Problem description

An alert email arrived today, so I logged into the database to check the slave status and found both the I/O thread and the SQL thread showing No.

mysql> show slave status \G;
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: (not shown here)
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master1-bin.001191
          Read_Master_Log_Pos: 29214749
               Relay_Log_File: web_appdb_10-relay-bin.000663
                Relay_Log_Pos: 29213639
        Relay_Master_Log_File: master1-bin.001191
            Slave_IO_Running: No
           Slave_SQL_Running: No
              Replicate_Do_DB:
         Replicate_Ignore_DB:
          Replicate_Do_Table:
      Replicate_Ignore_Table:
     Replicate_Wild_Do_Table: ccda.%,eip_fileservice.%
 Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
         Exec_Master_Log_Pos: 29213491
              Relay_Log_Space: 29215212
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
          Master_SSL_Allowed: No
          Master_SSL_CA_File:
          Master_SSL_CA_Path:
              Master_SSL_Cert:
           Master_SSL_Cipher:
               Master_SSL_Key:
       Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
 Replicate_Ignore_Server_Ids:
            Master_Server_Id: 0
1 row in set (0.01 sec)

ERROR:
No query specified

I tried starting the slave and then checked the status again; to my surprise it errored, saying it could not connect to the database.

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status \G;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
ERROR:
Can't connect to the server

ERROR:
No query specified

After several more attempts I managed to log back in; checking replication again, the threads were still No.

mysql> show slave status \G;
No connection. Trying to reconnect...
Connection id:    1
Current database: *** NONE ***

*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 10.0.3.34
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master1-bin.001191
          Read_Master_Log_Pos: 29214749
               Relay_Log_File: web_appdb_10-relay-bin.000663
                Relay_Log_Pos: 29213639
       Relay_Master_Log_File: master1-bin.001191
            Slave_IO_Running: No
           Slave_SQL_Running: No
              Replicate_Do_DB:
         Replicate_Ignore_DB:
          Replicate_Do_Table:
      Replicate_Ignore_Table:
     Replicate_Wild_Do_Table: ccda.%,eip_fileservice.%
 Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                  Last_Error:
                 Skip_Counter: 0
         Exec_Master_Log_Pos: 29213491
              Relay_Log_Space: 29215426
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
          Master_SSL_Allowed: No
          Master_SSL_CA_File:
          Master_SSL_CA_Path:
              Master_SSL_Cert:
           Master_SSL_Cipher:
               Master_SSL_Key:
       Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
               Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
 Replicate_Ignore_Server_Ids:
            Master_Server_Id: 0
1 row in set (0.00 sec)

ERROR:
No query specified

I found that as soon as I ran start slave, mysqld on this server restarted itself.

Furthermore, start slave io_thread was harmless; only start slave sql_thread triggered the restart.
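For reference, the two replication threads can be started independently, which is how the crashing thread was isolated (a minimal session sketch):

```sql
START SLAVE IO_THREAD;   -- fetches binlog events from the master; this ran fine
START SLAVE SQL_THREAD;  -- replays the relay log; this is what crashed mysqld
SHOW SLAVE STATUS\G
```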

Checking the error log:

160429 9:09:00 [Note] Event Scheduler: Loaded 0 events
160429 9:09:00 [Note] /usr/local/mysql/bin/mysqld: ready for connections.
Version: '5.5.19-log'  socket: '/tmp/mysql.sock'  port: 3306 Source distribution
160429 11:04:47 [Note] Slave SQL thread initialized, starting replication in log 'master1-bin.001191' at position 29213491, relay log './web_appdb_10-relay-bin.000663' position: 29213639
160429 11:04:47 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=268435456
read_buffer_size=6291456
max_used_connections=3
max_threads=2000
thread_count=2
connection_count=2
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 20764878 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x2ab2f1b54740
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x594310e8 thread_stack 0x30000
/usr/local/mysql/bin/mysqld(my_print_stacktrace+0x33)[0x765df3]
/usr/local/mysql/bin/mysqld(handle_segfault+0x36e)[0x4ee4fe]
/lib64/libpthread.so.0[0x31a640ebe0]
/usr/local/mysql/bin/mysqld(_ZNK9table_def15compatible_withEP3THDP14Relay_log_infoP5TABLEPS5_+0x31a)[0x74c29a]
/usr/local/mysql/bin/mysqld(_ZN14Rows_log_event14do_apply_eventEPK14Relay_log_info+0xcdc)[0x6f0d3c]
/usr/local/mysql/bin/mysqld(_Z26apply_event_and_update_posP9Log_eventP3THDP14Relay_log_info+0x14d)[0x5021ed]
/usr/local/mysql/bin/mysqld[0x504b19]
/usr/local/mysql/bin/mysqld(handle_slave_sql+0xc0a)[0x5061ea]
/lib64/libpthread.so.0[0x31a640677d]
/lib64/libc.so.6(clone+0x6d)[0x31a54d49ad]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query ((nil)): is an invalid pointer
Connection ID (thread ID): 353
Status: NOT_KILLED

The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
160429 11:04:48 mysqld_safe Number of processes running now: 0
160429 11:04:48 mysqld_safe mysqld restarted
160429 11:04:48 InnoDB: The InnoDB memory heap is disabled
160429 11:04:48 InnoDB: Mutexes and rw_locks use GCC atomic builtins
160429 11:04:48 InnoDB: Compressed tables use zlib 1.2.3
160429 11:04:48 InnoDB: Initializing buffer pool, size = 32.0G
160429 11:04:50 InnoDB: Completed initialization of buffer pool
160429 11:04:50 InnoDB: highest supported file format is Barracuda.
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
160429 11:04:50  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Last MySQL binlog file position 0 112571, file name ./mysql-bin.048292
160429 11:04:52  InnoDB: Waiting for the background threads to start
160429 11:04:53 InnoDB: 1.1.8 started; log sequence number 5992159806777
160429 11:04:53 [Note] Recovering after a crash using mysql-bin
160429 11:04:53 [Note] Starting crash recovery...
160429 11:04:53 [Note] Crash recovery finished.
160429 11:04:53 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=web_appdb_10-relay-bin' to avoid this problem.
160429 11:04:53 [Note] Event Scheduler: Loaded 0 events
160429 11:04:53 [Note] /usr/local/mysql/bin/mysqld: ready for connections.
Version: '5.5.19-log'  socket: '/tmp/mysql.sock'  port: 3306 Source distribution

Searching the web for "mysqld got signal 11" turned up various explanations: a full disk, memory problems, faulty hardware, or the SQL statement being replayed at the current relay log position.

Let's look at the statement executed at that position in the relay log:

Relay_Log_File: web_appdb_10-relay-bin.000663

Relay_Log_Pos: 29213639
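The dump that follows was produced with mysqlbinlog; a command along these lines decodes the relay log at that offset (the datadir path is an assumption, so substitute the slave's real one):

```shell
# Decode the relay log starting at the failing position; --verbose expands
# row events into pseudo-SQL, decode-rows suppresses the raw base64 payload.
mysqlbinlog --verbose --base64-output=decode-rows \
  --start-position=29213639 \
  /usr/local/mysql/data/web_appdb_10-relay-bin.000663 | head -n 60
```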

# at 29213639
#160428 21:29:32 server id 1  end_log_pos 29213559      Query   thread_id=624506        exec_time=0     error_code=0
SET TIMESTAMP=1461850172/*!*/;
/*!\C utf8mb4 *//*!*/;
SET @@session.character_set_client=45,@@session.collation_connection=45,@@session.collation_server=33/*!*/;
BEGIN
/*!*/;
# at 29213707
#160428 21:29:32 server id 1  end_log_pos 29213657      Table_map: `ccda`.`ess_accounting_relation` mapped to number 311993
# at 29213805
#160428 21:29:32 server id 1  end_log_pos 29213757      Table_map: `ccda`.`ess_accounting_relation_1` mapped to number 311994
# at 29213905
#160428 21:29:32 server id 1  end_log_pos 29214051      Update_rows: table id 311993 flags: STMT_END_F

BINLOG '
PBAiVxMBAAAAYgAAANnDvQEAALnCBAAAAAEABGNjZGEAF2Vzc19hY2NvdW50aW5nX3JlbGF0aW9u
AA4DDw8PDw8PDw8PDw8MAxYsATwAPAAGACwBPAAsAZYAlgCWAJYA/D8=
PBAiVxMBAAAAZAAAAD3EvQEAALrCBAAAAAEABGNjZGEAGWVzc19hY2NvdW50aW5nX3JlbGF0aW9u
XzEADgMPDw8PDw8PDw8PDwwDFiwBPAA8AAYALAE8ACwBlgCWAJYAlgD8Pw==
PBAiVxgBAAAAJgEAAGPFvQEAALnCBAAAAAEADv////8Q4Dnl7wATADEyMTEwMTAxMTE1MTEwMDE2
MzUS5bqU5LuY5LuY5qy+5Yet6K+BA0VBUwAACeaKpei0puWNlRYAemhhbmd5MTA0NzE1MTExODEz
MTYyMhcxMTExMDExNTExMTg1MDUyMzA2MjE2MQAAAI3GveRVEgAAEOA55e8AEwAxMjExMDEwMTEx
NTExMDAxNjM1EuW6lOS7mOS7mOasvuWHreivgQNFQVMUADExMTEwMTE1MTEwOTEwMzgwMDg5CeaK
pei0puWNlRYAemhhbmd5MTA0NzE1MTExODEzMTYyMhcxMTExMDExNTExMTg1MDUyMzA2MjE2MQAA
AI3GveRVEgAA
'/*!*/;
### UPDATE `ccda`.`ess_accounting_relation`
### WHERE
###  @1=15721785
###  @2='1211010111511001635'
###  @3='应付付款凭证'
###  @4='EAS'
###  @5=NULL
###  @6=''
###  @7='报账单'
###  @8='zhangy1047151118131622'
###  @9='11110115111850523062161'
###  @10=''
###  @11=''
###  @12=''
###  @13=2016-01-19 16:25:09
###  @14=NULL
### SET
###  @1=15721785
###  @2='1211010111511001635'
###  @3='应付付款凭证'
###  @4='EAS'
###  @5=NULL
###  @6='11110115110910380089'
###  @7='报账单'
###  @8='zhangy1047151118131622'
###  @9='11110115111850523062161'
###  @10=''
###  @11=''
###  @12=''
###  @13=2016-01-19 16:25:09
###  @14=NULL

First I backed up the record, then ran the update manually on the slave to see whether it would error:

UPDATE `ccda`.`ess_accounting_relation`
SET attachId = '11110115110910380089'
WHERE id = 15721785;

It turned out the update ran fine on the slave.

Later, when I went to look at the table definition, I got an error:

mysql> show create table `ccda`.`ess_accounting_relation` \G;

ERROR 144 (HY000): Table './ccda/ess_accounting_relation_1' is marked as crashed and last (automatic?) repair failed

ERROR:

No query specified

mysql> select count(*) from ccda.ess_accounting_relation_1;

ERROR 144 (HY000): Table './ccda/ess_accounting_relation_1' is marked as crashed and last (automatic?) repair failed

Strange: the row could be updated a moment ago, yet now the table cannot even be read.

Next, I tried to repair the broken table:

check table ccda.ess_accounting_relation_1;

repair table ccda.ess_accounting_relation_1;
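If the server keeps crashing, the same repair can also be done offline with myisamchk against the table's .MYI file (run with mysqld stopped; the datadir path here is an assumption):

```shell
# Offline check and recover of the MyISAM child table (server stopped)
myisamchk --check   /usr/local/mysql/data/ccda/ess_accounting_relation_1.MYI
myisamchk --recover /usr/local/mysql/data/ccda/ess_accounting_relation_1.MYI
```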

After the repair succeeded, I examined the definition of ccda.ess_accounting_relation and found it is a MERGE table whose child table, ess_accounting_relation_1, uses the MyISAM engine:

CREATE TABLE `ess_accounting_relation` (

……)

ENGINE=MRG_MyISAM DEFAULT CHARSET=utf8 INSERT_METHOD=LAST UNION=(`ess_accounting_relation_1`)

I expected that with the table repaired, start slave would work again. It still restarted the database.

By then, checking ess_accounting_relation_1 again showed it healthy, with no crashed flag.

As an experiment, I skipped this table's events on the slave (via change master to, or set global sql_slave_skip_counter=n); replaying operations on other tables did not restart the slave.
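The skip sequence looks like this (a minimal sketch, skipping one event group at a time):

```sql
STOP SLAVE;
-- Skip the next event group in the relay log, then resume replication
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;
SHOW SLAVE STATUS\G
```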

I then added replicate_ignore_table=ccda.ess_accounting_relation to the slave's configuration file to filter the table out, restarted the database, and ran start slave again: no restart.
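The filter is a one-line entry in the slave's my.cnf:

```ini
# my.cnf on the slave -- do not replay events for the merge table
[mysqld]
replicate_ignore_table = ccda.ess_accounting_relation
```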

Finally, as a bolder test, I modified a row of ess_accounting_relation_1 directly on the master (with the replicate_ignore_table setting above commented out first); the slave applied the change without restarting.

So the problem lies with the MRG_MyISAM storage engine.

This table receives updates every day, so why the problem only surfaced recently is unclear. Perhaps because the child table it references has grown too large? It holds roughly 16 million rows. Hard to say.

2. Root cause

A MERGE (MRG_MyISAM) table references a MyISAM child table; updating the merge table kept crashing and auto-restarting the slave's mysqld, and occasionally the child table itself became marked as crashed.

This is presumably a MySQL bug.

3. Solution

Since ess_accounting_relation draws its data solely from ccda.ess_accounting_relation_1, the merge serves no real purpose, and this MyISAM table turns out to be updated frequently. MyISAM corrupts easily and lacks row-level locking, so the recommendation is to convert ccda.ess_accounting_relation_1 to the InnoDB engine (back the table up with mysqldump first, then change MyISAM to InnoDB in the dump file), drop ccda.ess_accounting_relation, and rename ccda.ess_accounting_relation_1 to ccda.ess_accounting_relation.
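A sketch of that migration (credentials and file names are placeholders; only the engine-substitution step is executed here, the server-side steps are shown as comments):

```shell
# 1) Back up the MyISAM child table (placeholder credentials):
#      mysqldump -uroot -p ccda ess_accounting_relation_1 > ess_ar1.sql
# 2) Switch the engine in the dump file -- the substitution itself:
sed 's/ENGINE=MyISAM/ENGINE=InnoDB/' <<< ') ENGINE=MyISAM DEFAULT CHARSET=utf8;'
# 3) Reload the dump, then in mysql:
#      DROP TABLE ccda.ess_accounting_relation;
#      RENAME TABLE ccda.ess_accounting_relation_1 TO ccda.ess_accounting_relation;
```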

-- For an introduction to the MRG_MyISAM engine, see http://blog.csdn.net/yabingshi_tech/article/details/51320701

-- Note: MySQL version 5.5.19
