The most common error when adding a host in Zabbix

Today, while setting up Zabbix monitoring, I installed the agent on the host to be monitored and then added the host in the web frontend, which reported this error:

Zabbix agent on jiabao is unreachable for 5 minutes

The server log, however, told a different story:

[root@localhost tmp]# tail -f zabbix_server.log
 26115:20190307:235549.064 cannot send list of active checks to "192.168.1.150": host [localhost] not found
 26120:20190307:235715.010 executing housekeeper
 26120:20190307:235715.015 housekeeper [deleted 0 hist/trends, 0 items, 0 events, 0 problems, 0 sessions, 0 alarms, 0 audit items in 0.003076 sec, idle for 1 hour(s)]
 26115:20190307:235749.195 cannot send list of active checks to "192.168.1.150": host [localhost] not found
 26115:20190307:235949.324 cannot send list of active checks to "192.168.1.150": host [localhost] not found
 26115:20190308:000149.455 cannot send list of active checks to "192.168.1.150": host [localhost] not found
 26115:20190308:000349.587 cannot send list of active checks to "192.168.1.150": host [localhost] not found
 26115:20190308:000549.719 cannot send list of active checks to "192.168.1.150": host [localhost] not found
 26115:20190308:000749.846 cannot send list of active checks to "192.168.1.150": host [localhost] not found
 26115:20190308:000941.617 cannot send list of active checks to "192.168.1.150": host [localhost] not found
 26117:20190308:001141.741 cannot send list of active checks to "192.168.1.150": host [localhost] not found
 26117:20190308:001341.873 cannot send list of active checks to "192.168.1.150": host [localhost] not found
 26117:20190308:001542.001 cannot send list of active checks to "192.168.1.150": host [localhost] not found
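That log line is the key clue: with active checks, the agent announces itself using the Hostname value from its own configuration file, and the server then looks up a host with exactly that name. Here the agent was announcing itself as "localhost" while the frontend knew it as "jiabao". A minimal sketch of how to read out what the agent will report, using a mock config file (on a real agent the file is typically /etc/zabbix/zabbix_agentd.conf, but that path is an assumption and varies by install):

```shell
# Mock zabbix_agentd.conf in a temp file; substitute the real path on an agent.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# zabbix_agentd.conf (excerpt)
Server=192.168.1.100
ServerActive=192.168.1.100
Hostname=localhost
EOF

# Active checks register under this exact string, so it must match the
# "Host name" field in the web frontend character for character.
agent_name=$(grep -E '^Hostname=' "$conf" | cut -d= -f2)
echo "agent reports itself as: $agent_name"
rm -f "$conf"
```

If the printed name is not the same string you typed into the frontend's host configuration, you will get exactly the "host [...] not found" errors shown above.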
So I checked the agent's configuration file and found the cause: the Hostname field in the agent config did not match the host name configured in the web frontend (the original post's screenshots of the config file and the web UI are not preserved here). Since the server looks the host up by that exact name, the mismatch meant the host could not be identified and the connection failed; the two names must be identical. After changing the agent's Hostname to match the frontend and restarting the agent, the server log showed the errors had stopped and the host was reachable.
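The fix itself is a one-line edit. A sketch against a mock temp file (assumptions: the real file lives at /etc/zabbix/zabbix_agentd.conf, and "jiabao" is the name used in the frontend, as in this post):

```shell
# Mock config standing in for the agent's real zabbix_agentd.conf.
conf=$(mktemp)
printf 'Server=192.168.1.100\nHostname=localhost\n' > "$conf"

# Rewrite the Hostname line so it matches the frontend's "Host name" field.
sed -i 's/^Hostname=.*/Hostname=jiabao/' "$conf"

fixed=$(grep '^Hostname=' "$conf")
echo "$fixed"
rm -f "$conf"
```

On the real agent you would edit the actual config file the same way and then restart the agent so it re-registers under the new name, e.g. `service zabbix-agent restart` or `systemctl restart zabbix-agent`, depending on the init system.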

Original article: https://blog.51cto.com/14101466/2360100

Date: 2024-11-07 00:45:45
