In one of our production environments, nginx was consuming a lot of CPU.
top - 15:04:19 up 1 day, 14:16, 5 users, load average: 13.26, 13.20, 13.20
Tasks: 881 total, 14 running, 865 sleeping, 0 stopped, 2 zombie
Cpu(s): 21.3%us, 18.0%sy, 0.0%ni, 57.4%id, 0.0%wa, 0.0%hi, 3.3%si, 0.0%st
Mem: 257556M total, 254371M used, 3184M free, 400M buffers
Swap: 0M total, 0M used, 0M free, 200639M cached

  PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
32512 root   20  0 11.9g 3.3g 3.2g R  130  1.3 174:16.31 nginx
  393 root   20  0 11.9g 3.5g 3.4g R   87  1.4 285:16.87 nginx
  394 root   20  0 11.9g 3.5g 3.4g R   87  1.4 271:35.20 nginx
  395 root   20  0 11.9g 3.6g 3.5g R   87  1.4 301:12.50 nginx
  400 root   20  0 11.9g 3.5g 3.4g R   87  1.4 257:04.89 nginx
32502 root   20  0 11.9g 3.7g 3.6g R   87  1.5 396:46.03 nginx
32506 root   20  0 11.9g 3.6g 3.5g R   87  1.4 298:45.09 nginx
32642 root   20  0 11.9g 3.7g 3.6g R   87  1.5 355:42.75 nginx
Drilling down into the individual threads of one busy process with top (per-thread view, top -H), we found that it is the main thread that is hot:
top - 14:48:47 up 1 day, 14:00, 4 users, load average: 12.79, 12.84, 13.04
Tasks: 68 total, 0 running, 68 sleeping, 0 stopped, 0 zombie
Cpu(s): 18.0%us, 21.5%sy, 0.1%ni, 54.6%id, 1.7%wa, 0.0%hi, 4.1%si, 0.0%st
Mem: 257556M total, 253504M used, 4052M free, 400M buffers
Swap: 0M total, 0M used, 0M free, 199627M cached

  PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
32582 root   20  0 11.9g 3.4g 3.3g S    0  1.3 178:24.72 nginx
 2999 root   20  0 11.9g 3.4g 3.3g S    0  1.3   0:07.05 nginx
 3001 root   20  0 11.9g 3.4g 3.3g S    0  1.3   0:06.36 nginx
 3004 root   20  0 11.9g 3.4g 3.3g S    0  1.3   0:07.07 nginx
 3006 root   20  0 11.9g 3.4g 3.3g S    0  1.3   0:06.18 nginx
 3009 root   20  0 11.9g 3.4g 3.3g S    0  1.3   0:06.70 nginx
 3011 root   20  0 11.9g 3.4g 3.3g S    0  1.3   0:03.87 nginx
 3013 root   20  0 11.9g 3.4g 3.3g S    0  1.3   0:06.16 nginx
 3016 root   20  0 11.9g 3.4g 3.3g S    0  1.3   0:06.51 nginx
 3018 root   20  0 11.9g 3.4g 3.3g S    0  1.3   0:07.03 nginx
 3020 root   20  0 11.9g 3.4g 3.3g S    0  1.3   0:07.10 nginx
Tracing the system calls of the hot thread (strace), we saw epoll_wait returning over and over again, with no other system calls in between:
epoll_wait(283, {{EPOLLOUT, {u32=3409011729, u64=140156781814801}}}, 512, 1) = 1
epoll_wait(283, {{EPOLLOUT, {u32=3409011729, u64=140156781814801}}}, 512, 1) = 1
epoll_wait(283, {{EPOLLOUT, {u32=3409011729, u64=140156781814801}}}, 512, 1) = 1
epoll_wait(283, {{EPOLLOUT, {u32=3409011729, u64=140156781814801}}}, 512, 1) = 1
epoll_wait(283, {{EPOLLOUT, {u32=3409011729, u64=140156781814801}}}, 512, 1) = 1
epoll_wait(283, {{EPOLLOUT, {u32=3409011729, u64=140156781814801}}}, 512, 1) = 1
epoll_wait(283, {{EPOLLOUT, {u32=3409011729, u64=140156781814801}}}, 512, 1) = 1
epoll_wait(283, {{EPOLLOUT, {u32=3409011729, u64=140156781814801}}}, 512, 1) = 1
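This trace pattern is what you get when a socket that is always writable stays registered for EPOLLOUT (level-triggered) and nobody ever writes to it or removes the event. The following is a minimal standalone sketch, purely illustrative and not nginx code, that reproduces the same epoll_wait behaviour:

/* busy_epollout.c -- illustrative demo only, not nginx code */
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/socket.h>

int main(void)
{
    int sv[2];

    /* a connected socket pair: sv[0]'s send buffer is empty, so it stays
     * writable as long as we never fill it */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }

    int ep = epoll_create1(0);

    struct epoll_event ev;
    ev.events = EPOLLOUT;            /* level-triggered EPOLLOUT */
    ev.data.fd = sv[0];
    epoll_ctl(ep, EPOLL_CTL_ADD, sv[0], &ev);

    for (int i = 0; i < 5; i++) {
        struct epoll_event out[512];

        /* same 1 ms timeout as in the trace; returns 1 immediately every
         * time because we never write, so the fd never stops being ready */
        int n = epoll_wait(ep, out, 512, 1);
        printf("epoll_wait returned %d, EPOLLOUT set: %d\n",
               n, (n > 0) && (out[0].events & EPOLLOUT) ? 1 : 0);
    }

    return 0;
}

Running this under strace shows the same back-to-back "epoll_wait(..., 512, 1) = 1" lines as in the production trace above.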
Looking at the stack, we saw that when epoll_wait returns with active fds, we never go on to call recv or recvfrom. Walking through the code (this is the EPOLLOUT branch in ngx_epoll_process_events()):
if ((revents & EPOLLOUT) && wev->active) {

    if (c->fd == -1 || wev->instance != instance) {

        /*
         * the stale event from a file descriptor
         * that was just closed in this iteration
         */

        ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                       "epoll: stale event %p", c);
        continue;
    }

    wev->ready = 1;

    if (flags & NGX_POST_EVENTS) {
        ngx_post_event(wev, &ngx_posted_events);

    } else {
        wev->handler(wev);
    }
}
发现 wev->active 有时候不为1,但是在epoll_wait返回的时候,大多数情况是1,为什么没有recv呢?
查看 wev->handler,在 ngx_http_upstream_process_non_buffered_request 中,有这么一个分支判断:
if(downstream->write->delayed) { ngx_log_error(NGX_LOG_DEBUG, upstream->log, 0, "[no_buffering_limite_rate] downstream->write->delayed return here. remove upstream read event"); ngx_del_event(upstream->read, NGX_READ_EVENT, 0); return; }
When rate limiting kicks in, this only removes the upstream->read event; the upstream->write event is left registered.
And the event epoll_wait keeps returning is exactly that EPOLLOUT (write) event.
在这种情况下,wev->handler(wev); 由于处理非常快,
ngx_process_events_and_timers(ngx_cycle_t *cycle)函数在下面这个逻辑:
delta = ngx_current_msec;

(void) ngx_process_events(cycle, timer, flags);

delta = ngx_current_msec - delta;
delta ends up being almost 0, so on the next loop iteration the timer that gets scanned is still the smallest value in the timer red-black tree, and that timer has not expired yet:
timer = ngx_event_find_timer();
That is why the timeout value passed to epoll_wait is tiny most of the time (1 ms in the trace above).
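For context, here is an abridged, paraphrased view of that event loop in src/event/ngx_event.c (not a verbatim copy; accept-mutex and posted-event handling are omitted), with comments marking where the busy loop sustains itself:

/* abridged / paraphrased from ngx_process_events_and_timers(), not verbatim */
void
ngx_process_events_and_timers(ngx_cycle_t *cycle)
{
    ngx_uint_t  flags;
    ngx_msec_t  timer, delta;

    timer = ngx_event_find_timer();   /* nearest timer in the rbtree:
                                         small here, but not yet expired */
    flags = NGX_UPDATE_TIME;          /* (timer_resolution path omitted) */

    /* ... accept mutex handling omitted ... */

    delta = ngx_current_msec;

    (void) ngx_process_events(cycle, timer, flags);   /* -> epoll_wait(ep, ..., timer) */

    delta = ngx_current_msec - delta;   /* ~0, because the EPOLLOUT handler
                                           returns immediately */

    /* ... posted accept events omitted ... */

    if (delta) {
        ngx_event_expire_timers();      /* skipped while delta stays 0 */
    }

    /* ... posted events omitted ... */
}

With the not-yet-expired rate-limit timer always at the head of the rbtree and delta staying near 0, each pass through the loop hands a millisecond-sized timeout to epoll_wait, which returns instantly with the unconsumed EPOLLOUT, and the worker spins.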
The fix: when the rate-limit condition is met, also remove the NGX_WRITE_EVENT from epoll, and add the event back once the rate limit is no longer being enforced.
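A minimal sketch of that fix inside the delayed branch shown earlier (same field and event names as in the excerpt; this is an assumed shape of the patch, not the author's exact diff):

/* sketch of the fix, not the author's exact patch */
if (downstream->write->delayed) {

    /* rate limit in effect: stop watching upstream readability ... */
    if (upstream->read->active) {
        ngx_del_event(upstream->read, NGX_READ_EVENT, 0);
    }

    /* ... and also upstream writability, otherwise the still-registered
     * EPOLLOUT keeps waking the worker up with nothing to do */
    if (upstream->write->active) {
        ngx_del_event(upstream->write, NGX_WRITE_EVENT, 0);
    }

    return;
}

/* later, once downstream->write->delayed has been cleared by the delay
 * timer, re-arm the upstream write event before resuming the relay: */
if (!upstream->write->active) {
    if (ngx_add_event(upstream->write, NGX_WRITE_EVENT, 0) != NGX_OK) {
        /* error handling omitted in this sketch */
    }
}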
Original article: https://www.cnblogs.com/10087622blog/p/10256121.html