Projects: Linux scalability: Accept() scalability on Linux (the thundering herd effect)

Summary:

1. Unnecessary wakeups

The thundering herd effect

https://github.com/benoitc/gunicorn/issues/792#issuecomment-46718939

https://www.citi.umich.edu/u/cel/linux-scalability/reports/accept.html

http://stackoverflow.com/questions/12494914/how-does-the-operating-system-load-balance-between-multiple-processes-accepting/12502808#12502808

Introduction

Network servers that use TCP/IP to communicate with their clients are rapidly increasing their offered loads. A service may elect to create multiple threads or processes to wait for increasing numbers of concurrent incoming connections. By pre-creating these multiple threads, a network server can handle connections and requests at a faster rate than with a single thread.

In Linux, when multiple threads call accept() on the same TCP socket, they get put on the same wait queue, waiting for an incoming connection to wake them up. In the Linux 2.2.9 kernel (and earlier), when an incoming TCP connection is accepted, the wake_up_interruptible() function is invoked to awaken waiting threads. This function walks the socket's wait queue and awakens everybody. All but one of the threads, however, will put themselves back on the wait queue to wait for the next connection. This unnecessary awakening is commonly referred to as a "thundering herd" problem and creates scalability problems for network server applications.
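
To make the wait-queue setup concrete, here is a minimal pre-forking accept() server in C. This is a sketch, not code from the paper; the port, backlog, and worker count are arbitrary illustrative choices. Every worker blocks in accept() on the same inherited listening socket, which is exactly the situation in which an affected kernel wakes all of them for each incoming connection.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NUM_WORKERS 4                   /* illustrative worker count */

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8080); /* arbitrary example port */

        if (lfd < 0 ||
            bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(lfd, 128) < 0) {
            perror("socket/bind/listen");
            exit(1);
        }

        for (int i = 0; i < NUM_WORKERS; i++) {
            if (fork() == 0) {              /* child inherits lfd */
                for (;;) {
                    /* All workers sleep on the listening socket's wait
                       queue here. On an affected kernel, one incoming
                       connection wakes them all; only one accept()
                       returns it, and the rest go back to sleep. */
                    int cfd = accept(lfd, NULL, NULL);
                    if (cfd >= 0)
                        close(cfd);         /* a real server would handle the request */
                }
            }
        }
        for (;;)
            wait(NULL);                     /* parent just reaps */
    }

Compile with cc server.c -o server; the pattern of every worker sleeping in accept() on one shared socket is the same one the paper's benchmarks stress.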

This report explores the effects of the "thundering herd" problem associated with the accept() system call as implemented in the Linux kernel. In the rest of this paper, we discuss the nature of the problem and how it affects the scalability of network server applications running on Linux. Finally, we benchmark the solutions, describing the benchmark and presenting its results. All benchmarks and patches are against the Linux 2.2.9 kernel.

Conclusion

By thoroughly studying this "thundering herd" problem, we have shown that it is indeed a bottleneck in high-load server performance, and that either patch significantly improves the performance of a high-load server. Even though both patches performed well in the testing, the "wake one" patch is cleaner and easier to incorporate into new or existing code. It also has the advantage of not committing a task to "exclusive" status before it is awakened, so extra code doesn't have to be incorporated for special cases to completely empty the wait-queue. The "wake one" patch can also solve any "thundering herd" problems locally, while the "task exclusive" method may require changes in multiple places where the programmer is responsible for making sure that all adjustments are made. This makes the "wake one" solution easily extensible to all parts of the kernel.
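
The "wake one" versus "wake all" distinction has a familiar userspace analogue: pthread_cond_signal() versus pthread_cond_broadcast(). The sketch below is only an analogy (not the kernel patches the paper benchmarks), but it shows the same mechanics: under a broadcast, every sleeper wakes, one wins, and the rest re-check the condition and go back to sleep.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NUM_WAITERS 4

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int pending = 0;          /* stands in for "connections ready" */

    static void *waiter(void *arg)
    {
        pthread_mutex_lock(&lock);
        while (pending == 0)         /* re-check guards against futile wakeups */
            pthread_cond_wait(&cond, &lock);
        pending--;                   /* exactly one waiter consumes each event */
        pthread_mutex_unlock(&lock);
        printf("waiter %ld consumed an event\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NUM_WAITERS];

        for (long i = 0; i < NUM_WAITERS; i++)
            pthread_create(&t[i], NULL, waiter, (void *)i);
        sleep(1);                    /* let every waiter block first */

        for (int i = 0; i < NUM_WAITERS; i++) {
            pthread_mutex_lock(&lock);
            pending++;
            pthread_cond_signal(&cond);   /* "wake one": unblocks a single waiter */
            /* pthread_cond_broadcast(&cond) would be "wake all": every
               thread wakes, one wins, the others find pending == 0 and
               sleep again -- the same wasted wakeups the paper measures
               for accept(). */
            pthread_mutex_unlock(&lock);
        }
        for (int i = 0; i < NUM_WAITERS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

Compile with cc -pthread analogy.c; swapping the signal call for a broadcast reproduces the herd in miniature, with all but one wakeup per event doing no useful work.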

Original article: https://www.cnblogs.com/yuanjiangw/p/10836032.html

