1. The standard socket read/write (wr/rd) pattern
For the standard sock read/write flow, see the code in sim (https://github.com/ideawu/sim.git).
A socket is blocking by default; in this example it is set to non-blocking.
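Switching the socket over is typically done with fcntl(); a minimal sketch, assuming fd is an already-created socket (the helper name set_noblock is mine, not sim's API):

#include <fcntl.h>

int set_noblock(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);               /* read current flags */
    if (flags == -1) return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);   /* add O_NONBLOCK */
}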
read() return values:
>0: data was read;
=0: the peer closed the connection;
<0: check errno to decide: EINTR means retry, EWOULDBLOCK/EAGAIN means no data is ready right now.
On a blocking socket one read is enough (then break); on a non-blocking socket you must read in a loop (a sketch follows the log excerpt below).
/* In this example noblock_ == true, so we read in a loop. The flow:
2016-02-10 14:13:10.937 [DEBUG] link.cpp(85): rd push buf(hao), len(3)
2016-02-10 14:13:10.937 [DEBUG] link.cpp(67): rd errno == EWOULDBLOCK
*/
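Putting the return-value rules and the loop together, a minimal sketch (read_all and the fixed output buffer are illustrative, not sim's actual code):

#include <errno.h>
#include <unistd.h>

/* Drain everything currently readable from a non-blocking socket. */
ssize_t read_all(int fd, char *out, size_t cap) {
    size_t total = 0;
    while (total < cap) {
        ssize_t n = read(fd, out + total, cap - total);
        if (n > 0) {                        /* got data, keep looping */
            total += (size_t)n;
        } else if (n == 0) {                /* peer closed the connection */
            return -1;
        } else if (errno == EINTR) {        /* interrupted, retry */
            continue;
        } else if (errno == EWOULDBLOCK || errno == EAGAIN) {
            break;                          /* drained for now */
        } else {
            return -1;                      /* real error */
        }
    }
    return (ssize_t)total;
}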
The notes on the four IO models below are summarized from http://blog.csdn.net/historyasamirror/article/details/5778378
blocking IO
On Linux, all sockets are blocking by default. In a typical read operation the process blocks through both phases: while the kernel waits for data to arrive, and while the data is copied into the user buffer (see the diagram in the post above).
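For contrast with the non-blocking loop earlier, a blocking read needs no loop at all; a trivial sketch:

#include <unistd.h>

/* On a default (blocking) socket, read() sleeps until some data has
 * arrived and been copied into buf, then returns the byte count. */
ssize_t blocking_read(int fd, char *buf, size_t len) {
    return read(fd, buf, len);
}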
non-blocking IO
On Linux a socket can be set to non-blocking. A read on a non-blocking socket returns immediately: if no data is ready, the call fails with EWOULDBLOCK/EAGAIN instead of sleeping, which is why the caller loops as in the read_all sketch above.
IO multiplexing
The term IO multiplexing may sound unfamiliar, but select and epoll should ring a bell; some places call this style event-driven IO. As we all know, the benefit of select/epoll is that a single process can handle IO on many network connections at once. The basic idea is that the select/epoll call watches all the sockets it is responsible for, and when data arrives on one of them it notifies the user process (the flow is shown in the diagram in the post).
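A minimal epoll sketch of that pattern, assuming listen_fd is an already-created listening socket (error handling omitted):

#include <stdio.h>
#include <sys/epoll.h>

void event_loop(int listen_fd) {
    int epfd = epoll_create1(0);
    struct epoll_event ev;
    ev.events = EPOLLIN;                    /* wake on readable */
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event ready[64];
    for (;;) {
        int n = epoll_wait(epfd, ready, 64, -1);  /* block for events */
        for (int i = 0; i < n; i++) {
            printf("fd %d is readable\n", ready[i].data.fd);
            /* accept()/read() here; with O_NONBLOCK, drain in a loop */
        }
    }
}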
Asynchronous I/O
Asynchronous IO is actually rarely used on Linux. Its flow: the process submits the read and moves on; the kernel waits for the data, copies it into the user buffer, and only then tells the process the read has completed (see the diagram in the post).
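A sketch using POSIX AIO from <aio.h> (link with -lrt; glibc implements it with threads). The busy-wait on aio_error() is only to keep the example short; a real program would do other work or use a completion notification:

#include <aio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>

ssize_t async_read_once(int fd, char *buf, size_t len) {
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = len;

    if (aio_read(&cb) == -1)                /* submit; returns at once */
        return -1;
    while (aio_error(&cb) == EINPROGRESS)   /* process is free meanwhile */
        ;
    return aio_return(&cb);                 /* bytes read, or -1 */
}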
keepalive
1. tcp_keepalive_time
the interval between the last data packet sent (simple ACKs are not considered data) and the first keepalive probe; after the connection is marked to need keepalive, this counter is not used any further
2. tcp_keepalive_intvl
the interval between subsequent keepalive probes, regardless of what the connection has exchanged in the meantime
3. tcp_keepalive_probes
the number of unacknowledged probes to send before considering the connection dead and notifying the application layer
When the local TCP sees no data from the peer for tcp_keepalive_time (here 1800) seconds, it starts sending keepalive probes every tcp_keepalive_intvl (75) seconds; if tcp_keepalive_probes (9) consecutive probes go unanswered, the peer is considered down and the local end closes the connection.
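Those three sysctls are system-wide defaults; per socket, keepalive is switched on with SO_KEEPALIVE, and on Linux the options TCP_KEEPIDLE / TCP_KEEPINTVL / TCP_KEEPCNT override them. A sketch using the values from the paragraph above:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int enable_keepalive(int fd) {
    int on = 1, idle = 1800, intvl = 75, cnt = 9;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) == -1)
        return -1;                                 /* turn keepalive on */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));
    return 0;
}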
TCP_NODELAY
Disables Nagle's algorithm, so small writes are sent immediately instead of being coalesced into larger segments.
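A one-line sketch of turning it on:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int disable_nagle(int fd) {
    int on = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}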
TCP_SND_QUEUELEN
https://www.processon.com/view/56377e1ce4b03a7c1d4ccb6e