When epoll_wait() returns a ready uid and we query that socket's state, it should be BROKEN, but CLOSED comes back instead. A CLOSED event cannot be handled the same way as a BROKEN event, so the event is never removed; epoll_wait() then keeps returning the same uid, and the caller spins in an infinite loop. Let's trace the code down through the layers to find the cause.
int CUDTUnited::epoll_remove_usock(const int eid, const UDTSOCKET u)
{
   int ret = m_EPoll.remove_usock(eid, u);

   CUDTSocket* s = locate(u);
   if (NULL != s)
   {
      s->m_pUDT->removeEPoll(eid);
   }
   //else
   //{
   //   throw CUDTException(5, 4);
   //}

   return ret;
}
CUDTSocket* CUDTUnited::locate(const UDTSOCKET u)
{
   CGuard cg(m_ControlLock);

   map<UDTSOCKET, CUDTSocket*>::iterator i = m_Sockets.find(u);
   if ((i == m_Sockets.end()) || (i->second->m_Status == CLOSED))
      return NULL;

   return i->second;
}
void CUDT::removeEPoll(const int eid)
{
   // clear IO events notifications;
   // since this happens after the epoll ID has been removed, they cannot be set again
   set<int> remove;
   remove.insert(eid);
   s_UDTUnited.m_EPoll.update_events(m_SocketID, remove, UDT_EPOLL_IN | UDT_EPOLL_OUT, false);

   CGuard::enterCS(s_UDTUnited.m_EPoll.m_EPollLock);
   m_sPollID.erase(eid);
   CGuard::leaveCS(s_UDTUnited.m_EPoll.m_EPollLock);
}
Inside CUDTUnited::epoll_remove_usock(), locate() is called first to find the socket for u. But if the socket's status is already CLOSED at that moment, locate() returns NULL, so epoll_remove_usock() never reaches removeEPoll() and the epoll events are never cleared.
But why does a CLOSED event occur at all? By the author's design, only a BROKEN event should ever be raised here, never a CLOSED one. Let's keep digging.
First, how does a BROKEN event arise?
After the client appears to have been disconnected for more than ten seconds, CUDT::checkTimers() does the following:
……
……
   m_bClosing = true;
   m_bBroken = true;
   m_iBrokenCounter = 30;

   // update snd U list to remove this socket
   m_pSndQueue->m_pSndUList->update(this);

   releaseSynch();

   // app can call any UDT API to learn the connection_broken error
   s_UDTUnited.m_EPoll.update_events(m_SocketID, m_sPollID, UDT_EPOLL_IN | UDT_EPOLL_OUT | UDT_EPOLL_ERR, true);

   CTimer::triggerEvent();
……
……
Here m_bBroken is set to true and the epoll event is triggered.
However, before epoll_wait() returns that event, this can also happen:
#ifndef WIN32
void* CUDTUnited::garbageCollect(void* p)
#else
DWORD WINAPI CUDTUnited::garbageCollect(LPVOID p)
#endif
{
   CUDTUnited* self = (CUDTUnited*)p;

   CGuard gcguard(self->m_GCStopLock);

   while (!self->m_bClosing)
   {
      self->checkBrokenSockets();
……
……
void CUDTUnited::checkBrokenSockets()
{
   CGuard cg(m_ControlLock);

   // set of sockets To Be Closed and To Be Removed
   vector<UDTSOCKET> tbc;
   vector<UDTSOCKET> tbr;

   for (map<UDTSOCKET, CUDTSocket*>::iterator i = m_Sockets.begin(); i != m_Sockets.end(); ++ i)
   {
      // check broken connection
      if (i->second->m_pUDT->m_bBroken)
      {
         if (i->second->m_Status == LISTENING)
         {
            // for a listening socket, it should wait an extra 3 seconds in case a client is connecting
            if (CTimer::getTime() - i->second->m_TimeStamp < 3000000)
               continue;
         }
         else if ((i->second->m_pUDT->m_pRcvBuffer != NULL) && (i->second->m_pUDT->m_pRcvBuffer->getRcvDataSize() > 0) && (i->second->m_pUDT->m_iBrokenCounter -- > 0))
         {
            // if there is still data in the receiver buffer, wait longer
            continue;
         }

         //close broken connections and start removal timer
         i->second->m_Status = CLOSED;
         i->second->m_TimeStamp = CTimer::getTime();
         tbc.push_back(i->first);
         m_ClosedSockets[i->first] = i->second;
……
……
The GC thread is UDT's garbage collector; until the application calls UDT's cleanup(), it loops continuously between checkBrokenSockets() and sleeping.
Inside checkBrokenSockets(), whenever a socket's m_bBroken is true, its m_Status is set to CLOSED.
So if getsockstate() is called at this point, it returns CLOSED: what was really a BROKEN event has been turned into a CLOSED event! The subsequent removal of the epoll event then fails.
So I made the following change. Replace:
int CEPoll::remove_usock(const int eid, const UDTSOCKET& u)
{
   CGuard pg(m_EPollLock);

   map<int, CEPollDesc>::iterator p = m_mPolls.find(eid);
   if (p == m_mPolls.end())
      throw CUDTException(5, 13);

   p->second.m_sUDTSocksIn.erase(u);
   p->second.m_sUDTSocksOut.erase(u);
   p->second.m_sUDTSocksEx.erase(u);

   return 0;
}
with:
int CEPoll::remove_usock2(const int eid, const UDTSOCKET& u)
{
   CGuard pg(m_EPollLock);

   map<int, CEPollDesc>::iterator p = m_mPolls.find(eid);
   if (p == m_mPolls.end())
      throw CUDTException(5, 13);

   p->second.m_sUDTSocksIn.erase(u);
   p->second.m_sUDTSocksOut.erase(u);
   p->second.m_sUDTSocksEx.erase(u);
   p->second.m_sUDTWrites.erase(u);
   p->second.m_sUDTReads.erase(u);
   p->second.m_sUDTExcepts.erase(u);

   return 0;
}
And remove the call to removeEPoll() from CUDTUnited::epoll_remove_usock().
This fix is simple but rather crude; there should be a cleaner approach.