Architect Series: Implementing the Tomcat 6 Connector with I/O Multiplexing

The Tomcat 6 configuration file declares the Connector as follows:

<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
		connectionTimeout="20000" URIEncoding="UTF-8" useBodyEncodingForURI="true"
		enableLookups="false" redirectPort="8443" />

Because the protocol attribute names org.apache.coyote.http11.Http11NioProtocol, when the StandardService class executes the following code to start its connectors, each connector's protocol handler is an instance of Http11NioProtocol.

synchronized (connectors) {
    for (int i = 0; i < connectors.length; i++) {
        try {
            ((Lifecycle) connectors[i]).start();
        } catch (Exception e) {
            log.error(sm.getString("standardService.connector.startFailed", connectors[i]), e);
        }
    }
}
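
For context, the protocol attribute is resolved reflectively when the Connector object is constructed: the class name from server.xml is loaded and instantiated as a ProtocolHandler, which the Connector later drives through init() and start(). The snippet below is a hypothetical, stripped-down illustration of that step, not the actual Connector constructor; it only assumes the Tomcat 6 coyote classes are on the classpath.

import org.apache.coyote.ProtocolHandler;

// Hypothetical sketch: the class named by the protocol attribute is instantiated
// reflectively; the resulting handler is what Connector.start() delegates to.
public class ProtocolResolutionSketch {
    public static void main(String[] args) throws Exception {
        String protocolHandlerClassName = "org.apache.coyote.http11.Http11NioProtocol";
        ProtocolHandler handler =
                (ProtocolHandler) Class.forName(protocolHandlerClassName).newInstance();
        System.out.println("Resolved handler: " + handler.getClass().getName());
    }
}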

The Connector class calls the start() method of org.apache.coyote.http11.Http11NioProtocol, and Http11NioProtocol in turn calls the start() method of org.apache.tomcat.util.net.NioEndpoint:

public void start() throws Exception {
        // Initialize socket if not done before
        if (!initialized) {
            init();
        }
        if (!running) {
            running = true;
            paused = false;

            // Create worker collection
            if (getUseExecutor()) {
                if ( executor == null ) {
                    TaskQueue taskqueue = new TaskQueue();
                    TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-",this);

                   /*
                    corePoolSize the number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set
                    maximumPoolSize the maximum number of threads to allow in the pool
                    keepAliveTime when the number of threads is greater than the core, this is the maximum time that excess idle threads will wait for new tasks before terminating.
                    unit the time unit for the keepAliveTime argument
                    workQueue the queue to use for holding tasks before they are executed. This queue will hold only the Runnable tasks submitted by the execute method.
                    threadFactory the factory to use when the executor creates a new thread
                    */
                    executor = new ThreadPoolExecutor(
                    		getMinSpareThreads(),
                    		getMaxThreads(),
                    		60,
                    		TimeUnit.SECONDS,
                    		taskqueue,
                    		tf);
                    taskqueue.setParent( (ThreadPoolExecutor) executor, this);
                }
            } else if ( executor == null ) {//avoid two thread pools being created
                workers = new WorkerStack(maxThreads,this);
            }

            // Poller threads: the Acceptor thread hands each accepted socket over to a Poller,
            // which registers it for READ; when READ readiness is reported, the Poller loop
            // dispatches the work to the executor, and NioChannel objects are recycled through a
            // ConcurrentLinkedQueue<NioChannel>. The number of poller threads is configurable.

            // Start poller threads
            pollers = new Poller[getPollerThreadCount()];
            for (int i=0; i<pollers.length; i++) {
                pollers[i] = new Poller(this);
                Thread pollerThread = new Thread(pollers[i], getName() + "-ClientPoller-"+i);
                pollerThread.setPriority(threadPriority);
                pollerThread.setDaemon(true);
                pollerThread.start();
            }

            // Start acceptor threads
            for (int i = 0; i < acceptorThreadCount; i++) {
                Thread acceptorThread = new Thread(new Acceptor(this), getName() + "-Acceptor-" + i);
                acceptorThread.setPriority(threadPriority);
                acceptorThread.setDaemon(daemon);
                acceptorThread.start();
            }
        }
    }
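
The executor branch above is essentially standard java.util.concurrent wiring: a pool with minSpareThreads core threads, maxThreads as the upper bound, a 60-second idle timeout, and a thread factory that names the threads with an "-exec-" prefix. The sketch below shows the same shape in plain JDK terms; note that Tomcat's TaskQueue is not a plain LinkedBlockingQueue (it cooperates with the pool so threads can be created up to maxThreads before tasks queue up), and the daemon flag here is simply a choice made for the sketch.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-JDK approximation of the worker pool created in NioEndpoint.start().
// With an unbounded LinkedBlockingQueue the pool never grows past the core size,
// which is exactly the behavior Tomcat's TaskQueue exists to change.
public class WorkerPoolSketch {
    public static ThreadPoolExecutor createWorkerPool(final String namePrefix,
                                                      int minSpareThreads,
                                                      int maxThreads) {
        ThreadFactory tf = new ThreadFactory() {
            private final AtomicInteger count = new AtomicInteger(0);
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r, namePrefix + "-exec-" + count.incrementAndGet());
                t.setDaemon(true);
                return t;
            }
        };
        return new ThreadPoolExecutor(minSpareThreads, maxThreads,
                60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(), tf);
    }
}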

With this configuration, one Acceptor thread is started (acceptorThreadCount defaults to one), and in the author's environment four Poller threads were started; the poller count is controlled by pollerThreadCount.

Note: to make the code easier to read, the author reorganized the NioEndpoint class, turning all of its inner classes into standalone public classes. Because those classes still need access to some of NioEndpoint's fields, each one is constructed with a reference to the current NioEndpoint object, i.e. this is passed in when they are created (hence the endpoint. prefixes seen in the snippets below).
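
Before walking through the Acceptor and Poller classes, it helps to see the bare-JDK shape of the pattern NioEndpoint implements: one thread blocks in accept() and hands new channels to a selector loop, which registers them for OP_READ and multiplexes all connections on a single Selector. The following is a minimal, self-contained sketch of that pattern (illustrative only, not Tomcat code; the port number is made up):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal acceptor/poller split: a blocking accept thread queues channels for a
// selector loop that registers OP_READ and multiplexes all reads on one thread.
public class AcceptorPollerSketch {

    private static final Queue<SocketChannel> pending = new ConcurrentLinkedQueue<SocketChannel>();

    public static void main(String[] args) throws IOException {
        final Selector selector = Selector.open();
        final ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8080));

        // "Acceptor" thread: blocking accept, then hand the channel to the poller.
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        SocketChannel ch = server.accept();
                        ch.configureBlocking(false);
                        pending.offer(ch);
                        selector.wakeup();   // let the poller register it promptly
                    } catch (IOException e) {
                        return;
                    }
                }
            }
        }, "acceptor").start();

        // "Poller" loop: register queued channels for OP_READ, then multiplex reads.
        ByteBuffer buf = ByteBuffer.allocate(8192);
        while (true) {
            SocketChannel ch;
            while ((ch = pending.poll()) != null) {
                ch.register(selector, SelectionKey.OP_READ);
            }
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    try {
                        if (client.read(buf) < 0) {   // client closed the connection
                            key.cancel();
                            client.close();
                        }
                        // a real server would now hand the buffered bytes to a worker thread
                    } catch (IOException e) {
                        key.cancel();
                        try { client.close(); } catch (IOException ignore) {}
                    }
                }
            }
        }
    }
}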

Now let's look at how the Acceptor thread's run() method accepts incoming requests.

// Accept the next incoming connection from the server socket
SocketChannel socket = endpoint.serverSock.accept();  // this is clientSocket
// Hand this socket off to an appropriate processor
//TODO FIXME - this is currently a blocking call, meaning we will be blocking
//further accepts until there is a thread available.
if ( endpoint.running && (!endpoint.paused) && socket != null ) {
    //processSocket(socket);
    if (!endpoint.setSocketOptions(socket)) {
        try {
            socket.socket().close();
            socket.close();
        } catch (IOException ix) {
//            if (log.isDebugEnabled())
//                log.debug("", ix);
        }
    }
}

The key call in this loop is setSocketOptions(); its source is as follows:

 public boolean setSocketOptions(SocketChannel socket) {
        // Process the connection
        try {
            //disable blocking, APR style, we are gonna be polling it
            socket.configureBlocking(false);
            Socket sock = socket.socket();
            socketProperties.setProperties(sock);

            NioChannel channel = nioChannels.poll();
            if ( channel == null ) {
                // SSL setup
                if (sslContext != null) {
                    SSLEngine engine = createSSLEngine();
                    int appbufsize = engine.getSession().getApplicationBufferSize();
                    NioBufferHandler bufhandler = new NioBufferHandler(Math.max(appbufsize,socketProperties.getAppReadBufSize()),
                                                                       Math.max(appbufsize,socketProperties.getAppWriteBufSize()),
                                                                       socketProperties.getDirectBuffer());
                    channel = new SecureNioChannel(socket, engine, bufhandler, selectorPool);
                } else {
                    // normal tcp setup
                    NioBufferHandler bufhandler = new NioBufferHandler(socketProperties.getAppReadBufSize(),
                                                                       socketProperties.getAppWriteBufSize(),
                                                                       socketProperties.getDirectBuffer());

                    channel = new NioChannel(socket, bufhandler);
                }
            } else {
                channel.setIOChannel(socket);
                if ( channel instanceof SecureNioChannel ) {
                    SSLEngine engine = createSSLEngine();
                    ((SecureNioChannel)channel).reset(engine);
                } else {
                    channel.reset();
                }
            }
            getPoller0().register(channel);  // pick one of the Poller threads created in start() and register the channel with it
        } catch (Throwable t) {
            try {
                log.error("",t);
            }catch ( Throwable tt){}
            // Tell to close the socket
            return false;
        }
        return true;
    }
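
The getPoller0() call at the end selects one of the poller threads for the new channel; it behaves as a simple round-robin over the pollers array. Below is a self-contained sketch of that selection logic — the counter name pollerRotater is an assumption about the internals, and the Poller type is replaced by String so the example runs on its own.

import java.util.concurrent.atomic.AtomicInteger;

// Round-robin selection, roughly what getPoller0() does: each call advances a
// counter and maps it onto the pollers array, spreading channels evenly.
public class PollerRoundRobinSketch {
    private final String[] pollers = { "ClientPoller-0", "ClientPoller-1",
                                       "ClientPoller-2", "ClientPoller-3" };
    private final AtomicInteger pollerRotater = new AtomicInteger(0);

    public String getPoller0() {
        int idx = Math.abs(pollerRotater.incrementAndGet()) % pollers.length;
        return pollers[idx];
    }

    public static void main(String[] args) {
        PollerRoundRobinSketch sketch = new PollerRoundRobinSketch();
        for (int i = 0; i < 8; i++) {
            System.out.println("channel " + i + " -> " + sketch.getPoller0());
        }
    }
}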

getPoller0() thus spreads newly accepted channels evenly across the pollers. Now let's look at the Poller's register() method:

public void register(final NioChannel socket) {
		socket.setPoller(this);

		KeyAttachment key = endpoint.keyCache.poll();
		final KeyAttachment ka = key != null ? key : new KeyAttachment();
		ka.reset(this, socket, endpoint.getSocketProperties().getSoTimeout());
		ka.interestOps(SelectionKey.OP_READ);// this is what OP_REGISTER turns  into.

		PollerEvent r = endpoint.eventCache.poll();

		// Reuse a cached PollerEvent if one is available; otherwise create a new one
		if (r == null)
			r = new PollerEvent(socket, ka, endpoint.OP_REGISTER, endpoint);
		else
			r.reset(socket, ka, endpoint.OP_REGISTER);

		addEvent(r);
	}

Finally, addEvent() adds this PollerEvent to the following field of the Poller class:

    protected ConcurrentLinkedQueue<Runnable> events = new ConcurrentLinkedQueue<Runnable>();
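
addEvent() itself is a small but important piece of the design: registration requests originate on the Acceptor thread while select() runs on the Poller thread, so the work is queued and the Selector is woken up to pick it up. Below is a simplified, self-contained sketch of that queue-then-wakeup idiom; the real method additionally maintains a wakeup counter so that wakeup() is not called more often than necessary.

import java.nio.channels.Selector;
import java.util.concurrent.ConcurrentLinkedQueue;

// Simplified sketch of the idiom behind Poller.addEvent(): callers never touch the
// Selector directly, they queue work for the poller thread and interrupt a blocking
// select() so the queue gets drained promptly.
public class EventQueueSketch {
    private final ConcurrentLinkedQueue<Runnable> events = new ConcurrentLinkedQueue<Runnable>();
    private final Selector selector;

    public EventQueueSketch(Selector selector) {
        this.selector = selector;
    }

    // Called from other threads (e.g. an acceptor) to schedule work on the poller thread.
    public void addEvent(Runnable event) {
        events.offer(event);
        selector.wakeup();   // make a blocked select() return so the event is processed
    }

    // Drained on the poller thread at the top of each loop iteration, like events() below.
    public boolean drainEvents() {
        boolean hadEvents = false;
        Runnable r;
        while ((r = events.poll()) != null) {
            hadEvents = true;
            r.run();
        }
        return hadEvents;
    }
}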

The Poller's run() method contains this line:

hasEvents = (hasEvents | events());

It calls the events() method, shown below:

public boolean events() {
    boolean result = false;
    Runnable r = null;
    result = (events.size() > 0);
    while ( (r = (Runnable)events.poll()) != null ) {
        try {
            r.run();
            if ( r instanceof PollerEvent ) {
                ((PollerEvent)r).reset();
                endpoint.eventCache.offer((PollerEvent)r);
            }
        } catch ( Throwable x ) {
//            log.error("",x);
        }
    }
    return result;
}

If the events queue contains PollerEvent objects, each one is executed via r.run() and then returned to the eventCache for reuse. Let's look at the PollerEvent's run() method:

    protected NioChannel socket;
    protected int interestOps;   // the interest op set carried by this event
    protected KeyAttachment key;

    public void run() {
        if ( interestOps == endpoint.OP_REGISTER ) {  // this is a registration event
            try {
                socket.getIOChannel().register(socket.getPoller().getSelector(), SelectionKey.OP_READ, key);
            } catch (Exception x) {
//                log.error("", x);
            }
        } else {
            final SelectionKey key = socket.getIOChannel().keyFor(socket.getPoller().getSelector());
            try {
                boolean cancel = false;
                if (key != null) {
                    final KeyAttachment att = (KeyAttachment) key.attachment();
                    if ( att!=null ) {
                        //handle callback flag
                        if (att.getComet() && (interestOps & endpoint.OP_CALLBACK) == endpoint.OP_CALLBACK ) {
                            att.setCometNotify(true);
                        } else {
                            att.setCometNotify(false);
                        }
                        interestOps = (interestOps & (~endpoint.OP_CALLBACK));//remove the callback flag
                        att.access();//to prevent timeout
                        //we are registering the key to start with, reset the fairness counter.
                        int ops = key.interestOps() | interestOps;
                        att.interestOps(ops);
                        key.interestOps(ops);
                        att.setCometOps(ops);
                    } else {
                        cancel = true;
                    }
                } else {
                    cancel = true;
                }
                if ( cancel )
                	socket.getPoller().cancelledKey(key,SocketStatus.ERROR,false);
            }catch (CancelledKeyException ckx) {
                try {
                    socket.getPoller().cancelledKey(key,SocketStatus.DISCONNECT,true);
                }catch (Exception ignore) {}
            }
        }//end if
    }//run

Its main job is to register the channel's interest ops with the poller's Selector. With that in place, we can return to the Poller's run() method and see how ready keys are handled:

Iterator iterator = keyCount > 0 ? selector.selectedKeys().iterator() : null;
// Walk through the collection of ready keys and dispatch
// any active event.
while (iterator != null && iterator.hasNext()) {
    SelectionKey sk = (SelectionKey) iterator.next();
    KeyAttachment attachment = (KeyAttachment)sk.attachment();
    // Attachment may be null if another thread has called
    // cancelledKey()
    if (attachment == null) {
        iterator.remove();
    } else {
        attachment.access();
        iterator.remove();
        processKey(sk, attachment);
    }
}//while

When a key with an interesting event is ready, the loop calls processKey() to handle it:

if ( close ) {
    cancelledKey(sk, SocketStatus.STOP, false);
} else if ( sk.isValid() && attachment != null ) {
    attachment.access();//make sure we don't time out valid sockets
    sk.attach(attachment);//cant remember why this is here
    NioChannel channel = attachment.getChannel();
    if (sk.isReadable() || sk.isWritable() ) {
        if ( attachment.getSendfileData() != null ) {
            processSendfile(sk,attachment,true, false);
        } else if ( attachment.getComet() ) {
            //check if thread is available
            if ( endpoint.isWorkerAvailable() ) {
                //set interest ops to 0 so we don't get multiple
                //invokations for both read and write on separate threads
                reg(sk, attachment, 0);
                //read goes before write
                if (sk.isReadable()) {
                    //read notification
                    if (!endpoint.processSocket(channel, SocketStatus.OPEN))
                        endpoint.processSocket(channel, SocketStatus.DISCONNECT);
                } else {
                    //future placement of a WRITE notif
                    if (!endpoint.processSocket(channel, SocketStatus.OPEN))
                        endpoint.processSocket(channel, SocketStatus.DISCONNECT);
                }
            } else {
                result = false;
            }
        } else {
            //later on, improve latch behavior
            if ( endpoint.isWorkerAvailable() ) {
                unreg(sk, attachment,sk.readyOps());
                boolean close = (!endpoint.processSocket(channel));
                if (close) {
                    cancelledKey(sk,SocketStatus.DISCONNECT,false);
                }
            } else {
                result = false;
            }
        }
    }
} else {
    //invalid key
    cancelledKey(sk, SocketStatus.ERROR,false);
}
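
The unreg() call in the non-Comet branch is worth pausing on: before the channel is handed to a worker, the ready bits are removed from the key's interest set so the Selector does not keep reporting the same readiness while the request is being processed; once the response is done, the handler re-adds the socket to the Poller (shown further below), which re-arms OP_READ. A tiny self-contained illustration of that interest-ops arithmetic:

import java.nio.channels.SelectionKey;

// Illustration of the interest-ops arithmetic behind unreg()/reg(): clear the bits
// that were just reported as ready, then re-arm them once the worker is finished.
// (Pure bit arithmetic on the JDK constants; no real channel is involved.)
public class InterestOpsSketch {
    public static void main(String[] args) {
        int interestOps = SelectionKey.OP_READ | SelectionKey.OP_WRITE;
        int readyOps = SelectionKey.OP_READ;                   // what select() just reported

        int afterUnreg = interestOps & (~readyOps);            // stop listening for READ for now
        int afterRearm = afterUnreg | SelectionKey.OP_READ;    // re-armed later via the Poller

        System.out.println("before  = " + Integer.toBinaryString(interestOps));
        System.out.println("unreg   = " + Integer.toBinaryString(afterUnreg));
        System.out.println("re-arm  = " + Integer.toBinaryString(afterRearm));
    }
}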

This in turn calls NioEndpoint's processSocket() method:

 public boolean processSocket(NioChannel socket, SocketStatus status) {
        return processSocket(socket,status,true);
    }

    public boolean processSocket(NioChannel socket, SocketStatus status, boolean dispatch) {
        try {
            KeyAttachment attachment = (KeyAttachment)socket.getAttachment(false);
            attachment.setCometNotify(false); //will get reset upon next reg
            if (executor == null) {
                getWorkerThread().assign(socket, status);
            } else {
                SocketProcessor sc = processorCache.poll();
                if (sc == null ){
                	sc = new SocketProcessor(socket,status,this);
                }else{
                	sc.reset(socket,status);
                }
                if ( dispatch ) executor.execute(sc);
                else sc.run();
            }
        } catch (Throwable t) {
            // This means we got an OOM or similar creating a thread, or that
            // the pool and its queue are full
            log.error(sm.getString("endpoint.process.fail"), t);
            return false;
        }
        return true;
    }

If sc is null a new SocketProcessor is created; otherwise the one taken from processorCache is reset and reused. The processor is then either submitted to the executor or run directly on the calling thread. SocketProcessor's run() method contains the following code:

boolean closed = (status == null) ?
        (nioEndpoint.getHandler().process(socket) == Handler.SocketState.CLOSED) :
        (nioEndpoint.getHandler().event(socket, status) == Handler.SocketState.CLOSED);

This invokes Http11ConnectionHandler's process() method to handle the socket:

public SocketState process(NioChannel socket) {
        Http11NioProcessor processor = null;
        try {
            processor = connections.remove(socket);

            if (processor == null) {
                processor = recycledProcessors.poll();
            }
            if (processor == null) {
                processor = createProcessor();
            }

            if (processor instanceof ActionHook) {
                ((ActionHook) processor).action(ActionCode.ACTION_START, null);
            }

            if (proto.ep.isSSLEnabled() && (proto.sslImplementation != null)) {
                if (socket instanceof SecureNioChannel) {
                    SecureNioChannel ch = (SecureNioChannel)socket;
                    processor.setSslSupport(proto.sslImplementation.getSSLSupport(ch.getSslEngine().getSession()));
                }else processor.setSslSupport(null);
            } else {
                processor.setSslSupport(null);
            }

            SocketState state = processor.process(socket);
            if (state == SocketState.LONG) {
                // In the middle of processing a request/response. Keep the
                // socket associated with the processor.
                connections.put(socket, processor);
                socket.getPoller().add(socket);
            } else if (state == SocketState.OPEN) {
                // In keep-alive but between requests. OK to recycle
                // processor. Continue to poll for the next request.
                release(socket, processor);
                socket.getPoller().add(socket);
            } else {
                // Connection closed. OK to recycle the processor.
                release(socket, processor);
            }
            return state;

        } catch (Exception e) {
             e.printStackTrace();
        }
        release(socket, processor);
        return SocketState.CLOSED;
    }
