Tomcat Source Code: The Connector

The Connector component is mainly responsible for accepting and parsing socket requests; in the Tomcat source it lives under the org.apache.catalina.connector and org.apache.coyote packages. From the previous two sections we know that Connector is a child component of Service, which in turn is a child component of Server. Connectors are configured in server.xml and instantiated in the Catalina class via Digester. By default, server.xml configures two Connectors, one for handling HTTP requests and one for AJP requests.
There are three kinds of Connector implementations:

1. HTTP Connector: parses HTTP requests. It comes in BIO and NIO variants, i.e. a blocking-IO Connector and a non-blocking-IO Connector. This article focuses on how the NIO HTTP Connector works.

2. AJP Connector: based on AJP, a protocol designed specifically for communication between Tomcat and an HTTP server, offering higher speed and efficiency. It is typically used when integrating Tomcat with an Apache HTTP Server.

3. APR HTTP Connector: implemented in C and invoked through JNI. It mainly improves access performance for static resources (HTML, images, CSS, JS, and so on).

Which Connector to use is configured through the protocol attribute in server.xml, for example:

    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
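As the setProtocol method below will show, the protocol attribute also accepts a fully qualified ProtocolHandler class name (the final else branch passes the value through unchanged), so a specific implementation can be pinned explicitly. For example, to force the NIO handler regardless of APR availability:

    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               connectionTimeout="20000"
               redirectPort="8443" />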

Now take a look at the Connector constructor:

public Connector(String protocol) {
    setProtocol(protocol);
    // Instantiate protocol handler
    ProtocolHandler p = null;
    try {
        Class<?> clazz = Class.forName(protocolHandlerClassName);
        p = (ProtocolHandler) clazz.getConstructor().newInstance();
    } catch (Exception e) {
        log.error(sm.getString(
                "coyoteConnector.protocolHandlerInstantiationFailed"), e);
    } finally {
        this.protocolHandler = p;
    }

    if (Globals.STRICT_SERVLET_COMPLIANCE) {
        uriCharset = StandardCharsets.ISO_8859_1;
    } else {
        uriCharset = StandardCharsets.UTF_8;
    }
}

public void setProtocol(String protocol) {

    boolean aprConnector = AprLifecycleListener.isAprAvailable() &&
            AprLifecycleListener.getUseAprConnector();

    if ("HTTP/1.1".equals(protocol) || protocol == null) {
        if (aprConnector) {
            setProtocolHandlerClassName("org.apache.coyote.http11.Http11AprProtocol");
        } else {
            setProtocolHandlerClassName("org.apache.coyote.http11.Http11NioProtocol");
        }
    } else if ("AJP/1.3".equals(protocol)) {
        if (aprConnector) {
            setProtocolHandlerClassName("org.apache.coyote.ajp.AjpAprProtocol");
        } else {
            setProtocolHandlerClassName("org.apache.coyote.ajp.AjpNioProtocol");
        }
    } else {
        setProtocolHandlerClassName(protocol);
    }
}

From the constructor source we can see that each Connector owns exactly one ProtocolHandler. A ProtocolHandler is designed to listen for network requests on one server port, but it does not process those requests itself (processing is done by the Container components). The rest of this article uses Http11NioProtocol to analyze how HTTP requests are parsed; the same wiring can also be observed programmatically, as in the sketch below.
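For readers who want to drive this constructor outside of server.xml, a minimal embedded-Tomcat sketch follows (it assumes the tomcat-embed-core artifact on the classpath; the class name and port are arbitrary):

import org.apache.catalina.LifecycleException;
import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;

public class EmbeddedConnectorDemo {
    public static void main(String[] args) throws LifecycleException {
        Tomcat tomcat = new Tomcat();
        // Passing "HTTP/1.1" makes setProtocol() choose Http11NioProtocol
        // (or Http11AprProtocol when the APR native library is available).
        Connector connector = new Connector("HTTP/1.1");
        connector.setPort(8080);
        tomcat.getService().addConnector(connector);
        tomcat.start();              // fires startInternal() -> protocolHandler.start()
        tomcat.getServer().await();
    }
}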
The protocolHandler is started in the Connector's startInternal method:

protected void startInternal() throws LifecycleException {

    // Validate settings before starting
    if (getPort() < 0) {
        throw new LifecycleException(sm.getString(
                "coyoteConnector.invalidPort", Integer.valueOf(getPort())));
    }

    setState(LifecycleState.STARTING);

    try {
        protocolHandler.start();
    } catch (Exception e) {
        throw new LifecycleException(
                sm.getString("coyoteConnector.protocolHandlerStartFailed"), e);
    }
}

Http11NioProtocol creates an org.apache.tomcat.util.net.NioEndpoint instance and delegates all the work of listening on the port and parsing requests to it. When parsing HTTP requests with Http11NioProtocol, Tomcat uses three kinds of threads: Acceptor, Poller, and Worker.

1. The Acceptor thread

Acceptor implements the Runnable interface; as the name suggests, it is an acceptor whose job is to accept sockets. It accepts connections with serverSock.accept(), obtains a SocketChannel, and wraps it in Tomcat's own org.apache.tomcat.util.net.NioChannel. Note that even though this is the NIO connector, sockets are still accepted the traditional way, with a blocking call. The Acceptor threads are created and started in NioEndpoint's startInternal() method, whose source follows:

public void startInternal() throws Exception {

    if (!running) {
        running = true;
        paused = false;

        processorCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                socketProperties.getProcessorCache());
        eventCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                        socketProperties.getEventCache());
        nioChannels = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                socketProperties.getBufferPool());

        // Create worker collection
        if ( getExecutor() == null ) {
            createExecutor();
        }

        // Set the maximum number of connections; the default is maxConnections = 10000, enforced with an AQS-based latch.
        initializeConnectionLatch();

        // The default poller count is Math.min(2, Runtime.getRuntime().availableProcessors()), i.e. capped by the number of available processors.
        // Start poller threads
        pollers = new Poller[getPollerThreadCount()];
        for (int i=0; i<pollers.length; i++) {
            pollers[i] = new Poller();
            Thread pollerThread = new Thread(pollers[i], getName() + "-ClientPoller-"+i);
            pollerThread.setPriority(threadPriority);
            pollerThread.setDaemon(true);
            pollerThread.start();
        }

        startAcceptorThreads();
    }
}
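The latch mentioned in the comment above is org.apache.tomcat.util.threads.LimitLatch, a small shared-mode synchronizer built on AbstractQueuedSynchronizer (AQS). Below is a simplified sketch of the idea, not the actual Tomcat class: countUpOrAwait() blocks the Acceptor once the limit is reached, and countDown() releases a waiting thread when a connection closes.

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Simplified sketch of Tomcat's LimitLatch: a shared AQS lock that
// blocks countUpOrAwait() once 'limit' connections are in flight.
public class SimpleLimitLatch {
    private final long limit;
    private final AtomicLong count = new AtomicLong(0);
    private final Sync sync = new Sync();

    private class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected int tryAcquireShared(int ignored) {
            long newCount = count.incrementAndGet();
            if (newCount > limit) {
                count.decrementAndGet();   // over the limit: back off and queue up
                return -1;
            }
            return 1;
        }
        @Override
        protected boolean tryReleaseShared(int ignored) {
            count.decrementAndGet();       // a connection closed, wake one waiter
            return true;
        }
    }

    public SimpleLimitLatch(long limit) { this.limit = limit; }

    // Called by the Acceptor before accept(); blocks once the limit is reached.
    public void countUpOrAwait() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);
    }

    // Called when a connection is closed.
    public long countDown() {
        sync.releaseShared(1);
        return count.get();
    }
}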

Continue into the source of startAcceptorThreads:

protected final void startAcceptorThreads() {
    // Start the Acceptor threads; the default count is 1
    int count = getAcceptorThreadCount();
    acceptors = new Acceptor[count];

    for (int i = 0; i < count; i++) {
        acceptors[i] = createAcceptor();
        String threadName = getName() + "-Acceptor-" + i;
        acceptors[i].setThreadName(threadName);
        Thread t = new Thread(acceptors[i], threadName);
        t.setPriority(getAcceptorThreadPriority());
        t.setDaemon(getDaemon());
        t.start();
    }
}
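Both thread counts can be tuned. In Tomcat 8.5.x they are exposed as attributes on the Connector element (attribute names taken from the 8.5 configuration reference; later versions dropped some of them, so check the documentation for your exact version):

    <Connector port="8080" protocol="HTTP/1.1"
               acceptorThreadCount="2"
               pollerThreadCount="2"
               maxConnections="10000" />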

The Acceptor's core logic is in its run method:

protected class Acceptor extends AbstractEndpoint.Acceptor {

    @Override
    public void run() {

        int errorDelay = 0;

        // Loop until we receive a shutdown command
        while (running) {

            // Loop if endpoint is paused
            while (paused && running) {
                state = AcceptorState.PAUSED;
                try {
                    Thread.sleep(50);
                } catch (InterruptedException e) {
                    // Ignore
                }
            }

            if (!running) {
                break;
            }
            state = AcceptorState.RUNNING;

            try {
                //if we have reached max connections, wait
                countUpOrAwaitConnection();

                SocketChannel socket = null;
                try {
                    // Accept the next incoming connection from the server
                    // socket
                    socket = serverSock.accept();
                } catch (IOException ioe) {
                    // We didn't get a socket
                    countDownConnection();
                    if (running) {
                        // Introduce delay if necessary
                        errorDelay = handleExceptionWithDelay(errorDelay);
                        // re-throw
                        throw ioe;
                    } else {
                        break;
                    }
                }
                // Successful accept, reset the error delay
                errorDelay = 0;

                // Configure the socket
                if (running && !paused) {
                    // setSocketOptions() will hand the socket off to
                    // an appropriate processor if successful
                    if (!setSocketOptions(socket)) {
                        closeSocket(socket);
                    }
                } else {
                    closeSocket(socket);
                }
            } catch (Throwable t) {
                ExceptionUtils.handleThrowable(t);
                log.error(sm.getString("endpoint.accept.fail"), t);
            }
        }
        state = AcceptorState.ENDED;
    }

    private void closeSocket(SocketChannel socket) {
        countDownConnection();
        try {
            socket.socket().close();
        } catch (IOException ioe)  {
            if (log.isDebugEnabled()) {
                log.debug(sm.getString("endpoint.err.close"), ioe);
            }
        }
        try {
            socket.close();
        } catch (IOException ioe) {
            if (log.isDebugEnabled()) {
                log.debug(sm.getString("endpoint.err.close"), ioe);
            }
        }
    }
}

The Acceptor finishes accepting the socket and then hands it to the NioEndpoint for configuration. Continue tracing into the endpoint's setSocketOptions method:

protected boolean setSocketOptions(SocketChannel socket) {
    // Process the connection
    try {
        //disable blocking, APR style, we are gonna be polling it
        socket.configureBlocking(false);
        Socket sock = socket.socket();
        socketProperties.setProperties(sock);

        NioChannel channel = nioChannels.pop();
        if (channel == null) {
            SocketBufferHandler bufhandler = new SocketBufferHandler(
                    socketProperties.getAppReadBufSize(),
                    socketProperties.getAppWriteBufSize(),
                    socketProperties.getDirectBuffer());
            if (isSSLEnabled()) {
                channel = new SecureNioChannel(socket, bufhandler, selectorPool, this);
            } else {
                channel = new NioChannel(socket, bufhandler);
            }
        } else {
            channel.setIOChannel(socket);
            channel.reset();
        }
        // Round-robin over the pollers array and call Poller.register() to register the channel.
        getPoller0().register(channel);
    } catch (Throwable t) {
        ExceptionUtils.handleThrowable(t);
        try {
            log.error("",t);
        } catch (Throwable tt) {
            ExceptionUtils.handleThrowable(tt);
        }
        // Tell to close the socket
        return false;
    }
    return true;
}

Analyzing the source of setSocketOptions shows that its main job is to wrap the incoming SocketChannel in a SecureNioChannel or a NioChannel and register it with the selector of a Poller thread. Some familiarity with Java NIO helps to understand this part more deeply; the stripped-down sketch below shows the same two steps outside of Tomcat.
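This is a minimal, self-contained java.nio demo (hypothetical class name and port; nothing here is Tomcat code): a channel is accepted in blocking mode, switched to non-blocking (mandatory before register()), and registered with a selector for read events.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NioRegisterDemo {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090));

        SocketChannel socket = server.accept();   // blocking accept, like the Acceptor
        socket.configureBlocking(false);          // required before register()
        SelectionKey key = socket.register(selector, SelectionKey.OP_READ);

        if (selector.select(1000) > 0 && key.isReadable()) {
            // a Poller would dispatch the ready channel to a worker here
        }
        socket.close();
        server.close();
        selector.close();
    }
}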

2. The Poller thread

Poller also implements the Runnable interface and is an inner class of NioEndpoint. Poller threads are created, configured, and started in the endpoint's startInternal method shown earlier. The Poller's main job is to poll its selector continuously, checking for sockets that are ready (readable or writable), thereby multiplexing IO. Its constructor initializes the selector:

public Poller() throws IOException {
    this.selector = Selector.open();
}

When analyzing the Acceptor we saw that, after accepting a socket, it calls NioEndpoint's setSocketOptions method (shown above), which creates the NioChannel and then calls the Poller's register method. register wraps the channel in a PollerEvent and adds it to the event queue; its source follows:

public void register(final NioChannel socket) {
    socket.setPoller(this);
    NioSocketWrapper ka = new NioSocketWrapper(socket, NioEndpoint.this);
    socket.setSocketWrapper(ka);
    ka.setPoller(this);
    ka.setReadTimeout(getSocketProperties().getSoTimeout());
    ka.setWriteTimeout(getSocketProperties().getSoTimeout());
    ka.setKeepAliveLeft(NioEndpoint.this.getMaxKeepAliveRequests());
    ka.setSecure(isSSLEnabled());
    ka.setReadTimeout(getConnectionTimeout());
    ka.setWriteTimeout(getConnectionTimeout());
    PollerEvent r = eventCache.pop();
    ka.interestOps(SelectionKey.OP_READ);//this is what OP_REGISTER turns into.
    //create a PollerEvent and add it to the event queue
    if (r == null) r = new PollerEvent(socket, ka, OP_REGISTER);
    else r.reset(socket, ka, OP_REGISTER);
    addEvent(r);
}
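Note that register never touches the selector directly; it only queues the event. addEvent (paraphrased below from the NioEndpoint source; the comments are mine) also wakes the Poller if it is currently blocked in select():

private void addEvent(PollerEvent event) {
    events.offer(event);                        // enqueue for the Poller thread
    if (wakeupCounter.incrementAndGet() == 0) { // the Poller is blocked in select()
        selector.wakeup();                      // interrupt the blocking select
    }
}

When the Poller later swaps wakeupCounter with -1 and sees a positive value, it knows events are pending and performs a non-blocking selectNow() instead of a timed select, as the run method below shows.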

The Poller's core logic is also in its run method:

public void run() {
    // Loop until destroy() is called
    while (true) {

        boolean hasEvents = false;

        try {
            if (!close) {
                hasEvents = events();
                if (wakeupCounter.getAndSet(-1) > 0) {
                    //if we are here, means we have other stuff to do
                    //do a non blocking select
                    keyCount = selector.selectNow();
                } else {
                    //block on the selector until a socket becomes ready or the timeout expires
                    keyCount = selector.select(selectorTimeout);
                }
                wakeupCounter.set(0);
            }
            if (close) {
                //events() drains all PollerEvents from the event queue and runs each one, registering the sockets with the selector.
                events();
                timeout(0, false);
                try {
                    selector.close();
                } catch (IOException ioe) {
                    log.error(sm.getString("endpoint.nio.selectorCloseFail"), ioe);
                }
                break;
            }
        } catch (Throwable x) {
            ExceptionUtils.handleThrowable(x);
            log.error("",x);
            continue;
        }
        //either we timed out or we woke up, process events first
        if ( keyCount == 0 ) hasEvents = (hasEvents | events());

        Iterator<SelectionKey> iterator =
            keyCount > 0 ? selector.selectedKeys().iterator() : null;
        // Walk through the collection of ready keys and dispatch
        // any active event.
        while (iterator != null && iterator.hasNext()) {
            SelectionKey sk = iterator.next();
            NioSocketWrapper attachment = (NioSocketWrapper)sk.attachment();
            // Attachment may be null if another thread has called
            // cancelledKey()
            if (attachment == null) {
                iterator.remove();
            } else {
                iterator.remove();
                //processKey handles sockets that are ready to read or write; it is analyzed in the Worker thread section below
                processKey(sk, attachment);
            }
        }//while

        //process timeouts
        timeout(keyCount,hasEvents);
    }//while

    getStopLatch().countDown();
}

The run method calls the events method:

public boolean events() {
    boolean result = false;

    PollerEvent pe = null;
    for (int i = 0, size = events.size(); i < size && (pe = events.poll()) != null; i++ ) {
        result = true;
        try {
            //running the PollerEvent registers its SocketChannel with the selector
            pe.run();
            pe.reset();
            if (running && !paused) {
                //recycle the processed PollerEvent into the endpoint's eventCache
                eventCache.push(pe);
            }
        } catch ( Throwable x ) {
            log.error("",x);
        }
    }

    return result;
}

Continue into PollerEvent's run method:

public void run() {
    if (interestOps == OP_REGISTER) {
        try {
            //register the SocketChannel with the selector for the SelectionKey.OP_READ event
            socket.getIOChannel().register(
                    socket.getPoller().getSelector(), SelectionKey.OP_READ, socketWrapper);
        } catch (Exception x) {
            log.error(sm.getString("endpoint.nio.registerFail"), x);
        }
    } else {
        final SelectionKey key = socket.getIOChannel().keyFor(socket.getPoller().getSelector());
        try {
            if (key == null) {
                // The key was cancelled (e.g. due to socket closure)
                // and removed from the selector while it was being
                // processed. Count down the connections at this point
                // since it won't have been counted down when the socket
                // closed.
                socket.socketWrapper.getEndpoint().countDownConnection();
                ((NioSocketWrapper) socket.socketWrapper).closed = true;
            } else {
                final NioSocketWrapper socketWrapper = (NioSocketWrapper) key.attachment();
                if (socketWrapper != null) {
                    //we are registering the key to start with, reset the fairness counter.
                    int ops = key.interestOps() | interestOps;
                    socketWrapper.interestOps(ops);
                    key.interestOps(ops);
                } else {
                    socket.getPoller().cancelledKey(key);
                }
            }
        } catch (CancelledKeyException ckx) {
            try {
                socket.getPoller().cancelledKey(key);
            } catch (Exception ignore) {}
        }
    }
}

3. The Worker thread

The Worker threads, i.e. SocketProcessor, process the socket requests. SocketProcessor is likewise an inner class of NioEndpoint. When the Poller's run method (shown above) finds a ready socket, it calls processKey to handle it:

protected void processKey(SelectionKey sk, NioSocketWrapper attachment) {
    try {
        if ( close ) {
            cancelledKey(sk);
        } else if ( sk.isValid() && attachment != null ) {
            //a read or write event is ready
            if (sk.isReadable() || sk.isWritable() ) {
                if ( attachment.getSendfileData() != null ) {
                    processSendfile(sk,attachment, false);
                } else {
                    unreg(sk, attachment, sk.readyOps());
                    boolean closeSocket = false;
                    // Read goes before write
                    if (sk.isReadable()) {
                        //hand the socket to processSocket for further processing
                        if (!processSocket(attachment, SocketEvent.OPEN_READ, true)) {
                            closeSocket = true;
                        }
                    }
                    //then the write event
                    if (!closeSocket && sk.isWritable()) {
                        //hand the socket to processSocket for further processing
                        if (!processSocket(attachment, SocketEvent.OPEN_WRITE, true)) {
                            closeSocket = true;
                        }
                    }
                    if (closeSocket) {
                        cancelledKey(sk);
                    }
                }
            }
        } else {
            //invalid key
            cancelledKey(sk);
        }
    } catch ( CancelledKeyException ckx ) {
        cancelledKey(sk);
    } catch (Throwable t) {
        ExceptionUtils.handleThrowable(t);
        log.error("",t);
    }
}

Continue tracing into the processSocket method:

public boolean processSocket(SocketWrapperBase<S> socketWrapper,
        SocketEvent event, boolean dispatch) {
    try {
        if (socketWrapper == null) {
            return false;
        }
        // Try to reuse a previously recycled SocketProcessor; create a new one if none is available
        SocketProcessorBase<S> sc = processorCache.pop();
        if (sc == null) {
            sc = createSocketProcessor(socketWrapper, event);
        } else {
            // reuse the recycled SocketProcessor
            sc.reset(socketWrapper, event);
        }
        Executor executor = getExecutor();
        if (dispatch && executor != null) {
            //SocketProcessor implements Runnable, so it can be handed straight to execute()
            executor.execute(sc);
        } else {
            sc.run();
        }
    } catch (RejectedExecutionException ree) {
        getLog().warn(sm.getString("endpoint.executor.fail", socketWrapper) , ree);
        return false;
    } catch (Throwable t) {
        ExceptionUtils.handleThrowable(t);
        // This means we got an OOM or similar creating a thread, or that
        // the pool and its queue are full
        getLog().error(sm.getString("endpoint.process.fail"), t);
        return false;
    }
    return true;
}

//NioEndpoint's createSocketProcessor creates a new SocketProcessor.
protected SocketProcessorBase<NioChannel> createSocketProcessor(
        SocketWrapperBase<NioChannel> socketWrapper, SocketEvent event) {
    return new SocketProcessor(socketWrapper, event);
}
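The processorCache.pop()/push() pair in processSocket is a simple object pool that avoids allocating a new SocketProcessor for every socket event; the eventCache and nioChannels stacks seen earlier follow the same pattern. Here is the pattern reduced to its essentials (a generic sketch, not Tomcat's SynchronizedStack):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Minimal object pool in the style of Tomcat's SynchronizedStack caches:
// pop() returns a recycled instance if one exists, otherwise creates one;
// push() recycles an instance, dropping it when the pool is full.
public class SimplePool<T> {
    private final Deque<T> stack = new ArrayDeque<>();
    private final Supplier<T> factory;
    private final int maxSize;

    public SimplePool(Supplier<T> factory, int maxSize) {
        this.factory = factory;
        this.maxSize = maxSize;
    }

    public synchronized T pop() {
        T item = stack.poll();
        return item != null ? item : factory.get();
    }

    public synchronized void push(T item) {
        if (stack.size() < maxSize) {   // cap memory use
            stack.push(item);
        }
    }
}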

Summary:

Http11NioProtocol is implemented on top of Java NIO; it creates Acceptor, Poller, and Worker threads to multiplex IO. The relationship between the three kinds of threads is as follows:

The Acceptor and the Poller form a producer-consumer pair: the Acceptor keeps adding PollerEvents to the event queue, while the Poller drains the queue, registers the channels with its selector, and dispatches the sockets that become ready to Worker threads for processing.
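To close, the whole pipeline can be condensed into a few dozen lines. This is a heavily simplified sketch of the three roles in plain java.nio (hypothetical class name, fixed port, minimal error handling; it is not the Tomcat code):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MiniEndpoint {
    public static void main(String[] args) throws Exception {
        final Selector selector = Selector.open();
        final Queue<SocketChannel> events = new ConcurrentLinkedQueue<>(); // the "event queue"
        ExecutorService workers = Executors.newFixedThreadPool(4);         // the Worker pool

        // Acceptor role: blocking accept, then hand the channel to the Poller.
        final ServerSocketChannel server = ServerSocketChannel.open()
                .bind(new InetSocketAddress(9090));
        new Thread(() -> {
            try {
                while (true) {
                    SocketChannel ch = server.accept();  // producer: blocking accept
                    ch.configureBlocking(false);
                    events.offer(ch);
                    selector.wakeup();                   // let the Poller register it
                }
            } catch (Exception ignored) {
            }
        }, "mini-acceptor").start();

        // Poller role: register queued channels, select, dispatch ready keys to Workers.
        while (true) {
            SocketChannel ch;
            while ((ch = events.poll()) != null) {       // consumer: drain the event queue
                ch.register(selector, SelectionKey.OP_READ);
            }
            if (selector.select(1000) == 0) {
                continue;
            }
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                final SelectionKey key = it.next();
                it.remove();
                key.interestOps(0);                      // like unreg(): avoid re-selecting
                workers.execute(() -> handle((SocketChannel) key.channel()));
            }
        }
    }

    // Worker role: in real Tomcat a SocketProcessor parses the HTTP request here.
    static void handle(SocketChannel ch) {
        try {
            ByteBuffer buf = ByteBuffer.allocate(1024);
            if (ch.read(buf) < 0) {
                ch.close();
            }
        } catch (Exception ignored) {
        }
    }
}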

Original article: https://www.cnblogs.com/grasp/p/10099897.html
