Apache-Tomcat Cluster -- Session Replication

I. Environment: Windows 7 64-bit

1. Apache: httpd-2.4.16-win32-VC14

2. Tomcat 6.0

3. JK: mod_jk-1.2.40-win32-VC14.zip

II. Configuration guide from the official Apache documentation

For the impatient

Simply add

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

to your <Engine> or your <Host> element to enable clustering.

Using the above configuration will enable all-to-all session replication using the DeltaManager to replicate session deltas. By all-to-all we mean that the session gets replicated to all the other nodes in the cluster. This works great for smaller clusters, but we don't recommend it for larger clusters (a lot of Tomcat nodes). Also, when using the DeltaManager it will replicate to all nodes, even nodes that don't have the application deployed.

To get around this problem, you'll want to use the BackupManager. This manager only replicates the session data to one backup node, and only to nodes that have the application deployed. Downside of the BackupManager: it is not quite as battle-tested as the DeltaManager.

Here are some of the important default values:

1. Multicast address is 228.0.0.4

2. Multicast port is 45564 (the port and the address together determine cluster membership)

3. The IP broadcast is java.net.InetAddress.getLocalHost().getHostAddress() (make sure you don't broadcast 127.0.0.1; this is a common error)

4. The TCP port listening for replication messages is the first available server socket in the range 4000-4100

5. Two listeners are configured: ClusterSessionListener and JvmRouteSessionIDBinderListener

6. Two interceptors are configured: TcpFailureDetector and MessageDispatch15Interceptor

The following is the default cluster configuration:

        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                 channelSendOptions="8">

          <Manager className="org.apache.catalina.ha.session.DeltaManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"/>

          <Channel className="org.apache.catalina.tribes.group.GroupChannel">
            <Membership className="org.apache.catalina.tribes.membership.McastService"
                        address="228.0.0.4"
                        port="45564"
                        frequency="500"
                        dropTime="3000"/>
            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="auto"
                      port="4000"
                      autoBind="100"
                      selectorTimeout="5000"
                      maxThreads="6"/>

            <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
              <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
            </Sender>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
          </Channel>

          <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                 filter=""/>
          <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

          <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                    tempDir="/tmp/war-temp/"
                    deployDir="/tmp/war-deploy/"
                    watchDir="/tmp/war-listen/"
                    watchEnabled="false"/>

          <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
        </Cluster>
    

We will cover this section in more detail later in this document.

Cluster Basics

To run session replication in your Tomcat 6.0 container, the following steps should be completed:

  • All your session attributes must implement java.io.Serializable
  • Uncomment the Cluster element in server.xml
  • If you have defined custom cluster valves, make sure you have the ReplicationValve defined as well under the Cluster element in server.xml
  • If your Tomcat instances are running on the same machine, make sure the tcpListenPort attribute is unique for each instance; in most cases Tomcat is smart enough to resolve this on its own by autodetecting available ports in the range 4000-4100
  • Make sure your web.xml has the <distributable/> element
  • If you are using mod_jk, make sure that jvmRoute attribute is set at your Engine <Engine name="Catalina" jvmRoute="node01" > and that the jvmRoute attribute value matches your worker name in workers.properties
  • Make sure that all nodes have the same time and sync with NTP service!
  • Make sure that your loadbalancer is configured for sticky session mode.
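The first requirement above can be checked with a quick serialization round trip. The sketch below is illustrative; ShoppingCart is a hypothetical session attribute class, and the round trip only approximates what the replication manager does when it ships session state to another node:

```java
import java.io.*;
import java.util.*;

// Hypothetical session attribute class. It must implement
// java.io.Serializable -- and so must everything it references --
// or replication fails with a NotSerializableException.
public class ShoppingCart implements Serializable {
    private static final long serialVersionUID = 1L;
    private final List<String> items = new ArrayList<String>();

    public void add(String item) { items.add(item); }
    public List<String> getItems() { return items; }

    // Serialize and deserialize, roughly what the cluster manager does
    // when it transfers session state to another node.
    public static ShoppingCart roundTrip(ShoppingCart cart) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new ObjectOutputStream(bos).writeObject(cart);
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            return (ShoppingCart) in.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

If an attribute (or anything it references) is not serializable, the round trip fails the same way replication would at request time.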

Load balancing can be achieved through many techniques, as seen in the Load Balancing chapter.

Note: Remember that your session state is tracked by a cookie, so your URL must look the same from the outside; otherwise, a new session will be created.

Note: Clustering support currently requires the JDK version 1.5 or later.

The Cluster module uses the Tomcat JULI logging framework, so you can configure logging through the regular logging.properties file. To track messages, you can enable logging on the key org.apache.catalina.tribes.MESSAGES.
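For example, message tracing can be switched on with a line like the following in logging.properties (illustrative; FINE is verbose, so use it sparingly):

```properties
# Trace Tribes cluster messages through the JULI logging framework
org.apache.catalina.tribes.MESSAGES.level = FINE
```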

Overview

To enable session replication in Tomcat, three different paths can be followed to achieve the exact same thing:

  1. Using session persistence, and saving the session to a shared file system (PersistenceManager + FileStore)
  2. Using session persistence, and saving the session to a shared database (PersistenceManager + JDBCStore)
  3. Using in-memory-replication, using the SimpleTcpCluster that ships with Tomcat 6 (lib/catalina-tribes.jar + lib/catalina-ha.jar)

In this release of session replication, Tomcat can perform an all-to-all replication of session state using the DeltaManager or perform backup replication to only one node using the BackupManager. The all-to-all replication is an algorithm that is only efficient when the clusters are small. For larger clusters, to use primary-secondary session replication where the session is only stored on one backup server, simply set up the BackupManager.

Currently you can use the domain worker attribute (mod_jk > 1.2.8) to build cluster partitions, with the potential of a more scalable cluster solution with the DeltaManager (you'll need to configure the domain interceptor for this). In order to keep the network traffic down in an all-to-all environment, you can split your cluster into smaller groups. This can be easily achieved by using different multicast addresses for the different groups. A very simple setup would look like this:

            DNS Round Robin
                   |
             Load Balancer
              /         \
         Cluster1     Cluster2
          /    \       /    \
     Tomcat1 Tomcat2 Tomcat3 Tomcat4

What is important to mention here is that session replication is only the beginning of clustering. Another popular concept used to implement clusters is farming, i.e., you deploy your apps to only one server, and the cluster will distribute the deployments across the entire cluster. These are capabilities provided by the FarmWarDeployer (see the cluster example in server.xml).

In the next section we will go deeper into how session replication works and how to configure it.

Cluster Information

Membership is established using multicast heartbeats. Hence, if you wish to subdivide your clusters, you can do this by changing the multicast IP address or port in the <Membership> element.

The heartbeat contains the IP address of the Tomcat node and the TCP port that Tomcat listens to for replication traffic. All data communication happens over TCP.

The ReplicationValve is used to find out when the request has been completed and initiate the replication, if any. Data is only replicated if the session has changed (by calling setAttribute or removeAttribute on the session).

One of the most important performance considerations is synchronous versus asynchronous replication. In synchronous replication mode the request doesn't return until the replicated session has been sent over the wire and reinstantiated on all the other cluster nodes. Synchronous vs. asynchronous is configured using the channelSendOptions flag, which is an integer value. The default value for the SimpleTcpCluster/DeltaManager combo is 8, which is asynchronous. You can read more on the send flag (overview) or the send flag (javadoc). During async replication, the request is returned before the data has been replicated. Async replication yields shorter request times, while synchronous replication guarantees the session is replicated before the request returns.
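The channelSendOptions value is a bit mask. The constants below mirror the values defined on org.apache.catalina.tribes.Channel (reproduced here so the sketch is self-contained): 8 selects asynchronous delivery, while 6 (USE_ACK | SYNCHRONIZED_ACK) selects synchronous delivery with acknowledgements.

```java
// Send-option bit flags, mirroring org.apache.catalina.tribes.Channel.
// Copied here so this sketch compiles on its own.
public class SendOptions {
    public static final int USE_ACK          = 0x0002; // wait for an ack from the receiver
    public static final int SYNCHRONIZED_ACK = 0x0004; // ack only after the message is processed
    public static final int ASYNCHRONOUS     = 0x0008; // queue the message and return at once

    // True when the ASYNCHRONOUS bit is set, e.g. the default value 8.
    public static boolean isAsync(int flags) {
        return (flags & ASYNCHRONOUS) != 0;
    }
}
```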

Bind session after crash to failover node

If you are using mod_jk and not using sticky sessions, or if for some reason sticky sessions don't work, or you are simply failing over, the session id will need to be modified, since it previously contained the worker id of the previous Tomcat (as defined by jvmRoute in the Engine element). To solve this, we will use the JvmRouteBinderValve.

The JvmRouteBinderValve rewrites the session id to ensure that the next request will remain sticky (and not fall back to random nodes since the worker is no longer available) after a failover. The valve rewrites the JSESSIONID value in the cookie of the same name. Without this valve in place, it is harder for the mod_jk module to ensure stickiness in case of a failure.

By default, if no valves are configured, the JvmRouteBinderValve is added automatically. The cluster message listener called JvmRouteSessionIDBinderListener is also defined by default and is used to actually rewrite the session id on the other nodes in the cluster once a failover has occurred. Remember, if you are adding your own valves or cluster listeners in server.xml, then the defaults are no longer applied; make sure that you add in all the appropriate valves and listeners as defined by the defaults.

Hint:

With the attribute sessionIdAttribute you can change the request attribute name that contains the old session id. The default attribute name is org.apache.catalina.cluster.session.JvmRouteOrignalSessionID.

Trick:

You can enable this mod_jk turnover mode via JMX on all backup nodes before you drop a node! Set enabled to true on all JvmRouteBinderValve backups, disable the worker in mod_jk, then drop the node and restart it. Then enable the mod_jk worker and disable the JvmRouteBinderValves again. With this use case, only requested sessions are migrated.

Configuration Example
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                 channelSendOptions="6">

          <Manager className="org.apache.catalina.ha.session.BackupManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"
                   mapSendOptions="6"/>
          <!--
          <Manager className="org.apache.catalina.ha.session.DeltaManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"/>
          -->
          <Channel className="org.apache.catalina.tribes.group.GroupChannel">
            <Membership className="org.apache.catalina.tribes.membership.McastService"
                        address="228.0.0.4"
                        port="45564"
                        frequency="500"
                        dropTime="3000"/>
            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="auto"
                      port="5000"
                      selectorTimeout="100"
                      maxThreads="6"/>

            <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
              <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
            </Sender>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
          </Channel>

          <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                 filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>

          <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                    tempDir="/tmp/war-temp/"
                    deployDir="/tmp/war-deploy/"
                    watchDir="/tmp/war-listen/"
                    watchEnabled="false"/>

          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
        </Cluster>
    

Break it down!!

        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                 channelSendOptions="6">
    

The main element; inside this element all cluster details can be configured. The channelSendOptions is the flag that is attached to each message sent by the SimpleTcpCluster class or any objects that invoke the SimpleTcpCluster.send method. The description of the send flags is available at the javadoc site. The DeltaManager sends information using the SimpleTcpCluster.send method, while the BackupManager sends it directly through the channel.

For more info, please visit the reference documentation.

          <Manager className="org.apache.catalina.ha.session.BackupManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"
                   mapSendOptions="6"/>
          <!--
          <Manager className="org.apache.catalina.ha.session.DeltaManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"/>
          -->
    

This is a template for the manager configuration that will be used if no manager is defined in the <Context> element. In Tomcat 5.x every webapp marked distributable had to use the same manager; this is no longer the case. Since Tomcat 6 you can define a manager class for each webapp, so you can mix managers in your cluster. Obviously the manager on one node's application has to correspond with the same manager on the same application on the other node. If no manager has been specified for the webapp, and the webapp is marked <distributable/>, Tomcat will take this manager configuration and create a manager instance by cloning this configuration.

For more info, please visit the reference documentation.
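A single webapp can override this template in its own context definition. A hypothetical sketch (the file location and manager options are illustrative):

```xml
<!-- META-INF/context.xml of one webapp: overrides the cluster-wide
     manager template for just this application. -->
<Context>
  <Manager className="org.apache.catalina.ha.session.BackupManager"
           mapSendOptions="6"/>
</Context>
```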

          <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    

The channel element is Tribes, the group communication framework used inside Tomcat. This element encapsulates everything that has to do with communication and membership logic.

For more info, please visit the reference documentation.

            <Membership className="org.apache.catalina.tribes.membership.McastService"
                        address="228.0.0.4"
                        port="45564"
                        frequency="500"
                        dropTime="3000"/>
    

Membership is done using multicasting. Please note that Tribes also supports static membership using the StaticMembershipInterceptor if you want to extend your membership to points beyond multicasting. The address attribute is the multicast address used and the port is the multicast port. These two together create the cluster separation. If you want a QA cluster and a production cluster, the easiest config is to have the QA cluster on a separate multicast address/port combination from the production cluster.

The membership component broadcasts the TCP address/port of itself to the other nodes so that communication between nodes can be done over TCP. Please note that the address being broadcast is the one in the Receiver.address attribute.

For more info, please visit the reference documentation.
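For instance, two clusters can be kept separate simply by giving them distinct address/port pairs (the 228.0.0.5/45565 values below are illustrative):

```xml
<!-- Production cluster -->
<Membership className="org.apache.catalina.tribes.membership.McastService"
            address="228.0.0.4" port="45564"/>

<!-- QA cluster: a different address/port pair, so the two never join -->
<Membership className="org.apache.catalina.tribes.membership.McastService"
            address="228.0.0.5" port="45565"/>
```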

            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="auto"
                      port="5000"
                      selectorTimeout="100"
                      maxThreads="6"/>
    

In Tribes the logic of sending and receiving data has been broken into two functional components. The Receiver, as the name suggests, is responsible for receiving messages. Since the Tribes stack is threadless (a popular improvement now adopted by other frameworks as well), there is a thread pool in this component that has a maxThreads and minThreads setting.

The address attribute is the host address that will be broadcast by the membership component to the other nodes.

For more info, please visit the reference documentation.

            <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
              <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
            </Sender>
    

The sender component, as the name indicates, is responsible for sending messages to other nodes. The sender has a shell component, the ReplicationTransmitter, but the real work is done in the sub-component, Transport. Tribes supports having a pool of senders, so that messages can be sent in parallel, and if using the NIO sender, messages can also be sent concurrently.

Concurrently means one message sent to multiple senders at the same time, and parallel means multiple messages sent to multiple senders at the same time.

For more info, please visit the reference documentation.

            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
          </Channel>
    

Tribes uses a stack to send messages through. Each element in the stack is called an interceptor, and works much like the valves do in the Tomcat servlet container. Using interceptors, logic can be broken into more manageable pieces of code. The interceptors configured above are:

TcpFailureDetector - verifies crashed members through TCP; if multicast packets get dropped, this interceptor protects against false positives, i.e., a node being marked as crashed even though it is still alive and running.

MessageDispatch15Interceptor - dispatches messages to a thread (thread pool) to send messages asynchronously.

ThroughputInterceptor - prints out simple stats on message traffic.

Please note that the order of interceptors is important. The way they are defined in server.xml is the way they are represented in the channel stack. Think of it as a linked list, with the head being the first interceptor and the tail the last.

For more info, please visit the reference documentation.
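The linked-list behavior described above can be sketched as follows (a toy model, not the Tribes API; the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the Tribes interceptor stack: interceptors form a singly
// linked list in server.xml declaration order, and a message visits each
// one in turn on its way down to the transport.
public class InterceptorChain {
    static class Interceptor {
        final String name;
        Interceptor next;           // next element in the stack

        Interceptor(String name) { this.name = name; }

        List<String> send(List<String> trace) {
            trace.add(name);        // this interceptor handles the message
            return next == null ? trace : next.send(trace);
        }
    }

    // Build the chain in declaration order; the first name is the head.
    static Interceptor chain(String... names) {
        Interceptor head = null, tail = null;
        for (String n : names) {
            Interceptor i = new Interceptor(n);
            if (head == null) head = i; else tail.next = i;
            tail = i;
        }
        return head;
    }
}
```

Reordering the names reorders the processing, which is why the server.xml order matters.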

          <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                 filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
    

The cluster uses valves to track requests to web applications; we've mentioned the ReplicationValve and the JvmRouteBinderValve above. The <Cluster> element itself is not part of the pipeline in Tomcat; instead the cluster adds the valve to its parent container. If the <Cluster> element is configured in the <Engine> element, the valves get added to the engine, and so on.

For more info, please visit the reference documentation.

          <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                    tempDir="/tmp/war-temp/"
                    deployDir="/tmp/war-deploy/"
                    watchDir="/tmp/war-listen/"
                    watchEnabled="false"/>
    

The default Tomcat cluster supports farmed deployment, i.e., the cluster can deploy and undeploy applications on the other nodes. The state of this component is currently in flux but will be addressed soon. There was a change in the deployment algorithm between Tomcat 5.0 and 5.5, and at that point the logic of this component changed so that the deploy dir has to match the webapps directory.

For more info, please visit the reference documentation.

          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
        </Cluster>
    

Since the SimpleTcpCluster itself is a sender and receiver of the Channel object, components can register themselves as listeners to the SimpleTcpCluster. The listener above, ClusterSessionListener, listens for DeltaManager replication messages and applies the deltas to the manager, which in turn applies them to the session.

For more info, please visit the reference documentation.

Cluster Architecture

Component Levels:

         Server
           |
         Service
           |
         Engine
           |  \
           |  --- Cluster --*
           |
         Host
           |
         ------
        /      \
   Cluster    Context(1-N)
      |             \
      |             -- Manager
      |                   \
      |                   -- DeltaManager
      |                   -- BackupManager
      |
      |-- Channel
      |      |-- Interceptor_1 ..
      |      |-- Interceptor_N
      |      |-- Receiver
      |      |-- Sender
      |      |-- Membership
      |
      |-- Valve
      |      |-- ReplicationValve
      |      |-- JvmRouteBinderValve
      |
      |-- LifecycleListener
      |
      |-- ClusterListener
      |      |-- ClusterSessionListener
      |      |-- JvmRouteSessionIDBinderListener
      |
      |-- Deployer
             |-- FarmWarDeployer

How it Works

To make it easy to understand how clustering works, we will take you through a series of scenarios. In the scenarios we only use two Tomcat instances, TomcatA and TomcatB. We will cover the following sequence of events:

  1. TomcatA starts up
  2. TomcatB starts up (waits until TomcatA's start is complete)
  3. TomcatA receives a request, a session S1 is created.
  4. TomcatA crashes
  5. TomcatB receives a request for session S1
  6. TomcatA starts up
  7. TomcatA receives a request, invalidate is called on the session (S1)
  8. TomcatB receives a request, for a new session (S2)
  9. On TomcatA, session S2 expires due to inactivity.

Ok, now that we have a good sequence, we will take you through exactly what happens in the session replication code.

  1. TomcatA starts up

    Tomcat starts up using the standard start-up sequence. When the Host object is created, a cluster object is associated with it. When the contexts are parsed, if the distributable element is in place in web.xml, Tomcat asks the Cluster class (in this case SimpleTcpCluster)
    to create a manager for the replicated context. So with clustering enabled and distributable set in web.xml, Tomcat will create a DeltaManager for that context instead of a StandardManager. The cluster class will start up a membership
    service (multicast) and a replication service (TCP unicast). More on the architecture further down in this document.

  2. TomcatB starts up

    When TomcatB starts up, it follows the same sequence as TomcatA did, with one exception. The cluster is started and will establish a membership (TomcatA, TomcatB). TomcatB will now request the session state from a server that already exists in the cluster,
    in this case TomcatA. TomcatA responds to the request, and before TomcatB starts listening for HTTP requests, the state has been transferred from TomcatA to TomcatB. In case TomcatA doesn't respond, TomcatB will time out after 60 seconds and issue a log entry.
    The session state gets transferred for each web application that has distributable in its web.xml. Note: to use session replication efficiently, all your Tomcat instances should be configured the same.
  3. TomcatA receives a request, a session S1 is created.

    The request coming in to TomcatA is treated exactly the same way as without session replication. The action happens when the request is completed; the ReplicationValve will intercept the request before the response is returned to the user. At
    this point it finds that the session has been modified, and it uses TCP to replicate the session to TomcatB. Once the serialized data has been handed off to the operating system's TCP logic, the request returns to the user, back through the valve pipeline.
    For each request the entire session is replicated; this allows code that modifies attributes in the session without calling setAttribute or removeAttribute to be replicated. A useDirtyFlag configuration parameter can be used to optimize the number of times
    a session is replicated.
  4. TomcatA crashes

    When TomcatA crashes, TomcatB receives a notification that TomcatA has dropped out of the cluster. TomcatB removes TomcatA from its membership list, and TomcatA will no longer be notified of any changes that occur in TomcatB. The load balancer will redirect
    the requests from TomcatA to TomcatB, and all the sessions are current.
  5. TomcatB receives a request for session S1

    Nothing exciting; TomcatB will process the request as any other request.
  6. TomcatA starts up

    Upon start-up, before TomcatA starts taking new requests and makes itself available, it will follow the start-up sequence described above in 1) and 2). It will join the cluster and contact TomcatB for the current state of all the sessions. Once it receives the
    session state, it finishes loading and opens its HTTP/mod_jk ports. So no requests will make it to TomcatA until it has received the session state from TomcatB.
  7. TomcatA receives a request, invalidate is called on the session (S1)

    The invalidate call is intercepted, and the session is queued with the invalidated sessions. When the request is complete, instead of sending out the session that has changed, it sends out an "expire" message to TomcatB, and TomcatB will invalidate the session
    as well.
  8. TomcatB receives a request, for a new session (S2)

    Same scenario as in step 3)
  9. On TomcatA, session S2 expires due to inactivity.

    The invalidate call is intercepted the same way as when a session is invalidated by the user, and the session is queued with the invalidated sessions. At this point, the invalidated session will not be replicated until another request comes through the
    system and checks the invalid queue.
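Steps 3 and 9 above both hinge on the manager noticing that a request changed the session. A toy sketch of that dirty-flag idea (not the Tomcat classes; names and logic are illustrative, roughly the check the ReplicationValve relies on at end of request):

```java
import java.util.HashMap;
import java.util.Map;

// Toy session with a dirty flag: replication happens at the end of a
// request only if setAttribute/removeAttribute touched the session.
public class TrackedSession {
    private final Map<String, Object> attrs = new HashMap<String, Object>();
    private boolean dirty;

    public void setAttribute(String k, Object v) { attrs.put(k, v); dirty = true; }
    public void removeAttribute(String k) { attrs.remove(k); dirty = true; }
    public Object getAttribute(String k) { return attrs.get(k); }

    // Called once per request when it completes; reading attributes
    // alone never marks the session for replication.
    public boolean needsReplication() {
        boolean d = dirty;
        dirty = false;            // reset for the next request
        return d;
    }
}
```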

Phuuuhh! :)

Membership Clustering membership is established using very simple multicast pings. Each Tomcat instance will periodically send out a multicast ping; in the ping message the instance will broadcast its IP and TCP listen port for replication.
If an instance has not received such a ping within a given timeframe, the member is considered dead. Very simple, and very effective! Of course, you need to enable multicasting on your system.

TCP Replication Once a multicast ping has been received, the member is added to the cluster. Upon the next replication request, the sending instance will use the host and port info to establish a TCP socket. Using this socket it sends over
the serialized data. The reason I chose TCP sockets is that TCP has built-in flow control and guaranteed delivery. So I know that when I send some data, it will make it there :)

Distributed locking and pages using frames Tomcat does not keep session instances in sync across the cluster. The implementation of such logic would be too much overhead and cause all kinds of problems. If your client accesses the same session
simultaneously using multiple requests, then the last request will override the other sessions in the cluster.

III. Configuration files

1. Apache\conf\httpd.conf

## Append the following configuration at the end of the file
##############################################################
# (httpd.conf)
# Load the mod_jk module
LoadModule jk_module modules/mod_jk.so

JkWorkersFile conf/workers.properties
JkMountFile conf/uriworkermap.properties
JkLogFile logs/mod_jk.log
JkLogLevel info
##############################################################

2. Apache\conf\workers.properties

# worker list
worker.list=DLOG4J,status

#========tomcat1========
# AJP13 port, configured in Tomcat's server.xml (default 8009)
worker.tomcat1.port=8010
# Tomcat host address; use the IP address if it is not the local machine
worker.tomcat1.host=sso.test
worker.tomcat1.type=ajp13
# Load-balancing weight; the higher the value, the more requests this worker gets
worker.tomcat1.lbfactor=1
#========tomcat3========
# AJP13 port, configured in Tomcat's server.xml
worker.tomcat3.port=8012
# Tomcat host address; use the IP address if it is not the local machine
worker.tomcat3.host=sso.test
worker.tomcat3.type=ajp13
# Load-balancing weight; the higher the value, the more requests this worker gets
worker.tomcat3.lbfactor=1
#========tomcat4========
# AJP13 port, configured in Tomcat's server.xml
worker.tomcat4.port=8013
# Tomcat host address; use the IP address if it is not the local machine
worker.tomcat4.host=sso.test
worker.tomcat4.type=ajp13
# Load-balancing weight; the higher the value, the more requests this worker gets
worker.tomcat4.lbfactor=1
#========tomcat7======== Tomcat 7 cannot be clustered together with Tomcat 6
# AJP13 port, configured in Tomcat's server.xml
#worker.tomcat7.port=8016
# Tomcat host address; use the IP address if it is not the local machine
#worker.tomcat7.host=sso.test
#worker.tomcat7.type=ajp13
# Load-balancing weight; the higher the value, the more requests this worker gets
#worker.tomcat7.lbfactor=1

#======== controller: the load balancer ========
worker.DLOG4J.type=lb
worker.DLOG4J.retries=3
worker.DLOG4J.balanced_workers=tomcat1,tomcat3,tomcat4
worker.DLOG4J.sticky_session=true

worker.status.type=status

3. Apache\conf\uriworkermap.properties

/*=DLOG4J
/jkstatus=status

!/*.gif=DLOG4J
!/*.jpg=DLOG4J
!/*.png=DLOG4J
!/*.css=DLOG4J
!/*.js=DLOG4J
!/*.htm=DLOG4J
!/*.html=DLOG4J

4. Tomcat\conf\server.xml

<!-- Replace the Engine element with the following -->
    <Engine name="Catalina" defaultHost="sso.test" jvmRoute="tomcat1">

      <!--For clustering, please take a look at documentation at:
          /docs/cluster-howto.html  (simple how to)
          /docs/config/cluster.html (reference documentation) -->
      <!--
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      -->
	  <!-- Cluster configuration begins -->
	  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                 channelSendOptions="8">
          <!--
          <Manager className="org.apache.catalina.ha.session.BackupManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"
                   mapSendOptions="8"/>
          -->
          <Manager className="org.apache.catalina.ha.session.DeltaManager"
                   expireSessionsOnShutdown="false"
                   notifyListenersOnReplication="true"/>

          <Channel className="org.apache.catalina.tribes.group.GroupChannel">
            <Membership className="org.apache.catalina.tribes.membership.McastService"
                        address="228.0.0.4"
                        port="45564"
                        frequency="500"
                        dropTime="3000"/>
            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="auto"
                      port="4000"
                      autoBind="100"
                      selectorTimeout="5000"
                      maxThreads="6"/>
            <!-- timeout="60000"-->
            <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
              <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender" />
            </Sender>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
          </Channel>

          <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                 filter=""/>
          <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

          <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                    tempDir="/tmp/war-temp/"
                    deployDir="/tmp/war-deploy/"
                    watchDir="/tmp/war-listen/"
                    watchEnabled="false"/>

          <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
        </Cluster>
	  <!-- Cluster configuration ends -->
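One step the server.xml snippet alone does not cover: per the Tomcat cluster-howto, an application's sessions are only replicated if the application is marked distributable in its WEB-INF/web.xml, and every session attribute must implement java.io.Serializable. A minimal sketch (merge the element into the existing web.xml rather than replacing the file):

```xml
<!-- WEB-INF/web.xml of the application whose sessions should replicate -->
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
  <distributable/>
</web-app>
```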

References:

1,http://tomcat.apache.org/tomcat-6.0-doc/config/cluster.html

2,http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html#Cluster%20Basics

3,http://www.cnblogs.com/God-froest/p/apache_tomcat.html

Date: 2024-11-13 06:39:50
