The official documentation explains the proxy load-balancing mechanism of Coherence in Extend mode as follows:
Extend client connections are load balanced across proxy service members. By default, a proxy-based strategy is used that distributes client connections to proxy service members that are being utilized the least. Custom proxy-based strategies can be created or the default strategy can be modified as required. As an alternative, a client-based load balance strategy can be implemented by creating a client-side address provider or by relying on randomized client connections to proxy service members. The random approach provides minimal balancing as compared to proxy-based load balancing.
Proxy-based load balancing is the default strategy that is used to balance client connections between two or more members of the same proxy service. The strategy is weighted by a proxy's existing connection count, then by its daemon pool utilization, and lastly by its message backlog.
The proxy-based load balancing strategy is configured within a <proxy-scheme> definition using a <load-balancer> element that is set to proxy. For clarity, the following example explicitly specifies the strategy. However, the strategy is used by default if no strategy is specified and is not required in a proxy scheme definition.
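That example is, in outline, a proxy scheme that sets the load balancer explicitly; in the sketch below the service name, address, and port are placeholders rather than values taken from the documentation:

<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>192.168.1.2</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <!-- proxy-based load balancing, stated explicitly -->
  <load-balancer>proxy</load-balancer>
  <autostart>true</autostart>
</proxy-scheme>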
This description is fairly vague. With WebLogic acting as the front end that connects into a back-end Coherence cluster, we simulate a production-like environment and look at how it actually behaves.
proxy-override.xml for the proxy nodes:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>distributed-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    </backing-map-scheme>
    <proxy-scheme>
  </caching-schemes>
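The <caching-schemes> section of this file would typically contain a storage-disabled distributed scheme plus the proxy scheme. The sketch below is an assumption: the scheme and service names, the localhost address, and the 9099 port are placeholders, and the second proxy on the same host would have to end up on 9100 (for example through a per-node port override):

<distributed-scheme>
  <scheme-name>distributed-scheme</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- assumption: the proxy JVMs do not store data themselves -->
  <local-storage>false</local-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <!-- assumption: first proxy on 9099, second proxy on 9100 -->
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <load-balancer>proxy</load-balancer>
  <autostart>true</autostart>
</proxy-scheme>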
storage-override.xml for the storage nodes:
<?xml version="1.0"?>
<cache-config>
  <caching-scheme-mapping>
    <listener/>
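A storage-node configuration along these lines usually maps all caches to a storage-enabled distributed scheme. The sketch below is an assumption chosen to match the proxy configuration (same scheme and service names):

<?xml version="1.0"?>
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>distributed-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>distributed-scheme</scheme-name>
      <service-name>DistributedCache</service-name>
      <!-- storage nodes hold the actual data -->
      <local-storage>true</local-storage>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>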
client.xml for the client side:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
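Inside <caching-schemes>, extend-dist would be defined as a remote cache scheme whose TCP initiator points at the first proxy. A minimal sketch, assuming the proxy is reachable at 127.0.0.1:9099; the service name and timeouts are placeholders:

<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>127.0.0.1</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>30s</request-timeout>
    </outgoing-message-handler>
  </initiator-config>
</remote-cache-scheme>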
A second client configuration file, client-2.xml:
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
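client-2.xml would be identical to client.xml except for the <remote-addresses> element of its TCP initiator, which would point at the second proxy; assuming that proxy listens on 9100:

<remote-addresses>
  <socket-address>
    <address>127.0.0.1</address>
    <port>9100</port>
  </socket-address>
</remote-addresses>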
Start two proxy nodes via proxy-server.cmd; they listen on ports 9099 and 9100.
"%java_exec%" -server -showversion -Dtangosol.coherence.mode=prod -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.cacheconfig=E:\wls12c\coherence\bin\proxy-override.xml %java_opts% -cp "%coherence_home%\lib\coherence.jar" com.tangosol.net.DefaultCacheServer %* |
Start one storage node via storage-cmd:
"%java_exec%" -server -showversion -Dtangosol.coherence.mode=prod -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.management=all %java_opts% -Dtangosol.coherence.cacheconfig=E:\wls12c\coherence\bin\storage-override.xml -cp "%coherence_home%\lib\coherence.jar" com.tangosol.net.DefaultCacheServer %* |
Add the following to WebLogic's setDomainEnv.cmd:
set JAVA_OPTIONS=%JAVA_OPTIONS% -Dtangosol.coherence.cacheconfig="E:\wls12c\coherence\bin\client.xml" -Dtangosol.coherence.tcmp.enabled=false
set CLASSPATH=E:\wls1212\coherence\lib\coherence.jar;%CLASSPATH%
Then deploy a web application whose core is a JSP file, coput.jsp, which puts 10,000 objects into the cache in a batch.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<%@page import="java.util.*"%>
<%@page import="com.tangosol.net.*"%>
<%@ page contentType="text/html;charset=windows-1252"%>
<%
// cache name and key/value format are illustrative; the mapping uses "*"
NamedCache cache = CacheFactory.getCache("test");
for (int i=0;i<10000;i++) {
    cache.put("key" + i, "value" + i);
}
%>
<body>
put 10000 objects done
</body>
Start WebLogic and access the page; the connection goes mainly to the proxy server on 9099.
With no load, the proxy threads monitored through jvisualvm look as follows; each proxy has 10 idle threads:
Run a 20-thread load with JMeter; the load is applied mainly against the WebLogic server.
Once the load ramps up, the threads look like this:
You can see that even when the idle thread count drops to 0 and 6 requests are waiting in the backlog, nothing is dispatched to the other proxy.
Conclusions:
- Coherence treats one WebLogic Server instance as a single client; it does not load balance based on the threads inside that client (this is not a bug, it is how the product is designed).
- WebLogic Server keeps a long-lived connection to Coherence; it is only closed when no thread has used it beyond the timeout. Under heavy load, WebLogic Server keeps using this one connection regardless of whether its capacity is already exhausted.
- The number of threads a proxy can be configured with is limited (at most 512), and the resources a proxy can use are also limited. Under heavy load, a single proxy may slow down or even crash, so the best approach is still to spread the load across different proxies (see the thread-count sketch after this list).
- However, when multiple WebLogic instances connect, load balancing does take place. Strictly speaking, the Coherence cluster balances load by the client's NIC and port.
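For reference, the worker pool mentioned above is sized with <thread-count> on the proxy scheme; the value 100 below is purely an example:

<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <!-- proxy worker (daemon) pool size; example value only -->
  <thread-count>100</thread-count>
  <!-- acceptor-config, load-balancer and autostart as in proxy-override.xml above -->
</proxy-scheme>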
Modify coput.jsp as follows.
That is, based on the thread id, a different configuration xml is loaded, so that the connections are split evenly across the two proxies.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<%@page import="java.util.*"%>
<%@page import="com.tangosol.net.*"%>
<%@ page contentType="text/html;charset=windows-1252"%>
<%
// assumption: split by thread id modulo 2; both xml files must be resolvable
// by the web application, and the cache/key names are illustrative
String cfg = Thread.currentThread().getId() % 2 == 0 ? "client.xml" : "client-2.xml";
ClassLoader cl = Thread.currentThread().getContextClassLoader();
NamedCache cache = CacheFactory.getCacheFactoryBuilder()
    .getConfigurableCacheFactory(cfg, cl).ensureCache("test", cl);
for (int i=0;i<10000;i++) {
    cache.put("key" + i, "value" + i);
}
%>
<body>
done, thread id <%= Thread.currentThread().getId() %>
</body>
Run the load test again:
You can see that the load is now spread across both proxies.
The back-end WebLogic log shows the calls being issued according to thread id.