Load Balancing for Coherence Proxies

For Coherence in extend mode, the official documentation explains the proxy load-balancing mechanism as follows:

Extend client connections are load balanced across proxy service members. By default, a proxy-based strategy is used that distributes client connections to proxy service members that are being utilized the least. Custom proxy-based strategies can be created or the default strategy can be modified as required. As an alternative, a client-based load balance strategy can be implemented by creating a client-side address provider or by relying on randomized client connections to proxy service members. The random approach provides minimal balancing as compared to proxy-based load balancing.

Proxy-based load balancing is the default strategy that is used to balance client connections between two or more members of the same proxy service. The strategy is weighted by a proxy's existing connection count, then by its daemon pool utilization, and lastly by its message backlog.

The proxy-based load balancing strategy is configured within a <proxy-scheme> definition using a <load-balancer> element that is set to proxy. For clarity, the following example explicitly specifies the strategy. However, the strategy is used by default if no strategy is specified and is not required in a proxy scheme definition.
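
As a rough illustration of the "custom proxy-based strategies" mentioned above, the sketch below ranks proxy members purely by their connection count. It is only a sketch: it assumes the DefaultProxyServiceLoadBalancer(Comparator) constructor and the ProxyServiceLoad.getConnectionCount() accessor present in Coherence 3.7 and later, and the class name is illustrative. Such a class would be wired in through the <load-balancer> element using an <instance>/<class-name> child in place of the literal proxy.

import java.util.Comparator;

import com.tangosol.net.proxy.DefaultProxyServiceLoadBalancer;
import com.tangosol.net.proxy.ProxyServiceLoad;

/**
 * Sketch of a custom proxy-based strategy: order proxy members purely
 * by current connection count, ignoring daemon utilization and backlog.
 * Class name is illustrative, not part of any product API.
 */
public class ConnectionCountLoadBalancer extends DefaultProxyServiceLoadBalancer {
    public ConnectionCountLoadBalancer() {
        super(new Comparator<ProxyServiceLoad>() {
            public int compare(ProxyServiceLoad load1, ProxyServiceLoad load2) {
                // fewer connections == less loaded == preferred
                return Integer.compare(load1.getConnectionCount(),
                                       load2.getConnectionCount());
            }
        });
    }
}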

That description is fairly vague. With WebLogic Server as the front end connecting into a back-end Coherence cluster, let's simulate the actual production setup and see how it really behaves.

proxy-override.xml, used by the proxy nodes:


<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>distributed-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!-- Distributed caching scheme. -->
    <distributed-scheme>
      <scheme-name>distributed-scheme</scheme-name>
      <service-name>DistributedCache</service-name>
      <thread-count>50</thread-count>
      <backup-count>1</backup-count>
      <backing-map-scheme>
        <local-scheme>
          <scheme-name>LocalSizeLimited</scheme-name>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
      <local-storage>false</local-storage>
    </distributed-scheme>

    <local-scheme>
      <scheme-name>LocalSizeLimited</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <high-units>500</high-units>
      <unit-calculator>BINARY</unit-calculator>
      <unit-factor>1048576</unit-factor>
      <expiry-delay>48h</expiry-delay>
    </local-scheme>

    <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <thread-count>10</thread-count>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address>192.168.0.101</address>
            <port>9099</port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <load-balancer>proxy</load-balancer>
      <autostart>true</autostart>
    </proxy-scheme>

  </caching-schemes>
</cache-config>

storage-override.xml, used by the storage node:


<?xml version="1.0"?>

<cache-config>

  <caching-scheme-mapping>
    <!-- The POFSample cache is mapped to the distributed-pof scheme. -->
    <cache-mapping>
      <cache-name>POFSample</cache-name>
      <scheme-name>distributed-pof</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!-- Distributed caching scheme backing the sample cache. -->
    <distributed-scheme>
      <scheme-name>distributed-pof</scheme-name>
      <service-name>DistributedCache</service-name>
      <thread-count>50</thread-count>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>

      <listener/>
      <autostart>true</autostart>
      <local-storage>true</local-storage>
    </distributed-scheme>
  </caching-schemes>
</cache-config>

client.xml, used by the client:


<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>192.168.0.101</address>
              <port>9100</port>
            </socket-address>
            <socket-address>
              <address>192.168.0.101</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
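
Before wiring this configuration into WebLogic, the extend path can be sanity-checked with a minimal standalone client. A sketch, assuming it is run with the same two system properties that are added to WebLogic below (class name illustrative):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

/**
 * Minimal sketch for verifying the extend client configuration before
 * involving WebLogic. Run with
 *   -Dtangosol.coherence.cacheconfig=E:\wls12c\coherence\bin\client.xml
 *   -Dtangosol.coherence.tcmp.enabled=false
 * so this JVM connects through a proxy instead of joining the cluster.
 */
public class ExtendClientSmokeTest {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("POFSample");
        cache.put("smoke", "test");              // travels through the proxy
        System.out.println(cache.get("smoke"));  // expect "test"
        CacheFactory.shutdown();
    }
}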

A second client configuration file, client-2.xml, identical except for a different service name and the reversed proxy address order:


<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService2</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>192.168.0.101</address>
              <port>9099</port>
            </socket-address>
            <socket-address>
              <address>192.168.0.101</address>
              <port>9100</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>

Two proxy nodes are started via proxy-server.cmd and listen on ports 9099 and 9100. (The configuration only specifies 9099; because the tcp-acceptor's port-auto-adjust setting defaults to true, the second proxy automatically moves up to 9100.)


"%java_exec%" -server -showversion -Dtangosol.coherence.mode=prod -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.cacheconfig=E:\wls12c\coherence\bin\proxy-override.xml %java_opts% -cp "%coherence_home%\lib\coherence.jar" com.tangosol.net.DefaultCacheServer %*

One storage node is started via storage.cmd:


"%java_exec%" -server -showversion -Dtangosol.coherence.mode=prod -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.management=all %java_opts% -Dtangosol.coherence.cacheconfig=E:\wls12c\coherence\bin\storage-override.xml -cp "%coherence_home%\lib\coherence.jar" com.tangosol.net.DefaultCacheServer %*

Add the following to WebLogic's setDomainEnv.cmd:


set JAVA_OPTIONS=%JAVA_OPTIONS% -Dtangosol.coherence.cacheconfig="E:\wls12c\coherence\bin\client.xml" -Dtangosol.coherence.tcmp.enabled=false

set CLASSPATH=E:\wls12c\coherence\lib\coherence.jar;%CLASSPATH%

Then deploy a web application whose core is a single JSP file, coput.jsp, which bulk-puts 10,000 objects into the cache:


<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<%@page import="java.util.*"%>
<%@page import="com.tangosol.net.*"%>

<%@ page contentType="text/html;charset=windows-1252"%>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=windows-1252"/>
<title>setName</title>
</head>
<body>
<h3>

<%
// Obtain the cache through the default factory; the extend client
// configuration is picked up from -Dtangosol.coherence.cacheconfig.
NamedCache cache12 = CacheFactory.getCache("POFSample");

// Bulk-put 10,000 entries through the proxy connection.
for (int i = 0; i < 10000; i++) {
    cache12.put(i, "CacheValue=eric" + i);
}
%>
put 10000 records success.........
</h3>

</body>
</html>

Start WebLogic and hit the page; the client connects primarily to the proxy server on 9099.

With no load applied, the proxy threads monitored through jvisualvm show each proxy holding 10 idle threads.

Now apply load with 20 JMeter threads; the pressure falls mainly on the WebLogic server.

Once the load ramps up, the thread view changes: even with the idle thread count at 0 and 6 requests waiting in the backlog, nothing is shifted over to the other proxy.
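
Rather than eyeballing thread dumps, the same saturation can be watched over JMX. A sketch, assuming each proxy JVM exposes a remote JMX port (the URL below is hypothetical) and the standard Coherence ConnectionManagerMBean attribute names; verify both against your installation:

import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

/**
 * Rough sketch: print the extend connection count of each proxy
 * acceptor registered on one proxy JVM.
 */
public class ProxyLoadWatcher {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint of one proxy JVM
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://192.168.0.101:10001/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // All extend proxy acceptors registered by this node
            Set<ObjectName> names = mbs.queryNames(
                    new ObjectName("Coherence:type=ConnectionManager,*"), null);
            for (ObjectName name : names) {
                System.out.println(name + " ConnectionCount="
                        + mbs.getAttribute(name, "ConnectionCount"));
            }
        }
    }
}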

The conclusions are as follows:

  • Coherence treats one WebLogic Server instance as a single client; it does not balance load across the threads inside that client (this is by design, not a bug).
  • WebLogic Server holds a long-lived connection to Coherence, dropped only when no thread has used it beyond the idle timeout. Under heavy load, WebLogic keeps funneling requests through this one connection no matter how saturated it is.
  • A proxy's thread pool is bounded (512 threads at most) and its resources are finite, so under heavy load a single proxy can slow to a crawl or even die. The best remedy is to spread the load across several proxies.
  • When several WebLogic instances connect, they do get balanced across proxies. Strictly speaking, the Coherence cluster balances by the client's NIC and port.

Modify coput.jsp as follows, so that each request picks a cache configuration based on its thread id, loading a different XML and splitting the connections evenly:


<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<%@page import="java.util.*"%>
<%@page import="com.tangosol.net.*"%>

<%@ page contentType="text/html;charset=windows-1252"%>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=windows-1252"/>
<title>setName</title>
</head>
<body>
<h3>

<%
ClassLoader loader = Thread.currentThread().getContextClassLoader();

// Pick a cache configuration based on the current thread id so that
// even and odd threads connect through different proxy address lists.
int threadid = (int) Thread.currentThread().getId();
System.out.println("Thread id=" + threadid);

ConfigurableCacheFactory factory1;
if ((threadid % 2) == 0) {
    factory1 = new DefaultConfigurableCacheFactory(
            "E:\\wls12c\\coherence\\bin\\client-2.xml", loader);
} else {
    factory1 = new DefaultConfigurableCacheFactory(
            "E:\\wls12c\\coherence\\bin\\client.xml", loader);
}
NamedCache cache12 = factory1.ensureCache("POFSample", loader);

// Bulk-put 10,000 entries through whichever proxy this thread selected.
for (int i = 0; i < 10000; i++) {
    cache12.put(i, "CacheValue=eric" + i);
}
%>
put 10000 records success.........
</h3>

</body>
</html>

Run the load test again: this time the load is spread across both proxies, and the back-end WebLogic log shows the calls being dispatched according to thread id.
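
An alternative to juggling two XML files is the client-side address provider mentioned in the quoted documentation: hand the initiator a randomized address list so each new connection picks a proxy at random. A minimal sketch with an illustrative class name; it would be referenced from an <address-provider> element under <remote-addresses> (element placement varies by Coherence version):

import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import com.tangosol.net.AddressProvider;

/**
 * Sketch of a client-side AddressProvider that offers the two proxy
 * addresses from this test in random order on every connection attempt.
 */
public class RandomProxyAddressProvider implements AddressProvider {

    private final List<InetSocketAddress> addresses = new ArrayList<InetSocketAddress>();
    private int next = -1; // index of the address handed out last

    public RandomProxyAddressProvider() {
        addresses.add(new InetSocketAddress("192.168.0.101", 9099));
        addresses.add(new InetSocketAddress("192.168.0.101", 9100));
        Collections.shuffle(addresses); // randomize the connection order
    }

    public synchronized InetSocketAddress getNextAddress() {
        if (++next >= addresses.size()) {
            next = -1;                    // list exhausted for this attempt
            Collections.shuffle(addresses);
            return null;
        }
        return addresses.get(next);
    }

    public synchronized void accept() {
        next = -1; // connection succeeded; start fresh next time
    }

    public synchronized void reject(Throwable cause) {
        // keep iterating; the next getNextAddress() call tries another proxy
    }
}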
