A Study of Data Loading in Coherence - Invocation Service

This post verifies the third approach. As before, the data to be loaded is spread across all the storage nodes; the difference is that the loading is driven through the InvocationService provided by the storage nodes, rather than through PreloadRequest.

The principle, as shown in the diagram from the original post: the client divides the key set among the storage members and sends each member its own load task through the InvocationService.

The preconditions are:

  • All keys to be loaded must be known in advance.
  • The keys must be divided among the storage members according to their number. Here this is done by the method Map<Member, List<String>> divideWork(Set members), which takes the set of Coherence storage members as input and returns a map keyed by member, with that member's share of the entry keys as the value.
  • The loading itself is driven by the run() method of MyLoadInvocable, which loads the data on each node. MyLoadInvocable must extend AbstractInvocable and implement PortableObject. I could not work out why at first: when I tried plain Serializable instead, it failed. The likely reason is that the invocation service serializes tasks with POF, so every class sent across the wire must be registered as a POF user type (see the configuration sketch after this list).
  • While splitting the keys into per-member tasks, I found that the List<String> placed in the map was being overwritten by later values; creating a new List for each map entry fixed this.
  • No CacheLoader or CacheStore implementation is required.
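
Since MyLoadInvocable is shipped to the storage members as a POF object, it must be registered as a POF user type on every node, and POF must be enabled for the service (typically via -Dtangosol.pof.enabled=true and -Dtangosol.pof.config, or a <serializer> element in the scheme). The original post does not show this file, so what follows is a minimal sketch under those assumptions; the file name pof-config.xml and the type-id 1001 are arbitrary choices:

<?xml version="1.0"?>
<!DOCTYPE pof-config SYSTEM "pof-config.dtd">
<pof-config>
  <user-type-list>
    <!-- Coherence's built-in POF types must be included first -->
    <include>coherence-pof-config.xml</include>

    <!-- register the load task; any unreserved type-id will do -->
    <user-type>
      <type-id>1001</type-id>
      <class-name>dataload.MyLoadInvocable</class-name>
    </user-type>
  </user-type-list>
</pof-config>

This would also explain the Serializable failure noted above: once a service serializes with POF, an object that is merely Serializable cannot be deserialized on the receiving member.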

Person.java


package dataload;

import java.io.Serializable;

public class Person implements Serializable {

    private String id;
    private String firstname;
    private String lastname;
    private String address;

    public Person() {
        super();
    }

    public Person(String sId, String sFirstname, String sLastname, String sAddress) {
        id = sId;
        firstname = sFirstname;
        lastname = sLastname;
        address = sAddress;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    public void setFirstname(String firstname) {
        this.firstname = firstname;
    }

    public String getFirstname() {
        return firstname;
    }

    public void setLastname(String lastname) {
        this.lastname = lastname;
    }

    public String getLastname() {
        return lastname;
    }

    public void setAddress(String address) {
        this.address = address;
    }

    public String getAddress() {
        return address;
    }
}

MyLoadInvocable.java

The load task: its run() method does the actual work of loading the data on each node.


package dataload;

import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;
import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

import java.io.IOException;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import java.util.Hashtable;
import java.util.List;

import javax.naming.Context;
import javax.naming.InitialContext;

public class MyLoadInvocable extends AbstractInvocable implements PortableObject {

    private List<String> m_memberKeys;
    private String m_cache;

    public MyLoadInvocable() {
        super();
    }

    public MyLoadInvocable(List<String> memberKeys, String cache) {
        m_memberKeys = memberKeys;
        m_cache = cache;
    }

    public Connection getConnection() {
        Connection con = null;
        try {
            Hashtable<String, String> ht = new Hashtable<String, String>();
            ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            ht.put(Context.PROVIDER_URL, "t3://localhost:7001");
            Context ctx = new InitialContext(ht);
            javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup("ds");
            con = ds.getConnection();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
        return con;
    }

    // Executed on each storage member: fetch the rows for this member's
    // share of the keys and put them into the cache locally.
    public void run() {
        System.out.println("Enter MyLoadInvocable run....");
        NamedCache cache = CacheFactory.getCache(m_cache);
        Connection con = getConnection();
        String sSQL = "SELECT id, firstname, lastname, address FROM persons WHERE id = ?";

        try {
            PreparedStatement stmt = con.prepareStatement(sSQL);
            for (int i = 0; i < m_memberKeys.size(); i++) {
                String id = m_memberKeys.get(i);
                stmt.setString(1, id);
                ResultSet rslt = stmt.executeQuery();
                if (rslt.next()) {
                    Person person = new Person(rslt.getString("id"),
                            rslt.getString("firstname"),
                            rslt.getString("lastname"),
                            rslt.getString("address"));
                    cache.put(person.getId(), person);
                }
                rslt.close();
            }
            stmt.close();
            con.close();
        } catch (Exception e) {
            System.out.println("===" + e.getMessage());
        }
    }

    public void readExternal(PofReader in) throws IOException {
        m_memberKeys = (List<String>) in.readObject(0);
        m_cache = (String) in.readObject(1);
    }

    public void writeExternal(PofWriter out) throws IOException {
        out.writeObject(0, m_memberKeys);
        out.writeObject(1, m_cache);
    }
}

LoaderUsingEP.java

The loading client: it is responsible for dividing the work, looking up the InvocationService, and dispatching the tasks.


package dataload;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;
import com.tangosol.net.Member;
import com.tangosol.net.NamedCache;
import com.tangosol.net.PartitionedService;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;

import javax.naming.Context;
import javax.naming.InitialContext;

public class LoaderUsingEP {

    private Connection m_con;

    public Connection getConnection() {
        try {
            Hashtable<String, String> ht = new Hashtable<String, String>();
            ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            ht.put(Context.PROVIDER_URL, "t3://localhost:7001");
            Context ctx = new InitialContext(ht);
            javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup("ds");
            m_con = ds.getConnection();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
        return m_con;
    }

    protected Set getStorageMembers(NamedCache cache) {
        return ((PartitionedService) cache.getCacheService())
                .getOwnershipEnabledMembers();
    }

    // Divide the keys evenly among the storage members: each member gets
    // totalcount / membercount keys, and the last member also takes the remainder.
    protected Map<Member, List<String>> divideWork(Set members) {
        Iterator i = members.iterator();
        Map<Member, List<String>> mapWork =
                new HashMap<Member, List<String>>(members.size());

        try {
            int totalcount = 0;
            int membercount = members.size();
            Connection con = getConnection();
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("select count(*) from persons");
            while (rs.next())
                totalcount = Integer.parseInt(rs.getString(1));

            int onecount = totalcount / membercount;
            int lastcount = totalcount % membercount;

            ResultSet rs1 = st.executeQuery("select id from persons");
            int count = 0;
            int currentworker = 0;
            ArrayList<String> list = new ArrayList<String>();

            while (rs1.next()) {
                if (count < onecount) {
                    list.add(rs1.getString("id"));
                    count++;
                } else {
                    Member member = (Member) i.next();

                    // Put a fresh copy into the map; reusing the same List
                    // instance lets later iterations overwrite earlier entries.
                    ArrayList<String> list2 = new ArrayList<String>();
                    list2.addAll(list);
                    mapWork.put(member, list2);

                    list.clear();
                    count = 0;
                    list.add(rs1.getString("id"));
                    count++;

                    currentworker++;

                    // The last member absorbs the remainder of the division.
                    if (currentworker == membercount - 1) {
                        onecount = onecount + lastcount;
                    }
                }
            }

            // Assign the final chunk to the remaining member.
            Member member = (Member) i.next();
            mapWork.put(member, list);

            st.close();
            con.close();
        } catch (Exception e) {
            System.out.println("Exception...." + e.getMessage());
        }

        for (Map.Entry<Member, List<String>> entry : mapWork.entrySet()) {
            System.out.println("final=" + entry.getKey());
            List<String> memberKeys = entry.getValue();
            for (int j = 0; j < memberKeys.size(); j++) {
                System.out.println(memberKeys.get(j));
            }
        }
        return mapWork;
    }

    public void load() {
        NamedCache cache = CacheFactory.getCache("SampleCache");

        Set members = getStorageMembers(cache);
        System.out.println("members=" + members.size());

        Map<Member, List<String>> mapWork = divideWork(members);

        InvocationService service = (InvocationService)
                CacheFactory.getService("LocalInvocationService");

        // Dispatch each member's share of the keys as a task to that member.
        for (Map.Entry<Member, List<String>> entry : mapWork.entrySet()) {
            Member member = entry.getKey();
            List<String> memberKeys = entry.getValue();
            System.out.println(memberKeys.size());

            MyLoadInvocable task = new MyLoadInvocable(memberKeys, cache.getCacheName());
            service.execute(task, Collections.singleton(member), null);
        }
    }

    public static void main(String[] args) {
        LoaderUsingEP ep = new LoaderUsingEP();
        ep.load();
    }
}
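
One thing worth noting about the dispatch loop above: execute() is asynchronous, so the client returns immediately without knowing when each member has finished. InvocationService also offers a blocking query() method; the following is a sketch of the same loop in blocking form, reusing the mapWork, cache, and service variables from load(). Since query() is called member by member, this serializes the load; for a parallel-but-awaited load one would instead keep execute() and pass an InvocationObserver as the third argument.

// Blocking variant: query() returns only after the targeted member has
// finished; the result Map is keyed by member (values are null here,
// since run() never calls setResult()).
for (Map.Entry<Member, List<String>> entry : mapWork.entrySet()) {
    MyLoadInvocable task =
            new MyLoadInvocable(entry.getValue(), cache.getCacheName());
    Map results = service.query(task, Collections.singleton(entry.getKey()));
    System.out.println("member finished: " + entry.getKey() + " -> " + results);
}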

The client-side schema that needs to be configured:

storage-override-client.xml


<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <!-- SampleCache is mapped to the DB-backed distributed scheme. -->
    <cache-mapping>
      <cache-name>SampleCache</cache-name>
      <scheme-name>distributed-pof</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!-- DB-backed distributed caching scheme. -->
    <distributed-scheme>
      <scheme-name>distributed-pof</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>dataload.DBCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>persons</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <listener/>
      <autostart>true</autostart>
      <!-- the client is not storage-enabled -->
      <local-storage>false</local-storage>
    </distributed-scheme>

    <invocation-scheme>
      <scheme-name>my-invocation</scheme-name>
      <service-name>LocalInvocationService</service-name>
      <thread-count>5</thread-count>
      <autostart>true</autostart>
    </invocation-scheme>
  </caching-schemes>
</cache-config>

The storage-node schema:


<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <!-- SampleCache is mapped to the DB-backed distributed scheme. -->
    <cache-mapping>
      <cache-name>SampleCache</cache-name>
      <scheme-name>distributed-pof</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!-- DB-backed distributed caching scheme. -->
    <distributed-scheme>
      <scheme-name>distributed-pof</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>dataload.DBCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>persons</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <listener/>
      <autostart>true</autostart>
      <!-- storage nodes hold the data -->
      <local-storage>true</local-storage>
    </distributed-scheme>

    <invocation-scheme>
      <scheme-name>my-invocation</scheme-name>
      <service-name>LocalInvocationService</service-name>
      <thread-count>5</thread-count>
      <autostart>true</autostart>
    </invocation-scheme>
  </caching-schemes>
</cache-config>
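
For completeness, this is roughly how the pieces would be started. The original post does not show the launch commands, so the commands below are a sketch: the storage-node file name storage-override.xml is an assumption (only the client file is named in the post), and the POF properties follow the assumptions discussed earlier (Coherence 3.x style system properties).

# one or more storage nodes
java -Dtangosol.coherence.cacheconfig=storage-override.xml \
     -Dtangosol.pof.enabled=true \
     -Dtangosol.pof.config=pof-config.xml \
     com.tangosol.net.DefaultCacheServer

# the loading client
java -Dtangosol.coherence.cacheconfig=storage-override-client.xml \
     -Dtangosol.pof.enabled=true \
     -Dtangosol.pof.config=pof-config.xml \
     dataload.LoaderUsingEP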

Output

As the output shows, the data is loaded in separate slices on each storage member.
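
As a quick sanity check, the client can compare the cache size against the row count once loading is done (a sketch; it assumes the blocking query() variant shown earlier, so that all tasks have completed by this point):

NamedCache cache = CacheFactory.getCache("SampleCache");
System.out.println("entries loaded: " + cache.size());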
