Driven by project needs, I wrote a round of business-oriented automated test code. The main use case: as the business grows more complex, we worry that changes to shared code will break existing features. The code hasn't been through a rewrite yet, but there are quite a few points worth sharing from the practice. First, an introduction to the overall stream-processing business flow.
Data is ingested in real time from the network management systems into Kafka. A preprocessing service then consumes the messages (this service is built on the Jetty framework and launched directly via servlets, because Tomcat's concurrency was judged insufficient). After preprocessing, the messages go back into Kafka, and the different Storm topologies pick them up by message type and run rule matching on them. The rules live in the front-end system and are flushed into Redis on a schedule (once per minute). The topologies then load user card data, user profile data, and so on; these are wide tables generated by nightly MapReduce jobs and imported straight into Redis, so lookups are efficient enough for real-time processing (if a key is missing from Redis, the topology falls back to HBase and queries there). Next comes the business processing itself (sometimes calling the various network management interfaces to fetch additional business data). The assembled result is then sent to a downstream notification topology, which pushes it via WebService or RestTemplate to the other platforms such as WeChat, Alipay, and SMS. Finally, the full run log is written to HBase.
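To make the Redis-first, HBase-fallback lookup concrete, here is a minimal sketch. The Jedis client, host names, key prefix, and table name are all illustrative assumptions, not the project's actual code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
import redis.clients.jedis.Jedis;

public class UserCacheLoader {
    private final Jedis jedis = new Jedis("redis-host", 6379); // assumed address
    private final HConnection connection;

    public UserCacheLoader() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        connection = HConnectionManager.createConnection(conf);
    }

    /** Try Redis first; on a miss, fall back to the HBase wide table. */
    public String loadUser(String rowkey) throws Exception {
        String cached = jedis.get("USER:" + rowkey); // key prefix is an assumption
        if (cached != null) {
            return cached;
        }
        HTableInterface table = connection.getTable("USER_WIDE_TABLE"); // illustrative name
        try {
            Result result = table.get(new Get(Bytes.toBytes(rowkey)));
            return result.isEmpty() ? null : Bytes.toString(result.value());
        } finally {
            table.close();
        }
    }
}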
First, let's prepare some common classes we will need. The KafkaClient:
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaClient<K, V> {

    private Properties properties;
    private String defaultTopic;
    private KafkaProducer<K, V> producer;

    public void setProperties(Properties properties) {
        this.properties = properties;
    }

    public void setDefaultTopic(String defaultTopic) {
        this.defaultTopic = defaultTopic;
    }

    public void setProducer(KafkaProducer<K, V> producer) {
        this.producer = producer;
    }

    public void init() {
        if (properties == null) {
            throw new NullPointerException("kafka properties is null.");
        }
        this.producer = new KafkaProducer<K, V>(properties);
    }

    // Synchronous send: block until the broker acknowledges the record.
    public void syncSend(V value) {
        ProducerRecord<K, V> producerRecord = new ProducerRecord<K, V>(defaultTopic, value);
        try {
            producer.send(producerRecord).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Asynchronous send: fire and forget.
    public void asyncSend(V value) {
        ProducerRecord<K, V> producerRecord = new ProducerRecord<K, V>(defaultTopic, value);
        producer.send(producerRecord);
    }
}
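For reference, a minimal sketch of how this client could be wired up; the broker address and topic name are placeholders, not the project's real configuration:

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092"); // assumed broker list
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

KafkaClient<String, byte[]> kafkaClient = new KafkaClient<String, byte[]>();
kafkaClient.setProperties(props);
kafkaClient.setDefaultTopic("TEST_TOPIC"); // illustrative topic name
kafkaClient.init();
kafkaClient.syncSend("hello".getBytes()); // blocks until the broker acks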
HbaseUtil (implemented as the HbaseResult class below):
private static final Logger logger = LoggerFactory.getLogger(HbaseResult.class);

private Gson gson = new Gson();
private HConnection connection = null;
private Configuration conf = null;
private String logFile = "D:/error.txt"; // comparison results are appended here

public void init() throws IOException {
    logger.info("start init HBasehelper...");
    conf = HBaseConfiguration.create();
    connection = HConnectionManager.createConnection(conf);
    logger.info("init HBasehelper successed!");
}

public synchronized HConnection getConnection() throws IOException {
    if (connection == null) {
        connection = HConnectionManager.createConnection(conf);
    }
    return connection;
}

private synchronized void closeConnection() {
    if (connection != null) {
        try {
            connection.close();
        } catch (IOException e) {
            // ignore: the connection is being discarded anyway
        }
    }
    connection = null;
}
The KafkaClient is responsible for sending the message text it reads to Kafka; the topologies then do their own processing, and at the end HbaseUtil is used to query HBase and compare the relevant fields.
Now, a walkthrough of the whole automated test flow:
1. Import the front-end activities. Since this is automated testing, we can't manually take activities on/offline or enable an activity through the UI every time. Instead we call the front-end system's import function directly, which writes the activity configuration into the MySQL database and switches its status.
// Wrapped in a method signature for completeness; the original fragment had none.
public List<String> importActivities() throws Exception {
    List<String> codeList = new ArrayList<String>();
    List<String> activityIdList = new ArrayList<String>();
    StringBuffer activityCode = new StringBuffer(); // quoted codes, e.g. 'A001','A002'
    BufferedReader br = null;
    try {
        FileBody bin = new FileBody(new File("D:/activityTest/activity.ac"));
        InputStream in = bin.getInputStream();
        br = new BufferedReader(new InputStreamReader(in));
        String tr = null;
        while ((tr = br.readLine()) != null) {
            HttpPost httppost = new HttpPost("http://*********:8088/***/***/***/import");
            CloseableHttpClient httpclient = HttpClients.createDefault();
            // Each line of the .ac file is one JSON-serialized activity; collect its code.
            ObjectMapper mapper = new ObjectMapper();
            ActivityConfig cloneActivity = mapper.readValue(tr, ActivityConfig.class);
            List<ActivityConfig> cloneActivitys = new ArrayList<ActivityConfig>(); // holds all activities
            cloneActivitys.add(cloneActivity);
            for (ActivityConfig cloneActivity1 : cloneActivitys) {
                codeList.add(cloneActivity1.getCode());
            }
            // Upload the activity file to the front-end system's import endpoint.
            HttpEntity reqEntity = MultipartEntityBuilder.create()
                    .addPart("file", bin)
                    .build();
            httppost.setEntity(reqEntity);
            System.out.println("executing request " + httppost.getRequestLine());
            CloseableHttpResponse response = httpclient.execute(httppost);
            System.out.println(response.getStatusLine());
            HttpEntity resEntity = response.getEntity();
            if (resEntity != null) {
                System.out.println("Response content length: " + resEntity.getContentLength());
            }
            EntityUtils.consume(resEntity);
            response.close();
            httpclient.close();
        }
        // Build a comma-separated, quoted code list (the original used full-width quotes).
        for (String code : codeList) {
            if (activityCode.length() > 0) {
                activityCode.append(",");
            }
            activityCode.append("'").append(code).append("'");
        }
    } finally {
        if (br != null) {
            br.close();
        }
    }
    return activityIdList; // note: the original never populates this list
}
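The import endpoint persists the configuration; the status switch mentioned above ultimately lands in MySQL. If you ever needed to flip the status directly, it boils down to a plain UPDATE. A hedged sketch; the JDBC URL, credentials, table name (t_activity), and columns (status, code) are made up for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

// Hypothetical direct status switch; schema names are assumptions.
public void bringActivitiesOnline(List<String> codeList) throws Exception {
    Connection conn = DriverManager.getConnection(
            "jdbc:mysql://db-host:3306/activity_db", "user", "password");
    PreparedStatement ps = conn.prepareStatement(
            "UPDATE t_activity SET status = ? WHERE code = ?");
    try {
        for (String code : codeList) {
            ps.setString(1, "ONLINE"); // mark the imported activity as online
            ps.setString(2, code);
            ps.executeUpdate();
        }
    } finally {
        ps.close();
        conn.close();
    }
}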
2. Read the prepared message data (XML-format messages need parsing; delimiter-separated ones are read and sent straight to Kafka).
public String readTxt() throws IOException {
    StringBuffer sendMessage = new StringBuffer();
    BufferedReader br = null;
    try {
        br = new BufferedReader(
                new InputStreamReader(new FileInputStream(MessageText), "UTF-8"));
        String line = "";
        while ((line = br.readLine()) != null) {
            // Strip anything before the XML declaration so the payload starts clean.
            if (line.contains("<?xml")) {
                int beginIndex = line.indexOf("<?xml");
                line = line.substring(beginIndex);
            }
            sendMessage.append(line);
        }
    } catch (UnsupportedEncodingException e) {
        e.printStackTrace();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } finally {
        if (br != null) { // guard against the NPE the original risked when the file is missing
            br.close();
        }
    }
    return sendMessage.toString();
}
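For the delimiter-separated variant mentioned in the step title, the record can be split straight into the dataMap that step 5 compares against. A minimal sketch, assuming a '|' separator and a caller-supplied field order (both are assumptions, the real message spec decides):

import java.util.HashMap;
import java.util.Map;

// Illustrative only: separator and field order depend on the message spec.
public Map<String, Object> parseDelimited(String line, String[] fieldNames) {
    Map<String, Object> dataMap = new HashMap<String, Object>();
    String[] parts = line.split("\\|");
    for (int i = 0; i < fieldNames.length && i < parts.length; i++) {
        dataMap.put(fieldNames[i], parts[i]);
    }
    return dataMap;
}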
3. Next, we write the parsed message data into the corresponding HBase user and card wide tables so the Storm topologies can load the user data. The rowkeys here are built for pre-split regions.
HbaseResult baseHelper = new HbaseResult();
baseHelper.init();

tableName = "CARD****";
rowkey = HTableManager.generatRowkey(cardNo); // pre-partition-aware rowkey, see the sketch below

// Column qualifiers are masked; they carry card number, cert number, card type, etc.
data.put("*****", "10019");
data.put("*****", cardNo);
data.put("*****", certNo);
data.put("*****", "A");
data.put("*****", "1019");
data.put("*****", supplementCardNo);
data.put("*****", "10020");
data.put("*****", certNo);
data.put("*****", cardType);
data.put("*****", cardType);
data.put("*****", cardNo.substring(12, 16));
data.put("*****", "F");
data.put("*****", "ysy");

Put put = new Put(Bytes.toBytes(rowkey));
for (Entry<String, String> rs : data.entrySet()) {
    put.add(HTableManager.DEFAULT_FAMILY_NAME,
            Bytes.toBytes(rs.getKey()),
            Bytes.toBytes(rs.getValue()));
}
baseHelper.put(tableName, put);
System.out.println("rowkey:" + rowkey);
data.clear();
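HTableManager.generatRowkey itself isn't shown in this post. For a pre-split table, the usual trick is to prefix the business key with a hash-derived salt so writes spread evenly across regions. This is a guess at the shape of such a method, not its actual code; the bucket count of 100 is an assumption:

// Hypothetical salted-rowkey builder for a pre-split table.
public static String generatRowkey(String bizKey) {
    int salt = Math.abs(bizKey.hashCode()) % 100; // 100 buckets, matching the pre-split regions
    return String.format("%02d", salt) + bizKey;  // two-digit salt prefix + business key
}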
4. Then the message can be sent (to the cluster's Kafka).
// Initialize the Kafka client, FST-serialize the payload, and send it synchronously.
KafkaInit();
FSTConfiguration fstConf = FSTConfiguration.getDefaultConfiguration();
kafkaClient.syncSend(fstConf.asByteArray(kafkaData));
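On the topology side, FST deserialization is the mirror image of asByteArray. A one-line sketch, where messageBytes stands for the value consumed from Kafka (the name is illustrative):

// Counterpart of the send above; FST round-trips the serialized object.
FSTConfiguration fstConf = FSTConfiguration.getDefaultConfiguration();
Object kafkaData = fstConf.asObject(messageBytes); // messageBytes: the consumed Kafka value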
5. Finally, compare the fields of the sent data (the preset fields in the message are compared against the final output fields/results, and the outcome is appended to an output file).
Result result = baseHelper.getResult("EVENT_LOG_DH", messageKey);
// Compare the fields.
baseHelper.compareData(dataMap, result, activityCode);

public Result getResult(String tableName, String rowKey) throws IOException {
    Get get = new Get(Bytes.toBytes(rowKey));
    Result result = null;
    HTableInterface tableInterface = null;
    try {
        tableInterface = getConnection().getTable(tableName);
        result = tableInterface.get(get);
        return result;
    } catch (Exception e) {
        closeConnection();
        logger.error("", e);
    } finally {
        if (tableInterface != null) {
            tableInterface.close();
        }
    }
    return result; // the original was missing this return and the closing brace
}

public void compareData(Map<String, Object> messageData, Result res, List<String> activityCode)
        throws IOException {
    List<String> messages = new ArrayList<String>();
    for (Cell cell : res.rawCells()) {
        String qualifier = Bytes.toString(CellUtil.cloneQualifier(cell));
        if (qualifier.equalsIgnoreCase("VARIABLESETS")) {
            System.out.println(qualifier + "["
                    + new Gson().fromJson(Bytes.toString(CellUtil.cloneValue(cell)), Map.class) + "]");
            @SuppressWarnings("unchecked")
            Map<String, Object> data = gson.fromJson(Bytes.toString(CellUtil.cloneValue(cell)), Map.class);
            String message = "";
            for (String datakey : data.keySet()) {
                if (messageData.containsKey(datakey)) {
                    String dataValue = getString(data, datakey);
                    String messageValue = getString(messageData, datakey);
                    if (datakey.equals("dh22")) {
                        // Compare only the integer part of this amount field.
                        dataValue = dataValue.substring(0, dataValue.indexOf("."));
                        messageValue = messageValue.substring(0, messageValue.indexOf("."));
                    }
                    // Note: the original printed dataValue twice; messageValue is the
                    // value from the message and is what should be shown.
                    if (dataValue.equals(messageValue)) {
                        message = datakey + " = " + dataValue + " matches the message value " + messageValue;
                    } else {
                        message = datakey + " = " + dataValue + " does NOT match the message value " + messageValue + "!!!";
                    }
                    messages.add(message);
                }
            }
        }
        if (qualifier.equalsIgnoreCase("NOTIFY__")) {
            // Notification fields could be checked here as well; left empty in the original.
        }
    }
    if (messages.size() > 0) {
        StringBuffer sb = new StringBuffer();
        for (String line : messages) {
            sb.append(line).append("\n");
        }
        FileWriter fw = new FileWriter(logFile, true); // append mode
        fw.write("\n----------------------");
        fw.write(sb.toString());
        fw.flush();
        fw.close();
    } else {
        FileWriter fw = new FileWriter(logFile);
        fw.write("No mismatched fields!");
        fw.flush();
        fw.close();
    }
}
6. Clean up the imported data and related records, and the whole flow is finished.
public void delHbaseData(String cardNo, String certNo) throws IOException {
    // Remove the rows inserted in step 3 from the card and user wide tables.
    String rowkeyCard = HTableManager.generatRowkey(cardNo);
    String rowkeyUser = HTableManager.generatRowkey(certNo);
    HTableInterface tableInterface = null;
    String tableName = ""; // masked in the original; the card/user table names go here
    try {
        tableInterface = getConnection().getTable(tableName);
        // The original left the Delete null; build one per rowkey instead.
        tableInterface.delete(new Delete(Bytes.toBytes(rowkeyCard)));
        tableInterface.delete(new Delete(Bytes.toBytes(rowkeyUser)));
    } catch (Exception e) {
        closeConnection();
        logger.error("", e);
    } finally {
        if (tableInterface != null) {
            tableInterface.close();
        }
    }
}