Flume high-concurrency optimization (8): extending the multi-file source with checkpoint resume

In many cases, to avoid losing data, the data-collection side is extended with resume-from-checkpoint support. As our company's logging system matured, we developed this checkpoint feature on top of our existing multi-file source. The approach is laid out below for everyone to discuss:

Core flow: scan filepath for files matching filenameRegExp; restore each file's last saved offset from the checkpoint file; tail every file on its own worker thread; batch lines into events for the channel; and persist offsets back to the checkpoint file every flushTime milliseconds.

Source code:

/*
 * Author: Xu Shu
 * Date: May 3, 2016
 * Purpose: tail every file under a directory whose name matches a regular expression
 * Email: [email protected]
 * To detect all files in a folder
 */

package org.apache.flume.source;

import com.google.common.base.Preconditions;
import com.google.common.util.concurrent.ThreadFactoryBuilder;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDrivenSource;
import org.apache.flume.SystemClock;
import org.apache.flume.channel.ChannelProcessor;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.instrumentation.SourceCounter;
import org.apache.flume.tools.HostUtils;
import org.mortbay.util.ajax.JSON;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.*;
import java.nio.charset.Charset;
import java.util.*;
import java.util.concurrent.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 *  steps:
 *    1. configure a single directory path
 *    2. find all files matching the RegExp
 *    3. tail each matched file
 *    4. batch events to the channel
 *
 *  demo:
 *    demo.sources.s1.type = org.apache.flume.source.ExecTailSource
 *    demo.sources.s1.filepath=/export/home/tomcat/logs/auth.el.net/
 *    demo.sources.s1.filenameRegExp=(.log{1})$
 *    demo.sources.s1.tailing=true
 *    demo.sources.s1.readinterval=300
 *    demo.sources.s1.startAtBeginning=false
 *    demo.sources.s1.restart=true
 */
public class ExecTailSource extends AbstractSource implements EventDrivenSource,
    Configurable {

  private static final Logger logger = LoggerFactory
      .getLogger(ExecTailSource.class);

  private SourceCounter sourceCounter;
  private ExecutorService executor;
  private List<ExecRunnable> listRuners;
  private List<Future<?>> listFuture;
  private long restartThrottle;
  private boolean restart = true;
  private boolean logStderr;
  private Integer bufferCount;
  private long batchTimeout;
  private Charset charset;
  private String filepath;
  private String filenameRegExp;
  private boolean tailing;
  private Integer readinterval;
  private boolean startAtBeginning;
  private boolean contextIsJson;
  private String fileWriteJson;
  private Long flushTime;
  private boolean contextIsFlumeLog;

  @Override
  public void start() {
    logger.info("=start=> flume tail source start begin time:"+new Date().toString());
    logger.info("ExecTail source starting with filepath:{}", filepath);

    List<String> listFiles = getFileList(filepath);
    Preconditions.checkState(listFiles != null && !listFiles.isEmpty(),
            "No files under filepath match filenameRegExp");

    Properties prop=null;

    try{
      prop = new Properties(); // holds the persisted offsets, keyed by file path
      FileInputStream fis = new FileInputStream(fileWriteJson); // checkpoint file stream
      prop.load(fis);
      fis.close();
    }catch(Exception ex){
      logger.error("==>",ex);
    }

    // One worker thread per matched file; each ExecRunnable tails a single file.
    executor = Executors.newFixedThreadPool(listFiles.size());

    listRuners = new ArrayList<ExecRunnable>();
    listFuture = new ArrayList<Future<?>>();

    logger.info("files size is {} ", listFiles.size());
    // FIXME: Use a callback-like executor / future to signal us upon failure.
    for(String oneFilePath : listFiles){
      ExecRunnable runner = new ExecRunnable(getChannelProcessor(), sourceCounter,
              restart, restartThrottle, logStderr, bufferCount, batchTimeout,
              charset,oneFilePath,tailing,readinterval,startAtBeginning,contextIsJson,
              prop,fileWriteJson,flushTime,contextIsFlumeLog);
      listRuners.add(runner);
      Future<?> runnerFuture = executor.submit(runner);
      listFuture.add(runnerFuture);
      logger.info("{} is begin running",oneFilePath);
    }

    /*
     * NB: This comes at the end rather than the beginning of the method because
     * it sets our state to running. We want to make sure the executor is alive
     * and well first.
     */
    sourceCounter.start();
    super.start();
    logger.info("=start=> flume tail source start end time:"+new Date().toString());
    logger.debug("ExecTail source started");
  }

  @Override
  public void stop() {

    logger.info("=stop=> flume tail source stop begin time:"+new Date().toString());
    if(listRuners !=null && !listRuners.isEmpty()){
      for(ExecRunnable oneRunner : listRuners){
        if(oneRunner != null) {
          oneRunner.setRestart(false);
          oneRunner.kill();
        }
      }
    }

    if(listFuture !=null && !listFuture.isEmpty()){
      for(Future<?> oneFuture : listFuture){
        if (oneFuture != null) {
          logger.debug("Stopping ExecTail runner");
          oneFuture.cancel(true);
          logger.debug("ExecTail runner stopped");
        }
      }
    }

    executor.shutdown();
    while (!executor.isTerminated()) {
      logger.debug("Waiting for ExecTail executor service to stop");
      try {
        executor.awaitTermination(500, TimeUnit.MILLISECONDS);
      } catch (InterruptedException e) {
        logger.debug("Interrupted while waiting for ExecTail executor service "
            + "to stop. Just exiting.");
        Thread.currentThread().interrupt();
      }
    }

    sourceCounter.stop();
    super.stop();
    logger.info("=stop=> flume tail source stop end time:"+new Date().toString());

  }

  @Override
  public void configure(Context context) {

    filepath = context.getString("filepath");
    Preconditions.checkState(filepath != null,
        "The parameter filepath must be specified");
    logger.info("The parameter filepath is {}" ,filepath);

    filenameRegExp = context.getString("filenameRegExp");
    Preconditions.checkState(filenameRegExp != null,
            "The parameter filenameRegExp must be specified");
    logger.info("The parameter filenameRegExp is {}" ,filenameRegExp);

    contextIsJson= context.getBoolean(ExecSourceConfigurationConstants.CONFIG_CONTEXTISJSON_THROTTLE,
            ExecSourceConfigurationConstants.DEFAULT_CONTEXTISJSON);

    contextIsFlumeLog=context.getBoolean(ExecSourceConfigurationConstants.CONFIG_CONTEXTISFLUMELOG_THROTTLE,
            ExecSourceConfigurationConstants.DEFAULT_CONTEXTISFLUMELOG);

    fileWriteJson= context.getString(ExecSourceConfigurationConstants.CONFIG_FILEWRITEJSON_THROTTLE,
            ExecSourceConfigurationConstants.DEFAULT_FILEWRITEJSON);

    flushTime= context.getLong(ExecSourceConfigurationConstants.CONFIG_FLUSHTIME_THROTTLE,
            ExecSourceConfigurationConstants.DEFAULT_FLUSHTIME);

    restartThrottle = context.getLong(ExecSourceConfigurationConstants.CONFIG_RESTART_THROTTLE,
        ExecSourceConfigurationConstants.DEFAULT_RESTART_THROTTLE);

    tailing = context.getBoolean(ExecSourceConfigurationConstants.CONFIG_TAILING_THROTTLE,
            ExecSourceConfigurationConstants.DEFAULT_ISTAILING_TRUE);

    readinterval=context.getInteger(ExecSourceConfigurationConstants.CONFIG_READINTERVAL_THROTTLE,
            ExecSourceConfigurationConstants.DEFAULT_READINTERVAL);

    startAtBeginning=context.getBoolean(ExecSourceConfigurationConstants.CONFIG_STARTATBEGINNING_THROTTLE,
            ExecSourceConfigurationConstants.DEFAULT_STARTATBEGINNING);

    restart = context.getBoolean(ExecSourceConfigurationConstants.CONFIG_RESTART,
        ExecSourceConfigurationConstants.DEFAULT_RESTART_TRUE);

    logStderr = context.getBoolean(ExecSourceConfigurationConstants.CONFIG_LOG_STDERR,
        ExecSourceConfigurationConstants.DEFAULT_LOG_STDERR);

    bufferCount = context.getInteger(ExecSourceConfigurationConstants.CONFIG_BATCH_SIZE,
        ExecSourceConfigurationConstants.DEFAULT_BATCH_SIZE);

    batchTimeout = context.getLong(ExecSourceConfigurationConstants.CONFIG_BATCH_TIME_OUT,
        ExecSourceConfigurationConstants.DEFAULT_BATCH_TIME_OUT);

    charset = Charset.forName(context.getString(ExecSourceConfigurationConstants.CHARSET,
        ExecSourceConfigurationConstants.DEFAULT_CHARSET));

    if (sourceCounter == null) {
      sourceCounter = new SourceCounter(getName());
    }
  }

  /**
   * List all files under the given path, recursively.
   *
   * @param dir the directory to scan
   * @return absolute paths of all files whose names match filenameRegExp
   */
  public  List<String> getFileList(String dir) {
    List<String> listFile = new ArrayList<String>();
    File dirFile = new File(dir);
    //If the path is not a directory, fall through and return the empty list
    if (dirFile.isDirectory()) {
      //List the directory's entries and handle each one by type
      File[] files = dirFile.listFiles();
      if (null != files && files.length > 0) {
        //Sort by last-modified time, oldest first
        Arrays.sort(files, new Comparator<File>() {
          public int compare(File f1, File f2) {
            // Long.compare avoids the overflow risk of casting the difference to int
            return Long.compare(f1.lastModified(), f2.lastModified());
          }
        });
        for (File file : files) {
          //Plain file: add it if the name matches the configured pattern
          if (!file.isDirectory()) {
            String oneFileName = file.getName();
            if(match(filenameRegExp,oneFileName)){
              listFile.add(file.getAbsolutePath());
              logger.info("filename:{} matched",oneFileName);
            }
          } else {
            //Directory: recurse into it
            listFile.addAll(getFileList(file.getAbsolutePath()));
          }
        }
      }
    }else{
      logger.info("FilePath:{} is not a directory",dir);
    }
    return listFile;
  }

  /**
   * @param regex the regular expression to apply
   * @param str the string to test
   * @return true if str matches regex, otherwise false
   */
  private boolean match(String regex, String str) {
    Pattern pattern = Pattern.compile(regex);
    Matcher matcher = pattern.matcher(str);
    return matcher.find();
  }

  private static class ExecRunnable implements Runnable {

    public ExecRunnable(ChannelProcessor channelProcessor,
        SourceCounter sourceCounter, boolean restart, long restartThrottle,
        boolean logStderr, int bufferCount, long batchTimeout,
        Charset charset, String filepath, boolean tailing,
        Integer readinterval, boolean startAtBeginning, boolean contextIsJson,
        Properties prop, String fileWriteJson, Long flushTime,
        boolean contextIsFlumeLog) {

      this.channelProcessor = channelProcessor;
      this.sourceCounter = sourceCounter;
      this.restartThrottle = restartThrottle;
      this.bufferCount = bufferCount;
      this.batchTimeout = batchTimeout;
      this.restart = restart;
      this.logStderr = logStderr;
      this.charset = charset;
      this.filepath=filepath;
      this.logfile=new File(filepath);
      this.tailing=tailing;
      this.readinterval=readinterval;
      this.startAtBeginning=startAtBeginning;
      this.contextIsJson=contextIsJson;
      this.prop = prop;
      this.fileWriteJson=fileWriteJson;
      this.flushTime=flushTime;
      this.contextIsFlumeLog=contextIsFlumeLog;
    }

    private final ChannelProcessor channelProcessor;
    private final SourceCounter sourceCounter;
    private volatile boolean restart;
    private final long restartThrottle;
    private final int bufferCount;
    private long batchTimeout;
    private final boolean logStderr;
    private final Charset charset;
    private SystemClock systemClock = new SystemClock();
    private Long lastPushToChannel = systemClock.currentTimeMillis();
    ScheduledExecutorService timedFlushService;
    ScheduledFuture<?> future;
    private String filepath;
    private boolean contextIsJson;
    private Properties prop;
    private long timepoint;
    private String fileWriteJson;
    private Long flushTime;

    /**
     * How long to sleep after reaching the current end of the file
     */
    private long readinterval = 500;

    /**
     * The log file being tailed
     */
    private File logfile;

    /**
     * Whether to read the file from the beginning
     */
    private boolean startAtBeginning = false;

    /**
     * Flag that keeps the tail loop running
     */
    private boolean tailing = false;

    private boolean contextIsFlumeLog=false;

    private static String getDomain(String filePath){
      String[] strs = filePath.split("/");
      // Use the parent directory name as the domain; fall back to the full path
      String domain = (strs.length >= 2) ? strs[strs.length - 2] : null;
      if(domain==null || domain.isEmpty()){
        domain=filePath;
      }
      return domain;
    }
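
    /*
     * Resume flow: on each (re)start the last persisted byte offset for this
     * file is restored from the Properties checkpoint (falling back to the
     * file's current length); while tailing, the current offset is written
     * back at most once per flushTime milliseconds. A file that shrinks below
     * the saved offset is treated as rotated and re-read from offset 0.
     */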

    @Override
    public void run() {
      do {
        logger.info("=run=> flume tail source run start time:"+new Date().toString());
        timepoint=System.currentTimeMillis();
        Long filePointer = null;
        if (this.startAtBeginning) { // read the file from the beginning
          filePointer = 0L;
        } else {
          if (prop != null && prop.containsKey(filepath)) {
            try {
              filePointer = Long.valueOf((String) prop.get(filepath));
              logger.info("=ExecRunnable.run=> filePointer restored from Properties");
            } catch (Exception ex) {
              logger.error("=ExecRunnable.run=>", ex);
              logger.info("=ExecRunnable.run=> could not restore filePointer; will fall back to file size");
            }
          }
          if (filePointer == null) {
            filePointer = this.logfile.length(); // start from the current end of the file
            logger.info("=ExecRunnable.run=> filePointer initialized from file size");
          }
        }

        final List<Event> eventList = new ArrayList<Event>();

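          // A single-threaded scheduler flushes partially filled batches once
          // batchTimeout elapses, so slow files still deliver events promptly.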
        timedFlushService = Executors.newSingleThreadScheduledExecutor(
                new ThreadFactoryBuilder().setNameFormat(
                "timedFlushExecService" +
                Thread.currentThread().getId() + "-%d").build());
        RandomAccessFile randomAccessFile = null;
        try {

          randomAccessFile= new RandomAccessFile(logfile, "r"); // open the log for random-access reads
          future = timedFlushService.scheduleWithFixedDelay(new Runnable() {
            @Override
            public void run() {
              try {
                synchronized (eventList) {
                  if (!eventList.isEmpty() && timeout()) {
                    flushEventBatch(eventList);
                  }
                }
              } catch (Exception e) {
                logger.error("Exception occurred when processing event batch", e);
                if (e instanceof InterruptedException) {
                  Thread.currentThread().interrupt();
                }
              }
            }
          }, batchTimeout, batchTimeout, TimeUnit.MILLISECONDS);

          while (this.tailing) {
            long fileLength = this.logfile.length();
            if (fileLength < filePointer) {
              // The file shrank (e.g. rotation/truncation): reopen and re-read from the start
              randomAccessFile.close();
              randomAccessFile = new RandomAccessFile(logfile, "r");
              filePointer = 0L;
            }
            if (fileLength > filePointer) {
              randomAccessFile.seek(filePointer);
              String line = randomAccessFile.readLine();
              while (line != null) {

                //send to channel
                synchronized (eventList) {
                  sourceCounter.incrementEventReceivedCount();

                  String bodyjson = "";
                  if (!contextIsJson) {
                    HashMap body = new HashMap();
                    body.put("context", line);
                    body.put("filepath", filepath);
                    body.put("contextType", filepath);
                    body.put("created", System.currentTimeMillis());
                    body.put("localHostIp", HostUtils.getLocalHostIp());
                    body.put("localHostName", HostUtils.getLocalHostName());
                    body.put("domain", getDomain(filepath));
                    if(contextIsFlumeLog){
                      body = parseFlumeLog(line, body);
                    }
                    bodyjson = JSON.toString(body);
                    // Keep only the JSON object if the serializer emitted a prefix before '{'
                    int braceIndex = bodyjson.indexOf("{");
                    if (braceIndex > 0) {
                      bodyjson = bodyjson.substring(braceIndex);
                    }
                  } else {
                    bodyjson = line;
                  }

                  Event oneEvent = EventBuilder.withBody(bodyjson.getBytes(charset));
                  eventList.add(oneEvent);
                  if (eventList.size() >= bufferCount || timeout()) {
                    flushEventBatch(eventList);
                  }
                }

                //read the next line
                line = randomAccessFile.readLine();
                try {
                  Long nowfilePointer = randomAccessFile.getFilePointer();
                  if (!nowfilePointer.equals(filePointer)) {
                    filePointer = nowfilePointer;
                    // Persist the offset at most once per flushTime milliseconds
                    if (System.currentTimeMillis() - timepoint > flushTime) {
                      timepoint = System.currentTimeMillis();
                      prop.setProperty(filepath, filePointer.toString());
                      FileOutputStream fos = new FileOutputStream(fileWriteJson);
                      prop.store(fos, "Update '" + filepath + "' value");
                      fos.close();
                    }
                  }
                }catch(Exception ex){
                  logger.error("=ExecRunnable.run=> failed to persist filePointer", ex);
                }
              }

            }
            Thread.sleep(this.readinterval);
          }

          synchronized (eventList) {
              if(!eventList.isEmpty()) {
                flushEventBatch(eventList);
              }
          }

        } catch (Exception e) {
          logger.error("Failed while running filpath: " + filepath, e);
          if(e instanceof InterruptedException) {
            Thread.currentThread().interrupt();
          }
        } finally {

          if(randomAccessFile!=null){
            try {
              randomAccessFile.close();
            } catch (IOException ex) {
              logger.error("Failed to close reader for ExecTail source", ex);
            }
          }

        }
        logger.info("=run=> flume tail source run restart:"+restart);
        if(restart) {
          logger.info("=run=> flume tail source run restart time:"+new Date().toString());
          logger.info("Restarting in {}ms", restartThrottle);
          try {
            Thread.sleep(restartThrottle);
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        } else {
          logger.info("filepath [" + filepath + "] exited with restart[" + restart+"]");
        }
      } while(restart);
    }

    private void flushEventBatch(List<Event> eventList){
      channelProcessor.processEventBatch(eventList);
      sourceCounter.addToEventAcceptedCount(eventList.size());
      eventList.clear();
      lastPushToChannel = systemClock.currentTimeMillis();
    }

    private HashMap parseFlumeLog(String log,HashMap logMap){
      String[] strLogs = log.split("\\|");
      // Guard against malformed lines with fewer than six '|'-separated fields
      if (strLogs.length >= 6) {
        logMap.put("className",strLogs[0]);
        logMap.put("methodName",strLogs[1]);
        logMap.put("level",strLogs[2]);
        logMap.put("treeId",strLogs[3]);
        logMap.put("requestId",strLogs[4]);
        logMap.put("transactionId",strLogs[5]);
      }
      return logMap;
    }

    private boolean timeout(){
      return (systemClock.currentTimeMillis() - lastPushToChannel) >= batchTimeout;
    }

    // Retained from the stock ExecSource; not used by this tailing source.
    private static String[] formulateShellCommand(String shell, String command) {
      String[] shellArgs = shell.split("\\s+");
      String[] result = new String[shellArgs.length + 1];
      System.arraycopy(shellArgs, 0, result, 0, shellArgs.length);
      result[shellArgs.length] = command;
      return result;
    }

    public int kill() {
      logger.info("=kill=> flume tail source kill start time:"+new Date().toString());
      this.tailing=false;
        synchronized (this.getClass()) {
          try {
            // Stop the Thread that flushes periodically
            if (future != null) {
              future.cancel(true);
            }

            if (timedFlushService != null) {
              timedFlushService.shutdown();
              while (!timedFlushService.isTerminated()) {
                try {
                  timedFlushService.awaitTermination(500, TimeUnit.MILLISECONDS);
                } catch (InterruptedException e) {
                  logger.debug("Interrupted while waiting for ExecTail executor service "
                          + "to stop. Just exiting.");
                  Thread.currentThread().interrupt();
                }
              }
            }
            logger.info("=kill=> flume tail source kill end time:" + new Date().toString());
            return Integer.MIN_VALUE;
          } catch (Exception ex) {
            logger.error("=kill=>", ex);
            Thread.currentThread().interrupt();
          }
        }
      logger.info("=kill=> flume tail source kill end time:"+new Date().toString());
      return Integer.MIN_VALUE / 2;
    }
    public void setRestart(boolean restart) {
      this.restart = restart;
    }
  }
  // Retained from the stock ExecSource: this source never launches an external
  // process, so the stderr reader is unused here.
  private static class StderrReader extends Thread {
    private BufferedReader input;
    private boolean logStderr;

    protected StderrReader(BufferedReader input, boolean logStderr) {
      this.input = input;
      this.logStderr = logStderr;
    }

    @Override
    public void run() {
      try {
        int i = 0;
        String line = null;
        while((line = input.readLine()) != null) {
          if(logStderr) {
            // There is no need to read 'line' with a charset
            // as we do not need to propagate it.
            // It is in UTF-16 and would be printed in UTF-8 format.
            logger.info("StderrLogger[{}] = '{}'", ++i, line);
          }
        }
      } catch (IOException e) {
        logger.info("StderrLogger exiting", e);
      } finally {
        try {
          if(input != null) {
            input.close();
          }
        } catch (IOException ex) {
          logger.error("Failed to close stderr reader for ExecTail source", ex);
        }
      }
    }
  }
}
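
For reference, here is a minimal agent configuration that wires the source up end to end. It is a sketch that expands the demo from the class Javadoc; the memory channel and logger sink (and their capacities) are illustrative assumptions, not part of the original setup:

demo.sources = s1
demo.channels = c1
demo.sinks = k1

demo.sources.s1.type = org.apache.flume.source.ExecTailSource
demo.sources.s1.filepath = /export/home/tomcat/logs/auth.el.net/
demo.sources.s1.filenameRegExp = (.log{1})$
demo.sources.s1.tailing = true
demo.sources.s1.readinterval = 300
demo.sources.s1.startAtBeginning = false
demo.sources.s1.restart = true
demo.sources.s1.channels = c1

# Illustrative channel/sink, assumed for this sketch
demo.channels.c1.type = memory
demo.channels.c1.capacity = 10000
demo.sinks.k1.type = logger
demo.sinks.k1.channel = c1

Note that one worker thread is created per matched file, so a directory with many files means an equally large thread pool; keep filenameRegExp tight.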

Summary:

This is a first-cut implementation. By letting multiple threads cooperate over file I/O, it balances throughput against real-time delivery and gives us a simple version of checkpoint resume; a feature that once seemed out of reach has now been partly conquered. From our discussions, work still remains on result collection, deduplication, correctness verification, encryption, and so on. Weighing the overall cost, this simple version will do for now. We have contributed another significant capability to Flume, and we hope everyone will join us in making Flume more complete!
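
To make the resume behavior concrete: the checkpoint file (the path configured as fileWriteJson) is a plain java.util.Properties file mapping each tailed file's absolute path to its last persisted byte offset. The entry below is a hypothetical example, not real output:

#Update '/export/home/tomcat/logs/auth.el.net/access.log' value
#Tue May 03 12:00:00 CST 2016
/export/home/tomcat/logs/auth.el.net/access.log=1048576

On restart, run() reads this offset back via prop.get(filepath) and seeks past everything already shipped. Because offsets are flushed at most once per flushTime milliseconds, a crash can replay up to that window of lines (duplicates rather than loss), which is exactly why deduplication is listed above as remaining work.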
