Akka (12): Distributed Computing: Cluster-Singleton - Moving a Computation Automatically Between Cluster Nodes

Many application scenarios call for a unique instance (only instance) of some type of actor in the system. In a cluster this instance may live on any node, but it must be guaranteed to be unique. Akka's Cluster-Singleton supports this singleton-actor pattern: when the node hosting the instance runs into trouble and has to leave the cluster, an identical actor is automatically constructed on another node and control is handed over to it. Of course, since this involves a newly constructed actor, internal state is lost in the process. Typical uses of a singleton actor include a program interface to something external that supports only one connection, or an aggregator that accumulates internal state from the results produced by many other actors. If the singleton needs to carry internal state, consider implementing it as a PersistentActor so that the state is recovered automatically. That makes Cluster-Singleton a very practical pattern, usable in many situations.

Precisely because of its uniqueness, the Cluster-Singleton pattern carries some risks that deserve special attention: the single instance can easily become overloaded, it cannot be guaranteed to be continuously online (there is a gap while it is handed over to another node), and message delivery to it is not guaranteed. Users must add handling for these concerns in their own code.

With that, let's design an example to get to know Cluster-Singleton. First, the SingletonActor's behavior:

class SingletonActor extends PersistentActor with ActorLogging {
  import SingletonActor._
  val cluster = Cluster(context.system)

  var freeHoles = 0
  var freeTrees = 0
  var ttlMatches = 0

  override def persistenceId = self.path.parent.name + "-" + self.path.name

  def updateState(evt: Event): Unit = evt match {
    case AddHole =>
      if (freeTrees > 0) {
        ttlMatches += 1
        freeTrees -= 1
      } else freeHoles += 1
    case AddTree =>
      if (freeHoles > 0) {
        ttlMatches += 1
        freeHoles -= 1
      } else freeTrees += 1
  }

  override def receiveRecover: Receive = {
    case evt: Event => updateState(evt)
    case SnapshotOffer(_,ss: State) =>
      freeHoles = ss.nHoles
      freeTrees = ss.nTrees
      ttlMatches = ss.nMatches
  }

  override def receiveCommand: Receive = {
    case Dig =>
      persist(AddHole){evt =>
        updateState(evt)
        sender() ! AckDig   //ack only after the event is safely persisted
      }
      //persist is asynchronous, so this log may trail the state by one event
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")

    case Plant =>
      persist(AddTree) {evt =>
        updateState(evt)
        sender() ! AckPlant   //ack only after the event is safely persisted
      }
      //persist is asynchronous, so this log may trail the state by one event
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")

    case Disconnect =>  //make this node leave the cluster; expect hand-over to another node
      log.info(s"${cluster.selfAddress} is leaving cluster ...")
      cluster.leave(cluster.selfAddress)
    case CleanUp =>
      //clean up any resources, then stop
      self ! PoisonPill
  }

}

This SingletonActor is a special kind of actor: it extends PersistentActor, so it must implement PersistentActor's abstract members. It maintains several pieces of internal state, the running totals freeHoles, freeTrees and ttlMatches. The actor simulates a tree-planting scenario: a Dig command produces an AddHole event, and applying that event updates the state; a Plant command likewise produces an AddTree event. Because the Cluster-Singleton pattern cannot guarantee message delivery, the actor replies with AckDig or AckPlant once the event is persisted, so that the sender can resend a message that was never acknowledged. We log cluster.selfAddress to observe which cluster node the singleton is running on at any moment.
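What the resend might look like on the sender side is left open above, so here is a minimal sketch. It is not part of the demo code: ReliableDigger, retryDig, the retry count and the intervals are illustrative assumptions. It simply asks the singleton through the proxy and schedules a resend when no AckDig arrives before the timeout:

import akka.actor._
import akka.pattern.ask
import akka.util.Timeout
import clustersingleton.sa.SingletonActor
import scala.concurrent.duration._
import scala.util.Success

//hypothetical helper, not part of the original demo
object ReliableDigger {
  def retryDig(singletonProxy: ActorRef, retries: Int)(implicit system: ActorSystem): Unit = {
    import system.dispatcher
    implicit val timeout = Timeout(3.seconds)
    (singletonProxy ? SingletonActor.Dig).onComplete {
      case Success(SingletonActor.AckDig) => ()  //delivered and persisted
      case _ if retries > 0 =>                   //timed out or wrong reply: resend
        system.scheduler.scheduleOnce(2.seconds)(retryDig(singletonProxy, retries - 1))
      case _ =>
        system.log.warning("Dig was never acknowledged, giving up")
    }
  }
}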

We need to construct and deploy a ClusterSingletonManager on every cluster node that may host the SingletonActor, as follows:

  def create(port: Int) = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
        .withFallback(ConfigFactory.parseString("akka.cluster.roles=[singleton]"))
        .withFallback(ConfigFactory.load())
    val singletonSystem = ActorSystem("SingletonClusterSystem",config)

    startupSharedJournal(singletonSystem, (port == 2551), path =
        ActorPath.fromString("akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/store"))

    val singletonManager = singletonSystem.actorOf(ClusterSingletonManager.props(
      singletonProps = Props[SingletonActor],
      terminationMessage = CleanUp,
      settings = ClusterSingletonManagerSettings(singletonSystem).withRole(Some("singleton"))
    ), name = "singletonManager")

  }

As you can see, ClusterSingletonManager is itself an actor; ClusterSingletonManager.props configures the SingletonActor it manages. Our main goal is to verify that when the node currently hosting the singleton fails and has to leave the cluster, the SingletonActor automatically migrates to another node that is still online. The manager works as follows: it is constructed and deployed on all of the selected cluster nodes, the actual SingletonActor is started on the oldest of those nodes, and when that node becomes unreachable the SingletonActor is re-created on the next-oldest node and control is handed over.
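The hand-over behavior can be tuned through ClusterSingletonManagerSettings. A hedged sketch, reusing singletonSystem from the snippet above; withSingletonName and withHandOverRetryInterval are settings methods in akka-cluster-tools 2.4.x, and the values here are illustrative only:

import akka.cluster.singleton.ClusterSingletonManagerSettings
import scala.concurrent.duration._

val tunedSettings = ClusterSingletonManagerSettings(singletonSystem)
  .withRole("singleton")                  //only nodes with this role host the singleton
  .withSingletonName("singleton")         //name of the child actor under the manager
  .withHandOverRetryInterval(1.second)    //retry interval while hand-over is in progress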

ClusterSingletonProxy, itself also an actor, reaches the SingletonActor by exchanging messages with the ClusterSingletonManager. The proxy dynamically tracks the currently active SingletonActor and hands the user an ActorRef for it. We can invoke the SingletonActor through the proxy with the code below:

object SingletonUser {
  def create = {
    val config = ConfigFactory.parseString("akka.cluster.roles=[frontend]")
      .withFallback(ConfigFactory.load())
    val suSystem = ActorSystem("SingletonClusterSystem",config)

    val singletonProxy = suSystem.actorOf(ClusterSingletonProxy.props(
      singletonManagerPath = "/user/singletonManager",
      settings = ClusterSingletonProxySettings(suSystem).withRole(None)
    ), name= "singletonUser")

    import suSystem.dispatcher
    //send a Dig message to the SingletonActor through the proxy every 3 seconds
    suSystem.scheduler.schedule(0.seconds,3.seconds,singletonProxy,SingletonActor.Dig)

    //send a Plant message to the SingletonActor through the proxy every 2 seconds
    suSystem.scheduler.schedule(1.seconds,2.seconds,singletonProxy,SingletonActor.Plant)

    //tell the node hosting the singleton to leave the cluster every 15 seconds
    suSystem.scheduler.schedule(10.seconds,15.seconds,singletonProxy,SingletonActor.Disconnect)
  }

}

At staggered intervals we send Dig and Plant messages to the SingletonActor through the ClusterSingletonProxy, and every 15 seconds we send a Disconnect message telling the node that currently hosts the singleton to leave the cluster.
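During a hand-over there is briefly no singleton instance to deliver to; the proxy buffers incoming messages in the meantime and periodically re-identifies the new singleton. A hedged sketch of how those two knobs could be tuned, reusing suSystem from above; withBufferSize and withSingletonIdentificationInterval are ClusterSingletonProxySettings methods in akka-cluster-tools 2.4.x, and the values are illustrative only:

import akka.cluster.singleton.{ClusterSingletonProxy, ClusterSingletonProxySettings}
import scala.concurrent.duration._

val tunedProxy = suSystem.actorOf(ClusterSingletonProxy.props(
  singletonManagerPath = "/user/singletonManager",
  settings = ClusterSingletonProxySettings(suSystem)
    .withRole(None)                                 //any node may run the proxy
    .withBufferSize(1000)                           //messages kept while no singleton is known
    .withSingletonIdentificationInterval(1.second)  //how often the proxy re-identifies the singleton
), name = "tunedSingletonUser")

We then run the whole demo with the code below: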

package clustersingleton.demo

import clustersingleton.sa.SingletonActor
import clustersingleton.frontend.SingletonUser

object ClusterSingletonDemo extends App {

  SingletonActor.create(2551)    //seed-node

  SingletonActor.create(0)   //ClusterSingletonManager node
  SingletonActor.create(0)
  SingletonActor.create(0)
  SingletonActor.create(0)

  SingletonUser.create     //ClusterSingletonProxy node

}

Running it produces output like this:

[INFO] [07/09/2017 20:17:28.210] [main] [akka.remote.Remoting] Starting remoting
[INFO] [07/09/2017 20:17:28.334] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:2551]
[INFO] [07/09/2017 20:17:28.489] [main] [akka.remote.Remoting] Starting remoting
[INFO] [07/09/2017 20:17:28.493] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55839]
[INFO] [07/09/2017 20:17:28.514] [main] [akka.remote.Remoting] Starting remoting
[INFO] [07/09/2017 20:17:28.528] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55840]
[INFO] [07/09/2017 20:17:28.566] [main] [akka.remote.Remoting] Starting remoting
[INFO] [07/09/2017 20:17:28.571] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55841]
[INFO] [07/09/2017 20:17:28.595] [main] [akka.remote.Remoting] Starting remoting
[INFO] [07/09/2017 20:17:28.600] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55842]
[INFO] [07/09/2017 20:17:28.620] [main] [akka.remote.Remoting] Starting remoting
[INFO] [07/09/2017 20:17:28.624] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://SingletonClusterSystem@127.0.0.1:55843]
[INFO] [07/09/2017 20:17:28.794] [SingletonClusterSystem-akka.actor.default-dispatcher-15] [akka.tcp://SingletonClusterSystem@127.0.0.1:55843/user/singletonUser] Singleton identified at [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton]
[INFO] [07/09/2017 20:17:28.817] [SingletonClusterSystem-akka.actor.default-dispatcher-15] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=0,ttlMatches=0
[INFO] [07/09/2017 20:17:29.679] [SingletonClusterSystem-akka.actor.default-dispatcher-14] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=1,freeTrees=0,ttlMatches=0
...
[INFO] [07/09/2017 20:17:38.676] [SingletonClusterSystem-akka.actor.default-dispatcher-3] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] akka.tcp://SingletonClusterSystem@127.0.0.1:2551 is leaving cluster ...
[INFO] [07/09/2017 20:17:39.664] [SingletonClusterSystem-akka.actor.default-dispatcher-3] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=1,ttlMatches=4
[INFO] [07/09/2017 20:17:40.654] [SingletonClusterSystem-akka.actor.default-dispatcher-21] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=2,ttlMatches=4
[INFO] [07/09/2017 20:17:41.664] [SingletonClusterSystem-akka.actor.default-dispatcher-17] [akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:2551:freeHoles=0,freeTrees=1,ttlMatches=5
[INFO] [07/09/2017 20:17:42.518] [SingletonClusterSystem-akka.actor.default-dispatcher-3] [akka.tcp://SingletonClusterSystem@127.0.0.1:55843/user/singletonUser] Singleton identified at [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton]
[INFO] [07/09/2017 20:17:43.653] [SingletonClusterSystem-akka.actor.default-dispatcher-19] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=2,ttlMatches=5
[INFO] [07/09/2017 20:17:43.672] [SingletonClusterSystem-akka.actor.default-dispatcher-15] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=1,ttlMatches=6
[INFO] [07/09/2017 20:17:45.665] [SingletonClusterSystem-akka.actor.default-dispatcher-14] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=2,ttlMatches=6
[INFO] [07/09/2017 20:17:46.654] [SingletonClusterSystem-akka.actor.default-dispatcher-19] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=3,ttlMatches=6
...
[INFO] [07/09/2017 20:17:53.673] [SingletonClusterSystem-akka.actor.default-dispatcher-20] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] akka.tcp://SingletonClusterSystem@127.0.0.1:55839 is leaving cluster ...
[INFO] [07/09/2017 20:17:55.654] [SingletonClusterSystem-akka.actor.default-dispatcher-13] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=4,ttlMatches=9
[INFO] [07/09/2017 20:17:55.664] [SingletonClusterSystem-akka.actor.default-dispatcher-24] [akka.tcp://SingletonClusterSystem@127.0.0.1:55839/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55839:freeHoles=0,freeTrees=3,ttlMatches=10
[INFO] [07/09/2017 20:17:56.646] [SingletonClusterSystem-akka.actor.default-dispatcher-5] [akka.tcp://SingletonClusterSystem@127.0.0.1:55843/user/singletonUser] Singleton identified at [akka.tcp://SingletonClusterSystem@127.0.0.1:55840/user/singletonManager/singleton]
[INFO] [07/09/2017 20:17:57.662] [SingletonClusterSystem-akka.actor.default-dispatcher-17] [akka.tcp://SingletonClusterSystem@127.0.0.1:55840/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55840:freeHoles=0,freeTrees=4,ttlMatches=10
[INFO] [07/09/2017 20:17:58.652] [SingletonClusterSystem-akka.actor.default-dispatcher-23] [akka.tcp://SingletonClusterSystem@127.0.0.1:55840/user/singletonManager/singleton] State on akka.tcp://SingletonClusterSystem@127.0.0.1:55840:freeHoles=0,freeTrees=5,ttlMatches=10

The output shows that as each hosting node leaves the cluster, the SingletonActor automatically moves to another cluster node and keeps running; and because it is a PersistentActor, its accumulated state (freeHoles, freeTrees, ttlMatches) is rebuilt from the shared journal after every move.

It is worth stressing once more: that such simple code can implement such a complex clustered, distributed program shows that Akka is a practical programming tool with a very promising future!

Below is the complete source code for this demonstration:

build.sbt

name := "cluster-singleton"

version := "1.0"

scalaVersion := "2.11.8"

resolvers += "Akka Snapshot Repository" at "http://repo.akka.io/snapshots/"

val akkaversion = "2.4.8"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % akkaversion,
  "com.typesafe.akka" %% "akka-remote" % akkaversion,
  "com.typesafe.akka" %% "akka-cluster" % akkaversion,
  "com.typesafe.akka" %% "akka-cluster-tools" % akkaversion,
  "com.typesafe.akka" %% "akka-cluster-sharding" % akkaversion,
  "com.typesafe.akka" %% "akka-persistence" % "2.4.8",
  "com.typesafe.akka" %% "akka-contrib" % akkaversion,
  "org.iq80.leveldb" % "leveldb" % "0.7",
  "org.fusesource.leveldbjni" % "leveldbjni-all" % "1.8")

application.conf

akka.actor.warn-about-java-serializer-usage = off
akka.log-dead-letters-during-shutdown = off
akka.log-dead-letters = off

akka {
  loglevel = INFO
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }

  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://[email protected]:2551"]
    log-info = off
  }

  persistence {
    journal.plugin = "akka.persistence.journal.leveldb-shared"
    journal.leveldb-shared.store {
      # DO NOT USE 'native = off' IN PRODUCTION !!!
      native = off
      dir = "target/shared-journal"
    }
    snapshot-store.plugin = "akka.persistence.snapshot-store.local"
    snapshot-store.local.dir = "target/snapshots"
  }
}

SingletonActor.scala

package clustersingleton.sa

import akka.actor._
import akka.cluster._
import akka.persistence._
import com.typesafe.config.ConfigFactory
import akka.cluster.singleton._
import scala.concurrent.duration._
import akka.persistence.journal.leveldb._
import akka.util.Timeout
import akka.pattern._

object SingletonActor {
  sealed trait Command
  case object Dig extends Command
  case object Plant extends Command
  case object AckDig extends Command    //acknowledge
  case object AckPlant extends Command   //acknowledge

  case object Disconnect extends Command   //force node to leave cluster
  case object CleanUp extends Command      //clean up when actor ends

  sealed trait Event
  case object AddHole extends Event
  case object AddTree extends Event

  case class State(nHoles: Int, nTrees: Int, nMatches: Int)

  def create(port: Int) = {
    val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
        .withFallback(ConfigFactory.parseString("akka.cluster.roles=[singleton]"))
        .withFallback(ConfigFactory.load())
    val singletonSystem = ActorSystem("SingletonClusterSystem",config)

    startupSharedJournal(singletonSystem, (port == 2551), path =
        ActorPath.fromString("akka.tcp://SingletonClusterSystem@127.0.0.1:2551/user/store"))

    val singletonManager = singletonSystem.actorOf(ClusterSingletonManager.props(
      singletonProps = Props[SingletonActor],
      terminationMessage = CleanUp,
      settings = ClusterSingletonManagerSettings(singletonSystem).withRole(Some("singleton"))
    ), name = "singletonManager")

  }

  def startupSharedJournal(system: ActorSystem, startStore: Boolean, path: ActorPath): Unit = {
    // Start the shared journal on one node (don't crash this SPOF)
    // This will not be needed with a distributed journal
    if (startStore)
      system.actorOf(Props[SharedLeveldbStore], "store")
    // register the shared journal
    import system.dispatcher
    implicit val timeout = Timeout(15.seconds)
    val f = (system.actorSelection(path) ? Identify(None))
    f.onSuccess {
      case ActorIdentity(_, Some(ref)) =>
        SharedLeveldbJournal.setStore(ref, system)
      case _ =>
        system.log.error("Shared journal not started at {}", path)
        system.terminate()
    }
    f.onFailure {
      case _ =>
        system.log.error("Lookup of shared journal at {} timed out", path)
        system.terminate()
    }
  }

}

class SingletonActor extends PersistentActor with ActorLogging {
  import SingletonActor._
  val cluster = Cluster(context.system)

  var freeHoles = 0
  var freeTrees = 0
  var ttlMatches = 0

  override def persistenceId = self.path.parent.name + "-" + self.path.name

  def updateState(evt: Event): Unit = evt match {
    case AddHole =>
      if (freeTrees > 0) {
        ttlMatches += 1
        freeTrees -= 1
      } else freeHoles += 1
    case AddTree =>
      if (freeHoles > 0) {
        ttlMatches += 1
        freeHoles -= 1
      } else freeTrees += 1
  }

  override def receiveRecover: Receive = {
    case evt: Event => updateState(evt)
    case SnapshotOffer(_,ss: State) =>
      freeHoles = ss.nHoles
      freeTrees = ss.nTrees
      ttlMatches = ss.nMatches
  }

  override def receiveCommand: Receive = {
    case Dig =>
      persist(AddHole){evt =>
        updateState(evt)
        sender() ! AckDig   //ack only after the event is safely persisted
      }
      //persist is asynchronous, so this log may trail the state by one event
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")

    case Plant =>
      persist(AddTree) {evt =>
        updateState(evt)
        sender() ! AckPlant   //ack only after the event is safely persisted
      }
      //persist is asynchronous, so this log may trail the state by one event
      log.info(s"State on ${cluster.selfAddress}:freeHoles=$freeHoles,freeTrees=$freeTrees,ttlMatches=$ttlMatches")

    case Disconnect =>  //make this node leave the cluster; expect hand-over to another node
      log.info(s"${cluster.selfAddress} is leaving cluster ...")
      cluster.leave(cluster.selfAddress)
    case CleanUp =>
      //clean up any resources, then stop
      self ! PoisonPill
  }

}
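Note that receiveRecover handles SnapshotOffer, yet nothing in the listing ever calls saveSnapshot, so recovery always replays the full event log. A minimal sketch of how snapshots could be added inside SingletonActor; the counter, the interval and the persistWithSnapshot helper are illustrative assumptions, not part of the original code:

  // hypothetical addition inside class SingletonActor
  var eventCount = 0
  val snapshotInterval = 100

  def persistWithSnapshot(evt: Event): Unit =
    persist(evt) { e =>
      updateState(e)
      eventCount += 1
      if (eventCount % snapshotInterval == 0)
        saveSnapshot(State(freeHoles, freeTrees, ttlMatches))  //restored via SnapshotOffer on recovery
    }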

SingletonUser.scala

package clustersingleton.frontend
import akka.actor._
import clustersingleton.sa.SingletonActor
import com.typesafe.config.ConfigFactory
import akka.cluster.singleton._
import scala.concurrent.duration._

object SingletonUser {
  def create = {
    val config = ConfigFactory.parseString("akka.cluster.roles=[frontend]")
      .withFallback(ConfigFactory.load())
    val suSystem = ActorSystem("SingletonClusterSystem",config)

    val singletonProxy = suSystem.actorOf(ClusterSingletonProxy.props(
      singletonManagerPath = "/user/singletonManager",
      settings = ClusterSingletonProxySettings(suSystem).withRole(None)
    ), name= "singletonUser")

    import suSystem.dispatcher
    //send a Dig message to the SingletonActor through the proxy every 3 seconds
    suSystem.scheduler.schedule(0.seconds,3.seconds,singletonProxy,SingletonActor.Dig)

    //send a Plant message to the SingletonActor through the proxy every 2 seconds
    suSystem.scheduler.schedule(1.seconds,2.seconds,singletonProxy,SingletonActor.Plant)

    //tell the node hosting the singleton to leave the cluster every 15 seconds
    suSystem.scheduler.schedule(10.seconds,15.seconds,singletonProxy,SingletonActor.Disconnect)
  }

}

ClusterSingletonDemo.scala

package clustersingleton.demo

import clustersingleton.sa.SingletonActor
import clustersingleton.frontend.SingletonUser

object ClusterSingletonDemo extends App {

  SingletonActor.create(2551)    //seed-node

  SingletonActor.create(0)   //ClusterSingletonManager node
  SingletonActor.create(0)
  SingletonActor.create(0)
  SingletonActor.create(0)

  SingletonUser.create     //ClusterSingletonProxy node

}