Building a Service Mesh with HAProxy and Consul

Reposted from: https://www.haproxy.com/blog/building-a-service-mesh-with-haproxy-and-consul/

HashiCorp added a service mesh feature to Consul, its service-discovery and distributed storage tool. In this post, you’ll see how HAProxy is the perfect fit as a data plane for this architecture.

HAProxy is no stranger to the service mesh scene. Its high performance, low resource usage, and flexible design allow it to be embedded within various types of service-oriented architectures. For example, when Airbnb needed to scale its infrastructure to support a growing number of distributed services, it developed SmartStack. SmartStack is a service mesh solution that relies on instances of HAProxy relaying communication between services. Using HAProxy allowed SmartStack to take advantage of advanced load-balancing algorithms, traffic queuing, connection retries, and built-in health checking.

HAProxy Technologies is working with HashiCorp to bring you a Consul service mesh that utilizes HAProxy as a data plane. This will allow you to deploy the world’s fastest and most widely used software load balancer as a sidecar proxy, enabling secure and reliable communication between all of your services.

In Consul 1.2, HashiCorp released Connect, which is a feature that allows you to turn an existing Consul cluster into a service mesh. If you’re familiar with Consul, you’ll know it for its distributed key/value storage and dynamic service discovery. With the addition of Connect, you can register sidecar proxies that are colocated with each of your services and relay traffic between them, creating a service mesh architecture. Best of all, it’s a pluggable framework that allows you to choose the underlying proxy layer to pair with it.

With Connect, HAProxy can be combined with the Consul service mesh too. This provides you with many advantages when designing your architecture. First, let’s take a step back and cover what a service mesh is and why it benefits you to use one.

Why Use a Service Mesh?

Why would you use a service mesh anyway? It comes down to the challenges of operating distributed systems. When your backend services are distributed across servers, departments, and data centers, executing the steps of a business process often involves plenty of network communication. However, if you’ve ever had the responsibility of managing such a network, L. Peter Deutsch’s Fallacies of Distributed Computing will ring true. The fallacies are:

  • The network is reliable.
  • Latency is zero.
  • Bandwidth is infinite.
  • The network is secure.
  • Topology doesn’t change.
  • There is one administrator.
  • Transport cost is zero.
  • The network is homogeneous.

Obviously, the network is not always reliable. Topology does change. The network is not secure by default. How can you manage these risks? Ultimately, you want to apply mechanisms that address these issues without adding tons of complexity to every service that you own. In essence, each application shouldn’t need to be aware that a network exists at all.

That is why service meshes are becoming so popular. They abstract away the network from the application code. A service mesh is a design pattern where, instead of having services communicate directly to one another or directly to a message bus, requests are first passed to intermediary proxies that relay messages from one service to another.

The benefit is that the proxies can support functions for dealing with an unruly network that all services need, but which are more easily handled outside of the services themselves. For example, a proxy could retry a connection if it fails the first time and secure the communication with TLS encryption. You end up separating network functionality from application functionality.

The network-related problems that we mentioned before are mitigated by capabilities within HAProxy. For example, the solutions include (see the configuration sketch after this list):

  • The network is not reliable = Retry logic for connecting to services
  • Bandwidth is not infinite = Rate limiting to prioritize certain traffic or prevent overuse
  • Topology changes = Consistent endpoints, such as always communicating with localhost, service discovery, and DNS resolution at run time
  • The network is not secure = Truly end-to-end encryption with mutual TLS authentication
  • There is never just one administrator = Authorizing which services are allowed to connect to which other services
  • Transport cost is not zero = Observability of service traffic
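
To make this concrete, here is a sketch of the kind of haproxy.cfg a sidecar handler might render, written the way a handler script would write it. Every path, port, and limit below is an illustrative assumption, not the output of the actual Connect integration:

  # A minimal sketch, assuming a handler script that renders haproxy.cfg.
  # All paths, ports, and limits below are illustrative.
  cat > /etc/haproxy/haproxy.cfg <<'EOF'
  defaults
      mode tcp
      timeout connect 5s
      timeout client 30s
      timeout server 30s
      retries 3    # the network is not reliable: retry failed connection attempts

  frontend service_in
      # the network is not secure: require and verify a client certificate (mutual TLS)
      bind 0.0.0.0:21000 ssl crt /etc/haproxy/service.pem ca-file /etc/haproxy/ca.pem verify required
      # bandwidth is not infinite: reject clients that open connections too quickly
      stick-table type ip size 100k expire 30s store conn_rate(3s)
      tcp-request connection track-sc0 src
      tcp-request connection reject if { sc0_conn_rate gt 100 }
      default_backend local_service

  backend local_service
      # topology changes are hidden: the local service always lives on localhost
      server local 127.0.0.1:8080 check
  EOF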

Let’s see how the pieces fit together.

Control Plane and Data Plane

How does Consul relate to HAProxy? What are the responsibilities of each? Think of the whole thing like a real-time strategy video game. One of my favorites is StarCraft. In this game, you, the player, control an army of workers whose sole mission in life is to await their marching orders from you and then execute them. For example, you could tell them to go mine for minerals or explore a new area of the map and they’ll happily go off and do what’s been asked of them.

In the service mesh analogy, you, the player, represent Consul. Consul gives the marching orders to all of the proxies under its influence. It provides each one with the configuration details it needs: information about where other services can be found, which port to listen on for incoming requests, which TLS certificate to use for encryption, and which upstream services the local service depends on. In service mesh terminology, Consul is the control plane.

HAProxy, the Advanced Proxy

The proxies take on a more drone-like approach to life. You don’t need to configure each instance of HAProxy individually. They are responsible for querying the central Consul registry to get their configuration information on their own. This proxy layer is called the data plane. Of course, they are just as important as your video game workers. The proxy technology that you choose determines the capabilities your service mesh will have. Will it be able to encrypt communication? Will it have logic for reconnecting to a service if the first attempt fails?

HAProxy gives you features that a distributed architecture requires. It is the product of the hard-working open-source community and has become known as the fastest and most widely used software load balancer in the world. You get TLS encryption, connection retry logic, server persistence, rate limiting, authorization between services, and observability all out-of-the-box and in a lightweight process.

The Components

The key pieces of your service mesh will include the following:

  • Your service
  • A Consul agent, running locally, that your service registers with
  • The proxy, which is registered as a sidecar
  • A quorum of Consul agents that are in server mode, responsible for managing the service registry and configuration information

Let’s cover these components in more detail.

Your Service (aka your business logic)

Your service is at the heart of the design. It exposes functionality over a TCP port and, optionally, relies on other distributed services. The goal is to minimize the ways in which you need to change it in order to fit into the service mesh. Ideally, it should continue on as it always has, oblivious to the topology of the outside network. Consul gives you this separation of concerns.

Continuing our video game analogy, let’s say that we’re talking about a service that mines for minerals. However, maybe it needs to talk to the map service to find out where to start working. Ordinarily, you would need to configure it with a known address or DNS name of the map service. If these settings ever changed, your mining service’s configuration would also have to change to point to the new endpoint.

With Consul, service discovery allows you to query a central registry to find out where services live. However, with Connect, it gets even better. A local instance of HAProxy is created next to the service so that it’s listening on localhost. Then, your service always queries it as if it were the remote map service. This is known as a sidecar proxy: Each service gets its own local proxy and, through the relaying of messages, can reach other remote services as if they too were local. Now, you can point your local service’s configuration at localhost and never need to change that endpoint again.
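
For example, outside of the mesh you can ask Consul where a service lives through its DNS interface. This is a sketch assuming Consul's default DNS port (8600) and the hypothetical map service:

  # Resolve the current instances of the hypothetical "map" service
  # via the local Consul agent's DNS interface (default port 8600).
  dig @127.0.0.1 -p 8600 map.service.consul SRV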

Consul Agent

Also local to your service is a Consul agent. An agent is the Consul executable running in regular agent mode, as opposed to server mode. You register the local service with this agent by adding a JSON file to the /etc/consul.d folder. This gets synced to the server-mode agents.
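
For instance, a minimal registration might look like the following sketch; the service name and port are illustrative, and the demo's real registration files are shown later in this post:

  # Sketch: register a hypothetical "mining" service with the local agent.
  cat > /etc/consul.d/mining.json <<'EOF'
  {
    "service": {
      "name": "mining",
      "port": 8080
    }
  }
  EOF

  # Ask the local agent to pick up the new definition
  consul reload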

Think of it like a walkie-talkie to the agents that are running in server mode, which hold the source of truth. The registration gets sent up and saved to the global registry. Then, when HAProxy wants to discover where it can find services on the network, it asks its local Consul agent and that agent pulls the information back down. Afterwards, it gives the answer to your proxy.
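
Under the hood, that lookup is a call to the agent's local HTTP API. A sketch, again using the hypothetical map service and Consul's default HTTP port (8500):

  # Ask the local agent for all healthy instances of the "map" service.
  curl 'http://127.0.0.1:8500/v1/health/service/map?passing'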

Figure: A locally run Consul agent is one of the key components of the service mesh

The same goes for the upstream services you’re calling. They each get their own local Consul agent. They register with it so that others can find them and they also tell Consul about any services that they, in turn, depend on. All of this information is used to configure the proxies so that in the end, all services talk to localhost and all communication becomes proxy-to-proxy.

The HAProxy Sidecar

Next to each service and Consul agent is a sidecar proxy, which is an instance of HAProxy. You don’t configure it directly. Instead, you install it with a specialized handler that queries the Consul agent to know which upstream services your local service depends on. Then, HAProxy sets up listeners on localhost so that your application can talk to the remote endpoints that it needs to, but without needing to know exactly where those endpoints live. In essence, you’re abstracting away the network.

A benefit to routing traffic through a local proxy is that the proxy can enforce fine-grained authorization rules. Consul lets you define intentions, which are rules that govern whether one service can talk to another. At runtime, HAProxy queries the local Consul agent to check if an incoming connection is allowed.
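
That check corresponds to the agent's authorize endpoint. The following sketch shows roughly the question the proxy asks on each incoming connection; the certificate URI and serial are illustrative values that, in practice, are extracted from the client's TLS certificate:

  # "May the service that presented this client certificate talk to redis?"
  curl --request POST --data '{
    "Target": "redis",
    "ClientCertURI": "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc1/svc/www",
    "ClientCertSerial": "04:00:00:00:00:01:15:4b:5a:c3"
  }' http://127.0.0.1:8500/v1/agent/connect/authorize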

To review, your service registers information about itself, and about the upstream services on which it depends, with its local Consul agent. That agent sends that information up to the Consul servers, which maintain the central registry. The local instance of HAProxy then asks the local Consul agent for configuration information and the agent pulls back the data that the Consul servers have compiled.

HAProxy then configures itself, listening for incoming requests to the local service and also for outgoing requests that the service makes to other services. The HAProxy handler continues to check for changes to service registrations and updates itself when needed.
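
Consul's blocking queries make this efficient: if you pass the X-Consul-Index header from a previous response back as the index parameter, the call waits until something changes. Here is a rough sketch of such a watch loop; regenerate_haproxy_config is a hypothetical helper, not part of the demo code:

  # Sketch of a watch loop built on Consul blocking queries.
  INDEX=0
  while true; do
      RESP=$(curl -s -D /tmp/headers.txt \
          "http://127.0.0.1:8500/v1/health/service/redis?passing&index=${INDEX}&wait=30s")
      INDEX=$(grep -i '^x-consul-index' /tmp/headers.txt | awk '{print $2}' | tr -d '\r')
      # Hypothetical helper: rewrite haproxy.cfg from $RESP and reload HAProxy
      regenerate_haproxy_config "$RESP"
  done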

Figure: Each service and Consul agent has a sidecar proxy, an instance of HAProxy

In the end, all service-to-service communication ends up going through proxies. You can see why it’s important to choose a proxy implementation that meets your needs. When it comes to speed and the right mix of features, HAProxy has a lot of benefits.

Server-Mode Agents

You’ve probably got a good idea about how the Consul agents that are running in server mode fit into this architecture. They maintain the service registry, which is the source of truth about where each service endpoint can be found. They also store the settings that each proxy needs to configure itself, such as TLS certificates. The Consul servers host the Consul HTTP API that local agents use to send and receive information about the cluster.

Agents in server mode must elect a leader using a consensus protocol that is based on the Raft algorithm. For that reason, you should dedicate an odd number of nodes, such as three, to participate in the quorum. These agents should reside on separate machines, if possible, so that your cluster has resiliency.
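
For reference, a server-mode agent is just the same consul binary started with the -server flag. This sketch uses illustrative addresses and paths:

  # Start a server-mode agent that waits for three servers before electing a leader.
  consul agent -server -bootstrap-expect=3 \
      -data-dir=/var/consul -node=consul-1 \
      -retry-join=10.0.0.2 -retry-join=10.0.0.3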

The Implementation

Baptiste Assmann presented a solution for integrating HAProxy with Consul at the 2018 HashiConf conference in San Francisco.

During his presentation, he demonstrated using HAProxy as a sidecar that’s configured using information from the local Consul agent. The integration uses a Golang binary as the handler for configuring HAProxy, which will be available in Q1 of 2019.

In the meantime, he has reproduced its behavior using Bash. The HAProxy/Consul GitHub repository uses a Bash script to configure local instances of HAProxy. You can spin up this demo environment using Docker Compose. To get started, download the repository and then run the following commands:

  cd blog/building_service_mesh
  docker-compose up -d consul-server
   
  # Wait for consul-server to bootstrap itself
  sleep 10
   
  # Create ACL token that agents use to connect to consul-server
  docker-compose exec consul-server curl --request PUT \
      --header "X-Consul-Token: mastertoken" \
      --data '{ "ID": "agenttoken", "Name": "Agent Token", "Type": "client", "Rules": "node \"\" { policy = \"write\" } service \"\" { policy = \"write\" }" }' \
      http://localhost:8500/v1/acl/create
  sleep 1
   
  # Create agents
  docker-compose up -d www redis

You will then be able to open a browser and go to http://localhost:8500/ui to see the Consul dashboard. You’ll need to go to the ACL screen first and enter the master token secret: “mastertoken”, which we’ve set in consul-server/consul.d/basic_config.

Figure: The Consul dashboard

Notice how a *-sidecar-proxy service has been generated for the two services we’re creating, redis and www. The www app is a Node.js application that connects to redis via the service mesh. It’s able to connect to Redis on localhost and the connection is routed to the right place.

For this example, each service is hosted inside of a Docker container. Within each container is the service, a local Consul agent, a running instance of HAProxy, and a script called controller.sh that configures HAProxy on the fly. There’s also a Lua file, authorize.lua, that validates the connections between services by checking the client certificate passed with each request. The shell script and Lua file are the same for both services.

They have different start.sh files though, which Docker executes on container startup. The start.sh script installs the service, which is the Node.js application for www and Redis for the redis container, and then registers it with the local Consul agent. Here is the JSON registration for www:

  {
    "service": {
      "name": "${SERVICENAME}",
      "port": 8080,
      "address": "${MYIP}",
      "connect": {
        "sidecar_service": {
          "proxy": {
            "upstreams": [
              {
                "destination_name": "redis",
                "destination_type": "connect",
                "local_bind_address": "127.0.0.1",
                "local_bind_port": 6379
              }
            ],
            "config": {
              "unsecured_bind_port": 21002,
              "local_service_mode": "http"
            }
          }
        }
      }
    },
    "acl_datacenter": "dc1",
    "acl_default_policy": "deny",
    "acl_down_policy": "extend-cache",
    "acl_token": "agenttoken"
  }

The Node.js app listens on port 8080 locally, but is exposed through the service mesh on an automatically assigned port chosen by Consul. There’s a dependency on the upstream Redis service. HAProxy binds it locally to port 6379, but proxies it to the Redis container on a port chosen by Consul. In this way, the Node.js app can access Redis at 127.0.0.1:6379.

Also note that a configuration parameter called unsecured_bind_port allows you to access the app from outside of the service mesh. So, you can go to http://localhost:21002 on the machine where you’re running Docker. Here’s a screenshot of what it looks like:

Figure: Your app can be accessed from outside of the service mesh

In the example code, we’ve enabled Consul ACLs with a default deny rule. Before the www app can connect to Redis, you must add an Intention. Intentions are rules that allow or deny connections between services. You could allow the Node.js app to connect to Redis by adding a new Intention:

Figure: Adding new Intentions allows the Node.js app to connect to Redis
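
If you prefer the command line over the UI, the same Intention can be created with the consul CLI. A sketch, run against the demo's server container with the master token:

  # Allow the www service to connect to redis (run inside the demo environment).
  docker-compose exec consul-server consul intention create \
      -token=mastertoken -allow www redis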

Now, the app will succeed when it tries to read from or write to Redis:

Figure: By adding new Intentions, the app can successfully read from or write to Redis
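
You can reproduce that check by hand. A sketch, assuming redis-cli is available inside the www container; the connection to 127.0.0.1:6379 is relayed through the sidecar proxies to the Redis container:

  # From inside the www container, talk to Redis as if it were local.
  docker-compose exec www redis-cli -h 127.0.0.1 -p 6379 ping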

After you remove the Intention, it will fail again due to the default deny set up by the ACL.

You can also see the HAProxy Stats page for the app by going to http://localhost:1936. This is great for troubleshooting connection problems.

To extend this example, add more containers, patterning them off of the given Dockerfiles. Then, update your start.sh file to install your application and register it with Consul. Last, add the new service to the docker-compose.yml file.

Conclusion

Consul’s Connect feature enables you to transform a Consul cluster into a service mesh. Connect is a pluggable framework that allows you to choose the proxy technology that fits your needs best.

In this blog post, you learned how HAProxy can be selected as the data plane, giving you access to features like TLS encryption, connection retry logic, server persistence, rate limiting, authorization between services, and observability. I hope you’re as excited as we are about the possibilities that this creates! In the coming months, we will be releasing more information about our integration with Consul and the new Golang implementation.
