The Advantages of Message Queues

Top 10 Uses For A Message Queue

[Image: Geese love queues. (Image by D.Hilgart)]

We’ve
been working with, building, and evangelising message queues for the
last year, and it’s no secret that we think they’re awesome. We believe
message queues are a vital component of any architecture or application,
and here are ten reasons why:

  1. Decoupling
    It’s
    extremely difficult to predict, at the start of a project, what the
    future needs of the project will be. By introducing a layer in between
    processes, message queues create an implicit, data-based interface that
    both processes implement. This allows you to extend and modify these
    processes independently, by simply ensuring they adhere to the same
    interface requirements.
  2. Redundancy
    Sometimes
    processes fail when processing data. Unless that data is persisted,
    it’s lost forever. Queues mitigate this by persisting data until it has
    been fully processed. The put-get-delete paradigm, which many message
    queues use, requires a process to explicitly indicate that it has
    finished processing a message before the message is removed from the
    queue, ensuring your data is kept safe until you’re done with it.
  3. Scalability
    Because
    message queues decouple your processes, it’s easy to scale up the rate
    with which messages are added to the queue or processed; simply add
    another process. No code needs to be changed, no configurations need to
    be tweaked. Scaling is as simple as adding more power.
  4. Elasticity & Spikability
    When
    your application hits the front page of Hacker News, you’re going to
    see unusual levels of traffic. Your application needs to be able to keep
    functioning with this increased load, but the traffic is an anomaly, not
    the standard; it’s wasteful to have enough resources on standby to
    handle these spikes. Message queues will allow beleaguered components to
    struggle through the increased load, instead of getting overloaded with
    requests and failing completely. Check out our Spikability blog post for more information about this.
  5. Resiliency
    When
    part of your architecture fails, it doesn’t need to take the entire
    system down with it. Message queues decouple processes, so if a process
    that is processing messages from the queue fails, messages can still be
    added to the queue to be processed when the system recovers. This
    ability to accept requests that will be retried or processed at a later
    date is often the difference between an inconvenienced customer and a
    frustrated customer.
  6. Delivery Guarantees
    The
    redundancy provided by message queues guarantees that a message will be
    processed eventually, so long as a process is reading the queue. On top
    of that, IronMQ provides an only-delivered-once guarantee. No matter
    how many processes are pulling data from the queue, each message will
    only be processed a single time. This is made possible because
    retrieving a message "reserves" that message, temporarily removing it
    from the queue. Unless the client specifically states that it's finished
    with that message, the message will be placed back on the queue to be
    processed after a configurable amount of time (a minimal sketch of this
    reserve-then-delete flow appears after this list).
  7. Ordering Guarantees
    In
    a lot of situations, the order in which data is processed is
    important. Message queues are inherently ordered, and capable of
    providing guarantees that data will be processed in a specific order.
    IronMQ guarantees that messages will be processed using FIFO (first in,
    first out), so the order in which messages are placed on a queue is the
    order in which they'll be retrieved from it.
  8. Buffering
    In
    any non-trivial system, there are going to be components that require
    different processing times. For example, it takes less time to upload an
    image than it does to apply a filter to it. Message queues help these
    tasks operate at peak efficiency by offering a buffer layer: the process
    writing to the queue can write as fast as it’s able to, instead of
    being constrained by the readiness of the process reading from the
    queue. This buffer helps control and optimise the speed with which data
    flows through your system.
  9. Understanding Data Flow
    In
    a distributed system, getting an overall sense of how long user actions
    take to complete and why is a huge problem. Message queues, through the
    rate at which they are processed, make it easy to identify
    under-performing processes or areas where the data flow is not optimal.
  10. Asynchronous Communication
    A
    lot of times, you don’t want to or need to process a message
    immediately. Message queues enable asynchronous processing, which allows
    you to put a message on the queue without processing it immediately.
    Queue up as many messages as you like, then process them at your
    leisure (a small producer/consumer sketch of this pattern also follows
    this list).
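
To make the put-get-delete and reservation ideas from points 2 and 6 concrete, here is a minimal, self-contained sketch in Python. It is not IronMQ's actual client API; the class and method names are invented for illustration. The key property is that a reserved message which is never explicitly deleted becomes visible again once its reservation expires, so a crashed worker cannot silently lose data.

import time
import threading
import uuid


class ReservableQueue:
    """Minimal in-memory sketch of the put / reserve (get) / delete paradigm.

    A reserved message is hidden from other consumers; if it is not deleted
    before the reservation expires, it becomes visible again and will be
    re-delivered. (Illustrative only; not a real message-queue client API.)
    """

    def __init__(self, reservation_seconds=30):
        self._lock = threading.Lock()
        self._messages = []              # kept in FIFO order
        self._reservation = reservation_seconds

    def put(self, body):
        with self._lock:
            self._messages.append({
                "id": uuid.uuid4().hex,
                "body": body,
                "reserved_until": 0.0,   # 0 means "visible now"
            })

    def reserve(self):
        """Return the oldest visible message and hide it temporarily."""
        now = time.time()
        with self._lock:
            for msg in self._messages:
                if msg["reserved_until"] <= now:
                    msg["reserved_until"] = now + self._reservation
                    return msg
        return None

    def delete(self, message_id):
        """Explicitly acknowledge a message; only then is it gone for good."""
        with self._lock:
            self._messages = [m for m in self._messages if m["id"] != message_id]


def consume(q):
    """Typical worker loop: reserve, process, delete only on success."""
    msg = q.reserve()
    if msg is None:
        return
    try:
        print("processing", msg["body"])
        # ... do the real work here ...
    except Exception:
        # Do not delete: the reservation will lapse and the message
        # will be redelivered, possibly to another worker.
        return
    q.delete(msg["id"])


if __name__ == "__main__":
    q = ReservableQueue(reservation_seconds=5)
    q.put("resize image 42")
    q.put("send welcome email")
    consume(q)
    consume(q)

In a real deployment the queue would live in a separate service and the workers would be separate processes or machines, but the reserve-then-delete contract stays the same.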
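
Points 8 and 10 boil down to the same pattern: a fast producer hands work to a queue and returns immediately, while a slower consumer drains the queue at its own pace. The sketch below uses Python's standard-library queue.Queue and a worker thread as a stand-in for a real message queue between processes; the upload and filter task names and timings are made up.

import queue
import threading
import time

# The producer returns as soon as it has enqueued the work; the slow
# filtering step runs asynchronously at whatever rate the worker sustains.
task_queue = queue.Queue()


def handle_upload(image_name):
    """Fast path: accept the upload and enqueue the slow work."""
    print(f"uploaded {image_name}")
    task_queue.put(image_name)          # returns immediately


def filter_worker():
    """Slow path: drain the queue at the consumer's own pace."""
    while True:
        image_name = task_queue.get()
        if image_name is None:          # sentinel: shut down
            break
        time.sleep(0.5)                 # pretend applying a filter is slow
        print(f"filtered {image_name}")
        task_queue.task_done()


worker = threading.Thread(target=filter_worker, daemon=True)
worker.start()

for i in range(5):
    handle_upload(f"photo_{i}.jpg")     # a burst of uploads is absorbed by the queue

task_queue.join()                       # wait until every queued task is processed
task_queue.put(None)                    # stop the worker
worker.join()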

We
believe these ten reasons make queues the best form of communication
between processes or applications. We’ve spent a year building and
learning from IronMQ, and our customers are doing amazing things with
message queues. Queues are the key to the powerful, distributed
applications that can leverage all the power that the cloud has to
offer.


If you'd like to get started with an efficient, reliable, and hosted message queue today, check out IronMQ. If you’d like to connect with our engineers about how queues could fit into your application, they’re always available at get.iron.io/chat.
