ZeroMQ with producer-consumer


First, make sure you know what you need!

Let's look at the diagram below:

[diagram: step01]

Suppose your data centre sends a lot of data to you over a socket ...

Let's look at the diagram again:

[diagram: step02]

Now you should see how ZeroMQ can be used to deal with that data ...

We treat the data source as if it were a client ... (actually, it is a classic server/client setup).

So the code can look like this ...

#!/usr/bin/python
# Data source: a REQ socket that sends requests to the broker's frontend.

import zmq
import time

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5559")

for request in range(1, 11):
    socket.send(b"Hello:-->%d" % request)
    message = socket.recv()
    print("Received reply %s [%s]" % (request, message))
    time.sleep(1)
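
A side note: in practice the data source usually ships structured records rather than "Hello" byte strings. Below is a minimal sketch of that idea, assuming the payload is JSON-serialisable; the record fields here are made up purely for illustration, and the worker on the other end would then have to use recv_json()/send_json() as well.

#!/usr/bin/python
# Hypothetical variant of the data source that sends JSON records instead of byte strings.

import time
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5559")      # same broker frontend as above

for seq in range(1, 11):
    record = {"seq": seq, "ts": time.time(), "payload": "sensor-reading"}  # illustrative fields
    socket.send_json(record)                # pyzmq serialises the dict to JSON for us
    reply = socket.recv_json()              # assumes the worker replies with send_json()
    print("Reply for %d: %r" % (seq, reply))
    time.sleep(1)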

Now you have the data source ... but where is the ZeroMQ part?

Good! Let's create it. Because it works like a broker, we name it "broker.py":

#!/usr/bin/python
# broker.py -- forwards requests from clients (frontend) to workers (backend) and back.

import zmq

context = zmq.Context()

frontend = context.socket(zmq.ROUTER)   # clients connect here
backend = context.socket(zmq.DEALER)    # workers connect here
frontend.bind("tcp://*:5559")
backend.bind("tcp://*:5560")

poller = zmq.Poller()
poller.register(frontend, zmq.POLLIN)
poller.register(backend, zmq.POLLIN)

while True:
    socks = dict(poller.poll())

    if socks.get(frontend) == zmq.POLLIN:
        message = frontend.recv_multipart()
        backend.send_multipart(message)

    if socks.get(backend) == zmq.POLLIN:
        message = backend.recv_multipart()
        frontend.send_multipart(message)
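
By the way, if you never need to inspect or modify the messages as they pass through, pyzmq ships a built-in helper that runs this exact forwarding loop for you: zmq.proxy(). A minimal sketch of the shorter broker, using the same two ports as above:

#!/usr/bin/python
# Shorter broker: zmq.proxy() runs the ROUTER<->DEALER forwarding loop internally.

import zmq

context = zmq.Context()
frontend = context.socket(zmq.ROUTER)
backend = context.socket(zmq.DEALER)
frontend.bind("tcp://*:5559")
backend.bind("tcp://*:5560")

zmq.proxy(frontend, backend)   # blocks forever, shuttling messages in both directions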

</step04> we have a broker ,so He/She looking for someone work for him/her ....

Here we will hire some workers ... (actually, we write them ourselves).

How do we make one? Let's see the code below:

#!/usr/bin/python
# worker.py -- a REP socket that connects to the broker's backend and answers requests.

import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.connect("tcp://localhost:5560")

while True:
    message = socket.recv()
    print("Received request: %s" % message)
    socket.send(b"World")
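
One nice property of this setup: the broker's DEALER socket fair-queues among all connected workers, so you can start as many worker processes as you like and the requests get spread across them. A minimal sketch of launching a few workers in parallel, assuming the same worker loop as above (the function name run_worker and the worker count are made up here for illustration):

#!/usr/bin/python
# Launch several workers; the broker spreads requests across all of them.

from multiprocessing import Process
import zmq

def run_worker(worker_id):
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.connect("tcp://localhost:5560")   # broker backend, as in worker.py
    while True:
        message = socket.recv()
        print("Worker %d got: %s" % (worker_id, message))
        socket.send(b"World")

if __name__ == "__main__":
    for i in range(3):                       # three workers, pick any number you like
        Process(target=run_worker, args=(i,)).start()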


That's all! Thanks :)

Ref: http://zguide.zeromq.org/page:all

