Master Slave SQL

http://turbogears.readthedocs.io/en/latest/cookbook/master-slave.html

SQLAlchemy Master Slave Load Balancing

Since version 2.2 TurboGears has basic support for Master/Slave load balancing
and provides a set of utilities to use it.

TurboGears permits you to declare a master server and any number of slave servers: all the
writes will automatically be redirected to the master node, while the other calls will
be dispatched randomly to the slave nodes.

All the queries executed outside of TurboGears controllers will run only on the
master node; these include the queries performed by the authentication stack to
initially look up an already logged in user, their groups and permissions.

Enabling Master Slave Balancing

To enable Master Slave load balancing you just need to edit your model/__init__.py, making the sessionmaker use the TurboGears BalancedSession:

from sqlalchemy.orm import sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension
from tg.configuration.sqla.balanced_session import BalancedSession

maker = sessionmaker(autoflush=True, autocommit=False,
                     class_=BalancedSession,
                     extension=ZopeTransactionExtension())

Doing this by itself will suffice to make load balancing work, but as long as
only the standard database configuration is available the BalancedSession
will just redirect all the queries to the only available server.

Configuring Balanced Nodes

To let load balancing work we must specify at least a master and one slave server
inside our application configuration. The master server can be specified
using the sqlalchemy.master set of options, while any number of slaves
can be configured using the sqlalchemy.slaves options:

sqlalchemy.master.url = mysql://username:password@masterhost:port/databasename
sqlalchemy.master.pool_recycle = 3600

sqlalchemy.slaves.slave1.url = mysql://username:password@slavehost:port/databasename
sqlalchemy.slaves.slave1.pool_recycle = 3600

The master node can also be configured as a slave; this is usually the
case when we want the master to also handle some read queries.
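
A minimal sketch of what this could look like, assuming the balancer picks its read nodes from the sqlalchemy.slaves.* entries, is to list the master's URL as one more slave (the masterisslave name is just an illustrative label):

sqlalchemy.master.url = mysql://username:password@masterhost:port/databasename
sqlalchemy.slaves.masterisslave.url = mysql://username:password@masterhost:port/databasename
sqlalchemy.slaves.slave1.url = mysql://username:password@slavehost:port/databasename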

Driving the balancer

TurboGears provides a set of utilities to let you change the default behavior
of the load balancer. These include the @with_engine(engine_name) decorator
and the DBSession().using_engine(engine_name) context manager.

The with_engine decorator

The with_engine decorator permits you to force a controller method to
run on a specific node. It is a great tool for ensuring that some
actions take place on the master node, like controllers that edit
content.

from tg import with_engine

@expose('myproj.templates.about')
@with_engine('master')
def about(self):
    DBSession.query(model.User).all()
    return dict(page='about')

The previous query will be executed on the master node; if the @with_engine decorator is removed it will get executed on a random slave.

The with_engine decorator can also be used to force TurboGears
to use the master node when some parameters are passed by URL:

@expose('myproj.templates.index')
@with_engine(master_params=['m'])
def index(self):
    DBSession.query(model.User).all()
    return dict(page='index')

In this case calling http://localhost:8080/index will result in queries
performed on a slave node, while calling http://localhost:8080/index?m=1 will
force the queries to be executed on the master node.

Note that the m=1 parameter can actually have any value; it just
has to be there. This is especially useful when redirecting, after an action
that just created a new item, to a page that has to show the new item. Using
a parameter specified in master_params we can force TurboGears to fetch
the items from the master node, avoiding odd results due to data propagation
delay.
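
As an illustrative sketch of this pattern (the create controller, model.Item and the name parameter are hypothetical names, not part of the original example), the creating action could redirect with the master_params parameter appended to the URL:

from tg import expose, redirect, with_engine

@expose()
@with_engine('master')
def create(self, name):
    # the write is performed on the master node
    DBSession.add(model.Item(name=name))
    return redirect('/index?m=1')

@expose('myproj.templates.index')
@with_engine(master_params=['m'])
def index(self):
    # ?m=1 forces the read on the master node, so the freshly
    # created item is visible even before it reaches the slaves
    items = DBSession.query(model.Item).all()
    return dict(page='index', items=items)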

Keeping master_params around

By default parameters specified in with_engine master_params will be
popped from the controller params. This is to avoid messing with validators
or controller code that doesn’t expect the parameter to exist.

If the controller actually needs to access the parameter, a dictionary can be
passed to @with_engine instead of a list. The dictionary keys will be
the parameter names, while the values will state whether to pop the parameter
from the request or not.

@expose('myproj.templates.index')
@with_engine(master_params={'m': False})
def index(self, m=None):
    DBSession.query(model.User).all()
    return dict(page='index', m=m)

Forcing Single Queries on a node

Single queries can be forced to execute on a specific node using the
using_engine method of the BalancedSession. This method
returns a context manager: as long as queries are executed inside this
context they run on the constrained engine:

with DBSession().using_engine('master'):
    DBSession.query(model.User).all()
    DBSession.query(model.Permission).all()

DBSession.query(model.Group).all()

In the previous example the Users and the Permissions will be
fetched from the master node, while the Groups will be fetched
from a random slave node.

Debugging Balancing

Setting the root logger of your application to DEBUG will let
you see which node has been chosen by the BalancedSession to perform each query.
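
For example, in a quickstarted application this is usually done by raising the root logger level in the .ini configuration file; a minimal sketch, assuming the standard paste-style logging sections are already present:

[logger_root]
level = DEBUG
handlers = console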
