Setting how many commands the Distribution clean up job deletes per run

By default, the replication job "Distribution clean up: distribution" runs every 10 minutes and deletes 2000 commands per batch. For a distribution database holding 190 million commands, this is far too slow. To raise the batch size, modify the procedure [dbo].[sp_MSdelete_publisherdb_trans]; in my case I set it to delete 20000 commands per batch.
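Before raising the batch size, it helps to confirm how large the backlog actually is. The sketch below is my own addition, not from the original post; counting via sys.dm_db_partition_stats is approximate but far faster than COUNT(*) on a table this size:

```sql
USE distribution
GO

-- Approximate row count of pending replicated commands; heap (index_id = 0)
-- or clustered index (index_id = 1) rows cover the whole table.
SELECT SUM(row_count) AS approx_command_count
FROM sys.dm_db_partition_stats
WHERE [object_id] = OBJECT_ID('dbo.MSrepl_commands')
  AND index_id IN (0, 1)
```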

The change itself is simple: in [dbo].[sp_MSdelete_publisherdb_trans], search for 2000 and replace it with 20000. Three places need to be changed:

1. DELETE TOP(20000) MSrepl_commands WITH (PAGLOCK)

    WHILE 1 = 1
    BEGIN
        DELETE TOP(20000) MSrepl_commands WITH (PAGLOCK) from MSrepl_commands with (INDEX(ucMSrepl_commands))
            WHERE publisher_database_id = @publisher_database_id
                AND xact_seqno IN (SELECT DISTINCT snap_xact_seqno
                                    FROM @snapshot_xact_seqno)
            OPTION (MAXDOP 1)

        SELECT @row_count = @@rowcount

        -- Update output parameter
        SELECT @num_commands = @num_commands + @row_count

        IF @row_count < 20000 -- passed the result set.  We're done
            BREAK
    END

2. DELETE TOP(20000) MSrepl_commands WITH (PAGLOCK)

WHILE 1 = 1
    BEGIN
        if @has_immediate_sync = 0
            DELETE TOP(20000) MSrepl_commands WITH (PAGLOCK) from MSrepl_commands with (INDEX(ucMSrepl_commands)) where
                publisher_database_id = @publisher_database_id and
                xact_seqno <= @max_xact_seqno and
                (type & ~@snapshot_bit) not in (@directory_type, @alt_directory_type) and
                (type & ~@replpost_bit) <> @scriptexec_type
                OPTION (MAXDOP 1)
        else
            -- Use nolock hint on subscription table to avoid deadlock
            -- with snapshot agent.
            DELETE TOP(20000) MSrepl_commands WITH (PAGLOCK) from MSrepl_commands with (INDEX(ucMSrepl_commands)) where
                publisher_database_id = @publisher_database_id and
                xact_seqno <= @max_xact_seqno and
                -- do not delete directory, alt directory or script exec commands. they are deleted
                -- above. We have to do this because we use a (nolock) hint and we have to make sure we
                -- don't delete dir commands when the file has not been cleaned up in the code above. It's
                -- ok to delete snap commands that are out of retention and perform lazy delete of dir
                (type & ~@snapshot_bit) not in (@directory_type, @alt_directory_type) and
                (type & ~@replpost_bit) <> @scriptexec_type and
                (
                    -- Select the row if it is older than max retention.
                    xact_seqno <= @max_immediate_sync_seqno or
                    -- Select the snap cmd if it is not for immediate_sync article
                    -- We know the command is for immediate_sync publication if
                    -- the snapshot tran includes articles that have virtual
                    -- subscriptions. (use subscription table to avoid join with
                    -- article and publication table). We skip sync tokens because
                    -- they are never pointed to by subscriptions...
                    (
                        (type & @snapshot_bit) <> 0 and
                        (type & ~@snapshot_bit) not in (@syncinit, @syncdone) and
                        not exists (select * from MSsubscriptions s with (nolock) where
                            s.publisher_database_id = @publisher_database_id and
                            s.article_id = MSrepl_commands.article_id and
                            s.subscriber_id < 0)
                    )
                )
                OPTION (MAXDOP 1)

        select @row_count = @@rowcount
        -- Update output parameter
        select @num_commands = @num_commands + @row_count

        IF @row_count < 20000 -- passed the result set.  We're done
            BREAK
    END

Use the following script to view how commands are distributed over time:

USE distribution
GO

SELECT
    T.[publisher_database_id],
    DATEPART(mm, [entry_time]) 'month',
    DATEPART(dd, [entry_time]) 'day',
    DATEPART(hh, [entry_time]) 'hour',
    COUNT(C.[xact_seqno]) 'count of commands'
FROM [dbo].[MSrepl_transactions] T WITH (NOLOCK)
INNER JOIN [dbo].[MSrepl_commands] C WITH (NOLOCK)
    ON T.[xact_seqno] = C.[xact_seqno]
GROUP BY    T.[publisher_database_id],
            DATEPART(mm, [entry_time]),
            DATEPART(dd, [entry_time]),
            DATEPART(hh, [entry_time])
ORDER BY 1, 2, 3, 4
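If you only need the oldest pending day rather than the full hourly breakdown, a quicker sketch (my own addition, using the same table; NOLOCK keeps the check from blocking the cleanup job):

```sql
USE distribution
GO

-- Oldest transaction still waiting to be cleaned up
SELECT MIN([entry_time]) AS oldest_entry
FROM [dbo].[MSrepl_transactions] WITH (NOLOCK)
```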

Quoted from the article "How to resolve when Distribution Database is growing huge (+25gig)":

Yes, I know, "huge" is relative, but generally if you see the Distribution database growing beyond 25 GB it means the cleanup process is having a hard time deleting replicated transactions.  I'll cover the how and why of the cleanup process later, but for now I wanted to post a technique we've used to purge rows from the Distribution database.  This solution involves modifying the SQL Replication stored procedures to increase the number of rows being deleted per transaction.  If you're uncomfortable making the code change, skip down to step 7).

This first posting covers a "conservative" approach.  Later I'll post steps for a more "aggressive" solution.

1) script msrepl_commands cleanup proc and save original sp code

sp_helptext  sp_MSdelete_publisherdb_trans

2) change from CREATE to ALTER

ALTER PROCEDURE sp_MSdelete_publisherdb_trans

3) change all 3 locations from 2000 to 100000 rows

DELETE TOP(2000) MSrepl_commands . . .

4) script msrepl_transaction cleanup proc and save original sp code

sp_helptext sp_MSdelete_dodelete

5) change from CREATE to ALTER

ALTER PROCEDURE sp_MSdelete_dodelete

6) change both locations from 5000 to 100000 rows

delete TOP(5000) MSrepl_transactions . . .

7) Determine oldest day containing transactions

USE distribution
GO

SELECT
    T.[publisher_database_id],
    DATEPART(mm, [entry_time]) 'month',
    DATEPART(dd, [entry_time]) 'day',
    DATEPART(hh, [entry_time]) 'hour',
    COUNT(C.[xact_seqno]) 'count of commands'
FROM [dbo].[MSrepl_transactions] T WITH (NOLOCK)
INNER JOIN [dbo].[MSrepl_commands] C WITH (NOLOCK)
    ON T.[xact_seqno] = C.[xact_seqno]
GROUP BY    T.[publisher_database_id],
            DATEPART(mm, [entry_time]),
            DATEPART(dd, [entry_time]),
            DATEPART(hh, [entry_time])
ORDER BY 1, 2, 3, 4

8) Execute cleanup via SSMS or a T-SQL job to delete JUST the oldest day (24 hours × 5 days = 120), then continue to reduce the @max_distretention value by a few hours for each run.

EXEC dbo.sp_MSdistribution_cleanup @min_distretention = 0, @max_distretention = 120
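Stepping @max_distretention down by hand gets tedious; the loop below is a sketch of automating it. The 120-hour start, 72-hour floor, and 4-hour decrement are illustrative values, not from the original article; pick bounds that match your own retention policy:

```sql
-- Walk @max_distretention down a few hours per run so each pass
-- deletes a manageable slice of the backlog.
DECLARE @retention int = 120   -- hours: start at the oldest day
WHILE @retention >= 72         -- stop at your real retention window
BEGIN
    EXEC distribution.dbo.sp_MSdistribution_cleanup
        @min_distretention = 0,
        @max_distretention = @retention

    SET @retention = @retention - 4
END
```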

Example output (4 hours to remove 340 million rows):

Removed 3493 replicated transactions consisting of 343877158 statements in 15043 seconds (22859 rows/sec).

Posted: 2024-10-25 00:22:49
