Rakefile Tutorial by Example

1. Introduction

Simply put, a Rakefile is a makefile written in Ruby syntax, and rake is the tool that corresponds to make. In Ruby on Rails, everything from database initialization, data loading, and teardown to testing is driven by rake.

Features:

1. Create and run scripts as tasks

2. Track and manage the dependencies between tasks

2. Syntax

A Rakefile is built from a few basic constructs (illustrated in the sketch after this list):

Dependencies: =>

Default task: default

Namespaces: namespace

Task descriptions: desc

Task invocation: invoke
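
The following minimal Rakefile is a sketch (not taken from the program below; the task and namespace names greet, db:prepare, and db:reset are invented for illustration) that exercises each of the five constructs:

task :default => [:greet]              # default: what a bare `rake` runs

desc "Print a greeting"                # desc: the text shown by `rake -T`
task :greet do
  puts "hello from rake"
end

namespace :db do                       # namespace: groups related tasks
  desc "Pretend to prepare the database"
  task :prepare => [:greet] do         # =>: run :greet before this task
    puts "preparing the database..."
  end

  desc "Re-run the preparation from another task"
  task :reset do
    Rake::Task["db:prepare"].invoke    # invoke: call a task programmatically
  end
end

Running rake with no arguments executes the default task, rake -T lists every task that has a desc, and because invoke remembers which tasks have already run, db:prepare does its work only once per rake run no matter how many tasks depend on it.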

3. Example

Program 1: Data backup

# = S3 Rake - Use S3 as a backup repository for your SVN repository, code directory, and MySQL database
#
# Author::    Adam Greene
# Copyright:: (c) 2006 6 Bar 8, LLC., Sweetspot.dm
# License::   GNU
#
# Feedback appreciated: adam at [nospam] 6bar8 dt com
#
# = Synopsis
#
#  from the CommandLine within your RubyOnRails application folder
#  $ rake -T
#    rake s3:backup                      # Backup code, database, and scm to S3
#    rake s3:backup:code                 # Backup the code to S3
#    rake s3:backup:db                   # Backup the database to S3
#    rake s3:backup:scm                  # Backup the scm repository to S3
#    rake s3:manage:clean_up             # Remove all but the 10 most recent backup archives or optionally specify KEEP=5 to keep
#                                        #   the last 5
#    rake s3:manage:delete_bucket        # delete bucket.  You need to pass in NAME=bucket_to_delete.  Set FORCE=true if you want to
#                                        #   delete the bucket even if there are items in it.
#    rake s3:manage:list                 # list all your backup archives
#    rake s3:manage:list_buckets         # list all your S3 buckets
#    rake s3:retrieve                    # retrieve the latest revision of code, database, and scm from S3.
#                                        #   If  you need to specify a specific version, call the individual retrieve tasks
#    rake s3:retrieve:code               # retrieve the latest code backup from S3, or optionally specify a VERSION=this_archive.tar.gz
#    rake s3:retrieve:db                 # retrieve the latest db backup from S3, or optionally specify a VERSION=this_archive.tar.gz
#    rake s3:retrieve:scm                # retrieve the latest scm backup from S3, or optionally specify a VERSION=this_archive.tar.gz
#
# = Description
#
#  There are a few prerequisites to get this up and running:
#    * please download the Amazon S3 ruby library and place it in your ./lib/ directory
#      http://developer.amazonwebservices.com/connect/entry.jspa?externalID=135&categoryID=47
#    * You will need a 's3.yml' file in ./config/.  Sure, you can hard-code the information in this rake task,
#      but I like the idea of keeping all your configuration information in one place.  The file will need to look like:
#        aws_access_key: ''
#        aws_secret_access_key: ''
#        options:
#            use_ssl: true #set it to true or false
#
#  Once these two requirements are met, you can easily integrate these rake tasks into capistrano tasks or into cron.
#    * For cron, put this into a file like .backup.cron.  You can drop this file into /etc/cron.daily,
#      and make sure you chmod +x .backup.cron.  Also make sure it is owned by the appropriate user (probably 'root'):
#
#      #!/bin/sh
#
#      # change the paths as you need...
#      cd /var/www/apps//current/ && rake s3:backup >/dev/null 2>&1
#      cd /var/www/apps/staging./current/ && rake s3:backup >/dev/null 2>&1
#
#    * within your capistrano recipe file, you can add tasks like these:
#
#     task :before_migrate, :roles => [:app, :db, :web] do
#        # this will back up your svn repository, your code directory, and your mysql db.
#        run "cd #{current_path} && rake --trace RAILS_ENV=production s3:backup"
#     end
#
# = Future enhancements
#
#  * encrypt the files before they are sent to S3
#  * when doing a retrieve, uncompress and untar the files for the user.
#  * any other enhancements?
#
# = Credits and License
#
#    inspired by rsh3ll, developed by Dominic Da Silva:
#    http://rubyforge.org/projects/rsh3ll/
#
# This library is licensed under the GNU General Public License (GPL)
#  [http://dev.perl.org/licenses/gpl1.html].
#
#
require 's3'
require 'yaml'
require 'erb'
require 'time'
require 'active_record'
namespace :s3 do

  desc "Backup code, database, and scm to S3"
  task :backup => [ "s3:backup:code",  "s3:backup:db", "s3:backup:scm"]

  namespace :backup do
    desc "Backup the code to S3"
    task :code  do
      msg "backing up CODE to S3"
      make_bucket('code')
      archive = "/tmp/#{archive_name('code')}"

      # copy it to tmp just to play it safe...
      cmd = "cp -rp #{Dir.pwd} #{archive}"
      msg "extracting code directory"
      puts cmd
      result = system(cmd)
      raise("copy of code dir failed..  msg: #{$?}") unless result

      send_to_s3('code', archive)
    end #end code task

    desc "Backup the database to S3"
    task :db  do
      msg "backing up the DATABASE to S3"
      make_bucket('db')
      archive = "/tmp/#{archive_name('db')}"

      msg "retrieving db info"
      database, user, password = retrieve_db_info

      msg "dumping db"
      cmd = "mysqldump --opt --skip-add-locks -u#{user} "
      puts cmd + "... [password filtered]"
      cmd += " -p'#{password}' " unless password.nil?
      cmd += " #{database} > #{archive}"
      result = system(cmd)
      raise("mysqldump failed.  msg: #{$?}") unless result

      send_to_s3('db', archive)
    end

    desc "Backup the scm repository to S3"
    task :scm  do
      msg "backing up the SCM repository to S3"
      make_bucket('scm')
      archive = "/tmp/#{archive_name('scm')}"
      # archive = "/tmp/#{archive_name('scm')}.tar.gz"
      svn_info = {}
      IO.popen("svn info") do |f|
        f.each do |line|
          line.strip!
          next if line.empty?
          split = line.split(':')
          svn_info[split.shift.strip] = split.join(':').strip
        end
      end

      url_type, repo_path = svn_info['URL'].split('://')
      repo_path.gsub!(/\/+/, '/').strip!
      url_type.strip!

      use_svnadmin = true
      final_path = svn_info['URL']
      if url_type =~ /^file/
        puts "'#{svn_info['URL']}' is local!"
        final_path = find_scm_dir(repo_path)
      else
        puts "'#{svn_info['URL']}' is not local!\nWe will see if we can find a local path."
        repo_path = repo_path[repo_path.index('/')...repo_path.size]
        repo_path = find_scm_dir(repo_path)
        if File.exists?(repo_path)
          uuid = File.read("#{repo_path}/db/uuid").strip
          if uuid == svn_info['Repository UUID']
            puts "We have found the same SVN repo at: #{repo_path} with a matching UUID of '#{uuid}'"
            final_path = find_scm_dir(repo_path)
          else
            puts "We have not found the SVN repo at: #{repo_path}.  The UUIDs are different."
            use_svnadmin = false
            final_path = svn_info['URL']
          end
        else
          puts "No SVN repository at #{repo_path}."
          use_svnadmin = false
          final_path = svn_info['URL']
        end
      end

      #ok, now we need to do the work...
      cmd = use_svnadmin ? "svnadmin dump -q #{final_path} > #{archive}" : "svn co -q --ignore-externals --non-interactive #{final_path} #{archive}"
      msg "extracting svn repository"
      puts cmd
      result = system(cmd)
      raise "previous command failed.  msg: #{$?}" unless result
      send_to_s3('scm', archive)
    end #end scm task

  end # end backup namespace

  desc "retrieve the latest revision of code, database, and scm from S3.  If  you need to specify a specific version, call the individual retrieve tasks"
  task :retrieve => [ "s3:retrieve:code",  "s3:retrieve:db", "s3:retrieve:scm"]

  namespace :retrieve do
    desc "retrieve the latest code backup from S3, or optionally specify a VERSION=this_archive.tar.gz"
    task :code  do
      retrieve_file 'code', ENV['VERSION']
    end

    desc "retrieve the latest db backup from S3, or optionally specify a VERSION=this_archive.tar.gz"
    task :db  do
      retrieve_file 'db', ENV['VERSION']
    end

    desc "retrieve the latest scm backup from S3, or optionally specify a VERSION=this_archive.tar.gz"
    task :scm  do
      retrieve_file 'scm', ENV['VERSION']
    end
  end #end retrieve namespace

  namespace :manage do
    desc "Remove all but the 10 most recent backup archives or optionally specify KEEP=5 to keep the last 5"
    task :clean_up  do
      keep_num = ENV['KEEP'] ? ENV['KEEP'].to_i : 10
      puts "keeping the last #{keep_num}"
      cleanup_bucket('code', keep_num)
      cleanup_bucket('db', keep_num)
      cleanup_bucket('scm', keep_num)
    end

    desc "list all your backup archives"
    task :list  do
      print_bucket 'code'
      print_bucket 'db'
      print_bucket 'scm'
    end

    desc "list all your S3 buckets"
    task :list_buckets do
      puts conn.list_all_my_buckets.entries.map { |bucket| bucket.name }
    end

    desc "delete bucket.  You need to pass in NAME=bucket_to_delete.  Set FORCE=true if you want to delete the bucket even if there are items in it."
    task :delete_bucket do
      name = ENV['NAME']
      raise "Specify a NAME=bucket that you want deleted" if name.blank?
      force = ENV['FORCE'] == 'true' ? true : false

      cleanup_bucket(name, 0, false) if force
      response = conn.delete_bucket(name).http_response.message
      response = "Yes" if response == 'No Content'
      puts "deleting bucket #{bucket_name(name)}.  Successful? #{response}"
    end
  end #end manage namespace
end

  private

  def find_scm_dir(path)
    #double check if the path is a real physical path vs a svn path
    final_path = path
    tmp_path = final_path
    len = tmp_path.split('/').size
    while !File.exists?(tmp_path) && len > 0 do
      len -= 1
      tmp_path = final_path.split('/')[0..len].join('/')
    end
    final_path = tmp_path if len > 1
    final_path
  end

  # will save the file from S3 in the pwd.
  def retrieve_file(name, specific_file)
    msg "retrieving a #{name} backup from S3"
    entries = conn.list_bucket(bucket_name(name)).entries
    raise "No #{name} backups to retrieve" if entries.size < 1

    entry = entries.find{|entry| entry.key == specific_file}
    raise "Could not find the file '#{specific_file}' in the #{name} bucket" if entry.nil? && !specific_file.nil?
    entry_key = specific_file.nil? ? entries.last.key : entry.key
    msg "retrieving archive: #{entry_key}"
    data = conn.get(bucket_name(name), entry_key).object.data
    File.open(entry_key, "wb") { |f| f.write(data) }
    msg "retrieved file ‘./#{entry_key}‘"
  end

  # print information about an item in a particular bucket
  def print_bucket(name)
    msg "#{bucket_name(name)} Bucket"
    conn.list_bucket(bucket_name(name)).entries.map do |entry|
      puts "size: #{entry.size/1.megabyte}MB,  Name: #{entry.key},  Last Modified: #{Time.parse( entry.last_modified ).to_s(:short)} UTC"
    end
  end

  # go through and keep a certain number of items within a particular bucket,
  # and remove everything else.
  def cleanup_bucket(name, keep_num, convert_name=true)
    msg "cleaning up the #{name} bucket"
    bucket = convert_name ? bucket_name(name) : name
    entries = conn.list_bucket(bucket).entries #will only retrieve the last 1000
    remove = entries.size-keep_num-1
    entries[0..remove].each do |entry|
      response = conn.delete(bucket, entry.key).http_response.message
      response = "Yes" if response == 'No Content'
      puts "deleting #{bucket}/#{entry.key}, #{Time.parse( entry.last_modified ).to_s(:short)} UTC.  Successful? #{response}"
    end unless remove < 0
  end

  # open a S3 connection
  def conn
    @s3_configs ||= YAML::load(ERB.new(IO.read("#{RAILS_ROOT}/config/s3.yml")).result)
    @conn ||= S3::AWSAuthConnection.new(@s3_configs['aws_access_key'], @s3_configs['aws_secret_access_key'], @s3_configs['options']['use_ssl'])
  end

  # programatically figure out what to call the backup bucket and
  # the archive files.  Is there another way to do this?
  def project_name
    # using Dir.pwd will return something like:
    #   /var/www/apps/staging.sweetspot.dm/releases/20061006155448
    # instead of
    # /var/www/apps/staging.sweetspot.dm/current
    pwd = ENV['PWD'] || Dir.pwd
    #another hack..ugh.  If using standard capistrano setup, pwd will be the 'current' symlink.
    pwd = File.dirname(pwd) if File.symlink?(pwd)
    File.basename(pwd)
  end

  # create S3 bucket.  If it already exists, not a problem!
  def make_bucket(name)
    response = conn.create_bucket(bucket_name(name)).http_response.message
    raise "Could not make bucket #{bucket_name(name)}.  Msg: #{response}" if response != 'OK'
    msg "using bucket: #{bucket_name(name)}"
  end

  def bucket_name(name)
    # it would be 'nicer' if we could use '/' instead of '_' for bucket names...but for some reason S3 doesn't like that
    "#{token(name)}_backup"
  end

  def token(name)
    "#{project_name}_#{name}"
  end

  def archive_name(name)
    @timestamp ||= Time.now.utc.strftime("%Y%m%d%H%M%S")
    token(name).sub('_', '.') + ".#{RAILS_ENV}.#{@timestamp}"
  end

  # tar and gzip everything that goes to S3,
  # send it to the appropriate backup bucket,
  # then clean up the temporary files
  def send_to_s3(name, tmp_file)
    archive = "/tmp/#{archive_name(name)}.tar.gz"

    msg "archiving #{name}"
    cmd = "tar -cpzf #{archive} #{tmp_file}"
    puts cmd
    system cmd

    msg "sending archived #{name} to S3"
    # put file with default 'private' ACL
    bytes = nil
    File.open(archive, "rb") { |f| bytes = f.read }
    #set the acl as private
    headers = { 'x-amz-acl' => 'private', 'Content-Length' => FileTest.size(archive).to_s }
    response = conn.put(bucket_name(name), archive.split('/').last, bytes, headers).http_response.message
    msg "finished sending #{name} to S3"

    msg "cleaning up"
    cmd = "rm -rf #{archive} #{tmp_file}"
    puts cmd
    system cmd
  end

  def msg(text)
    puts " -- msg: #{text}"
  end

  def retrieve_db_info
    # read the remote database file....
    # there must be a better way to do this...
    result = File.read "#{RAILS_ROOT}/config/database.yml"
    result.strip!
    config_file = YAML::load(ERB.new(result).result)
    return [
      config_file[RAILS_ENV]['database'],
      config_file[RAILS_ENV]['username'],
      config_file[RAILS_ENV]['password']
    ]
  end
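
The header comment lists "uncompress and untar the files for the user" as a future enhancement. As a rough sketch of that idea (not part of the original script; the helper name unpack_archive is invented), retrieve_file could hand the downloaded archive to something like:

  # Hypothetical helper sketching the "uncompress and untar on retrieve"
  # enhancement mentioned in the header; not part of the original task file.
  def unpack_archive(archive)
    # every backup is uploaded by send_to_s3 as a gzipped tarball
    cmd = "tar -xpzf #{archive}"
    puts cmd
    system(cmd) || raise("unpacking #{archive} failed.  msg: #{$?}")
  end

retrieve_file would then end with a call such as unpack_archive(entry_key) once the download has been written to disk.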