Using Vagrant and Salt Stack to deploy Nginx on DigitalOcean

The base Vagrantfile

Salting the image

Adding Digital Ocean

Deploying the image

Managing the deployed image

Reprovisioning the image

One server does not an infrastructure make

I believe that managing your infrastructure can and should be fun. Recently I have been toying around with Vagrant and Salt Stack to make this a reality. This weekend, I managed to combine these tools to automatically provision a new Nginx server on Digital Ocean.

This in itself is nothing new - the interesting part is that I have published the entire setup as a Github repository without sacrificing any security.

If you're not interested in the story and just want to go and reproduce my infrastructure, go ahead and fork my repo on Github.

The base Vagrantfile

I began by using Vagrant, an exciting tool that abstracts away all VM hassles into a single configuration file. Using a VirtualBox image I created earlier using Veewee, the following Vagrantfile allowed me to spin up and destroy a local Debian Wheezy VM.

Vagrant.configure("2") do |config|
  # Default configuration for VirtualBox
  config.vm.box = 'debian-wheezy-64'
  config.vm.box_url = 'https://www.dropbox.com/s/00ndb5ea4k8hyoy/debian-wheezy-64.box'

  # Name the VM
  config.vm.define :nginx01
end

Just by having this simple file, I can now manage a VM with the commands vagrant up, vagrant ssh and vagrant destroy.
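For reference, the day-to-day loop with such a file boils down to a handful of Vagrant CLI commands (a sketch of the usual workflow, nothing repo-specific):

```shell
vagrant up        # create and boot the VM from the base box
vagrant ssh       # open a shell on the running VM
vagrant status    # check whether the VM is running
vagrant destroy   # tear the VM down again
```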

Salting the image

Starting a VM like this is already a sweet experience, but it gets better. The salty-vagrant plugin allows me to automatically install and configure software on the VM using the super sweet Salt Stack framework.

Yes, I know, Vagrant supports Puppet and Chef provisioning out of the box, but some time ago I decided that I don't just want provisioning for my infrastructure. I want a remote execution framework as well. And that's how you end up with Salt Stack.

Anyway, the following lines in my Vagrantfile were enough to enable Salt Stack:

  # Mount salt roots, so we can do a masterless setup
  config.vm.synced_folder 'salt/roots/', '/srv/'

  # Forward 8080 to nginx
  config.vm.network :forwarded_port, guest: 80, host: 8080

  # Provisioning #1: install Salt Stack
  config.vm.provision :shell,
    :inline => 'wget -O - http://bootstrap.saltstack.org | sudo sh'

  # Provisioning #2: masterless highstate call
  config.vm.provision :salt do |salt|
    salt.minion_config = 'salt/minion'
    salt.run_highstate = true
    salt.verbose = true
  end

This, and some Salt files of course. They can be found in my Github repo under salt/roots. In this case, the Salt files just install and configure a simple Nginx server, but it's the principle that counts.
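The actual states live in the repo, but a minimal masterless setup along these lines would consist of a salt/minion file that makes the minion read states from the local filesystem, plus a top file and an Nginx state. A rough sketch (file layout and state names here are illustrative, not copied from the repo):

```yaml
# salt/minion - run masterless: read states locally instead of from a master
file_client: local

# top.sls - apply the nginx state to every minion
base:
  '*':
    - nginx

# nginx.sls - install the nginx package and keep the service running
nginx:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: nginx
```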

Also note that technically, the salty-vagrant plugin is able to install Salt for you as well. However, for some reason the plugin has decided that this requires a complete recompile of python-zmq, which I am not interested in. So I use the official bootstrap method to install Salt before the plugin runs.

And now, after doing a vagrant up, the VM is automatically provisioned with a running Nginx server, accessible through http://localhost:8080.

Adding Digital Ocean

Once again a sweet experience, but hosting my infrastructure on my development machine is not really future-proof. Which brings me to the next part: deploying the exact same configuration on a real VPS.

Given the current list of available Vagrant plugins, and the fact that I don't want to spend too much on this right now, I decided on using Digital Ocean. They offer nice small SSD-backed VPSs for only $5 a month, and you pay by the hour. Which means that this entire exercise has cost me $0.05 so far.

The README of the vagrant-digitalocean plugin is self-explanatory, but it has one major flaw: it puts your client ID and API key in the main Vagrantfile. Call me old-fashioned, but I don't like sharing this information on Github.
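For completeness: the plugin itself is a one-time install through Vagrant's plugin system, as its README describes:

```shell
vagrant plugin install vagrant-digitalocean
```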

Luckily, Vagrant has a complete settings-merging process in place, which meant I could simply create the following ~/.vagrant.d/Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.provider :digital_ocean do |provider, override|
    override.ssh.private_key_path = '~/.ssh/id_dsa'
    override.vm.box_url = 'https://github.com/smdahlen/vagrant-digitalocean/raw/master/box/digital_ocean.box'

    provider.client_id = 'MY-SECRET-ID'
    provider.api_key = 'MY-SUPER-SECRET-API-KEY'
  end
end

Note the override.vm.box_url setting - my beautiful preinstalled Wheezy VM is useless on Digital Ocean, so I just use their dummy box. Always.

Having set up my private information, I just needed to add the following lines to my main Vagrantfile:

  # VM-specific Digital Ocean config
  config.vm.provider :digital_ocean do |provider|
    provider.image = 'Debian 7.0 x64'
    provider.region = 'New York 1'
    provider.size = '512MB'
  end

Deploying the image

The proof is in the pudding (apparently), so with great trepidation I did a vagrant up --provider=digital_ocean.

You should try it yourself - this was really quite exciting. Just a few minutes later, I could access my professionally provisioned Nginx VPS at http://192.241.146.220. Without ever SSH-ing into the server myself.

MISSION SUCCESSFUL

Or was it?

Managing the deployed image

At the moment, Vagrant does not support running a machine on multiple providers at the same time. So in order to start a local VM (vagrant up), you would have to do a vagrant destroy on the current provider first.

This is not good. The vagrant destroy command does exactly what it says, and it destroys your VPS. Which is sort of missing the point.

In order to switch back to local development, you should remove the .vagrant/machines/$NAME/digital_ocean/id file. This makes Vagrant forget everything it knows about your VPS and vagrant up will start a local VM as expected.
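In practice, switching back to local development looks like this (assuming the machine is named nginx01, as defined in the Vagrantfile):

```shell
# Make Vagrant forget the Digital Ocean VPS without destroying it
rm .vagrant/machines/nginx01/digital_ocean/id

# This now starts a local VirtualBox VM again
vagrant up
```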

And now for the nice part: the vagrant-digitalocean plugin actually does not care about this. The next time you do a vagrant up --provider digital_ocean, it will detect your existing VPS by name, and automatically reinstate the id-file.

Reprovisioning the image

Provisioning a server is nice, but being able to reprovision a running server is even better. There are three ways to do this.

The first one is vagrant provision, which just runs the provisioning scripts again. This is great for incremental updates, and it keeps your server online, but it does not guarantee that provisioning works from an initial state as well.

The second one is vagrant destroy ; vagrant up --provider=digital_ocean. This will recreate your VPS from the ground up, ensuring a future-proof provisioning. Unfortunately, Digital Ocean does not guarantee that this will give you the same IP address. You will also incur a few minutes of downtime.

The final one is vagrant rebuild, which does guarantee the same IP address and seems to be functionally equivalent to the previous method. This too gives you a few minutes of downtime.
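Summarized, the three options look like this (vagrant rebuild is provided by the Digital Ocean plugin):

```shell
# 1. Incremental: rerun the provisioning on the running server, no downtime
vagrant provision

# 2. From scratch: destroy and recreate the VPS (new IP possible, brief downtime)
vagrant destroy ; vagrant up --provider=digital_ocean

# 3. Rebuild in place: recreate the droplet but keep its IP (brief downtime)
vagrant rebuild
```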

One server does not an infrastructure make

All of this has of course merely touched the surface of real infrastructure provisioning. Because I don't let Digital Ocean manage my DNS, I have to manually update my records, the server does not do any monitoring, the current configuration is not really exciting, and using a masterless Salt minion setup sort of defeats the purpose of using Salt.

So what.

This exercise has shown me that having your infra as a repo is a viable position, and I am determined to continue down this path. It might even result in another blog post.
