The Passenger user guide contains a simple Capistrano recipe for application server restarts. It works well in most cases, but there is a big problem in a multi-server setup: it restarts all Passenger instances at the same time, so client requests will hang (or even be dropped) while your application starts. The simplest solution is to restart Passenger instances one by one with some shift in time (for example, 15 seconds; choose this value based on how long it takes to get your application up and running), so at any given moment only one of your application servers is unavailable. In this case HAProxy (you use it, don't you?) won't send any requests to the restarting server, and most of your users will continue their work without any trouble.
Let me show you how we could achieve this:
```ruby
namespace :deploy do
  desc <<-EOF
    Graceful Passenger restarts. By default, it restarts \
    Passenger on servers with a 15-second interval, but \
    this delay could be changed with the smart_restart_delay \
    variable (in seconds). If you specify 0, the restart will be \
    performed on all your servers immediately.

      cap production deploy:smart_restart

    Yet another way to restart Passenger immediately everywhere is \
    to specify the NOW environment variable:

      NOW=1 cap production deploy:smart_restart
  EOF
  task :smart_restart, :roles => :app do
    delay = fetch(:smart_restart_delay, 15).to_i
    delay = 0 if ENV['NOW']

    if delay <= 0
      logger.debug "Restarting Passenger"
      run "touch #{shared_path}/restart.txt"
    else
      logger.debug "Graceful Passenger restart with #{delay} seconds delay"
      parallel(:roles => :app, :pty => true, :shell => false) do |session|
        find_servers(:roles => :app).each_with_index do |server, idx|
          # Calculate the restart delay for this server
          sleep_time = idx * delay
          time_window = sleep_time > 0 ? "after #{sleep_time} seconds delay" : 'immediately'

          # The restart command sleeps the given number of seconds and then touches the restart.txt file
          touch_cmd = sleep_time > 0 ? "sleep #{sleep_time} && " : ''
          touch_cmd << "touch #{shared_path}/restart.txt && echo [`date`] Restarted Passenger #{time_window}"
          restart_cmd = "nohup sh -c '(#{touch_cmd}) &' 2>&1 >> #{current_release}/log/restart.log"

          # Run the restart command on the given server
          session.when "server.host == '#{server.host}'", restart_cmd
        end
      end
    end
  end
end
```
The trickiest part is the parallel block. We use the parallel method to run all our commands at the same time, but it has one big limitation: there is no way to substitute parts of a command on the fly based on the server where it is going to be executed. So instead we build a separate condition for each server in the :app role and calculate its time shift based on its index.
Sometimes it's necessary to perform an immediate restart (for example, when a database migration breaks the old code). We use an environment variable for this: cap production deploy:restart NOW=1.
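Note that the recipe above only defines deploy:smart_restart, while the command here uses deploy:restart; presumably the stock restart task is overridden to delegate to the graceful one. A minimal sketch of such an override (this wiring is my assumption, it is not shown in the original recipe):

```ruby
namespace :deploy do
  # Assumption: replace the standard Passenger restart task so that
  # `cap production deploy:restart NOW=1` (and a regular deploy) go through
  # the graceful smart_restart task defined above.
  task :restart, :roles => :app do
    smart_restart
  end
end
```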
At Scribd we use a single QA box for testing, with multiple applications configured on it. The only difference between the corresponding deployment scripts is the application path (e.g. /var/www/apps/qa/01, /var/www/apps/qa/02, etc.) So how do we keep them DRY? At first we created a single deployment stage called qa and deployed with cap qa deploy QAID=1. It worked, but it smelled bad. Today's version is much more elegant, but it took some effort to implement:
```ruby
(1..10).each do |idx|
  qid = '%02d' % idx
  name = "qa#{qid}"
  stages << name

  desc "Set the target stage to `#{name}'."
  task(name) do
    location = fetch(:stage_dir, "config/deploy")
    set :stage, :qa
    set :qa_id, qid
    load "#{location}/qa"
  end
end

# This is the tricky part. We need to re-define the multistage:ensure callback
# (which simply raises an exception), so it will not be executed for our newly
# defined stages.
if callbacks[:start]
  idx = callbacks[:start].index { |callback| callback.source == 'multistage:ensure' }
  callbacks[:start].delete_at(idx)
  on :start, 'multistage:ensure', :except => stages + ['multistage:prepare']
end
```
In the qa stage script we set the :deploy_to variable based on :qa_id. Now we can deploy using cap qa01 deploy. I leave the implementation of cap qa deploy, which selects a free QA box and then performs the deploy there, up to you (check Hint 4: Deploy locks, which explains how to prevent QA boxes from being stolen by overwriting deployments, using a simple lock technique).
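For reference, here is a minimal sketch of what the qa stage file itself could look like; the extra variables and the /var/www/apps/qa/NN path are assumptions based on the layout described above:

```ruby
# config/deploy/qa.rb -- a minimal sketch (paths and variables are illustrative)
set :deploy_to, "/var/www/apps/qa/#{qa_id}"  # qa_id is set by the generated qaNN task
set :rails_env, 'qa'                         # assumption: a dedicated qa Rails environment
```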
Campfire deploy notifications are the most straightforward and easy-to-implement feature:
```ruby
begin
  gem 'tinder', '>= 1.4.0'
  require 'tinder'
rescue Gem::LoadError => e
  puts "Load error: #{e}"
  abort "Please update tinder, your version is out of date: 'gem install tinder -v 1.4.0'"
end

namespace :campfire do
  desc "Send a message to the Campfire chat room"
  task :snitch do
    campfire = Tinder::Campfire.new 'SUBDOMAIN', :ssl => true, :token => 'YOUR_TOKEN'
    room = campfire.find_room_by_name 'YOUR ROOM'
    snitch_message = fetch(:snitch_message) do
      ENV['MESSAGE'] || abort('Campfire snitch message is missing. Use set :snitch_message, "Your message"')
    end
    room.speak(snitch_message)
  end

  desc "Send a message to the Campfire chat room about the deploy start"
  task :snitch_begin do
    set :snitch_message, "BEGIN DEPLOY [#{stage.upcase}]: #{ENV['USER']}, #{branch}/#{real_revision[0, 7]} to #{deploy_to}"
    snitch
  end

  desc "Send a message to the Campfire chat room about the deploy end"
  task :snitch_end do
    set :snitch_message, "END DEPLOY [#{stage.upcase}]: #{ENV['USER']}, #{branch}/#{real_revision[0, 7]} to #{deploy_to}"
    snitch
  end

  desc "Send a message to the Campfire chat room about the rollback"
  task :snitch_rollback do
    set :snitch_message, "ROLLBACK [#{stage.upcase}]: #{ENV['USER']}, #{latest_revision[0, 7]} to #{previous_revision[0, 7]} on #{deploy_to}"
    snitch
  end
end

#############################################################
# Hooks
#############################################################

before :deploy do
  campfire.snitch_begin unless ENV['QUIET'].to_i > 0
end

after :deploy do
  campfire.snitch_end unless ENV['QUIET'].to_i > 0
end

before 'deploy:rollback', 'campfire:snitch_rollback'
```
To deploy without notifications, use cap production deploy QUIET=1 (but be careful: usually it's not a good idea).
Sometimes it's useful to lock deploys to a specific stage. The most common reason is that you have pushed a heavy migration to master and want to run it yourself before the actual deploy, or you are performing maintenance on the production servers and want to be sure nobody will interfere with your work.
```ruby
namespace :deploy do
  desc "Prevent other people from deploying to this environment"
  task :lock, :roles => :web do
    check_lock

    msg = ENV['MESSAGE'] || ENV['MSG'] || fetch(:lock_message, 'Default lock message. Use MSG=msg to customize it')
    timestamp = Time.now.strftime("%m/%d/%Y %H:%M:%S %Z")
    lock_message = "Deploys locked by #{ENV['USER']} at #{timestamp}: #{msg}"

    put lock_message, "#{shared_path}/system/lock.txt", :mode => 0644
  end

  desc "Check if deploys are OK here or if someone has locked down deploys"
  task :check_lock, :roles => :web do
    # We use echo at the end to reset the exit code when the lock file is missing
    # (without it the deployment would fail on this command, which is not what we want)
    data = capture("cat #{shared_path}/system/lock.txt 2>/dev/null;echo").to_s.strip

    if data != '' and !(data =~ /^Deploys locked by #{ENV['USER']}/)
      logger.info "\e[0;31;1mATTENTION:\e[0m #{data}"
      if ENV['FORCE']
        logger.info "\e[0;33;1mWARNING:\e[0m You have forced the deploy"
      else
        abort 'Deploys are locked on this machine'
      end
    end
  end

  desc "Remove the deploy lock"
  task :unlock, :roles => :web do
    run "rm -f #{shared_path}/system/lock.txt"
  end
end

before :deploy, :roles => :web do
  deploy.check_lock
end
```
Now you can use cap production deploy:lock MSG="Running heavy migrations", and cap production deploy:unlock when you are done.
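If the same stage gets locked regularly, you could also define a stage-wide default message; a small sketch (my addition, not part of the original recipe) that relies on the fetch(:lock_message, ...) fallback in the lock task above:

```ruby
# config/deploy/production.rb (assumption: multistage layout). This default is
# used by deploy:lock when neither MESSAGE nor MSG is given on the command line.
set :lock_message, 'Production is under maintenance, please do not deploy'
```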
Another interesting and sometimes pretty useful trick is to fetch the list of servers for a deploy from some external service. For example, you have an application cloud and do not want to change your deployment script every time you add, remove, or disable a node. Well, I have good news for you: it's easy!
```ruby
namespace :deploy do
  task :set_nodes_from_remote_resource do
    # Here you would fetch the list of servers from somewhere
    nodes = %w(app01 app02 app03)

    # Clear the server lists of the :app and :db roles
    roles[:app].clear
    roles[:db].clear

    # Fill the :app role server list
    nodes.each do |node|
      parent.role :app, node
    end

    # The first server in the list is the primary node and the db node (to run migrations)
    primary = roles[:app].first
    primary.options[:primary] = true
    roles[:db].push(primary)

    # Log where we are going to deploy to
    nodes_to_deploy = roles[:app].servers.map do |server|
      opts = server.options[:primary] ? ' (primary, db)' : ''
      "#{server.host}#{opts}"
    end.join(', ')
    logger.info "Deploying to #{nodes_to_deploy}"
  end
end

on :start, 'deploy:set_nodes_from_remote_resource'
```
When you run cap production deploy, something like this will be printed to your console:
```
triggering start callbacks for `deploy'
  * executing `deploy:set_nodes_from_remote_resource'
 ** Deploying to app01 (primary, db), app02, app03
```
That's all for today. Deployment automation can be a really tricky task, but with the right tools it turns out to be a pleasure. Do you have any questions, suggestions, or other example deployment recipes? Do me a favor and post them in a comment! Also, I have (surprise!) a Twitter account @kpumuk, and you simply must follow me there. No excuses!
```yaml
common:
  support_email: [email protected]
  root_url: myhost.com
  photos_max_number: 6

production:
  email_exceptions: true

development:
  root_url: localhost:3000
  photos_max_number: 10
```
In this example you can see three sections: common is used as the base configuration for all environments, while production and development contain environment-specific options. The environment sections are optional and can be production, development, testing, or any other custom environment name. I've placed this file in config/config.yml and created lib/app_config.rb, which looks like this:
```ruby
# Load application configuration
require 'ostruct'
require 'yaml'

config = YAML.load_file("#{Rails.root}/config/config.yml") || {}
app_config = config['common'] || {}
app_config.update(config[Rails.env] || {})

AppConfig = OpenStruct.new(app_config)
```
Now I'm able to use constructions like AppConfig.support_email and AppConfig.root_url. Looks like I've kept all my configs as DRY as possible :-)
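As a quick illustration (my example, not from the original post), here is what the merged values would be in the development environment:

```ruby
# "common" values are overridden by the "development" section
AppConfig.root_url          # => "localhost:3000"
AppConfig.photos_max_number # => 10
AppConfig.email_exceptions  # => nil (the key is only set in production)
```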
First, I need to describe my current Apache 2.0 configuration: I have the SVN module enabled together with the auth_digest module, and Rails runs under fcgid. The first thing I saw was an error during the upgrade:
```
Starting web server (apache2)...Syntax error on line 32 of /etc/apache2/mods-enabled/dav_svn.conf:
Invalid command "AuthUserFile", perhaps misspelled or defined by a module not included in the server configuration
 failed!
invoke-rc.d: initscript apache2, action "start" failed.
```
Weird! The configuration format has changed in Apache 2.2, therefore we need to update the config:
```apache
AuthType Digest
AuthName SVN
AuthDigestDomain "/"
AuthDigestProvider file
AuthUserFile /var/www/svn/passwd
AuthzSVNAccessFile /var/www/svn/authz
```
Started! But what about Rails? It does not work at all: instead of my application I see the contents of dispatch.fcgi! It looks like no FastCGI module is installed, so I tried to install libapache2-mod-fcgid and libapache2-mod-fastcgi. Both of them depend on Apache 2.0! It seems I have no other way to run the application except under Mongrel. Here is my configuration:
```apache
<Proxy balancer://appsite>
  BalancerMember http://localhost:3000
  BalancerMember http://localhost:3001
</Proxy>

<VirtualHost *:80>
  ServerName appsite.myhost.com
  ServerAdmin [email protected]
  Options Indexes FollowSymlinks

  RewriteEngine On
  ProxyPass / balancer://appsite/
  ProxyPassReverse / balancer://appsite/
</VirtualHost>
```
Please note: if the URL in your ProxyPass directive ends with a trailing slash, the balancer URL has to end with a trailing slash too. Moreover, in this case you don't need a slash at the end of the BalancerMember directive, otherwise you will get a routing error in Rails. If you get the error "[warn] proxy: No protocol handler was valid for the URL /. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.", make sure you have enabled the proxy, proxy_balancer, and proxy_http modules.