
Rails redirect_back_or_default

In a recent project I found myself writing redirect_to :back a lot, and then found myself worrying: what if, for some reason, there is no :back?

Drawing inspiration from this blog, I wrote two helpers in my application_controller.rb.

The store_location helper stores the current URI (or the referer URI in the case of a non-GET request) into session[:return_to] for later use:

def store_location
  session[:return_to] = if request.get?
    request.request_uri
  else
    request.referer
  end
end
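
A typical place to call it is a login filter, so the user can be sent back after authenticating. A minimal sketch (the filter name, current_user and login_url are hypothetical, not part of the helpers above):

# Hypothetical before_filter: remember where the user was before forcing a login
def require_login
  unless current_user              # assumes some current_user helper
    store_location                 # stash the current URI in session[:return_to]
    redirect_to login_url, alert: 'Please log in first'
  end
end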

And redirect_back_or_default tries its best to redirect the user somewhere sensible, trying the following in order:

  1. previously stored session[:return_to]
  2. Referer URI
  3. Given default URI
  4. or root_url if all else fails

The code itself

# NB: `options` is a required trailing argument (Ruby 1.9+), so a bare
# `redirect_back_or_default(notice: '...')` binds the hash to `options`
# while `default` keeps its root_url value.
def redirect_back_or_default(default = root_url, options)
  redirect_to(session.delete(:return_to) || request.referer || default, options)
end

I’ve found that when rewriting redirect_to :back, notice: 'something' calls into redirect_back_or_default calls, adding this alias helps:

alias_method :redirect_to_back_or_default, :redirect_back_or_default
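
With the alias in place, the mechanical rewrite looks roughly like this (the notice text is just an example):

# before: raises ActionController::RedirectBackError when there is no referer
redirect_to :back, notice: 'Profile updated'

# after: falls back to session[:return_to], the referer, or root_url
redirect_to_back_or_default notice: 'Profile updated'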

But of course, if you are testing your code (and you should be), it’s better to stick to one variant of the above and use tests to catch any erroneous incarnations.

Rails 3: Merge scopes

I ran into a case where I had a User.search method and I wanted the GroupMember model to be searchable by the user’s attributes. The most DRY way to accomplish this in Rails 3 is to merge scopes. In the User model:

# user.rb
class User < ActiveRecord::Base
  has_many :memberships, :class_name => "GroupMember", :foreign_key => "user_id"

  def self.search(search)
    if search.present?
      query = []
      params = []
      %w(uid email name).each do |field|
        # The field name must be fully qualified to merge scopes
        query << "#{self.table_name}.#{field} LIKE ?"
        params << "%#{search}%"
      end
      query = query.join(" OR ")
      where(query, *params)
    else
      scoped
    end
  end
end
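
Because both branches return a relation, User.search chains like any other scope, for example:

# chains with other relations; the conditions stay qualified with the users table
User.search('david').order('users.name').limit(10)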

NB! It’s important to have the User’s field names fully qualified so that they won’t be applied to the GroupMember table. And in the GroupMember model:

# group_member.rb
class GroupMember < ActiveRecord::Base
  belongs_to :user
  belongs_to :group

  def self.search(search)
    if search.present?
      # We search GroupMembers by the user attributes
      scoped.joins(:user).merge(User.search(search))
    else
      scoped
    end
  end
end

Now it’s possible to search for GroupMembers by the User attributes:

group = Group.find 1
group.group_members.search('david')

This results in the SQL query:

SELECT "group_members".* FROM "group_members" INNER JOIN "users"
ON "users"."id" = "group_members"."user_id" WHERE "group_members"."group_id" = 1
AND (users.uid LIKE '%david%' OR users.email LIKE '%david%'
OR users.name LIKE '%david%')
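
In a controller this might be wired up along these lines (the controller and params are hypothetical):

# Hypothetical controller action using the merged search scope
class GroupMembersController < ApplicationController
  def index
    @group = Group.find(params[:group_id])
    @group_members = @group.group_members.search(params[:search])
  end
end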

Ruby Rack servers benchmark

Facing the question of which Ruby Rack server performs best behind an Nginx front end, and failing to google up any exact comparison, I decided to do a quick test myself.

The servers:

  1. Unicorn
  2. Thin
  3. Phusion Passenger

Later I also tried to test the uWSGI server, as it now boasts a built-in Rack module, but I dropped it for two reasons: (1) it required tweaking the OS to raise kern.ipc.somaxconn above 128 (which no other server needed) and later raising Nginx’s worker_connections above 1024 too, and (2) it still lagged far behind at ~130 req/s, so after a successful concurrency of 1000 requests I got tired of waiting for the tests to complete and gave up seeking its break point. Still, uWSGI is a very interesting project that I will keep my eye on, mostly because of its Emperor and Zerg modes and its ease of deployment for dynamic mass-hosting of Rack apps.

As uWSGI was originally developed for Python, I wasted a bit of time trying to get it working with some simple Python framework for comparison, but that failed, probably due to a lack of knowledge on my part.

Testing

The test platform was a dual-core, hyper-threaded machine (4 logical cores), with Nginx as the front end and Apache JMeter generating the load.

To set up a basic test case, I wrote a simple Rack app that responds to every request with the requester’s IP address. I decided to output the IP because it involves running some Ruby code in the app while still being rather simple.

# config.ru: a Rack app that replies to every request with the client's IP address
ip = lambda do |env|
  [200, {"Content-Type" => "text/plain"}, [env["REMOTE_ADDR"]]]
end
run ip

Tweaking the concurrency number N (see below) with a resolution of 100, I found the break point of each server (the point where it started giving errors) and recorded the previous throughput (the highest one that didn’t give any errors).

Results

The results are as follows:

  1. Unicorn – 2451 req/s @ 1500 concurrent requests
  2. Thin – 2102 req/s @ 900 concurrent requests
  3. Passenger – 1549 req/s @ 400 concurrent requests

The following are screenshots from JMeter results:

Unicorn @1500 concurrent requests

Thin @900 concurrent requests

Passenger @400 concurrent requests

None of these throughputs are bad, but Unicorn and Thin still beat the crap out of Passenger.

Details

The JMeter test case:

  1. ramp up to N concurrent requests
  2. send a request to the server
  3. assert that the response contains an IP address
  4. loop all of this 10 times

Nginx configuration:

    # Passenger
    server {
      listen 8080;
      server_name localhost;
      root /Users/laas/proged/rack_test/public;
      passenger_enabled on;
      rack_env production;
      passenger_min_instances 4;
    }
 
    # Unicorn
    upstream unicorn_server {
      server unix:/Users/laas/proged/rack_test/tmp/unicorn.sock fail_timeout=0;
    }
 
    server {
      listen 8081;
      server_name localhost;
      root /Users/laas/proged/rack_test/public;
 
      location / {
        proxy_pass http://unicorn_server;
      }
    }
 
    # Thin
    upstream thin_server {
      server unix:/Users/laas/proged/rack_test/tmp/thin.0.sock fail_timeout=0;
      server unix:/Users/laas/proged/rack_test/tmp/thin.1.sock fail_timeout=0;
      server unix:/Users/laas/proged/rack_test/tmp/thin.2.sock fail_timeout=0;
      server unix:/Users/laas/proged/rack_test/tmp/thin.3.sock fail_timeout=0;
    }
 
    server {
      listen 8082;
      server_name localhost;
      root /Users/laas/proged/rack_test/public;
 
      location / {
        proxy_pass http://thin_server;
      }
    }

As is only logical, having the number of processes match the number of cores (dual-core with HT = 4 logical cores) gave the best results for both Thin and Unicorn (though the variations were small).

Unicorn configuration

Passenger requires no additional configuration, and Thin was configured from the command line to use 4 servers and Unix sockets, but Unicorn required a separate config file (I modified the Unicorn example config for my purpose):

worker_processes 4
working_directory "/Users/laas/proged/rack_test/"
listen '/Users/laas/proged/rack_test/tmp/unicorn.sock', :backlog => 512
timeout 120
pid "/Users/laas/proged/rack_test/tmp/pids/unicorn.pid"
 
preload_app true
if GC.respond_to?(:copy_on_write_friendly=)
  GC.copy_on_write_friendly = true
end
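
For reference, the Thin setup mentioned above was done purely with command-line flags; it probably looked roughly like this (the exact flags I used may have differed):

thin start --servers 4 --socket /Users/laas/proged/rack_test/tmp/thin.sock \
  --environment production --daemonize
# Thin numbers the sockets per server, creating thin.0.sock ... thin.3.sock
# to match the upstream block in the Nginx config above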

Disclaimer

I admit that this is an extremely basic test and that with better configuration much more could be squeezed out of all of these servers, but this simple test served my purpose and hopefully is of help to others too.

RailsVacation

For those who have a Unix mail server with procmail-based vacation scripts AND users who do not feel comfortable editing .procmailrc AND who do not use any webmail software (that comes with a vacation plugin), this might solve your problem.

http://github.com/borgand/RailsVacation/tree/master

This is a Ruby on Rails app that uses SFTP to set up, edit, and remove vacation messages on the mail server, and is backed by a (My)SQL database that users are authenticated against. Authentication is done with RESTful authentication, and the user database should be compliant with it.
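
The SFTP part is conceptually simple; here is a minimal sketch of the kind of operation involved (using the net-sftp gem; host, credentials and the vacation-file path are placeholders, not the app's actual code):

require 'net/sftp'

# Write a vacation auto-reply file on the mail server over SFTP.
# Host, user, password and path are placeholders.
Net::SFTP.start('mail.example.com', 'jane', :password => 'secret') do |sftp|
  sftp.file.open('/home/jane/.vacation.msg', 'w') do |f|
    f.write("I am away until Monday and will answer your mail when I am back.\n")
  end
end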

This app is localized to English and Estonian, but translation is easy and (extremely) short, so new languages can be added easily.
