Developing high-traffic APIs using a message queue
Mon 24 Dec 2018

RabbitMQ and Sneakers bring us a ton of benefits. Moving synchronous work into asynchronous background jobs is a common way to improve performance, and we can still make the flow look "synchronous" to clients. If you are looking for tools to do that, RabbitMQ and Sneakers are good ones. Now let's go straight to a specific problem and its solution; that is more practical. It's what I call IKES: Information -> Knowledge -> Experience -> Skills.


The task: develop a POST API that serves high-traffic requests. This API needs to be fast, so we push the heavy processing into background jobs. However, as soon as the POST API returns, the GET API must be able to serve that data, whether or not the background jobs have finished.


At first I considered Sidekiq for the background jobs: the system would read the job's parameters in order to serve the GET API. However, Sidekiq and Redis bring some problems. As we all know, Sidekiq stores its queues in Redis, which is an in-memory store. The application works just fine while it is small and does not receive much traffic, but with high traffic and a huge number of queued background jobs, it becomes a problem: we run out of memory.

Let's have a look at the comparison here:

[Comparison table: background-job backends and their features]

I would love to use Sidekiq because I'm familiar with it, and it provides a ton of useful APIs for working with jobs in the queue. However, as an architect I need to put robustness first, and I found Sneakers to be the better choice. Sneakers is a fast background-processing framework for Ruby and RabbitMQ. Furthermore, the pub/sub mechanism with workers brings a lot of benefit: the workers (consumers listening on a message queue) do the processing in advance, so the results are ready to be used later (fast GETs).

Sneakers and RabbitMQ solve several problems that come with Sidekiq and Redis:

  • Redis is a volatile store. Sneakers addresses this with the ack! function, which guarantees that a job has been processed: if the consumer fails to send an "ack" within the specified time period, the message is put back at the front of the queue.
  • Sidekiq has memory problems because of Redis. RabbitMQ provides "lazy queues", which keep their messages on disk whenever possible, and can therefore support very long queues (many millions of messages).
  • Although Sneakers is really good, it still has some problems at this moment. I tried to integrate Sneakers with ActiveJob, but all messages went to the "default" queue; RabbitMQ did not create any other queues because named queues have not been implemented yet (lol). Have a look at this article for more details.
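The ack-and-requeue behaviour described above can be sketched in plain Ruby, with a stdlib Array standing in for the broker's queue (the names here are illustrative, not RabbitMQ's actual API):

```ruby
# Toy model of RabbitMQ's ack semantics: a message is removed only when the
# consumer acks it; a message that is not acked goes back to the front.
queue = ["msg-1", "msg-2", "msg-3"]
processed = []

# Pretend the consumer fails on msg-2 the first time it sees it.
handle = lambda { |msg| msg != "msg-2" }

attempts = 0
until queue.empty?
  msg = queue.shift
  attempts += 1
  if handle.call(msg)
    processed << msg          # ack! -> the message is gone for good
  else
    queue.unshift(msg)        # no ack -> requeued at the front
    handle = lambda { |_| true }   # the retry succeeds
  end
end
```

Every message eventually gets processed exactly once, at the cost of one extra delivery attempt for the failed one.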

Finally, I ended up using workers directly instead of integrating Sneakers with ActiveJob, and it is still good: workers always run in the background, and they are good consumers that react instantly whenever new messages arrive in the queue.
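A worker in this setup looks roughly like the sketch below. The class name, queue name, and payload shape are made up for illustration; the small Sneakers::Worker stub only exists so the sketch runs standalone (with the real gem you would `require 'sneakers'` and delete it):

```ruby
require 'json'

# Tiny stand-in for Sneakers::Worker so this sketch runs without the gem;
# with the real gem installed, `require 'sneakers'` and drop this stub.
module Sneakers
  module Worker
    def self.included(base)
      base.extend(ClassMethods)
    end

    module ClassMethods
      attr_reader :queue_name
      def from_queue(name, opts = {})
        @queue_name = name
      end
    end

    def ack!
      :ack
    end
  end
end

# Hypothetical consumer: persists the POST payload, then acks the message.
class PostWorker
  include Sneakers::Worker
  from_queue 'posts'

  DB = {}   # stand-in for ActiveRecord

  def work(msg)
    payload = JSON.parse(msg)
    DB[payload['id']] = payload   # e.g. Post.create!(payload)
    ack!                          # tell RabbitMQ the job is done
  end
end
```

Because the worker process stays up and subscribed, each `work` call fires as soon as a message lands in the `posts` queue.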

Implementing the POST method is easy. The GET method, however, raises several problems:

  • Consumers process messages so fast that we might miss them if we use the RabbitMQ CLI to read messages from the queue, because the messages are no longer in the queue at processing time. Solution: use consumers to catch the messages.
  • If we use another consumer to look for a specific message in the queue (the message is still there if it has not been processed yet), how long will that consumer take? The GET method needs to return a result, doesn't it? Solution: put the Redis write at the beginning of the POST worker, to make sure the POST data is ready for GET (which reads from Redis later).
  • There is a delay between calling ActiveRecord's create method and deleting the Redis key. Solution: delete the key 1 second after the record is created.
  • There is also transition time from when "publish" is called to when a consumer catches the message. If we put the Redis SET inside the worker, users sometimes cannot get the result. Solution: do the Redis SET right after publishing the message (as soon as we have the id).
  • What happens if there are a ton of messages in Redis and it runs out of memory?
  • Set Redis keys properly for querying: by primary key and by foreign keys.
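Putting those pieces together, the POST -> GET flow looks roughly like this. A Hash stands in for Redis and another for the database, and all method names are illustrative, not the real application's code:

```ruby
require 'json'
require 'securerandom'

REDIS = {}   # stand-in for Redis
DB    = {}   # stand-in for the real database

# POST: SET the payload in Redis as soon as we have an id, right after
# publishing, so GET can serve the data before the consumer has run.
def create_post(attrs)
  id = SecureRandom.uuid
  REDIS["post:#{id}"] = attrs.merge('id' => id).to_json
  # (publish the message to RabbitMQ here)
  id
end

# Consumer: create the record, then delete the Redis key (the flow above
# delays this delete by ~1 second after the record is created).
def consume(id)
  DB[id] = JSON.parse(REDIS["post:#{id}"])
  REDIS.delete("post:#{id}")
end

# GET: try Redis first, fall back to the database.
def get_post(id)
  cached = REDIS["post:#{id}"]
  cached ? JSON.parse(cached) : DB[id]
end
```

The key point is that `get_post` returns the same data before and after the consumer runs; only the backing store changes underneath it.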



  • Use lazy queues to get predictable performance. If we are unlucky and somehow cannot control the number of connections or channels, memory will run out; lazy queues keep the RabbitMQ server from crashing.
  • Every time a worker starts, pay attention to the "workers" parameter in Sneakers's configuration. It should be 1 so that only one connection is kept at startup.


  • Keep connections to a minimum: pay attention to the RabbitMQ UI. Any "new" calls on Bunny or Sneakers might create unnecessary connections.
  • Make a proper configuration. Pay particular attention to workers, threads, and prefetch:
    Sneakers.configure :amqp => 'amqp://guest:guest@',
        :vhost => '/',
        :exchange_type => :direct,
        :timeout_job_after => 120,      # Maximum seconds to wait for a job
        :prefetch => 10,                # Grab 10 jobs together for better speed
        :threads => 10,                 # Threadpool size (good to match prefetch)
        :durable => true,               # Is the queue durable?
        :workers => 1,                  # workers x threads = total threads
        :ack => true                    # Only remove a message once it is ack'ed
  • Too many clients making requests to the database at the same time can trigger a FATAL error like the one below. We need to balance them with the database connection pool.
    2018-12-31T11:17:26Z p-6976 t-ox2vt0k74 ERROR: [Exception error="FATAL:  sorry, too many clients already\n" error_class=PG::ConnectionBad backtrace=
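One common mitigation, assuming a Rails app on PostgreSQL, is to size ActiveRecord's connection pool to cover every consumer thread in the process; the values below are illustrative, not the article's actual configuration:

```yaml
# config/database.yml (illustrative values)
production:
  adapter: postgresql
  pool: 10   # at least Sneakers' :threads, so each consumer thread gets a connection
```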


Use a Sneakers publisher to delay messages. We should pass a "connection" in the opts parameter in order to avoid creating a new connection. Read this article for more details about scheduling messages.

module Sneakers
  class Publisher
    def initialize(opts = {})
      @mutex = Mutex.new
      @opts = Sneakers::CONFIG.merge(opts)
    end

    def publish(msg, options = {})
      @mutex.synchronize do
        ensure_connection! unless connected?
      end
      to_queue = options.delete(:to_queue)
      options[:routing_key] ||= to_queue
      Sneakers.logger.info {"publishing <#{msg}> to [#{options[:routing_key]}]"}
      @exchange.publish(msg, options)
    end

    attr_reader :exchange

    def ensure_connection!
      # If we've already got a bunny object, use it.  This allows people to
      # specify all kinds of options we don't need to know about (e.g. for ssl).
      @bunny = @opts[:connection]
      @bunny ||= create_bunny_connection
      @bunny.start
      @channel = @bunny.create_channel
      @exchange = @channel.exchange(@opts[:exchange], @opts[:exchange_options])
    end

    def connected?
      @bunny && @bunny.connected?
    end

    def create_bunny_connection
      Bunny.new(@opts[:amqp], :vhost => @opts[:vhost], :heartbeat => @opts[:heartbeat], :logger => Sneakers::logger)
    end
  end
end
(to be continued...)