How to develop a realtime feature using Action Cable in Rails and Angular
Wed 01 Jan 2020

Imagine that we have a Rails application which has been running for more than six years. Suddenly, we get a requirement for a realtime feature. We have two choices. The first is to use a third party such as Pusher, PubNub, or SendBird. The second is to build it ourselves. Relying on a lot of third parties is not a good choice, because our application would depend on them, which affects speed and many other things. Since we are already on Rails, we end up with Action Cable. We build this on both sides: the server side with Rails and the client side with Angular.

You can find a bunch of articles on the internet about this combination, so let me tell you the important things to watch for while doing it. At the time of writing I use Rails 5.2.3 and Angular 7. Versions of the supporting tools are listed at the end of this article.

The essential tools:

- Rails 5.2.3 (Action Cable)
- Angular 7 with the angular2-actioncable package
- Redis
- Nginx with Passenger

Those are sufficient.

Server side

Action Cable on the server side is extremely simple.

class GuruChannel < ApplicationCable::Channel
  def subscribed
    stream_from stream
    transmit(stream_state) if waiting?
  end

  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end

  def receive(data)
    message = to_json(data).fetch(:message)
    ActionCable.server.broadcast(stream, message)
  end

  private

  def stream
    # Illustrative stream name; it must match the name we broadcast to
    "guru_#{job_slug}"
  end

  def job_slug
    params[:job_slug]
  end

  def to_json(data)
    JSON.parse(data.to_json, symbolize_names: true)
  end

  def stream_state
    # Look up the current state for job_slug, e.g. 'wait'
  end

  def waiting?
    stream_state == 'wait'
  end
end
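The to_json helper above simply round-trips the incoming data through JSON so that keys come back as symbols and can be read with fetch(:message). A quick standalone illustration (outside of any channel):

```ruby
require 'json'

# Round-trip a Ruby object through JSON to get a Hash with symbol keys,
# mirroring the to_json helper in the channel above.
def symbolize(data)
  JSON.parse(data.to_json, symbolize_names: true)
end

payload = symbolize('message' => 'hello', 'state' => 'wait')
puts payload.fetch(:message)  # => hello
```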

Use transmit to send a message only to the current connection.

def subscribed
  stream_from stream
  transmit(stream_state) if waiting?
end

Use broadcast to send a message to all connections. Messages are not sent directly from client to client, but from client to server; the server then broadcasts the message to all clients.

def receive(data)
  message = to_json(data).fetch(:message)
  ActionCable.server.broadcast(stream, message)
end

The server calls receive whenever a client sends data. The line below sends the message from the server to all subscribed clients.

ActionCable.server.broadcast(stream, message)

Note that if we run multiple servers, they must all be configured with the same Redis instance behind the /cable socket, so that every server sees the same connections.

# config/cable.yml
development:
  adapter: redis
  url: redis://localhost:6379

test:
  adapter: async

production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>

For example, in the server list below the second server runs background jobs. If we use Action Cable from a background job, that server has to point to the Redis the first server uses - the same Redis, so that the two servers see the same set of connections.

server '', user: 'truong', roles: %w{app db web}, primary: true
server 'yy.yyy.yyy.yyy', user: 'truong', roles: %w{app worker whenever}
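As a sketch of broadcasting from a background job on the worker server (the job class and stream name are illustrative, not from the original app):

```ruby
# app/jobs/guru_notify_job.rb -- illustrative name
class GuruNotifyJob < ApplicationJob
  queue_as :default

  def perform(job_slug, message)
    # This runs on the worker server; the broadcast reaches browser clients
    # only because both servers talk to the same Redis.
    ActionCable.server.broadcast("guru_#{job_slug}", message)
  end
end
```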

Client Side

The most important part is a service that wraps Action Cable. We build it as a service to keep it isolated, which makes managing dependencies easy.

import { Injectable } from '@angular/core';
import { ActionCableService, Channel } from 'angular2-actioncable';
import { environment as env } from '../../environments/environment';
import { Subscription } from 'rxjs';

@Injectable({
    providedIn: 'root'
})
export class GuruStreamService {
    subscription: Subscription;
    channel: Channel;
    jobSlug = null;

    constructor(private cableService: ActionCableService) {}

    subscribe(jobSlug: string, callback): void {
        if (this.subscribed || jobSlug == null) { return; }
        this.jobSlug = jobSlug;
        this.createChannel();
        this.createSubscription(callback);
    }

    disConnect(): void {
        if (this.subscribed) {
            this.subscription.unsubscribe();
            this.subscription = null;
        }
    }

    get subscribed(): boolean {
        return this.subscription != null;
    }

    private createChannel(): void {
        // env.cableUrl is assumed to hold the cable endpoint,
        // e.g. ws://localhost:3000/cable = this.cableService
            .cable(env.cableUrl)
            .channel('GuruChannel', { job_slug: this.jobSlug });
    }

    private createSubscription(callback): void {
        this.subscription =
            .received()
            .subscribe(message => callback(message));
    }
}

Deployment with Nginx and Passenger

Add a location block inside the server block in nginx.conf. /cable is the route we defined in the Rails routes for receiving connections.

location /cable {
    passenger_app_group_name actioncable_websocket;
    passenger_force_max_concurrent_requests_per_process 0;
}
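For reference, /cable is where Action Cable is mounted in the Rails routes (the standard mount point):

```ruby
# config/routes.rb
Rails.application.routes.draw do
  # Websocket endpoint that the Nginx location above proxies to
  mount ActionCable.server => '/cable'
end
```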

Refer to the Passenger documentation for the two directives above.

Take a look at the Action Cable guides to see how to calculate the worker pool size.
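The worker pool size is set per environment; a sketch, assuming four workers:

```ruby
# config/environments/production.rb
# Each connection's callbacks run on this pool; make sure the database
# connection pool is at least this large.
config.action_cable.worker_pool_size = 4
```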

See why we have this configuration.

Errors you might see

This usually comes from an incorrect configuration of Passenger's maximum number of concurrent requests per process and the worker pool size. Check all of the configuration above and make sure the numbers are calculated correctly.

Source code: