Frontend development boundary in a Rails app

During my recent trip to our Vancouver office, our frontend developers suggested that if we had more Rails developers building backend APIs, frontend development could go a lot faster.

I conducted a team learning session on this topic. I talked about the frontend development boundary and how far we can push frontend work without any help from the server side. The final takeaway is that with some preparation and simple techniques, it is possible to build an entire React frontend app without any server-side APIs. Once the server APIs are in place, we would only need to touch a couple of places at most to “turn on” the frontend React app and make it production ready.

Let’s talk about the boundary between frontend and backend first. React with Flux gives us an easy-to-understand uni-directional data flow. We know our React app should be fully functional as long as we have “data” in our store. All this means is that our store should be the boundary between our React app and the server APIs. As long as our store is the only place that cares about retrieving and persisting data, once the store emits changes, other parts of our React app should just work! React with Flux makes defining the boundary super easy. I still remember that in the old days of hand-rolled JS frontend apps, this defining-boundary business took a lot more consideration and discipline. Not so much these days!

Now that we know the boundary, what techniques can we use to make our React app behave as if it were connected to the backend APIs? The answer is easy: FAKE IT.

Take data retrieval as an example. In a React store, we typically retrieve data like this:

class DummyStore extends EventEmitter {
  // many lines left out
  loadServerData() {
    const url = Envisio.Js.Routes.dummy_data_path();
    Ajax.get(url, (data) => {
      this.setData(data); // setData calls this.emitChange();
    });
  }
  // many lines left out
}

Dispatcher.register((payload) => {
  switch (payload.type) {
    case Constants.DUMMY.LOAD_DATA:
      DummyStore.loadServerData();
      break;

    // many lines left out
  }
});

The only interesting part is the Ajax.get call with its callback. We have a couple of things to fake here. First, we need to be able to get some fake data, which we can pass to this.setData(). Second, we need to make sure we fake the async nature of the Ajax.get call. We are looking at something like the below:

  loadServerData() {
    // const url = Envisio.Js.Routes.dummy_data_path();
    // Ajax.get(url, (data) => {
    //   this.setData(data); // setData calls this.emitChange();
    // });
    setTimeout(()=> {
      const fakeData = FixtureDataFactory.getDummyData();

      this.setData(fakeData);
    }, 1);
  }

The Ajax.get call is replaced by setTimeout, which is async. In order to pass fake data to this.setData, we created a FixtureDataFactory class with a getDummyData method. The getDummyData method is super simple. It just returns some fixture JSON data.
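
For reference, FixtureDataFactory can be as dumb as this (a minimal sketch; the fields in the returned JSON are made up):

// fixture_data_factory.js -- minimal sketch; the returned fields are illustrative only
class FixtureDataFactory {
  static getDummyData() {
    // hard-coded fixture JSON, mimicking whatever the real API would return
    return [
      { id: 1, name: "Dummy item one" },
      { id: 2, name: "Dummy item two" }
    ];
  }
}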

The above techniques are very simple, but very effective, and they have some awesome side effects.

  • The use of setTimeout to simulate async behaviour means that once we switch to the real Ajax.get, none of the React components’ lifecycle calls will be upset. It’ll be an async-to-async switch. Nothing should care, nothing should change.
  • The introduction of FixtureDataFactory means two things. First, FixtureDataFactory will be used to provide fixture data for our tests. Second, the JSON data returned by getDummyData is an easy-to-read contract between the frontend app and the backend APIs. Backend API developers should be able to take the fixture JSON data and write the API accordingly. Easy job!

Published: 2016-05-24

React with Rails in a real app

WARNING: this post is LONG.

This post will be a recap of my first-hand experience with adopting ReactJS in an existing Rails application, which is in production and used by real users.

We have all seen enough ‘template’, ‘boilerplate’ or ‘starter’ apps showing us how to move our Rails front end development to React. They all work great, if you start from scratch or if your Rails app is still small (meaning you can afford to rewrite a lot of existing front end code).

Unfortunately, that’s not my case. My Rails application’s first commit was made on the 9th of March 2012. The app had been developed as a typical Rails app until Aug 2015 (when I introduced React to the dev stack). That’s about 3.5 years of your typical “Rails way” development. This means a lot of HAML, SCSS and CoffeeScript files. I simply cannot afford to rewrite everything from scratch.

I obviously started with react-rails and react-router-rails. The issue with these my-asset-rails gems is always upgradability. At the time of writing this post, HEAD of react-router-rails is still pinned to react-router 0.13.3, while react-router on npm is already at 2.0.1. Since I am not using server-side rendering, keeping my-asset-rails gems around just to use their UJS hooks doesn’t seem like a smart idea.

As someone who’s big on being pragmatic, I removed react-rails and react-router-rails and simply registered the following JS object in the head of the Rails layout file

// html head inside application.html.haml
var LazyReactComponent = {
  react_component_name: null,
  react_component_props: {},
  dom_id: null,
  type: "component",
  lazy_mount_react_component: function() {
    ReactDOM.render(
      React.createElement(eval.call(window, this.react_component_name), this.react_component_props),
      document.getElementById(this.dom_id)
    );
  },

  lazy_mount_react_router: function() {
    var routerNode = document.getElementById(this.dom_id);
    var routes = eval.call(window, this.react_component_name);

    ReactDOM.render(React.createElement(ReactRouter, {history: ReactRouterHistory}, routes), routerNode);
  }
}

In my application, I decided to render at most one main React component for any given Rails route. There’s no magic in the above code. It simply plays within the rules I set for my own app. It lays out an object with a few values to be filled in by the Rails view file (discussed later) and a couple of mount functions (one for plain React components, the other for react-router wrapped components). The reason the above JS code needs to be included in the layout head is that my app’s main application.js is loaded asynchronously at the end of the HTML body using the following JS code.

// before closing body inside application.html.haml
function downloadJSAtOnload() {
  var element = document.createElement("script");
  element.src = "#{javascript_path('application')}";
  document.body.appendChild(element);
}

if (window.addEventListener) {
  window.addEventListener("load", downloadJSAtOnload, false);
} else if (window.attachEvent) {
  window.attachEvent("onload", downloadJSAtOnload);
} else {
  window.onload = downloadJSAtOnload;
}

The application.js manifest file is the beast. It requires all those good old CoffeeScript files, as well as two special pieces (a sketch of the manifest tail follows the list).

  • One is the webpack-transpiled JS bundle file, dist_react_components.js, for all of my React components.
  • The other is a special mount_react_component.js file.
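
Something along these lines (a sketch of the manifest tail; the exact require order is my assumption):

// application.js (tail of the manifest, sketch only)
//= require dist_react_components
//= require mount_react_component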

The mount_react_component.js file is super simple. When it comes to life (loaded onto the DOM asynchronously), it invokes one of LazyReactComponent’s mount functions, depending on the LazyReactComponent.type value set by the Rails view file (which I’ll go into later).

// mount_react_component.js
if(LazyReactComponent.react_component_name !== null && LazyReactComponent.dom_id !== null) {
  if (LazyReactComponent.type === "router") {
    LazyReactComponent.lazy_mount_react_router();
  } else {
    LazyReactComponent.lazy_mount_react_component();
  }
}

The dist_react_components.js file exposes all mountable React-backed UI components in a JS object literal. It’s something like the below. Note that there’s no export, since this file is used outside the module system; it’s included by Rails’ application.js.

// dist_react_components.js
import { Router, browserHistory } from "react-router";
import React from "react";
import ReactDOM from "react-dom";
import ActivityReport from "./activity_report/components/app";

Envisio.React = {
  // The following objects are required by the LazyReactComponent. I expose them here because I'm only loading in React and ReactRouter using NPM.
  React: React,
  ReactRouter: Router,
  ReactDOM: ReactDOM,
  ReactRouterHistory: browserHistory,

  // All React backed UI components are here. I use ActivityReport as an example.
  ActivityReport: ActivityReport
};

After writing the ActivityReport React component, I only have one thing left to do. Say I have a Rails route like the below

# routes.rb
# contrived Rails route example
get '/activity_report', to: 'activity_reports#index'

We all know how to do the normal Rails controller, action and view stuff. The only interesting thing here is the view file, activity_reports/index.html.erb

<!-- activity_reports/index.html.erb -->
<% title 'Activity Report' %>

<div id="activity-report-react"></div>

<%= javascript_tag do %>
  LazyReactComponent.react_component_name = "Envisio.React.ActivityReport";
  LazyReactComponent.dom_id = "activity-report-react";
  LazyReactComponent.react_component_props = {initialPropsThatYouWantToPassToClientSide: {}};
<% end %>

A JS snippet is written directly in the erb view file. It simply sets up the required values on LazyReactComponent, making LazyReactComponent’s mount methods ready to be invoked by the mount_react_component.js discussed above.

What’s described above has been a journey for me. Many trials and errors. My goal was to gradually introduce React to an existing Rails team working on a Rails app without compromising the team’s productivity. After about 6 months of pushing forward, I can proudly say I achieved my initial goal. We still build assets using the Rails asset pipeline. During development, the only extra step for developers is to remember to run npm run watch-js in their local terminal consoles. The npm watch-js script is plain simple: webpack --progress --colors --watch. It hands off to webpack to transpile ES7/ES6/JSX to ES5 JavaScript code, which is then used by the Rails asset pipeline for rake assets:precompile.
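
For reference, the corresponding scripts entry in package.json looks something like this (a sketch; everything else in package.json is omitted):

// package.json, scripts section only (sketch)
{
  "scripts": {
    "watch-js": "webpack --progress --colors --watch"
  }
}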

In future posts, I’ll discuss more about my learnings around react-router and flux.

Published: 2016-03-29

Avatar Cropping with Carrierwave and MiniMagick

I need to implement an image cropping feature on user uploaded avatars. The evergreen RailsCasts episode shows us the way.

All sweet stuff, until MiniMagick tells me its mogrify shell command cannot find the image resource. The backtrace points to the image.crop(x, y, w, h) line. After some messing around and examining the MiniMagick source, here’s the fix.

manipulate! do |image|
  x = model.crop_x.to_i
  y = model.crop_y.to_i
  w = model.crop_w.to_i
  h = model.crop_h.to_i
  image.crop("#{w}x#{h}+#{x}+#{y}")
  image
end

Problem solved, by changing the image.crop method arguments from x, y, w, h to an interpolated string that complies with ImageMagick’s geometry format spec.
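
For context, this is roughly where that block sits in the uploader (a sketch only; the version and method names here are mine, not from the RailsCasts episode):

# avatar_uploader.rb -- sketch; version/method names are illustrative
class AvatarUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  version :avatar do
    process :crop_avatar
  end

  def crop_avatar
    return unless model.crop_x.present?

    manipulate! do |image|
      # ... the crop block from above goes here ...
    end
  end
end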

EDIT: while writing this post, I found the exact solution had already been mentioned in the RailsCasts episode’s comments section. Damn, I want my time back :`(

Published: 2015-03-12

Regex Key Hash

While I was implementing the home dashboard widgets with Gridster, I needed to ajax-load the contents of each widget based on the widget key. The Gridster widget keys are string based; however, some keys are dynamically generated.

Let’s look at an example.

The simple case first. A widget key can be my_calendar. I need Ruby to translate it to a symbol such as :my_calendar. I’ll then be able to use the translated value :my_calendar to fetch the widget content. Achieving this is dead simple … A contrived way to do it can be something like this

class WidgetConfigTranslator

  MAP = {'my_calendar' => :my_calendar}

  # lines omitted
end

WidgetConfigTranslator::MAP['my_calendar'] #=> :my_calendar

Now let’s go to the deep end. Many widgets have dynamically generated keys like my_type_123 and my_type_456. I need to extract out :my_type, as well as the integers (sort of like type IDs) such as 123 and 456.

So … let’s see if we can implement a Hash-alike data structure that allows us to fetch values by passing in strings that can be recognised by a Regex.

class RegexKeyHash < Hash

  def [](search_term)
    search_term = search_term.to_s
    self.each do |key, value|
      if match_data = key.match(search_term)
        return [value, Hash[match_data.names.zip(match_data.captures)].symbolize_keys]
      end
    end
    nil
  end

end

class WidgetConfigTranslatorRevised

  DYNAMIC_WIDGET_KEY_SPLIT_MAP = RegexKeyHash[
    'my_calendar'               => :my_calendar,
    /^my_type_(?<type_id>\d+)$/ => :my_type
  ]

  # lines omitted
end

WidgetConfigTranslatorRevised::DYNAMIC_WIDGET_KEY_SPLIT_MAP['my_calendar'] #=> [:my_calendar, {}]
WidgetConfigTranslatorRevised::DYNAMIC_WIDGET_KEY_SPLIT_MAP['my_type_123'] #=> [:my_type, {type_id: "123"}]
WidgetConfigTranslatorRevised::DYNAMIC_WIDGET_KEY_SPLIT_MAP['my_type_456'] #=> [:my_type, {type_id: "456"}]

With the above in place, I can use the DYNAMIC_WIDGET_KEY_SPLIT_MAP constant as if it were just a regular Hash object. The translated return values can be used to call up named URLs defined in my routes.rb, with the ability to pass in those additional params.
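
For example, the fetched pair can feed straight into a named route helper (the route helpers below are made up for illustration):

# contrived usage; my_calendar_widget_path and my_type_widget_path are made-up route helpers
widget_name, route_params = WidgetConfigTranslatorRevised::DYNAMIC_WIDGET_KEY_SPLIT_MAP['my_type_123']

case widget_name
when :my_calendar
  my_calendar_widget_path
when :my_type
  my_type_widget_path(route_params) # e.g. "/widgets/my_type?type_id=123"
end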

I don’t know if anybody would have a similar use case to use this implementation. However it solved my problem quite nicely.

Published: 2014-11-28

Gridster

We recently had a requirement to implement a new Dashboard inside Envisio. The new dashboard needs to be customisable, meaning the end users can add, remove, resize and drag-n-drop widgets on the dashboard. The configuration of each user’s dashboard also needs to be remembered.

I went through a few options and settled on Gridster. It fits our requirements very well after some tweaking. The only complaint is that Gridster isn’t responsive, which is less of a concern for our app.

To set it up, first load it via the Bowerfile, which is enabled by the bower-rails gem.

# Bowerfile
asset 'gridster'

Load the JS and CSS into the corresponding manifest files.

# application.js
//= require gridster/dist/jquery.gridster

# application.css
*= require gridster/dist/jquery.gridster

How to set up the dashboard and widget HTML and how to initialise Gridster is very subjective. To highlight how I got serialisation and resizing working, I’ll dissect the JS and explain a little.

In order to remember user configuration, we needed to pay attention to the serialize_params, resize.stop and draggable.stop options.

home_gridster = $(".gridster ul.grid").gridster(
  # lines omitted
  serialize_params: ($w, wgd) ->
    col:    wgd.col,
    row:    wgd.row,
    size_x: wgd.size_x,
    size_y: wgd.size_y,
    key:    $($w).data('key') #this key is important, it'll be saved and used on server side
  resize:
    enabled: true
    start:
      # lines omitted
    stop: (event, ui, $widget) ->
      # lines omitted
      save_grid_configuration(home_gridster.serialize())
  draggable:
    stop: (event, ui) ->
      save_grid_configuration(home_gridster.serialize())
).data('gridster')

An issue I ran into was that when resizing, the scrollable content loaded inside each grid does not resize. When I want to reduce the widget size, the content height isn’t adjusted and it sticks out. To overcome this, I pulled some dodgy hacks, tapping into the resize.start callback to put a tint over the widget being resized to hide the ugliness.

home_gridster = $(".gridster ul.grid").gridster(
  # lines omitted
  resize:
    enabled: true
    start: (event, ui, $widget) ->
      $widget.children('.panelbox-heading').hide()
      $widget.children('.panelbox-body').hide()
      $widget.addClass('resizing')
    stop: (event, ui, $widget) ->
      set_grid_widget_height($widget)
      $widget.children('.panelbox-heading').show()
      $widget.children('.panelbox-body').show()
      $widget.removeClass('resizing')

      save_grid_configuration(home_gridster.serialize())
  # lines omitted
).data('gridster')

Your mileage may vary, but the above are a few obstacles I encountered (amongst many others …). There’s also this nice comparison article talking about various alternative plugins to Gridster.

Published: 2014-11-26

Pow Alternative on Ubuntu

I had a need to prepare an Ubuntu VM image which can house the Envisio app for development. Setting up a Rails dev environment on Ubuntu is a no-brainer. Finding an alternative to Pow.cx on Ubuntu is a different story.

After setting up a working Rails dev environment, which allows me to run rails s, I had to do a few things.

The first thing is to hack the hosts file so that I can use *.envisio.dev instead of localhost. I quickly installed the ghost gem and registered a few host entries

rvmsudo ghost add envisio.dev
rvmsudo ghost add admin.envisio.dev
rvmsudo ghost add client_one.envisio.dev

After this I can point my browser to client_one.envisio.dev:3000. Next up, we need to get rid of the port 3000 part.

sudo ufw enable
sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -d 127.0.0.0/8 -j REDIRECT --to-port 3000
sudo apt-get update
sudo apt-get install iptables-persistent

That’s it. client_one.envisio.dev now works.

Published: 2014-08-20

BarcodeScanner with Ionic

I’m messing around with the new Ionic Framework. Ionic builds on top of Angular and builds with Cordova. Integrating a QR code scanner cannot be any simpler with the Cordova BarcodeScanner plugin in place.

But things are never as simple as they look … I ran into funny issues on Android. Basically, the problem is that if a user cancels the barcode scanning process on Android using the back button, the application will simply quit itself, unless I do an alert() or something similar. It seems the Android back button press is registered twice: the first time in the QR scanner, and the second time in the main app. I had to write the following hacks to make sure it doesn’t quit my app.

First, qr_scan_service.js. A very straightforward service class wrapping the Cordova plugin.

var app = angular.module('app.services');

app.factory('QRScanService', [function () {

  return {
    scan: function(success, fail) {
      cordova.plugins.barcodeScanner.scan(
        function (result) { success(result); },
        function (error) { fail(error); }
      );
    }
  };

}]);

Next, putting some voodoo in the calling controller code. See the comments.

var app = angular.module('app.controllers');

app.controller('SomeCtrl',
               ['QRScanService', '$ionicPopup', '$ionicModal',
                function(QRScanService, $ionicPopup, $ionicModal) {

  this.scanIt = function() {
    QRScanService.scan(function(result) {
      if (result.cancelled) {
        // this is a super hack. When QR scan gets cancelled by
        // clicking the back button on android, the app quits...
        // doing a blank modal to catch the back button press event
        $ionicModal.fromTemplate('').show().then(function() {
          $ionicPopup.alert({
            title: 'QR Scan Cancelled',
            template: 'You cancelled it!'
          });
        });
      } else {
        $ionicPopup.alert({
          template: 'Result: ' + result.text
        });
      }
    }, function(error) {
      $ionicPopup.alert({
        title: 'Unable to scan the QR code',
        template: 'Too bad, something went wrong.'
      });
    });
  };

}]);

Published: 2014-05-29

Feature Release Management

Git’s cheap branching and merging ability, coupled with Git Flow, makes the software feature development process a lot easier. However, product development is only one piece of the puzzle when it comes to a feature release. It’s almost certain that when we release a feature, there will be involvement from the support/sales/marketing teams. Feature code ready simply doesn’t mean release ready. Some may say, why not just keep the non-released feature code in its Git feature branch… I personally don’t like keeping feature branches long-lived and prefer quick merge-backs. This leads to the question: how can we decouple product code releases from product feature releases?

Knowing that the feature toggle requirement is all on the UI side for now, I quickly rolled up my sleeves and hacked together a solution.

First, create a feature model and back it with a DB table (yes, it’s the Rails way …). The only significant method is the class method feature_map, which returns a hash of active features.

class EnvisioFeature < ActiveRecord::Base

  attr_accessible :name,
                  :active,
                  :active_status_changed_at

  validates :name, presence: true, uniqueness: true

  def self.feature_map
    where(active: true).pluck(:name, :active).to_h.symbolize_keys
  end

end

Next up, create a plain Ruby class with a class method get_map, which does a cache fetch on a fixed cache key.

class EnvisioFeatureMap

  def self.get_map
    Rails.cache.fetch('envisio-features') do
      EnvisioFeature.feature_map
    end
  end

end

Then, I created a couple of helper methods as such

module Corporate
  module EnvisioHelper

    def envisio_show(feature_name, &block)
      if EnvisioFeatureMap.get_map.has_key?(feature_name.to_sym)
        capture(&block)
      end
    end

    def envisio_alternative(feature_name, &block)
      unless EnvisioFeatureMap.get_map.has_key?(feature_name.to_sym)
        capture(&block)
      end
    end

  end
end

Last, in my (HAML) view files, I can just wrap the feature-related view code inside the envisio_show or envisio_alternative helper methods like this

= envisio_show :feature_one do
  Awesome feature one

  = envisio_show :feature_one_dot_one do
    Some more enhancement for feature one

= envisio_alternative :feature_one do
  Feature one coming soon

That’s it. The view files are still declarative, without if/else checks.

Of course, there’s an admin-only control panel, where administrators can sign in and toggle features on/off without involvement from the development team.
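
One detail worth noting: since EnvisioFeatureMap.get_map caches the map, a toggle from the control panel should also expire that cache so it takes effect right away. A minimal sketch, assuming a model callback is good enough:

class EnvisioFeature < ActiveRecord::Base
  # lines omitted

  after_save :expire_feature_map_cache

  private

  def expire_feature_map_cache
    # drop the cached map so the next get_map call rebuilds it from the DB
    Rails.cache.delete('envisio-features')
  end
end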

Published: 2014-05-27

Migrate from Resque to Sidekiq

I can’t believe it’s been 6 months since my last programming-related blog post. I’ve been busy taking photos lately instead (see my photo blog here).

Back to some Ruby. Today I finally made the decision to spend some time in the current sprint migrating our background processing gem from Resque to Sidekiq. Not that Resque is doing anything particularly wrong; it’s just that the Resque setup code was done by me about 3 years ago (not long after I started working full time with Ruby), and the setup has been copied from project to project, but never improved. After reading so many good things about Sidekiq, I thought I’d give it a try.

First things first, the Gemfile.

# gem 'resque',                           require: 'resque/server'
# gem 'resque-scheduler',                 '~> 2.0.0', require: 'resque_scheduler'
# gem 'resque_mailer',                    '~> 2.2.1'

gem 'sidekiq',                            '~> 3.0.0'
gem 'sinatra',                            require: false
gem 'devise-async'

and bundle install it.

Next, a global search on Resque.enqueue and replace with Sidekiq::Client.enqueue.
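
For instance (HardWorker is a made-up worker name):

# before (Resque)
Resque.enqueue(HardWorker, user.id)

# after (Sidekiq)
Sidekiq::Client.enqueue(HardWorker, user.id)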

Then go to the old Resque worker classes, find @queue = :abc and replace it with include Sidekiq::Worker. Of course, as the awesome Sidekiq doc points out, turn def self.perform into def perform.
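
Putting those two changes together, the same contrived worker class goes from this to this:

# before (Resque)
class HardWorker
  @queue = :abc

  def self.perform(user_id)
    # do the work
  end
end

# after (Sidekiq)
class HardWorker
  include Sidekiq::Worker
  sidekiq_options queue: :abc # optional; Sidekiq defaults to the :default queue

  def perform(user_id)
    # do the work
  end
end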

For mailer classes, in my old Resque setup I use the resque_mailer gem and have the following async_mailer.rb, which gives me a good superclass for any async mailer classes to extend from.

class AsyncMailer < ActionMailer::Base
  include Resque::Mailer

  private

  def mail_header(subject)
    {to:       @user.email,
     from:     Settings.mail.from,
     reply_to: Settings.mail.reply_to,
     subject:  subject}
  end

end

I simply took out the include Resque::Mailer line from AsyncMailer. Then another global search on .deliver to find all application code that sends out emails. For all the matches, simply change the email delivery call to use Sidekiq’s Delayed extensions syntax for ActionMailer.
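
In other words, a delivery call changes roughly like so (UserMailer.welcome is a contrived example):

# before (resque_mailer)
UserMailer.welcome(user).deliver

# after (Sidekiq's Delayed extension for ActionMailer)
UserMailer.delay.welcome(user.id)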

More hacks required to make the Devise mailer happy.

In the devise.rb initializer, add config.mailer = 'Devise::Async::Proxy'. Yes, I’m using a very old version of Devise …

Create a new devise_async.rb under initializers, and add the single line Devise::Async.backend = :sidekiq

Last, in routes.rb, replace the old Resque web admin interface with the Sidekiq equivalent, mount Sidekiq::Web, at: '/sidekiq'

That’s pretty much it. Of course, I had to do more work to make Heroku, Unicorn and Sidekiq all happy and live in harmony. That’ll be another topic for another post.

Published: 2014-04-25

AngularJS and Rails

A bit of an update first. I moved to Vancouver 1.5 months ago. Despite all the good things (e.g. work, people), my banking experience has been super bad! I used to think the banks in Australia were bad; now I clearly see what’s worse! I opened my bank account with BMO on the first day I landed here, and there are still unresolved issues. It made me so upset that I set up a random script on Heroku to tweet how bad @BMO is every day … So if you follow me on twitter and see my rants, that’s why!

Anyway, some interesting stuff now! Call me a late adopter, but after all the buzz about AngularJS, I finally gave it a shot. This great blog post certainly helped me get started. But I had to figure some stuff out myself in order to get the AngularJS app to talk to the Rails API app. It’s Friday afternoon and I’m not really in the mood to write too much here … My play app source code can be found here on github, and I’ll keep playing and pushing changes there.

Published: 2013-10-18