Create a web site in 53 seconds

N.B. Make sure to read the follow-up to this article that explains how to support Office 365 authentication.

Do you want to create a responsive web site built on Bootstrap with support for Facebook, Twitter, Linkedin and Google+ login in less than a minute? Read on!

I have been a fan of Ruby on Rails since 2007. For those who are not familiar with the framework, Ruby on Rails, or just Rails, is a platform that makes it possible to build web sites in no time at all. But what is even more important is that the focus on convention over configuration, one of the pillars of Rails, makes it simple to extend upon a site later on, even by someone not involved in the original development. Other platforms I have had experience with do not necessarily share the level of adherence to conventions that allow this with the same ease. Or at all.

Most of the web sites I build these days are based on Rails, whether they are small concept sites or big complex ones. I create private sites as well as solutions for my work, and most of the time I find myself adding the same set of features to every site: they are responsive and based on Bootstrap, they allow users to log on with OAuth using Devise, and they include role-based permissions built on CanCan. I got tired of redoing the same steps for every single site and created a template. A demo version of the template's output is available online – please note that due to inactivity the site may need a few seconds to spin up. The purpose of this article is to share this template and show how it is used.

To follow the instructions below you need to register at various sites:

  • A git repository. I recommend Bitbucket since creating private repositories is free for personal use. GitHub is a popular alternative, but without a paid account the code will be public. That may or may not be acceptable to you depending on your project.
  • The OAuth providers you want to use (Facebook, Twitter, Linkedin, Google+). My guess is that most people already have accounts with these. Note that for Google+ the email address of the account will be visible to users logging in, so you may want to consider creating an account specifically for your app.
  • Somewhere to host the site. I use Heroku and the example below is based on that. Heroku provides a free tier for small web sites that can then scale with the load. It is certainly possible to host this elsewhere, including on a server of your own. That may be a topic for another article some other time.


Create Heroku site, DNS name and code repository

First, log onto Heroku and create a new app. You have a choice of hosting the site in the US or in Europe. You can also define a name for the site if you don’t want Heroku to create a name for you. I normally define a name as that makes it easier to tell them apart later on and I also add a suffix to the hostname to identify the location where the site is hosted.

For this walkthrough I will use “myweb” as the name of the app. A possible name in Heroku would then be myweb-eu.herokuapp.com, where the herokuapp.com part is added by Heroku and is common to all sites hosted there.

If you want to use a custom DNS name now is the time to go into the configuration settings for your domain(s) and create an alias hostname that points to the name provided by Heroku. Please note that the Heroku hostname may not resolve to a static IP address so you want to create an alias for the Heroku hostname, not an A record for the IP address currently used by Heroku for the site.

Again, continuing with the example, a record in the DNS zone could be:

myweb IN CNAME myweb-eu.herokuapp.com.

If you add a custom DNS name you also need to go back into the Heroku settings and add that alias to the list of names for the site. Otherwise, Heroku will not be able to associate incoming requests with the correct site. In Heroku there would thus be both the Heroku-provided name (myweb-eu.herokuapp.com) and the custom alias (e.g. myweb.example.com).

Finally, you should create a repository for the code. For Bitbucket this is quickly done by clicking on “Create” and giving the repository a name (e.g. myweb). The rest of the information there is optional and pretty self-explanatory.

Create apps within the various OAuth providers

The next step is to select which OAuth providers you want to use. The template currently supports Facebook, Twitter, Linkedin and Google+, and they are selectively enabled by setting the corresponding variable at the top of the script to true for the services you want to use. Within the template there are also links to the developer portals of each of the four supported OAuth providers. In addition to these it supports local users, and it is easy to configure the script to use any combination of these. You may want to include all of them or perhaps just one; the choice is yours. One thing to keep in mind is that the template does not currently support associating accounts with one another: if an individual user logs on using different OAuth providers, those will become different local accounts. A special caveat is that if those accounts use the same email address it most likely will not work. For this reason you may not want to use all of the providers, but instead pick the ones that suit your app. Twitter does not provide the email address of the user and can safely be combined with any of the others.
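The per-provider switches at the top of the template can be sketched like this (the variable names here are illustrative, not necessarily those used in the actual template file):

```ruby
# Illustrative sketch of the per-provider toggles at the top of the template.
# Set a provider to true to have the generated site support it.
PROVIDERS = {
  facebook: true,
  twitter:  true,
  linkedin: false,
  gplus:    false,
  local:    true   # local username/password accounts via Devise
}

# Only the enabled providers end up in the generated configuration.
enabled = PROVIDERS.select { |_, on| on }.keys
puts "Enabled providers: #{enabled.join(', ')}"
```

Because Twitter never exposes the user's email address, enabling it alongside any other provider avoids the duplicate-email caveat described above.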

For all of the providers you need to provide the URL of the site. This would be your custom hostname or, if you do not use one, the name provided by Heroku (e.g. myweb-eu.herokuapp.com). In many cases you can also provide an application icon that is shown to the end user when logging on. Some specific things to keep in mind for each provider are:

  • Facebook: Copy the “app ID” & “app secret” values to the template. Do not forget to publish the app; if you do not, only you (and other developers you designate) will be able to log on. It will work fine for you, but when you ask others to try it out they will not be able to log on. In order to publish the app you need to provide a contact email address.
  • Twitter: Copy the “API key” & “API secret” values to the template. Twitter also requires the callback URL to be specified explicitly.
  • Linkedin: Copy the “API key” & “secret key” values to the template. Enable the r_emailaddress and r_basicprofile scopes.
  • Google+: Copy the “client ID” & “client secret” values to the template. Turn on the two APIs “Contact API” and “Google+ API”.

Note: The template differentiates between development and production mode. The keys and secrets for the production site must be unique to the individual application. However, if you plan to use this template for multiple apps you can share the development keys amongst them – that is how I do it. The development keys and secrets need to be created separately; the settings would be very similar to the production site but with the hostname localhost:3000 instead of the production hostname.
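The development/production split can be sketched as a small helper that resolves credentials per environment. The ENV variable names below are my own assumptions for illustration, not keys the template necessarily reads:

```ruby
# Sketch: resolve OAuth credentials per environment. Production keys come from
# environment variables (unique per app); development keys are shared, hard-coded
# values registered against http://localhost:3000. All names here are illustrative.
def oauth_credentials(provider, env)
  if env == "production"
    { key:    ENV.fetch("#{provider.upcase}_KEY", nil),
      secret: ENV.fetch("#{provider.upcase}_SECRET", nil) }
  else
    # Shared development keys, reusable across multiple apps.
    { key: "dev-#{provider}-key", secret: "dev-#{provider}-secret" }
  end
end

puts oauth_credentials("twitter", "development")
```

Keeping production secrets in environment variables rather than in the template file also means they never end up in the git repository.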

Create the actual site

OK, so I lied slightly. This has taken way more than 53 seconds. That time was for the actual creation of the site from the template, which is what we are getting to now.

Download the latest template (currently fourteen-twelve.rb) from the project's GitHub repository. In theory you could use the template directly from GitHub as described in the readme file, but since you should adapt it to your needs I recommend you make a local copy, either by cloning the entire project or by downloading just the template.

Then edit the local template file with your favourite editor. If you use a Mac for your development I would personally recommend Textmate.

Some things you want to edit in the file are:

  • The choice of authentication providers
  • The keys and secrets to the OAuth providers used
  • The choice of Bootswatch theme
  • The data model
  • In general, search for the text TODO in the template file and follow the instructions.
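A quick way to make sure no TODO is missed before generating the site is a small helper that lists every remaining marker in your local copy of the template (a hypothetical convenience script, not part of the template itself):

```ruby
# List every line containing "TODO" in a local copy of the template,
# with file and line number, so none are missed before generating the site.
def todos(path)
  File.readlines(path).each_with_index
      .select { |line, _| line.include?("TODO") }
      .map    { |line, i| "#{path}:#{i + 1}: #{line.strip}" }
end

puts todos("fourteen-twelve.rb") if File.exist?("fourteen-twelve.rb")
```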

When you have finished updating the template it is time to actually create the site. Just run the command:

rails new myweb -m fourteen-twelve.rb

Depending on your computer and your Internet connection this will take around a minute (or more). If everything worked you should be able to do:

cd myweb
rails s

After that you should be able to open your browser and access http://localhost:3000.

If you intend to deploy the code to Heroku you should add the gem rails_12factor. You do this by adding the following line to the file Gemfile in your application root and then running “bundle install”:

gem 'rails_12factor'
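If you prefer, the gem can be scoped to production only, so it is not loaded during local development (a common Gemfile convention, not a requirement of the template):

```ruby
# In Gemfile: load rails_12factor only in production, i.e. on Heroku.
group :production do
  gem 'rails_12factor'
end
```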


Now it is time to push the code to the repository as well as to Heroku. First Bitbucket (other repositories would be similar):

git remote add origin https://bitbucket.org/[username]/myweb.git
git push -u origin --all
git push -u origin --tags

Here [username] is your Bitbucket username. The exact commands can be found on the repository web page.

Then push the code to Heroku to deploy:

heroku git:remote -a myweb-eu
git push heroku master

Finally, with the code on Heroku you need to migrate your database. To do so, run the following command from the root folder of your application:

heroku run rake db:migrate

Good luck! If you run into problems or need help, please send a tweet to @spotwise.

Yosemite desktop clock


The desktop is a very underused resource on many computers. It often just sits there as a backdrop to files and folders. But using the tool Geektool on OS X the desktop can be put to very good use.

Personally I have a big clock in the lower left corner of my desktop. I have done this by just dragging two shell script boxes from the Geektool application onto the desktop. In these two I run the following two shell commands respectively:

date '+%A, %b. %d'
date '+%H:%M'
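The same format strings work in Ruby's Time#strftime, which makes it easy to preview what the two boxes will display for any given moment:

```ruby
# Preview the two Geektool format strings for a fixed timestamp.
t = Time.new(2014, 10, 25, 9, 5, 0)
puts t.strftime('%A, %b. %d')  # full weekday, abbreviated month, day
puts t.strftime('%H:%M')       # 24-hour clock
```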


I use the font Lucida Grande regular and set the font colour to white. For the date I use the size 24pt and for the clock 96pt. I then set the clock to update every ten seconds and the date to update every five minutes. With this I could potentially disable the clock in the menu bar but I have chosen to keep it there for those times that the desktop clock is obscured by windows.

In addition to this I also use this method to display an always up-to-date text based todo list right on the desktop of my various computers.

Switch BU-353 to NMEA mode

BU-353 is a nice little USB based GPS receiver that I use for several types of project. Straight out of the wraps it defaults to outputting NMEA data at 4800 baud which is exactly what I want. However, it also supports the SIRF binary protocol and may switch to that format. This can happen if you connect it to a system that uses gpsd. That daemon supports both the NMEA and the SIRF protocol but will switch the GPS to the latter if it can.

So how do you switch it back? Maybe you can just leave it be and the supercap inside will discharge and it will revert to its default settings. The FAQ kind of hinted at that. However, I wasn’t patient enough to see if that works so I needed a quicker option.

It turns out that if you use Windows it is not too difficult. You can follow any one of several guides on the net, for instance this, straight from the horse’s mouth.

But I needed a way to do this from Linux and this is how.

First make sure you have gpsctl in your path. If it is not installed you can install it by running:

sudo apt-get install gpsd-clients

I am doing this on an Ubuntu system but it should work on most Debian derivatives.

Then connect your BU-353 and type (assuming that your GPS device turns up at /dev/ttyUSB0):

sudo stty -F /dev/ttyUSB0 4800
sudo gpsctl -n -D 4 /dev/ttyUSB0

Then it should be back on NMEA.
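To sanity-check that the receiver really is back in NMEA mode you can read a sentence from the port and verify its checksum – in NMEA 0183 the two hex digits after the `*` are the XOR of every character between `$` and `*`. A minimal Ruby validator (the GPGGA sentence below is the classic textbook example, not output captured from a BU-353):

```ruby
# Validate an NMEA 0183 sentence: the two hex digits after '*' must equal
# the XOR of all characters between '$' and '*'.
def nmea_checksum_ok?(sentence)
  m = sentence.strip.match(/\A\$(.*)\*([0-9A-Fa-f]{2})\z/)
  return false unless m
  body, sum = m.captures
  calc = body.bytes.reduce(0) { |acc, b| acc ^ b }
  format('%02X', calc) == sum.upcase
end

puts nmea_checksum_ok?("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
```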

Run script on USB device insertion

Udev is a device manager for the Linux kernel and can be of great help in triggering activities when devices are added or removed. One such scenario could be to run a script when the user connects a USB memory stick.

The first thing necessary to write udev rules is information about the device, so that you can create filters. However, I had a hard time finding out which attributes were available when I wanted to write a set of matching rules that would not cause false triggers. To list them, use the following command (replacing the device name with whatever device you want information about):

udevadm info -a -n /dev/sdb1

The following is an example of a rule (placed in a file under /etc/udev/rules.d/) that runs a script whenever a USB device is inserted.

ACTION=="add",KERNEL=="sd?1",SUBSYSTEM=="block",RUN+="/usr/bin/usb_insert %k"

The parameter %k to the script will be converted by udev to the device that caused the event. Note that the line is wrapped here but must be stated as a single line in the system.
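The /usr/bin/usb_insert script itself can be anything executable. A hypothetical Ruby sketch – the log path and message format are my own choices, not anything udev prescribes:

```ruby
#!/usr/bin/env ruby
require 'time'

# Hypothetical handler for the udev rule above. udev substitutes %k with the
# kernel device name (e.g. "sdb1") and passes it as the first argument.
def log_line(device, now = Time.now.utc)
  "#{now.iso8601} block device /dev/#{device} added"
end

if __FILE__ == $PROGRAM_NAME && ARGV.any?
  File.open("/tmp/usb_insert.log", "a") { |f| f.puts log_line(ARGV[0]) }
end
```

Remember to make the script executable (chmod +x), or udev will silently fail to run it.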

Access Windows servers from OS X

After a hiatus of a couple of years I have recently come back to working with Windows servers – besides OS X and various Linux distributions, which have been my usual working tools lately. I realised that my old tools to access Windows servers would no longer do the trick. The Microsoft Remote Desktop Connection Client for Mac was never really very good, but I had become used to CoRD and it, too, would no longer work.

RDP problem

Apparently the problem is due to a difference in protocols between the client and the server: the two couldn't negotiate properly on which protocol to use. I first tried to modify the server to default to the old RDP protocol and not try TLS. I even kept a virtual Windows client at hand to run whenever I needed to connect to a remote Windows server.

It turns out the absolutely best solution in these cases is to use the Microsoft Remote Desktop app, available on the Mac App Store for free. It is truly a great application for this purpose and allows the user to keep a list of servers and connect to them quickly and easily. For the ultimate in user experience, run the Windows terminal full screen and use the three finger swipe to quickly switch between the remote server and the local OS X system. Sweet!

Git and Textmate

Since I started using git (and when using svn before that) I have always typed commit messages on the command line using the -m switch. While that works the usability is not exactly fantastic. The funny thing is that it is very easy to change. I do all my development on OS X and my text editor of choice is Textmate. With that combination it is just a matter of issuing the following command:

git config --global core.editor "mate -w"

This will also make it easier to abide by best practices about how to write commit messages.

Snappier virtual Ubuntu

The use of Unity as the default interface for later versions of Ubuntu is a welcome addition for some as it makes better use of limited screen sizes. But when Unity 2D was discontinued in Ubuntu 12.10 it caused problems for people, like myself, who keep a virtual Ubuntu installation at hand for those tasks that require a Linux system. Running Ubuntu 12.10 or later on VirtualBox will often result in terrible performance, making it virtually useless.

One solution is to stay with Ubuntu 12.04, but another is to fall back to the old interface. Luckily, this is very easy to do by installing the package gnome-session-fallback.

sudo apt-get install gnome-session-fallback

After having installed the package, just log out. When logging in, select the Gnome Classic interface from the login page. Subsequent logins will use the same interface as the previous session. It is also possible to change the default interface:

sudo /usr/lib/lightdm/lightdm-set-defaults -s gnome-classic

The death of the corporate laptop

Let me say from the beginning that I do not predict the death of the laptop as such. Others have been bolder and predicted the outright demise of the laptop. I don’t agree. As long as the methods for data entry in other platforms are so mediocre there will be at least some part of the workforce that will have to use what we today refer to as laptops. The important word in the heading is “corporate”.

Ten years ago there were no smartphones and there were no tablets. There were a lot of laptops but they all ran Windows. Mobile phones were abundant but people used them solely for voice communication. There was a lot of talk about the mobile web but compared to what we have today it was a joke. WiFi was around but had not really picked up. Security was of course important but it had a lot to do with securing the physical network in the offices and the connection between them. And on the browser scene Internet Explorer reigned almost supreme.

In other words, there was very little choice for users. Basically it was down to the brand of laptop – Dell, HP and a few others in various shades of black and grey. Design and usability took a backseat.

Then a couple of things started happening at roughly the same time. Internet access both at work and in homes improved immensely and wireless data really took off. Apple released their line of Macbook laptops followed by the iPhone and later iPad. Android took up the challenge and soon surpassed iOS in number of devices. Internet saw the birth of social media and a whole range of cloud based services. Linux became really usable for laptop users and Internet Explorer lost its dominant position.

This has led to a much wider choice available to users. There are still Windows laptops of course but OS X and Linux work almost as well in the corporate setting. And in the smartphone industry Android and iOS are the dominant players with Microsoft and Blackberry competing for third place.

Bring your own device

The next big trend became apparent in 2009 when people began bringing their own devices to their workplace and expected to be able to use them. Today employees routinely bring smartphones, tablets and small netbooks to their work, often because they feel that they can work better or that they just like those devices better than whatever their employer provides them with. Some companies are trying to stop this trend but that battle was lost before it even got started. An employer who does not allow its employees to bring their own devices to work will look unattractive and inflexible. Who would like to work there?

Work from anywhere and on anything

More and more corporate services are moving out from the corporate network to the cloud. File storage is moving to Dropbox, mail to Google, source code to Github or Bitbucket. And so on. A smart company will preempt and embrace this trend, because if they don’t it will happen anyway and then it will not be on their terms. Already today employees are setting up their own file sharing schemes with co-workers and external people using cloud based services like Dropbox.

Meanwhile, mobile communication is getting better all the time. 4G networks can provide better connectivity than many people get via wired broadband connections. And the connectivity keeps improving.

Combining the BYOD trend with the move to cloud services and increasingly better wireless connectivity, it is easy to see the trend towards a mobile workforce where anyone in the company can, and expect to be able to, work from anywhere and anytime on any device. Managers with a control freak tendency will try to halt this trend but they will just end up hurting their companies. The era when a manager could stand in a corner office and judge the state of the company by counting the cars in the parking lot is long gone. Today they are better off using IM, burn-down charts and issue ticketing logs to judge the speed of the organisation.

Down the line

So what happens to the corporate laptop? I believe that all services will ultimately move to the cloud where they are accessible to anyone whether they are sitting in the office, travelling or tending to sick kids at home. When there are no longer any local services, the corporate network will be reduced to just an access network with very little reason for protection other than ensuring that external people are not allowed to get free Internet access.

Many devices will not be owned by the company. So instead of securing the physical network and the client computer it is the server perimeter that must be protected. To avoid data loss if a device is lost the obvious trend is to do more and more editing in real time over a mobile connection.

This also means that companies can more or less let employees get whatever computer they want. They might even want to consider handing over the responsibility of the computer to the employees altogether.

In the end we are back at having content stored on big servers in the cloud and accessing that data from thin clients. Which brings us back to the mainframe era, albeit with nicer looking clients with capacity for local processing and the freedom of mobility that comes with wireless data communication.

Create bootable Ubuntu USB stick

I don’t create bootable USB sticks that often but every time I do it I think that it’s harder than it really is and start to search the web for walkthroughs.

This post could also simply be written: Look at the Ubuntu download page.

Here is the process for OS X:

  1. Download ISO file of the operating system you want to put on the USB stick
  2. Open the terminal
  3. Convert the ISO file using the convert option of hdiutil: hdiutil convert -format UDRW -o /path/to/target.img /path/to/source.iso (note that OS X may append a .dmg extension to the output file)
  4. Run diskutil list to get the current list of devices
  5. Insert the USB stick
  6. Run diskutil list again to determine the device node assigned to your USB stick
  7. Unmount the USB stick: diskutil unmountDisk /dev/diskN
  8. Write the image to the USB stick: sudo dd if=/path/to/target.img of=/dev/rdiskN bs=1m
  9. Eject the USB stick: diskutil eject /dev/diskN

Install rmagick gem in OS X 10.8

When installing the rmagick gem under OS X 10.8.1 (Mountain Lion) I got the following error on a system using RVM, Ruby 1.9.2 with ImageMagick installed through Brew.

Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.
checking for stdint.h... *** extconf.rb failed ***

The error appears to be due to issues with the OpenMP implementation in OS X. The solution that worked for me was to uninstall ImageMagick, then reinstall it without OpenMP support:

brew uninstall imagemagick
brew install imagemagick --disable-openmp

Since I didn’t have X11 on this computer I got warnings about missing X11 and had to dismiss a number of dialogs during compilation. But it went through and seems to be working. At least I could then do a bundle install on my Rails application.
