Front End Deployment For The Rest Of Us

March 20th, 2015

If you are like me – a front end developer – you just want to craft great user experiences. To hell with all the dev ops voodoo, back end gibberish and whatever else stands between you and the interface masterpiece you are going to deliver.

But if you enjoy a modern front end workflow – which is highly recommended – you can't ignore deployment altogether. That's why – in my humble opinion – deployment workflows and the related tool chains should be as simple as possible, without too many bells and whistles to worry about.

What's a front end deployment? In a nutshell: it's you, moving assets (HTML, CSS, JavaScript, …) from your local machine to the web. And if it's done right, it's a machine doing it for you, automatically.

Do you already use an automated deployment workflow?

In the last couple of years I've seen and built a number of different deployment workflows. Small ones, big ones, good ones, crappy ones.

In large scale projects it's very likely that a dedicated deployment server like Jenkins or Travis is involved. Those need quite a bit of effort to get up and running. That's not what I'm talking about here.

I'm going to show you my deployment workflow that is very easy, yet professional. It evolved over time from “good enough” via “somewhat broken” to “pretty sleek”.

Assumptions

Our front end tool chains and workflows have evolved quite a bit over the last couple of years. Part of that is working locally instead of fiddling around on the live server – otherwise we wouldn't need a deployment workflow at all. This is maybe the most important assumption for this article.

Do you usually work live or locally?

Besides this, it's very likely that your next project involves tools like Git for version control, a build tool such as Grunt or Gulp, and package managers like npm and Bower.

Failed Approaches

I've done a number of quite stupid things in terms of deployment. Let's look at some of the worst, have a good laugh and see what we can learn from them.

FTP software

Before I started thinking about automated deployment workflows, I used FTP software like Transmit to move my assets to the web.

What's so wrong with that? First of all: it takes manual effort. That means, at one point or another, we all mess things up – like:

  • Moving/Deleting files by mistake
  • Updating the wrong files, or forgetting some
  • Let's just try this one little thing on the live server… whoops, which files did I change again?

And that's just if you're on your own. I've heard very “funny” stories about developers ruining each other's work by simultaneously working via FTP on the same files.

Take away: We have such great tools that bring modularity and automation into our workflows. Why mess it all up at the most critical point, when our work finally goes live?

Beanstalkapp, DeployHQ, FTPloy

A few years ago I was researching tools and services for automated deployment workflows. A lot of them looked just too clunky and difficult for my use cases – like Travis CI, Jenkins or Strider.

Deployment tools should just work and get out of the way, so we can concentrate on the fun stuff.

I ended up using Beanstalkapp. It's basically a Git hosting service (like GitHub) with a deployment pipeline attached. That means you can push something to Beanstalkapp and tell it to sync the files from the repository to your web server. It does this very nicely by only copying files that aren't already up to date at their destination.

FTPloy and DeployHQ basically do the same, but without the Git hosting. They pull the files from your GitHub or Bitbucket account. The rest is the same.

I really thought I had a pretty nice workflow going on for quite a while. But using this approach actually made me do a lot of awkward things.

Committing The Entire CMS

Let's start with the one I'm least proud of. Beanstalkapp's deployment is limited to moving files from the repository to the web server. That's why I ended up committing every file the website needed to run to the repository – including the CMS with all the configuration files that held database passwords etc.

This resulted in a myriad of files to push every time I updated the CMS (Craft in this case). And it left me with a lot of files in my repository that I wasn't responsible for. I didn't want to touch the CMS application logic, so why have it in my repository at all?

Take away #1: The repository should only contain files that the project's developers are responsible for. Everything else is overhead and might even lead to a situation where someone modifies files that are replaced with the next vendor update.

Take away #2: Don't version control the CMS. It should live an autonomous life on the server. Config files with security relevant data shouldn't be committed, either. You don't want to share that information with everybody who might get access to the Git repository.
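In practice a few .gitignore rules go a long way. Here's a minimal sketch – the paths are examples for a typical Craft setup and will differ for your CMS:

# Keep the CMS application code out of the repository
craft/app/

# Never commit files containing credentials
craft/config/db.php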

Committing Compiled Files

I wasn't able to run Grunt or Gulp with Beanstalkapp. Therefore I had to commit the resulting files (minified CSS, uglified JavaScript, generated sprites, …) to the Git repository. That was the only way to get them to the server with Beanstalkapp.

This led to some unpleasant workflow routines and questions:

Changing Sass files: Do I commit the modified .scss files together with the compiled .css files? Or separately, as an “Update dist files” commit? Neither way is ideal.

Remember to Grunt: After a small change, it's easy to forget to run Grunt/Gulp before you make your commit. That way, the compiled file wouldn't make it to the server and could potentially break things.

That's why I created a Grunt task, grunt deploy. It would build and then commit all the relevant files automatically. Unfortunately there were edge cases where this didn't work properly – especially when something went wrong with the commit.
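For illustration, the task looked roughly like this – a reconstruction from memory using the grunt-shell plugin, with made-up task names:

// Gruntfile.js – hypothetical reconstruction of the old "deploy" task
module.exports = function (grunt) {
    grunt.initConfig({
        shell: {
            commit: {
                // If this fails halfway, the repository is left in an
                // inconsistent state – one of the edge cases mentioned above
                command: 'git add dist && git commit -m "Update dist files"'
            }
        }
    });

    grunt.loadNpmTasks('grunt-shell');

    // "build" stands in for the regular compile task defined elsewhere
    grunt.registerTask('deploy', ['build', 'shell:commit']);
};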

Take away: Committing compiled files results in an error prone workflow. Compiled files aren't meant to be manually modified, so there is no point in having them under version control.

The Root Of All Evil

My failed attempts with the above tool set come down to one major drawback: the deployment tool wasn't able to run my tool chain (Grunt in this case).

Take away: The deployment tool should be able to run the build tool.

The Solution: Codeship

It turned out that Codeship solves all of the problems mentioned above.

You connect your GitHub or Bitbucket repository to it, tell it which dependencies are needed (Node.js in my case) and which commands you would like to run, before moving everything to the web server.

It then starts a virtual machine with your desired setup, pulls your source code from GitHub, runs the required terminal commands and lets you define where to move everything. It's basically like your terminal window in the cloud. Very straightforward.

Let's see how it works in detail…

Setup

Let's say our repository has two Git branches: develop and master. We want Codeship to deploy everything to the server as soon as something is pushed to the master branch. This involves the following steps:

  • Installing npm dependencies by running npm install
  • Installing Bower dependencies by running bower install
  • Building the compiled files by running gulp build
  • Copying the compiled files to the web server

I won't go into every detail regarding Codeship, because they have a really good step by step introduction page that lets you interactively set everything up. I'll focus on the things that might need a bit of explanation.

Remember: The gulp build command compiles, uglifies and moves everything to the www/public/dist and www/craft/templates folder. For a detailed look at the Gulp setup, you can look at my GitHub repository for this website.
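To give you an idea of what's behind that command, here's a heavily simplified sketch of such a build task. The paths, plugin choices (gulp-sass, gulp-uglify) and task names are assumptions for illustration, not my exact setup:

// gulpfile.js – minimal sketch of a "build" task
var gulp = require('gulp');
var sass = require('gulp-sass');
var uglify = require('gulp-uglify');

// Compile Sass into minified CSS
gulp.task('styles', function () {
    return gulp.src('src/scss/**/*.scss')
        .pipe(sass({ outputStyle: 'compressed' }))
        .pipe(gulp.dest('www/public/dist/css'));
});

// Uglify the JavaScript
gulp.task('scripts', function () {
    return gulp.src('src/js/**/*.js')
        .pipe(uglify())
        .pipe(gulp.dest('www/public/dist/js'));
});

// "build" bundles everything that has to end up on the server
gulp.task('build', ['styles', 'scripts']);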

Run The Build Tool With Codeship

We want Codeship to run gulp build each time something is pushed to a branch. This is called a “test” in Codeship. The step-by-step guide will direct you to the test configuration page automatically. If not, just click on the project you want to set up, then “Project Settings” and “Test Settings”.

Codeship's project settings page where tests can be set up

This is all you have to do to let Codeship run your Gulp workflow

Before Codeship can run our command, we need to make sure it has all the dependencies installed. Node.js is the major dependency in this case. Luckily Codeship gives us a little dropdown to prepopulate the setup commands.

Select Node.js from the dropdown and it adds these lines to the textarea:

# By default we use the Node.js version set in your package.json or 0.10.25
# You can use nvm to install any Node.js version.
# i.e.: nvm install 0.10.25
nvm install 0.10.25
nvm use 0.10.25
npm install

This makes sure that all dependencies are installed before we run our Gulp command. To actually run it, we just need to type it into the “Configure Test Pipelines” field like this:

gulp build

Save the changes and that's it. From now on, every time something is pushed to any branch (develop or master in this case), Codeship will run those commands and build our assets.

But this of course doesn't move anything to a web server. Let's see how that works…

Pro Tip: Notice that I don't explicitly install Bower in the setup above. Taking a look at the package.json explains why: I install it as a local devDependency and run ./node_modules/bower/bin/bower install from the local Node module as a postinstall script.

I could have installed it globally via npm install -g bower to make Bower available for bower install. But Codeship doesn't cache global Node modules, which means Bower would be downloaded and installed on each build – unnecessary overhead that takes a while. Bonus: other developers only have to type npm install to install all the dependencies for the project, including the Bower dependencies.
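In package.json that looks roughly like this – a sketch with placeholder version numbers, not my exact file:

{
  "devDependencies": {
    "bower": "~1.3.0",
    "gulp": "~3.8.0"
  },
  "scripts": {
    "postinstall": "./node_modules/bower/bin/bower install"
  }
}

npm runs the postinstall script automatically after npm install has finished, so the Bower dependencies come along for free.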

Deploying To The Web Server

Now we want Codeship to move the compiled files that we just created using gulp build to the web server. We do this by heading to the “Deployment" section within the project settings.

Codeship's project settings page to set up the deployment

There are a number of services we could connect to. But we use a custom script to move our files. There is a very comprehensive article that explains exactly what you need to do.

The Fast Way: Deployment Via FTP

The easiest and fastest way is to move files via FTP. You just use your FTP credentials and an lftp command to move the files. That would look something like this:

lftp -c "open -u $FTP_USER,$FTP_PASSWORD ftp.yoursite.com; set ssl:verify-certificate no; mirror -R --delete ${HOME}/clone/www/public/dist/ /remote/www/public/dist/"

Notice the --delete flag that I added to the example. It makes sure that files that don't exist locally will be deleted remotely. Use it with caution.

Pro Tip: Using FTP as the protocol to transfer the data is fine as long as you only transfer a few files. But if you have more files to transfer, it can take quite long, because it always transfers all the files. When I used FTP to deploy my last project with all the template files, each deployment took about 3.5 to 5 minutes. That's quite a lot. After I switched to rsync, which only transfers the modified files, it now takes only 30 seconds. It's worth the switch.

The Recommended Way: rsync

Watch Out: Check your web hosting plan to see whether it allows access via SSH. You'll need it to get things going. It's worth the effort.

The first thing you'll have to do is follow the instructions to add Codeship's public SSH key to your web server.

How you accomplish that depends on your web hosting provider. Just google something like “[hosting provider name] add SSH key” and you'll be fine.
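If you do have shell access to the server, appending the key to the authorized_keys file usually does the trick – assuming you saved Codeship's public key as codeship_key.pub:

# On the web server: append Codeship's public key
cat codeship_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys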

After you set this up, Codeship is able to run commands on your web server via SSH. Now all you have to do is tell it which directories should be synced. You should know that Codeship throws all the files you create into a clone folder.

In my case the rsync command looks something like this:

rsync -av --delete ~/clone/www/public/dist/ ssh-user@medoingthings.com:/www/live/public/dist/
rsync -av --delete ~/clone/templates/ ssh-user@medoingthings.com:/www/live/craft/templates/

Notice again the --delete flag, which tells rsync to remove files from the server that don't exist locally. The -a flag preserves file attributes like permissions and timestamps, and -v gives verbose output.

Using The Workflow

Now that we have everything set up properly, developing new features is a breeze. You build new features on the develop branch and every time you push something, Codeship runs gulp build or whatever command you want it to execute. If something got mixed up, you'll be notified via email that the branch wasn't built successfully. This becomes really powerful when you add unit tests or other quality control tools.

Since we set up the deployment workflow for the master branch, every time we merge from develop back into master, Codeship will build the assets and move them to the web server, automatically. Nothing more to do.

Codeship's project overview for the most recent builds

This is how the build history looks. To get notified about the most recent builds, there is a nice Chrome extension called Shipscore.

Why This Is Great

I think this workflow really helps to get a lot of error prone manual effort out of the way. It speeds things up, too. I use it for my personal projects as well as for my client projects and am super happy with it.

Having a dedicated deployment service like Codeship is a great thing for a number of reasons:

  • One central deployment config that doesn't need to get updated for every developer
  • No need to share (FTP) login information or SSH access with everybody
  • Integration into other services like Basecamp, Slack etc. is easy
  • No need to maintain the deployment server by yourself
  • Queueing builds, which means no worries about two developers deploying at the same time

Try It Yourself

Head over to codeship.com to get your hands dirty. It's free for open source projects and for up to 5 private projects with 100 private builds a month. That will get you quite far.

How do you like the Codeship deployment workflow?
