Swift Dependency Managers

After jumping into Swift, and now that it's at 2.0, I wanted to share how the dependency managers compare with what I was used to on Objective-C projects. Including libraries in a Swift project is now slightly different. In the early days of Swift, CocoaPods had some integration issues, since Swift requires libraries to be built as dynamic frameworks. Objective-C libraries can still be included in your project, but in some cases you need to add a special bridging header for them to work.

There are a few dependency managers to choose from, and I will describe my experience with the top ones at the moment.

Carthage

Since the release of Swift, a dependency manager written in Swift itself makes sense. Since it only supports Swift, this dependency manager is not for everyone. It is simpler than CocoaPods and requires dependencies to be integrated manually: after Carthage has built the frameworks, you need to add them to your Xcode project yourself.

You can install Carthage on OS X with Homebrew.
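Assuming you already have Homebrew installed, that is:

brew install carthage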

You add dependencies by creating a Cartfile in the root of your project.
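The Cartfile is just a list of dependencies with optional version requirements. A minimal example (Alamofire here is only a placeholder for whatever library you need):

github "Alamofire/Alamofire" ~> 3.0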

These libraries can then be built with the following command:
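carthage update --platform iOS

The --platform flag is optional and limits the build to one platform. The built frameworks end up under Carthage/Build, ready to be dragged into your Xcode project.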

CocoaPods

CocoaPods has been very popular and was around before Swift, so historically it has mostly been used for Objective-C projects. The CocoaPods team had some early hiccups, but they have built in Swift support as well as Objective-C support for Swift projects.

It also has a really good index of packages, which makes discovering packages and new versions of packages really easy. Even when using Carthage, I found myself going there to find the latest version number of a package.

In some cases, Objective-C libraries require a bridging header in order to be included in a Swift project. For more information you can view the Apple documentation. You can create one by going to:

File > New > File > (iOS or OS X) > Source > Header File

This file would look something like this:
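// MyApp-Bridging-Header.h
// AFNetworking is just an example import; list whichever
// Objective-C headers your Swift code needs to see.
#import <AFNetworking/AFNetworking.h>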

There are some other dependency managers worth mentioning as well: SWM and Taylor, both of which you can find on GitHub. Taylor is similar to npm and Bower, in that dependencies are added to a JSON file.

There are a lot of options and plenty of flexibility when it comes to dependency management on iOS. Having used these different managers on different projects, my personal preference is CocoaPods. I prefer having the packages integrated automatically, the easily searchable repository, and the compatibility with both Objective-C and Swift for all of my projects.

Ansible – Inventory Management

In this series of blog posts, I will be talking about Ansible. Ansible is a powerful automation tool that you can learn quickly.

When using Ansible, I have always been a little uneasy about how inventory is managed. If you feel the same, in this article I will explain how I structure my inventories and where I think it could be improved.

The Ansible documentation recommends putting servers in groups by type, like so:
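# host names here are just examples
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com
db2.example.com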

In your playbook you can then target specific plays at one group; for example, your webservers will need different packages to your db servers. It also allows you to limit a deployment to one group: with the --limit flag you can restrict your deployment to just the dbservers.

An alternative approach I have been taking is to create separate inventories for each environment, such as Production, UAT and Testing. For each environment I create an inventory like so:
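# inventory file: production
[webservers]
web1.prod.example.com
web2.prod.example.com

[dbservers]
db1.prod.example.com

Equivalent files then exist for UAT and Testing (the host names above are just examples).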


When I run my Ansible commands, there is then a separate command for each environment, which means there is less chance of deploying to the wrong servers, and I can easily deploy an entire environment at once. Those commands would look like this:
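ansible-playbook -i production site.yml
ansible-playbook -i uat site.yml
ansible-playbook -i testing site.yml

(This assumes each inventory file is named after its environment and the playbook is called site.yml.)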

The Ansible best practices document recommends this approach as well, along with naming your groups based on their location (data center).

Ansible – Structuring your playbooks

In this series of blog posts, I will be talking about Ansible. Ansible is a powerful automation tool that you can learn quickly.

Ansible is very flexible. When starting out you will most likely do everything in a single .yaml file. As things get more complex, it becomes very useful to start separating things out into files and folders. There are several ways to do this, and Ansible has some recommendations on how to go about it.

The simplest way is to just include another file, for example:
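# site.yml, with the file names being illustrative
- include: webservers.yml
- include: dbservers.yml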

There are much better ways to structure your files, though. If you head over to the Ansible website there is a set of best practices which is very good. It can be a bit confusing digesting all of those recommendations at once, so I will attempt to explain what I have found most useful.


For starters, if you name folders in a particular way in your playbook directory, Ansible will automatically discover them and let you use them in your playbook. The most useful application of this is when you have a common set of instructions that should run on all of your machines. Create a folder called roles, and inside it create a folder called common.
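The layout then looks like this, with the role's tasks living in tasks/main.yml:

site.yml
roles/
    common/
        tasks/
            main.yml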

Then inside your main playbook you can simply list the role you just created and it will run for that particular host:
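# site.yml: a minimal sketch
- hosts: all
  roles:
    - common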

For a recent playbook I built, my directory structure looked like the one below. For all of my hosts I ran the common role, and then on the other hosts I picked and chose the roles I needed. This also makes it easy to write a common setup role that you can share across playbooks.
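Roughly, it was shaped like this (the role names are illustrative):

production
site.yml
roles/
    common/
        tasks/
            main.yml
    webserver/
        tasks/
            main.yml
    dbserver/
        tasks/
            main.yml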

As you can see, this offers many benefits. It makes your playbooks easier to read and maintain, and splitting up your playbook allows you to pick and choose the roles you would like to run on a particular server.

Ansible Packages

In this series of blog posts, I will be talking about Ansible. Ansible is a powerful automation tool that you can learn quickly.

One of the important Ansible modules that I use in almost every playbook is the package manager module, for installing and updating system packages. This is important for installing your application's dependencies.

You can use this in your playbook via either the yum or apt module, depending on your target operating system.



For single packages it's quite easy:
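- name: install apache
  yum: name=httpd state=present

(httpd is just an example package; the apt module takes the same form.)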

You can even specify options like state=latest, which will ensure the package is the latest version. For example:
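# same hypothetical httpd package as above
- name: keep apache at the latest version
  yum: name=httpd state=latest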

Moving on to bigger things: if you have a whole lot of packages you want installed on the server, you can bunch them all up together, for example:
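# the package names are just examples
- name: install common packages
  yum: name={{ item }} state=present
  with_items:
    - git
    - vim
    - ntp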

It's also possible to use the yum module to install RPMs directly, and as a bonus it won't mess up your yum database. You can do this by providing the path to the RPM like so:
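# the path is hypothetical
- name: install a local rpm
  yum: name=/tmp/mypackage-1.0.x86_64.rpm state=present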

As you can see, this is a simple way to have packages on your servers installed and kept up to date. There are many more options on these modules, as you will find in the Ansible documentation, but these simple examples should get you started.

Learning Ansible

In this series of blog posts, I will be talking about Ansible. Ansible is a powerful automation tool that you can learn quickly.

I have recently been working on setting up servers for different applications, building quite a lot of identical servers, and I have been frustrated with our old infrastructure setup. Some of the problems are different package versions across clusters and having to manually update each box (which is prone to human error). This has caused many issues, for example one server in a load-balanced cluster misbehaving while the others do not, which can affect customers and take a long time to debug. Also, building new servers is tedious, slow and error prone.

At other organisations I have used Puppet for this situation. With Puppet I can automate the build process on the server, and since the instructions are in code, I can repeat them on as many servers as needed. Testing is also a lot easier. Even with all of these benefits, Puppet still had a few frustrations. The configurations would get quite complex, and sometimes things would not work as expected and it took a long time to debug what was going wrong. Also, being an agent-based system, it requires Puppet to be installed on the servers first, and I wanted to do as much of the setup as possible automatically.

Doing some research on solutions to this problem, I shortlisted the modern tools that looked like improvements on Puppet. The options were SaltStack, Chef and Ansible. For my proof of concept and analysis I chose Ansible.

Ansible is an agentless solution which uses SSH to perform all of its operations. This means it works with very little installed on the remote machine, and with many different types of servers. When Ansible starts, it gathers “facts” about the server which it can use in its playbooks. This means you can target specific OS versions for particular operations.

The way to build an Ansible playbook is to describe the desired end state of the server; Ansible then ensures the server ends up in that state by installing any missing applications or configuration. It has a large number of built-in modules that wrap common tasks you would want to perform. This keeps my playbooks quite simple and easy to read.
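For a taste of what that looks like, here is a minimal sketch that ensures a package is installed and its service is running (nginx is only an example):

- hosts: webservers
  tasks:
    - name: ensure nginx is installed
      yum: name=nginx state=present
    - name: ensure nginx is running
      service: name=nginx state=started enabled=yes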

I have been very impressed with Ansible across quite a few projects, and I plan to use it for future ones: it is easy to get a server up and running, configured the way I need, with little effort.

I plan to write some further posts about my experiences and share some of the code from my playbooks.


GovHack

I will be attending GovHack again this year, with a new team.

Our team is a lot more balanced and not as technical as last year, which should give us a better chance of delivering a well-rounded product. What is really exciting is that we have people with a technical skill focus as well as business, data-crunching and creative talent. These skills are all important: you want to be able to build something during the event, ensure the idea targets the needs of the government agencies running the various competitions, have the creative talent to produce an exciting video, and have the whole team collaborate on a message that communicates effectively to the judges what your idea is all about.

If you are entering the competition, good luck!

RubyMotion Android Beta Tip

I have been trying out the new RubyMotion Android beta and working through the code samples.

One thing that frustrated me was having to keep putting the NDK and SDK paths in the Rakefile of every app. Here is a quick tip to make it easier.

Create a file somewhere like ~/motion/common.rb:

@app.sdk_path = '/Android/sdk'
@app.ndk_path = '/Android/ndk'
@app.api_version = "19"
# other common configs

Inside the Rakefile for each app:

require 'motion/project/template/android'

Motion::Project::App.setup do |app|
  # Use `rake config' to see complete project settings.
  app.name = 'Paint'
  # Expose the app object so common.rb can configure it.
  @app = app
  # require does not expand ~, so expand the path first.
  require File.expand_path('~/motion/common.rb')
end

Thanks to JP, you can also add this to your ~/.bash_profile or ~/.zshrc:

export RUBYMOTION_ANDROID_SDK=~/android-rubymotion/sdk
export RUBYMOTION_ANDROID_NDK=~/android-rubymotion/ndk

If you have a better way to do it, post in the comments below.

MH370

It was interesting to find out how they confirmed that MH370 crashed in the Indian Ocean.

The plane's navigation system was still communicating with a satellite every hour, even though the main transponder was turned off. The signal came from the Satcom, “which collects information such as location, altitude, heading and speed, and sends it through Inmarsat’s satellites into their network”. The signal didn't contain any GPS or location data, so the analysts used techniques never used before: with the Doppler effect (which describes how a wave changes frequency relative to the movement of an observer) and mathematical analysis, they worked out where the plane last was before it crashed. It was suspected the plane was above 30,000 ft before it crashed.

The challenge now will be finding the black box before the battery runs out in the device that “pings” its location. Even if they do find it, there is still the challenge of recovering it from such great depths, in a sea with strong currents and dangerous weather.



Ruby Constants

I was doing some refactoring recently and made some interesting discoveries about how constants work in Ruby.

What tripped me up was how much of a mess this code made:
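# A reconstruction along the lines of the original code
# (the class and constant names are hypothetical)
DEFAULT_OPTIONS = { timeout: 10, retries: 3 }

class ApiClient
  def options(overrides = {})
    DEFAULT_OPTIONS.merge!(overrides)
  end
end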

Can you spot the problems here? The first obvious one is the merge with a bang (merge!); the other I will explain below.

What is a constant?

Constants in Ruby are anything starting with a capital letter, so class names are constants, as are ALL_CAPS variables.

Constants aren’t completely similar in other language. For example in Java and PHP you cannot re-assign or change a constant. In ruby you can:

Now, you do get a warning, but it's not an error and will not stop your program from continuing. In the first example, the merge! actually modified the original constant, so the change was applied to all other classes using that constant.

There is one thing you can do if you want to ensure that the object the constant holds will not be modified, and that is to use the freeze method:
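OPTIONS = { timeout: 10 }.freeze
OPTIONS[:timeout] = 20   # RuntimeError: can't modify frozen Hash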


But as you can see below, the constant itself can still be re-assigned (though it still gives us the warning):
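OPTIONS = { timeout: 10 }.freeze
OPTIONS = { timeout: 20 }   # warning: already initialized constant OPTIONS
OPTIONS                     # => {:timeout=>20}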

As pointed out by Andrew, you can freeze the object that a class constant refers to, which will stop that object from being modified.

Another useful thing about class constants is how easily they can be accessed. Constants defined in a class can be reached without creating an instance of the class, and you can even look a constant up dynamically if you have a reference to the class that contains it. For example:
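class Config
  TIMEOUT = 30
end

Config::TIMEOUT             # => 30, no instance needed
klass = Config
klass.const_get(:TIMEOUT)   # => 30, dynamic lookup via a class reference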

I'm still not certain how useful constants are for settings that never change, but I prefer them to class methods that re-define hashes every time they are called, or to YAML files. They can sometimes make testing easier and other times harder.

Do you use constants much, and how do you use them? Reply in the comments below.