Automatically installing all Ansible role dependencies

I think it was a couple of months ago that I first started to fiddle around with Ansible. I was used to working with Puppet as a provisioning tool, and Ansible made a task I used to dislike so much suddenly very easy. Almost fun, I guess.

One thing I found missing, though, was a way to automatically install all the role dependencies. This still seems to be a task you have to do manually; at least, I can’t find a way to do it with the current Ansible version. So, being a programmer and all, I wrote a script for it.

The script itself is very simple:

  1. Loop through every role installed in roles/
  2. For every role, fetch its dependencies from meta/main.yml
  3. Install those dependencies via ansible-galaxy
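
In shell-script form, the idea boils down to something like this (just an illustration of the approach, not the actual script; it only picks up the “role:”/“src:” style of dependency entries):

    #!/usr/bin/env bash
    # Illustration only: install every dependency listed in each role's meta/main.yml
    for meta in roles/*/meta/main.yml; do
        [ -f "$meta" ] || continue
        # Pick up entries like "- role: user.rolename" or "- src: user.rolename"
        grep -E '^[[:space:]]*-[[:space:]]*(role|src):' "$meta" \
            | sed -E 's/^[[:space:]]*-[[:space:]]*(role|src):[[:space:]]*//' \
            | while read -r dependency; do
                ansible-galaxy install "$dependency" -p roles/
            done
    done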

Personally I haven’t found a use case for other features, but if you do, please let me know.

The script source is available here: GitHub link
Available for download here: Direct download

Happy coding!

Charles is my superhero!

As a true geek I am a big fan of several superheroes, and Iron Man is by far my favourite. Or rather, the coolest part about Iron Man, if you ask me, is J.A.R.V.I.S. The Just A Rather Very Intelligent System is a clever system that assists Tony Stark with virtually everything he considers an “everyday task”.

Unfortunately I am not a superhero myself, but I do have several tools that assist me with my everyday tasks, and one of these tools is Charles Proxy.
Just as my IDE helps me stay productive writing, testing and troubleshooting code, Charles Web Debugging Proxy helps me with most things related to HTTP. And as a web developer, you know, that’s a lot. Here are some of the most significant contributions of Charles to my everyday work.

Communication insight

Especially when developing APIs, a lot of communication happens behind the scenes. A website makes a request to an API, or a mobile device calls some web services, and you want to know what’s going on. As with most proxies, Charles can show you the client’s HTTP requests: what connects when, to where, and with which headers, cookies, request/response bodies and so on.

Request tampering

By setting a breakpoint on a specific request, you can modify the entire HTTP request the client sends to the server: change headers, cookies, request bodies, query parameters. In short, everything you need to troubleshoot the most common HTTP-related issues. Activate this via the context menu of a specific request -> Breakpoints, and the next time your client requests that URL, Charles will break on the request.

Give colleagues and devices access to your dev environment

Another very convenient feature: if you configure Charles as the system-wide proxy on, say, your mobile device or a colleague’s machine, that device can reach everything that is web-accessible from the machine you’re running Charles on.
In practice this means you can check out an application that runs on, say, a host-only connected virtual machine. This is something we use a lot here in the office. Pretty neat, huh?

But there’s more…

Actually, the list above is just the tip of the iceberg. Charles offers a lot more, like bandwidth throttling, request filtering, recording/replaying and saving sessions, DNS spoofing, SSL proxying, and still more.

So, why a blog post about Charles’ features when you could just visit its website?

Well, if you’ve read this far, I’ve got your attention. And that’s exactly what I was after. My goal with this post is to show that there’s a tool that can help you with things that are very cumbersome without a proxy. And more! Charles is just great, and every web developer should at least know about its existence.

Of course, Charles is not the only proxy program out there, and there are probably tools with more or less the same features. If you know such a tool, could you let me know in the comments?
Oh, and Charles runs on virtually every desktop platform!

An example on running Behat from Jenkins

Time for a very short article. I just want to share a little trick that helped me run my Behat test suite from a Jenkins build.
Most of the articles I found on Google did not really cut to the chase, so here it is:

1) Create an “execute shell” build step that executes Behat and outputs its results in JUnit format:
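
Something along these lines should do; the exact flags and the location of the Behat binary depend on your Behat version and project layout:

$ bin/behat --format junit --out build/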

2) Create a post-build “Publish JUnit test result report” action that reads the build/ directory for the JUnit files.

That’s it! If you want to know more about the formatting features that Behat provides, click here.


JavaScript minification in Capistrano/Webistrano

Photo: http://www.sxc.hu/photo/347930

Recently I had to automate the minification of specific JavaScript files. Since we don’t have a real build tool, I thought it was best to do this during deployment.
For (automatic) deployments our tool of choice is Webistrano, which is based on Capistrano, which is written in Ruby.

Capistrano has this concept of recipes: small scripts that you write to perform specific tasks during deployment.
One of these tasks could be the minification of some JavaScript.

But for minifying and deploying it, I was bound to some constraints:

  • We’re using the YUI Compressor (Java)
  • The deploy target machines do not have Java installed
  • So we have to do the minification locally

After doing some initial research it turned out to be quite easy to do, and I came up with a small Capistrano recipe for it.
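
Stripped down to plain shell commands, the idea behind the recipe is roughly this (file names, paths and the compressor version are placeholders):

    # Minify locally with the YUI Compressor (Java is only needed on the deploying machine)
    java -jar yuicompressor-2.4.7.jar public/js/app.js -o public/js/app.min.js

    # Upload the minified file into the current release on the target machine
    scp public/js/app.min.js deploy@target:/var/www/app/current/public/js/app.min.js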

The approach can be considered somewhat unusual, as the project’s files are first uploaded to the server, and only after that is the JavaScript minified and uploaded separately. This is mainly because I couldn’t find another hook within Capistrano that fit my needs.
Of course, you can rewrite this script to work with any other local command too.

Note/disclaimer: this script uses the copy_cache directive. I’m not a Capistrano expert, but this variable might not be available to all deployment strategies, so the script might not work for you if copied 1:1. But I suppose you get the idea.

PHP CLI remote debugging with PhpStorm & Zend Debugger

This post aims to give an overview of how to get remote CLI debugging working with PhpStorm and Zend Debugger. You can find more detailed information via the links in this article.

Some time ago I wanted to debug a PHPUnit test in PhpStorm, and debugging PHP scripts through the CLI SAPI turned out to require some extra effort.

The situation is that my project files reside in a VirtualBox VM, and I open them in PhpStorm via a mounted directory. This means that, for normal web debugging, you have to specify some server settings, including the path mapping.

Since the scripts run on the server directly, you have to set up a remote debugging connection back to your IDE. In his blog post, Kevin Schroeder explains how to do this: you export the Zend Debugger’s parameters through the QUERY_STRING environment variable before running your script.
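
Something along these lines (the values below are placeholders):

$ export QUERY_STRING="start_debug=1&debug_host=192.168.56.1&debug_port=10137&debug_stop=1"
$ php my-script.php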

Don’t forget to change the IP address and port to the address your IDE is listening on.

Now there is one more thing you have to do to make PhpStorm understand where the files are; otherwise you still won’t hit your breakpoints.
According to this article from the JetBrains team, you do this by setting the PHP_IDE_CONFIG environment variable to “serverName=name-of-server”, where name-of-server is the name as configured in Project Settings -> PHP -> Servers.
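
On the command line that boils down to:

$ export PHP_IDE_CONFIG="serverName=name-of-server"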

And oh, make sure your “Listen to Debug connections” button is green!

As I work with different projects on the same VirtualBox VM, I have created a small script that you can put in your .bashrc.
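
A sketch of what such a script could look like; the debug host, port and QUERY_STRING parameters are placeholders for my setup:

    # Toggle CLI debugging on and off (to be placed in ~/.bashrc)
    bugon() {
        export PHP_IDE_CONFIG="serverName=$1"
        export QUERY_STRING="start_debug=1&debug_host=192.168.56.1&debug_port=10137&debug_stop=1"
    }

    bugoff() {
        unset PHP_IDE_CONFIG
        unset QUERY_STRING
    }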

You can enable debugging by running:

$ bugon server-name

And disable by running:

$ bugoff

Happy debugging!

Fast search and replace in large files with sed

Last week I had to search and replace all occurrences of a string inside a relatively big MySQL database dump file. My previous experiences with search and replace actions in files of similar size or bigger suggested that this was going to take a while. Normally I would write a small PHP script to do the search and replace for me. However, recently I’ve been looking for better, more productive ways to do everyday things. So after a quick Google search I found the *nix tool sed.

What sed is, is best described by its manual:

Sed is a stream editor. A stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline). While in some ways similar to an editor which permits scripted edits (such as ed), sed works by making only one pass over the input(s), and is consequently more efficient. But it is sed’s ability to filter text in a pipeline which particularly distinguishes it from other types of editors.

The command to search and replace is similar to the syntax you would use in vi.
Let’s say you have a file database.sql and want to replace every occurrence of myolddomain.com with mynewdomain.com. You would use the following sed command:

$ sed -i 's/myolddomain.com/mynewdomain.com/g' database.sql

By executing this command, sed will go through your file and replace every occurrence of myolddomain.com within moments. In my case, on files of 18MB and 32MB, the search and replace took under a second, and it will take just moments on files much bigger than that.
Since sed is a command line tool and accepts all kinds of input, be it streamed or piped, it is suited for a lot of different use cases.
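
For example, you can do the replacement on the fly while importing a dump (the file and database names here are just placeholders):

$ gunzip -c backup.sql.gz | sed 's/myolddomain\.com/mynewdomain\.com/g' | mysql mynewdb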

Sed is probably good for a lot more than just search and replace, but I’ll have to look into it more before I can write something meaningful about it. If you want to read more about sed, check out the slides of the presentation “Sed & Awk – The dynamic duo” by Joshua Thijssen.

Oh, and by the way: as I mentioned, sed is a Linux/Unix tool, but it also seems to be available for Windows, and it is included in OS X as well (as the BSD variant of sed).