Automating shelving of Jenkins jobs

The current version of the shelve plugin does not support shelving multiple projects. This can be a pain if you have a large Jenkins deployment. Luckily, it is possible to shelve multiple projects using a simple bash script.

If you have the jenkins-cli jar on your system, this can be done in two steps:

  1. First generate a list of jobs to shelve:

  2. Iterate over the list to shelve the projects:

The script inserts a small delay after every few shelve requests, which helps when your jobs contain a lot of data and Jenkins takes a long time to shelve them.
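A sketch of how those two steps might be scripted. The shelve-plugin HTTP endpoint path, the batch size, and the sleep interval below are all assumptions — adjust them for your deployment and plugin version:

```shell
#!/bin/sh
# Returns success (0) after every Nth request; used to throttle shelving.
should_pause() {
    n=$1; batch=$2
    [ $(( n % batch )) -eq 0 ]
}

# Shelve every job known to the Jenkins instance, pausing between batches.
shelve_all() {
    jenkins_url=$1

    # 1. Generate the list of jobs to shelve.
    java -jar jenkins-cli.jar -s "$jenkins_url" list-jobs > jobs.txt

    # 2. Iterate over the list, shelving each project through the plugin's
    #    HTTP endpoint (the path may differ across shelve-plugin versions).
    count=0
    while read -r job; do
        curl -X POST "$jenkins_url/job/$job/shelve-project-plugin/shelveProject"
        count=$(( count + 1 ))
        if should_pause "$count" 5; then
            sleep 30    # give Jenkins time to catch up on large jobs
        fi
    done < jobs.txt
}

# Example: shelve_all http://jenkins.example.com:8080
```

You may also want to filter `jobs.txt` (for example with `grep`) before the loop, so only stale projects get shelved.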



SonarQube 5.6 upgrade

Overall tips for upgrading a SonarQube instance from 5.4 to 5.6 LTS.



SonarQube database migration: MariaDB 5.5 to MySQL 5.7

Newer versions of SonarQube have stopped supporting MariaDB, and you may need to switch to MySQL instead. While I’d rather use MariaDB, I understand that it is not within the supported matrix, so I am documenting the steps here. The steps are focused on RHEL 7 (they would probably work on CentOS 7 as well, but I did not test it).

Before starting this process, shut down your SonarQube instance as well as any other analyses that may access the database.

The first step is to use mysqldump to create a backup of your MariaDB database:
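Something along these lines — the database name, user, and output file are assumptions; substitute your own:

```shell
# Dump the SonarQube schema (assumed to be named "sonar") to a file.
mysqldump -u sonar -p sonar > sonar-backup.sql
```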

After the backup is complete, shut down and disable your MariaDB instance:
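Assuming the stock `mariadb` systemd unit that ships with RHEL 7:

```shell
systemctl stop mariadb
systemctl disable mariadb
```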

Then install MySQL 5.7 from Software Collections. The process is documented on this page. The steps there are for 5.6; however, you can simply replace 56 with 57 everywhere and the result is the same.
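In short, the 5.7 equivalents of the documented 5.6 steps look roughly like this (repo, package, and unit names are assumptions — double-check them against the Software Collections page):

```shell
subscription-manager repos --enable rhel-server-rhscl-7-rpms
yum install rh-mysql57
systemctl start rh-mysql57-mysqld
systemctl enable rh-mysql57-mysqld
```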

With the new MySQL 5.7 installed and running, create the database:
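To get a MySQL shell inside the software collection (collection name assumed from the install step):

```shell
scl enable rh-mysql57 -- mysql -u root -p
```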

In the MySQL shell, recreate the database with the same credentials and permissions as the old one:
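Something like the following, assuming the original database and user were both called `sonar` (substitute your actual names and password):

```sql
CREATE DATABASE sonar CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE USER 'sonar'@'localhost' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON sonar.* TO 'sonar'@'localhost';
FLUSH PRIVILEGES;
```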

Then you can recover the backup into the new MySQL 5.7 instance:
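Again assuming database and user `sonar` and the backup file from the first step:

```shell
scl enable rh-mysql57 -- mysql -u sonar -p sonar < sonar-backup.sql
```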

Now you can go to your SonarQube server and start it.
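The exact command depends on how SonarQube was installed; with a systemd unit it would be something like:

```shell
systemctl start sonar
```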



DNS configuration links

Long story short: I recently needed to configure a DNS subzone for some of my team’s CI services, and it’s been more than a decade since I last configured DNS.

This post is just a loose collection of links for DNS configuration tutorials and related stuff that I used to refresh my mind on the subject:





Processes, automation and discipline

Despite the introduction of tools such as Puppet, Vagrant, Apache Maven, Jenkins and many others that automate the job away, a lot of software development teams still rely on outdated processes and manual labor to perform the bulk of their delivery.

Unsurprisingly, the excuses for relying on outdated development practices haven’t changed either:

  • We don’t have resources.
  • We don’t have time.
  • It does not create value.
  • It does not fit our development process.
  • We just do simple stuff (or a variation: we just write small stuff).
  • We don’t have the skills to do it.

There is a vast amount of literature rebutting these misguided – and often short-sighted – opinions, so it is not my intention to rebut them here.

What I want to point out is that, more than just laying out algorithms in a text file, delivering great products involves processes, automation and discipline (see the observation below). Just like a pit stop in a Formula 1 race:

(An outdated, manual and loosely disciplined approach versus a modern, automated and highly disciplined approach).

Obs.: discipline as in a systematic, ordered approach to development, and not to be confused with blindly following the rules or an unquestioning behavior.

Vagrant and Parallels Desktop

Maybe this is not news anymore, but Vagrant now supports Parallels. It seems to work with Parallels Desktop 8 and above, but I wasn’t able to run version 9 on OS X Yosemite. Upgrading to Parallels Desktop 10 fixed the issue and it worked like a charm. One additional problem is the shortage of images in the Vagrant Cloud. Although I believe this will improve as the community grows and shares more templates on the cloud, it may be a nuisance to some users.
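For reference, getting started with the Parallels provider is roughly this (the box name is just an example — browse the Vagrant Cloud for currently available ones):

```shell
# Install the provider plugin, then bring up a box with it.
vagrant plugin install vagrant-parallels
vagrant init parallels/ubuntu-14.04
vagrant up --provider=parallels
```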

Fun with Grok and Logstash regexes

I have been using Logstash extensively lately. Along with Elasticsearch, it’s a great tool to centralize logs and simplify access to them. The only difficulty I had was with supporting multiline log messages, such as those printed by Java stacktraces. I found some good examples online, but none worked the way I wanted. In some cases, my messages were also tagged as _grokparsefailure, which indicates that the parser failed to match the regex. I ended up with one that’s not so different after all, but which did match exactly the way we log messages with log4j:
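The configuration was along these lines — a sketch, assuming log lines start with log4j’s ISO8601 timestamp (the `%d{ISO8601}` conversion pattern); the file path and captured field names are placeholders:

```
input {
  file {
    path => "/var/log/app/app.log"
    # Join any line that does NOT start with a timestamp onto the previous
    # event, so a Java stacktrace stays in one message.
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:thread}\] %{LOGLEVEL:level} %{JAVACLASS:logger} - %{GREEDYDATA:msg}" }
  }
}
```

The key part is `negate => true` with `what => "previous"`: only lines matching the timestamp pattern begin a new event, and everything else is folded into the one before it.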

It’s also worth mentioning that the Grok Debugger website, along with a decent regex tutorial, are two priceless resources to have at hand.