SonarQube database migration: MariaDB 5.5 to MySQL 5.7

Newer versions of SonarQube have dropped support for MariaDB, so you may need to switch to MySQL instead. While I’d rather use MariaDB, I understand that it is not within the supported matrix, so I am documenting the migration steps here. The steps are focused on RHEL 7 (they would probably work on CentOS 7 as well, but I did not test that).

Before starting this process, shut down your SonarQube instance as well as any other analyses that may access the database.

The first step is to use mysqldump to create a backup of your MariaDB database:
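A minimal sketch of the backup command, assuming the database is named `sonar` and the dump goes to `/tmp` (adjust names, credentials, and paths to match your setup):

```shell
# Dump the SonarQube schema to a plain SQL file.
# --single-transaction takes a consistent snapshot without locking InnoDB tables;
# --routines and --triggers make sure stored routines and triggers are included.
mysqldump -u root -p --single-transaction --routines --triggers sonar > /tmp/sonar-backup.sql
```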

After the backup is complete, shut down and disable your MariaDB instance:
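On RHEL 7 with systemd this would look like the following (assuming the stock `mariadb` service name):

```shell
# Stop MariaDB now, and keep it from starting again on boot.
systemctl stop mariadb
systemctl disable mariadb
```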

Then install MySQL 5.7 from Software Collections. The process is documented on this page. The documented steps target 5.6, but you can simply replace 56 with 57 everywhere and the result is the same.
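In short, the installation boils down to something like this (the repository and collection names follow the Software Collections naming convention, with 56 replaced by 57):

```shell
# Enable the Software Collections repository and install the MySQL 5.7 collection.
subscription-manager repos --enable rhel-server-rhscl-7-rpms
yum install rh-mysql57

# Start the MySQL 5.7 service and enable it on boot.
systemctl start rh-mysql57-mysqld
systemctl enable rh-mysql57-mysqld
```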

With the new MySQL 5.7 installed and running, create the database:
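Since MySQL 5.7 lives inside a software collection, the client is invoked through `scl`:

```shell
# Open a MySQL shell inside the rh-mysql57 collection environment.
scl enable rh-mysql57 -- mysql -u root -p
```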

In the MySQL shell, recreate the database with the same credentials and permissions as the old one:
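A sketch of the statements involved, assuming the database and user are both called `sonar` (the password is a placeholder; use the same one your old instance had, since SonarQube's configuration still references it):

```shell
# Recreate the database and user; names and password below are placeholders.
scl enable rh-mysql57 -- mysql -u root -p <<'SQL'
CREATE DATABASE sonar CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE USER 'sonar'@'localhost' IDENTIFIED BY 'your-password-here';
GRANT ALL PRIVILEGES ON sonar.* TO 'sonar'@'localhost';
FLUSH PRIVILEGES;
SQL
```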

Then you can restore the backup into the new MySQL 5.7 instance:
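Feeding the dump back in is the reverse of the backup step (database name and dump path assumed from the earlier example):

```shell
# Load the backup taken from MariaDB into the new MySQL 5.7 database.
scl enable rh-mysql57 -- mysql -u sonar -p sonar < /tmp/sonar-backup.sql
```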

Now you can go to your SonarQube server and start it.
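If SonarQube runs as a systemd service (the unit name below is an assumption; adjust it to however you run SonarQube):

```shell
# Bring SonarQube back up against the new database.
systemctl start sonarqube
```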


Using Apache Cassandra with Apache Hadoop

I am currently working on a data analytics website for my own educational purposes and, to fulfil my hacking/learning needs, I decided to use Apache Cassandra as the input/output storage engine for an Apache Hadoop map/reduce job.

The job in question is as simple as it gets: it reads data from a table stored in a Cassandra database and identifies the most commonly used adjectives for each of the major communication service providers (CSPs) in Brazil. After processing, the results are stored in another table in the same Cassandra database. Basically, it is a fancier version of the famous Hadoop word count example.

Unfortunately, there seems to be a lack of modern documentation about integrating Hadoop and Cassandra. Even the official guide seems deficient or outdated on this subject. To add insult to injury, I also wanted to use composite keys, which complicated things further. After reading the examples in the Cassandra source code, I was able to successfully implement a working job.

Despite the lack of documentation and the hacking required to figure out how to make it work, the process is quite simple, and even an inexperienced Cassandra/Hadoop developer such as myself can do it without much trouble. In the paragraphs below you will find additional details about the Hadoop and Cassandra integration and what is required to make it work.

Finally, as is usual for my coding examples, the source code is available in my GitHub account under the open source Apache License v2.
