Archive for the 'Coding' Category

Simplifying JMS testing with embeddable brokers

Ever hoped for an easy way to run embedded brokers and simulate a full JMS messaging cycle? Well, this pet project of mine may be useful for you (or, at least, give you an idea for an implementation). It works on top of JUnit by providing a JUnit test runner, a JMS provider, a set of JMS-related annotations that inject object instances, and additional utility classes to simplify dealing with JMS. At the moment it works only with ActiveMQ, but adding other providers should be fairly easy.

Here’s an example of a test class that sends and receives data through an embedded ActiveMQ broker:

@RunWith(JmsTestRunner.class)
@Provider(
        value = ActiveMqProvider.class,
        configuration = ActiveMqConfiguration.class)
public class RequestReplyStringTest extends AbstractRequestReply<TextMessage> {
    @Consumer(address = Defaults.REPLY_TO_QUEUE)
    private MessageConsumer consumer;

    @Producer
    private MessageProducer producer;

    @JmsSession
    private Session session;

    @Listener
    private ServerListener listener;

    @Before
    public void setUp() throws JMSException {
        listener.setReply("polo");
    }

    @Test
    public void testSendReceiveText() throws JMSException {
        Destination replyTo = session.createQueue(Defaults.REPLY_TO_QUEUE);

        Message request = session.createTextMessage("marco");
        request.setJMSReplyTo(replyTo);

        producer.send(request);

        // wait up to 5 seconds for the reply produced by the ServerListener
        Message response = consumer.receive(1000 * 5);
        // handle the response
    }
}

It’s pretty simple as of now, but I can see some interesting uses for running certain types of tests.

JSON manipulation: reference material

While working on ways to export my backlog in Trello, which I wrote about in this post, I came across the following articles that I think might be useful for those manipulating JSON data:

The titles speak for themselves. Have fun.

Using jq to convert a backlog hosted in Trello

Disclaimer #1: Trello is awesome and it can export its data to CSV if you sign up for one of the business plans. Because I was using it as an alternative solution for only a couple of weeks, I did not feel the need to subscribe to the service. If you have a large backlog, that’s the way to go.

Disclaimer #2: I understand each team may use a different board/checklist format for their stories, therefore please interpret this article as generic instructions on how to export the data.

Pre-steps: in order to follow along, you will need to export your Trello board data to JSON (the jq commands below operate on that export). You can follow these steps to export the data.

Consider, for the purpose of this post, that your product and sprint backlog look like this:

Sample Trello board

The colors (labels) represent either the effort, in points, for each story, or whether it is in progress or delivered.

Each use case is composed of a checklist that represents the user stories, pretty much like this:

Sample Trello Backlog

To export the product backlog as well as the sprint backlog, you can use:

cat sample.json | jq -c '.actions[] | select(.type=="createCard") | [.data.card.name, .data.list.name, .date, .memberCreator.fullName ] ' > backlog.tmp

The backlog.tmp should look like this:

["Use case 1","Sprint 01","2015-01-14T12:04:22.901Z","Otavio Rodolfo Piske"]
["Use case 5","Product Backlog","2015-01-14T12:03:54.013Z","Otavio Rodolfo Piske"]
["Use case 4","Product Backlog","2015-01-14T12:03:51.521Z","Otavio Rodolfo Piske"]
["Use case 3","Product Backlog","2015-01-14T12:03:39.129Z","Otavio Rodolfo Piske"]
["Use case 2","Product Backlog","2015-01-14T12:03:34.760Z","Otavio Rodolfo Piske"]
["Use Case 1","Product Backlog","2015-01-14T12:03:33.409Z","Otavio Rodolfo Piske"]

To export the labels, which contain the effort (points) for the use cases, you can run:

cat sample.json | jq -c '.cards[] | select((.labels[] | length ) > 0) | [ .name , .labels[].name , .dateLastActivity ]' > backlog-points.tmp

The backlog-points.tmp should look like this:

["Use case 1","2","In Progress","2015-01-14T12:05:06.405Z"]
["Use case 1","2","In Progress","2015-01-14T12:05:06.405Z"]
["Use case 3","5","2015-01-14T12:05:36.068Z"]
["Use case 4","4","2015-01-14T12:05:43.963Z"]
["Use case 5","3","2015-01-14T12:05:48.597Z"]
["Use case 1","2","2015-01-14T12:07:38.449Z"]
["Use case 2","4","2015-01-14T12:07:26.804Z"]

Finally, you can export the progress of the project with the following command line:

cat sample.json | jq -c '.actions[] | select(.type=="updateCheckItemStateOnCard") | [.data.card.name, .data.checkItem.name, .data.checkItem.state, .date] ' > progress.tmp

Although readable, the exported files contain data that may not yet be adequate for importing into LibreOffice (or any other CSV-capable reader). It is recommended to filter invalid characters out of the files. In this example, both “[” and “]” should be filtered. Here’s a sample command line that can do the trick:

cat file.tmp | sed 's/\[//g' | sed 's/\]//g' | sed 's/\",/\"\;/g' > final.csv

Just remember to replace file.tmp with one of the files generated in the steps above.

Note to self: embedded HSQLDB for Java

This article explains how to set up and run an embedded HSQLDB (very useful for unit tests). Basically, just instantiate a Server object, set the database path, set the database name and start it.
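
For reference, here is a minimal sketch of that sequence; the database name and path below are placeholders chosen only for illustration:

import org.hsqldb.server.Server;

public class EmbeddedHsqldbExample {
    public static void main(String[] args) {
        Server server = new Server();
        // database 0: file-based storage ("testdb" and the path are placeholders)
        server.setDatabasePath(0, "file:target/testdb");
        server.setDatabaseName(0, "testdb");
        server.setSilent(true); // keep the test output quiet
        server.start();

        // tests can now connect via jdbc:hsqldb:hsql://localhost/testdb

        server.stop();
    }
}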

Processes, automation and discipline

Despite the introduction of tools such as Puppet, Vagrant, Apache Maven, Jenkins and many other tools that automate the job away, a lot of software development teams still rely on outdated processes and manual labor to perform the bulk of their delivery.

Unsurprisingly, the excuses for relying on outdated development practices haven’t changed either:

  • We don’t have resources.
  • We don’t have time.
  • It does not create value.
  • It does not fit our development process.
  • We just do simple stuff (or, a variation, we just write small stuff).
  • We don’t have the skills to do it.

There is a vast amount of literature rebutting these misguided – and often short-sighted – opinions, therefore it’s not my intention to rebut them here.

What I want to point out is that, more than just laying out algorithms in a text file, delivering great products involves processes, automation and discipline (see the note below). Just like a pit stop in a Formula 1 race:

(An outdated, manual and loosely disciplined approach versus a modern, automated and highly disciplined approach).

Note: discipline as in a systematic, ordered approach to development, not to be confused with blindly following the rules or unquestioning behavior.

Running Apache Camel within an Application Server

This week I needed to show a colleague how to use Apache Camel, Apache CXF and Spring to create a web-based integration application. To do so, I created a Camel-based implementation of the Simple Apache CXF examples I wrote in 2012. Although this topic is covered more than once in the Camel documentation, some details are either missing, which can make it tricky to run this setup for the first time, or are specific to the application server where the code will run.

Therefore, I created this example (which you can find in this repository in my GitHub account) to complement the official documentation with additional details. I used the open source GlassFish application server to run the code.
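
At its core, such an application boils down to a Camel route consuming from a CXF endpoint defined in the Spring context. The snippet below is a simplified sketch rather than the actual code from the repository; the endpoint bean id (helloEndpoint) is made up for illustration:

import org.apache.camel.builder.RouteBuilder;

public class HelloRoute extends RouteBuilder {
    @Override
    public void configure() {
        // "helloEndpoint" is a hypothetical CXF endpoint bean declared in the Spring XML
        from("cxf:bean:helloEndpoint")
            .log("Received request: ${body}")
            .process(exchange -> exchange.getIn().setBody(
                    "Hello, " + exchange.getIn().getBody(String.class)));
    }
}

The Spring context then declares the CXF endpoint and the CamelContext and ties them together; the repository linked above contains the complete wiring.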

Continue reading ‘Running Apache Camel within an Application Server’

Gentoo Linux Box for Vagrant Parallels Provider

As I explained in an earlier post, Vagrant now supports Parallels as a provider. Since I wanted to test how they were working together, I created a standard 64-bit Gentoo Linux box that you can download and use. In addition to a standard Gentoo install, the box also comes with Puppet installed, so you can do some actual work on it.

Since I presume you already have the Parallels provider set up by now, this is how you can download and use the box:

vagrant init orpiske/gentoo-linux-64 && vagrant up

After the box is downloaded from the cloud, you can use vagrant as usual (i.e. vagrant ssh, etc.).

Vagrant and Parallels Desktop

Maybe this is not news anymore, but Vagrant now supports Parallels. It seems to work with Parallels Desktop 8 and above, but I wasn’t able to run it with version 9 on OS X Yosemite. Upgrading to Parallels Desktop 10 seems to have fixed the issue and it worked like a charm. One additional problem is that there’s a shortage of images in the Vagrant Cloud. Although I believe this will be fixed as the community grows and shares more templates on the cloud, this may be a nuisance to some users.

Fun with Grok and Logstash regexes

I have been using Logstash extensively lately. Along with ElasticSearch, it’s a great tool to centralize logs and simplify access to them. The only difficulty I had was related to supporting multiline log messages, such as those printed by Java stack traces. I found some good examples online, but none seemed to work the way I wanted. In some cases, my messages also got tagged as _grokparsefailure, which indicated that the parser failed to process the regex. I ended up with one that is not so different after all, but which did match exactly the way we log messages with log4j:

(^.+Exception.+)|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+)

It’s also worth mentioning that the Grok Debugger website, along with an adequate regex tutorial, are two priceless resources to have at hand.

Using Apache Cassandra with Apache Hadoop

I am currently working on a data analytics website for my own educational purposes and, to fulfil my hacking/learning needs, I decided to use Apache Cassandra as the input/output storage engine for an Apache Hadoop map/reduce job.

The job in question is as simple as it gets: it reads the data from a table stored in a Cassandra database and identifies the most commonly used adjectives for each of the major communication service providers (CSPs) in Brazil. After processing, the results are stored in another table in the same Cassandra database. Basically, it is a fancier version of the famous Hadoop word count example.

Unfortunately, there seems to be a lack of modern documentation about integrating Hadoop and Cassandra. Even the official guide seems to be deficient/outdated on this subject. To add insult to injury, I also wanted to use composite keys, which complicated things further. After reading the example source code shipped with Cassandra, I was able to successfully implement a working job.

Despite the lack of documentation and the hacking required to figure out how to make it work, the process is quite simple and even an inexperienced Cassandra/Hadoop developer such as myself can do it without much trouble. In the paragraphs below you will find additional details about the Hadoop and Cassandra integration and what is required to make it work.
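
To give a feel for the wiring involved, here is a rough sketch of how a Hadoop job can be pointed at Cassandra for both input and output using Cassandra’s ConfigHelper. This is not the code from the repository: the keyspace, table names and addresses are made-up placeholders, the mapper and reducer are omitted, and the exact classes vary between Cassandra versions (the CQL3 input format is what supports composite keys):

import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.hadoop.cql3.CqlOutputFormat;
import org.apache.cassandra.hadoop.cql3.CqlPagingInputFormat;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class AdjectiveCountJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "adjective-count");
        job.setJarByClass(AdjectiveCountJob.class);

        // read from a hypothetical "analytics.mentions" table
        job.setInputFormatClass(CqlPagingInputFormat.class);
        ConfigHelper.setInputInitialAddress(job.getConfiguration(), "localhost");
        ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160");
        ConfigHelper.setInputColumnFamily(job.getConfiguration(), "analytics", "mentions");
        ConfigHelper.setInputPartitioner(job.getConfiguration(), "Murmur3Partitioner");

        // write to a hypothetical "analytics.adjective_counts" table
        job.setOutputFormatClass(CqlOutputFormat.class);
        ConfigHelper.setOutputInitialAddress(job.getConfiguration(), "localhost");
        ConfigHelper.setOutputRpcPort(job.getConfiguration(), "9160");
        ConfigHelper.setOutputColumnFamily(job.getConfiguration(), "analytics", "adjective_counts");
        ConfigHelper.setOutputPartitioner(job.getConfiguration(), "Murmur3Partitioner");

        // mapper and reducer setup omitted; they follow the usual word count shape
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}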

Finally, as is usual for my coding examples, the source code is available in my GitHub account under the open source Apache License v2.

Continue reading ‘Using Apache Cassandra with Apache Hadoop’
