Monday, December 8, 2014

Linux, BlueJeans and SpringSource Tool Suite

Not all that glitters is gold

This has been a long, long day. I learned once again that vendors don't care about the users of their programs nearly as much as they care about the few who give them money. This is also the case with BlueJeans...

The problem

It smelled bad from the beginning. To use BlueJeans you need to install the nprbjn plugin, which is provided as a deb/rpm package. On Linux Mint (which I've been happily using for many months now) I chose the deb package, installed it and voila! Everything was up and running. It was such a relief from using that piece of shit WebEx that it felt almost surreal. I soon realized that my joy was premature. After I restarted STS (while in a meeting...) I was kind of shocked that just a few seconds into using it, it shut down on its own. No warning, no message, no nothing - just bang! and it was gone.

I tried pretty much everything: installing and reinstalling Eclipse, reinstalling STS - nothing helped. Finally I removed the BJ plugin, after which everything immediately went back to normal. And I said to myself "what a piece of crap!" - but hey, you need to work with it, and apart from the fact that it screws with the main application you use on a daily basis (pretty much 80% of the time), everything else looks and works kind of nice.

I even went through the trouble of creating a support ticket explaining the whole situation, but there was no concrete solution beyond "we'll look into it", and 3 months later there's still no fix for that annoying bug.

The solution

Or should I rather say workaround... How can we make it work both ways?

1. Start Firefox with the -p parameter - this will open a window for selecting profiles
2. Create a new profile; let's call it Eclipse
3. Check 'Use the selected profile without asking at startup' and click Start Firefox - the new profile will be initialized
4. Start STS, navigate to Window -> Preferences and select General -> Web Browser
5. Highlight 'Firefox' in the list of browsers, click 'Edit' and in the 'Parameters' field type

-p Eclipse

6. Click OK, then Apply, and close STS
7. Close Firefox, then start it again with -p, but this time select the 'default' profile so that everything is back to normal with your browser

This will ensure that Eclipse has its own clean profile and you can use STS without interruption.

Happy coding!

Saturday, November 22, 2014

Opening and closing resources

It's the silliest thing and it's been bothering me for ages: why on earth would someone go through all the trouble of writing code like this:

InputStream in = null;
try {
    in = openInputStream();
    // do something with in
} finally {
    if (in != null) {
        in.close();
    }
}

First of all, if opening the stream fails, an exception is thrown and "in" stays as null as it gets - that's true, but that exception propagates upwards anyway. So why the hell open the stream inside the try block and null-check it in finally? It's just so pointless I can't stand it...

InputStream in = openInputStream();
try {
    // do something with in
} finally {
    in.close();
}

Now isn't that a lot more readable and simpler? If opening the stream blows up, everything blows up - exactly as in the code above.

Can anyone please be kind and explain this complete insanity to me?

Sure, with Java 7 we have the try-with-resources feature - but is it doing anything more than the second form?

try (InputStream in = openInputStream()) {
    // do something with in
}

The visibility of the stream is limited to the body of the try statement. There is one more difference, though: if both the body and close() throw, try-with-resources keeps the body's exception and attaches the close() failure as a suppressed exception, whereas in the pre-Java-7 versions the exception from close() in the finally block would mask the original one.
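
Here's a quick sketch to see that in action (FailingStream is a made-up resource whose close() always fails):

import java.io.Closeable;
import java.io.IOException;

public class SuppressedDemo {
    // hypothetical resource whose close() always fails
    static class FailingStream implements Closeable {
        @Override
        public void close() throws IOException {
            throw new IOException("close failed");
        }
    }

    public static void main(String[] args) {
        try (FailingStream in = new FailingStream()) {
            throw new IOException("body failed");
        } catch (IOException e) {
            System.out.println(e.getMessage()); // prints "body failed"
            for (Throwable s : e.getSuppressed()) {
                // the close() failure is still here, attached to the original exception
                System.out.println("suppressed: " + s.getMessage());
            }
        }
    }
}

With the manual try/finally you'd see "close failed" instead, and the real cause would be gone.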

Thursday, November 20, 2014

JSON RPC framework in 12 lines

There are lots of times when I just shake my head in admiration of what Groovy actually is. Today was one of those times, and let me tell you, I'm not easily impressed.

We're introducing a communication layer between our microservices based on JSON-RPC (just because it's cool and fast). Here's an initial implementation of a Groovy-based framework for doing JSON-RPC calls:

import groovyx.net.http.HTTPBuilder

import static groovyx.net.http.Method.POST
import static groovyx.net.http.ContentType.JSON

class JsonRpcClient extends HTTPBuilder {
    JsonRpcClient(String uri) {
        super(uri)
    }

    // every unknown method call becomes a JSON-RPC request carrying the method name and args
    def methodMissing(String name, args) {
        def result
        request(POST, JSON) { req ->
            body = [
                "jsonrpc" : "2.0",
                "method" : name,
                "params" : args,
                "id" : 1 ]
            response.success = { resp, json -> result = json }
        }
        return result
    }
}

I mean, sure, it's not complete, but using it is very much possible!

def http = new JsonRpcClient('http://localhost:8080/example/api/hello')
println http.sayHello("John")

I mean, how cool is that, huh? Some metaprogramming on top of an existing framework (HTTPBuilder) and you get a micro implementation of the JSON-RPC protocol in a few lines of code!
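
On the wire, that sayHello call is just a standard JSON-RPC 2.0 exchange - roughly like this (the response payload here is made up for illustration):

{ "jsonrpc": "2.0", "method": "sayHello", "params": ["John"], "id": 1 }

{ "jsonrpc": "2.0", "result": "Hello, John!", "id": 1 }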

Happy coding! Groovy Rulez!

Monday, October 20, 2014

Data structures are not tabular

Recently one of my colleagues was designing a structure that was supposed to store some data about something. It isn't really important why, or what data. The point is that for the most part the relations were 1:1 or 1:n. I can't tell why or how it came to me, but the itch was just too strong to let it go, and I really had to ask what the actual thing he was trying to achieve was.

The data model consisted of 8 tables, 2 additional artificial concepts (just for the sake of storing data) and a bunch of names that didn't really make any sense.

It turned out that when we tried to write down example records, we did it very naturally in JSON, because laying the data out in tables on paper was just too cumbersome.

A rule of thumb: if you can't picture something, don't do it!

I started looking for an alternative storage for those documents and found out that there are not too many options available for free. There's obviously the (almost infamous) MongoDB, but with its recent bad press and the lack of an embedded mode I felt it wasn't the right way to go. Luckily we stumbled upon OrientDB - a multi-paradigm database implemented in Java. Since the application was already on the JVM, having the option to run an embedded document database seemed like the perfect match.

Now the whole thing is just a document with some embedded documents, since all the data comes as one. And using OrientDB's pseudo-SQL dialect is super simple!
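
Here's a minimal sketch of what the embedded mode looks like in Java (this mirrors OrientDB's getting-started examples from around that time; the Person class and the database path are made up for illustration):

import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;

public class OrientDemo {
    public static void main(String[] args) {
        // "plocal" runs the database embedded in the same JVM - no server process needed
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/tmp/exampledb").create();
        try {
            // one document with an embedded sub-document, stored as a single record
            ODocument address = new ODocument().field("city", "Berlin").field("zip", "10115");
            ODocument person = new ODocument("Person")
                    .field("name", "John")
                    .field("address", address);
            person.save();
        } finally {
            db.close();
        }
    }
}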

Go ahead and try it out yourself!


Happy coding!

Friday, October 10, 2014

Continuous deployment - Java style

Recently continuous deployment has been a hot topic around the office. CD this, CD that, we need CD because [increased productivity, faster time to market, agile deployment]... You name it - we've heard it. I'm sure you don't really need to be told what continuous deployment / delivery is. It's all about making deployments an easy, boring and natural part of your work day.

When faced with this task in any of the scripting languages (take PHP for instance) it's extremely easy: just rsync the files and you're done. You need to take into account that some session data may be null in certain circumstances, and that's pretty much it. It's like having a zillion nano applications - each file you hit from the browser is one of them, plus some services (the includes). Bam! You're done!

In Java the case is a little different. First of all, one needs to understand that the usual deployment of Java applications involves deploying 3rd party components in binary (a.k.a. compiled) form. A library takes the form of a zip file with a ".jar" extension and contains lots of folders with ".class" files (the product of compiling ".java" sources). For that very reason doing CD on Java applications isn't all that easy. The second thing that makes it even more interesting is that the actual application is packaged in yet another zip file, this time with a ".war" extension (as in "Web Application aRchive", I presume). The third insanity level is the packaging of multiple web applications alongside some business layer services, all packaged into yet another zip archive, this time with the ".ear" extension (no, it's not your hearing organ, it's the "Enterprise Application aRchive"). Historically this had one main reason: to provide a packaging mechanism and to minimize the overhead of data transfer over the wire (I mean, there must have been something else, but I haven't found anything on that topic so far, so I take it I'm right on this one).

To be completely fair, there is a way to deploy both unpackaged .war's and .ear's (however strange that sounds :D) to an application server. But it doesn't help much: even if a single ".jsp" file (as in Java Server Pages, similar to ASP's - Active Server Pages - in the Redmond world) gets updated, it most likely uses some binary ".class" file that will not get updated. There are paid solutions to this problem, but I think it's a fairly seldom case where you'd want to pay lots of money to get CD done (unless you can spare it - then off you go!).

For the purpose of this discussion we're going to focus only on .war deployments, only on the reference implementation of the servlet container, Apache Tomcat, and only in version 7+.

What do you need to get continuous deployment done? The answer couldn't be simpler: proper naming!

Here's an example: we're working with an application called "example" (for lack of a better name) and we want the following:

1. Users using the system will not experience any undesired effects, which includes:
- sudden unavailability of the system
- change in behaviour
2. Users using the system will make a semi-conscious decision to start using the new version
3. The old version will be automatically uninstalled once every user makes the decision from pt. 2

So here we go. The first version can be named anything, so let's keep it simple and call it example.war. Since the application will most likely utilize some server-side state in the form of a session, the client gets a cookie with a unique ID called JSESSIONID. This is what binds the user to a deployed version of the application on the server. If a user logs out, a new JSESSIONID is generated. This is very important. Read on.

Since version 7, Tomcat has the out-of-the-box capability to run multiple versions of the same application side by side. How is it done? By naming the subsequent versions properly:

example##001.war
example##002.war
example##003.war

Please note that the version suffix is sorted alphanumerically, which is why I pad the version number with leading zeros so that it always sorts as increasing. The main point here is that which user hits which version is resolved by the JSESSIONID!

- if no JSESSIONID is sent by the client - they get the newest version
- if a JSESSIONID is sent but cannot be bound to any deployed version - the newest version
- otherwise the user keeps hitting the version their JSESSIONID is bound to

An automated shell script to get the next version number from a remote server is as follows:

#!/bin/bash

user='your-user-on-remote-machine'
host='name-or-address-of-remote-machine'
location='location-of-webapps-folder'
apppattern='base-name-of-your-application'

# find the highest version number currently deployed on the remote machine
number=$(ssh ${user}@${host} "ls -1 ${location}/${apppattern}*war \
        | sed 's,.*${apppattern},,;s,##,,;s,\.war,,' | sort -n | tail -n1")

if [ -z "$number" ]; then
    # nothing versioned deployed yet - start from scratch with 3-digit padding
    number=0
    numpad=3
else
    # keep the same padding width as the existing versions
    numpad=${#number}
fi

number=$(expr $number + 1)
nextnumber=$(printf %0${numpad}d ${number})

echo ${nextnumber}

To make sure an application version gets automatically undeployed once every user's session has either timed out or been otherwise invalidated, add the

undeployOldVersions="true"

attribute to the "Host" element of your server.xml configuration file. Done.
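
For reference, here's roughly what that Host element looks like with the attribute in place (name and appBase below are the stock Tomcat defaults):

<Host name="localhost" appBase="webapps"
      unpackWARs="true" autoDeploy="true"
      undeployOldVersions="true">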

So, to sum this up for you:

1. Use the naming convention appname##version.war, remembering that the version is sorted alphanumerically, _not_ numerically, so padding is crucial
2. Add the undeployOldVersions="true" attribute to the Host element in server.xml
3. Start rolling updates

Of course in real life the entire process is a lot more complex. It involves automated testing of the application before it gets released, automated copying of files to the server - stuff like that. But the essential piece is there and you get it absolutely for free. Please note that since a complete new version of the application is deployed each time, it is perfectly OK for your dependencies to be updated along with it. This will just work.

Here's the link to the relevant configuration options in Tomcat:

http://tomcat.apache.org/tomcat-7.0-doc/config/context.html#Parallel_deployment

Happy CDing!

Wednesday, September 17, 2014

Running transmission as a different user

Sometimes things should just be easier. One config file, restart, done. This time I faced quite a different kind of daemon, so I thought I'd share, since it took me a while to figure out. It's about running transmission-daemon as a different user.

The why?
I need to download files from the Internet using the BitTorrent protocol (an Ubuntu ISO, for example) and I'd like to do that on the computer that serves as my home server.

What's difficult?
First of all, there's no place in any of the configuration files that tells you which user the daemon runs as. That secret is safely guarded inside /etc/init.d/transmission-daemon, where you'll find this kind of line:

USER=debian-transmission

So you'd think that's all then. We change it to something like

USER=nobody:nogroup

and life's easy. Well, not exactly. If you try this you'll see that transmission-daemon tries to start but fails right away. To diagnose what's wrong you'll want to run the daemon in the foreground, like this:

transmission-daemon -f --config-dir /var/lib/transmission-daemon/info --log-debug

But that will only tell you that there are permission issues, and that some files will remain out of reach even though they seem to have permissions like 666 and sit in folders with 777 permissions. The problem lies in the default configuration of the daemon. It keeps its configuration data in /var/lib/transmission-daemon/info; however, it's customary to store such information in /etc, which Ubuntu and Mint do. And so there's an /etc/transmission-daemon/settings.json owned by debian-transmission, which is then symlinked to the place where Transmission expects it (/var/lib/transmission-daemon/info/settings.json).

The solution
So here's what I did. I first stopped the daemon, or else my configuration file would get overwritten. Then I changed the ownership of the entire structure of /var/lib/transmission-daemon to nobody:nogroup, like so:

chown -R nobody:nogroup /var/lib/transmission-daemon

Then I removed the /var/lib/transmission-daemon/info/settings.json symlink and moved the real file over from /etc/transmission-daemon in its place:

sudo mv /etc/transmission-daemon/settings.json /var/lib/transmission-daemon/info/

and updated the ownership of that file as well:

chown nobody:nogroup /var/lib/transmission-daemon/info/settings.json

That's it! Transmission now runs as nobody:nogroup, creating new files and folders as nobody:nogroup, and life is easier again.

Monday, September 8, 2014

Migrating a project from Google Code to GitHub

Recently I've grown very impatient with the lack of progress on psi-probe. I use it at work and wherever else I can, because it is a fantastic piece of software, but the fact that the last commit was around 6 months ago leads me to believe that the project is simply dead.

The original author assured me that he's got major interest in keeping this project alive but it seems he's got no time to do so. Also keeping the project maintained with Subversion these days seems a bit too vintage for me. And so I decided to migrate the whole thing to GitHub.

Migrating the repository itself is quite simple, and there's more than one tutorial on the Internet to help you out with it. The major thing that needs to come out at the end is a repository with git tags (not Subversion ones), git branches (matching the Subversion ones) and a .gitignore file containing the set of things you don't want tracked.

Migrating issues is also not very difficult once you have all the tools in place. https://github.com/arthur-debert/google-code-issues-migrator is your biggest friend. As with any friend, there's love and there's hate involved. Basically the tool does everything properly, up until some damn comment contains some god-forsaken character, in which case the whole thing blows up in your face:

Traceback (most recent call last):
  File "./migrateissues.py", line 387, in
    process_gcode_issues(existing_issues)
  File "./migrateissues.py", line 288, in process_gcode_issues
    add_comments_to_issue(github_issue, issue)
  File "./migrateissues.py", line 120, in add_comments_to_issue
    body = u'_From {author} on {date}_\n\n{body}'.format(**comment)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 362: ordinal not in range(128)

There are 2 places where a similar problem causes the tool to stop working; the other one is at the creation of an issue. I worked around it by catching that error and providing some dummy text like "Unable to import comment". Later on I'll fix those comments by hand, bringing them up to scratch with the good old copy/paste method.

To keep the numbering of issues the same as on Google Code, I needed to create a dummy issue, because someone had deleted issue number 1 and the importer doesn't recognize this fact - it simply skips the creation of the first, missing issue. Fortunately it's quite easy to guess what the first issue of a project like that should contain, so I used that to my advantage :)

Anyways... If you'd like to see Psi Probe flourish again, you can always post me a thank-you card for all the hard work I'm doing :) Or better yet, post a pull request with a fix to one of the 100+ issues imported from the original project - the choice is yours!

Happy coding!