Wednesday, December 28, 2011

Grails, Heroku and spring-security-core

This one is going to be quick, but it cost me almost 2 hours of googling...

If you install the spring-security-core plugin and want to deploy your application to Heroku you'll end up with a nasty-looking exception like this:

java.lang.IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request, or processing a request outside of the originally receiving thread? If you are actually operating within a web request and still receive this message, your code is probably running outside of DispatcherServlet/DispatcherPortlet: In this case, use RequestContextListener or RequestContextFilter to expose the current request.

If you google for No thread-bound request found you'll find some Jira issues that'll tell you pretty much nothing besides that it is a Grails issue.

Among other search results is also this one, stating that the actual issue is the webxml plugin being pulled in in the wrong version and that you need to force Grails to use the right one like this:
compile ":webxml:1.4.1"
Just put the line above in your build config and life is good again.
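In a Grails 2 project that line goes into the plugins section of grails-app/conf/BuildConfig.groovy, roughly like this (the surrounding structure is the standard one; other entries are elided):

```groovy
grails.project.dependency.resolution = {
    // ...repositories and dependencies elided...
    plugins {
        // force the webxml plugin version so the wrong one
        // doesn't get pulled in on Heroku
        compile ":webxml:1.4.1"
    }
}
```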

See ya!

Edit 2012-04-05

As of Thu Dec 22 22:35:30 2011 -0500 Burt has upgraded to webxml-1.4.1 so you shouldn't experience this problem anymore.

Saturday, December 3, 2011

Grails routing plugin family updated

It's about time the routing plugin, providing EIP functionality for Grails, got updated to work with the latest version of Grails, which at the time of writing is 2.0.0.RC3. And so today the plugin has been updated to 1.1.4-SNAPSHOT, with the plan to release 1.1.4 by the time Grails 2.0.0 final is out.

Here are the highlights of 1.1.4-SNAPSHOT:

  • Apache Camel core libraries updated to 2.9.0-RC1 (with the plan to upgrade to 2.9.0 final once it gets out)
  • Apache ActiveMQ (routing-jms plugin) upgraded to 5.5.1

Also, the example project has undergone an extensive upgrade.

The last feature is a response to an issue posted on GitHub that had to do with providing access to services from processors - something Camel is already capable of; it just requires some additional work, and not without reason.

I hope you'll enjoy!

Monday, November 28, 2011

Grails 2.0 - the power is back

Recently I've been bitching about the stability of Grails 2.0 - specifically the lack of it. Tons of stuff was just wrong: the new GSP parser was buggy as hell, hot-reloading using the new agent just didn't work - you name it. But with the RC1 release most of it is gone and the new version of Grails is starting to look like something you could actually use!

Hot reloading

Since its early days Grails has been the number one framework, in most part due to its hot-reloading capabilities. You could just change a view, controller or service, wait a few seconds and the result would emerge after refreshing the page. That presented a huge advantage over classic Java EE development, where a single change in the code forced the developer to recompile and restart the whole application. That's why I hate regular Java EE development with a passion. Sure, there's the JRebel thingy, but it costs money - a lot of money!

Grails 2.0 uses a different approach than the 1.x series - one similar to JRebel's. There's a Java agent that does all the heavy lifting of incorporating new code into the existing application. So let's summarize what works and what doesn't:

- reloading controller - check
- reloading services - check
- reloading url mappings definition - check (with the exception of mapping to HTTP codes)
- reloading domain classes - not working
- reloading Groovy sources under /src/groovy - check!
- reloading Java sources under /src/java - check!

There might be other areas but the general impression is really awesome!

New scaffolding templates

Well.. they are nice and green and dandy and... well, I liked the old ones better (from the general look-and-feel point of view). The new ones still use the word "main" for everything: the CSS, the template and application.js. Does anyone know who made the stupid decision to call something that should be called "scaffold" "main" or "application"??? In the current state of things, if you're thinking about doing anything serious with Grails, you should install the templates and rename them to something that makes sense, for example main.gsp -> scaffold.gsp or main.css -> scaffold.css. Other than that it's looking really good! Just the fricken naming...

Resources and other capabilities

Well, finally the resources plugin made it into the core! So from now on the problems like having lots of small JavaScript or CSS files and cramming 'em together for the release version are gone. This is probably the only platform out there (besides ASP.NET :D) that gets it right from the start. Granted, it doesn't come for free - you need to configure it properly and that can be an unpleasant experience all along (unlike in ASP.NET, where it's actually a part of the page itself). But at the end of the day it is really useful and you should use it.

Also the Datasources plugin found its way into the new release! This will be a blessing for everyone who needs to work with multiple databases in their Grails applications. In the datasources plugin, configuration was "almost" like the one you'd normally have, but different enough to make you curse the creator more than once. I'm delighted to say that this has been unified and finally all datasources can be specified in the same configuration file, DataSource.groovy. Yay!!!

And last but not least - testing. Finally someone took the hard dependency on the old 3.x JUnit and flushed it down the toilet! Man, that was really what I was looking for. All test cases are now JUnit-version independent and use the coolest Groovy feature ever: mixins!
Finally testing domain classes in isolation is possible and doesn't make you puke at the mockDomain() call every time you see it. Finally the power is back!


I've received some notifications that the plugin architecture has changed a bit and that some of my plugins (namely json-rest-api) don't work properly. It's sad, but not unexpected. The general idea is that plugin authors will have to take a good look into regression testing before they can 'certify' their plugins as 2.0-ready.


The general feeling of Grails 2.0 is really great. Finally playing around with most of the UrlMappings is simple and fast (which it wasn't in 1.x), testing is fun again (even with Spock and the like) and all the rest is just great. So I guess the final message should be "go use it and forget about 1.3.7, which has been around for ages".

And finally I really need to get this off my chest: when it comes to regular Java EE development with JSF (ouch), Struts or what have you, one should really go kick some corporate butts to get them thinking again - web development doesn't have to be a nightmare anymore. We have the technology, you know, to make it right for a change!

Go spread the word!

Sunday, November 6, 2011

Why using a plain-text editor is a good thing

For the past years there's been a boom on the market for web applications, and along with it the supporting technologies have grown like never before. This has a two-fold implication.
First, it is imperative that we recognize the need to write less and do more - that's where the new languages come in. Second, new languages tend to emerge a lot faster than companies would like them to, and as a result fat IDE support is missing.

Obviously enough, this makes a strong case for plain-text editors (probably with some syntax highlighting, like the good old vim or emacs). Those editors don't bother making deep sense of what the code really is. All they care about is that it is text.

"But I need refactoring support to be productive" you might say. Well, refactoring in new languages tends to be a lot less tedious than in (for example) Java where changing a file name leads to all sorts of changes that need to happen just to do this simple thing. New languages like F# for example (which is not all that new if you take into account its roots) have some refactorings kind of built in into the language and most plain-text editors support it out of the box. I'm talking here about method extraction and the way the Tab key works in most modern editors on more than one line of text.

In addition to that, many new languages give you a level of succinctness never seen previously in their specific areas. Take CoffeeScript or LessCSS for example. With those two you can put together code so compact that, compared to the language you'd originally write it in (JavaScript and CSS), it can cause your head to explode.

At the end of the day it is up to you whether you want to write more with (for the most part) good IDE support, or whether you're ready to give up some of that luxury in favor of more readable and less error-prone code. But remember: the less code you have, the fewer bugs you're going to have - and that alone should make you want to try out the latest inventions.

Have fun!

Sunday, October 16, 2011

Hazelcast and Grails

If you ever find yourself in need to partition your Grails application in order to provide high availability, or if you're lucky enough that your database is no longer the bottleneck (which it will be for a long, long time) and you need to increase processing power, then you should definitely check out Hazelcast and its Hazelcast WM module. Configuration is dead simple (a few lines in web.xml) and the thing is capable of auto-discovery using multicast, so you just add instances of both the Hazelcast distributed cache and your application - and everything just works.
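For reference, here's roughly what those few lines in web.xml look like - a sketch based on the Hazelcast WM module's filter and listener classes (the url-pattern is an assumption; clustering every request is the typical setup):

```xml
<!-- replicate HTTP sessions across the Hazelcast cluster -->
<filter>
  <filter-name>hazelcast-filter</filter-name>
  <filter-class>com.hazelcast.web.WebFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>hazelcast-filter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
<listener>
  <listener-class>com.hazelcast.web.SessionListener</listener-class>
</listener>
```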

Man, it's been a while since I found a tool that just does the job no questions asked and Hazelcast definitely is one of them!

And life is good again :)

Creating standalone applications with Java and Maven

The challenge

In this installment we're going to take a look at what it takes to create a standalone Java application that's a little more sophisticated than the famous Hello, World. Let's start with the simplest example:
package org.example;

public class Main {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}
To compile and run this we need to invoke the javac compiler, package the result into a jar file (say example.jar), create a manifest file (or not..) and start the application, passing the proper classpath and the name of the class that is the entry point to our application. Running it might look something like this:
java -cp . org.example.Main

The "proper" way

I know for a fact that Maven can be a pain in the ass if used by some inexperienced fellow who wanted it to do everything but didn't know how to ask for it. And using Ivy and Ant for tasks that Maven is already good at, like specifying dependencies, makes no sense, right? Well, you wish! I've seen that kind of nonsense lots of times, with pom.xml files reaching beyond the magic 100k boundary...

Instead of cranking up the heat I'd like to see people develop simple mojos solving one problem at a time and not resorting to ant or anything like it. But above all, for crying out loud, use what's already there to do the job!

Doing Spring in standalone Java application

If you can imagine how hard it would be to prepare a standalone Java application that uses external configuration and external libraries, and that resembles what users are already used to (the bin folder, the lib folder, maybe some conf or etc for the configuration files), you'll appreciate the fine job the appassembler plugin is going to do for you.
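The relevant pom.xml fragment is just the appassembler plugin configuration; a minimal sketch (the version, main class and program name are assumptions for this example):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>appassembler-maven-plugin</artifactId>
  <version>1.1.1</version>
  <configuration>
    <!-- ship configuration files in an etc/ folder on the classpath -->
    <configurationDirectory>etc</configurationDirectory>
    <includeConfigurationDirectoryInClasspath>true</includeConfigurationDirectoryInClasspath>
    <!-- put all dependency jars flat into lib/ -->
    <repositoryLayout>flat</repositoryLayout>
    <repositoryName>lib</repositoryName>
    <programs>
      <program>
        <mainClass>org.example.Main</mainClass>
        <name>example</name>
      </program>
    </programs>
  </configuration>
</plugin>
```

After running the assemble goal the result lands under target/appassembler with the familiar bin/, lib/ and etc/ layout, including generated launcher scripts.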

That's pretty much it! All you have to do now is call Maven to do your bidding.
mvn clean package appassembler:assemble
Please note I didn't change anything in the application itself. That's because this is not a post about how to instantiate Spring in a console application, but rather about how to package everything so that it works as expected. You can take a look at a fully working example on GitHub.

Have fun!

Saturday, October 8, 2011

Eclipse hint: Aptana Studio

I know I wrote the other day that I don't use Eclipse - and I don't, when working with anything other than Java. You just can't get away with editing Java code on any serious level in a plain-text editor, because unlike Groovy or Ruby, for example, the language is not designed for it. Recently I've been doing some research on what's available out there in terms of plugins and I've stumbled upon Aptana Studio. Even though I don't plan to code in Ruby any time soon I've decided to install it and give it a try.

All things considered I have found the one piece of that "studio" that has convinced me that working with Eclipse might be fun: the Terminal window :)

Imagine you're working on your project and all of a sudden you need to do some command-line processing (for whatever reason). With Aptana Studio it's extremely easy: you just right-click on a node in the project explorer or package explorer and select Open in -> Terminal :D

Man I like those little discoveries that make my life easier...

Thursday, October 6, 2011

Gitorious - a working guide to install

Finally someone did a guide on installing Gitorious on Ubuntu 11.04 and it WORKS!!!

Go check it out at

Saturday, October 1, 2011

Using DWR with JBoss and EJBs

Today we're going to get our hands dirty with the Direct Web Remoting (DWR) library in conjunction with JBoss and the EJB mechanism.

The why

DWR is a powerful library for all things related to remote method calls over HTTP. It can serialize lots of things and use DTOs if provided, but above all it does one thing so easily it should be forbidden: it allows you to create or retrieve an instance and call its methods directly from your JavaScript. This is why I'll always favor DWR over hand-written mechanisms or the misuse of REST libraries like RESTfully. REST is all about resources - Ajax not necessarily...

The how

We're going to use DWR version 3.0.M1 as this is the latest available in the Maven repository at the time of writing. Creating the project itself is outside the scope of this post, however configuring the framework isn't, so we'll start with that.


  "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
  "" >

As you can see, this is pretty much the basics you'd expect from a web application with one servlet in it. The rest will be done using an HTML page and a stateless local bean, so let's get on with it.

Service interface and ServiceBean implementation

package com.aplaline.example.ejb;

import javax.ejb.Local;

public interface Service {
	String action();
}
Nothing fancy here - let's move on to the implementation:
package com.aplaline.example.ejb.impl;

import javax.ejb.Stateless;

import com.aplaline.example.ejb.Service;

@Stateless
public class ServiceBean implements Service {
	public String action() {
		return "Hello, world! from EJB!";
	}
}
Again... absolutely nothing fancy here - a standard Hello, world! style bean. Let's see how we can configure DWR to serve the Service.action() method...


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dwr PUBLIC
    "-//GetAhead Limited//DTD Direct Web Remoting 2.0//EN"
    "http://getahead.org/dwr/dwr20.dtd">
<dwr>
  <allow>
    <creator id="ejb3" class="com.aplaline.dwr.Ejb3Creator" />
    <create creator="ejb3" javascript="ServiceBean">
      <param name="bean" value="ear/ServiceBean/local" />
      <param name="interface" value="com.aplaline.example.ejb.Service"/>
    </create>
  </allow>
</dwr>
Now that's the meat I'm talking about! Let's get a closer look at what's in there:

The creator

The creator class originally shipped with DWR is best suited for other J2EE containers but has a huge issue with JBoss, so we're implementing our own, JBoss-friendly one:
package com.aplaline.dwr;

import java.util.Properties;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

import org.directwebremoting.create.AbstractCreator;
import org.directwebremoting.extend.Creator;
import org.directwebremoting.util.LocalUtil;
import org.directwebremoting.util.Messages;

public class Ejb3Creator extends AbstractCreator implements Creator {
  private String bean = "";
  private String interfaceClass = "";

  public void setBean(String bean) {
    this.bean = bean;
  }

  public void setInterface(String interfaceClass) {
    this.interfaceClass = interfaceClass;
  }

  public Class getType() {
    try {
      return LocalUtil.classForName(interfaceClass);
    } catch (ClassNotFoundException ex) {
      throw new IllegalArgumentException(
          Messages.getString("Creator.BeanClassNotFound", interfaceClass));
    }
  }

  public Object getInstance() throws InstantiationException {
    Context jndi = null;

    try {
      Properties props = new Properties();
      jndi = new InitialContext(props);
      return jndi.lookup(bean);
    } catch (Exception ex) {
      throw new InstantiationException(bean + " not bound: " + ex.getMessage());
    } finally {
      if (jndi != null) {
        try {
          jndi.close();
        } catch (NamingException ex) {
          // Ignore
        }
      }
    }
  }
}
It allows you to specify the interface class as well as the full bean name, as shown in the dwr.xml example above.

The create

Here we specify all the bits and pieces the framework needs later on to create the JavaScript proxy and to identify our bean when the time comes. Now let's see how we can use it, shall we?

The index.html

  <script type="text/javascript" src="/dwr/dwr/interface/ServiceBean.js"></script>
  <script type="text/javascript" src="/dwr/dwr/engine.js"></script>
  <script type="text/javascript" src=""></script>
  <script type="text/javascript">
    $(document).ready(function() {
      ServiceBean.action(function(response) {
        $("#output").append("<p>" + response + "</p>");

  <div id="output"></div>
This needs a word or two of explanation. First we're including a JavaScript proxy class that will serve as a mediator between JavaScript and the server side. Then there's the required engine.js inclusion, which brings in all the bits and pieces of client-side DWR. Then we're including jQuery from the Google CDN, because I hate to re-get the library over and over again. And then there's the most interesting part - the usage. The ServiceBean object is created by the inclusion of the ServiceBean.js resource. It automatically has an action method that takes all the parameters its server-side counterpart would (none in this example), plus, as the last parameter, a callback to execute after the response is returned. Pretty simple, right?

Bottom line

If you ever find yourself in need to call some EJB (or Spring or Guice) managed instances from JavaScript, you should seriously consider using DWR as it makes life a lot easier.

As always here is a ready-to-use example for you to check out (tested with JBoss 4.2.3).

I've been using it from within Eclipse, thus the name of the EAR is "ear" (and thus the JNDI name of the EJB starts with "ear/"). If you run it from the command line please update the name accordingly before use!

Have fun!

Wednesday, September 28, 2011

What the hell is wrong with collections API in Java?!

Today is the day when steam came out of my ears and I started screaming "WTF?!?!". So what's wrong? Let me explain...

The good part

I came to pure Java from Groovy. I never liked Java as a language in particular (I'd even go as far as saying I hated it with a passion) but life is life and I've had to put my hands into the dirty world of legacy Java code once again. Being a Groovy fan for quite some time now has taught me that the functional programming style is something that makes your life easier and the code itself a lot more readable. That's especially true in regards to collections!

I don't want to bring up obvious examples like sorting a collection using spaceship operator or something along those lines. That's been done to death. Instead I'm going to tell you a story...

The story so far...

Back in the days there were collections and a handful of utilities to back up the bare collection objects (java.util.Collections). At some point people saw that what the creators of the Java runtime library gave them to play with was not enough, so they invented commons-collections. And life was good again, because we could easily do things that were cumbersome before. That was in the pre-1.5 days, so no generics were involved, as you might imagine. That in turn forced the developer to do idiotic type casts from Object to the respective type when implementing, for example, a Transformer.
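To make the cast pain concrete, here's a minimal sketch of the pre-generics style - the Transformer interface and the transform helper are modeled after commons-collections rather than imported from it, so the whole thing is self-contained:

```java
import java.util.ArrayList;
import java.util.List;

public class TransformExample {
    // modeled after the pre-generics commons-collections Transformer interface
    interface Transformer {
        Object transform(Object input);
    }

    // a stand-in for CollectionUtils.collect(): apply t to every element
    static List transform(List input, Transformer t) {
        List result = new ArrayList();
        for (Object o : input) {
            result.add(t.transform(o));
        }
        return result;
    }

    public static void main(String[] args) {
        List names = new ArrayList();
        names.add("john");
        names.add("jane");
        // the idiotic cast: every Transformer has to downcast from Object
        List upper = transform(names, new Transformer() {
            public Object transform(Object input) {
                return ((String) input).toUpperCase();
            }
        });
        System.out.println(upper); // prints [JOHN, JANE]
    }
}
```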
After a while Java the language 1.5 came into existence, so people sat there and wondered how to make the best of it. This is how commons-generic came to be: the same set of utilities but with generic parameters, so you can avoid doing silly casts. A brilliant idea, you might think... But as it turns out the compatibility is virtually non-existent, so you can stick it up a$$ since everyone else is using the old commons-collections anyway...
If that doesn't ring any bells yet: not so long ago Google came up with yet another professional and good-looking collections library, the google-collections project, later on included in google-guava. Yet again the same stuff happened: separate predicate definitions, separate mapping functions - you name it!

Groovy the savior

I know you're going to say that Groovy is a dynamic language and I should back off of Java and not compare apples and oranges. But then wouldn't it be at least sane to allow some classes/interfaces to be extended in some way? Like they do it in C# or VB using extension methods... Oh, and btw, if you're saying they will come in Java 8: I rush to point out that C# has had them since 3.0, which is not so far away from Java 1.5 as far as I can remember...
So back to Groovy... If you want to transform some list of objects into another list of objects you have the almighty collect method that does exactly what you need. If you want to extract just a single property from all instances in a collection you just type, for example, people.firstName and that's it. It's really simple...

But there we are... in Java

If you'd like to make use of your code in all the fancy hyper-super-duper collection utilities you'll have to create your own version of the predicates/mapping functions (to be on the safe side, of course) and then create tons of adapters just to satisfy the compiler and the ever-growing egos of the collection utility library creators.

Have fun!

Friday, September 23, 2011

CORS filter for Java applications

Hi there, in today's installment we're going to allow Ajax calls from other domains to be answered and accepted by browsers.

The what

This thing (completely forgotten by many) is called Cross Origin Resource Sharing and works with standard Ajax requests your browser can send. You can read about it in depth on Wikipedia or on the site.

The how

Let's get to the meat, shall we? On the site there are many recipes for all kinds of servers and their respective configuration, but what if you'd like to enable CORS for just a part of your application? If you're lucky enough to be coding your application in Java then there is a standard mechanism to do just that: filters.
Here's the most simple way of implementing CORS response headers:
import java.io.IOException;

import javax.servlet.*;
import javax.servlet.http.*;

public class CORSFilter implements Filter {

	public CORSFilter() { }

	public void init(FilterConfig fConfig) throws ServletException { }

	public void destroy() {	}

	public void doFilter(
		ServletRequest request, ServletResponse response, 
		FilterChain chain) throws IOException, ServletException {

		((HttpServletResponse) response).addHeader(
			"Access-Control-Allow-Origin", "*"
		);
		chain.doFilter(request, response);
	}
}
As you can see, all we're doing here is adding the Access-Control-Allow-Origin header so that the browser can accept the response sent by the server.

You can use this filter as follows in your web.xml:
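A minimal declaration might look like this (the url-pattern is an assumption - scope it to whichever part of the application should answer cross-origin calls):

```xml
<filter>
  <filter-name>cors</filter-name>
  <filter-class>CORSFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>cors</filter-name>
  <url-pattern>/api/*</url-pattern>
</filter-mapping>
```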

Have fun!

Wednesday, September 21, 2011

Using JRebel from pure Maven in a web application

Hi there folks!

I'm sure you've read every bit and piece about how to bend Maven to do your bidding. This time I'm most probably going to duplicate a lot of stuff found on other sites but I'm simply sick and tired of looking it up every single time I need it. We're going to configure a Maven web application project to run under Jetty (mvn jetty:run) with JRebel to do the reloading. So let's get started!

Installing Maven and JRebel

That one is a no brainer but for the sake of completeness I list this particular step here as well.

The pom.xml

The pom.xml file we're going to use looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="" xmlns:xsi="" xsi:schemaLocation="">



  <name>Example Web application</name>

    <!-- stop stupid Maven message about the build being platform-dependent -->

      <!-- make sure we're using Java 1.6 and not some stone age version -->
      <!-- enable jetty:run mojo -->
      <!-- enable generation of jrebel.xml - needed for the agent -->
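Fleshed out, those three comments correspond to plugin entries along these lines (group/artifact ids are the standard ones; the versions are assumptions from that era):

```xml
<build>
  <plugins>
    <!-- make sure we're using Java 1.6 and not some stone age version -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>
    <!-- enable the jetty:run mojo -->
    <plugin>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>maven-jetty-plugin</artifactId>
      <version>6.1.26</version>
    </plugin>
    <!-- generate rebel.xml in target/classes - needed by the JRebel agent -->
    <plugin>
      <groupId>org.zeroturnaround</groupId>
      <artifactId>jrebel-maven-plugin</artifactId>
      <version>1.1.5</version>
      <executions>
        <execution>
          <goals>
            <goal>generate</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```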

Running Maven with JRebel

You need to specify the javaagent as JRebel in order for the class-reloading to work. You do that by extending the environment variable MAVEN_OPTS with the following:
set MAVEN_OPTS=-javaagent:C:\progra~1\ZeroTurnaround\JRebel\jrebel.jar %MAVEN_OPTS%
With that in place run mvn jetty:run or mvn tomcat:run and you're all set!

Doing reloading

Please bear in mind that Maven is not Eclipse! When you save a file it doesn't automatically recompile it, so in order for a modified class to be reloaded you need to recompile the project using mvn compile!

Well, that pretty much summarizes it. It's not very difficult to set up, as you can see, but there are steps that aren't described all in one place - so here it is, for your entertainment :)

Have fun!

Sunday, September 18, 2011

Java 6 has a web server inside!

The what

I'm into miniaturization - anyone who knows me knows as much. Probably because I'm not very tall by today's standards, but mostly due to the sole nature of small things: they are easy to figure out. For example, getting to understand every bit and piece of a whole car (whatever brand) is impossible - in many cases no one would even allow you to go that far. But having a deep understanding of how a manual gearbox works shouldn't be a problem for a moderately intelligent person... The exact same principle applies to software development as far as I am concerned. I like small parts I can grasp in a short amount of time and use right after the learning process, knowing what the hell I'm doing. You might think about it in terms of components of a desktop application, a JavaScript library for dynamic web pages or (as I've found out today) in terms of a small set of classes that do exactly what they are supposed to do and that you can have doing their thing in a matter of minutes. I'm obviously (as the title suggests) talking here about the embedded web server found in the com.sun.net.httpserver package.

The how

Besides being fixated on miniaturization I'm also a Groovy freak, as I think this is the best language for the JVM ever created and there's nobody to convince me otherwise :D For that very reason the following example, showing a miniature web application with an embedded web server, is a Groovy script:

import com.sun.net.httpserver.*

class ExampleHandler implements HttpHandler {
	void handle(HttpExchange exchange) throws IOException {
		def response = "Hello, world!"

		exchange.sendResponseHeaders(200, response.length());
		def output = exchange.responseBody
		output.write(response.bytes)
		output.close()
	}
}

public class ExampleAuthenticator extends BasicAuthenticator {
	static users = [ "john": "john123" ]

	public ExampleAuthenticator(String realm) {
		super(realm)
	}

	public boolean checkCredentials(String username, String password) {
		return users[username] == password;
	}
}

def server = HttpServer.create(new InetSocketAddress(8000), 0);
def context = server.createContext("/example", new ExampleHandler());
context.authenticator = new ExampleAuthenticator("Example application")
server.executor = null
server.start()
Let's start from the beginning...

The ExampleHandler class

This particular class is responsible for generating the output sent back to the client. As you can see it's a very minimalistic thing, even to the point where the handler needs to know how big the body actually is - the line exchange.sendResponseHeaders(...) shows that. The rest is pretty self-explanatory so I'm not going to drill into it.

The ExampleAuthenticator class

This class provides basic authentication for our small application. It's so damn easy anyone will understand it right away but for the sake of clarity here's the bottom line:
  • Every credential is stored in the static users map
  • checkCredentials() simply compares the given password with the one stored in the map

The meat

Now we have finally come to the good part: the actual use of HttpServer. Here you see how the server is created using a factory method (HttpServer.create) that takes an InetSocketAddress and some other argument I didn't drill into just yet, and gives you back an HttpServer instance ready to work. That instance does pretty much nothing so far other than responding 404 to everything you ask. To change that we create a new application context (very similar to the one found in a Servlet container) and register it under a specific path on the server. In the next line we tell the context that it is guarded by basic authentication - that's my favorite :) - and we're almost all set. The last thing is to (for whatever reason) assign a null executor, which has the side effect that a default executor will be used instead. That's weird...

The outcome

In some 30 lines we've created a dynamic web application with authentication - ready to conquer the world! That's maybe a bit more than Sinatra or Spark, but still - it's something worth knowing about. If you'd like, you can use it directly from Java, and it's still not a lot more code than what's shown here!

Have fun!

Tuesday, September 13, 2011

How I wrote my own version control system

This is what happens when people have too much spare time traveling from home to work and back 6 hours a day... They create useless software that allows them to pass the time.

The idea

The idea was to create something that'll be astonishing, maybe not new but great in functionality and to do it in Groovy. I thought what the hell - why not write your own version control system :) Let's call it pico

The credits

The piece is 100% cloned in idea (and most of the solution as well) from The All Mighty And Only True Version control system - meaning Git.

What's it doing?

For now it is really simple: it can commit with a message (the user interface is sooooo cruel - need to work on it a bit), dump a sophisticated log and checkout the latest version. It's faaaaaaar from being complete and most probably it'll never get where Git is today but hey - that's what passing time means :D


It does what it does and is written in Groovy! Granted, I can do better, but trust me: coming up with a working solution like that in less than 3 hours on a train was my point here - not beautiful code :) The latter is due some time in the future, to fill the time...

The code

You can find the code as well as a batch file (yes, I am a Windows freak) here.

Have fun!

Thursday, September 8, 2011

Grails, multi-tenant plugin and a bag of small issues

If you ever need a multi-tenant solution and you're lucky enough to use Grails, going with the multi-tenant-core plugin is definitely the way to go. It's pretty easy to use in the "multiTenant" mode, but some strange issues come up when trying to use the "singleTenant" mode. In this installment I'm going to walk you through a solution that'll allow you to understand how things are and what you should avoid.

The debug solution

To make our "explorer" live easier we're going to provide a custom tenant resolver so that we'll be able to specify a query string like ?tid=1 to select the tenant with id = 1.
package org.example

import javax.servlet.http.HttpServletRequest
import grails.plugin.multitenant.core.TenantResolver

class TestTenantResolver implements TenantResolver {
    Integer getTenantFromRequest(HttpServletRequest request) {
        def tid = request.queryString =~ /tid=(\d+)/
        return tid ? Integer.parseInt(tid[0][1]) : 0
    }
}
To use it, just register a bean named tenantResolver in resources.groovy and you're all set.
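For readers who prefer plain Java, here's a sketch of the same query-string parsing idea outside Grails (the class and method names are made up for illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TidParser {
    private static final Pattern TID = Pattern.compile("tid=(\\d+)");

    // Returns the tenant id from a raw query string, or 0 when absent.
    static int tenantFrom(String queryString) {
        if (queryString == null) return 0;
        Matcher m = TID.matcher(queryString);
        return m.find() ? Integer.parseInt( : 0;
    }

    public static void main(String[] args) {
        System.out.println(tenantFrom("foo=bar&tid=42")); // 42
        System.out.println(tenantFrom("foo=bar"));        // 0
    }
}
```

Note that 0 doubles as the "no tenant" value here, which matters given the catch described below.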

The data sources

Although the documentation will tell you that in the tenant configuration DSL you can specify JDBC URLs directly it is unfortunately not true. You need your JDBC datasources registered in JNDI for the multi-tenant plugin to pick them up. It's done in Config.groovy like this:
grails.naming.entries = [
    "jdbc/foo": [
        type: "javax.sql.DataSource",
        driverClassName: "org.hsqldb.jdbcDriver",
        url: "jdbc:hsqldb:file:target/db-dev-tid-1;shutdown=true",
        username: "sa",
        password: "",
        maxActive: "8",
        maxIdle: "4"
    ],
    "jdbc/bar": [
        type: "javax.sql.DataSource",
        driverClassName: "org.hsqldb.jdbcDriver",
        url: "jdbc:hsqldb:file:target/db-dev-tid-2;shutdown=true",
        username: "sa",
        password: "",
        maxActive: "8",
        maxIdle: "4"
    ]
]
And then you need to tell the plugin about your datasources:
tenant {
    mode = "singleTenant"
    dataSourceTenantMap {
        t1 = "java:comp/env/jdbc/foo"
        t2 = "java:comp/env/jdbc/bar"
    }
}

Catch No. 0 (zero)

Be warned that the tenant 0 (zero) has a special meaning that's not mentioned anywhere in the docs. It means "use the connection returned originally by TransactionAwareDataSourceProxy". This means that tenant with id 0 is off limits for you. Don't ever use it!!!


Other than those two small issues the plugin is really fun to work with. Good job guys!

Wednesday, August 31, 2011

New guy in town: knockout.js

Today we're going to take client-side JavaScript to a whole new level with the MVVM pattern using knockout.js.

The "why"

Imagine you're writing an application using, let's say, WPF or Silverlight. What's nice is that you have a clean separation of concerns between what's being displayed in terms of the layout (which is in fact the first V in MVVM) and some object backing up that view (which is the VM in MVVM in this case). This allows the application to be extensible and well structured.
Granted you can make a mess everywhere and MVVM architecture is no exception. You still have to use your brain while coding :)
Up until recently there has been no framework giving you that kind of clear separation in JavaScript. Granted, there have been tools like ExtJS and jQuery to help you out with modern UI elements like calendars and grids, as well as low-level operations on the DOM and events, but there has been nothing so far that'd help you write in a clear, declarative MVVM style. Now there is!

The "what"

That's quite simple: the mini framework is called knockout.js and implements the MVVM pattern in pure JavaScript and DOM (not necessarily HTML but that'll work for DOM creation as well).

The "how"

Well, here's the good part, which will be extremely easy:


<script type="text/javascript">
$(document).ready(function() {
    var viewModel = {
        data: ko.observable("Hello, world!")
    };
    ko.applyBindings(viewModel);
});
</script>

<p>The current value of data is <span data-bind="text: data"></span></p>
<p><input type="text" data-bind="value: data"/></p>

See how there's no "Apply changes" button? This is because once you've entered the text into the input box it'll be automatically transferred to the viewModel object and since that's an observable it'll notify all subscribers about the fact that the value has changed and the text on the page gets updated automagically.

Going fancy

I'd like to see some fancy stuff in here like responding to button events and the like. Let's see how we can do that with KO:


<script type="text/javascript">
$(document).ready(function() {
    var viewModel = {
        data: ko.observable("Hello, world!"),
        show: function() {
            alert("Current value is: " +;
        }
    };
    ko.applyBindings(viewModel);
});
</script>

<p>The current value of data is <span data-bind="text: data"></span></p>
<p><input type="text" data-bind="value: data"/></p>
<p><button data-bind="click: show">Click me</button></p>

Again, the same page but with 2 additions:
1. A callback function called show as part of the viewModel
2. Declarative binding of the above function to a button using the last data-bind. Cute, isn't it?

I know that I'll put it to some good use in my next project. Just for the heck of it :) I like the idea that it is completely cross-platform and yet so incredibly easy to use!

Tuesday, August 30, 2011

JSF 2.0 - Is it really better?

I know most of you will hate me for the post I'm writing but I have no choice. I really need to stigmatize the framework and make it clear to anybody still using it that they are lost in the woods. Literally...

Like you (as a reader of my blog) already know, I hate JSF. I hate it so much that I've dedicated an entire blog and site to bitch about what a pain in the ass it is to use. Being the enlightened man I am, from time to time I like to come back to stuff I hate to make sure they didn't do anything stupid, like making the framework usable. This time was no different.

So I did it. I've installed NetBeans (I usually use Far Manager's built-in editor, but for this exercise I've decided to go crazy a bit and use the full-blown IDE supported by JSF's creators). I've followed the standard File -> New procedure, selected some JSF JPA CRUD example (because I couldn't find anything simple on the net) and started reviewing the app.

First thing that hit me was the really cruel style of the web UI the application presents. In one word "HORRIBLE". Not that it ain't usable - God forbid - but plain "usable" these days just will not cut it. So the GUI is butt ugly - how about the code?

Now that's the thing I like JSF most for: the code was just exemplarily bloated :D Let me give you some bottom-line figures:

1. It's a pure CRUD application. Absolutely nothing fancy. No Ajax or the like cool features whatsoever.
2. It's an application that manages 7 entities (customer, discount code, manufacturer, micro market, product code, product and purchase order).
3. Everything that was generated is 706068 bytes in 72 files!!!

Now compare this to, let's say, Rails, Grails or Sinatra + DataMapper: the JSF version is about 8 times bigger. I know it is an enterprise sort of thing and that it's not meant for mere mortals. It's the enterprise kind of solution that you'd use at work and not for fun.

Anyways, JSF still sucks, big time. Lots of hand-written XML, tons of Java code to implement pretty much every aspect of the application (with parts of it having cyclomatic complexity at the level of 20+). That's just a no-go for new projects that need to deliver solutions on time, on budget and in scope.

One last thing: For crying out loud the web is stateless! Why would anyone force a solution that inherently introduces state which in turn means no scaling possibilities and stupid ideas like "page life cycle" and "control binding"? Why would you ever want to fear JavaScript programming when there are layers of abstraction like Ext JS or jQuery?

I wonder if the MyFaces implementation still differs from the reference one in the way that component libraries will work on one and not the other. I didn't check it out myself - I'm not that brave...

The conclusion is that if you're still doing JSF these days just drop it. Now. It's horrible!

Thursday, August 25, 2011

The Process

I've been working these past few years for a company that utilized Rational tools for issue tracking and source code versioning. Long story short: ClearCase was a nightmare to use, and ClearQuest with its stone-age web interface was more than annoying, but both were to some extent usable.

Nowadays the same workflow is doable using opensource tools like Git and GitHub for anyone. Here's how the process works:

- request a change to the system
- do the changes
- integrate them to the so called main line

Using GitHub it's dead simple:

- file an issue
- clone the repository, do some changes, check them in
- put out a pull request for the maintainer to integrate your changes

All that takes some 2-3 minutes if you know what you're doing and if the test suite you're dealing with is fast enough. Piece of cake.

Now if you'd like to know what it looks like using Rational tools (and I mean the good stuff, fully integrated) take a look at the following screencast.

I hope ClearCase will soon be seen for what it really is: a performance and evolution blocker.

Wednesday, August 10, 2011

Git: publishing a site after push

Today we're going to do an amazing thing: publish a website using Git :) The outcome will be very similar to the deployment model used by Heroku :) Interested? Read on...

The usual thing is that you have a folder that's exposed on the web and using your favorite web browser you can make the server send you files stored in the folder in question. There's nothing special about that. What's actually very annoying is that every time you do some changes there's a need to upload the new files to the server.

There have been tons of different solutions to that issue, ranging from automated synchronization agents to manual FTP operations. The one I like most is to use Git to do the heavy lifting for me. Here's how it's done.

You have a repository at, let's say, /srv/git/my-website.git and want to keep the actual files to be served at /var/www. To make git populate the latest versions of your files once you've pushed your changes you need to:

1. Create a file named /srv/git/my-website.git/hooks/post-receive with the following content:
#!/bin/sh
GIT_WORK_TREE=/var/www git checkout -f
2. Set its permissions so that it is executable
chmod +x /srv/git/my-website.git/hooks/post-receive
3. Set the ownership of /srv/git/my-website.git/hooks/post-receive so that whatever user runs the server process can execute it.

And before I forget: the destination folder needs to already exist and the user that's managing the repository (for example "git" or "apache") needs to have read/write access to that folder!

And that's it! Once you do a git push your changes will be automatically propagated to the proper folder :)

Have fun!

Monday, August 1, 2011

Java 7 and diamond operator

Java 7 has finally been released. After 5 years we've finally been given a new toy to play around with. Better, faster, lighter... One would hope for a breeze of modern features in the language after reading statements like "type inference" and so on.

What I'd like to make sure everyone understands is that the so-called "diamond operator", described by Oracle as "type inference for generic instance creation", is nothing more than a lie sold to us once again. Let me show you what I mean.

Java 1.6 code:
List<String> names = new ArrayList<String>();
In this case there's nothing to infer because the code is given to the compiler with all the details. Now what actually happens when the code gets compiled?
The right-hand operand (as well as the left-hand side) is stripped of the generic parameter because of type erasure, one of the most incredible pieces of nonsense in Java. So in fact it's no different than saying
List names = new ArrayList();
In the light of the last statement here's what the creators of Java 1.7 made to ease our pain:
List<String> names = new ArrayList<>();
In this case instead of forcing you to specify both generic arguments they only make you specify it once (where the compiler will actually need it). That's it. The type erasure still takes place so the compiler really couldn't care less about what kind of generic freak show is being assigned to the variable as long as it satisfies the assignment (which in this case has nothing to do with generics but simply with the fact that an ArrayList is a List).
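Both points - the diamond merely saving a repetition, and erasure leaving the runtime blind to type parameters - can be shown in a few lines of plain Java (the class name is just for illustration):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DiamondDemo {
    public static void main(String[] args) {
        // Java 7 diamond: the nested generic signature is written only once.
        Map<String, List<Integer>> scores = new HashMap<>();
        scores.put("answer", new ArrayList<Integer>());
        scores.get("answer").add(42);
        System.out.println(scores); // {answer=[42]}

        // Thanks to type erasure the runtime sees no generics at all:
        List<String> names = new ArrayList<String>();
        List<Integer> numbers = new ArrayList<Integer>();
        System.out.println(names.getClass() == numbers.getClass()); // true
        System.out.println(names.getClass().getName());             // java.util.ArrayList
    }
}
```

A `List<String>` and a `List<Integer>` share the exact same runtime class, which is why the compiler has so little inferring to do in the first place.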

Now let's look at what type inference looks like in other languages. Let's start with Groovy:
def names = new ArrayList<String>();
Here we see that the type of the names variable has been inferred to match the type being assigned to it. In Scala the situation is almost identical:
var names = new ArrayList[String]()
Again, here's real type inference in action.
Let's switch platforms for a moment and see how it's done in C#, a statically typed language for .NET:
var names = new List<String>();
What you see here is once again an additional keyword that marks the variable as being subject to type inference.

At the end of the day, for me it is more important what the variable name is - and that's what's exposed in C#, Groovy and Scala, but pushed to the background by all the type-declaration fuzz in Java.

We can go on and on with examples from other languages where type inference goes beyond the simple fact that you don't need to specify the generic type of the variable twice. There is however one thing that can make you wonder what the hell Java-style type inference is actually for. Let's see the following example:
List<String> names = new ArrayList<String>();
This code obviously will not compile but this snippet (in Scala) will:
var names = new ArrayList[String]()
Why is that the case? In the Java snippet we're specifically saying that the variable is of type List<String>, not ArrayList<String>, which in turn means that the method trimToSize is not available. In the Scala example what we're saying is that the variable names should be of type ArrayList[String], because that's what the type inference will ultimately figure out. But do we really want to have a List in the Java version? If so, why do we specify ArrayList as the class we want to instantiate? Shouldn't we have that instance injected from somewhere else and code against an interface instead? And if we're instantiating an ArrayList, do we really need to strip ourselves of the actual thing we have oh so obviously specified and play cripples just for the fun of it? Or better yet, here's how the Java code could have been written:
List<String> names = new ArrayList<String>();
((ArrayList<String>) names).trimToSize();
Cute, isn't it? And so damn readable!!!

Again, we can go on and on with examples and theories about what type inference actually is and whether what Oracle is feeding us is a trick to make us believe Java is still evolving. For my liking there's no point in coding in pure Java, a language that hasn't seen a major change for 7 years (since September 30, 2004, when generics were introduced). Or do you think that underscores in integer literals deserve to be taken as a major breakthrough? :D
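Speaking of underscore literals, here's everything there is to them - a purely cosmetic separator that compiles to exactly the same constant (class name is illustrative):

```java
public class LiteralsDemo {
    public static void main(String[] args) {
        // Java 7 underscores in integer literals - the value is unchanged.
        int million = 1_000_000;
        long creditCard = 1234_5678_9012_3456L;
        System.out.println(million == 1000000);              // true
        System.out.println(creditCard == 1234567890123456L); // true
    }
}
```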

Have fun!

Monday, July 25, 2011

Grails, Camel and Hibernate session

It's been a while since I thought about manually opening Hibernate sessions to do some work on the database. The usual thing is simply to call one of the GORM methods like get() or save() and have the framework tackle all the bits and pieces regarding low-level Hibernate usage for me.

From time to time however it is important to have a high-level understanding of what's going on behind the scenes. One example of that is saving entities in services utilized as part of Camel routes. But let's start from the beginning...

In Grails, opening a Hibernate session is handled by a Spring class called OpenSessionInViewFilter. This is a filter that opens a Hibernate session before anything else happens and closes it after the processing is done. With this filter in place a Hibernate session is available throughout the whole request cycle.

When doing processing in Camel we're in a separate thread, completely outside the normal Grails request processing, so there's nothing there to open the Hibernate session for us. As a side note, the same happens in pretty much every case where a separate thread is fired up to do some processing. In all those cases a Hibernate session is not available.

But I still want to use GORM in those places! How can I open a session manually?

Well, that's the easiest thing ever. GORM-managed entities expose a method called withNewSession that takes a closure with one parameter (the newly created session) and life is good again.
To make the picture more complete I'll give you two other examples of such methods: withTransaction and withSession. The first one fires off a new transaction and closes it after the processing is done, and the second is a kind of utility that allows you to access the currently active Hibernate session if you find yourself in need of doing so.

I hope this will save someone some time :)

Friday, July 22, 2011

Grails and Spring inc.

I just found a very nice article by Peter Ledbrook showing how you can integrate a regular Spring MVC part of your application with Grails. If you need to do any sort of Spring-related stuff in your Grails application I suggest you check it out!

Have a nice day!

Grails, routing plugin and GORM

It's been brought to my attention today that Camel routes may start sooner than some of the other services in Grails (GORM in particular). This leads to issues like missing methods in domain classes. To get around this problem you can do as follows:

In your route definition add the .noAutoStartup() part (the endpoints below are just an example):
from("seda:input").routeId("myRoute").noAutoStartup().to("bean:someService")
In BootStrap.groovy's init closure add the following code:
import org.codehaus.groovy.grails.web.context.*
import org.codehaus.groovy.grails.web.servlet.*
def ctx = ServletContextHolder.servletContext.getAttribute(GrailsApplicationAttributes.APPLICATION_CONTEXT)
ctx.getBean("camelContext").startRoute("myRoute") // the route id here is just an example
This will give you the possibility to start and even stop a route anytime you want.

You can read more on this topic in the Apache Camel documentation.

I hope this will help someone :)

Tuesday, July 19, 2011

Grails routing 1.1.3 released!

I'm happy to announce the immediate availability of the routing and routing-jms plugins. The major theme for this release is hot reloading of Camel inner-workings when service classes are changed.

So.. well... it works. For the most part at least. It wasn't easy but let's examine the changes and new capabilities one step at a time.

Apache Camel.

In Apache Camel there's no obvious way to force the routes to re-fetch a bean from the Spring context. An endpoint caches an instance of BeanProcessor that's created with a ConstantBeanInfo instance, and that's pretty much it.
Thankfully the BeanProcessor instance is not final and is lazily initialized, so if you force it to be null by calling setProcessor(null), the next time the endpoint is called it needs to re-create the processor and all is good. It's a pity no one thought of a "forceReinitialize" method in Camel, but this way works too.


Grails.
In Grails it is pretty much standard that you are allowed to overwrite the existing instance of a service class with a new implementation. This mechanism is sort of baked in into the core.
However when reloading services that are marked as transactional there's some AOP going on with cglib and caching that I couldn't quite crack, so those kinds of services when reloaded will produce an exception that says "SomeService cannot be cast to SomeService" or something of the sort. I'll have to get back to it some day and figure out whether this is a general problem with Grails or something I'm missing in my onChange handler. For now, if you need your method to participate in a transaction you'll have to manually call the withTransaction { } helper.

Bottom line.

The bottom line is that you finally can use the routing plugin to do normal Grails development and it shouldn't catch you by surprise (other than the transactional services...). Go and give it a shot. Let me know if it works for you!

Happy Cameling!

Grails routing plugin - report from the battlefield

Just wanted to let you all know that I've resumed work on improving the routing plugin for Grails. Version 1.1.2 is already out; it introduces a new method, added to controllers, services and (if the Quartz plugin is installed) jobs, that allows you to send messages with headers.

The next task however isn't simple and will take some time. It's the long standing problem with Camel routes that target beans that are reloaded.

Will see where this will lead me.

Monday, June 27, 2011

New guy in town - Spark!

Today I've discovered the most brilliant, simplistic and pure-in-fashion tool for creating web-enabled applications. It's called Spark.

Despite being a very young project it has all the necessary ingredients a Sinatra clone should have: simplicity, agility and, above all, clarity in translating intentions into working code.

Well, from all the efforts I've seen so far this is probably the most elegant of them all. You should definitely check it out.

Monday, June 20, 2011

STS 2.7.0.M2 - huge disappointment

I've tried it, I did it. I normally don't use IDE for Grails development, there's no point to it. But from time to time the urge to try something new strikes again... This time was no different.

I've installed the latest STS because there was the promise that code completion in GSPs works with the map elements you return from a controller. A big lie! It's nowhere near working.

Next I've tried the most annoying thing ever, which is the silly button that should take you from your controller to the GSP. This thing never worked, and I thought that since this is such an advanced version they'd finally fixed it. Nope...

But the most anger and fury came when I was done installing all the plugins and configuring the keyboard. I have about 30 keyboard shortcuts that I use in all IDEs, and learning the Eclipse way just to use one IDE is plain stupid. So I went through the pain of configuring everything the way I like it. Then the IDE asked me if I'd like to restart Eclipse because one last plugin had been installed (the Git integration). I said no, because I was in the middle of finishing some stupid shortcuts like alt+up, which for no apparent reason moves lines up, and alt+down, which does the same but down. After that I restarted Eclipse and when it came up again all my custom configuration was gone!!! About an hour's worth of work flushed down the toilet!!!

Screw this! Shift+Del in Far Manager and, just like the keyboard shortcuts, Eclipse ended up in /dev/null.

When I find an IDE that works I'll give it a shot. Eclipse sucks, big time!

Saturday, June 18, 2011

GitHub migration complete

Hi there Git geeks!

Finally I've moved all the projects I'm working on (or worked on some time ago) to GitHub. I must say it's been a breeze to work with after I got around to learning Git at least at the basic usage level. Nowadays I can't imagine going back to either Mercurial or (God forbid) Subversion. Centralized systems just don't work in open source and Mercurial is just not Git :)

Quite honestly it's not so much about Git as a version control system as it is about GitHub as a whole. Every single feature I need from such a tool is just there for me to use: issue tracker, wiki, the ability to receive pull requests... And everything just works :) I know other hosts offer a similar set of features but to my taste GitHub just does it better :)

I guess I'm just a freak :D

Monday, June 13, 2011

Resurrecting the dead: Delphi source analysis tool

Hello to all Delphi fans!

It's been some time since I worked with Delphi (2.5 years to be exact) and I've missed the old days quite a bit. Everything was so simple back then: there was one unit-testing framework that nobody used, there were no source code quality tools like Sonar in Java or StyleCop/FxCop in .NET, and it was all up to you as a developer to do a good job. Man, that was really something!

Anyways... Back in the day I wrote a tool to check the complexity of Delphi source code. And I had a reason for it: I worked on a project that (even by Delphi standards) seemed to be way above the complexity human beings can understand. To understand what kind of beast I was actually trying to tackle I decided to write a small application that'd measure a couple of things.

First of all here are the sources, open and for your entertainment :)

What does it do?

1. Dump debug source tree
2. Dump uses tree
3. Dump advanced uses tree
4. Dump Cyclomatic Complexity of methods - my favorite! :D

Just to give you a reference point: we human beings understand methods with a cyclomatic complexity around 5-6 right away; some brainiacs manage even 10!

And here's the bummer: the project that I had this bad feeling about had a top cyclomatic complexity of around 2000 for a single method :) Man, that was something!

Some day I'll get back to those glorious days,... someday...

Subversion branching strategies

For the longest time I've been fighting a battle I cannot win with people trying to use Subversion's branching mechanism in improper ways. In this installment I'll try to lay out some of the "proper" ways and underline why doing it any other way causes issues that are far more troublesome than the alternative.

Usually when you do branching without any version control system what you end up with is a separate folder with your sources containing a sort of "soft-frozen" version of your project. The "soft-" part comes from the fact that you actually can (and probably will) modify those sources. In this case what you're creating is a so called release branch. At the end of the development cycle you usually create a ZIP archive, give it a solid name (like and burn it on some DVD or something else that's meant to last forever. This is a "tag" meaning a "hard-frozen" version of your project.

There's a second case, which comes up quite often, when you might want to create a copy of your project (read: branch it). Imagine you're about to do a spike and don't want your regular development to go to pieces. What you end up with is something called a "feature branch".

Now let's examine how those branching strategies are implemented in Subversion.

Branches in Subversion are copies of some other location. Same thing for tags. End of story :)

A release branch is usually created to make sure you include only those features and fixes your clients should have in a particular version. In this case it makes perfect sense to name your branch with the partial version number, for example 1.0.x, where the 1.0 is the actual major version and the .x part is the release that didn't happen yet. So when you do release you substitute the last part with some incremental number, by convention starting with 0 and incremented by 1, for example 1.0.0, 1.0.1 and so on.

When your main development is done on trunk, only the things that should go into the next release are merged from trunk to the release branch. So the direction the changes travel is obvious: trunk -> release branch, and while releasing they end up in a tag.

With feature branches the case is different. When you do the work you're supposed to do (be it a fix or a new cool feature for your project) you first integrate the latest changes from trunk into your feature branch, making it "compatible" with the rest of the system (you can, and actually should, do that periodically). Once you're done, your feature branch is the equivalent of trunk + the feature you were implementing. At this stage you should "reintegrate" your feature branch with trunk and close the feature branch.

The good news about what I described above is that Subversion helps you out every step of the way by keeping the so-called merge information and not allowing you to merge the same commits twice by mistake. So when you're integrating changes from trunk into your feature branch you can run the same command over and over again and nothing will go wrong. The key here is to remember not to do a classical merge when reintegrating changes from the feature branch into trunk, but to use the feature called "reintegrate". What it does is take the diff between trunk and branch and apply it to trunk, disregarding merge information altogether.

Same thing goes for release branches, but here you'll most probably merge only selected changes to the release branch. That makes the merging process extremely easy - or painful, when your release branch contains release-specific modifications made to integrate new patches. The latter is the first sign that you should finally stop the branch and create a new one!

So what happens when you do the wrong thing? And what is the wrong thing you can do when branching in Subversion anyways?

Imagine you led the project on trunk for the most part. Then at some point you created a branch and from that moment on stopped development on trunk altogether, probably because you want your trunk to be stable at all times. This is a silly and immature point of view, given that most of the time the only branch that's going to be CI-tested is trunk, so 99% of the work you're doing is unchecked. That leads to cranky developers, hard feelings, bad language... You get the idea :/

The worst part of it is that there might be more than one part of your team working on more than one so-called branch at the same so-called time. This is where things start to be reaaaaly interesting: people try to merge months of work with trunk. That works only for the first team, because they had the smallest difference when they started; the second team is way off course and spends long days, possibly even weeks, trying to figure out what the other team did just to make the application work again. And in the meantime, because the first team was so damn productive, they introduce yet another refactoring to the mix and the others are left with their pants down. Subversion does not properly understand what came from where, strange merge errors occur, heads are rolling, hard feelings, bad language, the usual...

Another thing - not strictly a Subversion issue but a bad practice - is not having CI on trunk, or not having CI at all (wrrrrrrrr... that gives me goosebumps)! Man, you just don't think about it - you just do it and get over the fact that the CI server tells you the build has been broken. Get over it! This kind of policeman in your project is the good kind of cop.

So to summarize:

1. Do release branches, merge selected revisions from trunk to the release branch as needed, and freeze tags from it.
2. Do feature branches whenever you're doing anything that requires more than an hour of work. Before reintegrating to trunk bring your branch up to date with the latest and reintegrate. After that there's no going back to the branch and you should keep that as the hardest rule of all.
3. When your CI server tells you that you've fu..ed up the compilation process or tests nothing in the whole world is more important than fixing the build problem because there might be tons of people depending on you doing a good job in not messing up their life.
4. Never ever allow your project to come to a point when 2 huge branches need to be integrated at once. This causes you problems you can't possibly foresee early. And you will pay for it dearly!
5. This is the coolest one of all: don't create a branch of a branch of a branch. Just don't do it. You'll never be able to integrate the changes made on those things (whatever you want to call them)!

So my friends, that's it. 5 simple rules to follow, right? Actually no. There's nothing more powerful than your experience and your understanding of what you're trying to accomplish. If you're OK with doing what I so harshly criticized in point 4 then go ahead! Maybe doing feature branches isn't all that necessary? Maybe you'd prefer frequently failing builds instead, and will make it part of your working cycle to deeply care about what Hudson or CruiseControl has to say?

However, I am confident that if you follow those 5 simple rules and give Subversion a chance to help you out, you'll be in a much more pleasant situation than those gurus out there who keep saying "Subversion has a poor merging mechanism" or worse.

Happy branching!

Saturday, May 28, 2011

Warbler 1.3.1 is here!

Finally the fix that I mentioned here has found its way into a release! What this means is that if you install Warbler now and get version 1.3.1, you're again capable of using the config.gems option to bundle gems with your web application using the Warbler configuration file alone.

Here's an ultra simplistic example to show how it works. The Rack application (config.ru):

run lambda { |env| [ 200, { "Content-Type" => "text/plain" }, [ "Hello, world!" ] ] }

and the Warbler configuration (config/warble.rb):

Warbler::Config.new do |config|
  config.gems = ["sinatra"] # this is just an example here...
end
With that in place you can now run
jruby -S warble
and have the .war file created for you in a matter of seconds.

And life is good again :)

Thursday, May 26, 2011

String + String = Problems

This is just a quick note to everyone not knowing it.

If, for whatever reason, you have code that concatenates strings using the + (plus) operator, and it happens inside a loop or any other regularly executed part, you might want to rethink what you've become over the years.

Man, I never thought I'd see this thing again, but I did! One of the developers was (again) producing a CSV line using a + b + c + d + a shitload of other stuff. I mean, come on! There have been books written about this already! :)

So if you're a Java developer, choose one of the String.format, StringBuilder or StringBuffer classes to do the concatenation for you. If you're a .NET developer, use either String.Format or StringBuilder to do it efficiently.

If you're a lucky Groovy developer your string concatenation will not be efficient anyway, so you're free to use string interpolation - you can't get any worse than that :) At least it's damn readable!

Sure, depending on which platform you use, concatenating might be more efficient via (for example in C#) the String.Concat method. But overall, if you're concatenating a large number of strings (say in a loop), use the builder facilities at your disposal - they're a safe bet.
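Since much of this blog is Ruby, here's the same idea sketched in Ruby (the method names are mine, purely for illustration): + in a loop allocates a fresh string on every pass, while << mutates the buffer in place, playing the role of Java's StringBuilder.

```ruby
# Naive version: each `+` allocates a brand-new string,
# so a loop of n appends does O(n^2) copying overall.
def concat_plus(parts)
  result = ""
  parts.each { |part| result = result + part }
  result
end

# Builder-style version: `<<` appends to the same buffer in place,
# which is the Ruby counterpart of StringBuilder.append.
def concat_shovel(parts)
  result = ""
  parts.each { |part| result << part }
  result
end

parts = ["a", "b", "c"] * 1000
concat_plus(parts) == concat_shovel(parts) # same output, very different cost
```

Both produce identical strings; only the allocation behavior differs, which is exactly why the builder approach wins inside loops.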

Monday, May 23, 2011

Simple design as product architecture


What I'm about to express in this post is going to turn your world upside down, make you cry for help or worse.

A bit of history...

Since late 1999 I've been creating what are called Enterprise applications. At one point they were highly complex calculations of central heating and other pipe-related networks, alongside a flight planning system in Delphi; at the other end, web sites (because that's what they really are) but with some huge backend in Java. And then some desktop applications in C#. All these applications had one thing in common: they were unsustainable by a small group of people (say 3 smart guys) and needed a "team" of analysts, developers and (God forbid) configuration managers (I still don't know what the hell that role was!). They needed to work in "iterations", have defined "stories", maintain an existing "system" and all that kind of s...tuff (damn, that was close).

Some thoughts to spice it up

During this more or less stupid time I've learned one thing and one thing alone: there's no such system that can't be decomposed, chopped with a master axe into pieces that are maintainable by hundreds of developers all around the damn world.
In one of my previous positions (which I hated so much I can't even begin to describe) I was forced to create database diagrams, then create domain classes (manually, because this damn tool was not clever enough to generate Java code at all) and later on keep them in sync (what was the guy who came up with this stupid idea thinking?!?!?!), then implement the GUI in Java Server Faces (with no instant reloading of any fricken sort - JRebel did exist at that time, if you wanna know) and - here's the damn best part of it - all as a single, monolithic, all-go-no-quit Java EE application including in its beast's belly a CRM, an admin module, a shop, a custom communication module, some master data editing... pretty much everything in one big ball of mud. So here's how it went:

1. Code, code, code... code some more and then some.
2. Build, get a coffee in the mean time cuz it takes so damn long...
3. Ok - build is done - with errors - rebuild - next coffee (I'm becoming addicted to coffee at this point)
4. Build passed, deploy to Tomcat - and have a cigarette or two

Ok, so with JRebel this idiotic cycle is down to Ctrl+S, and that's cool, but our master of disaster (read: manager and architect in one person) didn't like the idea of spending extra bucks on software, so we were stuck with this shitty workflow. Very frustrating indeed.

This was the first time I was writing an enterprise-grade application that I thought had gone completely haywire. In fact I was so convinced that I couldn't work on this project anymore and decided to quit. As far as news goes, the team still hadn't had a release as of last week, so after about 4 years of development the project hasn't begun to earn money. CRAZY!!!

Sanity revealed

If creating enterprise-grade applications looks this way, oh why would anybody do stuff like this at all??? The answer is simple and so painful: money! Some people even say that making such complex systems is fun. For the purpose of this blog entry I'm going to call them (quoting Linus Torvalds) ugly and stupid. They should not be in this business - they should be in some mental institution taking their medication regularly.

Learning how to chop things to pieces is one of the most fundamental principles one learns when doing object-oriented programming. Somehow we forget that this same principle applies to the whole as well. It might be hard (hey, nobody said it's going to be easy!) to find the right balance between infancy and complication. But somewhere out there lies the right spot towards which all roads should lead.

Examples are what I like the most so I'm going to describe my idea by means of a project that most of us are familiar with - the blog :P

A blogging engine starts easy: a controller with a couple of methods (list, post), 2 or 3 views to display the data, one or two models to store the damn thing in a database - yeah! We've got a blog engine!
Then the real world kicks in and requirements start to pop up. We should probably be able to administer the site, so we create a separate controller, do all the magic behind it, and life is good again. Then a new requirement comes in to measure the popularity of the page, so we create a filter to do the hard work for us. The adventure continues...

STOP! Isn't there something that we could extract and make a separate module? As an OO developer you recognize the fact at once: there's a program that does more than one thing, so the single responsibility principle is flushed down the toilet. Can't we do better?

As it turns out we can. Having a modular architecture for our application (be it OSGi or better yet Grails with plugins) can help us split the problem into pieces, implement them individually, have them tested to the bone and described so that when a newbie comes in he can take the docs and start coding right away. And before you say anything: since a module is doing one thing only the docs aren't that big. I mean, how much can you write about storing data in the database and then displaying them on a page, right? And if the blog engine turns into a full-blown CMS can't we have the management as a separate application? Will this hurt our feelings?

The same thing goes for calculations, statistics and god knows what else comes to that little screwed mind of ours.

Scalability === Maintainability

Just as things you can't chop up you can't scale, the exact same applies to software products. If you can't split a problem into smaller problems, you can't scale it to have many developers working efficiently on the product as a whole. But if you can, you can hire hundreds of programmers, split them into really small teams and have them implement any system in a matter of (almost) weeks.

So you say you can't..

One thing I hear over and over again is "we can't split this up because..." and at this point I stop listening. The reason why is not important and is always the same: things depend on each other in a way they never should have.
We have fantastic architectures like CQRS, CQS (its predecessor), RESTful design and that whole belt of tools we can employ to make things chop-able (in a manner of speaking). The sky is the limit!

Reality is a bitch

Sure, I'd love to live in a perfect world with requirements known up front that don't change till the end of time (meaning the next version), developers perfectly capable of coding at all levels of abstraction, and high quality standards met every day. Reality, however, means you have to get your hands dirty up to the elbows in some serious shit just to get the simplest things to work or fix (there you go - I've finally said it).

Life would be so ridiculously easy if we all wrote simple software.

Tuesday, May 17, 2011

Explorative Programming

Recently I've been arguing with a friend of mine about the actual meaning of Explorative Programming (or ExP for short). The term Explorative is used here so that the emphasis is put on the "exploring" part. I actually know it's not a proper English word but who cares :D

Anyways... Explorative programming - what is it all about?

Let's say you need to write the infamous "Calculator" class with the (also infamous) "add" method. In this particular case there's little to explore, right? You know (hopefully) that 1+3 equals 4 and there's nothing in the whole universe that will convince you otherwise :)
Let's explore this simple case in a little more TDD/ExP way. In TDD you'd write a class called (let's say) CalculatorTest. In that class you'd write a method with a @Test annotation (if you're using JUnit). That method would then instantiate your Calculator class (which does not exist yet) and call the method "add" (which does not exist yet) with some well known parameters that produce a well known output.

All good so far, meaning your test does not compile :).

So what you do next is add your class (preferably by means of your IDE's refactoring tools or what have you), then add the method (same as before - you could do this manually, but who would when Eclipse and all the others are there to help), fix the parameters (didn't you just write "def" to declare your variable, to state that you don't really care what the fricken type is?) and finally, after all those IDE-enabled steps (which, let's be honest, didn't take all that long), your test is compiling but failing. Great success!!!
The next obvious thing is to add the simplest thing possible which is to return the proper value from your function, then to add next test, see it failing, add the proper logic to satisfy both test cases.... you get the idea. This is TDD in its purest form as described by many mentors out there.

Where ExP differs is in where the IDE needs to come in. Why, for the love of god, do you need a sophisticated tool to create a file for you when you actually don't need it? What you need is a class - not a separate file!

In ExP I propose that you have the class (or method) you're testing in the same file as your assertions. So for example:

def add(a, b) {
    a + b // the actual implementation, whatever it is, goes here
}

assert add(1, 2) == 3
assert add(4, 5) == 9

Ain't that nice? You can sort of "explore" your domain "in place" without the need to have a sophisticated IDE to create unnecessary files for you (you're just creating a method for all you'd care at this point).
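The same trick works in any dynamic language. Here's a Ruby equivalent of the Groovy snippet above - one file, implementation and assertions together, runnable with a plain text editor and the ruby command (the file name and messages are made up for illustration):

```ruby
# Implementation under exploration - lives in the same file as its checks.
def add(a, b)
  a + b
end

# Inline assertions, the "debuglets": run the file, and silence means success.
raise "add is broken" unless add(1, 2) == 3
raise "add is broken" unless add(4, 5) == 9

puts "all explorations passed"
```

Run it with `ruby exploring.rb`; once the exploration settles, the method moves to its own file and the assertions become a proper test.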

Let's examine a slightly more proper example for the actual "explorative" part of the thing, shall we?

Let's assume you're not alone in the world, which is a pretty good assumption in my personal opinion. Next let's assume you didn't write the code you're working on all by yourself (which in my personal experience is more than likely - or actually taken for granted, given that you didn't write Spring or Hibernate all by yourself).

What you can't do at this point is wave a magic wand and say "let it do what the client wants!". Instead, what you can do is the next best thing, which is to say "let's see what the user actually wants". And in the spirit of ExP what you'd do is spawn a console of some sort within the currently latest version of your application (hello Rails and Grails!), sit down either alone (with tons of useless documentation, trying to make your way through it) or with your actual customer, code 5 or 10 lines, and write the assertions once the user says that this is the desired result.

Once you have it all figured out there's tons of knowledge that you better store for posterity! To do that you move the necessary bits and pieces where they belong, cutting your assertions in oh so many places into separate methods that'd satisfy your good taste for testing a single thing at a time and of course moving the class you just wrote to a separate file. From this moment on it's TDD only, my friend. You've got to keep'em separated.

Oh, and before I forget: the best thing in all this is that you can actually use your eyes and brain to interpret the imperfect results before writing any assertions. Let's be honest: the assertions are only as good as you wrote them. If you already know what the actual outcome should be (like in the case of some tests you might find on the net) it's fine to do it the old-fashioned way, with separate tests and all that. What you do have to take into consideration while working with real-life stuff is mostly the imperfect steps in between, and how you can first spot them with sorts of debuglets (I love this term!) or other means, like the outcome of your script - which is a debuglet all along.

I hope I've shed some light on this ridiculous term that I've tried to coin. Just as a word of caution: it might make no sense whatsoever when applied to your situation. What I strongly believe is that it gives you the possibility of using a simple text editor to do the TDD kind of thing, with more freedom and options to choose from. That being said, I officially declare that I don't use an IDE for anything, and I mean it. IDEs are bad for oh so many reasons I won't even go there. They make you a cripple when you actually need to come up with an idea of your own!

Oh! And before I forget: once you've moved stuff back to where it belongs you need to "integrate" your class with the rest of the system. That's where regular TDD, IDEs and all the kinds of stuff that's floating around come in.

Would you believe this all came to be from a single conversation with a supervisor that didn't actually end up being my supervisor at all? Funny how things play out...

There's a number of people who have advocated this style of programming. Recently at the 33rd Degree conference Dierk Koenig showed how you'd go about writing a quicksort implementation this way. Previously Luca Bolognese showed the same kind of approach while presenting F# to the audience. It's all about learning, exploring and happiness :)

Have fun!

Sunday, May 15, 2011

The easy but you have to know it...

Yeah, this is the one that will probably make me look like a complete newbie. I've been developing apps in Ruby mostly on Ubuntu, but lately I've turned back to Windows and found out that on this platform not everything is taken for granted like it was on Ubuntu. So for that reason I'm making this note so that I don't forget.

If you're installing Ruby gems on Windows and they require compilation of some native extensions (like the JSON thing or Thin) then you need a C compiler to do that. In fact you can do that in multiple ways but there's one easy one that'll take the pain out of your neck in seconds.

What you do is you download the DevKit package, un-7zip it (it's actually a self-extracting archive so that's easy), cd to the directory where you unzipped it and execute the following 3 commands:
ruby dk.rb init
ruby dk.rb review (and make sure your Ruby installation is at correct place)
ruby dk.rb install
With that in place test it by issuing for example:
gem install json
and you should see the following
Fetching: json-1.5.1.gem (100%)
Temporarily enhancing PATH to include DevKit...
Building native extensions. This could take a while...
Successfully installed json-1.5.1
1 gem installed
Have fun!

Wednesday, May 11, 2011

Cross-origin resource sharing

Today I've stumbled upon quite an interesting topic while watching this video from MIX11. Around the 40th minute Giorgio Sardo brought up the topic of cross-site Ajax calls. At first I thought "OK, yet another proprietary extension in IE9", but then I started googling, and what I found (and subsequently here) exceeded my wildest expectations. As it turns out, it is implemented in all modern browsers. There's a catch to it: the server needs to be aware that such a request will come and attach a specific HTTP header to the response.

Since I don't do well with plain text explanations I've put together a Sinatra client and a Rack server to demonstrate this feature.


This is the client code using jQuery's load method to do the Ajax heavy lifting:

No surprise there, right? It looks like a regular Ajax request. The difference here is that it queries localhost on a different port (9292 - default Rack port) than the one that the request originated from (4567 - default Sinatra port).


Let's see how we can respond to such a request using a pure Rack application.
run lambda { |env|
  [ 200,
    {
      'Content-Type' => 'text/plain',
      'Access-Control-Allow-Origin' => env['HTTP_ORIGIN'] || '*',
      'Access-Control-Allow-Headers' => 'X-Requested-With'
    },
    [ env.inspect ]
  ]
}

The key here is the line that defines the Access-Control-Allow-Origin response header. When it's set to * it means that everyone can access this resource from everywhere. It's a sort of catch-all if you will. The value of env['HTTP_ORIGIN'] is the value sent by the browser to the server saying "Hey, I'm calling from here. Can I access your resource please?" And if the server agrees (which it does from everywhere at this point) the browser will honor the response and return the data back to the script. The Access-Control-Allow-Headers header is sometimes required for example by ExtJS' Ajax calls.
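Stripped of the Rack plumbing, the server's decision is pure header bookkeeping, which you can sketch and poke at as a plain Ruby lambda (cors_headers is a name I made up for illustration; it's not part of any library):

```ruby
# Builds the CORS response headers for a given Origin request header.
# A nil origin (a plain same-origin request) falls back to the '*' catch-all,
# mirroring the env['HTTP_ORIGIN'] || '*' expression in the Rack app above.
cors_headers = lambda do |origin|
  {
    'Access-Control-Allow-Origin'  => origin || '*',
    'Access-Control-Allow-Headers' => 'X-Requested-With'
  }
end

# Echoes the Sinatra client's origin back, so the browser hands over the data.
puts cors_headers.call('http://localhost:4567')['Access-Control-Allow-Origin']
```

A real server could also compare the origin against a whitelist here instead of agreeing with everyone.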

Here's the example for your convenience. You can start it by issuing the following commands:
cd server
start ruby -S rackup
cd ..\client
start ruby client.rb

After that open your favorite browser (I sure hope it's not IE < 8), navigate to http://localhost:4567 and observe the environment string from the server dumped onto your screen.
The client is obviously written in Sinatra, the best micro web framework ever and the server is a pure Rack application.

Sunday, May 8, 2011

Going ultra simple with JRuby, Tomcat and Rack

This one is for those who don't believe that web applications can be written in a single line :D With Rack and Warbler it actually is possible :D

First install the prerequisites and required gems as described in this post.

Create a file called config.ru with the following content
run lambda { |env| [ 200, { 'Content-Type'=>'text/plain' }, "Hello World!\n" ] }
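Since a Rack application is nothing but an object responding to call, you can sanity-check this one-liner straight from irb, no container needed (the empty env hash here is a stub, which this particular app happily ignores; I've wrapped the body in an array, as newer Rack versions expect):

```ruby
# The same lambda as in config.ru, with the body as an array of strings.
app = lambda { |env| [ 200, { 'Content-Type' => 'text/plain' }, [ "Hello World!\n" ] ] }

# The Rack contract: call(env) returns [status, headers, body].
status, headers, body = app.call({})
puts status                  # 200
puts headers['Content-Type'] # text/plain
puts body.join               # Hello World!
```

This is also why testing Rack apps is so pleasant - the whole "web application" is just a function you can invoke.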

Then run warbler like this:
jruby -S warble
... and deploy the resulting war file to tomcat. DONE!

Oh! And in case you were wondering: the performance of this application is just incredible! On a virtual machine with 2 cores at 2.19GHz each and 512MB RAM (Dell D830 laptop) it delivers around 2000 requests per second!

Ruby rocks!

Running Sinatra application on Tomcat

My recent fascination with the Sinatra framework has made me look for viable deployment options. At first I started with JRuby and Tomcat6.

So what do I need (top-to-bottom with only JDK installed) to successfully deploy a hello-world style application on Tomcat?

Installing prerequisites.

First you need a copy of JRuby and Tomcat6. Unzip both of them somewhere you'll have easy access to. In this tutorial I'll be using the root of drive C: because that's a location everyone has.
After doing so you should have 2 folders on your disk with the appropriate applications:


Installing required gems:

  C:\jruby-1.6.1\bin\jruby -S gem install sinatra sinatra-reloader warbler bundler

Creating the application.

Creating the hello-world style application (a slightly extended version) is really simple. First you create a folder in a place of your choosing. In that folder you just need to create 2 files, hello.rb and views/index.erb:

hello.rb:

require 'rubygems'
require 'sinatra'
require 'sinatra/reloader' if development?

get '/' do
  @message = "Hello, world! from Sinatra running JRuby on Tomcat"
  erb :index
end

views/index.erb:

<h1><%= @message %></h1>
The hello.rb is the main application file and views/index.erb is the supporting view.
All further command line commands must be executed from the folder you've just created!

Testing if everything works as expected.

To test if the application performs as expected issue the following command inside the folder you have created:
  C:\jruby-1.6.1\bin\jruby hello.rb
and if there are no errors you'll have the application available on port 4567 right away. Test it with your favorite browser navigating to http://localhost:4567

Creating deployment files.

This is a little bit more involved. We'll create 3 files now: the Gemfile, config.ru and config/warble.rb.

Gemfile:

source :rubygems
gem "sinatra"

config.ru:

require 'rubygems'
require 'hello'

set :run, false

run Sinatra::Application

config/warble.rb:

Warbler::Config.new do |config|
  config.dirs = %w(views)
  config.includes = FileList["hello.rb"]
  config.jar_name = "hello"
end
The Gemfile is just the definition of which gems are required by the application. The config.ru file defines how the Rack application is supposed to start. config/warble.rb tells the warbler tool which files go into the actual WAR file and what the output file name should be.

Create hello.war

To create the output hello.war file issue the following command:
C:\jruby-1.6.1\bin\jruby -S warble
At this point you should have the hello.war file ready for deployment.

Deploying to Tomcat.

This is the easy part. Simply copy the file hello.war to C:\apache-tomcat-6.0.29\webapps and start Tomcat.


It wasn't that hard, was it? Next time we'll see how we can do exactly the same with Apache HTTP Server and Passenger (a.k.a. mod_rails) running on a freshly installed Ubuntu 10.04.1.

And last but not least (because I know you've been waiting for it): the source code is obviously here for your entertainment :)

Have fun!

Edit on May 9th, 2011
I found out why specifying gem dependencies in config/warble.rb doesn't work as it should and why the workaround with the Gemfile is needed. Obviously, for any bigger project it is better to use a Gemfile and bundler, as it solves the problem of another developer jumping in and wanting to start hacking on the application. For simple projects the config/warble.rb should be enough, though. Up until now specifying gems didn't work as expected due to an obvious bug in warbler. Here's a fix that I proposed to solve the problem:

With that in place the config/warble.rb would look like this:

Warbler::Config.new do |config|
  config.jar_name = "hello"
  config.dirs = %w(views)
  config.includes = FileList["hello.rb"]

  config.gems = ["sinatra"]
end
and naturally the Gemfile would go away (one file less to keep track of :D in such a simple application)