Saturday, November 12, 2016

Flashing NodeMCU and Espruino

Howdy!

This is going to be a quick one. When flashing NodeMCU with new firmware you need to remember that esptool.py requires a few parameters to do its job correctly: -fm sets the flash mode (dio here), -fs the flash size (32m means 32 megabits, i.e. 4 MB) and -ff the flash frequency (40 MHz). And let's not forget the flash needs to be erased before we do anything.

esptool.py --port /dev/ttyUSB0 erase_flash
esptool.py --port /dev/ttyUSB0 write_flash -fm dio -fs 32m -ff 40m \
  0x000000 nodemcu-integer.bin \
  0x3fc000 esp_init_data_default.bin

Let me know if it works for you! Oh and btw - you can build the firmware online - I'm loving it!

With Espruino you do pretty much the same thing:

esptool.py --port /dev/ttyUSB0 write_flash -fm dio -fs 32m -ff 40m \
  0x0000 "boot_v1.6.bin" \
  0x1000 espruino_esp8266_user1.bin \
  0x3FC000 esp_init_data_default.bin \
  0x3FE000 blank.bin

The original documentation for Espruino will tell you to use a higher frequency and the qio access mode for the flash, but I found that it just doesn't work on my D1 mini clone.

Happy flashing!

Friday, November 4, 2016

Vue.js - finally a framework I feel comfortable using

The story so far...

I switched to frontend development about 10 months ago. I met a lot of new friends, encountered interesting ideas and got back to a place where you don't really need to travel halfway around the world to react to someone pressing a button. That was great up until I was acquainted with the JavaScript framework I was supposed to work with: Ember.js. I mean, the idea behind it is great, don't get me wrong, but it is the tiny little details that make it a nightmare to use. Features like mixins, mustache syntax with helpers, views in God knows which location, lots of decisions pre-made for the developer (like we're not smart enough to make them ourselves, doh!) and the sick Broccoli sauce on top of it - I hate Ember with a passion!

And so naturally I started looking for an alternative. First at AngularJS, then Angular 2 - none of those felt like a good choice at the time. At some point I found a presentation where a person on stage, scared to death, was criticizing MVC on the frontend, saying it's actually the wrong thing to do. That was a presentation of ReactJS. Funny enough, a long, long time ago I had a conversation with one of my colleagues about putting MVC to use in Delphi applications. It ended quite rapidly with me saying "You're insane!" and I have a strong feeling that it took 8 years for the industry to catch up with my statement.

But ReactJS felt a bit odd. And I don't mean the JSX syntax - I actually love it! It's the whole shebang with Redux, functional components, stateful components - it was just a lot to take in. On top of that you needed to acquaint yourself with Webpack, which at least fueled my interest in React for quite some time because of the hot module reloading. I even gave a talk about Webpack at a conference - that's how strongly I feel it's a good thing.

Let there be light (at the end of a tunnel)

And then, one evening, out of the blue, someone posted an article on LinkedIn about vue.js. I had never heard that name before. I gave it a quick read and got interested, but not nearly enough to continue reading that night. Still, some itch remained after I read that it was basically a simple framework. I can do simple - I actually prefer that over complex stuff - that is why I hate both Ember and JSF. A couple of days later I came back to it, read the whole user's guide in one evening and basically couldn't believe how obvious it all was. I don't mean just simple (vue.js concepts are quite sophisticated at times) but obvious. It takes the best of all the previous frameworks and does the same things, but in the right way.

Since then I have started a major project in vue.js, scaffolded with vue-cli, developed in Atom, with a Java/Spring backend, and I basically loved everything about it. The shorthand @ syntax for assigning event handlers is just so much better than having to have two methods, one for hooking up those events and one for unhooking them. The shorthand syntax for binding fields to views - awesome! The fact that you can have one file per component (not HAVE-TO but CAN) is such a relief after navigating millions of files in different locations in Ember. Using plain properties instead of custom-made this.get and this.set methods makes the code just so much more readable. But still - that's not the biggest win from using vue.js. It's how the framework structures and enforces things that makes it completely out of this world by comparison to other frameworks.
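
To give you a taste, here's a minimal sketch of my own (assuming Vue is loaded globally; the component and its fields are made up): the @ shorthand hooks up the handler and v-model does the binding, all in one place.

Vue.component('greeter', {
  // @click is shorthand for v-on:click; v-model binds the input to "name"
  template: '<div>' +
    '<input v-model="name">' +
    '<button @click="greet">Greet</button>' +
    '</div>',
  data: function() {
    return { name: 'world' };
  },
  methods: {
    greet: function() {
      alert('Hello, ' + this.name + '!');
    }
  }
});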

A short tale of data and props

One of the biggest fuckups in all other frameworks (and in Ember in particular) is the fact that you create the API of your component as you use it. This basically means that there is no single point where you actually define it. Not that it should bother you when you create the next "Hello, world!", but when it comes to a big project with 30 developers on it, having no definition of a component's API and no enforcement of defining one kills productivity after a few short weeks.

Short personal note: I have seen a presentation by a very nice girl from the Polymer project at Google where she stated that if you're going to design a new component's API, make it work like a button - not like the input.

vue.js actually makes you define the API or you can't use it! It even warns you that props should actually be read-only: you get a warning in the console when you try to assign a value to one! How cool and thoughtful is that?!
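
Here's what that looks like in practice - again a minimal sketch of my own, not code from any real project:

Vue.component('price-tag', {
  // this IS the component's API - declared and validated in one single place
  props: {
    amount: { type: Number, required: true },
    currency: { type: String, default: 'USD' }
  },
  template: '<span>{{ amount }} {{ currency }}</span>'
});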

Carry on my wayward son!

I think vue.js is the first of the great many frameworks I came across that actually captures what developers need to do their job in a professional and maintainable fashion. Back in the day there was ExtJS, which fell into the MVC trap as well (I don't know the current state of things with that one though), but otherwise none of the currently popular frameworks feel like a good fit for the job. So if you can - try vue.js out! And if your experience is not like mine - share it in the comments! I'd like to know why, and whether it will impact me as well.


Have fun!

Thursday, October 6, 2016

Speeding up NPM

NPM is slow. I mean it is really, really slow when downloading packages, partially because it makes a ton of requests to remote servers. That can be made faster!

There are 2 options that you'll find useful: npm_lazy and Nexus.

They both create a local cache that npm later interacts with instead of the remote repository.

npm_lazy

This one is really easy to setup and use. Just do
npm install -g npm_lazy
npm_lazy
and use npm with the local registry like so:
npm --registry http://localhost:8080 install
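
If you don't feel like passing --registry on every call, you can also persist it in npm's configuration:
npm config set registry http://localhost:8080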

Nexus

This is a bit more complex. Nexus can be started as a stand-alone app quite easily, but I would recommend using Docker as it is a far easier experience. So to start Nexus you type:
docker run -d -p 8081:8081 --name nexus sonatype/nexus
Then navigate to http://localhost:8081, log in as admin/admin123 and create an npm proxy. Select "Repositories" from the list on the left-hand side, then from the "Add..." dropdown at the top select "Add proxy repository" and fill in as follows:
  • Repository ID - npm-registry-proxy
  • Repository Name - NPM Registry Proxy
  • Provider - npm
  • Remote Storage Location - https://registry.npmjs.org
and click on "Create repository".

Use it as follows:

npm --registry http://localhost:8081/content/repositories/npm-registry-proxy install

What's the gain?

Depending on the project you'll see a 30-50 percent speed boost when downloading packages. For the example project that I used, here are the stats:

No proxy:  1m43s
npm_lazy:  1m22s
Nexus:     57s

Have fun!

Edit on 23rd of January 2017: Forget all of the above. Just switch to Yarn and live happily ever after.

Wednesday, October 5, 2016

Guarding Ember's computed properties

Sometimes you hunt a bug that just won't show itself. It literally won't allow itself to be found. Recently I had one of those cases. This time it was just unimaginable what had actually happened: someone, possibly by mistake, had overridden an Ember.ComputedProperty, thus breaking the chain of updates. On top of that the app updated the model over websockets, changing other properties of that model. This resulted in very strange flickering of values in an edit field. Very hard to diagnose.

After the bug was fixed I started looking at possible ways to guard against it in the future and possibly to find other places where the same thing happens. There are 2 cases where it might happen:

  • using this.set('property', 'value');
  • using a Handlebars template

One possible solution would be to reopen the Ember.Object class and override the Ember.Object.set method. Unfortunately that only guards you against explicit calls to this.set(...) and all the overrides from Handlebars templates remain silent.

But there is hope!

There is another class, Ember.ComputedProperty, that is actually used as the tool to get and set computed properties. It also features a set method that can be monkey-patched to do our check:

(function() {
  var orgSet = Ember.ComputedProperty.prototype.set;

  Ember.ComputedProperty.prototype.set = function(obj, key) {
    // a computed property that has a getter but no setter is effectively
    // read-only - assigning to it silently breaks the chain of updates
    if (obj[key] && obj[key]._getter && !obj[key]._setter) {
      console.error("Overriding computed property " +
        key + " on " + obj._debugContainerKey);
    }
    return orgSet.apply(this, arguments);
  };
})();

This way, if you override a computed property in a Handlebars template or in the code, you'll know about it. I only wish this were part of the core framework. It would have shaved about 10 man-days off my project...

Here is an example project with the fix applied.

Have fun!

Edit on October 6th, 2016

I have been made aware of an Ember addon by Stefan Penner that addresses the same issue, called ember-improved-cp. Although the idea of not using the private API of Ember.ComputedProperty is nice, the plugin is invasive: it means using addon-provided macros instead of those provided straight by the framework, which adds to the already high complexity of Ember-based projects. If you have a small enough project with few developers and you can afford that inconvenience, you should probably go with the addon. If you need a plug-play-unplug solution, you're going to get where you need to be much faster with the monkey-patched version of Ember.ComputedProperty.prototype.set.

Wednesday, June 29, 2016

Getting started with React and Redux

This is going to be a quick post as I have already created the whole thing elsewhere.

https://github.com/padcom/react-example-02

The React/Redux/Mocha/Webpack example project is ready! Check it out on GitHub! It has all the bells and whistles one would need to get started with a brand new project, so if you're like me - go ahead!

Happy coding!

Wednesday, June 22, 2016

Running tests with Mocha - ES6 style

So you have an ES6 code base and you'd like to test it using Mocha? Let's do it!

npm install --save-dev mocha babel-register babel-preset-es2015
echo '{ "presets": [ "es2015" ] }' > .babelrc
mocha --compilers js:babel-register *.test.js

Easy, right?

Let's go through it line by line. In the first line we're installing mocha (the testing framework), babel-register (which hooks the Babel transpiler into Node's require) and babel-preset-es2015 (which is required for the ES6-to-ES5 transpilation).

Then in the second line we're creating a .babelrc file that will tell Babel how to transpile our code.

In the last line we're telling Mocha to run our tests using babel-register as the compiler for files with the js extension. It'll pick up all files ending with .test.js and treat them as tests.
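
And just to be complete, a minimal ES6 test that this setup will happily run (my example, not part of the original setup):

import assert from 'assert';

describe('Array#indexOf', () => {
  it('returns -1 when the value is not present', () => {
    assert.equal([1, 2, 3].indexOf(4), -1);
  });
});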

Happy testing!

Monday, June 13, 2016

Teaching Promises to take '5'

Sometimes it is beneficial to execute an action in a Promise chain with a delay. A good example would be flashing fields after they have been updated as a result of an Ajax call, by writing something like:
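
(a sketch - fetchItems, highlightFields and unhighlightFields are made-up helpers here)

fetchItems()
  .then(updateFields)
  .then(highlightFields)
  .delay(5000)
  .then(unhighlightFields);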

Teaching the Promise class to delay execution is very simple: you add a delay method to Promise.prototype that returns a new Promise which waits using setTimeout and then resolves with the value that was passed down the chain. Here's how it looks:
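
(a minimal reconstruction following that description)

Promise.prototype.delay = function(ms) {
  return this.then(function(value) {
    // wait the given number of milliseconds, then pass the resolved value along
    return new Promise(function(resolve) {
      setTimeout(function() {
        resolve(value);
      }, ms);
    });
  });
};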

Neat, right?!

I have seen a similar approach in this post, but I don't quite like the use of then to execute another function - however proper, it just doesn't look as good as Promise.delay :)

Happy coding!

Sunday, June 5, 2016

ReactJS - the way forward

Let me tell you a story...

A long time ago I was at a point where my beloved technology, Delphi, started to decay as a viable option for future development. It started to change into this giant ball of everything point-and-click, and the beauty of the Object Pascal language was lost in the process. From my time with Delphi one particular component has always been my favorite: the TDrawGrid. Imagine, if you will, a grid component that needs nothing but a prescription for how to present data in cells. That's right - present data in cells, not specify what is in them through some intermediate state. In fact TDrawGrid was so awesome that every project I wrote that had to do with presenting data in a grid used it instead of TStringGrid. Making text bold or drawing icons in cells was simply a matter of defining the correct handler and that was it!

You'd think that the ease of presenting data in a fancy way was the best part of it, but it was not. The best part was that if the underlying data changed, all it took was invalidating a cell and the refresh of content happened automagically!

Like I wrote before, Delphi derailed a lot from what I liked and it was time for me to choose a different boat to sail. Realistically speaking there were only 2 choices: .NET and Java. Since I was working in a Java shop at the time it was only natural to lean towards Java, and so I did. I started working on web applications, remote Swing applications, backend services - you name it. That was the time when I was introduced to Java Server Faces - a technology I intuitively knew was just bad. I looked for alternatives, and as it turned out Groovy and Grails were my salvation at the time. I spent a year working on an internal application that I created from scratch in Grails and I was having the best time of my life.

Fast forward a few years and I felt like shedding the burden of being a Java developer, so I turned my attention to Ruby, Python and JavaScript. I absolutely fell in love with Sinatra, which is why I spent most of my time outside of work learning its ins and outs, picking up Ruby along the way.

Interestingly enough, life has thrown me in the direction of client-side applications. That obviously means JavaScript, HTML and CSS. I have worked with jQuery, a bit of Backbone, AngularJS, Ionic... And at every single turn it felt like the MVC pattern, while great when working on the server, is artificially jammed into the browser world. It just didn't feel right, you know? The apogee was when I started working with EmberJS a few months back. Although I think that having a framework that gives you pretty much everything is not necessarily a bad thing (see Ruby on Rails), my opinion is that if you want to create an application that will prove resilient over time, taking an all-in approach is not necessarily the best way to go.

And so, a few weeks back I arrived at a presentation from 2014 (man! so many good things happened in 2014!!!) where a nice lady, stressed out like hell, was trying to convey the message that MVC seems not to make for a good, maintainable architecture in the browser. Luckily there was an alternative: ReactJS. I spent a couple of days watching videos on ReactJS and Flux, and the biggest surprise was how similar it is to the TDrawGrid from Delphi! Finally there is a single source of data that has a simple way of being presented, and the refresh is just there - you don't even have to think about it. No bindings, no mutators, no fancy conventions like Ember enforces - just a simple render method that spits out the prescription for how things should look.

Just like JSF and SwingML/XwingML in Java, I really hope that MVC frameworks on the client will one day be a bad memory that lingers only to remind me of the times before ReactJS. Who knows, maybe I'll even go as far as starting a http://ihateemberjs.com one day like I did with http://ihatejsf.com? Who knows :D

Thursday, May 19, 2016

Baibulo is born!

Howdy, folks!

As promised (although it took forever and then some), my first Node module is released! Baibulo ("version" in Chewa) is a very simple versioned content server backed by Redis. Check it out - the README should be enough to get you started, but if that is not good enough there is a working example that you can basically take and play with :)

I know there are probably a ton of things that should be done differently, so please share if you think something makes sense or not. Feedback is very much welcome!

Happy coding!

Wednesday, May 11, 2016

The world around us is changing

A few years back, when working on a redesign of a small internal application for market managers at Travelocity, I made the decision to go with the new and shiny single-page application approach. Back in 2010 it was quite unheard of and spawned questions like "is the browser going to be fast enough?" or "do we have good enough developers to do it?". To tell you the truth, both questions should have been answered in the negative back then, but as it turned out the competition I was up against was of such low quality that anything that actually worked would have been better. To sum it up, the app was created in Grails, of which I mostly used the GSP engine to create the index file and a few controllers to provide the API. In all seriousness this was quite a simple app, but the effect it made on users exceeded my wildest dreams.

Now fast forward a few years and not only does everyone know what a single-page application is, but the world has been divided into the client side and the server side. It seems that everybody who forgets this is going to be left behind soon-ish :)

One of the benefits of thinking about the system as 2 entities (the client app backed by an API) is that intuitively one arrives at a place where the two get developed independently. This means 2 version control repositories, 2 separate teams, 2 different build systems and 2 different sets of skills required to do the job right.

As some of you already know, I am working on an intelligent home system for myself. Recently I decided that I'd like it to be a testing field for ideas I have stumbled upon elsewhere - one that would let me exercise, in a scenario more realistic than a hello-world-type app, the things others came up with. That being said, I did come across a presentation from RailsConf 2014 about splitting the deployment of static assets (a.k.a. the frontend application) and using Redis to store the content of the index file so that it can be, sort of, versioned. This is one of those ideas that are really worth exploring, and since I had already made the decision to split the backend and frontend into two separate repos, it was the perfect scenario to play with.

Due to my current professional occupation I've gotten a lot more interested in JavaScript development, on both the backend and the frontend. And so I decided to implement the first version of the versioned content server in Node, using Redis as advertised by Luke Melia, but with a slight twist. I'm not really interested in the S3 portion of it, since I will be serving things from a server I know about, which means I can do all the file serving myself. That being said, I decided to just version everything and use Redis as a versioned file system for my static assets.

Since I am a fanatic enthusiast of the Sinatra framework, it is no surprise that I selected ExpressJS as my weapon of choice. Both Sinatra and Express have the notion of middleware that can sit between the bare metal and your app, doing God knows what. In my case I wanted to completely take over a portion of the server and just serve things from Redis instead of the file system, like the express.static middleware would. The schema for naming keys I came up with is quite simple:

prefix:/context/path:[version]
prefix:/context/path:content-type:[version]
prefix:/context/path:etag:[version]

and to store the current version that will be served when no version is specified I'd use

prefix:/context:[version]
One of the benefits of the "store it all in Redis" approach is that one can query Redis for the full list of available versions, which makes a version-selector page an easy task. Specifying the version happens via a version=[specific-version] query parameter, as that allows for easy creation of links to particular versions. It just so happens that all the assets being retrieved carry a Referer header containing that particular information from the original URL, so other requests (including XMLHttpRequests) can take advantage of the specified version. And since this is mostly used in scenarios I have full control over (testing, preview and the like), there is no problem with proxies stripping that info out of the request. And since the version can be any string, I am able to deploy a feature branch, a test branch, a new proper version - whatever I want - and it just transparently works. The middleware is mounted per context, like /hello, and uses that mount point as part of the Redis key to differentiate between frontend apps in case there is more than one.
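
To make it more concrete, here is a stripped-down sketch of such a middleware. It is my illustration, not the actual implementation: the content: key prefix and the 404 handling are made up, and the content-type, etag and Referer lookups are omitted for brevity.

var express = require('express');
var redis = require('redis').createClient();

// serve the frontend app mounted at the given context straight from Redis
function versioned(context) {
  return function(req, res) {
    // the current version to serve lives under prefix:/context
    redis.get('content:' + context, function(err, current) {
      var version = req.query.version || current;
      redis.get('content:' + context + req.path + ':' + version, function(err, body) {
        if (err || body === null) return res.status(404).end();
        res.send(body);
      });
    });
  };
}

var app = express();
app.use('/hello', versioned('/hello'));
app.listen(8080);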

The one problem I needed to solve was how to upload stuff to Redis in an efficient manner. For now I have not solved the "efficient" part of it and I just store each file, with some metadata, under each version in Redis. This has the added benefit of my being able to completely remove one version from Redis whenever I want, in an easy fashion.

The next challenge I am working on is to be able to serve a single version of frontend against 2 or more versions of the backend. This can be quite useful when updating the backend and validating it against the current or upcoming release of the frontend app.

I will soon publish it as part of the Aplaster project, and maybe release the server and content uploader as a package for everyone to use. We'll see :)

Happy coding!

Monday, March 14, 2016

The anatomy of a unit

Back in the winter of 2005 I started working on a flight planning system for commercial airlines. There I was presented with a shitload of legacy code written to perform functions, not to be understood. Suffice it to say that it took 9 months to get new developers on board - even brilliant ones like my good friend Adrian. Sure, the domain is just hard. All the physics, advanced maths, algorithms, spherical calculations and God knows what else, combined with 10+ years of code originally converted from Fortran, produced a mixture very hard to work with. Around that same time (late 2005/early 2006) I was introduced to the idea of unit testing through a library called DUnit.

A funny thing (judging from the perspective of 2016) happened in the flight planning product. I was introduced to an idea that would make the Pervasive-based database backend (which was actually just pure BTrieve with no SQL on top to sugar-coat it) interchangeable with a "real" SQL database like Oracle or Microsoft's SQL Server. Looking at it from today's perspective, switching from a NoSQL database that performed really well for the sake of "we need to support X because our clients require it of us" sounds like pure nonsense, but it was what it was. The most disturbing thing about that project wasn't the what, as you might have guessed, but the how of it. Basically what happened was that BTrieve API calls were literally translated into an extensive SQL builder. It was a disaster for a number of reasons: first, the idea that the SQL calls would perform anywhere near the speed of native BTrieve was just wrong. Nowadays we know that, for example, a fine-tuned Redis handles fast data inserts from multiple sources better than, let's say, MySQL - it is a simple fact. But back in the day the desire to run the flight planner off of SQL Server was more important than speed. And even though the project ultimately failed, with the Japanese government not approving it due to performance reasons (surprise!), that wasn't its biggest sin in my opinion.

Another sub-project of that solution happened in the meantime, having to do with parsing weather messages coming off a satellite dish. Nowadays that'd be a service working mainly on regular expressions (as is the case with text parsing), having clear separation of pretty much everything and being unit tested to death, but back then it was a piece of work created by just one programmer, the lead programmer of that project, consisting of just one procedure in just one Delphi unit with a cyclomatic complexity of around 6000 (that's six thousand!). It proved to be a fantastic testing ground for my tool for calculating that metric using McCabe's simplified method and gave me a ton of fun to work with. There was just one problem with the entire thing: it just didn't work as it was supposed to.

What both of those pieces had in common was code that was hard to understand, hard to read and fucking hard to fix. What you don't know is that the first one actually had unit tests! The coverage wasn't great (around 60%), but its readability factor was no better than that of the 6000-complexity walpha unit for parsing weather data. Why was that the case? Why were both solutions so bad, and how could they have been made better?

To answer that, one first needs to understand what a unit in Pascal-like languages is. Let's take a look at the anatomy of one below:

unit MyUnit;

interface

uses
  Classes, SysUtils;

const
  SOME_CONSTANT = 123;

type
  TMyClass = class (TObject)
  private
  protected
  public
    constructor Create;
  published
  end;

function GetMyObject: TMyClass;

implementation

uses
  SomeOtherUnit;

{ TMyClass }

{ Private declarations }

{ Protected declarations }

{ Public declarations }

constructor TMyClass.Create;
begin
..
end;

{ Published declarations }

{ Global declarations }

var
  MyObject: TMyClass;

function GetMyObject: TMyClass;
begin
  Result := MyObject;
end;

initialization
  MyObject := TMyClass.Create;

finalization
  FreeAndNil(MyObject);

end.

Without going into too much detail, one can clearly see the separation of the interface and implementation sections, and the list of units the code depends on explicitly and implicitly. But what is most important is that a unit in this form describes a piece that is, in point of fact, self-sufficient. Let's take a look at the sins of the BTrieve API rewrite and the WAlpha madness.

The SQLisation of the BTrieve API was initiated by this guy Nathan. Nathan was an architect back at the company and was high on Java, which was the next best thing since sliced bread back in the day. Nathan was also a very buzz-oriented person, so naturally when TDD became a thing in the industry he quickly realized that all pre-TDD code was shit and that all his future inventions would finally be good. Nathan led the project with another colleague of mine who got strung out on TDD the same way. Never mind that the project was actually carried out in Delphi and not Java - since DUnit was already around, they decided to take it to the next level. And that they did. Each and every class got an interface, each interface and class was put into a separate file, each had a unit test - all according to the best practices. What it meant for a developer using their code was a screen-long uses statement, anemic tests and a system so complex they had no idea how it worked. Debugging took weeks, and even though the system had such great code coverage (of which they were so proud!) it failed when it came to real-world usage.

The WAlpha case is on the other end of the spectrum. It was carried out by an experienced programmer, Irene, who had been with the project for years. I think she might have had the longest participation in the project besides the original author. She was used to the codebase, never paid any attention to suggestions from younger teammates and, what is even more frightening, she was in a position of power, axe in hand, that could expel you from the project in a heartbeat. So, as I said before, she did the coding on WAlpha all by herself. She wasn't very big on the whole TDD buzz that was going around, so she did what she did best - she tested all the code inside the Delphi integrated debugger (a phenomenal piece of software compared to anything else I knew back in 2006!). And when it finally worked she called it a day and collected the awards coming her way for the job, obviously, well done.

For a very short time I took part in the BTrieve API thingy but couldn't stand the stink of Java in Delphi. It was just too much. I said to myself that I could write something better over a weekend that would work faster and have less code than what all those geniuses did. And I was right! A weekend and a six-pack later I had a fully working read-only solution to the problem, with the write portion 80% done and not completed only because the weekend ran out. Leaning on the shoulders of ADO drivers for SQL Server and Oracle, I was able to navigate the tables and search through them, all blazing fast. The original project used the same drivers but was dead set on the SQL aspect, which turned out to be a disaster. Soon after I presented my solution to the team, I heard that it was very nice but (and here comes the best part!) "we have invested so much time already that we won't back out now". Funny enough, my little side project turned out to be a fully working solution that I was able to offer to other companies, while their abrupt solution didn't make it to production at all.

Those are just 2 examples of projects that failed to stand the test of time. They differed in design, in the concepts used to create them, and in the developers and their prior experience. What they have in common is that in both cases the developers focused their attention on the wrong thing - not on what was actually needed for them to succeed. The thing I am referring to is clarity. Back in 1994 I read an article about different developers on the demoscene (Amiga and C64 were my thing back then) and what they viewed as the most important thing in software development. One of them stated that the code doesn't need to work and be bug-free right away, but needs to be written so that it is easy to navigate and fix, whereas the other stated that he didn't care at all about those qualities because all that counted was that it looked cool when shown at a copy party on the big screen. In my opinion both of the guys were right in their own areas. When you write code once, make money on it and throw it away (not even pass it on for further development - just throw it away), concentrating on clarity, test coverage, readability and whatever else comes to mind when we talk about properly engineered code makes absolutely no sense. It is pure waste, and everyone should understand that. On the other hand, if the code will be maintained for months and years to come, forgetting about readability and concentrating only on how high the coverage is and how fast the tests run will bring all kinds of curses from your fellow programmers.

There's one universal truth to software development that has never changed: code is read much more often than it is written. It's that simple. If you write code that is tested like Fort Knox but nobody can understand what the hell you meant, everyone will be in trouble (most importantly you, if you're still around!).

There's another truth that I think is the mother of all statements: in software development there is no substitute for thinking. No discipline is going to make you a professional programmer, and no design pattern is going to let you create readable code, even though we tell ourselves that design patterns are the vocabulary of modern software development. My friends, you can use 10 design patterns and still make everybody hate you with a passion if you don't pay attention to readability and clarity.

There's another piece that I find irritating about the unit testing paradigm - especially the test-first approach. When I code I usually have no idea what will come out of what I am doing. I explore ideas and options, usually figuring stuff out as I go. I might give a library a go if I think it might help me out, or I might put together some code from stackoverflow.com to see if it actually does the thing I want it to do. At the time of writing I have no idea if it will be production-quality-top-notch-super-duper or if I'm going to flush it down the drain in a few minutes/hours/days. And as such I try to follow my heart and I don't write tests (much less test-first). I do YAGNI, because I assume that what I created is shit and nobody will want to see it. Later on, when it turns out to be valuable, I tend to write system tests to make sure I lock the end result in place. I test the whole thing in as much isolation as possible - but not an inch more. I seldom write real unit tests as such (except when the architect of a solution is still strung out on code coverage - then I do it for his pleasure). I think that testing code in isolation makes absolutely no sense whatsoever. Single methods are useless pieces of a whole system that, if exercised in separation, give one no clue whether they work as part of the whole. In Delphi, the concept of a unit allows a developer to put together an implementation of a fully functioning unit of work that can be nicely tested through the provided interface. No other language that I know of goes about this the way Delphi does. And the funny thing is that Pascal wasn't even created with that in mind - units were a remedy for switching between header and source in C and C++! But the definition of a unit is, in my personal opinion, the best there is in all the languages I have worked with. Those units make sense to be tested.

Remember: think before you write, read it after you've written it and again in two weeks' time. If what you wrote makes no sense, re-write it until it is readable. Refactor, extract, rename, test, unit-test, re-test - do whatever you need to make sure it's not going to be an ordeal for whoever works with that piece next. For all you know it might be you!

Saturday, March 12, 2016

Unit test coverage means nothing

If you're like me or any other person that got hooked on TDD, you might want to take a look at this presentation by the creator of Ruby on Rails from RailsConf 2014:

RailsConf 2014 - Keynote: Writing Software by David Heinemeier Hansson

I think this is the missing piece of revelation that I have been looking for for years on end. It always felt like there was something wrong with the world of TDD, but I just couldn't figure it out myself. I did have a project with 100% code coverage and tests running bloody fast that broke down in the first week of being online, and I did write software, working for 10+ years now, that had no tests whatsoever and still performs its duties today without a hiccup! I also wrote a ton of software that I'm not proud of, but some pieces from 17 years back, when I read through them now, look as though I had a genius by my side in terms of readability and clarity. It is a pure stunning experience.

Go write some code that you'll be proud reading 10+ years from now!

Windows and Git - config --global not working

When you use Git and you're behind a corporate firewall that blocks access to remote repositories via the git:// protocol, many sites advise you to exchange the git:// protocol for https://:

git config --global url.https://.insteadOf git://

That's all nice and dandy until you're on Windows, where this simply doesn't work when used from npm. I only experienced the problem on Windows 10, but I'm pretty sure it's going to be equally fucked up on any other version. The reason is that npm uses some kind of different user to clone the repos. Let's face it: whatever the reason, Windows just sucks big time anyway.

The only way I found that makes it work and allows installing packages using npm is to do the same configuration system-wide, like so:

git config --system url.https://.insteadOf git://

I'm far, far away from saying that life's good again but that piece works now.

Saturday, February 27, 2016

Samba public share

Samba is overly complex. The number of configuration options makes it very configurable and therefore cool, but some of those options are just completely crazy.

One such example is the creation of a publicly available folder - something that I have no doubt is very popular when you build a NAS server at home and just want one network share to exchange files between computers. Doing that on Microsoft Windows is quite simple: you just specify that Everyone shall have read/write permissions and that is it. With Samba on Linux the case is not quite so easy. Here's an example configuration that achieves just that:

[public]
  path = /storage-location-of-public-drive
  guest ok = yes
  read only = no
  public = yes
  browseable = yes
  writeable = yes
  create mask = 0666
  force create mode = 0666
  security mask = 0666
  force security mode = 0666
  directory mask = 0777
  force directory mode = 0777
  directory security mask = 0777
  force directory security mode = 0777

I dare someone to logically explain why the hell one needs 4 entries to set the same thing (create mask, force create mode, security mask and force security mode) and then defend that as a sane thing to do.

Anyways... Creating public Samba share demystified

Thursday, February 4, 2016

Top 10 Most Common Mistakes That Java Developers Make

I recently came across a very interesting article by a gentleman called Mikhail Selivanov describing a number of problems young developers struggle with.

Top 10 Most Common Mistakes That Java Developers Make

Even if you're an experienced developer you might find it interesting. We pros tend to forget what mistakes can be made. Going through them helps us understand our young colleagues better.

Have a nice day!

Monday, February 1, 2016

ESP8266 ESP-01 and LUA (NodeMCU)

Sometimes the fates are just too kind. For example, I learned a few days ago about the ESP8266 chip. It has a complete 802.11b/g/n WiFi stack, a few general-purpose I/O pins, loads of RAM, an 80 MHz 32-bit CPU and loads of stuff already in ROM, an RTOS to name just one big thing.

As is usually the case with new hardware, the nice parts are pretty much always accompanied by some ugliness creeping around the corner. The ESP8266 is no exception. By default it comes in 2 flavors: as a module that you talk to using AT commands (just like modems) and as a NodeMCU module with a LUA interpreter inside.

So let's say you bought the ESP-01 module, because it was the cheapest, and you have played around with the AT commands for a while. It is really fun for the first hour or so, but then, with its unexpected reboots and whatnot, programming something that communicates with it (like an Arduino) and making it resilient to all the quirks becomes just distasteful. Another reason for making the switch might be that doing all the communication directly on the chip (as opposed to waiting for user input via the serial interface) is just a lot faster (20+ requests per second vs about 1!!!). Luckily, you actually can load the LUA firmware onto the ESP-01.

From this moment on if you're not a Linux user you need to start looking somewhere else because those instructions won't work for you.

First you need the NodeMCU firmware. You can build it locally, but you also can (and I would strongly recommend you do) use the cloud-based service that has been specifically designed for building it. You just enter your email, select the LUA modules you'd like to have available and presto! In a few minutes you'll be sent a link to the binaries.

Once you have it, it is time to upload the firmware to your ESP-01. To do that you'll need the esptool.py utility (just the one file!). Hooking up the board isn't all that difficult:

The diagram is taken from https://importhack.wordpress.com/2014/11/22/how-to-use-ep8266-esp-01-as-a-sensor-web-client/

Important note: the D0 pin, when not in firmware upload mode, should be connected to VCC (or otherwise used), and the CH_PD pin should be connected directly to VCC (and not, for example, through a resistor).

Once you've double-checked and triple-checked all the wires, made sure the supplied voltage is 3.3V and NOT 5V (I have BURNED 2 modules just because of that!), and your module responds to commands in the terminal, it is time to finally burn the LUA firmware:

./esptool.py --port /dev/ttyUSB0 write_flash 0x0000 nodemcu-*-float.bin

That will do the trick :) Now you get to learn LUA (I like the video tutorial by Derek Banas) and start hacking. Because LUA and NodeMCU are so popular, there is a ton of examples on the Internet to learn from. For uploading your scripts I strongly recommend getting yourself a copy of ESPlorer, which makes the whole experience a lot more approachable for mere mortals.

Happy hacking!

Friday, January 22, 2016

Intermission

Today was kind of a special day. You don't get too many of those. We had a company event with presentations and all, and at the end of it we had the pleasure of experiencing J.R. Hawker, a former pilot and leader of the R.A.F. Red Arrows team.

One thing that struck me (again) was that he was a leader who actually grew out of the team he led. I find that really interesting with all the personas out there like Mark Zuckerberg, Larry Page and Sergey Brin, Jack Dorsey, Evan Williams, Biz Stone and Noah Glass, Jeff Atwood and Joel Spolsky leading the businesses they created. Even if you take a look at the icons of the modern computer industry, like Bill Gates and Steve Jobs, one of the things they all have in common is that they grew as leaders out of the businesses they created. They were down in the details, creating the first prototypes, doing all the dirty work, basically spinning up the product.

Well, I wonder what kind of leader it takes to jump into a moving bullet train and steer it in a direction other than a concrete wall... I'm sure it takes an exceptional character.

Anyways... Thank you, Mr. Hawker, for the presentation. It was really great to see someone speak about something with passion!

As further reading I'd recommend the most popular TED talk ever, by Sir Ken Robinson. You'll find that one of the things both talks have in common is accepting being wrong and using that as a tool to get better at what you do. Really inspiring!

Wednesday, January 20, 2016

Web platform performance comparison

I have developed a deeper interest in Node.js recently, especially in the area of server applications. Node's HTTP module makes it extremely easy to create small and very fast applications. As a polyglot software developer I decided to test how Node compares to other frameworks. For this I have chosen Rack, Sinatra, node/http and, due to my current occupation, Java servlets with Tomcat as the servlet container.

I started the benchmark with Siege and ab but soon realized that there was something wrong with the world: the bottleneck was the performance of the request initiator itself. I have since switched to wrk, which does a much better job all around.

Ruby

I started the test with Rack on Thin. Rack is the bare-bones HTTP interface for Ruby, so the idea was that it would make a good reference point for the others.

Running 10s test @ http://localhost:9292
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.38ms  619.35us  16.05ms   92.73%
    Req/Sec     1.42k   200.93     1.59k    95.56%
  25527 requests in 10.02s, 3.12MB read
  Socket errors: connect 10, read 0, write 0, timeout 0
Requests/sec:   2546.72
Transfer/sec:    318.34KB

I ended up running this test about 10 times with different threading models on Thin and couldn't get it past the 3k rps mark.

Sinatra

This is the most elegant solution of all. Sinatra is just hands down the cleanest one ever.

Performance-wise one might clearly say that under Thin it performs very much OK:

Running 10s test @ http://localhost:4567/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.90ms    0.91ms  13.55ms   84.49%
    Req/Sec     1.75k   120.16     1.99k    68.00%
  34765 requests in 10.01s, 7.43MB read
Requests/sec:   3474.70
Transfer/sec:    760.09KB

I mean, almost 3.5k requests per second, and a nice API to program against... What more can we expect?

Node + http

Having established the baseline, now was the time to start poking around Node.js.
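
The whole server is just a handful of lines (a sketch along the lines of the canonical node/http hello world):

var http = require('http');

http.createServer(function(req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello, world!\n');
}).listen(8080);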

Well, that is also very succinct, even though the asynchronous API might feel a bit weird at the beginning. Performance-wise it was exceptionally good!

Running 10s test @ http://localhost:8080
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   627.53us    1.19ms  37.32ms   99.26%
    Req/Sec     8.95k     1.06k   18.06k    97.01%
  179056 requests in 10.10s, 22.03MB read
Requests/sec:  17729.12
Transfer/sec:      2.18MB

17.7k requests per second! Compared to Ruby that is 5.1 times better performance!

The platform itself is single-threaded, so in order to make use of all the CPU power one would simply spin up a few instances on different ports and put a load balancer in front of them. Luckily there is a node module called loadbalancer which makes the whole experience quite approachable for mere mortals:

Running 10s test @ http://localhost:8080/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   658.55us    0.94ms  17.34ms   95.67%
    Req/Sec     9.55k     1.40k   19.58k    83.08%
  191007 requests in 10.10s, 23.50MB read
Requests/sec:  18912.15
Transfer/sec:      2.33MB

The setup is way more complex - there are 2 types of applications and all that - but the gain isn't what I would have expected.

Then, since JavaScript is supposedly slow, I decided to give HAProxy a go. After all, it is a specialized application for balancing high-load traffic, so I expected way better performance than from the simplistic JavaScript-based load balancer. And so I downloaded the latest sources, built it, configured it and ran the tests.

And here are the test results:

Running 10s test @ http://localhost:8090/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.91ms    0.92ms  12.88ms   93.20%
    Req/Sec     6.53k   729.78    11.21k    80.60%
  130444 requests in 10.10s, 13.06MB read
Requests/sec:  12915.95
Transfer/sec:      1.29MB

What what what??? HAProxy is 30% slower than a proxy written in JavaScript? Can this be true? I'm sure the configuration I came up with can be tuned, so I'm going to call this one a draw and move on.

Node has one more option to choose from: the cluster module. The idea is quite simple: fork the current process, bind the socket to the parent's port and let the parent distribute the load over the spawned children. It's brilliant in that it doesn't add the overhead of an additional proxied request, so it should be really fast!
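
A sketch of that setup (my reconstruction, not the exact code I benchmarked):

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // fork one worker per CPU core; the master hands the connections around
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  http.createServer(function(req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello, world!\n');
  }).listen(8080);
}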

As you can see it is very simple and also very expressive. If you add that this is actually the only file in the entire solution, it starts to take a really nice shape. My computer has 4 cores, so I'll be spawning 4 processes and letting them process the requests in a round-robin fashion. Now let's take a look at the results:

Running 10s test @ http://localhost:8080/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   592.01us    2.58ms  80.36ms   97.85%
    Req/Sec    15.96k     2.56k   19.74k    87.62%
  320693 requests in 10.10s, 38.54MB read
Requests/sec:  31751.61
Transfer/sec:      3.82MB

Wow!! 31.7k requests per second! Fricken amazing performance! JavaScript's V8 engine rocks! Let's leave it at that.

Java

Now, to get a sense of where Node with those 31.7k rps places itself on the landscape, I decided to test Java servlets. I didn't go with Spark or anything else of that sort, since I wanted to compare only the most prominent solutions (well, that and Sinatra, since you just can't ignore that extremely beautiful framework).

The setup is a Maven project with just one servlet and the web.xml. Let's see the performance of that baby:

Running 10s test @ http://localhost:8080/example/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.90ms    5.34ms  85.90ms   92.74%
    Req/Sec    21.62k    12.25k   43.35k    50.50%
  430345 requests in 10.00s, 47.69MB read
Requests/sec:  43028.84
Transfer/sec:      4.77MB

Hold your horses! Yes, it is faster, but one needs to remember that Java needs time to get up to its top performance. So I ran the test once again:

Running 10s test @ http://localhost:8080/example/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   804.67us    1.92ms  32.48ms   91.16%
    Req/Sec    31.80k     2.68k   37.12k    77.00%
  632656 requests in 10.00s, 70.10MB read
Requests/sec:  63258.42
Transfer/sec:      7.01MB

Now that is just unbelievable! 63.2k requests a second is twice the speed the fastest Node solution was capable of yielding! Twice!

Post scriptum

In reality the performance of the platform doesn't really matter all that much. If you take into consideration the response times, they are all below 3ms, which in turn means that if you make one call to the database you have already blown the budget, as that is going to cost you way more than just a couple of milliseconds. But it is really nice to know the characteristics, and to know that performance-wise it really doesn't matter what you choose these days. The framework is going to perform at an acceptable level. Even Sinatra, with its 3.5k rps, is still fast enough to serve thousands of requests a minute, which is more than enough for most corporate solutions out there.

For a much more complete comparison of many frameworks and platforms, check out the TechEmpower site.

Happy coding!

Thursday, January 7, 2016

ENC28J60 Ethernet Shield V1.0 for Arduino Nano 3.0 RJ45

Wouldn't it be great? You put a shield onto the Arduino Nano, which then fits perfectly into a breadboard, and then you can put together anything you'd like using just jumper wires.

Well, it is perfectly feasible with the ENC28J60 Ethernet Shield V1.0 for Arduino Nano 3.0 RJ45 (the so-called Web Server module).

The only problem is that when you stress it out (for example using ab or siege), then after just a few seconds, sometimes a few minutes, or even a few hours, the module will just stop responding and die. Initially, when I bought it, I was pretty damn sure it had some sort of manufacturing problem, so I ordered another one. Oh boy, was I crushed when the new one, after being shipped for over a month, arrived with the same kind of problem... Damn!

Internet to the rescue

I started (yet again) digging into the matter using the most sophisticated tool mankind has ever created (that'd be google.com). Lo and behold, I found this post, where a gentleman who goes by the name of mmorcos found out that a similar issue occurred when connecting a similar module to a Duemilanove. The solution was to use a more powerful power source than the USB cable. I tried it using an old breadboard power supply module and now the module is being bombarded (coming up on 11 hours) at a steady 150 requests a second without a single packet being lost! Yeah!

Alternatively, you could try powering the hardware from a USB 3.0 port, which supplies 900mA as opposed to the 500mA of USB 2.0.

Happy hacking!

Monday, January 4, 2016

Calculating pressure reduced to sea level (JavaScript)

Problem

The pressure read from the BMP180 sensor is an absolute pressure. To be able to compare it with readings from other sensors it needs to be reduced to a common reference: sea level. For example, the pressure given in your weather forecast is reduced to sea level. How do we do that in JavaScript?

The solution

The following calculateSeaLevelPressure() function is based on this blog post (in Polish!) and validated against a PHP implementation.
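
Here is a sketch of the idea, using the international barometric height formula (pressure in hPa, altitude in meters, temperature in °C) - an approximation of mine, not necessarily the exact code from that post:

function calculateSeaLevelPressure(pressure, altitude, temperature) {
  // international barometric height formula, solved for sea-level pressure
  return pressure * Math.pow(
    1 - (0.0065 * altitude) / (temperature + 0.0065 * altitude + 273.15),
    -5.257);
}

// example: 985 hPa measured at 300 m and 20°C gives roughly 1020 hPa
console.log(calculateSeaLevelPressure(985, 300, 20));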

Happy coding!

Saturday, January 2, 2016

webled - a web-enabled light emitting diode

There are different styles of hello-world applications. Some just spit out the Hello, world! message to whatever output is provided, some read a digital input and react to it, and some just toggle an LED. This one is of the latter kind, using a Raspberry Pi and a green LED.

Disclaimer

I am not responsible for any damage you might cause to your hardware. Use at your own risk.

The setup

It's really straightforward to wire everything up. You start by connecting PIN01 through a resistor (let's say 100 Ohm) to the anode (the longer lead) and then the cathode (the shorter lead) to PIN40. That's it.

Now we need a web application to work with that new high-end device of ours. We start by installing Flask - a Sinatra-inspired framework for Python.

pi@raspberrypi:~ $ sudo pip install flask

Code

That will take a while (~ a minute) and after it is done we finally get to do some coding.
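
Here is a sketch of the whole app (my reconstruction - it follows the walkthrough below, so treat the details as approximate):

from flask import Flask, jsonify
import RPi.GPIO as GPIO

PIN40 = 40
state = GPIO.LOW

app = Flask(__name__)

@app.route('/')
def hello():
    # toggle the state and store the new value on PIN40
    global state
    state = GPIO.LOW if state == GPIO.HIGH else GPIO.HIGH
    GPIO.output(PIN40, state)
    return jsonify(state=state)

if __name__ == '__main__':
    GPIO.setmode(GPIO.BOARD)     # refer to pins by their position on the board
    GPIO.setup(PIN40, GPIO.OUT)  # PIN40 switches our device on and off
    GPIO.output(PIN40, state)
    try:
        app.run(host='0.0.0.0')
    finally:
        GPIO.cleanup()           # let others use the library afterwards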

The code is pretty straightforward. First we import Flask and jsonify for later use, then we create a Flask application instance and map the root URL to the hello function, which, when executed, toggles the state and stores the new value on PIN40. In the initialization block we first set the GPIO.BOARD mode, to be able to refer to the pins as they are laid out on the board (as opposed to the GPIO.BCM mode, which refers to the CPU's pin numbering), set the PIN40 mode to GPIO.OUT to be able to switch our device on and off, and initialize the state of that pin. When the application finishes, we clean things up so that others can use the library.

The host='0.0.0.0' makes it possible to access the application from other hosts, as opposed to host='127.0.0.1' (the default), which would only allow access from the board itself.

Test drive

To test drive it, open up a web browser, navigate to http://localhost:5000 and watch the LED light up. When you refresh the page it will go dark again. And so on and so forth. Such is the purpose of a web-enabled LED (a webled).

Happy hacking!

Friday, January 1, 2016

Happy new year!

I'm going to wish you a new year in which all your dreams come true! Let all your embedded systems work with 100% uptime, never failing and always behaving! Let all your web applications work 100% of the time and all your web services respond in under a tenth of a second!

May the force be with you! Always...