Thursday, December 31, 2015

Setting clock on DS3231 using RPi

The problem

When using the DS3231 module with, for example, an Arduino-based project, there is no easy way to properly set the actual time on that device. There is, however, a way using Linux. Since we'll be using I2C to hook up the clock module, you are going to need either an i2c-tiny-usb interface or, for example, a Raspberry Pi, which has the bus readily available on its GPIO header.

The solution

Connect the module to the I2C bus, load the appropriate kernel module, initialize the device to let the system know there is a hardware clock attached, synchronize the system clock over NTP, and finally store the system clock value in the RTC:

pi@raspberrypi:~ $ sudo modprobe rtc_ds1307
pi@raspberrypi:~ $ echo ds1307 0x68 | sudo tee /sys/class/i2c-adapter/i2c-1/new_device
pi@raspberrypi:~ $ sudo ntpd -gq
pi@raspberrypi:~ $ sudo hwclock -w --local
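
To double-check, and to restore the time on subsequent boots, you can go the other way round (same module, bus and address as above - a quick sketch of the idea):

pi@raspberrypi:~ $ sudo hwclock -r          # read the time stored in the DS3231
pi@raspberrypi:~ $ sudo hwclock -s --local  # set the system clock from the RTC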

That's it! Happy hacking!

InfluxDB, Telegraf and Grafana on Raspberry Pi 2

Grafana is a great data visualization application. I use it all the time for all kinds of purposes, most notably to chart information from my weather station. Recently there have been some serious changes in Telegraf that make it play nicely with InfluxDB 0.9+.

Speaking of awesome, I have recently bought a Raspberry Pi 2 Model B. It's a great little machine with 4 cores and 1GB of RAM. That got me thinking: how about installing InfluxDB, Telegraf and Grafana on it?

Unfortunately it is not so simple. There are no official packages of the latest versions for Raspbian, not to mention the 0.3.0-beta2 version of Telegraf. So to use them I had to compile them myself. Compiling Go programs is actually quite easy with GVM (the Go equivalent of RVM) but one needs to learn how to use it first. There are some resources on the Internet to help out with the process, but the biggest problem of all is high CPU usage over a long period of time during compilation, which leads to high CPU temperatures (up to 70°C). There are 3 methods to overcome that problem:

  • Buy heatsinks (already on the way)
  • Cool it down with a fan
  • Find pre-compiled packages

I went with the second option, which kept the CPU at a steady 34°C, but let me tell you: compiling the Go compiler itself took forever (5h+) on the little machine. To avoid having to do it all again I have prepared all 3 packages so that I, and everyone else willing to try it out, can grab them and skip the boring part.

influxdb_0.9.6.1_armhf.deb
telegraf_0.3.0-beta2_armhf.deb
grafana_2.6.0_armhf.deb

Now all that is left is to download those packages and install them using dpkg -i <package-name>.

There are 2 things to note: one, there is no telegraf database by default (you need to create one yourself using the WUI - web user interface - with CREATE DATABASE "telegraf"); two, there is no defaults file for telegraf, which makes it impossible for it to start right out of the box. To fix the latter, create one like so:

sudo touch /etc/default/telegraf
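
As for the missing telegraf database, if you prefer the command line over the WUI, the HTTP API does the trick just as well (assuming InfluxDB listens on its default port 8086):

curl -G http://localhost:8086/query --data-urlencode 'q=CREATE DATABASE "telegraf"'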

Also, the grafana-server service isn't enabled by default. This is mentioned at the end of the installation but for convenience here are the incantations to make it work:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable grafana-server
sudo /bin/systemctl start grafana-server

Edit on January 11th 2016
As oriste pointed out, the grafana package is a lot smaller than the one for Linux x64 found on the Grafana site. I have looked into it and the reason is that my package does not include the phantomjs binary in the /usr/share/grafana/vendor/phantomjs/ folder that is used to prepare snapshots for sharing. If you'd like to use it you can always get the binary from https://github.com/piksel/phantomjs-raspberrypi/raw/master/bin/phantomjs

Edit on January 11th 2016
I have verified the packages and added some installation procedures that will help put together a monitoring solution using the three tools.

Edit on October 6th, 2016
There are ready-made packages for armhf for both Telegraf and InfluxDB now. Use those instead! You might be puzzled as to how to download the deb packages for armhf: what you do is take the URL of the amd64 package, change the architecture in the URL to armhf and presto - you get a link to the RPi package, for example:

https://dl.influxdata.com/telegraf/releases/telegraf_1.1.1_amd64.deb
change amd64 to armhf and you get
https://dl.influxdata.com/telegraf/releases/telegraf_1.1.1_armhf.deb

Similar with InfluxDB:

https://dl.influxdata.com/influxdb/releases/influxdb_1.1.0_amd64.deb
change amd64 to armhf and you get
https://dl.influxdata.com/influxdb/releases/influxdb_1.1.0_armhf.deb

It seems those packages are built for every release but not directly listed on the page - hence the URL magic you need to do to get them.
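
In other words it boils down to something like this (using wget; the version numbers will obviously drift over time):

wget https://dl.influxdata.com/telegraf/releases/telegraf_1.1.1_armhf.deb
wget https://dl.influxdata.com/influxdb/releases/influxdb_1.1.0_armhf.deb
sudo dpkg -i influxdb_1.1.0_armhf.deb telegraf_1.1.1_armhf.deb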

Have fun!

Sunday, December 27, 2015

Running LCDProc on Digispark

The project I was working on recently was to get the Digispark module to act as a Linux I2C-to-USB bridge. It wasn't all that straightforward, so I decided to put together a little tutorial on how to do it.

This is what the breadboard version of it looks like.

Let's get to hacking!

Requirements

You will obviously need a Digispark or a clone of it. I used one I bought from aliexpress.com and it does the trick quite nicely.

Let's assume you don't have the micronucleus bootloader installed and build it from scratch. You'll need an AVR programmer like the USBasp (cheap and does the job very well).

Done. In case you're missing any libraries or tools, here's the list of required packages:

build-essential, avrdude, avr-libc, binutils-avr, gcc-avr

You'll also need an LCD with a PCF8574A or PCF8574T expander. I used one with the T variant, which required me to apply a patch to the LCDProc sources. More on that soon.

The project

First we need the littlewire version of the firmware for our Digispark. You can get it from GitHub:

git clone https://github.com/nopdotcom/i2c_tiny_usb-on-Little-Wire

Building it and installing it onto your Digispark is very straightforward and described in detail here.

Let's get to the hacky part of it!

Patching LittleWire firmware

As it turns out, the original driver for i2c-tiny-usb was reluctant to recognize the hardware as a proper I2C adapter, so I needed to update the vendor and device IDs in the firmware to make it work.
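
For the record, the i2c-tiny-usb kernel driver matches the 0403:c631 vendor/product pair, so that is what the firmware has to report. In V-USB-based firmware the IDs live in usbconfig.h as little-endian byte pairs, so the change amounts to something along these lines (macro names per V-USB convention - check your firmware tree):

/* usbconfig.h */
#define USB_CFG_VENDOR_ID  0x03, 0x04  /* 0x0403 */
#define USB_CFG_DEVICE_ID  0x31, 0xc6  /* 0xc631 */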

After rebuilding the firmware, installing it again on the Digispark and loading the i2c-tiny-usb and i2c-dev modules using modprobe, the command i2cdetect -l finally started spitting out some good news :)

padcom@aphrodite:~$ sudo i2cdetect -l
i2c-0 i2c        i915 gmbus ssc                   I2C adapter
i2c-1 i2c        i915 gmbus vga                   I2C adapter
i2c-2 i2c        i915 gmbus panel                 I2C adapter
i2c-3 i2c        i915 gmbus dpc                   I2C adapter
i2c-4 i2c        i915 gmbus dpb                   I2C adapter
i2c-5 i2c        i915 gmbus dpd                   I2C adapter
i2c-6 i2c        DPDDC-B                          I2C adapter
i2c-7 i2c        DPDDC-C                          I2C adapter
i2c-8 i2c        i2c-tiny-usb at bus 001 device 057 I2C adapter

After hooking up the LCD to pins 0 (SDA) and 2 (SCL) and adding two 4.7kΩ pull-up resistors, the LCD was properly recognized as well :)

padcom@aphrodite:~$ sudo i2cdetect 8
WARNING! This program can confuse your I2C bus, cause data loss and worse!
I will probe file /dev/i2c-8.
I will probe address range 0x03-0x77.
Continue? [Y/n] 
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- -- 
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
20: -- -- -- -- -- -- -- 27 -- -- -- -- -- -- -- -- 
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
70: -- -- -- -- -- -- -- --                         

Success! Well, actually, getting to this point took me over 2 days of figuring out all the moving parts, but the joy of having it actually work was just great :)

Now it's time to do some serious work with that new piece of hardware. Let's use LCDProc to display some system statistics! Download, extract... and if you're using the PCF8574T version of the expander, apply the aforementioned patch before building.

All that is left is to configure and build the LCDProc package. Tested also on a Raspberry Pi 2, where it works flawlessly.

./configure --enable-drivers=hd44780 && make && sudo make install

Now edit the /usr/local/etc/LCDd.conf file as follows

DriverPath=/usr/local/lib/lcdproc/
Driver=hd44780
ServerScreen=off

And under the section [hd44780] make sure you have the following values

ConnectionType=i2c
Device=/dev/i2c-8 # that is the I2C bus id you have from i2cdetect -l
Port=0x27         # that is the I2C device id you have from i2cdetect 8

Check out the other options too - they are all properly commented so it should be easy to figure out what they mean.

Starting it all up

First start the LCDd daemon. You might want to start it initially with the -f (foreground) parameter to check if it works ok.

LCDd -f

Next you need to run the client. Yes! It is a client-server architecture! Use lcdproc --help for the list of all available options. I use the SMP-CPU version the most:

lcdproc -f P

That's it! I admit this is more hassle than I generally like but getting it working is worth every late-night minute I spent on it :)

Happy hacking!

Sunday, December 20, 2015

Using HC-06 to program Arduino

I have been fighting the HC-06 module for a bit longer than I am usually comfortable with and decided to put it all together here.

The instructions should be the same on all Linux distributions with the bluez package installed. I have verified it on Ubuntu 15.10 with all the updates, where it doesn't work. It does, however, work on Linux Mint 17.2...

  1. Configure the HC-06 module to connect at 115200 bps - best done with the Arduino IDE and a USB-to-TTL converter:

    AT+BAUD8

  2. Using a USBAsp programmer (or another Arduino) burn the Arduino Mini bootloader (Pro Mini didn't work at all in my case!)
  3. Find out the MAC address of the module you want to use:

    hcitool scan

  4. Connect the module (replace xx:xx:xx with the actual MAC of your module):

    sudo rfcomm connect /dev/rfcomm0 98:D3:32:xx:xx:xx 1

    The red LED on the HC-06 should go solid at this point

  5. Reset the board using the on-board reset switch or by short-circuiting the RST pin to ground
  6. Validate the connection with AVRDUDE

    avrdude -c arduino -p m328p -P /dev/rfcomm0

    You should see something like this:

    user@host:~$ avrdude -c arduino -p m328p -P /dev/rfcomm0 
    
    avrdude: AVR device initialized and ready to accept instructions
    
    Reading | ################################################## | 100% 0.07s
    
    avrdude: Device signature = 0x1e950f
    
    avrdude: safemode: Fuses OK (H:00, E:00, L:00)
    
    avrdude done.  Thank you.
    

Happy connecting!

Wednesday, December 9, 2015

Getting started with STM32 on Ubuntu

I have received an entry-level STM32F103C8T6 board and was eager to take it for a spin. After some digging it became clear that the board isn't quite as easy to get started with as an Arduino (even though it was advertised as such). But let's take it one step at a time.

For the sake of clarity I am using Linux Mint, but the instructions should be fine for any Debian-based Linux distro out there.

Install Arduino

This step is fairly easy. You go to the Arduino website and download the package. I'm using the one for 64-bit Linux.

After downloading, extract it somewhere (I like having a programs folder but you can use /opt for all I care).

Install Arduino STM32 Hardware support

That step we can actually automate quite nicely (not that we can't do the same with installing the Arduino IDE :))
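
In essence the automated step boils down to cloning Roger Clark's Arduino_STM32 repository into the IDE's hardware folder (one way of doing it - paths assume the default sketchbook location, adjust to taste):

mkdir -p ~/Arduino/hardware
cd ~/Arduino/hardware
git clone https://github.com/rogerclarkmelbourne/Arduino_STM32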

Done. Now the problem is that once you start uploading your sketches you'll bump into all sorts of compilation and linkage errors. To overcome them execute the following:

Now I know it ain't the nicest way of putting it all together but unfortunately I didn't find a nicer way of installing the needed stdc++ library.

Start using Arduino

My chosen way of working with the board was to use a USB-to-TTL board. To hook it up properly, first connect +3.3V and GND, then RX and TX to A9 and A10 (if it isn't working right away, swap RX and TX).

Start Arduino IDE. From Tools select Board: STM32F103C series, Variant: 20k RAM, 64K Flash, Upload method: serial. That should set you up nicely.

Hardware setup

Now don't you be forgetting the jumper settings! For a serial upload the BOOT0 jumper goes to 1 (and back to 0 for normal boot afterwards).

Test drive

Now we need an app to run. We're not going to be very sophisticated here - just a blink on PC13:
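
(A minimal sketch; note that on these boards the on-board LED sits on PC13 and is active-low, so LOW means "on".)

void setup() {
  pinMode(PC13, OUTPUT);
}

void loop() {
  digitalWrite(PC13, LOW);   // LED on
  delay(500);
  digitalWrite(PC13, HIGH);  // LED off
  delay(500);
}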

Now hit the upload button. The compilation step takes a while so don't be alarmed. Once it is done the green LED should blink :)

Happy coding

Tuesday, November 24, 2015

Building configurable applications

As is usually the case recently, I've been reviewing options to build web applications that can be easily transferred between environments. I found some very interesting examples of what can be done to achieve quite interesting results. But let's start at the beginning...

The project has been released to the public as com.github.testdriven:cfgagent:1.0.0

The problem

The problem usually is that the applications we develop tend to have too many configuration options. Database connection string, SMTP server settings, file locations (more than one), connection pool configuration... just to name a few. As the number of options grows (the project I'm working on now has about 200 of those), passing them all on the command line using JAVA_OPTS is just not possible. And I don't mean "not practical" but plain and simple not possible, due to the limits of what a command line can take. There has to be a better way to do it.

There actually is a mechanism one can use with Tomcat called catalina.properties (and I'm sure one would find similar ones in other containers) but that is not going to fly if we want to select which set of options to use for a particular run, or if the values should come from environment variables (for example from Docker or Heroku).

JVM Agent to the rescue

I've been looking for a while for a nice, pluggable solution to this problem. My idea revolved around something that could be fixed but parameterized at the same time, could be specified at execution time (just like JAVA_OPTS are) and in general would be easy to use regardless of the execution environment.

Looking at it from different angles I turned my attention to what is executed before the main method. As it turns out there is such a thing, and it is the concept of Java Agents.

Without further ado let's see the solution:
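
(A minimal sketch of the idea - the real sources live in the repository mentioned above; class and file names here are illustrative.)

import java.io.FileInputStream;
import java.util.Map;
import java.util.Properties;

public class ConfigAgent {
    // invoked by the JVM before main() when started with
    // -javaagent:cfgagent.jar=some.properties
    public static void premain(String agentArgs) throws Exception {
        String fileName = (agentArgs == null || agentArgs.isEmpty())
                ? "system.properties" : agentArgs;
        Properties properties = new Properties();
        try (FileInputStream input = new FileInputStream(fileName)) {
            properties.load(input);
        }
        for (String name : properties.stringPropertyNames()) {
            String value = properties.getProperty(name);
            // replace {PLACEHOLDER}s with environment variable values
            for (Map.Entry<String, String> env : System.getenv().entrySet()) {
                value = value.replace("{" + env.getKey() + "}", env.getValue());
            }
            System.setProperty(name, value);
        }
    }
}

Packaged in a jar with a Premain-Class entry in the manifest, that's all there is to it.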

The code is pretty much self-explanatory. First we load the properties from the file name given to the agent (default: system.properties), then we replace all the {placeholders} with environment variable values.

Example usage

First we create a Docker (instantly cool) container with Oracle. Then we start Tomcat with system properties configured based on the system.properties file.
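
In shell terms that could look like this (image name and paths picked purely for illustration):

docker run -d -p 1521:1521 --name oracle wnameless/oracle-xe-11g
export CATALINA_OPTS="-javaagent:cfgagent-1.0.0.jar=system.properties"
catalina.sh run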

Conclusion

This small utility can easily be used with any type of process where the _OPTS part is getting way too long to be maintained. I've tried it with JBoss, Tomcat and the command line, and it worked great every single time :) It also has the nice property that you compose behaviors instead of hard-coding them in every application, and your products depend only on the concept of system properties (System.getProperty()) that is already present in Java.

Happy coding!

It feels great to be blogging again :)

Monday, November 23, 2015

Running SQL queries - the groovy way

Recently I've been tasked with the creation of a utility that executes predefined queries against a database via a command-line interface. A maintenance sort of utility. I thought this was a perfect opportunity to freshen up my Groovy DSL skills and see what I could put together to make the code readable.

The idea

The generic idea was to create an object that would describe what the query is all about, with placeholders, then fill in those placeholders and finally execute the statement. It's a one-off operation so we won't be concerning ourselves with connection pooling and the like. Let's KISS.

The implementation

What I'm about to show you here is an oversimplified version of an implementation that might be useful. Actually it is quite a bit of overhead compared to the simplicity Groovy already has in its toolset (the groovy.sql.Sql class), but it opens up a way of thinking about interoperability with the database.

So here we go. First we create a class that houses the main property (the query to be executed) and allows for switching context to one where only execution-time properties play a role. In that switched context we execute the query and complete the execution.
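
A condensed Groovy sketch of that idea (using an in-memory H2 database so the example is runnable; the original code differs):

@GrabConfig(systemClassLoader=true)
@Grab('com.h2database:h2:1.4.190')
import groovy.sql.Sql

class Query {
    String statement

    def execute(Object... params) {
        def sql = Sql.newInstance('jdbc:h2:mem:test', 'sa', '', 'org.h2.Driver')
        try {
            // fill in the positional placeholders and run the statement
            sql.eachRow(statement, params as List) { row -> println row }
        } finally {
            sql.close()
        }
    }
}

// hypothetical usage - a tiny in-memory schema to run against
def db = Sql.newInstance('jdbc:h2:mem:test', 'sa', '', 'org.h2.Driver')
db.execute('CREATE TABLE users(id INT, name VARCHAR(50))')
db.execute("INSERT INTO users VALUES(1, 'John')")

new Query(statement: 'SELECT name FROM users WHERE id = ?').execute(1)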

Imagine now that instead of just using the Object[] array one would allow for a map to be passed on, like so:
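
(As it happens, groovy.sql.Sql supports exactly that out of the box - named :placeholders resolved from a map - so the sketch needs very little code:)

@GrabConfig(systemClassLoader=true)
@Grab('com.h2database:h2:1.4.190')
import groovy.sql.Sql

def db = Sql.newInstance('jdbc:h2:mem:test', 'sa', '', 'org.h2.Driver')
db.execute('CREATE TABLE users(id INT, name VARCHAR(50))')
db.execute("INSERT INTO users VALUES(1, 'John')")

// :id in the statement is filled in from the map - no positional juggling
db.eachRow('SELECT name FROM users WHERE id = :id', [id: 1]) { row ->
    println row.name
}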

It really doesn't get much more complex than this in regard to predefined queries :) If you extracted the engine that fills in templates from placeholders and a map of keys (placeholders) and values, you could apply the same to pretty much anything that takes a string and parameters and executes it - for example the execution of external commands :)

Happy coding

Sunday, November 22, 2015

Algorithms, data structures, accidental and essential complexity, inheritance, composition and functional programming

Today we're going to go through the tools a programmer has in order to see how and when we can use them to solve programming problems.

Let's start with some of the good old slogans that we have been fed for years:


  • Every loop can be replaced with functional programming
  • Every decision can be replaced with inheritance
  • Every inheritance can be replaced with composition

Does it mean that all our inheritance structures should be 2 levels deep (Object -> MyClass) and that inheritance in itself is bad? Does it mean that if we use a switch/case or an if we're committing a crime against purity? Are those just relics of an era that's come and gone? What do you think?

I strongly believe that if a tool exists (even one like the goto instruction), and even if the fashion for it has been mentally superseded by a new construct (or even an older one!), then it is still valid to use it in certain contexts. For example, goto in the context of text parsers is still very much valid and will remain in use for years to come.

So how about loops and conditions? When is the right place to use them?

First things first - inheritance

Let's tackle inheritance first because it is the simplest one. You inherited genes from your parents because you're a human being, like them. If you were a dog you'd not inherit a thing from your human owners, as they are not your parents (although your missus loves you very much!). An employee is most probably a human being (although there are examples where that would not be true) and a car is a vehicle (when it works; otherwise it's a problem).

To put it bluntly: whenever you are the kind of thing you inherit from, you're doing the right thing. When you're inheriting because you'd like to have that same functionality but extended or twisted, you're committing a felony and you should burn in hell.

Complexity

Let's take a look at the types of complexity we're working with on a daily basis. As we've already been told, there is essential complexity (driven by the complexity of the domain we're working with) and accidental complexity (driven by the technical details of the framework/language you're working in). A good example everyone can relate to is sorting, so let's take a look at 2 quicksort implementations:
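
(The original Java listing was 56 lines long; what follows is an abridged sketch of the classic in-place version, enough to convey the shape of it.)

public class Quicksort {
    public static void sort(int[] array) {
        quicksort(array, 0, array.length - 1);
    }

    private static void quicksort(int[] array, int low, int high) {
        int i = low, j = high;
        int pivot = array[low + (high - low) / 2];  // the middle point
        while (i <= j) {
            while (array[i] < pivot) i++;           // lesser than the pivot...
            while (array[j] > pivot) j--;           // ...and greater than it
            if (i <= j) {                           // swap the misplaced pair
                int tmp = array[i];
                array[i] = array[j];
                array[j] = tmp;
                i++;
                j--;
            }
        }
        if (low < j) quicksort(array, low, j);      // sort what's left...
        if (i < high) quicksort(array, i, high);    // ...and what's right
    }
}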

It is pretty straightforward, right? At first we're slicing the input in half, figuring out the elements that are lesser than the middle point and swapping them out with those that are to the right. Basically we're grouping elements smaller, equal and greater than the pivot and sorting those groups further until there is anything to sort. Efficiency aside the algorithm presented here is pretty clear. Can it get any clearer?

When I described the algorithm to you in the paragraph above, I told you not how we're grouping things but that we group them. I even gave you the recipe for assigning elements to their destination group. This is the essential complexity of the algorithm. Can we remove all the accidental complexity? Is that even possible? Let's try it out with Groovy:
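
(Reconstructed from the description below - treat it as a sketch rather than the original listing.)

def quicksort(list) {
    if (list.size() < 2) return list                // exit criteria
    def pivot = list[list.size().intdiv(2)]         // the middle point
    def groups = list.groupBy { it <=> pivot }      // -1: smaller, 0: equal, 1: bigger
    quicksort(groups[-1] ?: []) + (groups[0] ?: []) + quicksort(groups[1] ?: [])
}

assert quicksort([3, 1, 4, 1, 5, 9, 2, 6]) == [1, 1, 2, 3, 4, 5, 6, 9]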

Try to read the above code and explain to yourself what it does and how it does it. Immediately you'll notice that it lacks something. I mean, it must lack something, right? The Java implementation is 56 lines long and this is just 6, so there surely is a huge gap between those two implementations. But is there really a gap? First we define the exit criteria (list size less than 2), then we figure out the middle point, then we group the elements by their comparative ratio (if a group has no elements we substitute an empty list), then we take all the smaller ones, sort them, add those equal to the pivot, and then sort the bigger ones and add them too. It takes more words to describe what is taking place than to just read the algorithm!

By the way... Take note of the if statement. It's there because the algorithm dictates it.

I think we should go over one more example, maybe less algorithmic but definitely very useful: running external applications from Java.
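
From memory, the Java way goes more or less like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Exec {
    public static void main(String[] args) throws Exception {
        // get hold of the runtime and execute the command
        Process process = Runtime.getRuntime().exec("ls -la");
        // read the output line-by-line from a buffered reader
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}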

It is, again, very straightforward: get hold of the runtime, execute the command, read the output line-by-line from a buffered reader. Done. Can it be any simpler?
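
It can. In Groovy the whole thing collapses into a one-liner, courtesy of the String.execute() extension method:

println 'ls -la'.execute().text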

Wooow! Now that is really the salt itself: just execute the command I gave you and print out the text that came out of it. It's almost like writing a bash script but with the indescribable potential you get when using a full-fledged, mainstream JVM-based programming language.

Personal preference aside I think those two examples demonstrate pretty clearly that software can be created much faster and in a readable fashion. Obviously I'm far from suggesting you write your own quicksort implementation. Those types of lego pieces are already in place and you don't really need to worry about them. But you do need to worry about your algorithms.

Algorithms and data

Remember when you were introduced to the SOLID principles? My God, what a bizarre harness for a developer hearing about them for the first time! Hands get soaking wet, the head shakes in disbelief, and immediately questions like "why" and "how" start to arise - but what they really are is a means to implement extendable, easy-to-understand programs.

The single responsibility principle tells you that a piece of code shall have only one reason to be changed. That reason would be, for example, a change in the algorithm - but as you probably already know, those change very infrequently.

The open/closed principle tells you to write your software in such a way that you can extend it without modifying (and recompiling) what is already there.

The Liskov substitution principle tells you that no matter which implementation you're going to use, the outcome of the algorithm shall stay the same. That doesn't mean there can be only a single implementation, because the sole purpose of providing a different implementation is to introduce change. But if your algorithm is generic enough, it will always work the same and the changing parts will be properly externalized. A fantastic example - of both the template method pattern and composition over inheritance - is JdbcTemplate in Spring JDBC. It allows you to tell "in place" what behavior you'd like to inflict on the result of your query, and takes all that nagging boiler-plate away. No matter what you do, the algorithm will always do the same thing, which is iterate over the ResultSet and call your method when appropriate.
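
To make that concrete, here's a minimal sketch (assuming Spring JDBC on the classpath and an already configured DataSource; the names are illustrative):

import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class UserNames {
    // JdbcTemplate owns the algorithm: acquire a connection, iterate the
    // ResultSet, clean up. We are only called back to map each row.
    public static List<String> load(DataSource dataSource) {
        JdbcTemplate template = new JdbcTemplate(dataSource);
        return template.query("SELECT name FROM users",
                (rs, rowNum) -> rs.getString("name"));
    }
}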

The interface segregation principle tells you to expose only as much functionality as the user of your code will actually need. So, for example, JdbcTemplate could theoretically give you the option to close the ResultSet, but it doesn't, because you'll never need it (and if you do, you're doing something wrong).

The dependency inversion principle, modeled after one of Hollywood's most famous quotes ("Don't call us - we'll call you"), is also very well observed in JdbcTemplate, where the algorithm takes complete control over the flow of your program and calls you back when your intervention is required.

In addition to all this, Niklaus Wirth came out with that book of his, Algorithms + Data Structures = Programs. So what's that all about? Has it not been superseded by the SOLID principles?

The truth is that the SOLID principles back up the book's message, which is to make a point of dividing the essential from the accidental complexity - to make sure you don't bake data into your algorithms. What do I mean by this?

Suppose you're writing a very simple program to tell the driver that the current environmental conditions are somehow dangerous. My car has that kind of feature: when it is too cold (below 4°C) a subtle alarm will go off and the temperature display will blink a few times. What my car lacks, however, is that if it is too hot (and for me that'd be > 30°C) then I should be notified of this as well. Otherwise I could fall into the same danger as a boiled frog. I suspect that the program they have has some fixed boundaries, maybe something like this:
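
(A Groovy sketch - the actual firmware is anyone's guess, of course.)

def alarm(message) { println message }

// the boundary is baked right into the code
def checkTemperature(temp) {
    if (temp < 4) {
        alarm('Careful - possible ice!')
    }
}

checkTemperature(3)   // prints the warning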

What is wrong with this type of programming is not that the temperature is not extracted to some constant, or even taken from some function that would combine it with different sensor readings. The bad part is that if we wanted to extend it we would definitely need to change it, and that is not kosher.
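
Compare it with a version where the boundaries are data rather than code (again, just a Groovy sketch):

def alarm(message) { println message }

// the rules are data - adding the heat warning is a new entry, not new code
def rules = [
    [applies: { temp -> temp < 4 },  message: 'Careful - possible ice!'],
    [applies: { temp -> temp > 30 }, message: 'Careful - heat!'],
]

def checkTemperature(temp, rules) {
    rules.findAll { it.applies(temp) }.each { alarm(it.message) }
}

checkTemperature(32, rules)   // Careful - heat!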

That's a whole different story! You can even read the settings from a database or mock the input in tests! Cool, ain't it?

Conclusion

Every time you write a program, think about whether what you are doing comes from the essential complexity of the domain you're working with, or whether it is just because the tools you have are not powerful enough to let you express it more elegantly. If you find yourself in the latter position, extend the tools (use meta-programming, use reflection, use monkey-patching, use whatever you can) to make sure that if you read the algorithm a year from now it will still read nicely and clearly.

Happy coding!

Friday, August 28, 2015

Maven, Groovy, Appassembler...

Today I wanted to show you how to structure a Maven project so that the end result looks like a Maven or Ant distribution. Let's start with the basics.

The problem

When you download Maven or Ant you'll notice that the structure has some very well defined folders. There is a bin folder holding a script that you execute, then there is a lib folder where you store all the jars and then there is some kind of configuration folder (conf, etc) that contains the configuration.

Achieving this kind of final project structure with Maven isn't all that hard but it is a combination of at least 2 plugins.

The solution

First of all you need to make sure you are able to package the application properly. For that I use both the appassembler plugin and the assembly plugin. The first one brings together the necessary project artifacts, introduces platform-specific startup scripts and finishes off the final project structure by copying the production configuration files to their proper location. In our case the final structure will look like this:


bin   <-- platform-dependent scripts that start your application
conf  <-- production configuration files
lib   <-- all the jars live here

Next we use the assembly plugin to package everything nicely into a zip file for distribution. It makes sure the unix/linux scripts have the executable bit set too :)

The second part is enabling the Groovy language in the project. There are many ways to use Groovy with Maven but my favorite is the groovy-eclipse-compiler. What it does is automatic joint compilation of both .groovy and .java files, so you can interchangeably use Groovy from Java and Java from Groovy. I mean it! You see absolutely no difference (besides the obviously cleaner Groovy code). You put all the files into the obvious src/main/java folder and that's it!

Since we already have Groovy on the classpath and the Groovy compiler processes all the sources, wouldn't it be nice to use it in some tests? How about Spock? There is a catch: there are 2 types of tests you would normally write: unit tests/specs and integration tests/specs. To address both needs we'll use 2 Maven plugins: the standard maven-surefire-plugin for everything unit-related and the maven-failsafe-plugin for all them integration tests. Easy!
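
Just for illustration, a minimal (hypothetical) spec looks like this - it gets picked up as long as the class name matches the plugins' include patterns:

import spock.lang.Specification

class AdditionSpec extends Specification {
    def "should add two numbers"() {
        expect:
        1 + 2 == 3
    }
}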

And last but not least: I like being able to execute the tool right from Maven's command line. To do that I use the exec-maven-plugin. It's very straightforward. You just define the main class, give it some id, done.

Summary

It may seem like a lot of work to set it all up and I certainly didn't do it all at once. But now that I have it all in one place I find myself using it more and more often. And it just works! And it makes it dead easy to strip parts out for other projects - web applications, an integration-tests-only module - you name it!

You can explore a sample project that has all that configured on my GitHub page: https://github.com/padcom/groovy-example. It's all nicely documented to make it easier to follow and customize. All the Java and Groovy source code is obviously trash but is only there to illustrate the idea. You can make use of all the configuration like so:


mvn clean install exec:java

You'll see the unit tests/specs executed as part of the build, then appassembler will kick in and build the final structure, then the assembly plugin will package it all nicely into a zip file. Next the integration tests will be executed using the failsafe plugin, and both artifacts (the jar and the zip) will be installed in your local Maven cache (~/.m2/repository by default). Of course you can also find it all in the target folder where it is initially built.

Enjoy!

Monday, June 22, 2015

Building runC on Ubuntu

I just got really excited about runC after watching the intro presentation from DockerCon and wanted to try it out. It seems my knowledge of the Go programming language and its ecosystem was just not enough to go ahead and do it, and a little bit of googling was required. To ease the pain for newcomers, here's how I did it.

First you need to clone the repository, obviously:
git clone https://github.com/opencontainers/runc
Then you need to make it. To do so you obviously need to have Go installed. I installed mine from the following PPA:
sudo apt-add-repository -y ppa:evarlast/golang1.4
sudo apt-get update
sudo apt-get install golang
Then you do the actual building:
GOPATH="$(pwd)" PATH="$PATH:$GOPATH/bin" make
The last step is installing it, which needs to be done as root:
sudo make install
That's it! Now you can go and play with runc - it's AWESOME!

Sunday, February 8, 2015

Reading encoders with Arduino

I know that the encoder topic has been done to death already (it seems it's next in line after the blink example) but I've nowhere seen it done the way I'll present here.

1. We know that 0b10 == 2, 0b11 == 3, 0b00 == 0 and 0b01 == 1 - those are just basic 2-bit values
2. We can put 2 values like that in a nibble (half-byte) so a uint8_t is more than able to house that value
3. We'll assume the encoder is connected to pins A1 and A0 (PC1 and PC0 on the ATmega328)

So the reading could be like so:

volatile uint8_t state = 0;
volatile int32_t counter = 0, oldCounter = 0;
// for full-stop encoder reads only, use these state changes
volatile int8_t QEM[16]  = {
 0, 0,  0, 0, -1, 0, 0, 1, 1, 0, 0, -1, 0,  0, 0, 0
};
// for full quadrature decoding, use these state changes
// volatile int8_t QEM[16]  = {
// 0, 1, -1, 0, -1, 0, 0, 1, 1, 0, 0, -1, 0, -1, 1, 0
// };

void setup() {
 // configure pin direction
 DDRC &= ~(1<<PC1);
 DDRC &= ~(1<<PC0);

 // enable pullups
 PORTC |= (1<<PC1) | (1<<PC0);

 // load initial encoder values
 if ((PINC & (1 << PC1)) != 0) state |= 0b00000010;
 if ((PINC & (1 << PC0)) != 0) state |= 0b00000001;
}

void loop() {
 // make space for current state; we just want the lower nibble
 // mask everything else out
 state = (state << 2) & 0x0F;

  // read the current state into the lower part of the nibble
 // is half a nibble a nib? ;)
 if ((PINC & (1 << PC1)) != 0) state |= 0b00000010;
 if ((PINC & (1 << PC0)) != 0) state |= 0b00000001;

 // At this stage the state variable contains 4 bits of information
 // containing the previous state and the new state that can be
 // directly used as an index - just the array needs to be a little
 // bit different.
 // 0b0010 = -1
 // 0b0001 =  1
 // 0b1000 =  1
 // 0b1011 = -1
 // 0b1110 =  1
 // 0b1101 = -1
 // 0b0100 = -1
 // 0b0111 =  1
 // which results in the array defined above

  // next we use the state value as an index to the array
 // we do this only on full encoder stops
 counter += QEM[state];

  // react on counter change
 if (oldCounter != counter) {
  oldCounter = counter;

   // counter changed - process
 }
}

Now this is all nice if your loops are quick, but if they aren't you're going to lose precision. To get it back we need to switch to interrupts - it's not that difficult!

volatile uint8_t state = 0;
volatile int32_t counter = 0, oldCounter = 0;
// for full-stop encoder reads only, use these state changes
volatile int8_t QEM[16]  = {
 0, 0,  0, 0, -1, 0, 0, 1, 1, 0, 0, -1, 0,  0, 0, 0
};
// for full quadrature decoding, use these state changes
// volatile int8_t QEM[16]  = {
// 0, 1, -1, 0, -1, 0, 0, 1, 1, 0, 0, -1, 0, -1, 1, 0
// };

ISR(PCINT1_vect) {
 // make space for current state; we just want the lower nibble
 // mask everything else out
 state = (state << 2) & 0x0F;

 // read the current state into the lower part of the nibble
 // is half a nibble a nib? ;)
 if ((PINC & (1 << PC1)) != 0) state |= 0b00000010;
 if ((PINC & (1 << PC0)) != 0) state |= 0b00000001;

 // At this stage the state variable contains 4 bits of information
 // containing the previous state and the new state that can be
 // directly used as an index - just the array needs to be a little
 // bit different.
 // 0b0010 = -1
 // 0b0001 =  1
 // 0b1000 =  1
 // 0b1011 = -1
 // 0b1110 =  1
 // 0b1101 = -1
 // 0b0100 = -1
 // 0b0111 =  1
 // which results in the array defined above

 // next we use the state value as an index to the array
 // we do this only on full encoder stops
 counter += QEM[state];
}

void setup() {
 // configure pin direction
 DDRC &= ~(1<<PC1);
 DDRC &= ~(1<<PC0);

 // enable pullups
 PORTC |= (1<<PC1) | (1<<PC0);

 // enable pin-change interrupts for PC1 and PC0
 PCICR |= (1<<PCIE1);
 PCMSK1 |= (1<<PCINT9) | (1<<PCINT8);

 // load initial encoder values
 if ((PINC & (1<<PC1)) != 0) state |= 0b00000010;
 if ((PINC & (1<<PC0)) != 0) state |= 0b00000001;

 // enable interrupts
 sei();
}

void loop() {
 // react on counter change
 if (oldCounter != counter) {
  oldCounter = counter;

  // counter changed
 }
}

As you can see the ISR-driven method isn't all that different from the polling one and gives a significant advantage in terms of flexibility and reliability.

Many thanks to Dr. Robert Paz for his series of lectures on Arduino programming!

Have fun!

Using the Arduino environment with Eclipse

I've recently fallen in love with AVR MCUs, especially because of the Arduino and its hugely successful Arduino IDE. It seems that if there is a piece of hardware, a sensor perhaps, then Arduino has a library for it that you can use. It's just great!

There's however one small problem: you need the IDE to build anything, the IDE is very limited in capabilities (there's no code insight like in Eclipse, for example) and everything seems a bit... I don't know what to call it... "amateur" is the word, I guess. There's nothing wrong with doing things this way - unless you're used to something more effective than notepad for coding, like I am.

So... You have the Arduino IDE, you have a UNO board (possibly a cheap Chinese clone) and you have done the Blink example to death. Now it's time to do some serious damage!

We're going to need the following:

- a couple of packages (sudo apt-get install avrdude avr-libc gcc-avr make openjdk-7-jdk)
- Eclipse for C/C++ developers
- AVR plugin for Eclipse CDT
- Arduino 1.0.6 IDE (may work for a newer one, haven't checked it yet)

I'm going to assume you have downloaded Eclipse, installed the AVR plugin as instructed on the AVR plugin page, and extracted the Arduino IDE into the /tmp folder (so you have the /tmp/arduino-1.0.6 location with all the things in it).

First we need to create a programmer to make Eclipse happy. To do so click Window -> Preferences, expand the AVR node and select AVRDude. In the list of programmers click Add, enter "Arduino" as the name of the programmer, select Arduino from the Programmer Hardware list, optionally if you're using a Nano or Mini enter the proper port (I for example have had to enter /dev/ttyUSB0 and I'm using an Arduino Nano with Optiboot) and click OK.

Next we need a project that will use the Arduino environment. Let's create one:

File -> New -> C++ Project
Enter project name, select "Makefile project" and then "AVR GCC Toolchain" and click "Finish".

The next bit is tricky so you need to follow it to the letter. In your freshly created project create a folder called "arduino" and copy into it the entire content of the following directories:

/tmp/arduino-1.0.6/hardware/arduino/cores/arduino
/tmp/arduino-1.0.6/hardware/arduino/variants/standard (that's actually just one file)
/tmp/arduino-1.0.6/libraries/Wire
/tmp/arduino-1.0.6/libraries/Wire/utility
(and any other library you'd like to use)

That'll give you a mini version of the environment to use. Next we need a way to build it. That's quite easy - you just drop this Makefile into the arduino folder and type "make clean all" and you're done. When you want to use more libraries, just drop their files in there and make clean all again.

One last thing is needed to make Eclipse understand the Arduino libraries. We need to define the symbol ARDUINO with the value 106. To do this select Project -> Properties, then C/C++ Build -> Build Variables, click Add, enter "ARDUINO" into "Variable name" and "106" into "Value", click OK, then OK the properties dialog and you're all set!

Now for the fun part - let's create a blinker!

For that we need to create a new file; let's call it sketch.cpp (File -> New -> File, enter the name, press Enter, done). In that file we'll enter the following:

#include <Arduino.h>

void setup() {
  pinMode(13, OUTPUT);
}

void loop() {
  delay(500);
  digitalWrite(13, LOW);
  delay(500);
  digitalWrite(13, HIGH);
}

This is a basic Arduino thing with the addition of one extra include at the beginning of the file. Nothing more. To build it you'll need this Makefile in your project's folder.

Now all you need to do is select "Project -> Make target -> Build..." which will open a list of known targets. Click "Add" and enter "clean load" (that's the set of commands to build and upload the sketch to your board). Unfortunately, due to a bug, you'll have to close this window and re-open it to see the target you just added - do that, then select it and press "Build". And presto! Your Arduino blinks!

There's an added bonus: you can now build your project without any IDE, for example using some sort of continuous integration or something - and ride like a pro! :)

Friday, January 30, 2015

Finding elements with duplicate element IDs on a page

I know - IDs on a page should be unique. That's absolutely right! But have you ever seen the browser spit out a message that you duplicated one? No? Here's a piece of JavaScript code that'll check your DOM for duplicate IDs:

var elements = document.getElementsByTagName("*"); for (i = 0; i < elements.length; i++) if (elements[i].id != "") for (j = 0; j < i; j++) if (elements[i].id == elements[j].id) { console.log("duplicate id: " + elements[j].id + "; idx " + i + " and " + j); break; }

and now formatted:

var elements = document.getElementsByTagName("*");
for (i = 0; i < elements.length; i++) {
  if (elements[i].id != "") {
    for (j = 0; j < i; j++) {
      if (elements[i].id == elements[j].id) {
        console.log("duplicate id: " + elements[j].id + "; idx " + i + " and " + j); 
        break; 
      }
    }
  }
}

It makes a list of all the elements on the page, iterates over the ones with an ID and searches for duplicate entries, reporting them using console.log - so you need a fairly modern browser to use it (ancient IE will not work).

Have fun!