Metrics reporting from Guava Caches

Codahale Metrics is a wonderful library to collect and report various types of metrics from your Java server-side applications. Google Guava is the swiss army knife of libraries that every Java developer should have in their toolbox.

If you use both these libraries, then at some point in time you may find the need to capture statistics from Guava caches and report them via Metrics. When you do, perhaps this class would be useful.

package net.antrix.utils;

import static com.codahale.metrics.MetricRegistry.name;

import java.util.HashMap;
import java.util.Map;

import com.codahale.metrics.Gauge;
import com.codahale.metrics.Metric;
import com.codahale.metrics.MetricSet;
import com.google.common.cache.Cache;

public class GuavaCacheMetrics extends HashMap< String, Metric > implements MetricSet {

    /**
     * Wraps the provided Guava cache's statistics into Gauges suitable for reporting via Codahale Metrics.
     * <p/>
     * The returned MetricSet is suitable for registration with a MetricRegistry like so:
     * <p/>
     * <code>registry.registerAll( GuavaCacheMetrics.metricsFor( "MyCache", cache ) );</code>
     *
     * @param cacheName This will be prefixed to all the reported metrics
     * @param cache The cache from which to report the statistics
     * @return MetricSet suitable for registration with a MetricRegistry
     */
    public static MetricSet metricsFor( String cacheName, final Cache cache ) {

        GuavaCacheMetrics metrics = new GuavaCacheMetrics();

        metrics.put( name( cacheName, "hitRate" ), new Gauge< Double >() {
            @Override
            public Double getValue() {
                return cache.stats().hitRate();
            }
        } );

        metrics.put( name( cacheName, "hitCount" ), new Gauge< Long >() {
            @Override
            public Long getValue() {
                return cache.stats().hitCount();
            }
        } );

        metrics.put( name( cacheName, "missCount" ), new Gauge< Long >() {
            @Override
            public Long getValue() {
                return cache.stats().missCount();
            }
        } );

        metrics.put( name( cacheName, "loadExceptionCount" ), new Gauge< Long >() {
            @Override
            public Long getValue() {
                return cache.stats().loadExceptionCount();
            }
        } );

        metrics.put( name( cacheName, "evictionCount" ), new Gauge< Long >() {
            @Override
            public Long getValue() {
                return cache.stats().evictionCount();
            }
        } );

        return metrics;
    }

    private GuavaCacheMetrics() {
    }

    @Override
    public Map< String, Metric > getMetrics() {
        return this;
    }
}

And here's a Spock specification that verifies that it works.

package net.antrix.utils

import com.codahale.metrics.MetricRegistry
import com.google.common.cache.CacheBuilder
import spock.lang.Specification

class GuavaCacheMetricsTest extends Specification {

    def "ensure guava cache metrics are reported to the registry"() {

        given: "a guava cache registered with a Metrics registry"

            def cache = CacheBuilder.newBuilder().recordStats().build()
            def registry = new MetricRegistry()
            registry.registerAll(GuavaCacheMetrics.metricsFor("MyCache", cache))

        when: "various read/write operations are performed on the cache"

            cache.put("k1", "v1")
            cache.put("k2", "v2")

            cache.getIfPresent("k1")
            cache.getIfPresent("k2")
            cache.getIfPresent("k3")

            cache.get("k4", { "v4" })

            try {
                cache.get("k5", {
                    throw new Exception()
                })
            } catch (Exception expected) {
            }
        then: "the metrics registry records them correctly"

            def gauges = registry.gauges

            2 == gauges["MyCache.hitCount"].value
            3 == gauges["MyCache.missCount"].value
            0.4 == gauges["MyCache.hitRate"].value
            1 == gauges["MyCache.loadExceptionCount"].value
            0 == gauges["MyCache.evictionCount"].value
    }
}

[x] driven development

I made a new site: [x] driven development.

Go check it out!

The idea for this site came from a conversation at work where we were discussing design choices for a requirement. The designs felt like over-engineering and in a moment of frustration I said, "This feels like audit driven development!" We had a good laugh and then went back to hashing out a design that would satisfy Audit while still meeting the requirements.

That conversation stayed in my head and eventually took the form of this new site.

Incidentally, the development of this site itself was a case of résumé driven development! I'd been reading about RethinkDB for a while and was looking for a project in which to try it out. So when starting development on this site, I dived right in and coded the first version using the Flask web framework and a RethinkDB-based storage layer. Thankfully, sanity prevailed soon enough and I converted the site into a 100% static website. There's a bit of AJAX going on when you navigate around the site but that's still all served up as static content. Nothing fancy though; the few dozen lines of source will give you the complete behind-the-scenes story.

If you liked the site, you can subscribe to the feed using Google Reader or your favourite news reader, or get updates from @devdrivenby on Twitter.

Roku, Netflix and the Raspberry Pi

I recently bought a Roku 3 media player from Amazon, taking advantage of the latter's free international shipping offer.

Roku 3

This new Roku, the latest in a line of low-priced devices from the company, is essentially a streaming media device that connects to your TV and allows you to watch content streamed over the network. Featuring a dead-simple interface - be it on-screen or in your hands - the Roku makes browsing for and consuming content in the living room virtually effortless. It's the promise of 'Smart TVs' - actually delivered.

Unfortunately, if you are outside the USA, much of the content that the Roku promises to stream is unavailable -- locked away behind geographic restrictions. You can use something like the Plex Media Server to stream your local media to it - and it does an excellent job of that. In fact, you can even treat this tiny device as just the best way to get your Plex content on your living room television.

Aside: My earlier effort at using the Raspberry Pi as a living room media center didn't quite work out due to its inability to output surround sound to my home theater.

There's a veritable cottage industry of services that have sprung up around mechanisms that let users bypass these content geo-restrictions. One such mechanism utilizes the DNS system to enable access. In this category are services like Unblock-Us and UnoDNS. I'll just point you to one of these services for an explanation of how this works.

If you read that explanation, you know that it boils down to using these services as your DNS provider. This does come with certain downsides. Specifically:

  1. You must trust these services not to indulge in malicious DNS spoofing.
  2. You must be okay with the results returned by these services for the rest of the Internet's domain names.

While #1 in the above list may be dismissed as a paranoid concern, the second is a legitimate, everyday concern. As an example, some of these services delegate to Google's DNS for resolving names that aren't core to their service. I personally don't use Google DNS because I've had issues with them not resolving my work place's remote-access domain to the correct load-balanced server.

Simply put, I'd be loath to use one of these services as my primary DNS.

One nice middle ground would be to configure the Roku to use, say, UnoDNS as the DNS provider and continue using my preferred DNS provider for all the other devices in my network.

Unfortunately, the Roku provides no way to specify the DNS server to use!

If I had a hackable router, there's some iptables trickery that could be put to use:

iptables -t nat -I PREROUTING -i br0 -s <roku.ip.addr> -p udp --dport 53 -j DNAT --to <dns.ip.addr>
iptables -t nat -I PREROUTING -i br0 -s <roku.ip.addr> -p tcp --dport 53 -j DNAT --to <dns.ip.addr>

The above (disclaimer: untested!) sets up a firewall rule that redirects all requests originating from the Roku device and bound for port 53 (the DNS service port) to a designated DNS server. All other DNS traffic is unaffected.

Alas, I don't have a router that lets me set such custom firewall rules.

Which brings me to Dnsmasq, the solution I settled on. Dnsmasq is a simple DNS server that's suitable for small intranets. In essence, you can configure two kinds of mapping rules in Dnsmasq:

  • hostname -> IP address
  • hostname -> DNS server

The first kind, obviously, sets a static mapping from a hostname to an IP address. This is the basic DNS functionality, and you can imagine ten to fifteen such rules being sufficient for a small intranet with a dozen computers.

The second kind of rule sets a mapping between a hostname and the DNS server to query to get the IP for that hostname. So if I had a rule like so (the domain and IP here are placeholders):

server=/example.com/203.0.113.53

it means that when there's a request to fetch the IP address of example.com (or any of its subdomains), forward that request to the DNS server at 203.0.113.53.

That's all I need! I can configure one of the geo-block-bypassing services as the upstream DNS for a list of domains that host the content I am interested in and then use my regular DNS server for all other domains!

Now comes the question of where to run this dnsmasq server. As I mentioned in the aside above, my Raspberry Pi is no longer serving as a media center, so I re-purposed it as a super-silent, power-sipping Linux server running just dnsmasq!

Here's a quick howto:

  1. Install Raspbian on the RPi.
  2. Set it up to use a static IP, say 192.168.1.2.
  3. Install dnsmasq: sudo apt-get install dnsmasq
  4. Create a custom mapping rules config file for dnsmasq.
  5. Copy the new rules file to /etc/dnsmasq.d/
  6. Set up your main router's WAN settings to use 192.168.1.2 as the DNS server.
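For step 4, here's a sketch of what such a rules file might contain. The domains and the upstream server IP below are placeholders, not the ones I actually use — substitute your bypass service's DNS servers and the domains you care about:

```conf
# /etc/dnsmasq.d/geo-unblock.conf -- hypothetical example
# Send lookups for these domains (and their subdomains) to the bypass service:
server=/netflix.com/203.0.113.53
server=/hulu.com/203.0.113.53
# All other lookups fall through to the regular upstream servers.
```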

That's pretty much it. Restart all devices and check if the DNS changes have taken effect. All devices in your intranet should now be using the RPi as their DNS server. To check if dnsmasq is working correctly, browse to one of the configured domains (say, netflix.com) and check whether you get geo-blocked or not. If you aren't, congratulations!

Because I am still experimenting with the DNS bypass service providers as well as the list of domains that require special handling, I wrote a quick Python script that generates the requisite dnsmasq configuration file. I've shared it in my Bitbucket repository in case anyone finds it useful.
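The script itself isn't reproduced in the post, but the core of it is small enough to sketch here. Everything below is a hypothetical reconstruction — the real script lives in the Bitbucket repository — with placeholder domains and a placeholder upstream DNS IP:

```python
# Hypothetical sketch of a dnsmasq config generator; the domain list and
# upstream DNS IP below are placeholders, not the ones from the post.

UPSTREAM_DNS = "203.0.113.53"          # the bypass service's DNS server
DOMAINS = ["netflix.com", "hulu.com"]  # domains that need special handling

def dnsmasq_rules(domains, upstream):
    """Emit one 'server=/domain/ip' line per domain."""
    return "\n".join("server=/%s/%s" % (d, upstream) for d in domains) + "\n"

if __name__ == "__main__":
    # Write the rules out; copy the result to /etc/dnsmasq.d/ afterwards.
    with open("geo-unblock.conf", "w") as f:
        f.write(dnsmasq_rules(DOMAINS, UPSTREAM_DNS))
```

Regenerating the file and re-copying it beats hand-editing while the list of special-cased domains is still in flux.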

As of this writing, this is the dnsmasq config file that I am using.

Now to figure out the best way to populate my Netflix queue!

Restoring Windows

I had to spend way too much time last night recovering from a stupid mistake made nearly three years ago. A quick flashback: three years ago, my home computer died (mass suicide by the capacitors on the motherboard) and I bought a Dell Inspiron to replace it. As I noted in that post, I popped an HDD taken from the dead PC into the new Dell and installed Ubuntu on it.

I made one critical error at that point: I installed Grub, the boot loader, on the Win 7 disk instead of the Linux disk. Looking back, I can't even recall if the Ubuntu installer gave me a choice in that regard, but I should've been more diligent.

Fast forward to this week: the disk running Ubuntu started giving SMART errors (not the other SMART), warning me of impending death. Luckily, the disk is still under warranty, even under the hard disk cartel's industry-wide reduced warranty period of three years instead of five.

So in preparation for returning the disk for replacement, I spent some time configuring the long-neglected Windows 7 installation and getting it ready for use. Windows 7, BTW, is much nicer than I expected; but let's leave that train of thought for another day. The final step in the process was to pull out the Linux disk and restore the Dell to its pure Windows existence.

That's when the three year old mistake came back to bite me:

error: no such partition
grub rescue>

Essentially, the Windows 7 boot loader on the Win 7 disk was gone and replaced by Grub which now couldn't find its second stage loader which was installed on the failing hard disk. In simpler terms, I needed both disks in the PC to boot Windows.

Restoring the Win 7 boot loader ranges from trivial to tricky depending on whether you hold a retail or an OEM Windows license. It is trivial because all you need is a Win 7 CD: boot the computer with it, go into the rescue console, and type a sum total of two commands. It is tricky because Dell, like most other PC manufacturers, subscribes to the logic that it is too expensive to ship a $5 Windows 7 installation CD with a $1000 computer.
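(For the record, the two commands in question — assuming the standard Win 7 recovery console — are the usual bootrec pair:)

```
bootrec /fixmbr
bootrec /fixboot
```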

Microsoft, thankfully, seems to care about end users more than Dell does and, as a work-around to the cheapo behaviour of their OEMs, has added a feature to Windows 7 that lets you create a rescue or recovery CD from any Win 7 installation. As luck would have it, I still had a few dozen shiny coasters left over from the pre-flashdrive era. I fetched one from storage, popped it into the CD/DVD burner, and had a rescue CD ready in a couple of minutes.

But Dell had another card up its sleeve. While the computer seemed to boot from the CD, it eventually errored out, throwing the number 0X4001100200001012 in my face. A bit of Googling revealed that this error code seems to appear only on rescue CDs created from Win 7 computers sold by Dell.

Thankfully, some kind souls on the Internet have provided instructions on how to create a USB boot disk using the Win 7 rescue CD. Following those instructions, I was finally able to boot into the recovery environment using a flashdrive and restore the boot loader.

Once Windows was booting properly, I kicked off a full disk erase process using the zeroing feature in Seagate's Seatools utility and went to bed. This morning, I pulled out the dying disk and went down to the Seagate distributor's office to return the disk and initiate a warranty replacement.

TL;DR: be careful where you install Grub!