Team Geek

I just finished reading Team Geek: A Software Developer's Guide to Working Well with Others by Brian W. Fitzpatrick and Ben Collins-Sussman. The authors bring their experience of working on open source project teams (Subversion) as well as corporate software development teams (Google) and share what it takes to build great teams, create organizational change and manage your own career.

Although at times I felt that the book got a bit repetitive and could have done with tighter editing, overall it is a great read, full of hard-earned wisdom on working in software teams. I've pulled out a few quotes that I particularly liked.

On not criticizing every single decision and learning to pick the right battles to fight:

Every time a decision is made, it’s like a train coming through town — when you jump in front of the train to stop it you slow the train down and potentially annoy the engineer driving the train. A new train comes by every 15 minutes, and if you jump in front of every train, not only do you spend a lot of your time stopping trains, but eventually one of the engineers driving the train is going to get mad enough to run right over you. So, while it’s OK to jump in front of some trains, pick and choose the ones you want to stop to make sure you’re only stopping the trains that really matter. -- Team Geek

On not tolerating people that threaten to poison your team's culture, even if they are "genius" programmers:

Genius is such a commodity these days that it’s not acceptable to be an eccentric any more. -- Team Geek

On attempting to change bad habits and behaviours:

It’s impossible to simply stop a bad habit; you need to replace it with a good habit. -- Team Geek

On learning to Manage Upward:

Shipping things gives you credibility, reputation, and political capital more than just about anything else in a company. -- Team Geek

On writing emails that get results:

A good Three Bullets and a Call to Action email contains (at most) three bullet points detailing the issue at hand, and one — and only one — call to action. That’s it — nothing more. -- Team Geek

On doing the Right Thing:

Do the right thing, wait to get fired. -- Chade-Meng Tan

Metrics reporting from Guava Caches

Codahale Metrics is a wonderful library for collecting and reporting various types of metrics from your Java server-side applications. Google Guava is the Swiss Army knife of libraries that every Java developer should have in their toolbox.

If you use both of these libraries, then at some point you may find the need to capture statistics from Guava caches and report them via Metrics. When you do, the following class may be useful.

package net.antrix.utils;

import static com.codahale.metrics.MetricRegistry.name;

import java.util.HashMap;
import java.util.Map;

import com.codahale.metrics.Gauge;
import com.codahale.metrics.Metric;
import com.codahale.metrics.MetricSet;
import com.google.common.cache.Cache;

public class GuavaCacheMetrics extends HashMap< String, Metric > implements MetricSet {

    /**
     * Wraps the provided Guava cache's statistics into Gauges suitable for reporting via Codahale Metrics.
     * <p/>
     * The returned MetricSet is suitable for registration with a MetricRegistry like so:
     * <p/>
     * <code>registry.registerAll( GuavaCacheMetrics.metricsFor( "MyCache", cache ) );</code>
     *
     * @param cacheName This will be prefixed to all the reported metrics
     * @param cache The cache from which to report the statistics
     * @return MetricSet suitable for registration with a MetricRegistry
     */
    public static MetricSet metricsFor( String cacheName, final Cache cache ) {

        GuavaCacheMetrics metrics = new GuavaCacheMetrics();

        metrics.put( name( cacheName, "hitRate" ), new Gauge< Double >() {
            @Override
            public Double getValue() {
                return cache.stats().hitRate();
            }
        } );

        metrics.put( name( cacheName, "hitCount" ), new Gauge< Long >() {
            @Override
            public Long getValue() {
                return cache.stats().hitCount();
            }
        } );

        metrics.put( name( cacheName, "missCount" ), new Gauge< Long >() {
            @Override
            public Long getValue() {
                return cache.stats().missCount();
            }
        } );

        metrics.put( name( cacheName, "loadExceptionCount" ), new Gauge< Long >() {
            @Override
            public Long getValue() {
                return cache.stats().loadExceptionCount();
            }
        } );

        metrics.put( name( cacheName, "evictionCount" ), new Gauge< Long >() {
            @Override
            public Long getValue() {
                return cache.stats().evictionCount();
            }
        } );

        return metrics;
    }

    private GuavaCacheMetrics() {
    }

    @Override
    public Map< String, Metric > getMetrics() {
        return this;
    }
}

And here's a Spock specification that verifies that it works.

package net.antrix.utils

import com.codahale.metrics.MetricRegistry
import com.google.common.cache.CacheBuilder
import spock.lang.Specification

class GuavaCacheMetricsTest extends Specification {

    def "ensure guava cache metrics are reported to the registry"() {

        given: "a guava cache registered with a Metrics registry"

            def cache = CacheBuilder.newBuilder().recordStats().build()
            def registry = new MetricRegistry()
            registry.registerAll(GuavaCacheMetrics.metricsFor("MyCache", cache))

        when: "various read/write operations are performed on the cache"

            cache.put("k1", "v1")
            cache.put("k2", "v2")

            cache.getIfPresent("k1")   // hit
            cache.getIfPresent("k2")   // hit
            cache.getIfPresent("k3")   // miss

            cache.get("k4", { "v4" })  // miss, followed by a successful load

            try {
                cache.get("k5", { throw new Exception() })  // miss, followed by a failed load
            } catch (Exception expected) {
            }

        then: "the metrics registry records them correctly"

            def gauges = registry.gauges

            2 == gauges["MyCache.hitCount"].value
            3 == gauges["MyCache.missCount"].value
            0.4 == gauges["MyCache.hitRate"].value
            1 == gauges["MyCache.loadExceptionCount"].value
            0 == gauges["MyCache.evictionCount"].value
    }
}

[x] driven development

I made a new site: [x] driven development.

Go check it out!

The idea for this site came from a conversation at work where we were discussing design choices for a requirement. The designs felt like over-engineering and in a moment of frustration I said, "This feels like audit driven development!" We had a good laugh and then went back to hashing out a design that would satisfy Audit while still meeting the requirements.

That conversation stayed in my head and eventually, took the form of this new site.

Incidentally, the development of this site itself was a case of résumé driven development! I'd been reading about RethinkDB for a while and was looking for a project in which to try it out. So when starting development on this site, I dived right in and coded the first version using the Flask web framework and a RethinkDB based storage layer. Thankfully, sanity prevailed soon enough and I converted the site into a 100% static website. There's a bit of AJAX going on when you navigate around the site, but that's still all served up as static content. Nothing fancy though; the few dozen lines of source will give you the complete behind-the-scenes story.

If you liked the site, you can subscribe to the feed using Google Reader or your favourite news reader, or get updates from @devdrivenby on Twitter.

Roku, Netflix and the Raspberry Pi

I recently bought a Roku 3 media player from Amazon, taking advantage of the latter's free international shipping offer.

Roku 3

This new Roku, the latest in a line of low-priced devices from the company, is essentially a streaming media device that connects to your TV and allows you to watch content streamed over the network. Featuring a dead-simple interface - be it on-screen or in your hands - the Roku makes browsing for and consuming content in the living room virtually effortless. It's the promise of 'Smart TVs' - actually delivered.

Unfortunately, if you are outside the USA, much of the content that the Roku promises to stream is unavailable -- locked away behind geographic restrictions. You can use something like the Plex Media Server to stream your local media to it - and it does an excellent job of that. In fact, you can even treat this tiny device as just the best way to get your Plex content on your living room television.

Aside: My earlier effort at using the Raspberry Pi as a living room media center didn't quite work out due to its inability to output surround sound to my home theater.

There's a veritable cottage industry of services that have sprung up around mechanisms that let users bypass these content geo-restrictions. One such mechanism utilizes the DNS system to enable access. In this category are services like Unblock-Us and UnoDNS. I'll just point you to one of these services for an explanation of how this works.

If you read that explanation, you know that it boils down to using these services as your DNS provider. This does come with certain downsides. Specifically:

  1. You must trust these services not to indulge in malicious DNS spoofing.
  2. You must be okay with the results returned by these services for the rest of the Internet's domain names.

While #1 in the above list may be dismissed as a paranoid concern, the second is a legitimate, everyday one. As an example, some of these services delegate to Google's DNS for resolving names that aren't core to their service. I personally don't use Google DNS because I've had issues with it not resolving my workplace's remote-access domain to the correct load-balanced server.

Simply put, I'd be loath to use one of these services as my primary DNS.

One nice middle ground would be to configure the Roku to use, say, UnoDNS as its DNS provider and continue using my preferred DNS provider for all the other devices in my network.

Unfortunately, the Roku provides no way to specify the DNS server to use!

If I had a hackable router, there's some iptables trickery that could be put to use:

iptables -t nat -I PREROUTING -i br0 -s <roku.ip.addr> -p udp --dport 53 -j DNAT --to <dns.ip.addr>
iptables -t nat -I PREROUTING -i br0 -s <roku.ip.addr> -p tcp --dport 53 -j DNAT --to <dns.ip.addr>

The above (disclaimer: untested!) sets up a firewall rule that redirects all requests originating from the Roku device and bound for port 53 (the DNS service port) to a designated DNS server. All other DNS traffic is unaffected.

Alas, I don't have a router that lets me set such custom firewall rules.

Which brings me to Dnsmasq, the solution I settled on. Dnsmasq is a simple DNS server that's suitable for small intranets. In essence, you can configure two kinds of mapping rules in Dnsmasq:

  • hostname -> IP address
  • hostname -> DNS server

The first kind, obviously, sets a static mapping from a hostname to an IP address. This is basic DNS functionality, and you can imagine ten to fifteen such rules being sufficient for a small intranet with a dozen computers.
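In dnsmasq configuration, a rule of this first kind uses the address option. The hostnames and addresses below are hypothetical examples, not from my actual setup:

# map intranet hostnames to fixed IP addresses
address=/
address=/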

The second kind of rule maps a hostname to the DNS server that should be queried to resolve that hostname. So if I had a rule like this (the domain and server address here are illustrative):

server=/

it means: when there's a request to fetch the IP address of any name under, forward that request to the DNS server at

That's all I need! I can configure one of the geo-block-bypassing services as the upstream DNS for a list of domains that host the content I am interested in and then use my regular DNS server for all other domains!

Now comes the question of where to run this dnsmasq server. As I mentioned in the aside above, my Raspberry Pi is no longer serving as a media center, so I re-purposed it as a super-silent, power-sipping Linux server running just dnsmasq!

Here's a quick howto:

  1. Install Raspbian on the RPi.
  2. Set it up to use a static IP, say
  3. Install dnsmasq: sudo apt-get install dnsmasq
  4. Create a custom mapping rules config file for dnsmasq
  5. Copy the new rules file to /etc/dnsmasq.d/
  6. Set up your main router's WAN settings to use the RPi's static IP as the DNS server.
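Steps 4 and 5 above produce a rules file along these lines. The domains and the upstream server address here are illustrative placeholders, not the actual servers of any unblocking provider:

# /etc/dnsmasq.d/geo-unblock.conf (hypothetical example)
# send lookups for geo-restricted services to the unblocking DNS provider
server=/
server=/
# everything else falls through to the servers in /etc/resolv.conf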

That's pretty much it. Restart all devices and check whether the DNS changes have taken effect; all devices in your intranet should now be using the RPi as their DNS server. To check that dnsmasq is working correctly, browse to one of the domains you configured and see whether you are still geo-blocked. If you aren't, congratulations!

Because I am still experimenting with the DNS bypass service providers as well as the list of domains that require special handling, I wrote a quick Python script that generates the requisite dnsmasq configuration file. I've shared it in my Bitbucket repository in case anyone finds it useful.

As of this writing, this is the dnsmasq config file that I am using.

Now to figure out the optimal way to populate my Netflix queue!