<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>antrix.net</title><link href="https://antrix.net/" rel="alternate"/><link href="/posts/feed/" rel="self"/><id>https://antrix.net/</id><updated>2026-03-01T17:40:00+08:00</updated><entry><title>Entropy Containment Engineer</title><link href="https://antrix.net/posts/2026/software-entropy/" rel="alternate"/><published>2026-03-01T17:40:00+08:00</published><updated>2026-03-01T17:40:00+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2026-03-01:/posts/2026/software-entropy/</id><summary type="html">&lt;p&gt;I can imagine a fairly ordinary Tuesday a few years from now.&lt;/p&gt;
&lt;p&gt;A compliance lead notices a regulator has tightened rules around &amp;ldquo;high-risk accounts.&amp;rdquo; She opens an internal admin tool, writes what she wants in plain language, and an agent generates the change: a new check, a UI nudge, an …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I can imagine a fairly ordinary Tuesday a few years from now.&lt;/p&gt;
&lt;p&gt;A compliance lead notices a regulator has tightened rules around &amp;ldquo;high-risk accounts.&amp;rdquo; She opens an internal admin tool, writes what she wants in plain language, and an agent generates the change: a new check, a UI nudge, an audit record, a couple of dashboards, a feature flag. It rolls out slowly, watches itself, and keeps going.&lt;/p&gt;
&lt;p&gt;Across the office, a support manager is tired of a refund workflow that forces three manual steps and produces inconsistent outcomes. He describes the intent, the agent threads the needle through a messy back-office system, adds guardrails, and ships the fix.&lt;/p&gt;
&lt;p&gt;A marketer wants a different onboarding path for a specific campaign. She asks for two screens and a new event. The agent wires it up end to end.&lt;/p&gt;
&lt;p&gt;None of this is &amp;ldquo;deep engineering,&amp;rdquo; but it’s still real engineering surface area: money, customer data, risk controls, and downstream obligations. The novelty is not that changes can be made; it’s that they can be made all day, by many people, without queuing behind a scarce engineer.&lt;/p&gt;
&lt;p&gt;There have already been plenty of essays about this shift – domain specialists using AI to build, the impact on software company moats, on margins, on valuations, on who captures value in the economy. All of that is interesting and probably important.&lt;/p&gt;
&lt;p&gt;One aspect I haven’t heard discussed as much is more operational and less glamorous: when this shift actually happens inside a real company, how do we stop the system from gradually degrading as the rate of change goes up by 100× or 1000×?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;If you step back and look at the system as a whole, that gradual breakdown of order has a name: &lt;em&gt;Entropy&lt;/em&gt;.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;Entropy, in software terms (and why we’ve always fought it)&lt;/h2&gt;
&lt;p&gt;Entropy in a codebase isn’t a bug count. It’s the gap between the system’s intended shape and what it becomes after years of local decisions that drag it away from that ideal shape.&lt;/p&gt;
&lt;p&gt;Every working system has an intended shape, even when nobody wrote it down. There are boundaries (what belongs where), invariants (what must remain true), and meanings (what a &amp;ldquo;customer,&amp;rdquo; a &amp;ldquo;refund,&amp;rdquo;  a &amp;ldquo;risk flag&amp;rdquo; actually is). When those meanings stay crisp, the system stays legible: people can predict consequences, reuse parts safely, and make changes without constantly rediscovering hidden rules. When meanings blur, the system can still be changed for a while, but the cost shows up as bugs, performance issues, and an increasing fear of changing anything.&lt;/p&gt;
&lt;p&gt;This is not a new problem. It’s what &amp;ldquo;software maintenance&amp;rdquo; has always been!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When we do &amp;ldquo;software maintenance&amp;rdquo;, we are reducing entropy.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Refactoring is entropy reduction. Dependency upgrades are entropy reduction. Cleaning up duplicated business rules, untangling accidental dependencies, fixing CI so it behaves predictably, keeping runbooks in shape: it’s all the same family of work, restoring the intended shape so the next change is still possible.&lt;/p&gt;
&lt;p&gt;&amp;ldquo;Software Maintenance&amp;rdquo; has always been a part of software engineering. In healthy companies (and healthy systems), we find time to do maintenance regularly. However, it&amp;rsquo;s not the primary activity: that&amp;rsquo;s still shipping features and shipping value. &lt;/p&gt;
&lt;h2&gt;Under 1000× change, entropy management becomes the job&lt;/h2&gt;
&lt;p&gt;If domain specialists can ship changes with AI agents, the volume and frequency of modification changes character. The codebase stops evolving in bursts around sprint boundaries and starts evolving continuously, with many hands applying small, locally rational adjustments throughout the day.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;With 100× or 1000× more change, treating entropy reduction as a side activity stops being viable.&lt;/strong&gt; The curve outruns you. So the question becomes: who does the entropy work when almost anyone can generate new code and ship new behavior?&lt;/p&gt;
&lt;p&gt;The answer is still &amp;ldquo;software engineers,&amp;rdquo; but the role changes. &lt;strong&gt;Engineers stop being primarily the people who implement most business features, and become the people responsible for the system’s coherence while change hits it from all sides.&lt;/strong&gt; They still build, but their highest leverage work moves toward the machinery that keeps the system understandable, stable enough to evolve, and resistant to slow degradation.&lt;/p&gt;
&lt;h2&gt;The tools we have, and the tools we need&lt;/h2&gt;
&lt;p&gt;Some of the answer is already sitting inside software engineering. We know how to test systems, how to control releases, how to refactor, how to encode constraints. We simply have not had to apply these disciplines with the level of consistency that 1000× change will demand.&lt;/p&gt;
&lt;p&gt;At the same time, not everything we need lives inside the traditional SDLC toolbox. If software starts to resemble other large systems under constant, loosely coordinated change, then we should expect to borrow ideas from those systems as well.&lt;/p&gt;
&lt;h2&gt;Three software engineering practices that move from optional to essential&lt;/h2&gt;
&lt;p&gt;Before looking elsewhere, it’s worth being honest about how much unused leverage already exists inside our own discipline. The practices below are familiar. What changes is not the concept, but the intensity and consistency with which they are applied.&lt;/p&gt;
&lt;h3&gt;Continuous Refactoring, industrialized&lt;/h3&gt;
&lt;p&gt;Refactoring is one of the cleanest entropy reduction tools we have. The problem is scale. As a craft activity done occasionally by humans, it cannot keep up with a world where the codebase is being modified constantly.&lt;/p&gt;
&lt;p&gt;In a high-change environment, refactoring needs to look less like an annual cleanup and more like continuous background maintenance. Imagine an army of AI agents that engineers run and supervise: shrinking complexity hotspots, deduplicating business rules, deleting dead code, keeping dependencies fresh, and nudging the codebase back toward a small set of consistent patterns.&lt;/p&gt;
&lt;p&gt;Engineers define what &amp;ldquo;healthy&amp;rdquo; means, constrain the agents so they do not introduce risk, and decide when automated cleanup should give way to human judgment. Entropy reduction becomes continuous rather than episodic.&lt;/p&gt;
&lt;h3&gt;Error Budgets + Progressive Delivery&lt;/h3&gt;
&lt;p&gt;Error budgets were popularized by &lt;a href="https://sre.google/"&gt;Google’s SRE practice&lt;/a&gt;, but adoption remains uneven. Under 1000× change, they become central.&lt;/p&gt;
&lt;p&gt;The mechanism is straightforward. You set a Service Level Objective. You track performance against it. When you overspend your &lt;a href="https://sre.google/workbook/error-budget-policy/"&gt;error budget&lt;/a&gt;, you slow down or pause new changes and focus on restoring stability until you are back within bounds.&lt;/p&gt;
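&lt;p&gt;As a rough sketch of that mechanism (the function names, SLO targets, and thresholds here are invented for illustration, not taken from any particular SRE toolkit), the gate is little more than arithmetic over the objective:&lt;/p&gt;

```python
# Illustrative error-budget gate; names and numbers are hypothetical.

def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget still unspent for this window."""
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0
    return 1.0 - (failed_requests / allowed_failures)

def release_gate(slo_target, total_requests, failed_requests):
    """Allow rollouts while budget remains; freeze them once overspent."""
    remaining = error_budget_remaining(slo_target, total_requests, failed_requests)
    if remaining > 0:
        return "ship"
    return "freeze"
```

&lt;p&gt;With a 99.9% objective over a million requests, 500 failures leaves half the budget and changes keep flowing; 1,500 failures overspends it and the gate flips to stability work.&lt;/p&gt;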
&lt;p&gt;Pair that with disciplined release engineering: staged rollouts, &lt;a href="https://martinfowler.com/bliki/CanaryRelease.html"&gt;canaries&lt;/a&gt;, automatic rollback when metrics move in the wrong direction, and monitoring that is designed alongside the change itself rather than after it. Add production-like simulation and regular chaos exercises so fragility is discovered deliberately.&lt;/p&gt;
&lt;p&gt;The underlying idea is not to prevent change, but to control its blast radius and respond quickly when reality disagrees with the plan. Engineers build and tune this feedback system so that high change velocity does not automatically translate into high instability.&lt;/p&gt;
&lt;h3&gt;Architectural Fitness Functions + Policy-as-Code&lt;/h3&gt;
&lt;p&gt;Earlier, we talked about boundaries, invariants, and meanings. Those define the system’s intended shape. Over time, the quiet risk is drift: boundaries becoming porous, invariants being weakened, meanings diverging across modules.&lt;/p&gt;
&lt;p&gt;Today, much of our defense against that drift is human code review. That depends on attention and memory, and it does not scale when changes arrive continuously from many sources.&lt;/p&gt;
&lt;p&gt;If we want to preserve intent at 1000× scale, the checks must be automatic.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.infoq.com/articles/fitness-functions-architecture/"&gt;Architectural fitness functions&lt;/a&gt; encode structural intent directly: which modules may depend on which, what invariants must always hold, what kinds of coupling are forbidden. Policy-as-code applies the same principle to operational and business rules: logging constraints, audit requirements, sensitive state transitions.&lt;/p&gt;
&lt;p&gt;The system’s constitution is expressed as executable rules. Engineers spend less time reviewing every change and more time engineering the checks that preserve boundaries, invariants, and meanings at scale.&lt;/p&gt;
&lt;h2&gt;Three ideas we need to borrow from other complex systems&lt;/h2&gt;
&lt;p&gt;There are other systems that live under constant, loosely coordinated change and would degrade quickly if left unmanaged.&lt;/p&gt;
&lt;p&gt;Cities evolve through thousands of independent decisions each day, yet remain livable because of zoning, codes, and inspection regimes. Living organisms grow and adapt, yet remain viable because growth is paired with removal. Markets coordinate countless actors, yet avoid collapse because pricing introduces friction when shared resources are stressed.&lt;/p&gt;
&lt;p&gt;If software systems begin to experience similar levels of uncoordinated change, it is reasonable to borrow from these mechanisms.&lt;/p&gt;
&lt;h3&gt;Stable Core, Fast Edge (zoning)&lt;/h3&gt;
&lt;p&gt;A useful architectural principle is to keep a stable core and push innovation to the edges.&lt;/p&gt;
&lt;p&gt;The core is where the most expensive invariants live: identity, permissions, ledger rules, durable state, risk calculations. The edge is where variation belongs: UI flows, routing logic, experiments, workflow glue.&lt;/p&gt;
&lt;p&gt;Under modest rates of change, that separation can remain informal. Under 1000× change, it cannot. Not all edge changes are equal. Some are superficial. Others reach back into core assumptions through shared data models or hidden coupling.&lt;/p&gt;
&lt;p&gt;Cities address similar gradients of risk through zoning. Different areas carry different rules and levels of scrutiny. The same idea applies in software. A copy tweak can move quickly. A workflow change touching billing requires stronger verification. A direct change to the ledger or permission model belongs in the tightest zone.&lt;/p&gt;
&lt;p&gt;Zoning turns an architectural idea into an enforceable structure. Engineers define the zones, specify what crosses boundaries, and integrate the checks into the delivery pipeline so variability concentrates where it is cheap and stays contained where it is dangerous.&lt;/p&gt;
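&lt;p&gt;In its simplest form, zoning can be a declarative table mapping parts of the codebase to verification requirements (the paths, zone names, and checks here are purely illustrative):&lt;/p&gt;

```python
# Hypothetical zoning table; paths, zones, and checks are illustrative.
ZONES = [
    ("ledger/", "core", ["human review", "full test suite", "staged rollout"]),
    ("billing/", "tight", ["policy checks", "canary", "auto-rollback"]),
    ("ui/", "edge", ["automated tests"]),
]

def required_checks(changed_path):
    """Return the zone and checks a change must pass, by where it lands."""
    for prefix, zone, checks in ZONES:
        if changed_path.startswith(prefix):
            return zone, checks
    return "edge", ["automated tests"]  # default to the cheapest zone
```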
&lt;h3&gt;Built-in Death (feature expiry as default)&lt;/h3&gt;
&lt;p&gt;A large share of software complexity is accumulated history. Features remain long after their context has faded. Exceptions introduced under pressure settle into the baseline.&lt;/p&gt;
&lt;p&gt;Biological systems pair growth with removal. Cells are created and cleared out. Tissue is repaired and damaged parts are broken down. Health depends on maintaining that balance over time. Biologists call this programmed cell death, or apoptosis.&lt;/p&gt;
&lt;p&gt;Software rarely enforces that balance. By default, it remembers everything, which means the surface area of the system only expands and each new change must navigate old intent.&lt;/p&gt;
&lt;p&gt;A practical response is to make expiry the default. New features and experiments ship with an explicit end date. They are renewed if they prove value; otherwise they are removed. Deletion becomes part of the normal lifecycle rather than a rare, anxious cleanup.&lt;/p&gt;
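&lt;p&gt;One concrete way to make that default stick is in the feature-flag layer itself (the flag names and dates below are made up): a flag past its end date is simply off, and queued for deletion.&lt;/p&gt;

```python
# Sketch of expiry-by-default flags; names and dates are made up.
from datetime import date

FLAGS = {
    # every flag ships with an explicit end date
    "campaign_onboarding_v2": date(2026, 6, 1),
    "refund_fastpath": date(2026, 4, 15),
}

def is_enabled(flag, today):
    """Expired or unknown flags are off; renewal means a new end date."""
    expiry = FLAGS.get(flag)
    if expiry is None:
        return False
    return expiry >= today

def expired_flags(today):
    """Flags due for deletion as part of the normal release cycle."""
    return [name for name, expiry in FLAGS.items() if today > expiry]
```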
&lt;p&gt;This is entropy control in practice. Engineers make it feasible by building visibility into unused paths, safe migration tooling, reliable rollback, and release processes that treat deletion as a first-class change.&lt;/p&gt;
&lt;h3&gt;Pricing the Cost of Change&lt;/h3&gt;
&lt;p&gt;When many people can change a shared system, the &lt;a href="https://en.wikipedia.org/wiki/Tragedy_of_the_commons"&gt;tragedy of the commons&lt;/a&gt; appears quickly. Each change may be reasonable in isolation, yet the shared system slowly absorbs disorder if the cost of adding complexity is invisible.&lt;/p&gt;
&lt;p&gt;Markets address this by using pricing to reflect scarcity. When a shared resource is stressed, the cost of consuming it rises, introducing friction that protects the system as a whole.&lt;/p&gt;
&lt;p&gt;Singapore’s &lt;a href="https://en.wikipedia.org/wiki/Certificate_of_Entitlement"&gt;Certificate of Entitlement (COE)&lt;/a&gt; system is one concrete example. The number of cars allowed on the road is capped, and the right to own one is allocated through bidding. When demand increases, prices rise, reflecting limited capacity and preventing congestion from overwhelming the system.&lt;/p&gt;
&lt;p&gt;Software under 1000× change faces a similar constraint. The scarce resource is system coherence. We need to price the cost of change.&lt;/p&gt;
&lt;p&gt;That price does not have to be monetary. It can mean fewer release windows, stricter automated checks, mandatory review, or reduced access to the most sensitive zones without sponsorship. It can also influence performance evaluation. If someone consistently improves the system while shipping value, that should count. If someone repeatedly injects entropy without benefit, that should also be visible.&lt;/p&gt;
&lt;p&gt;The point is not punishment. It is alignment. Engineers design the measurement, attribution, and enforcement so that the long-term health of the system is reflected in the day-to-day economics of making change.&lt;/p&gt;
&lt;h2&gt;The title is slightly silly, but the job is real&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;&amp;ldquo;Entropy Containment Engineer&amp;rdquo;&lt;/strong&gt; is not a serious title. But I think it’s a useful one, because it points at a real inversion.&lt;/p&gt;
&lt;p&gt;If AI agents make feature creation cheap, then the bottleneck moves. It is no longer the ability to produce change. It is the ability of the system to absorb change without losing its shape: clean boundaries, stable meanings, and the capacity to keep evolving without turning into a mess.&lt;/p&gt;
&lt;p&gt;Engineers become the people who build and operate the machinery that resists drift and fights entropy.&lt;/p&gt;
&lt;p&gt;Continuous refactoring that runs in the background so complexity does not quietly accumulate. Error budgets and progressive delivery that treat stability as a managed resource. Fitness functions and policy-as-code that encode boundaries, invariants, and meanings so they do not drift. Zoning that recognizes gradients of risk rather than pretending everything is either core or edge. Built-in expiry so growth is paired with removal. Pricing mechanisms that make the tragedy of the commons visible in day-to-day decisions.&lt;/p&gt;
&lt;p&gt;In the agent-assisted era, the hard part will not be producing change. It will be keeping the system coherent as change becomes abundant.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The engineers who matter most will be the ones who design and operate that entropy fighting machinery.&lt;/strong&gt;&lt;/p&gt;</content><category term="misc"/><category term="ai"/><category term="software"/></entry><entry><title>The System of Record Fallacy</title><link href="https://antrix.net/posts/2026/system-of-record-ai/" rel="alternate"/><published>2026-02-16T20:01:00+08:00</published><updated>2026-02-16T20:01:00+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2026-02-16:/posts/2026/system-of-record-ai/</id><summary type="html">&lt;p&gt;I was having dinner with a VC friend and we ended up talking about the meltdown in SaaS valuations. When discussing SaaS, I often hear this logic: AI will commoditize a lot of software. But the Systems&lt;/p&gt;
&lt;p&gt;I was at dinner recently with a VC friend and we ended up …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I was having dinner with a VC friend and we ended up talking about the meltdown in SaaS valuations. When discussing SaaS, I often hear this logic: AI will commoditize a lot of software. But the Systems&lt;/p&gt;
&lt;p&gt;I was at dinner recently with a VC friend and we ended up talking about the meltdown in SaaS valuations. When discussing SaaS, I often hear this logic: AI will commoditize a lot of software. But the Systems of Record are safe. If you are the SoR, you have a moat &amp;amp; valuations will certainly recover.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I&amp;rsquo;m very skeptical of this logic: System of Record → Defensible valuation.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;What&amp;rsquo;s a System of Record&lt;/h2&gt;
&lt;p&gt;A SoR is the system where the authoritative write happens: the primary store of truth for a domain. The ledger, the CRM master, etc.&lt;/p&gt;
&lt;p&gt;Yes, those are critical and risky to replace but that&amp;rsquo;s not the only reason why they are valuable!&lt;/p&gt;
&lt;p&gt;Look at your own company. Your actual SoR is probably a transactional database. Postgres, SQL Server, etc.&lt;/p&gt;
&lt;p&gt;But no serious architecture stops there. You replicate data, reshape and repurpose it. You build warehouses and lakes because your OLTP database is not designed for aggregations, ML feature extraction, or complex reporting.&lt;/p&gt;
&lt;p&gt;This is why Databricks exists at its current scale. &lt;strong&gt;Databricks isn&amp;rsquo;t anyone&amp;rsquo;s SoR.&lt;/strong&gt; It sits downstream, consumes replicated data, and clearly commands a massive valuation. The value is in participation, not custody.&lt;/p&gt;
&lt;p&gt;Which suggests the story is more complicated.&lt;/p&gt;
&lt;p&gt;A SoR is one node in a broader graph of systems. Analytics engines. Workflow tools. They all exist because the primary store cannot and should not serve every read pattern or operational use case.&lt;/p&gt;
&lt;h2&gt;Now add AI.&lt;/h2&gt;
&lt;p&gt;If AI agents become the orchestrators of workflows, they will sit above many of these systems and interact with them programmatically. From the outside, that can make the SoR look like a commoditized state backend.&lt;/p&gt;
&lt;p&gt;Commoditized! That&amp;rsquo;s a word inversely correlated with valuation multiples. :)&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re a SaaS vendor and you sense that risk, a defensive reaction is understandable. Tighten APIs, constrain access, and rate limit aggressively.&lt;/p&gt;
&lt;p&gt;Essentially, make autonomous agents harder to integrate. Preserve control over the primary interface so you&amp;rsquo;re not reduced to a dumb store. Some vendors are already doing some of this.&lt;/p&gt;
&lt;p&gt;But as we just discussed, the value customers extract from a SoR rarely sits inside the SoR box itself. It emerges when that data participates in a broader system. When it flows into the other nodes.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The whole is greater than the sum of the parts.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you restrict access in order to defend a moat, you are shrinking the surface area through which that excess value is created. And customers will notice that. They will feel that the system is less flexible, less compounding, less future-proof.&lt;/p&gt;
&lt;p&gt;At that point, you are not just defending a moat. You are forcing your customers to ask whether the box is still worth the constraints.&lt;/p&gt;
&lt;p&gt;None of this means systems of record are unimportant. They clearly matter.&lt;/p&gt;
&lt;p&gt;It just means that &amp;ldquo;is it a system of record or not&amp;rdquo; is the wrong question.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The right question:&lt;/strong&gt; does this system become more valuable as the graph around it grows? Or does it depend on restricting that graph to preserve its position?&lt;/p&gt;</content><category term="misc"/><category term="ai"/><category term="saas"/></entry><entry><title>Agent Context Graphs: Color Me Skeptical</title><link href="https://antrix.net/posts/2026/agent-context-graphs/" rel="alternate"/><published>2026-01-26T20:10:00+08:00</published><updated>2026-01-26T20:10:00+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2026-01-26:/posts/2026/agent-context-graphs/</id><summary type="html">&lt;p&gt;Over the past few weeks, I&amp;rsquo;ve heard repeated mentions of this idea of &lt;strong&gt;&amp;ldquo;enterprise context graphs&amp;rdquo; paired with AI agents&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;One representative description goes like this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;.. the missing layer that actually runs enterprises: the decision traces – the exceptions, overrides, precedents, and cross-system context that currently live in Slack threads …&lt;/p&gt;&lt;/blockquote&gt;</summary><content type="html">&lt;p&gt;Over the past few weeks, I&amp;rsquo;ve heard repeated mentions of this idea of &lt;strong&gt;&amp;ldquo;enterprise context graphs&amp;rdquo; paired with AI agents&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;One representative description goes like this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;.. the missing layer that actually runs enterprises: the decision traces – the exceptions, overrides, precedents, and cross-system context that currently live in Slack threads, deal desk conversations, escalation calls, and people&amp;rsquo;s heads. [&amp;hellip;] Once you have decision records, the &amp;lsquo;why&amp;rsquo; becomes first-class data. Over time, these records naturally form a context graph: the entities the business already cares about (accounts, renewals, tickets, incidents, policies, approvers, agent runs) connected by decision events (the moments that matter) and &amp;lsquo;why&amp;rsquo; links.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;What&amp;rsquo;s interesting, and perhaps somewhat telling, is that the only people I&amp;rsquo;ve seen talk about these &amp;ldquo;context graphs&amp;rdquo; are VCs, not engineers.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve seen firsthand three attempts to create a unified data &amp;amp; domain model across just one slice of an enterprise fail after years of effort. Many CTOs I know have similar stories. We all have scars from this.&lt;/p&gt;
&lt;p&gt;The reason is simple. &lt;strong&gt;It is genuinely hard to unify the complexity of a real company into a clean data model.&lt;/strong&gt; Hard enough to capture end-to-end &lt;em&gt;what&lt;/em&gt; a large company does, let alone &lt;em&gt;why&lt;/em&gt;!&lt;/p&gt;
&lt;p&gt;So I remain skeptical that such a &amp;ldquo;context graph&amp;rdquo; can be built in any clean or durable way.&lt;/p&gt;
&lt;h2&gt;But wait. The argument is that AI agents change the equation.&lt;/h2&gt;
&lt;p&gt;Because agents sit directly within business workflows, the claim is that they are naturally exposed to the &amp;ldquo;why,&amp;rdquo; and can capture decision traces as decisions are made, allowing a context graph to emerge organically over time.&lt;/p&gt;
&lt;p&gt;To quote again:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;When an agent triages an escalation, responds to an incident, or decides on a discount, it pulls context from multiple systems, evaluates rules, resolves conflicts, and acts. The orchestration layer sees the full picture: what inputs were gathered, what policies applied, what exceptions were granted, and why. Because it&amp;rsquo;s executing the workflow, it can capture that context at decision time – not after the fact via ETL, but in the moment, as a first-class record.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;I find this logic suspect, because the hard problem here is not data capture. It is representation.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Capturing traces and rationales is one thing. Representing them in a structured graph of entities and relationships is where things repeatedly break down. That modeling problem doesn&amp;rsquo;t disappear just because an agent is in the loop.&lt;/p&gt;
&lt;p&gt;And if the claim is that &lt;em&gt;structure&lt;/em&gt; doesn&amp;rsquo;t really matter because AI can figure it out anyway, then it&amp;rsquo;s not clear why we need to talk about a &amp;ldquo;context graph&amp;rdquo; at all! AI should already be able to infer what it needs from Slack threads, deal desk conversations, escalation calls, and people&amp;rsquo;s notes.&lt;/p&gt;
&lt;p&gt;I can&amp;rsquo;t help but think that this is less about a real, unmet need, and more about naming and inflating a new startup category that glosses over some very old, very hard problems.&lt;/p&gt;</content><category term="misc"/><category term="ai"/></entry><entry><title>Sravya, the Genesis</title><link href="https://antrix.net/posts/2025/sravya-genesis/" rel="alternate"/><published>2025-08-24T14:01:00+08:00</published><updated>2025-08-24T14:01:00+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2025-08-24:/posts/2025/sravya-genesis/</id><summary type="html">&lt;h2&gt;A Spark at the Stanford Executive Program&lt;/h2&gt;
&lt;p&gt;June 2025 was a turning point. I found myself in the thick of the &lt;a href="https://www.gsb.stanford.edu/exec-ed/programs/stanford-executive-program"&gt;Stanford Executive Program&lt;/a&gt;, six weeks of full-time classes on the beautiful Stanford campus. Strategy, leadership, culture, communications, AI, and more. It was like a compressed MBA, taught by some …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;A Spark at the Stanford Executive Program&lt;/h2&gt;
&lt;p&gt;June 2025 was a turning point. I found myself in the thick of the &lt;a href="https://www.gsb.stanford.edu/exec-ed/programs/stanford-executive-program"&gt;Stanford Executive Program&lt;/a&gt;, six weeks of full-time classes on the beautiful Stanford campus. Strategy, leadership, culture, communications, AI, and more. It was like a compressed MBA, taught by some of the best minds in the world. It was intense and exhilarating.&lt;/p&gt;
&lt;p&gt;And the reading. My god, the reading. Every class came with loads of it! Case studies. Book chapters. Sometimes more than one per session. Enough to bury anyone.&lt;/p&gt;
&lt;p&gt;But here’s the thing. I don’t really “read” anymore. I used to read a lot of books and long form content. But then life got too busy, and I found it increasingly hard to sit down and read at length. These days I listen instead of reading. Podcasts, audiobooks, interviews, even research papers if I can get them in audio. I listen while jogging, commuting, making coffee.&lt;/p&gt;
&lt;p&gt;So staring at this mountain of reading at Stanford, I wondered, what if I didn’t read any of it? What if I could listen to it instead? Could I turn every case study and assigned chapter into a personal podcast feed?&lt;/p&gt;
&lt;p&gt;I had been deeply immersed in the evolution of Generative AI since 2023 and I knew that building an app that did exactly this was possible using the current state of AI. But where would I find the time to develop this app?&lt;/p&gt;
&lt;h2&gt;&lt;em&gt;Vibe Coding&lt;/em&gt;&lt;/h2&gt;
&lt;p&gt;A few weeks before Stanford, back at work, a colleague showed me &lt;a href="https://firebase.google.com/"&gt;Google Firebase&amp;rsquo;s&lt;/a&gt; &lt;a href="https://firebase.google.com/docs/studio/get-started-ai"&gt;App Prototyping agent&lt;/a&gt;. You simply describe what you want and it generates a working app. It&amp;rsquo;s called &lt;em&gt;vibe coding,&lt;/em&gt; a term &lt;a href="https://x.com/karpathy/status/1886192184808149383"&gt;coined by Andrej Karpathy&lt;/a&gt; in early 2025:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The idea is to let the AI write the code, test it, fix it, and iterate till it works. You guide it with prompts, course-correct where needed, and accept the messiness. You &lt;em&gt;never read&lt;/em&gt; the actual code!&lt;/p&gt;
&lt;p&gt;I had been thinking about &lt;em&gt;vibe coding&lt;/em&gt; since then. Could this really work beyond toy demos? Were the critics right about fragile architectures, spaghetti code, security holes waiting to happen? I wanted to know for myself, which made this the perfect experiment: kill two birds with one stone. Try vibe coding, and maybe solve my reading overload at Stanford!&lt;/p&gt;
&lt;h2&gt;The Vision: Auto-Podcastify My Stanford Readings&lt;/h2&gt;
&lt;p&gt;The idea took shape. &lt;strong&gt;Build an app that converts assigned readings into a podcast.&lt;/strong&gt; Drop in a PDF or a chapter, get back a narrated audio file. Stitch them together, and I’d have my own SEP podcast feed to queue on a morning run.&lt;/p&gt;
&lt;p&gt;This was the genesis of &lt;a href="https://sravya.app/"&gt;Sravya: an app that lets you Read Less, Listen More&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It sounded simple, but building this would be anything but trivial. I didn&amp;rsquo;t want the app to create dumb read-aloud versions of text. It couldn&amp;rsquo;t be your basic text-to-speech. It needed to be smart enough not to read boilerplate headers and footers. It needed to know how to handle inherently visual artifacts like charts and illustrations. It had to &lt;em&gt;sound good&lt;/em&gt;, like it was really meant to be heard as audio.&lt;/p&gt;
&lt;p&gt;AI could probably get me there, but only if I told it clearly what I wanted. With these systems, vague instructions give you unusable results. Garbage in, garbage out.&lt;/p&gt;
&lt;h2&gt;Writing the Spec&lt;/h2&gt;
&lt;p&gt;So I needed a &lt;em&gt;product specification&lt;/em&gt; first. I began by &lt;em&gt;talking&lt;/em&gt; to ChatGPT, using voice dictation to dump my messy, half-formed ideas. The AI turned my ramblings into a rough product spec, something that started to look like real requirements. It listed features, inputs and outputs, a user flow, even some edge cases. I was &lt;em&gt;vibe speccing&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Of course it wasn’t perfect. I had to push back, refine, add detail, re-explain things I hadn’t been clear about. We went back and forth. Each round improved the spec until it started to feel like something I could actually build from. When it felt close, I asked ChatGPT to break it into milestones. I knew from experience that if you throw a huge, complex project at these AI tools all at once, they choke. But if you build step by step, they have a fighting chance.&lt;/p&gt;
&lt;p&gt;Armed with a spec and a set of milestones, I opened up Firebase Studio and got to &lt;em&gt;vibe coding&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;Coming Next&lt;/h2&gt;
&lt;p&gt;In the next post, I’ll walk through what actually happened inside Firebase Studio.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What did the tool build?&lt;/li&gt;
&lt;li&gt;What surprised me?&lt;/li&gt;
&lt;li&gt;Where did vibe coding shine, and where did it trip?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;To be continued &amp;hellip;&lt;/em&gt; &lt;/p&gt;</content><category term="misc"/><category term="sravya"/><category term="ai"/></entry><entry><title>Forever Thirty</title><link href="https://antrix.net/posts/2025/forever-thirty/" rel="alternate"/><published>2025-01-11T18:01:00+08:00</published><updated>2025-01-11T18:01:00+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2025-01-11:/posts/2025/forever-thirty/</id><summary type="html">&lt;p&gt;Aurora adjusted her e-lenses, the protest below snapping into sharp relief. From the hundred-and-twentieth floor, the chanting was a low thrum, the signs held aloft like desperate pleas to indifferent gods. LONG LIFE FOR ALL. STOP THE MONOPOLY. The same demands, echoing daily against the mirrored facades of the Big …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Aurora adjusted her e-lenses, the protest below snapping into sharp relief. From the hundred-and-twentieth floor, the chanting was a low thrum, the signs held aloft like desperate pleas to indifferent gods. LONG LIFE FOR ALL. STOP THE MONOPOLY. The same demands, echoing daily against the mirrored facades of the Big Four, a quartet of megacorporations whose tendrils wrapped around every facet of civilization. Helion, the largest, loomed dominant, its name a byword for both innovation and ruthless control – particularly when it came to GeneLife&amp;rsquo;s anti-aging therapy.&lt;/p&gt;
&lt;p&gt;A sigh escaped her lips, misting briefly in the recycled air of her private balcony. Inside Helion&amp;rsquo;s walls, time was a commodity, carefully rationed. Employees received their annual rejuvenation cycle, a precise cocktail of gene therapies that arrested aging, granting them extra decades, even centuries. Veterans, some well over two hundred, moved with the practiced efficiency of seasoned professionals in their prime. Decades, even centuries, spent within Helion had forged an unrivaled depth of institutional memory, a fortress of knowledge. No upstart competitor – constantly losing talent both to the constraints of a natural lifespan and to the irresistible draw of the Big Four&amp;rsquo;s life extension – could hope to overcome such accumulated expertise. Smaller companies withered, their brightest minds inevitably drawn to the gilded cage of extended life, leaving behind skeletal remains of forgotten industries. Venture capital dried up for any enterprise not blessed by a Big Four partnership – who would invest in a future measured in mere decades when centuries were on offer?&lt;/p&gt;
&lt;p&gt;Ten years ago, a ripple of hope had spread among the masses barred from the therapy, those condemned to the brevity of a normal lifespan. A group of independent scientists, fueled by ethical outrage, had dared to pursue their own research. Helion’s legal eagles descended like harpies, the ensuing lawsuits protracted and brutal. The scientists, denied the very therapy they sought to democratize, now lived their dwindling, accelerated lifespans in frustrated obscurity, a stark warning to any would-be challengers. Whispers of alternative research now vanished like smoke in the wind, snuffed out by the sheer economic gravity of the Big Four.&lt;/p&gt;
&lt;p&gt;Aurora retreated inside, the balcony doors hissing shut, muffling the distant cries. She filed her daily reports – the same incremental progress, the same layers of bureaucratic approval. A quiet rebellion flickered within her: the yearning to build, to create something beyond Helion&amp;rsquo;s meticulously controlled ecosystem. But the thought felt as flimsy as the protesters&amp;rsquo; cardboard signs against the monolithic power she served. Who would trade a guaranteed 200-year lifespan for the uncertain trajectory of a dream?&lt;/p&gt;
&lt;p&gt;She caught her reflection in the polished chrome of a passing service bot. The same smooth skin, the same untroubled eyes she&amp;rsquo;d had since her thirtieth birthday, four decades ago. &amp;ldquo;Forever Thirty,&amp;rdquo; she murmured, the old joke tasting like ash in her mouth. It was a promise, a perk, a prison. The chants outside seemed to grow louder in the sterile silence of the corridor, each syllable a tiny hammer blow against the walls of her carefully constructed reality. As she walked towards her next meeting, the pervasive hum of Helion a constant companion, the unspoken truth settled heavy in her chest: &amp;ldquo;forever&amp;rdquo; wasn&amp;rsquo;t a gift; it was a leash, and the protesters, for all their perceived naiveté, might be the only ones seeing the cage for what it truly was.&lt;/p&gt;</content><category term="misc"/><category term="micro"/></entry><entry><title>Large Language Models: Changing the Future of Communication</title><link href="https://antrix.net/posts/2023/llm-communications/" rel="alternate"/><published>2023-11-04T08:01:00+05:30</published><updated>2023-11-04T08:01:00+05:30</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2023-11-04:/posts/2023/llm-communications/</id><summary type="html">&lt;p&gt;Every significant stride in human history has hinged on one key factor: communication. Our world, our societies, and our cultures have been shaped by our ability to connect and share ideas, stories, and knowledge. The advent of computers and the internet supercharged this exchange, facilitating unprecedented levels of global interaction …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Every significant stride in human history has hinged on one key factor: communication. Our world, our societies, and our cultures have been shaped by our ability to connect and share ideas, stories, and knowledge. 
The advent of computers and the internet supercharged this exchange, facilitating unprecedented levels of global interaction. &lt;/p&gt;
&lt;p&gt;But these interactions have largely been human-driven. As computers can&amp;rsquo;t independently process or understand ambiguous human language(s), they need precise instructions to &amp;lsquo;communicate&amp;rsquo;. This gave birth to programming, a complex language for &amp;lsquo;talking&amp;rsquo; to machines.&lt;/p&gt;
&lt;p&gt;The stark contrast between human communication, with its richness and adaptability, and the rigid structure of machine communication became more evident as our dependence on technology grew. But the emergence of Large Language Models (LLMs) is gradually bridging this gap, making computers capable of adapting their communication in a more human-like way.&lt;/p&gt;
&lt;p&gt;Humans learn by experiencing, observing, and replicating. Early AI research sought to emulate this process but faltered due to the paucity of data and computational power. The AI domain then shifted focus to task-specific learning models, such as fraud detection and image recognition. However, the surge of data from the internet and the advent of GPUs – initially designed for gaming, but exceptionally apt for data processing – revived the &amp;lsquo;learning by example&amp;rsquo; approach, leading to the creation of LLMs.&lt;/p&gt;
&lt;p&gt;LLMs, fundamentally, are software capable of interacting through various mediums - text, voice, images, and video - mimicking the ambiguity and adaptability characteristic of human communication. Just as humans use foundational communication skills to learn, gather feedback, and build other skills, LLMs can be directed to do the same. Instead of creating task-specific AI, we can take an LLM that is capable of generalised communication and teach it task-specific skills. This opens up significant opportunities to streamline and automate many tasks and processes that currently require extensive custom software development.&lt;/p&gt;
&lt;p&gt;This paradigm shift heralds a revolution in communication: human-to-computer, human-to-human, and computer-to-computer.&lt;/p&gt;
&lt;h2&gt;Human-to-Computer Communication&lt;/h2&gt;
&lt;p&gt;LLMs promise a future where we don&amp;rsquo;t need to meticulously instruct computers on every task, thereby eliminating the need for a programmer for every task. Picture using Excel: remember the thrill of employing a function to automate a process? Now imagine applying this excitement to every part of your life. By describing your needs in plain language and providing a few examples, an LLM assistant can become your digital butler, ready to carry out your requests. It&amp;rsquo;s an era of human augmentation that we&amp;rsquo;re only beginning to explore. This is also the area with the most investment today, be that in the form of better chatbots for customer service, or AI assistants that speed up data analysis and content creation.&lt;/p&gt;
&lt;p&gt;Applying these technologies to the wealth management space, Endowus is actively pursuing several LLM initiatives to enhance our services and provide our customers with an increasingly seamless, digitally-enabled experience. This will be accomplished through the design and launch of a variety of AI-driven Chatbots, and a Fund Recommendation Engine that adopts a natural conversational style to capture customers&amp;rsquo; goals and provide sophisticated, personalised recommendations.&lt;/p&gt;
&lt;h2&gt;Human-to-Human Communication&lt;/h2&gt;
&lt;p&gt;LLMs are fundamentally about understanding human communication in all its forms and across all mediums. This understanding is not language-specific – it merely requires an abundance of examples. &lt;/p&gt;
&lt;p&gt;Consequently, LLMs are poised to catalyse &lt;a href="https://about.fb.com/news/2023/05/ai-massively-multilingual-speech-technology/"&gt;a revolution in language translation&lt;/a&gt;, rendering high-quality translations imbued with the original text&amp;rsquo;s nuance, idiom, and metaphor. Imagine reading &amp;lsquo;Anna Karenina&amp;rsquo; or &amp;lsquo;The Three-Body Problem&amp;rsquo; in any language, with no delay for translation and no compromise on quality. &lt;/p&gt;
&lt;p&gt;As this technology matures, translations will be done in real time, enabling us to speak with each other in our native languages &amp;amp; yet understand each other completely. The fabled Babel Fish from &amp;lsquo;The Hitchhiker&amp;rsquo;s Guide to the Galaxy&amp;rsquo; is on the verge of becoming a reality. &lt;/p&gt;
&lt;p&gt;For businesses, this represents a significant leverage opportunity for efficiently translating all forms of marketing and business communications to support regional aspirations and expansion into new markets.&lt;/p&gt;
&lt;h2&gt;Computer-to-Computer Communication&lt;/h2&gt;
&lt;p&gt;As someone whose background is that of a software developer, I find the potential of LLMs in computer-to-computer communication particularly fascinating. The dream of autonomous software agents, previously restrained by their inability to adapt to new scenarios, now appears within reach. &lt;/p&gt;
&lt;p&gt;Do you recall that cinematic moment in Independence Day when Jeff Goldblum&amp;rsquo;s character audaciously infects an alien spacecraft with a computer virus? As a software programmer, I always found that scene a bit hard to swallow. I mean, here on Earth, we face hurdles in getting a simple Windows app to run seamlessly on a Mac, and vice versa. Yet in the realm of Hollywood, we somehow managed to implant our human-made software, a computer virus, into entirely alien technology!&lt;/p&gt;
&lt;p&gt;With LLMs, software agents could negotiate a shared language (a protocol in software terms), enabling them to interact without needing human intermediaries. That scenario from Independence Day is completely in the realm of possibility now!&lt;/p&gt;
&lt;p&gt;As we look to the future, I foresee software development focusing on two key areas: the foundational, making LLMs more efficient and cost-effective, and the applied, leveraging LLMs in diverse contexts. Traditional programming might still exist, but it will morph into a specialised dialect, akin to doctors or physicists communicating in their domain-specific terminology.&lt;/p&gt;
&lt;h2&gt;The Future&lt;/h2&gt;
&lt;p&gt;Large Language Models signify a profound shift in the communication paradigm, offering a path towards a more intertwined human-digital world. It&amp;rsquo;s about harnessing practical solutions that enrich our lives, transforming our reality by infusing it with elements of a previously imagined futuristic sci-fi world. As custodians of this technology, we must handle it with care, ensuring that it aligns with our values and serves our collective needs.&lt;/p&gt;
&lt;p&gt;As we navigate this new frontier, it&amp;rsquo;s crucial to embrace the possibilities while acknowledging the accompanying challenges. By treating this progress with thoughtful curiosity, we can orchestrate a future where technology becomes an integral and harmonious part of our daily interactions and communications.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;An edited version of this post was first published here: &lt;a href="https://www.smehorizon.com/large-language-models-5-takeaways-for-smes/"&gt;https://www.smehorizon.com/large-language-models-5-takeaways-for-smes/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content><category term="misc"/><category term="ai"/><category term="collaboration"/><category term="llm"/><category term="tech"/><category term="software"/></entry><entry><title>Android Work Profile</title><link href="https://antrix.net/posts/2021/android-work-profile/" rel="alternate"/><published>2021-11-13T10:10:10+08:00</published><updated>2021-11-13T10:10:10+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2021-11-13:/posts/2021/android-work-profile/</id><summary type="html">&lt;p&gt;Android has a neat feature called &lt;a href="https://support.google.com/work/android/answer/6191949?hl=en"&gt;Work Profile&lt;/a&gt; that sandboxes work apps &amp;amp; data from personal apps &amp;amp; data. If needed, your company can (remotely) manage and/or wipe just the Work related data without touching your personal data. This makes it really easy to use personal devices for work usage, while …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Android has a neat feature called &lt;a href="https://support.google.com/work/android/answer/6191949?hl=en"&gt;Work Profile&lt;/a&gt; that sandboxes work apps &amp;amp; data from personal apps &amp;amp; data. If needed, your company can (remotely) manage and/or wipe just the Work related data without touching your personal data. This makes it really easy to use personal devices for work usage, while staying compliant with data security requirements.&lt;/p&gt;
&lt;p&gt;However, Android doesn&amp;rsquo;t include any built-in user interface to create and manage a Work profile. This feature is only exposed as APIs that 3rd party Mobile Device Management (MDM) vendor products leverage. Examples: &lt;a href="https://www.vmware.com/asean/products/workspace-one/android-management.html"&gt;VMWare&lt;/a&gt;, &lt;a href="https://workspace.google.com/intl/en_sg/products/admin/endpoint/"&gt;Google&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Thankfully, there&amp;rsquo;s a neat OSS app called &lt;a href="https://f-droid.org/en/packages/net.typeblog.shelter/"&gt;Shelter&lt;/a&gt; that implements a UI on top of the APIs, making it possible for you to create a Work profile right on the phone.&lt;/p&gt;
&lt;p&gt;With Shelter, you can create a Work profile and manage apps within it, all from your phone. This is incredibly useful if your company uses Google Workspace since it keeps your personal Google account separate from the Google Workspace account that is used for work. You get two different instances of GMail, Calendar, Drive, etc. No cognitive overhead of making sure that you are using the right account!&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s a complete experience! Links opened from a Work profile app open in Chrome (or another browser) within the Work profile. Clipboard &amp;amp; filesystem contents are separated so there&amp;rsquo;s no accidental leak of data from Work to Personal or vice-versa. Very nicely done &amp;amp; highly recommended!&lt;/p&gt;
&lt;p&gt;One slightly unintuitive aspect of setting up Shelter was enabling Widgets. I wanted to have the calendar widget on the Android homescreen and the widget wasn&amp;rsquo;t available at first. The solution was to open the Shelter app, navigate to the &lt;code&gt;Calendar&lt;/code&gt; app entry on the Shelter profile, click on it and then enable the &lt;code&gt;Allow Widgets in Main Profile&lt;/code&gt; setting for the app.&lt;/p&gt;
&lt;p&gt;I really wish Windows and MacOS also implemented something like a Work Profile for our desktop environments as well. In this post-covid world where work-from-home is the norm, Bring Your Own Device (BYOD) is not limited to phones any more!&lt;/p&gt;</content><category term="misc"/><category term="android"/></entry><entry><title>Migrating to AWS Amplify</title><link href="https://antrix.net/posts/2021/aws-amplify/" rel="alternate"/><published>2021-09-30T18:30:00+08:00</published><updated>2021-09-30T18:30:00+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2021-09-30:/posts/2021/aws-amplify/</id><summary type="html">&lt;p&gt;In a &lt;a href="https://antrix.net/posts/2021/goaccess/"&gt;recent blog post&lt;/a&gt;, I wrote:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;this site runs on a shared host provided by Dreamhost. I know that sounds awfully outmoded in this day and age of containerizing everything! But I&amp;rsquo;ve had antrix.net on shared hosting for more than fifteen years now and if it ain …&lt;/p&gt;&lt;/blockquote&gt;</summary><content type="html">&lt;p&gt;In a &lt;a href="https://antrix.net/posts/2021/goaccess/"&gt;recent blog post&lt;/a&gt;, I wrote:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;this site runs on a shared host provided by Dreamhost. I know that sounds awfully outmoded in this day and age of containerizing everything! But I&amp;rsquo;ve had antrix.net on shared hosting for more than fifteen years now and if it ain&amp;rsquo;t broke&amp;hellip;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I guess it was time to &lt;em&gt;fix it!&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;Being in between jobs and &lt;a href="https://en.wikipedia.org/wiki/COVID-19_lockdowns"&gt;unable to travel&lt;/a&gt;, I found myself with a few days of downtime on hand. I spent some of that time evaluating a few of the newer jamstack/serverless/heroku-inspired hosting services, &lt;a href="https://twitter.com/antrix/status/1440889098646933513"&gt;live tweeting&lt;/a&gt; along the way. In the end, I&amp;rsquo;ve settled on &lt;a href="https://aws.amazon.com/amplify/"&gt;AWS Amplify&lt;/a&gt; for the moment and migrated &lt;a href="https://devdriven.by/"&gt;devdriven.by&lt;/a&gt; and &lt;a href="https://antrix.net/"&gt;antrix.net&lt;/a&gt; to it. &lt;/p&gt;
&lt;p&gt;This post documents my setup on Amplify, i.e., a prime example of Infra-as-blog-post.&lt;/p&gt;
&lt;h2&gt;Build Setup&lt;/h2&gt;
&lt;p&gt;As I&amp;rsquo;ve &lt;a href="/posts/2020/reboot-2/"&gt;described previously&lt;/a&gt;, I use &lt;a href="https://blog.getpelican.com/"&gt;Pelican&lt;/a&gt; as the static site generator to build this site. Pelican provides a &lt;code&gt;Makefile&lt;/code&gt; that encapsulates the entire site generation process. Thus, replicating it in Amplify&amp;rsquo;s app build specification was straightforward:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;1&lt;/span&gt;
&lt;span class="nt"&gt;frontend&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;phases&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;build&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;commands&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;pip3 install -r requirements.txt&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;cd site&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;make publish&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;artifacts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;baseDirectory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;site/output&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;**/*&amp;#39;&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;paths&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I&amp;rsquo;ve setup Amplify to build two branches: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Every commit to &lt;code&gt;develop&lt;/code&gt; branch is built &amp;amp; deployed to &lt;a href="https://next.antrix.net/"&gt;https://next.antrix.net/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Every commit to &lt;code&gt;main&lt;/code&gt; branch is built &amp;amp; deployed to &lt;a href="https://antrix.net/"&gt;https://antrix.net/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I used the &lt;em&gt;Environment Variables&lt;/em&gt; facility of Amplify to set up a variable named &lt;code&gt;STAGING&lt;/code&gt; which is active in the &lt;code&gt;develop&lt;/code&gt; branch build.&lt;/p&gt;
&lt;p&gt;&lt;img src="/static/posts/2021/aws-amplify/aws-amplify-env-vars.png" alt="environment variables config"/&gt;&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;Makefile&lt;/code&gt; uses the &lt;code&gt;STAGING&lt;/code&gt; variable to make some changes in the build, e.g., use a different &lt;code&gt;robots.txt&lt;/code&gt; with a global deny rule for the staging website.&lt;/p&gt;
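&lt;p&gt;As a rough sketch of that idea - the target, variable, and file names below are hypothetical, not my actual &lt;code&gt;Makefile&lt;/code&gt; - a conditional can select which &lt;code&gt;robots.txt&lt;/code&gt; ends up in the build output:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# Hypothetical sketch: pick a deny-all robots.txt when STAGING is set
ifdef STAGING
ROBOTS_SRC := extras/robots-staging.txt
else
ROBOTS_SRC := extras/robots.txt
endif

publish:
	pelican content -s publishconf.py
	cp $(ROBOTS_SRC) output/robots.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Since Amplify exports branch-level environment variables into the build shell, the &lt;code&gt;develop&lt;/code&gt; build sees &lt;code&gt;STAGING&lt;/code&gt; defined and picks the deny-all file, while &lt;code&gt;main&lt;/code&gt; does not.&lt;/p&gt;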
&lt;h2&gt;Domain Setup&lt;/h2&gt;
&lt;p&gt;Amplify uses S3 for storage and CloudFront as the CDN for static content. But these services are not exposed to you and are managed behind the scenes within Amplify&amp;rsquo;s service accounts. The end result is that Amplify creates a &lt;code&gt;&amp;lt;some-id&amp;gt;.cloudfront.net&lt;/code&gt; FQDN and also makes your site available on &lt;code&gt;&amp;lt;id&amp;gt;.amplifyapp.com&lt;/code&gt;. To use a custom domain such as &lt;code&gt;antrix.net&lt;/code&gt;, we need to set up DNS records that point to the CloudFront FQDN.&lt;/p&gt;
&lt;p&gt;While I&amp;rsquo;ve now migrated hosting away from Dreamhost, I continue to use them for DNS management. Following Amplify&amp;rsquo;s instructions, I set up four DNS records:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;@        ALIAS    &amp;lt;id&amp;gt;.cloudfront.net
www      CNAME    &amp;lt;id&amp;gt;.cloudfront.net
next     CNAME    &amp;lt;id&amp;gt;.cloudfront.net
&amp;lt;id2&amp;gt;    CNAME    &amp;lt;id3&amp;gt;.acm-validations.aws
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That last one is to provide AWS proof of domain name ownership.&lt;/p&gt;
&lt;h2&gt;Rewrites &amp;amp; Redirects&lt;/h2&gt;
&lt;p&gt;Amplify provides flexible URL rewrite/redirect features that addressed all of my requirements. You can create these using their web console and changes appear to take effect instantaneously. Here&amp;rsquo;s my configuration:&lt;/p&gt;
&lt;p&gt;&lt;img src="/static/posts/2021/aws-amplify/aws-amplify-rewrites-redirects.png" alt="rewrites and redirects config"/&gt;&lt;/p&gt;
&lt;h2&gt;Performance&lt;/h2&gt;
&lt;p&gt;The few builds I&amp;rsquo;ve observed so far trigger almost immediately after a push to GitHub and then complete deployment in about a minute. This is quite good already and I suspect it can be improved further using build caching.&lt;/p&gt;
&lt;p&gt;On to the page load performance, this was the site&amp;rsquo;s &lt;a href="https://developers.google.com/web/tools/lighthouse/"&gt;Lighthouse&lt;/a&gt; score prior to migrating:&lt;/p&gt;
&lt;p&gt;&lt;img src="/static/posts/2021/aws-amplify/aws-amplify-lighthouse-pre-migration.png" alt="lighthouse score before migration"/&gt;&lt;/p&gt;
&lt;p&gt;And this is the score post migration to Amplify:&lt;/p&gt;
&lt;p&gt;&lt;img src="/static/posts/2021/aws-amplify/aws-amplify-lighthouse-post-migration.png" alt="lighthouse score after migration"/&gt;&lt;/p&gt;
&lt;p&gt;As expected, the performance score improved thanks to edge caching enabled by CloudFront.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Senior Engineer in the Sales Trading development team within Citi Equities Technology. 
In this role, my responsibilities include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Working with desk traders and market connectivity teams on several features.&lt;/li&gt;
&lt;li&gt;Software development primarily in Java on Linux with some Perl, Solaris …&lt;/li&gt;&lt;/ul&gt;&lt;/blockquote&gt;</summary><content type="html">&lt;p&gt;Take a look at this snippet taken from a resume:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Senior Engineer in the Sales Trading development team within Citi Equities Technology. 
In this role, my responsibilities include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Working with desk traders and market connectivity teams on several features.&lt;/li&gt;
&lt;li&gt;Software development primarily in Java on Linux with some Perl, Solaris as well.&lt;/li&gt;
&lt;li&gt;Ensuring high quality code using unit testing and other practices.&lt;/li&gt;
&lt;li&gt;Driving quality improvement &amp;amp; cost reduction initiatives.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;As a hiring manager reading this resume, it&amp;rsquo;ll be really hard for me to get a good sense of the candidate based on the above snippet.
The entire snippet reads like a job description. Isn&amp;rsquo;t that good? If the description of the job that the candidate claims to have done 
matches the description of the job that I&amp;rsquo;m hiring for, isn&amp;rsquo;t that all I need as a hiring manager? Not quite. &lt;/p&gt;
&lt;p&gt;The reality is that there are only so many ways to describe a software engineering role. So if resumes are nothing more than descriptions of the job, 
every resume starts to look the same and there&amp;rsquo;s nothing to differentiate one candidate from another. Thus, a resume written like a job description 
is failing at its primary function: getting the hiring manager to pick it out of a stack and shortlist the candidate for an interview.&lt;/p&gt;
&lt;p&gt;Now take a look at this snippet:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Senior Engineer in the Sales Trading development team within Citi Equities Technology.
Worked in a team of four that developed PTE – the primary trading system used by Citi’s Program Trading desk globally.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;On-boarded the Program Trading desk onto two new exchanges, collaborating with desk traders and the market connectivity teams.&lt;/li&gt;
&lt;li&gt;Built a FIX to Proprietary messaging adapter that allowed 70MM$ notional daily order flow to be routed to smaller markets and also serve as a failover route to the larger markets.&lt;/li&gt;
&lt;li&gt;Migrated dozens of Java services and Perl/Shell scripts from Solaris to Linux yielding infra cost savings of $90K/year.&lt;/li&gt;
&lt;li&gt;Spearheaded the Development Maturity Model, a development efficiency improvement program within Global Equities. Raised team’s score by 54%.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;It&amp;rsquo;s for the same role but is now re-written to focus on tangible achievements and outcomes. There&amp;rsquo;s still a &lt;em&gt;job description&lt;/em&gt; but it&amp;rsquo;s 
limited to the first couple of lines to provide context. It&amp;rsquo;ll be hard to come across another resume that looks exactly like this!&lt;/p&gt;
&lt;p&gt;Not only does such an outcomes focussed resume stand out in front of a hiring manager, it also provides them with data that they can use 
to make a rough assessment of the level at which this person performs. This makes it easier for them to determine if the candidate would be a good fit 
for the scope/level of the role being hired for.&lt;/p&gt;
&lt;p&gt;Writing resumes like this isn&amp;rsquo;t very hard to do. While the earlier example is from &lt;a href="/pages/resume/"&gt;my own resume&lt;/a&gt;, let me share another example
from the resume of a friend whom I recently helped with this exercise.&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s a section of his resume from before:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;As a senior technologist in the Private Banking Technology group at {Large Bank}, I am responsible for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Development of pre-trade disclosure automation platform which is part of digitization strategy of the bank to ensure all risks are disclosed accurately.&lt;/li&gt;
&lt;li&gt;Heading the architecture and implementation of price discovery and execution platform for Foreign Exchange Options trading.&lt;/li&gt;
&lt;li&gt;Participate in wider code and design review meetings to ensure adherence to clean coding and other agile practices.&lt;/li&gt;
&lt;li&gt;Screening and interviewing software engineers for the division.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;It mentions some specifics of the role but by and large, the entire section still reads like a job description. &lt;/p&gt;
&lt;p&gt;Now here&amp;rsquo;s the same role, re-written to be outcomes oriented:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;As a senior technologist in the Private Banking Technology group at {Large Bank}, I have:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Owned the greenfield buildout of FX Options price discovery and execution platform which enabled sales traders to efficiently get the best deal across the street and also reduced the manual booking efforts of Middle Office team by 80-90%&lt;/li&gt;
&lt;li&gt;Onboarded majority of FX Vanilla options to new automated flow supporting a monthly trading volume of nearly $125M&lt;/li&gt;
&lt;li&gt;Provided valuable data insights to senior management on key metrics related to platform usage and influenced future strategy&lt;/li&gt;
&lt;li&gt;Mentored software engineers to help them establish best practices for software development and automated testing to ensure fast time to market&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;Hopefully, you will agree that the re-written version is much more differentiated and impactful.&lt;/p&gt;
&lt;p&gt;A great piece of career advice that I received more than a decade ago was to update my resume every six to twelve months regardless of whether I was looking for a new job. The act of updating the resume provides the catalyst you need to reflect on your own career growth. If in six months or a year you have nothing meaningful to add to your resume, then clearly you aren&amp;rsquo;t growing in the role and it&amp;rsquo;s time to explore new options.&lt;/p&gt;
&lt;p&gt;Obviously, implementing this advice would be rather difficult with a resume that reads like a job description! An outcomes-oriented resume forces you to take stock of your career and be honest about your own progression.&lt;/p&gt;
&lt;p&gt;So go ahead and take a good look at your resume. If it doesn&amp;rsquo;t mention what you&amp;rsquo;ve actually achieved, spend a little time and make it outcomes-oriented today!&lt;/p&gt;
&lt;p&gt;As we do in most situations, more so when talking about culture, we start with the People.&lt;/p&gt;
&lt;h2&gt;People&lt;/h2&gt;
&lt;p&gt;People model …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&amp;ldquo;How do you build a good documentation culture in a company?&amp;rdquo; — that was the question posed by a friend who&amp;rsquo;s the VP of Engineering at a fast growing startup.&lt;/p&gt;
&lt;p&gt;As we do in most situations, more so when talking about culture, we start with the People.&lt;/p&gt;
&lt;h2&gt;People&lt;/h2&gt;
&lt;p&gt;People model the behaviours that they see practiced and rewarded around them. Those behaviours become your culture.&lt;/p&gt;
&lt;p&gt;Good documentation starts with a good writing culture. So you want to make good, clear writing the norm in your organization.&lt;/p&gt;
&lt;p&gt;There are plenty of opportunities to model &amp;amp; reinforce a good writing culture. Are your emails well written? When you see someone writing an ineffective email, do you take the time to coach them on &lt;a href="https://hbr.org/2016/11/how-to-write-email-with-military-precision"&gt;writing clear, actionable emails&lt;/a&gt;? Are your JIRA tickets well written, or just one-liners that would be inscrutable to anyone, including the ticket writer, after a month? How about presentations that your team creates? Are they being thoughtful in adapting their writing style to the medium?&lt;/p&gt;
&lt;p&gt;Writing is a skill and like any other skill, it improves with practice &amp;amp; training. So invest in writing training for your staff! Amazon is known to have a strong writing culture and there&amp;rsquo;s plenty of internal training provided for employees to learn to write well. Do note that writing well is contextual. What&amp;rsquo;s well written prose for a period novel is probably not well written for a business document, and the training needs to reflect that. In my first job at a scientific research institute, I was sent to a training on writing good scientific papers. It was immensely useful and I put that training to good use in the two journal papers that I subsequently wrote during my tenure there.&lt;/p&gt;
&lt;p&gt;Finally, as I mentioned earlier, culture is also what gets rewarded. If you want a good documentation culture, then factor documentation work into performance appraisals. While documentation in general should be considered part of the job (and culture!), there are opportunities to call out &amp;amp; reward specific activities, e.g. when a junior employee writes a new piece of documentation or when a senior employee creates/improves a documentation-related process.&lt;/p&gt;
&lt;h2&gt;Process&lt;/h2&gt;
&lt;p&gt;Which is a good segue into looking at process improvements that you can make to create a good documentation culture. &lt;/p&gt;
&lt;p&gt;The goal of a process is to ensure you get the desired outcomes in a reliable &amp;amp; repeatable manner. In the case of documentation, the outcome to strive for is great &lt;em&gt;Quality&lt;/em&gt; of documentation, i.e., documentation that&amp;rsquo;s &lt;em&gt;Accurate&lt;/em&gt;, &lt;em&gt;Useful&lt;/em&gt; and &lt;em&gt;Complete&lt;/em&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Accuracy&lt;/em&gt; means that your documentation reflects reality. So you want processes that help you catch &amp;amp; fix errors in your documentation.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Useful&lt;/em&gt; documentation helps the reader accomplish their goal. So you want feedback mechanisms that inform you whether readers managed to do that.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Completeness&lt;/em&gt; means that there aren&amp;rsquo;t any gaps in your documentation. Documentation could be useful but not complete (e.g. some features aren&amp;rsquo;t documented) and vice versa (e.g. everything is documented but is incomprehensible to the reader).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The processes will vary depending on the quality bar that you are striving for. Given the limited time that can be spent on documentation, you&amp;rsquo;d want a higher quality bar for customer-visible documentation than for internal documentation.&lt;/p&gt;
&lt;p&gt;With that mental model, here are some ideas for processes that you can implement:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Make it easy to create new documentation by creating templates. More on this in the next section below. &lt;/li&gt;
&lt;li&gt;Make it easy to add &amp;amp; update documentation. E.g., don&amp;rsquo;t prevent other teams from updating your team&amp;rsquo;s docs - make them world-writable. The risk of incorrect changes is small and easily manageable with version control.&lt;/li&gt;
&lt;li&gt;Pull Requests must include doc changes. If you have a PR review checklist, add &amp;ldquo;docs updated&amp;rdquo; to that list as a reminder. Don&amp;rsquo;t tolerate a &amp;ldquo;docs will be updated in the next PR&amp;rdquo; culture.&lt;/li&gt;
&lt;li&gt;Use a written process (e.g. &lt;a href="https://www.allthingsdistributed.com/2006/11/working_backwards.html"&gt;Amazon&amp;rsquo;s PR/FAQ&lt;/a&gt;, &lt;a href="https://www.python.org/dev/peps/pep-0001/"&gt;Python&amp;rsquo;s PEP&lt;/a&gt;, &lt;a href="https://oxide.computer/blog/rfd-1-requests-for-discussion"&gt;Oxide&amp;rsquo;s RFD&lt;/a&gt;) to discuss and agree upon major architecture, process or product changes.&lt;/li&gt;
&lt;li&gt;Add automation to check for broken links within your documentation, especially in customer-facing docs.&lt;/li&gt;
&lt;li&gt;Add analytics and &lt;em&gt;feedback button&lt;/em&gt; mechanisms to your doc pages and look at the metrics to assess quality. &lt;/li&gt;
&lt;li&gt;In your customer support system, ensure there&amp;rsquo;s a &amp;ldquo;documentation related&amp;rdquo; tag that can be applied as the root cause when resolving tickets. Measure this ticket count to check progress on your documentation efforts.&lt;/li&gt;
&lt;li&gt;Improve the previous process by ensuring such a support ticket isn&amp;rsquo;t closed till a related doc fix ticket is created.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Hopefully, these bullets have given you a few jumping-off points to consider when making process changes to improve documentation.&lt;/p&gt;
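&lt;p&gt;The link-checking automation, for instance, can start as a small script in your docs pipeline. Here&amp;rsquo;s a minimal sketch; the &lt;code&gt;docs/&lt;/code&gt; directory, the file layout, and the CI wiring are assumptions for illustration, not a prescription:&lt;/p&gt;

```shell
#!/bin/bash
# Minimal broken-link check sketch for a docs pipeline step.
# Assumptions: HTML docs live under docs/; absolute links use href="...".

# Print the unique absolute URLs referenced by the given HTML files.
extract_links() {
  grep -hoE 'href="https?://[^"]+"' "$@" | sed -E 's/^href="//; s/"$//' | sort -u
}

# Succeed only if the URL answers with an HTTP 2xx status.
check_url() {
  local status
  status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$1")
  [ "$status" -ge 200 ] || return 1
  [ "$status" -lt 300 ] || return 1
}

# Typical usage in a nightly job or PR check:
#   extract_links docs/*.html | while read -r url; do
#     check_url "$url" || echo "BROKEN: $url"
#   done
```

&lt;p&gt;A cronjob or pipeline step running something like this against the customer-facing docs is usually enough to start with.&lt;/p&gt;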
&lt;h2&gt;Product&lt;/h2&gt;
&lt;p&gt;Finally, we look at the product which in this case, is the actual content of your documentation.&lt;/p&gt;
&lt;p&gt;The most important thing you can do here is to figure out how best to organize the content and then create templates around that organizational framework. Your documentation may be Complete, but if it&amp;rsquo;s poorly organized, it&amp;rsquo;ll be perceived as neither Useful nor Complete by readers who&amp;rsquo;ll struggle to find what they need. An organizational structure also significantly reduces friction in creating new documentation, since you aren&amp;rsquo;t repeatedly spending cycles figuring out where each new piece of content should live (cf. &lt;a href="https://en.wikipedia.org/wiki/Law_of_triviality"&gt;bike-shedding&lt;/a&gt;). This applies to external documentation as well as internal docs.&lt;/p&gt;
&lt;p&gt;A good organizational framework is &lt;a href="https://documentation.divio.com/"&gt;The documentation system&lt;/a&gt;, which organizes documentation by purpose:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tutorials - step-by-step docs that aid learning for a newcomer.&lt;/li&gt;
&lt;li&gt;How-to guides - problem oriented docs that help in accomplishing specific tasks.&lt;/li&gt;
&lt;li&gt;Conceptual - docs providing detailed explanations that aid in building a strong understanding of your system.&lt;/li&gt;
&lt;li&gt;Reference - informational docs that provide all the nitty-gritty details.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I often cite the &lt;a href="https://docs.djangoproject.com/en/dev/#how-the-documentation-is-organized"&gt;Django project&amp;rsquo;s docs&lt;/a&gt; as an example of high quality documentation and you can see how it follows the above framework. &lt;/p&gt;
&lt;p&gt;Interestingly, this framework isn&amp;rsquo;t limited to just software or technical products. You can easily imagine how you could apply it to say, a podcasting tool or a stock trading platform.&lt;/p&gt;
&lt;p&gt;For internal documentation, an area which benefits immensely from investment is organizing operational documentation. This covers aspects like where a particular service&amp;rsquo;s source code lives, the requirements queue, design docs, deployment processes, monitoring dashboards, SLAs, alarm configurations, the on-call rotation, etc. When I was at JPMorgan Chase, one of the first problems that the SRE chapter in my team tackled was standardizing service documentation so that on-call engineers could quickly and reliably find what they needed when responding to incidents.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://backstage.io/"&gt;Spotify&amp;rsquo;s Backstage&lt;/a&gt; attempts to standardize all of this information so that it&amp;rsquo;s discoverable at scale. But you don&amp;rsquo;t need to start with such a system; just take the concepts and then create a few template wiki pages that make it easy to instantiate internal docs for every new microservice that your team spins up. &lt;/p&gt;
&lt;p&gt;Architecture documentation is often overlooked in a fast moving environment and struggles to keep up with an evolving system. Here, I recommend the &lt;a href="https://c4model.com/"&gt;C4 Model&lt;/a&gt; as a light-weight system that can help your team maintain high quality architectural documentation. Every four to six months, get your entire team in a (virtual/physical) room for a couple of hours to sketch your system&amp;rsquo;s architecture on (digital/physical) whiteboards or flipcharts using the C4 model. For maximum impact, split your team into two, get each half to document the system independently, and then compare their diagrams to identify gaps. Once done, just upload the diagrams as-is to your internal wiki. This exercise not only keeps your docs up to date but is also a fantastic way to deepen your team&amp;rsquo;s collective understanding of the system, and the output serves as a great onboarding resource for those new to the team.&lt;/p&gt;
&lt;h2&gt;Being intentional&lt;/h2&gt;
&lt;blockquote class="quotable"&gt;&lt;span class="quote-text"&gt;Good intentions never work, you need good mechanisms to make anything happen.&lt;/span&gt; &lt;span 
class="quote-source"&gt; — Jeff Bezos&lt;/span&gt;&lt;/blockquote&gt;

&lt;p&gt;To create a culture of documentation, communicate your intention and back it up with investments in your people and processes. &lt;/p&gt;</content><category term="misc"/><category term="collaboration"/><category term="culture"/><category term="leadership"/></entry><entry><title>Don't ignore Chaos testing!</title><link href="https://antrix.net/posts/2021/chaos/" rel="alternate"/><published>2021-05-23T21:35:00+08:00</published><updated>2021-05-23T21:35:00+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2021-05-23:/posts/2021/chaos/</id><summary type="html">&lt;p&gt;In March this year, I left JPMorgan Chase to join Amazon Web Services. I was at JPMC for close to eight years and my last role there was to lead the product development &amp;amp; delivery of observability services. This included passive observability services for aggregating &amp;amp; alerting over operational metrics, events, logs …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In March this year, I left JPMorgan Chase to join Amazon Web Services. I was at JPMC for close to eight years and my last role there was to lead the product development &amp;amp; delivery of observability services. This included passive observability services for aggregating &amp;amp; alerting over operational metrics, events, logs, and traces; as well as active observability services like synthetic transaction testing &amp;amp; chaos engineering. &lt;/p&gt;
&lt;p&gt;That last bit around chaos engineering was one of the more memorable, forward-looking, leading-edge, &lt;em&gt;I-can&amp;rsquo;t-believe-a-bank-does-this&lt;/em&gt; pieces of work that I&amp;rsquo;m especially proud of incubating &amp;amp; delivering. From that vantage point, I thought I&amp;rsquo;d share a bit of my perspective on this field.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m not going to delve into explaining chaos engineering in this post. The &lt;a href="https://principlesofchaos.org/"&gt;principlesofchaos.org&lt;/a&gt; site does a great job of explaining the fundamentals in 10 minutes and is well worth a read. If you have more time, then consider watching this &lt;a href="https://www.youtube.com/watch?v=8e93cFBpvPQ"&gt;talk on chaos engineering&lt;/a&gt; that a &lt;a href="/posts/2020/chaos-talk/"&gt;colleague and I gave&lt;/a&gt;. It&amp;rsquo;s a great talk, I promise. 😊&lt;/p&gt;
&lt;p&gt;One of the ideas we touch on in that talk is the &lt;a href="https://youtu.be/8e93cFBpvPQ?t=1225"&gt;distinction between chaos testing and chaos experimentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In essence:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Experimentation leads to learning new things about the system while testing validates our existing understanding of the system.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In a complex system, it&amp;rsquo;s very hard, if not impossible, to predict the behaviour of the system based purely on an inside-out understanding of the system&amp;rsquo;s components and internal design&lt;sup id="fnref:1"&gt;&lt;a class="footnote-ref" href="#fn:1"&gt;1&lt;/a&gt;&lt;/sup&gt;. Thus, we run chaos experiments with failure injection to understand how the system behaves under turbulent conditions. The results of these chaos experiments provide us with new knowledge about the system&lt;sup id="fnref:2"&gt;&lt;a class="footnote-ref" href="#fn:2"&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;On the other hand, chaos testing also involves failure injection but with the goal of validating that the currently known properties of the system (related to failure modes) haven&amp;rsquo;t changed. The properties are either known because they were a design goal (e.g. fallback to cache when source is unavailable) or were added to the system in response to a failure mode discovered through chaos experiments (e.g. circuit breakers).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I feel that in the chaos engineering community, chaos testing is sometimes given short shrift.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Chaos testing is an important new tool to add to your arsenal of unit, integration, and performance tests. Chaos tests can be automated and added to your CI/CD pipeline as part of your regression test suite. In an enterprise and/or regulated context, automated chaos tests provide always available evidence of conformance to &lt;a href="https://en.wikipedia.org/wiki/Business_continuity_planning"&gt;BCP&lt;/a&gt; standards.&lt;/p&gt;
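&lt;p&gt;To make that concrete, here&amp;rsquo;s a sketch of what such an automated chaos test could look like using Chaos Toolkit&amp;rsquo;s experiment format. The service URL, the &lt;code&gt;systemctl&lt;/code&gt; target, and the tolerance value are all hypothetical placeholders; the point is the shape: a steady-state hypothesis that must hold before and after the injected failure.&lt;/p&gt;

```shell
#!/bin/bash
# Sketch of an automated chaos test in Chaos Toolkit's experiment format.
# All names, URLs, and commands below are hypothetical placeholders.
experiment='
{
  "title": "Checkout keeps serving when the cache is down",
  "description": "Regression test for a failure mode found in an earlier chaos experiment.",
  "steady-state-hypothesis": {
    "title": "Checkout health endpoint responds",
    "probes": [
      {
        "type": "probe",
        "name": "checkout-health",
        "tolerance": 200,
        "provider": { "type": "http", "url": "https://checkout.internal.example/health" }
      }
    ]
  },
  "method": [
    {
      "type": "action",
      "name": "stop-cache-node",
      "provider": { "type": "process", "path": "systemctl", "arguments": ["stop", "cache"] }
    }
  ]
}
'
# Write the experiment file; a CI stage would then execute: chaos run experiment.json
printf '%s\n' "$experiment" | tee experiment.json
```

&lt;p&gt;Checked into the repo next to the service and run on every release, a test like this turns the knowledge gained from the original experiment into a regression guard.&lt;/p&gt;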
&lt;p&gt;While there&amp;rsquo;s immense value to be unlocked through chaos experimentation, you shouldn&amp;rsquo;t ignore chaos testing! After all, what good is the knowledge accrued from a chaos experiment if the next big feature release or refactoring leaves your system vulnerable to the same failure mode due to a regression?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hence, your chaos engineering adoption journey must include both chaos experimentation and chaos testing.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The challenge adopters face today is that the tooling for automated chaos testing is still in its infancy. While significant progress is being made by the likes of &lt;a href="https://chaostoolkit.org/"&gt;Chaos Toolkit&lt;/a&gt;, &lt;a href="https://verica.io/"&gt;Verica&lt;/a&gt;, &lt;a href="https://www.gremlin.com/"&gt;Gremlin&lt;/a&gt;, &lt;a href="https://aws.amazon.com/fis/"&gt;AWS&lt;/a&gt;, and others, I believe we are still at the beginning of this promising journey&lt;sup id="fnref:3"&gt;&lt;a class="footnote-ref" href="#fn:3"&gt;3&lt;/a&gt;&lt;/sup&gt;. As a point of comparison, consider how pervasive unit testing is today. This wasn&amp;rsquo;t always the case: JUnit was created in 1997 and Kent Beck&amp;rsquo;s XP book came out in 1999, yet it took decades for unit testing to become pervasive, requiring a combination of greater collective experience and better tooling.&lt;/p&gt;
&lt;p&gt;We are in that same primordial state with chaos engineering and I&amp;rsquo;m convinced that we&amp;rsquo;ll see immense progress in the tooling landscape as well as our understanding of the field in the coming decade.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;Systems theory makes this explicit in that the internals of a system are incidental and what matters are the system&amp;rsquo;s inputs &amp;amp; outputs.&amp;#160;&lt;a class="footnote-backref" href="#fnref:1" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:2"&gt;
&lt;p&gt;In fact, Chaos Engineering emerged &lt;em&gt;because&lt;/em&gt; complex systems&amp;rsquo; failure modes couldn&amp;rsquo;t be understood by traditional inside-out methods.&amp;#160;&lt;a class="footnote-backref" href="#fnref:2" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:3"&gt;
&lt;p&gt;At JPMC, my team built our own internal chaos service because none of the tools met all of our needs.&amp;#160;&lt;a class="footnote-backref" href="#fnref:3" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="misc"/><category term="chaos"/><category term="software"/></entry><entry><title>AWS SA - Associate</title><link href="https://antrix.net/posts/2021/aws-saa-c02/" rel="alternate"/><published>2021-05-07T20:25:00+08:00</published><updated>2021-05-07T20:25:00+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2021-05-07:/posts/2021/aws-saa-c02/</id><summary type="html">&lt;p&gt;To help with my new job, I&amp;rsquo;ll be taking a few AWS certifications. First on that list is the &lt;a href="https://aws.amazon.com/certification/certified-solutions-architect-associate/"&gt;AWS Certified Solutions Architect – Associate&lt;/a&gt; certification exam which I passed yesterday.&lt;/p&gt;
&lt;p&gt;As a reminder, this is my personal blog and whatever you read here are my personal views and not …&lt;/p&gt;</summary><content type="html">&lt;p&gt;To help with my new job, I&amp;rsquo;ll be taking a few AWS certifications. First on that list is the &lt;a href="https://aws.amazon.com/certification/certified-solutions-architect-associate/"&gt;AWS Certified Solutions Architect – Associate&lt;/a&gt; certification exam which I passed yesterday.&lt;/p&gt;
&lt;p&gt;As a reminder, this is my personal blog and whatever you read here are my personal views and not those of any past, current or future employer!&lt;/p&gt;
&lt;p&gt;The AWS SA Associate exam tests foundational knowledge of AWS services and the ability to use them to build secure &amp;amp; robust solutions that meet stated requirements. The exam&amp;rsquo;s duration is 130 minutes &amp;amp; it consists of 65 multiple-choice questions with either one or two right answers.&lt;/p&gt;
&lt;h3&gt;Preparation&lt;/h3&gt;
&lt;p&gt;These are the resources that I used to prepare for the exam:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.udemy.com/course/aws-certified-solutions-architect-associate-saa-c02/"&gt;Stephane Maarek&amp;rsquo;s Udemy Course&lt;/a&gt;: This course is great. It covered all the stuff needed for the exam. But this course can be overwhelming if you don&amp;rsquo;t have some prior experience building software systems. If that&amp;rsquo;s the case, you should find another foundational course before coming to this. &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.aws.training/Details/Curriculum?id=20685"&gt;Exam Readiness: AWS SA Certified - Associate&lt;/a&gt;: This free course is a nice short resource to get oriented to the exam. The key value of this course is that it explains clearly what factors to consider when an exam question asks for scalability vs resiliency vs HA vs fault tolerance. &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.whizlabs.com/aws-solutions-architect-associate/practice-tests/"&gt;Whizlabs practice tests&lt;/a&gt;: These were just alright. While the level of the actual exam is probably 20-30% harder than the questions here, the good thing about the test collection is that they cover a wide variety of questions. For each question that I&amp;rsquo;d get wrong in practice, I would read the linked documentation, any related documentation as well as the FAQ pages.&lt;/li&gt;
&lt;li&gt;On the day before the exam, I reviewed all the slides from Stephane&amp;rsquo;s course as a final refresher. This is essential since the exam covers a vast amount of material, and reviewing everything the day before helps considerably with information recall.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;The exam&lt;/h3&gt;
&lt;p&gt;I took the exam at a PSI center instead of at home. The exam interface is quite straightforward and allows you to navigate between questions as well as flag questions for review later. The countdown clock is always visible on the top right and gives a five-minute warning as well. I had sufficient time to complete all the questions and then review 30-odd questions again before running out of time.&lt;/p&gt;
&lt;p&gt;I was informed of the result (pass!) as soon as the exam ended. The screen also informed me that I&amp;rsquo;d receive an email within five business days with the score. However, when I logged on to the AWS training website later that night, the score certificate was already available.&lt;/p&gt;
&lt;h3&gt;One weird trick&lt;/h3&gt;
&lt;p&gt;If you are Singaporean, you can use your &lt;a href="https://www.myskillsfuture.gov.sg/content/portal/en/career-resources/career-resources/education-career-personal-development/SkillsFuture_Credit.html"&gt;SkillsFuture credits&lt;/a&gt; towards Stephane&amp;rsquo;s Udemy courses. I did just that for this course and plan to buy his other courses as well for the next certification exam that I attempt.&lt;/p&gt;
&lt;p&gt;Good luck if you are planning to take the exam as well!&lt;/p&gt;</content><category term="misc"/><category term="aws"/><category term="certification"/></entry><entry><title>Designing an IT organization from scratch</title><link href="https://antrix.net/posts/2021/it-org-design/" rel="alternate"/><published>2021-02-22T17:35:00+08:00</published><updated>2021-02-22T17:35:00+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2021-02-22:/posts/2021/it-org-design/</id><summary type="html">&lt;p&gt;A few months ago, the &lt;a href="https://www.mas.gov.sg/"&gt;Monetary Authority of Singapore&lt;/a&gt; awarded the first set of &lt;a href="https://www.mas.gov.sg/regulation/Banking/digital-bank-licence"&gt;digital banking licenses&lt;/a&gt;. As the &lt;a href="https://www.mas.gov.sg/news/media-releases/2020/mas-announces-successful-applicants-of-licences-to-operate-new-digital-banks-in-singapore"&gt;press release&lt;/a&gt; mentions, there were 14 applicants out of which 4 were granted licenses.  &lt;/p&gt;
&lt;p&gt;As it happens, I had interviewed with one of those applicants. While I did clear the interviews …&lt;/p&gt;</summary><content type="html">&lt;p&gt;A few months ago, the &lt;a href="https://www.mas.gov.sg/"&gt;Monetary Authority of Singapore&lt;/a&gt; awarded the first set of &lt;a href="https://www.mas.gov.sg/regulation/Banking/digital-bank-licence"&gt;digital banking licenses&lt;/a&gt;. As the &lt;a href="https://www.mas.gov.sg/news/media-releases/2020/mas-announces-successful-applicants-of-licences-to-operate-new-digital-banks-in-singapore"&gt;press release&lt;/a&gt; mentions, there were 14 applicants out of which 4 were granted licenses.  &lt;/p&gt;
&lt;p&gt;As it happens, I had interviewed with one of those applicants. While I did clear the interviews, it didn&amp;rsquo;t result in a job offer since they didn&amp;rsquo;t get the license. To be honest, this was a bit surprising since they are a well-known brand and were thought to be a strong contender. Nevertheless, there&amp;rsquo;s not much point in hiring people without a banking license!&lt;/p&gt;
&lt;p&gt;The role that I interviewed for was called the &lt;em&gt;Head of Production Services&lt;/em&gt;, reporting into the CTO:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Responsible for overall technology and operational support (front to back) of the bank’s digital
platforms residing on multi-cloud environment, which includes operating the total cost ownership of
the bank’s technology platforms efficiently, managing the life-cycle of the platforms systems,
complying to regulations, enabling risk &amp;amp; controls, to deliver a high standards bar in users’ customer experience.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In effect, this role would&amp;rsquo;ve been responsible for delivering the entire IT platform capabilities that the bank&amp;rsquo;s business, operations and software engineering teams would rely upon. While a bit out of my wheelhouse, the scope and impact of the role were meaningful and challenging enough for me to go for it.&lt;/p&gt;
&lt;p&gt;During the interview process, I was asked to prepare and present how I would structure this &lt;em&gt;Production Services&lt;/em&gt; organization. This was a very interesting thought exercise that I enjoyed digging into immensely. I drew on my own experience, received insight &amp;amp; feedback from a few trusted friends, and read numerous resources. One of the resources that I found quite helpful was &lt;a href="https://www.goodreads.com/book/show/52908253-truth-from-the-valley"&gt;Truth from the Valley: A Practical Primer on IT Management for the Next Decade&lt;/a&gt;, a book by Mark Settle. I highly recommend it to anyone in an IT leadership role.   &lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s the final deck outlining my design of the IT services organization at a hypothetical &lt;em&gt;Blue Bank&lt;/em&gt;. I hope you find it interesting! &lt;/p&gt;
&lt;iframe src="https://docs.google.com/presentation/d/e/2PACX-1vTK0FwNtcVdyU2p9QS2Vitl9cfW6PFwPnkBkkLUqyz11KVbWzA9qzo0UfeLrG75kHKRdyaZQH2F0gls/embed?start=false&amp;loop=false&amp;delayms=3000" frameborder="0" width="960" height="569" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"&gt;&lt;/iframe&gt;</content><category term="misc"/><category term="leadership"/><category term="management"/><category term="platforms"/></entry><entry><title>GoAccess on Dreamhost</title><link href="https://antrix.net/posts/2021/goaccess/" rel="alternate"/><published>2021-02-09T17:10:00+08:00</published><updated>2021-02-09T17:10:00+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2021-02-09:/posts/2021/goaccess/</id><summary type="html">&lt;p&gt;In the &lt;a href="/posts/2020/reboot-2/"&gt;most recent reboot&lt;/a&gt; of the site, I had ripped out all JavaScript, including Google Analytics. To be honest, it wasn&amp;rsquo;t a big loss since I can&amp;rsquo;t even remember when I had last looked at the Analytics data. &lt;/p&gt;
&lt;p&gt;But it&amp;rsquo;s good to have some understanding of …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In the &lt;a href="/posts/2020/reboot-2/"&gt;most recent reboot&lt;/a&gt; of the site, I had ripped out all JavaScript, including Google Analytics. To be honest, it wasn&amp;rsquo;t a big loss since I can&amp;rsquo;t even remember when I had last looked at the Analytics data. &lt;/p&gt;
&lt;p&gt;But it&amp;rsquo;s good to have some understanding of what&amp;rsquo;s happening, if only to detect and weed out badly behaved crawler bots.&lt;/p&gt;
&lt;p&gt;As I have occasionally mentioned here, this site runs on a shared host provided by Dreamhost. I know that sounds awfully outmoded in this day and age of containerizing everything! But I&amp;rsquo;ve had antrix.net on shared hosting for more than fifteen years now and if it ain&amp;rsquo;t broke &amp;hellip;&lt;/p&gt;
&lt;p&gt;Anyway, Dreamhost &lt;a href="https://help.dreamhost.com/hc/en-us/articles/216509818-Statistics-overview"&gt;provides web site statistics&lt;/a&gt; out of the box that rely on server side access logs to generate reports. While it works, it is based on &lt;a href="https://en.wikipedia.org/wiki/Analog_(program)"&gt;Analog&lt;/a&gt; which, true to its origins, looks like &lt;a href="https://help.dreamhost.com/hc/en-us/articles/216661708-Analog-stats-overview"&gt;something from the 90s&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;Could I replace Analog stats with something a bit more modern? Could I do it while ignoring the irony that I&amp;rsquo;ll be fixing something that isn&amp;rsquo;t broken?&lt;/p&gt;
&lt;p&gt;&lt;a href="https://goaccess.io/"&gt;GoAccess&lt;/a&gt; appears to be the new kid on the block of open source web log analyzers. It seemed simple enough to configure and use that I decided to give it a Go. &lt;/p&gt;
&lt;p&gt;The rest of this post describes how I set it up on Dreamhost. If you want to jump ahead, here&amp;rsquo;s the final result: &lt;a href="https://stats.antrix.net/"&gt;stats.antrix.net&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The first step was to compile and install &lt;code&gt;goaccess&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;$ mkdir ~/goaccess
$ cd ~/goaccess/
$ wget https://tar.goaccess.io/goaccess-1.4.5.tar.gz
$ tar xvzf goaccess-1.4.5.tar.gz
$ cd goaccess-1.4.5
$ ./configure --prefix=$HOME/goaccess/ --enable-utf8
$ make
$ make install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Next, I created the &lt;code&gt;stats.antrix.net&lt;/code&gt; subdomain using Dreamhost&amp;rsquo;s &lt;a href="https://help.dreamhost.com/hc/en-us/articles/215457827-How-do-I-add-a-subdomain-"&gt;domain management&lt;/a&gt; UI, making sure to use &lt;code&gt;$HOME/stats.antrix.net/&lt;/code&gt; as the web server root.&lt;/p&gt;
&lt;p&gt;After that, I created a simple script that calls &lt;code&gt;goaccess&lt;/code&gt; to parse the most recent access logs and generate an HTML report in the web server root directory.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;$ cat $HOME/goaccess/gen-access-report.sh
#!/bin/bash

${HOME}/goaccess/bin/goaccess \
    --db-path ${HOME}/goaccess/data/ --persist --restore \
    --log-format=COMBINED --anonymize-ip --keep-last=90 --real-os \
    ${HOME}/logs/antrix.net/http/access.log.0 \
    -o ${HOME}/stats.antrix.net/index.html \
    1&amp;gt;${HOME}/goaccess/cronjob.log 2&amp;gt;&amp;amp;1

if [[ $? -ne 0 ]]; then
    echo &amp;quot;goaccess execution failed&amp;quot;
    echo &amp;quot;=========================&amp;quot;
    cat ${HOME}/goaccess/cronjob.log
    exit 1
fi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The script does a few things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It instructs &lt;code&gt;goaccess&lt;/code&gt; to persist parsed data to an on-disk database and restore it on the next run, so reports accumulate across days&lt;/li&gt;
&lt;li&gt;The data is retained for 90 days&lt;/li&gt;
&lt;li&gt;It parses &lt;code&gt;access.log.0&lt;/code&gt;, the most recently rotated log file&lt;/li&gt;
&lt;li&gt;The client IPs are anonymized because my stats are now public&lt;/li&gt;
&lt;li&gt;Execution logs are printed out only in case of any failure&lt;/li&gt;
&lt;/ul&gt;
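&lt;p&gt;As an aside, these flags can also live in a GoAccess configuration file, which keeps the script shorter. This is a sketch rather than my actual setup: the option names mirror the long command line flags, but check the sample &lt;code&gt;goaccess.conf&lt;/code&gt; shipped with your version before relying on it. Note that the config file doesn&amp;rsquo;t expand &lt;code&gt;$HOME&lt;/code&gt;, so paths need to be spelled out (&lt;code&gt;myuser&lt;/code&gt; below is a placeholder).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;$ cat ~/goaccess/goaccess.conf
log-format COMBINED
db-path /home/myuser/goaccess/data/
persist true
restore true
anonymize-ip true
keep-last 90
real-os true

$ ~/goaccess/bin/goaccess --config-file=${HOME}/goaccess/goaccess.conf \
    ${HOME}/logs/antrix.net/http/access.log.0 -o ${HOME}/stats.antrix.net/index.html
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
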
&lt;p&gt;Finally, I set up a cronjob to run the script once a day.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;$ crontab -l
MAILTO=&amp;quot;xxxxxxxxxxx@xxxxx.com&amp;quot;
0 7 * * * /bin/bash ${HOME}/goaccess/gen-access-report.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If there&amp;rsquo;s any error, the failure output printed by the script is emailed to the MAILTO address. &lt;/p&gt;
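&lt;p&gt;Before trusting the schedule, it&amp;rsquo;s worth running the script once by hand to confirm the report lands where expected:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;$ bash ${HOME}/goaccess/gen-access-report.sh
$ ls -l ${HOME}/stats.antrix.net/index.html
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
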
&lt;p&gt;That&amp;rsquo;s about it. Relatively painless to set up and it seems to be working well so far: &lt;a href="https://stats.antrix.net/"&gt;stats.antrix.net&lt;/a&gt;&lt;/p&gt;</content><category term="misc"/><category term="linux"/><category term="webdev"/></entry><entry><title>CKAD</title><link href="https://antrix.net/posts/2021/ckad/" rel="alternate"/><published>2021-02-02T20:25:00+08:00</published><updated>2021-02-02T20:25:00+08:00</updated><author><name>Deepak Sarda</name></author><id>tag:antrix.net,2021-02-02:/posts/2021/ckad/</id><summary type="html">&lt;p&gt;I passed the &lt;a href="https://training.linuxfoundation.org/certification/certified-kubernetes-application-developer-ckad/"&gt;Certified Kubernetes Application Developer&lt;/a&gt; exam recently. As is &lt;a href="https://www.google.com/search?q=ckad+tips"&gt;apparently a requirement&lt;/a&gt;, I must now blog about it!&lt;/p&gt;
&lt;p&gt;In case you are not aware, the CKAD is a 2 hour CLI based hands-on examination that tests how well candidates can apply their Kubernetes knowledge to perform numerous tasks …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I passed the &lt;a href="https://training.linuxfoundation.org/certification/certified-kubernetes-application-developer-ckad/"&gt;Certified Kubernetes Application Developer&lt;/a&gt; exam recently. As is &lt;a href="https://www.google.com/search?q=ckad+tips"&gt;apparently a requirement&lt;/a&gt;, I must now blog about it!&lt;/p&gt;
&lt;p&gt;In case you are not aware, the CKAD is a 2-hour, CLI-based hands-on examination that tests how well candidates can apply their Kubernetes knowledge to perform numerous tasks in a real Kubernetes environment. &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;, if you haven&amp;rsquo;t come across it, is a highly popular container orchestration system that hits the sweet spot between being too complex for web applications and too simple for anything else. &lt;/p&gt;
&lt;p&gt;As the search link above indicates, there&amp;rsquo;s no shortage of advice on how to pass the CKAD. I&amp;rsquo;ll refrain from repeating those and just share what I think hasn&amp;rsquo;t been discussed so far. &lt;/p&gt;
&lt;p&gt;At the risk of stating the obvious, preparing for the CKAD is a two-step process:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;learn Kubernetes&lt;/li&gt;
&lt;li&gt;practice for the exam&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Learn Kubernetes&lt;/h3&gt;
&lt;p&gt;I learnt Kubernetes mainly through the material from &lt;a href="https://learnk8s.io/"&gt;learnk8s.io&lt;/a&gt;. My employer had arranged for their in-person training which was delivered by &lt;a href="https://learnk8s.io/about-us"&gt;Daniele Polencic&lt;/a&gt;, whom I had met earlier at Kubecon Shanghai through my friend &lt;a href="https://twitter.com/jamesbuckett"&gt;James Buckett&lt;/a&gt;. Daniele is a great instructor who knows Kubernetes inside out. And their material is top-notch. It&amp;rsquo;s focussed on giving you a strong understanding of the building blocks of Kubernetes instead of just helping you prep for CKAD.&lt;/p&gt;
&lt;p&gt;As just one example, I found that quite a few resources don&amp;rsquo;t explain the relationship between Services and Endpoint objects. This relationship is really important since it&amp;rsquo;ll help you debug and diagnose various Service issues. The learnk8s material explains this in a very cogent way. There are many more such fundamental concepts that are glossed over in CKAD focussed material which are explained well by Daniele and team.&lt;/p&gt;
&lt;h3&gt;Practice for the exam&lt;/h3&gt;
&lt;p&gt;During my CKAD, I received 18 problems that I had to solve within 2 hours. That&amp;rsquo;s not a lot of time if you don&amp;rsquo;t have some hands-on fluency with Kubernetes. The best way to build fluency is practice! &lt;/p&gt;
&lt;p&gt;I practiced with just these two resources: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/dgkanatsios/CKAD-exercises"&gt;dgkanatsios/CKAD-exercises&lt;/a&gt;: This is widely recommended and I agree. I went through these exercises twice.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.katacoda.com/liptanbiswas/courses/ckad-practice-challenges"&gt;CKAD Practice Challenges&lt;/a&gt; by Liptan Biswas: this gives a feel for what the real exam is like. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Note that the actual exam was more difficult than these practice problems but not too far off.&lt;/p&gt;
&lt;p&gt;The other thing I did was to have a set of bookmarks ready to go during the exam. You are allowed to refer to the kubernetes.io documentation, so having bookmarks to the relevant material was a big time-saver. With the amount of yaml boilerplate involved in Kubernetes, you definitely want quick access to yaml templates that you can copy from the website into your exam terminal.&lt;/p&gt;
&lt;h3&gt;One weird trick&lt;/h3&gt;
&lt;p&gt;Speaking of yaml, you must set up your vim editor for yaml editing. At the start of the exam, I created a &lt;code&gt;~/.vimrc&lt;/code&gt; file with the following:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;set ts=2 sts=2 sw=2 et    &amp;quot; 2-space indents; expand tabs to spaces
set list                  &amp;quot; render invisible characters such as tabs
syntax on
filetype plugin indent on
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It is absolutely worth memorizing the above because you don&amp;rsquo;t want to be fighting with yaml editing during the exam. I haven&amp;rsquo;t seen &lt;code&gt;set list&lt;/code&gt; mentioned in CKAD prep resources but I highly recommend it. That instruction makes vim render tab characters as &lt;code&gt;^I&lt;/code&gt; so that they stand out. The last thing you want is an invalid yaml due to a stray invisible tab character that you&amp;rsquo;ll struggle to find and fix! &lt;/p&gt;
&lt;p&gt;The other vim tip worth remembering is to &lt;code&gt;:set paste&lt;/code&gt; before pasting a bunch of yaml from the documentation. This prevents vim from indenting the already indented code that you paste from elsewhere. I realized this during the practice sessions where I&amp;rsquo;d spent way too much time just fixing indentation of copy/pasted samples.&lt;/p&gt;
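&lt;p&gt;The full sequence is worth rehearsing until it&amp;rsquo;s muscle memory:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;:set paste      &amp;quot; vim stops auto-indenting pasted input
(paste the yaml from kubernetes.io here)
:set nopaste    &amp;quot; restore normal auto-indent behaviour
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
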
&lt;p&gt;One more trick worth sharing is to use &lt;code&gt;kubectl replace --force&lt;/code&gt; when you need to delete and recreate a pod or other object. Saved me quite a few minutes!&lt;/p&gt;
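&lt;p&gt;For example (the pod name here is made up): dump the live object to a file, edit it, then delete and recreate it in one shot:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;$ kubectl get pod my-pod -o yaml &amp;gt; my-pod.yaml
$ vim my-pod.yaml
$ kubectl replace --force -f my-pod.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
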
&lt;h3&gt;Closing&lt;/h3&gt;
&lt;p&gt;Due to the format, this exam was more interesting than the usual multiple-choice certification examinations. That, however, does not change my view of certification examinations in general. I think IT certifications are great as a structured approach to learning a new subject. But they are not very useful as signifiers of subject knowledge that an employer can supposedly use in making hiring decisions.&lt;/p&gt;</content><category term="misc"/><category term="kubernetes"/><category term="linux"/><category term="certification"/></entry></feed>