Thoughts on data science, statistics and machine learning.

Rabies, Laziness & Privilege

It was the night of the recent 5-state assembly elections. One of my company’s clients is a major news channel, and I was at the studio late into the night, until the Election Commission cancelled the press conference at which it was supposed to announce the final vote counts in Madhya Pradesh. A colleague offered to drop me home, and I got off at the gate of my colony, not wanting to subject them to navigating the labyrinth that is any gated colony in South Delhi.

Read more...

Weighted Loss Functions for Instance Segmentation

This post is a follow-up to my talk, Practical Image Classification & Object Detection, at PyData Delhi 2018. You can watch the talk here and see the slides here. I spoke at length about the different kinds of problems in computer vision and how they are interpreted in deep learning architectures. I spent a fair bit of time on instance and semantic segmentation (for an introduction to these problems, watch Justin Johnson’s lecture from the Stanford CS231n course here).
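For a flavour of what weighting a segmentation loss can mean, here is a minimal sketch of a per-pixel weighted cross-entropy in PyTorch, in the spirit of the border-weight maps popularised by the U-Net paper. This is an illustration only, not the loss derived in the post; the function name, shapes, and toy weight map are my assumptions.

```python
# Minimal sketch: per-pixel weighted cross-entropy for segmentation.
# Not the post's loss; names, shapes, and the toy weight map are assumptions.
import torch
import torch.nn.functional as F

def weighted_pixel_ce(logits, targets, weights):
    """logits: (N, C, H, W) raw class scores, targets: (N, H, W) class
    indices, weights: (N, H, W) per-pixel weights (e.g. larger near
    instance borders, where touching objects are hardest to separate)."""
    # reduction="none" keeps one loss value per pixel so it can be reweighted
    per_pixel = F.cross_entropy(logits, targets, reduction="none")  # (N, H, W)
    return (weights * per_pixel).sum() / weights.sum()

# Toy usage: upweight a horizontal band, pretending it marks instance borders.
logits = torch.randn(2, 3, 64, 64)
targets = torch.randint(0, 3, (2, 64, 64))
weights = torch.ones(2, 64, 64)
weights[:, 30:34, :] = 5.0
loss = weighted_pixel_ce(logits, targets, weights)
```

Normalising by weights.sum() rather than the pixel count keeps the loss scale comparable as the weight map changes, which makes hyperparameters easier to carry between weighting schemes.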

Read more...

Playing With the KonMari Method

I heard about the bestseller The Life-Changing Magic of Tidying Up at a SciPy talk about decluttering your data science project. The speakers admitted they hadn’t read it; they were simply pointing out that tidying up your space and tidying up your software project are similar exercises. I’ve been married and living with my wife for about a year now. Since we moved into “our own home” last year, we have both undergone major role reversals when it comes to tidying up.

Read more...

On the Linearity of Bayesian Classifiers

In his book Neural Networks: A Comprehensive Foundation, Simon Haykin devotes an entire section (3.10) to how closely perceptrons and Bayesian classifiers are related when operating in a Gaussian environment. It is not until the end of the section, however, that Haykin mentions that the relation is limited to linearity. What is interesting is that a perceptron can produce the same classification “model” as a Bayesian classifier, provided that the underlying data is drawn from a Gaussian distribution.
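To make the linearity concrete, here is the standard two-class derivation under the equal-covariance Gaussian assumption, which is the setting Haykin analyses; the notation is mine, not the book’s.

```latex
% Two classes with x | C_i ~ N(mu_i, Sigma), i = 1, 2, sharing one covariance.
% The Bayes classifier thresholds the log-likelihood ratio:
\log \frac{p(x \mid C_1)}{p(x \mid C_2)}
  = (\mu_1 - \mu_2)^{\top} \Sigma^{-1} x
    - \tfrac{1}{2}\left(\mu_1^{\top}\Sigma^{-1}\mu_1
                        - \mu_2^{\top}\Sigma^{-1}\mu_2\right)
  = w^{\top} x + b,
\qquad w = \Sigma^{-1}(\mu_1 - \mu_2).
```

The quadratic terms in x cancel because both classes share Σ, so the Bayes decision boundary is a hyperplane, exactly the functional form a perceptron computes; with unequal class covariances the boundary becomes quadratic and the correspondence breaks down.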

Read more...

Page 1 of 3