The Decision Boundary

I wasn’t really sure what to name this page. It’s a collection of posts I’ve written covering why I pursue challenges and adventures, why I sometimes don’t, and how I decide which goals and adventures to take on. I could have named the page My Motivation, Exploring My Limits, or simply Goals.

In the end I decided to stick with a nerdy play on words, similar to the Random Forest Runner moniker itself. The decision boundary is the border between the different outputs of a machine learning model: the picture is a cat vs. the picture is a dog, select ad 345 vs. ad 219, the outcome will be a success vs. the outcome will be a failure. The region around this border is where the model typically struggles most with accuracy, and gathering new data in this region can be particularly valuable for improving performance.
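As a concrete (if toy) illustration, here is what a decision boundary looks like for a simple one-dimensional logistic model. The weights are made up for the example, not fitted to any real data:

```python
import math

# Toy 1-D logistic model: p(success) = sigmoid(w*x + b).
# w and b are illustrative values chosen for this sketch.
w, b = 2.0, -6.0

def p_success(x):
    """Predicted probability of the 'success' outcome at input x."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# The decision boundary is where the model is perfectly uncertain:
# p = 0.5, which happens exactly where w*x + b = 0.
boundary = -b / w  # here, x = 3.0

# Far from the boundary the model is confident; near it, p hovers
# around 0.5, so new data gathered there is the most informative.
for x in (1.0, boundary, 5.0):
    print(f"x = {x}: p(success) = {p_success(x):.3f}")
```

Running this prints probabilities near 0 and 1 away from the boundary, and exactly 0.5 at it; the interesting region, for a model or for a runner, is the narrow band in between.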

This idea extends well beyond machine learning; many of us face a tangible decision boundary of our own.

How close can we venture to this border while being confident in our outcome? How can we more sharply define the boundary between outcomes and improve performance near the limit of capabilities? Why are there weaknesses in these areas, and why are there strengths in others? Is the risk of failure for a particular situation worth it?

Instead of a machine learning model, the paragraph above could just as easily be referring to any number of other things. The posts below, written over the span of a few years, cover many of my thoughts on those questions as they relate to me personally.