Some thoughts on papers, coding, tech + society, and other random topics. Mostly just recording thoughts for future me. All opinions are my own.
If you have any suggestions, questions or comments feel free to send me a note!
Imagine you have some sensory data of a robotic arm moving around. With a little prior knowledge of this mechanism, we can reasonably assume that the arm moves at several joints. How can we figure out what kinds of motions each of these joints exhibits? In our recent AISTATS paper, Product Manifold Learning, we tackle this question and others like it.
Recently, I wrapped up some work on the 2020 Machine Learning Reproducibility Challenge with the Princeton Visual AI Lab team. Over the span of four months, we worked on reproducing all the experiments and proposed methods in a paper from CVPR 2020. I learned so much from this experience, and would highly recommend participating in the challenge to any undergrad interested in getting into a new area of machine learning.
A few weeks ago, I decided to get a library card. This isn’t my first library card (that would be quite sad given how close I’ve lived to one), and it wasn’t because of a new year’s resolution to read more in 2021 either. I just suddenly felt compelled to get back into books, without any goals, terms or obligations. I was also suffering from a little more eyestrain than I usually do from all the screen time in 2020, so I figured that this might help. I ended up reading more than I expected in the last month.
I came across a neat little problem the other day that proposes a variation on the classic tic-tac-toe game, which the post names “tic-toe-tac”. In this version, your goal is to prevent anyone from winning, i.e. force a draw. I tried playing against myself a few times and had no luck forcing a draw, and I began to wonder whether it was possible at all. After a bit of thinking, here’s what I found.
In a previous post I shared some thoughts on network dissection, focusing primarily on the first two papers I had seen propose this neuron-based framework of interpretability. A more recent paper from NeurIPS 2020 takes a different, slightly more quantitative approach to this idea. The result is fascinating and also touches on some important use cases.