Higher Level Activity Recognition

There are numerous activity-recognition neural networks that can identify the activities performed in a video. However, little work addresses sequences of activities. How can we build a composite classifier: one that takes in classified activities and learns to classify schedules from them? The Higher Level Activity Recognition project seeks to answer this question. By taking a state-of-the-art classification system, feeding its output into a higher-level classifier, and including relevant metadata, we hope to show that schedules can be recognized from classifications of basic activities.
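As a rough sketch of the composite idea, suppose a lower-level network has already emitted a sequence of activity labels; a higher-level classifier then maps that sequence to a schedule. The prototype schedules, activity names, and overlap scoring below are all illustrative assumptions, not the project's actual model:

```python
from collections import Counter

# Hypothetical schedule prototypes: each maps a schedule label to the
# activities that typically compose it. Names are illustrative only.
SCHEDULE_PROTOTYPES = {
    "morning_routine": ["wake_up", "brush_teeth", "eat_breakfast", "commute"],
    "workout": ["stretch", "run", "lift_weights", "stretch"],
}

def classify_schedule(activity_sequence):
    """Assign a schedule label to a sequence of recognized activities
    by counting overlap with each prototype's bag of activities."""
    observed = Counter(activity_sequence)
    def overlap(label):
        proto = Counter(SCHEDULE_PROTOTYPES[label])
        return sum((observed & proto).values())  # multiset intersection size
    return max(SCHEDULE_PROTOTYPES, key=overlap)

print(classify_schedule(["wake_up", "eat_breakfast", "commute"]))  # morning_routine
```

A learned model would replace the fixed prototypes, but the interface is the point: the higher-level classifier consumes labels, not pixels.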

Dynamic Deep Learning

Modern deep learning is handicapped by the immense amount of time needed to find the optimal network structure to solve a given problem. This project, featured at CVPR 2018, not only automatically selects the optimal architecture, but can also be used to shrink the network to a fraction of its original size with little loss in performance. This work has also yielded a patent for the network optimization process (currently pending).
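One common way to shrink a trained network is magnitude pruning: zero out the smallest-magnitude weights and keep only the largest fraction. The sketch below illustrates that generic technique on a flat weight list; it is an assumption for illustration, not the patented optimization process described above:

```python
def prune_weights(weights, keep_fraction):
    """Keep only the largest-magnitude fraction of weights; zero the rest."""
    n_keep = max(1, int(len(weights) * keep_fraction))
    # Magnitude of the n_keep-th largest weight becomes the cutoff.
    threshold = sorted((abs(w) for w in weights), reverse=True)[n_keep - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

pruned = prune_weights([0.9, -0.05, 0.4, 0.01, -0.7], keep_fraction=0.6)
# The two smallest-magnitude weights (-0.05 and 0.01) are zeroed.
```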

Deep Learning for Black Box Detection

Sonar imagery uses sonic reverberations to detect objects, which makes it well suited to low-visibility undersea environments such as those surrounding the black boxes of crashed airplanes. This project, presented at the Acoustical Society of America 2017 meeting, demonstrated the use of deep learning for this task. The work also yielded a patent (pending) and won MIT Lincoln Laboratory's Tech Office Challenge 2016.

Canned Mentorship

The subject of my Master's thesis, Canned Mentorship combines human and AI agents to create how-to manuals using crowdsourcing and machine-learning techniques, a previously unsolved problem in the field. By leveraging advances in human computation hierarchies, Canned Mentorship produced a low-complexity solution backed by a machine-learning supervisor.

Imagistic Modeling System

Recent psychological evidence suggests that humans run mental simulations of 3D objects as part of the reasoning process. Can we create a program that does the same? The IMS project seeks to answer that question. In cooperation with Eric Bigelow, Alex Wilson, and Dr. Lenhart Schubert, we have developed a 3D reasoning specialist for EPILOG that does just this. The project is written in Python and uses Blender for 3D modeling. Work on the project has been accepted as both a journal article and an oral presentation at Biologically Inspired Cognitive Architectures (BICA) 2014.
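To give a toy flavor of the kind of 3D spatial query such a reasoning specialist might answer, the sketch below tests whether one axis-aligned box rests on another. The box representation and the relation name are illustrative assumptions, not the project's actual EPILOG interface or its Blender-backed simulation:

```python
def on_top_of(a, b, tol=1e-6):
    """True if axis-aligned box a rests on box b.
    Boxes are ((xmin, ymin, zmin), (xmax, ymax, zmax)) with z pointing up."""
    (ax0, ay0, az0), (ax1, ay1, _) = a
    (bx0, by0, _), (bx1, by1, bz1) = b
    touching = abs(az0 - bz1) < tol      # a's bottom meets b's top
    overlap_x = ax0 < bx1 and bx0 < ax1  # footprints overlap in x...
    overlap_y = ay0 < by1 and by0 < ay1  # ...and in y
    return touching and overlap_x and overlap_y

book = ((0, 0, 1), (1, 1, 1.2))
table = ((-1, -1, 0), (2, 2, 1))
print(on_top_of(book, table))  # True
```

The real system answers such queries by building and inspecting a 3D scene rather than comparing coordinates directly, but the input/output contract — objects in, spatial judgment out — is the same.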