Overview
Overview
This research project was done for a graduate course I took on Transfer Learning. While transfer learning is commonly framed as applying models across domains, it can equivalently be viewed as studying how models handle shifts in the distribution of their input data. In our project we seek to quantify how image data augmentations affect that distribution, and consequently how the resulting distribution shift affects model performance.
While it is known that dataset augmentation benefits model performance, we are interested in why that is the case. Intuitively it should help the model generalize, but we seek a more concrete explanation by analyzing the problem through the lens of distribution shift.
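To make the idea concrete, here is a minimal sketch (not the project's actual method) of how one might quantify the distribution shift an augmentation induces: apply a simple augmentation (random horizontal flip plus brightness jitter, chosen here purely for illustration) to a synthetic batch of images, then compare the pixel-intensity distributions before and after with a total variation distance over histograms. All names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an image dataset: 100 grayscale 8x8 "images".
images = rng.uniform(0.0, 1.0, size=(100, 8, 8))

def augment(batch, rng):
    """Illustrative augmentation: random horizontal flip + brightness jitter."""
    flip_mask = rng.random(len(batch))[:, None, None] < 0.5
    flipped = np.where(flip_mask, batch[:, :, ::-1], batch)
    jitter = rng.uniform(-0.1, 0.1, size=(len(batch), 1, 1))
    return np.clip(flipped + jitter, 0.0, 1.0)

def total_variation_distance(a, b, bins=32):
    """Crude shift measure: TV distance between pixel-intensity histograms."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 1), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0, 1), density=True)
    # density=True normalizes so that sum(h) * bin_width == 1.
    return 0.5 * np.abs(ha - hb).sum() / bins

augmented = augment(images, rng)
print(f"TV distance, original vs. augmented: "
      f"{total_variation_distance(images, augmented):.4f}")
```

A geometric flip alone leaves the marginal pixel distribution unchanged, so in this toy example the measured shift comes almost entirely from the brightness jitter; richer measures (e.g., distances in a learned feature space) would capture shifts that pixel histograms miss.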
The poster and final paper are both included below.