Mission Overview
Interpreting Convolutional Neural Networks
Machine learning techniques can be powerful tools for categorizing data and answering data analysis questions. However, they often involve a great deal of hidden computation that is not immediately meaningful, and the black-box nature of these intermediate processes, especially in layered neural networks, can make model behavior difficult to interpret. The goal of this notebook is to familiarize you with some of the techniques used to make sense of machine learning models, and of convolutional neural networks (CNNs) in particular, which can be especially hard to interpret because of their multi-layered structure and convolutional layers. In this notebook, we will examine two methods of visualizing CNN results (backpropagation and Grad-CAM) and a third method for testing model architecture.
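To give a flavor of the backpropagation approach before opening the notebook, here is a minimal sketch of a gradient-based saliency map: backpropagating the top class score to the input pixels and reading off the gradient magnitudes. The ResNet-18 model and random input tensor below are stand-ins for illustration only, not the DeepMerge model or data used in the notebook.

```python
import torch
import torchvision.models as models

# Stand-in CNN (any differentiable image classifier would do)
model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

# Placeholder input; requires_grad=True so gradients flow to the pixels
image = torch.rand(1, 3, 224, 224, requires_grad=True)

scores = model(image)
top_score = scores[0, scores.argmax()]
top_score.backward()  # gradient of the top class score w.r.t. the input

# Saliency: maximum absolute gradient across color channels per pixel.
# Bright pixels are those the predicted class is most sensitive to.
saliency = image.grad.abs().max(dim=1)[0].squeeze()  # shape (224, 224)
```

Grad-CAM follows a similar idea but weights the feature maps of a late convolutional layer by their averaged gradients, producing a coarser, class-specific heatmap; the notebook walks through both methods in detail.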
Data: The DeepMerge HLSP
Notebook: Interpreting Convolutional Neural Networks
Released: 2024-06-05
Updated: 2024-06-05
Tags: deep learning, 2d data, interpretation, convolutional neural networks