CyberEd Essentials

Reversing Large Deep Learning Models


Yashodhan Vivek Mandke of MIT World Peace University explains how reversing deep learning models exposes architecture, tensors and weights that can be exploited or defended at the mathematical core.

Deep learning systems increasingly underpin critical applications, yet their internal mathematical structures often remain insufficiently protected. Reverse engineering modern neural networks exposes architectural flows, tensor properties and hyperparameters that directly influence model behavior and resilience. By examining convolutional and transformer-based architectures, including image classifiers and large language models, this session explores how model formats enable extraction of weights, biases and sparsity patterns. These elements reveal how learning dynamics, stability and inference behavior can be manipulated when integrity controls are weak. Understanding tokenizer reconstruction further extends this risk to language models, where input representation directly affects semantic output.
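A serialized checkpoint alone is often enough to enumerate a model's internals. As a minimal sketch of the kind of extraction described above, the following assumes a plain PyTorch state_dict saved at a hypothetical path "model.pt" and lists each parameter's name, shape and zero-weight fraction:

```python
# Minimal sketch: enumerating tensors from a serialized checkpoint.
# Assumes a plain PyTorch state_dict at "model.pt" (hypothetical path).
import torch

state_dict = torch.load("model.pt", map_location="cpu")

for name, tensor in state_dict.items():
    # Parameter names alone leak architecture: layer types, depth, widths.
    sparsity = (tensor == 0).float().mean().item()
    print(f"{name}: shape={tuple(tensor.shape)}, "
          f"dtype={tensor.dtype}, zeros={sparsity:.1%}")
```

A key name like "encoder.layers.0.self_attn.q_proj.weight" (illustrative, not from the lesson) already discloses depth and attention structure before a single weight value is read.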

This video lesson, taught by Yashodhan Vivek Mandke, research scholar at MIT World Peace University, will cover:

  • Architectural reconstruction across convolutional and transformer-based models;
  • Tensor formats and how interoperability enables model extraction (see the sketch after this list);
  • Weight matrices, sparsity and their role in adversarial manipulation.
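
To make the interoperability point in the second item concrete: an interchange format such as ONNX stores the compute graph and the trained weights in one self-describing file, so anyone holding the file can walk the architecture node by node. A minimal sketch, assuming a model exported to a hypothetical "model.onnx":

```python
# Minimal sketch: reading architecture and weights from an ONNX file.
# "model.onnx" is a hypothetical path, not a file from the lesson.
import onnx

model = onnx.load("model.onnx")

# Graph nodes reveal the layer sequence (Conv, MatMul, Softmax, ...).
for node in model.graph.node:
    print(node.op_type, list(node.input), "->", list(node.output))

# Initializers hold the trained weights and biases as raw tensors.
for init in model.graph.initializer:
    print(init.name, list(init.dims))
```

The same self-description that makes such formats portable across frameworks is what makes weight and bias extraction straightforward once a file leaks.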
 

 

Here is the course outline:

Reversing Large Deep Learning Models: Architecture, Weights and Tensor-Level Exposure

Completion

The following certificates are awarded when the course is completed:

CPE Credit Certificate
