PyTorch Lightning callback for logging activations

How can I create a PyTorch Lightning callback to log the activations of each layer in the Faster R-CNN model with ResNet50-FPN-v2? The existing solution I found breaks the training loop when moving data from CUDA to CPU.

I’m currently working on an object detection project and using the Faster R-CNN model with the ResNet50-FPN-v2 backbone in PyTorch Lightning. During training, I need to monitor and log the activations of each layer in the model for further analysis.
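
For concreteness, by "each layer" I mean the sub-modules reachable by their dotted names, which can be listed with named_modules(); a quick sketch using torchvision's fasterrcnn_resnet50_fpn_v2 (weights=None just skips the download):

import torchvision

# Print the dotted names of all sub-modules; these are the strings
# a layer-targeting forward hook can be registered against.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn_v2(weights=None)
for name, _ in model.named_modules():
    print(name)  # e.g. "backbone.body.layer1", "roi_heads.box_head", ...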

I have searched for a solution or example specifically tailored to the Faster R-CNN model with ResNet50-FPN-v2 in PyTorch Lightning. However, I haven’t been able to find a comprehensive implementation that addresses my needs. Most of the available resources are either general PyTorch implementations or focused on different model architectures.

Could someone provide guidance on how to create a PyTorch Lightning callback that logs the activations of each layer in the Faster R-CNN model with ResNet50-FPN-v2? Ideally the solution would expose the activations as tensors or NumPy arrays for further analysis.

This is what I have so far, but it breaks the training loop when I move data from CUDA to CPU:

import torch
import pytorch_lightning as pl


def save_activations(module, layer_name, output_file):
    activations = []

    def hook(module, input, output):
        # Detach and copy to CPU inside the hook so the autograd graph
        # and device placement of the training loop are left untouched.
        # (Works for layers whose output is a single Tensor.)
        activations.append(output.detach().cpu())

    # Register the hook on the named sub-module, not on the whole model
    layer = dict(module.named_modules())[layer_name]
    handle = layer.register_forward_hook(hook)

    # Dummy forward pass: Faster R-CNN expects a list of 3-channel images,
    # and needs eval mode (train mode would also require targets). This
    # assumes the LightningModule's forward delegates to the detection model.
    device = next(module.parameters()).device
    was_training = module.training
    module.eval()
    with torch.no_grad():
        module([torch.rand(3, 224, 224, device=device)])
    module.train(was_training)

    # Remove hook and persist the captured activations
    handle.remove()
    torch.save(activations, output_file)

    return activations


class ActivationsCallback(pl.Callback):
    def __init__(self, layers):
        super().__init__()
        self.layers = layers
        self.idx = 0
        self.activations = {}

    def on_train_epoch_end(self, trainer, pl_module):
        for layer in self.layers:
            act_vals = save_activations(
                pl_module, layer, "activations_{}.pt".format(layer)
            )
            # Already detached and on the CPU, so safe to keep around
            self.activations[f"{layer}_{self.idx}"] = act_vals[0]
        self.idx += 1
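
An alternative I sketched (untested) registers persistent hooks so the activations come from the real training batches instead of a dummy forward pass; detaching and copying each output to the CPU inside the hook is my attempt to avoid breaking the loop. The layer names passed to the constructor would be the dotted names from named_modules():

import torch
import pytorch_lightning as pl


class PersistentActivationsCallback(pl.Callback):
    """Sketch: capture activations from real training batches via
    forward hooks that detach and move outputs to the CPU."""

    def __init__(self, layers):
        super().__init__()
        self.layers = layers
        self.handles = []
        self.activations = {name: [] for name in layers}

    def on_train_start(self, trainer, pl_module):
        modules = dict(pl_module.named_modules())
        for name in self.layers:
            def hook(module, inputs, output, name=name):
                # Detach + CPU copy inside the hook: no effect on autograd.
                # Only Tensor outputs are kept; FPN-style dict outputs are skipped.
                if isinstance(output, torch.Tensor):
                    self.activations[name].append(output.detach().cpu())
            self.handles.append(modules[name].register_forward_hook(hook))

    def on_train_end(self, trainer, pl_module):
        for handle in self.handles:
            handle.remove()

Since everything stored this way is already detached and on the CPU, each captured tensor can be converted with .numpy() for the further analysis mentioned above (the lists grow every batch, so clearing them per epoch may be needed in practice).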
