An argument against using Jupyter Notebook for Machine Learning

Joel Grus, a researcher at the Allen Institute for Artificial Intelligence, addressed a Jupyter developers’ conference with a presentation titled ‘I don’t like notebooks’. He said, “On one hand, you have this idea that notebooks are great for iterative development, but then you also have this idea that notebooks are actually kind of dangerous unless you run each cell exactly once in order, and otherwise you can’t really rely on what the outputs of the cells are, so there’s this tension there that makes me kind of uncomfortable.”

Computing platforms like Jupyter Notebook have become ubiquitous among machine learning engineers and data scientists. An interactive web-based platform, Jupyter supports multi-language programming, Markdown cells and easy formatting, and allows for more detailed write-ups. The growth of the open-source software industry and the maturation of scientific Python have further increased the use of Jupyter Notebooks. As co-founder Fernando Pérez said, Jupyter’s popularity grew rapidly because of improvements in the web software driving applications like Gmail and Google Docs; it also facilitates access to remote data that would otherwise be impractical to download.

While Jupyter has been a boon for data scientists, there are a few drawbacks that one must be wary of. As outlined by Damien Benveniste, a machine learning tech lead at Meta, in a detailed LinkedIn post, Jupyter fails to provide a ‘robust framework for reproducibility and maintainable codebase for ML systems’. In this article, we will be discussing some of the major roadblocks that may hamper the adoption of Jupyter Notebooks.

Non-linear workflow

Grus said that Jupyter’s ability to run code cells out of order was a source of major frustration among programmers. He added that Jupyter encourages poor coding practices by making it difficult to organise code logically, break it into reusable components and develop tests to verify that the code works properly.

‘Jumping’ around between the cells of a Jupyter Notebook results in unreproducible experiments.
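
The danger is easy to demonstrate outside a notebook. Below is a minimal sketch that replays hypothetical cells as plain Python; the cell numbers, `lr` and `scaled_update` are illustrative names, not from Grus’ talk:

```python
# Hypothetical notebook cells replayed as plain Python, to show how
# out-of-order execution silently changes results.

# Cell 1: define a hyperparameter
lr = 0.5

# Cell 3: run before Cell 2 by mistake — the function reads the
# global `lr`, whatever it happens to hold at call time
def scaled_update(gradient):
    return lr * gradient

first_result = scaled_update(10)    # lr is still 0.5 → 5.0

# Cell 2: the author edits the hyperparameter and re-runs only this cell
lr = 0.25

second_result = scaled_update(10)   # the same call now yields 2.5

print(first_result, second_result)  # 5.0 2.5 — one call site, two answers
```

In a plain script the execution order is fixed by the file, so the discrepancy is visible in review; in a notebook the same drift happens invisibly, recorded only in the cell execution counters.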


Modularity

Jupyter requires data scientists to put most of the code directly into cells in order to use the interactive tools most effectively. However, making code modular requires functions, classes and the like. In general, modular code has nothing but function definitions, class definitions, imports and a few constants touching the left margin – in a Jupyter Notebook, by contrast, nearly every statement sits at the left margin as top-level code.

The most important part of writing a repeatable data science experiment is writing modular, robust code. Jupyter notebooks, however, are not designed to be modularised, which also makes reproducing an experiment in another environment difficult. Even to attempt that, we would first need to recreate the environment in which the code was originally run, and basic notebooks do not offer this provision.
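
As a sketch of what ‘modular’ means here (the `normalize` helper is a hypothetical example, not from the article): logic that a notebook would scatter across cells as top-level statements is instead wrapped in a function with explicit inputs and outputs, so it can live in a `.py` module, be imported into any environment and be tested in isolation.

```python
# Notebook style: top-level statements mutating shared state in place.
#   data = load(...)
#   data = data.dropna()
#   scaled = ...

# Modular style: the same logic as a self-contained function.
def normalize(values):
    """Min-max scale a list of numbers into the range [0, 1]."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1            # avoid division by zero on constant input
    return [(v - lo) / span for v in values]

print(normalize([2, 4, 6]))        # [0.0, 0.5, 1.0]
```

Because `normalize` takes its input as an argument and returns a new list instead of reading and mutating notebook globals, importing it from a module gives the same answer in any environment.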

Difficult to test

Testing helps verify that a program meets its requirements and keeps working as changes and iterations are applied. Since notebooks are not modules, testing them is challenging: they mix test code with the notebook’s narrative code. Jupyter is mainly used for scientific software and data analytics, and testing this kind of software is hard, chiefly because of the lack of test oracles and the difficulty of predicting how many tests are required to guarantee correctness. Further, programmers deploying code that will be run many times may prefer to do it in Python scripts.
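
Once logic is extracted out of cells into plain functions, standard test tooling applies. A minimal sketch using the standard-library `unittest`; the `clip` helper is a hypothetical example, not from the article:

```python
import unittest

def clip(value, low, high):
    """Bound `value` to the closed interval [low, high]."""
    return max(low, min(value, high))

class ClipTests(unittest.TestCase):
    def test_inside_range(self):
        self.assertEqual(clip(5, 0, 10), 5)

    def test_below_range(self):
        self.assertEqual(clip(-3, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clip(42, 0, 10), 10)

if __name__ == "__main__":
    unittest.main(exit=False)   # run the tests without killing the interpreter
```

Saved as a module, this runs under `python -m unittest` or any CI pipeline – something a function defined inside a notebook cell, entangled with the narrative around it, cannot do without first being extracted.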
