in machine learning are the same.
Coding, waiting for results, interpreting them, going back to coding. Plus the occasional presentation of one's progress. But things mostly being the same doesn't mean there is nothing to learn. Quite the contrary! Two to three years ago, I started a daily habit of writing down lessons learned from my ML work. Looking back through some of this month's lessons, I found three practical ones that stand out:
- Keep logging simple
- Use an experimental notebook
- Keep overnight runs in mind
Keep logging simple
For years, I used Weights & Biases (W&B)* as my go-to experiment logger. In fact, I was once in the top 5% of all active users. The stats in the figure below tell me that, at the time, I had trained close to 25,000 models, used a cumulative 5,000 hours of compute, and run more than 500 hyperparameter searches. I used it for papers, for large projects like weather prediction with big datasets, and for tracking countless small-scale experiments.

And W&B really is a great tool: if you want beautiful dashboards and are collaborating** with a team, W&B shines. Until recently, while reconstructing data from trained neural networks, I ran several hyperparameter sweeps, and W&B's visualization capabilities were invaluable. I could immediately compare reconstructions across runs.
But I realized that for most of my research projects, W&B was overkill. I rarely revisited individual runs, and once a project was done, the logs just sat there; I never touched them again. So when I later refactored the data reconstruction project mentioned above, I explicitly removed the W&B integration. Not because anything was wrong with it, but because it wasn't necessary.
Now, my setup is much simpler. I just log selected metrics to CSV and text files, writing directly to disk. For hyperparameter searches, I rely on Optuna. Not even the distributed version with a central server: just local Optuna, saving study states to a pickle file. If something crashes, I reload and continue. Pragmatic and sufficient (for my use cases).
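As a minimal sketch of what "just write metrics to disk" can look like (the file name, metric names, and helper are placeholders, not code from the project), appending one CSV row per logging step needs nothing beyond the standard library:

```python
import csv
from pathlib import Path

def log_metrics(path, step, metrics):
    """Append one row of metrics to a CSV file, writing the header on first use."""
    path = Path(path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["step", *metrics])
        if is_new:
            writer.writeheader()
        writer.writerow({"step": step, **metrics})

# Usage: one call per logging step inside the training loop.
log_metrics("run.csv", 0, {"loss": 0.93, "val_acc": 0.41})
log_metrics("run.csv", 1, {"loss": 0.71, "val_acc": 0.55})
```

Because the file is flushed and closed after every call, a crashed run loses at most the current row, and the CSV can be opened in pandas or a spreadsheet at any time.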
The key insight is this: logging is not the work. It's a support system. Spending 99% of your time deciding what to log (gradients? weights? distributions? and at which frequency?) can easily distract you from the actual research. For me, simple, local logging covers all needs, with minimal setup effort.
Keep experimental lab notebooks
In December 1939, William Shockley wrote down an idea in his lab notebook: replace vacuum tubes with semiconductors. Roughly 20 years later, Shockley and two colleagues at Bell Labs were awarded the Nobel Prize for the invention of the modern transistor.
While most of us aren't writing Nobel-worthy entries into our notebooks, we can still learn from the principle. Granted, in machine learning our laboratories don't have the chemicals or test tubes we all envision when we think of a laboratory. Instead, our labs are typically our computers; the same device I use to write these lines has trained countless models over the years. And these labs are inherently portable, especially when we develop remotely on high-performance compute clusters. Even better, thanks to highly skilled administrative staff, these clusters run 24/7, so there is always time to run an experiment!
But the question is: which experiment? Here, a former colleague introduced me to the idea of maintaining a lab notebook, and lately I've returned to it in the simplest form possible. Before starting long-running experiments, I write down:
what I'm testing, and why I'm testing it.
Then, when I come back later, usually the next morning, I can immediately see which results are ready and what I had hoped to learn. It's simple, but it changes the workflow. Instead of just "rerun until it works," these dedicated experiments become part of a documented feedback loop. Failures are easier to interpret. Successes are easier to replicate.
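The notebook can be as plain as a text file that grows by one timestamped entry per launched run. A small sketch of that habit in code (the file name, helper, and entry texts are illustrative assumptions, not part of the original workflow):

```python
from datetime import datetime
from pathlib import Path

def note_experiment(notebook, what, why):
    """Append a timestamped what/why entry before launching a long run."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    entry = f"[{stamp}]\nWhat: {what}\nWhy:  {why}\n\n"
    with Path(notebook).open("a") as f:
        f.write(entry)

# Called once, right before kicking off an overnight job.
note_experiment(
    "lab_notebook.txt",
    what="rerun the sweep with the patched data loader",
    why="check whether the fix changes reconstruction quality",
)
```

The next morning, scanning the last few entries of the file answers both questions at once: which runs finished, and what each of them was supposed to teach you.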
Run experiments overnight
That's a small but painful lesson that I (re-)learned this month.
On a Friday evening, I discovered a bug that could affect my experiment results. I patched it and reran the experiments to validate. By Saturday morning, the runs had finished, but when I inspected the results, I realized I had forgotten to include a key ablation. Which meant … another full day of waiting.
In ML, overnight time is precious. For us programmers, it's rest. For our experiments, it's work. If we don't have an experiment running while we sleep, we are effectively wasting free compute cycles.
That doesn't mean you should run experiments just for the sake of it. But whenever there is a meaningful one to launch, the evening is the right time to start it. Clusters are often under-utilized, resources become available more quickly, and, most importantly, you'll have results to analyze the next morning.
A simple trick is to plan this deliberately. As Cal Newport mentions in his book "Deep Work", good workdays start the evening before. If you know tomorrow's tasks today, you can set up the right experiments in time.
* This isn't meant to bash W&B (it would have been the same with, e.g., MLflow); rather, it's a plea for users to evaluate what their project goals are, and then spend the majority of their time pursuing those goals with utmost focus.
** Footnote: mere collaboration is, in my eyes, not enough to warrant such shared dashboards. You need to gain more insight from these shared tools than the time you spend setting them up.