The video recording is available on YouTube.
About the book
I couldn't resist sharing some thoughts about the book first. (If you plan to buy the book, check the W&B forums linked above for Sanyam's discount code, good for the PDF and, if you live where they ship at reasonable prices, the print edition, and I think any other book from them.) Needless to say, these are my own opinions, not necessarily shared by my coauthors or the publisher.
One of the comments I have been seeing a lot is that the book is light on formulas, and it is! As a mathematician with a Ph.D. in pen-and-paper analysis, I tend to be quick to get out the equations when I feel the need (avid readers of this blog will have noticed). My take on the absence of formulas in the book is that it reflects less an absence of mathematics than the fact that we didn't need formulas to make things precise: we could use code to do that.
Having named my company MathInf for Mathematics and Inference, I do have the firm belief that better models require more (and more creative) maths than we currently have.
The other bit about the book is that we do present information (as in facts) as well as give style advice and have an opinionated take on things. In our writing we try to be clear which is which, but at any rate, there will be things to disagree with. Fun fact: We got a complaint that we were undermining batch norm. This was even before NFNets were a thing...
The other part of the book I wanted to comment on is the structure (which is a large part of the reason why we stick to convolutional networks and don't cover e.g. NLP).
Part 1 takes you from images to a classifying convolutional network, but it also is an example of how to approach new tasks in a structured way:
- Get a good grip on the nature of the data and data representation (also do a train/validation/test split),
- define objective(s) (metrics for success, loss functions as proxy),
- and only then the models and architectures.
People tend to focus on models and architectures, but to my mind, it is good to put data and objectives first.
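The first of those steps can be sketched with nothing but the standard library; the split fractions and the seed below are arbitrary placeholders for illustration, not a recommendation from the book:

```python
import random

def split_indices(n, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle indices 0..n-1 and split them into train/val/test parts."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed keeps the split reproducible
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    # test and val come off the front of the shuffled list, train is the rest
    return idx[n_test + n_val:], idx[n_test:n_test + n_val], idx[:n_test]

train, val, test = split_indices(1000)
```

Doing the split up front, before any modeling, is the point: it keeps the test set untouched while you iterate on objectives and architectures.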
In part 2 you will find that we do hit bumps on the road to finding and classifying nodules and only then show the remedy. With this, we hope to give you some ideas for when you hit obstacles on your own journey.
As someone with an affection for free software (I'm a retired Debian Developer and try to put in an occasional patch to LibreOffice) and from observing and participating in the PyTorch community for more than four years, I also offered some reflections on community.
Community is about exchange
There is not always a clear distinction between Developer Relations work (which can be much more one-way) and more organic notions of community. It is not quite as dire as the "if you are not the customer, you are the product" of other parts of the internet, but one has to be mindful of the fact that many of the free resources are there for a reason. (Hey, I'm hoping my blog will benefit my business, too, did you notice?)
Even if it is about exchange, though, being in a community is not an accounting exercise, and we all start by needing more input than we contribute, so we don't need to be too shy, either.
Community is not a service
On one hand, this means being aware that the "other side" of the community are people, too, each with their own situation. Some people help out as part of their job, others may want to give back for help they received, etc. Being kind almost always helps!
The flip side comes up very soon, too: Know why and how much you want to contribute (i.e. have a limit) and don't burden yourself with "obligation" or people demanding things. This is particularly dear to me, because I think forgetting to take care of yourself in these situations can wear down anyone with the best intentions and the firmest commitment. Communities often have gamification features (likes, stars, whatnot) to lure you into doing more, but that is definitely a double-edged sword.
Community resources are best used wisely
This is where things get practical. A well-prepared question (thought-out, with some context, maybe a code snippet) makes good use of people's time and expertise and is much more likely to get good answers. At the same time, don't be afraid to ask: There are no stupid questions, and we have all been in situations where we stared at a problem for hours or days and someone else found the problem within minutes.
Moderation is another aspect of using resources wisely: If your questions dominate the forum, something is not going well.
Community is about people
Given the above, this is not a surprise. So it is good to keep it fun for you… and for others.
One special aspect here: Participating in a larger community is much nicer if you also find people that you like to "hang out" with, if only online, or to reflect with on things going on in the community. I have probably interacted with thousands of people, but I also regularly catch up with two dozen, have close contact with half a dozen, and have found some very good friends in the community.
Enough theory! Let’s see some community resources. Of course, this will not be a complete list, just my personal highlights.
Given the occasion, the reading group goes first. To my mind, having a group (or even just one or two other people) is a real help for any larger endeavour.
It provides pace and so makes it easier to keep going. It is also a good opportunity to discuss things that are not explained well, that you disagree with, or that might be errors in the book.
Finally, technical books - and our book in particular - always have limitations in their scope, so it is neat to share ideas for going further / deeper.
I consider myself an advanced PyTorch user, but I often turn to the official tutorials for examples of how to achieve specific things.
To my mind, they come in different flavours:
- Some present an application or a technique like the tutorial on finetuning.
- Others introduce a part of the library. For example, there is a tutorial on how to use the profiler. I would also put the quantization tutorials in this category.
The tutorials stick crisply to their purpose, generally not overdoing the contextualization of what they do (background, transferability to other tasks), with all the advantages and disadvantages that brings. You will get something done very quickly. But when I recently wrote a little series on quantization on the PyTorch Lightning dev blog (1, 2, 3, 4), I felt that I should put more emphasis on the background than the static quantization / quantization-aware training tutorials do.
The tutorials are also probably a good reference/starting point when asking questions on the forums about how something works, because you'll find people that have looked at them just like you did.
Like all code, the tutorials can have bugs, too, or go out of date. (And if you find a bug, it is a good opportunity to contribute fixes, as they’re less unwieldy than PyTorch itself.)
In my experience, the PyTorch forums at discuss.pytorch.org invariably have great answers (unless they’re from me) – and this is thanks to people like E. Yan @eqy, K. Frank @kfrank and of course P. Bialecki @ptrblck with many, many insightful answers in the last quarter. (Sanyam rightly commented that Piotr is a legend!)
You get the best answers on the forum if you have a simple code snippet that demonstrates what does not work or where you are stuck on the next line. This is because people are more inclined to run a self-contained demonstration of the problem. If you had to tweak the suggestions (or one of several worked and the others didn't), we do appreciate it if you share what solved the problem in the end.
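To illustrate the shape of such a snippet (a made-up example, nothing to do with PyTorch or any actual forum thread): it runs as-is, is as small as possible, and states what was expected next to what actually happens:

```python
# Hypothetical forum question: "Why does my list keep growing?"
# Self-contained: no external data, no extra imports, runs top to bottom.

def append_to(item, bucket=[]):   # the function I don't understand
    bucket.append(item)
    return bucket

first = append_to(1)
second = append_to(2)
print(second)                     # I expected [2], but I get [1, 2]
```

A snippet like this lets an answerer reproduce the behavior in seconds and point straight at the cause (here, the mutable default argument shared between calls).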
It is also a nice place to share a link if you have built something. I found some very cool projects there.
Sharing your own things
Building and sharing fun things (bonus points if it is code + description how you made it / what you learned) is also a great way to contribute to the wider community.
As a random example, I loved deep learning with cats when I was a relatively new PyTorch user.
I tried to do small things in notebooks like [an early PyTorch SNGAN](https://nbviewer.jupyter.org/github/t-vi/pytorch-tvmisc/blob/master/wasserstein-distance/sn_projection_cgan_64x64_143c.ipynb) and Piotr's and my PyTorch implementation of the original StyleGAN.
You might:
- generate things with a GAN on some new type of data
- show a new application for neural networks in some field of your expertise or interest
- explain things better than we do in the book
- have better ideas than I have...
Reading source code
Reading the source of things you find interesting to find out what is going on under the hood can be very insightful. If you don't know where to start (the PyTorch source code is intimidating!), you can start looking at call stacks in the debugger to find what is called by whom.
For example: What happens if you call a given PyTorch function? That is a simple question, but it will have an elaborate answer, and in the best case you will learn about the Python-C++ interface, the PyTorch dispatcher, Autograd, code generation, and the ATen sublibrary of tensor routines calling into backends.
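A lightweight, standard-library stand-in for the debugger approach: capture the call stack at the point you are curious about and look at who called whom (in pdb, setting a breakpoint and typing `bt` shows the same chain, and works just as well inside library code):

```python
import traceback

def leaf():
    # Capture the current call stack as a list of function names;
    # in a debugger you would set a breakpoint here instead.
    return [frame.name for frame in traceback.extract_stack()]

def caller():
    return leaf()

names = caller()
print(names[-2:])   # the innermost frames: ['caller', 'leaf']
```

The same trick dropped into a function deep inside a large codebase quickly tells you which layers sit between the public API and the code you are reading.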
Some examples of blog posts:
Oldie but goldie: A selective excursion by yours truly
More recently, I made something for calling TorchScript functions in great depth with source code links: JIT Runtime Overview
Incidentally, I saw A public dissection of a PyTorch training step by Charles Frye on the day of my talk.
Submitting features / bug fixes (PRs)
When? If you find something that isn’t quite working right. Maybe validate that it’s a bug on the forums.
Alternatively, you might look for Good first issue-labeled issues in the bug tracker.
There is detailed help for building PyTorch (the main hurdle) in CONTRIBUTING.md. A full fresh build used to take 4 hours for me on my previous work machine; this can limit the enthusiasm for changing files that trigger a lot of recompilation...
If you feel PyTorch is very large even with the hints in CONTRIBUTING.md, auxiliary repos (like TorchVision & co, Examples, Tutorials, …) can be less unwieldy…
I once made a video fixing a bug.
And this was just a very personal selection
Again, many more resources exist: Blogs, podcasts, newsletters, forums of ecosystem projects...
I won't transcribe the Q&A part here.
Thanks again Sanyam for organizing the reading group and having me, and Andrea Pessl for the technical support.
I hope you will enjoy reading and discussing our book and that some of the resources listed above will be helpful in your journey with PyTorch.