Today we visualize some common neural network structures.
Learning Machine Learning
One of the most widely discussed metrics of the epidemic right now is the reproduction number R and its estimation by the Robert Koch Institute. In this article we retrace, as best we can, the underlying estimation of the time series of new infections. That also lets us compute analogous figures for the individual federal states.
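The RKI's full estimate involves more machinery (nowcasting in particular), but the core idea of the R estimate can be sketched in a few lines: assuming the commonly cited generation time of four days, R on day t compares the new cases of the last four days to those of the four days before. The case numbers below are made up for illustration.

```python
# Rough sketch of an R estimate in the spirit of the RKI method:
# with an assumed generation time of 4 days, R on day t is the ratio
# of new cases on days t-3..t to new cases on days t-7..t-4.

def r_estimate(new_cases, generation_time=4):
    """Return (day, R) pairs for all days with enough history."""
    estimates = []
    for t in range(2 * generation_time - 1, len(new_cases)):
        recent = sum(new_cases[t - generation_time + 1 : t + 1])
        earlier = sum(new_cases[t - 2 * generation_time + 1 : t - generation_time + 1])
        if earlier > 0:
            estimates.append((t, recent / earlier))
    return estimates

cases = [100, 110, 120, 130, 140, 150, 160, 170, 180, 190]
estimates = r_estimate(cases)
```

With linearly growing case numbers as above, the estimate comes out a bit above 1 and slowly declines, as one would expect.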
Depending on your perspective, this week's big upset was either the "reopening discussion orgies" ("Lockerungsdiskussionsorgien") themselves or the word coined for them.
In most Western European countries we are fortunately, though unfortunately probably only just, past the phase of exponential growth in case numbers. For the epidemic's rising phase, newspaper and online editors adopted or developed vivid metrics and graphics over time. How could one now sensibly measure the situation and its development in the phase of decline?
Can this be done only with models, or also fairly directly from the case numbers?
Thankfully, the measures taken to reduce the transmission of the coronavirus seem to work.
In Germany (and elsewhere) we're seeing a discussion on lifting restrictions.
It would be good to inform the discussion with some quantitative picture of the trade-off here.
Epidemiology is the science of the distribution, patterns, and determinants of diseases. We are interested in the distribution here. As in many scientific fields, mathematical modeling plays a large role. We look at a basic model for the spread of disease.
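A classic candidate for such a basic model is the SIR model: susceptible, infected, and recovered compartments coupled by a transmission rate beta and a recovery rate gamma. The sketch below integrates it with simple Euler steps; the parameters and population are illustrative, not fitted to anything.

```python
# Minimal SIR model sketch (illustrative parameters, not fitted):
# S susceptible, I infected, R recovered. beta is the transmission
# rate, gamma the recovery rate; we integrate with Euler steps.

def sir(s, i, r, beta=0.3, gamma=0.1, days=100, dt=1.0):
    n = s + i + r
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

hist = sir(s=99_990.0, i=10.0, r=0.0)
peak_infected = max(i for _, i, _ in hist)
```

With beta/gamma = 3, the infected curve first grows, peaks, and then declines as the susceptible pool is depleted.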
I have been more than usually quiet here. This is not because I have been writing little, but because I have been writing much. I'm very thrilled to work with Eli Stevens and Luca Antiga on our book, Deep Learning with PyTorch.
I'm experimenting with low-barrier sponsoring of some of my work via GitHub Sponsors. Here is the first report for subscribers.
An extension providing offline Neural Machine Translation in LibreOffice Writer.
Explaining AI outputs is a topic I have worked on quite a bit. Last May I gave a talk, Der KI auf die Finger geschaut (Keeping an eye on the AI), to mathematicians and actuaries at the University of Göttingen.
Today we look at the Sinkhorn iteration for entropy-regularised Wasserstein distances as a loss function between histograms.
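For orientation, here is what the basic Sinkhorn iteration looks like as a plain NumPy sketch between two small histograms (the values, eps, and iteration count are illustrative; the post's loss-function treatment in PyTorch is more involved).

```python
import numpy as np

# Sinkhorn iteration for the entropy-regularised transport problem
# between histograms a and b with cost matrix C. eps is the
# regularisation strength; the loop alternately rescales the kernel
# so the transport plan's marginals match a and b.

def sinkhorn(a, b, C, eps=0.5, n_iter=500):
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan
    return (P * C).sum(), P          # cost of the regularised plan

a = np.array([0.5, 0.5])
b = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
cost, P = sinkhorn(a, b, C)
```

After convergence, the row sums of P match a and the column sums match b, and the cost reflects the mass that has to move between the bins.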
Three of the most liked features of PyTorch are the extensible autograd mechanism, the ability to extend PyTorch with C++ efficiently, and the tracing/scripting mechanism. This leads to a natural question: can we have all three at the same time?
In this post, we dive into the autograd internals and come out with a solution.
PyTorch is a great project and I have only met very helpful people when contributing to it. However, the code base can be quite intimidating. Here we look at fixing a simple bug in detail and see that it is a less daunting task than it might seem at first.
And now for something completely different: In order to access the Fischertechnik Robotics TXT's camera functions under Wine, one needs to cope with the camera port being opened slowly. We provide a small Python proxy to solve this.
Exponentially weighted moving averages are used in several places in machine learning (often under the heading of momentum). We look at the connection between batch size and momentum.
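To make the object of study concrete, here is a minimal EWMA sketch (illustrative values; beta plays the role of the momentum coefficient). The effective averaging horizon is roughly 1/(1-beta) steps, which is one handle on the batch-size connection.

```python
# Exponentially weighted moving average: each step keeps a fraction
# beta of the old average and mixes in the new value. Feeding in a
# constant 1.0 shows the memory length: after n steps the average is
# exactly 1 - beta**n, so after 1/(1-beta) steps it has covered
# roughly 1 - 1/e of the distance to the input value.

def ewma(values, beta=0.9):
    avg = 0.0
    history = []
    for v in values:
        avg = beta * avg + (1 - beta) * v
        history.append(avg)
    return history

hist = ewma([1.0] * 100, beta=0.9)
# after 10 = 1/(1 - 0.9) steps the average is 1 - 0.9**10, about 0.65
```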
In a second very technical PyTorch JIT article, we look at graphs, specialization, and the impact on optimizations in the JIT.
Implementing fast recurrent neural networks is a challenging task. This is not only a hassle when training existing architectures, where optimized implementations such as CuDNN's LSTM sometimes help. More seriously, it also limits experimentation with new architectures.
This week, we had a PyTorch Meetup in Munich at Microsoft.
It was great to see more than 90 people come for the two talks and the PyTorch chat over pizza and drinks afterwards! Piotr Bialecki gave a talk on semantic search on the PyTorch forums, and I had the honor of talking about PyTorch, the JIT, and Android.
Recently, I discussed the use of PyTorch on Mobile / IoT-like devices. Naturally, the Caffe2 Android tutorial was a starting point. Getting it to work with Caffe2 from PyTorch and recent Android wasn't trivial, though. Apparently, other people have not had much luck either; I got a dozen questions about it on the first day after mentioning it in a discussion.
This should be easier. Here is how.
The beauty of PyTorch is that it makes its magic so conveniently accessible from Python. But how does it do so? We take a peek inside the gears that make PyTorch tick.
(Note that this is a work in progress. I'd be happy to hear your suggestions for additions or corrections.)
Today I gave a talk on Alex Graves's classic RNN paper and what I took away from implementing the handwriting generation model in PyTorch. To me, the density of insights combined with the almost complete absence of mechanical bits as well as the relatively short training time, makes this a very worthwhile exercise that I can heartily recommend to anyone interested in RNNs.
The beautiful thing about PyTorch's immediate execution model is that you can actually debug your programs.
Sometimes, however, the asynchronous nature of CUDA execution makes it hard. Here is a little trick to debug your programs.
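Whether it is the exact trick from the post is an assumption on my part, but one standard remedy deserves mention: because CUDA kernels are launched asynchronously, an error often surfaces at a later, unrelated line. Forcing synchronous launches makes the failing call report the error itself.

```python
import os

# Force synchronous CUDA kernel launches so that errors surface at
# the call that caused them rather than at some later sync point.
# This must be set before the first CUDA operation initialises the
# driver (so: at the very top of the script, or in the environment).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```

The price is slower execution, so this is for debugging sessions only, not for training runs.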
At the excellent fast.ai course and website, they are training a language model zoo.
It's a charming idea, and here are the (not yet quite complete) code and model I got for German.
The other day I got a question about how to do wavelet transformation in PyTorch in a way that allows computing gradients (that is, gradients of the outputs w.r.t. the inputs, probably not the coefficients). I like PyTorch and happen to have a certain fancy for wavelets as well, so here we go.
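As a toy illustration of why this works at all (not the construction from the post): one level of the Haar wavelet transform is just a linear, in fact orthogonal, map built from averages and differences of neighbouring samples, so it is differentiable and its inverse is the transposed operation. The NumPy sketch below shows the forward and inverse step.

```python
import numpy as np

# One level of the Haar wavelet transform: scaled sums (approximation)
# and differences (detail) of neighbouring samples. The 1/sqrt(2)
# scaling makes the map orthogonal, so it preserves the squared norm.

def haar_step(x):
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    # Transposed (= inverse, by orthogonality) operation.
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

a, d = haar_step([4.0, 2.0, 5.0, 5.0])
```

In PyTorch one would express the same filters as a strided convolution with fixed weights and let autograd handle the gradient.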
This is following up on my post on improved and semi-improved training of Wasserstein GANs. A few days ago, Kodali et al. published How to Train Your DRAGAN. They introduce an algorithmic game theory approach and propose to apply the gradient penalty only close to the real-data manifold. We take a look at their objective function, offer a new possible interpretation, and also consider what might be wrong with the Improved Training objective.
While doing so we introduce PRODGAN and SLOGAN.
We look at Improved Training of Wasserstein GANs and describe some geometric intuition on how it improves over the original Wasserstein GAN article.
Updated: We also introduce Semi-Improved Training of Wasserstein GANs, a variant that is simpler to implement as it does not need second derivatives.