Learning Machine Learning

Debugging CUDA device-side assert in PyTorch

June 15, 2018

The beautiful thing about PyTorch's immediate execution model is that you can actually debug your programs.
Sometimes, however, the asynchronous nature of CUDA execution makes that hard. Here is a little trick for debugging your programs.
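A common trick for this problem (possibly the one the post describes) is to make CUDA kernel launches synchronous via an environment variable, so the reported Python line is the one that actually triggered the device-side assert. A minimal sketch:

```python
import os

# CUDA kernels run asynchronously, so a device-side assert is often reported
# at a later, unrelated CUDA call. Setting this variable makes kernel launches
# synchronous, so the traceback points at the Python line that caused the
# failure. It must be set before PyTorch initializes CUDA, i.e. before the
# first CUDA operation (setting it before `import torch` is the safe choice).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```

Because synchronous launches slow execution down considerably, this is something to enable only while hunting the assert, not in regular training runs.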


German LM for the Fast AI model zoo (work in progress)

June 4, 2018

At the excellent fast.ai course and website, they are training a language model zoo.

It's a charming idea, and here are the (not quite complete yet) code and model I got for German.


2D Wavelet Transformation in PyTorch

Oct. 29, 2017

The other day I got a question about how to do wavelet transformations in PyTorch in a way that allows computing gradients (that is, gradients of the outputs w.r.t. the inputs, probably not the coefficients). I like PyTorch and happen to have a certain fancy for wavelets as well, so here we go.
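As a sketch of what such a differentiable transform can look like (using the Haar wavelet as a minimal example; the post itself may well use other wavelets), one level of a 2D wavelet decomposition can be expressed as a strided convolution, which PyTorch then differentiates automatically:

```python
import torch
import torch.nn.functional as F

def haar_dwt2d(x):
    """One level of a 2D Haar wavelet transform, differentiable w.r.t. x.

    x: tensor of shape (N, 1, H, W) with even H and W.
    Returns a (N, 4, H/2, W/2) tensor with the LL, LH, HL, HH subbands,
    scaled so that LL is the mean of each 2x2 block.
    """
    lo = torch.tensor([1.0, 1.0]) / 2   # Haar lowpass (averaging)
    hi = torch.tensor([1.0, -1.0]) / 2  # Haar highpass (differencing)
    # Build the four separable 2x2 kernels as a (4, 1, 2, 2) filter bank.
    kernels = torch.stack([
        torch.outer(lo, lo), torch.outer(lo, hi),
        torch.outer(hi, lo), torch.outer(hi, hi),
    ]).unsqueeze(1)
    return F.conv2d(x, kernels, stride=2)

x = torch.randn(1, 1, 8, 8, requires_grad=True)
coeffs = haar_dwt2d(x)   # shape (1, 4, 4, 4)
coeffs.sum().backward()  # gradients flow back to the input image
```

Since the kernels are ordinary tensors, gradients w.r.t. the input come for free from autograd; multi-level decomposition is just repeated application to the LL subband.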


More Improved Training of Wasserstein GANs and DRAGAN

May 29, 2017

This is following up on my post on improved and semi-improved training of Wasserstein GANs. A few days ago, Kodali et al. published How to Train Your DRAGAN. They introduce an approach based on algorithmic game theory and propose to apply the gradient penalty only close to the real-data manifold. We take a look at their objective function, offer a possible new interpretation, and also consider what might be wrong with the Improved Training objective.
While doing so we introduce PRODGAN and SLOGAN.
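For reference, the gradient penalty under discussion (from Gulrajani et al.'s Improved Training of Wasserstein GANs, which penalizes the critic's gradient norm at points interpolated between real and fake samples) can be sketched as follows; `critic`, the sample shapes, and `lambda_gp` here are illustrative placeholders, not the post's actual setup:

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP style penalty: push the critic's gradient norm towards 1
    at random interpolates between real and fake samples."""
    eps = torch.rand(real.size(0), 1, 1, 1)  # one mixing weight per sample
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # create_graph=True keeps the graph so the penalty itself can be
    # backpropagated, which involves second derivatives of the critic.
    grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
    norms = grads.flatten(1).norm(2, dim=1)
    return lambda_gp * ((norms - 1) ** 2).mean()

# Toy critic on 1x2x2 "images", just to show the call pattern.
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(4, 1))
real = torch.randn(8, 1, 2, 2)
fake = torch.randn(8, 1, 2, 2)
gp = gradient_penalty(critic, real, fake)
gp.backward()  # works because create_graph=True retained the graph
```

DRAGAN's variant keeps the same penalty term but samples the points where it is applied from a neighbourhood of the real data instead of along real-fake interpolation lines.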


Geometric Intuition on Improved Wasserstein GANs

April 13, 2017

We look at Improved Training of Wasserstein GANs and describe some geometric intuition for how it improves on the original Wasserstein GAN article.

Updated: We also introduce Semi-Improved Training of Wasserstein GANs, a variant that is simpler to implement as it does not need second derivatives.