Challenging common assumptions in the unsupervised learning of disentangled representations

Challenging common assumptions in the unsupervised learning of disentangled representations Locatello et al., ICML'19 Today’s paper choice won a best paper award at ICML’19. The ‘common assumptions’ that the paper challenges amount to: "unsupervised learning of disentangled representations is possible, and useful!" The key idea behind the unsupervised learning of disentangled representations is that ... Continue Reading

Continuous integration of machine learning models with ease.ml/ci

Continuous integration of machine learning models with ease.ml/ci: towards a rigorous yet practical treatment Renggli et al., SysML'19 Developing machine learning models is no different from developing traditional software, in the sense that it too follows a full life cycle of design, implementation, tuning, testing, and deployment. As machine learning models are used in more ... Continue Reading