Synthesizing data structure transformations from input-output examples

Synthesizing data structure transformations from input-output examples, Feser et al., PLDI'15. The Programmatically Interpretable Reinforcement Learning paper that we looked at last time out contained this passing comment, coupled with a link to today's paper choice: "It is known from prior work that such [functional] languages offer natural advantages in program synthesis." That certainly caught …

Declarative assembly of web applications from pre-defined concepts

Declarative assembly of web applications from pre-defined concepts, De Rosso et al., Onward! 2019. I chose this paper to challenge my own thinking. I’m not really a fan of low-code / no-code / just drag-and-drop-from-our-catalogue forms of application development. My fear is that all too often it’s like jumping on a motorbike and tearing off …

Local-first software: you own your data, in spite of the cloud

Local-first software: you own your data, in spite of the cloud, Kleppmann et al., Onward! '19. Watch out! If you start reading this paper you could be lost for hours following all the interesting links and ideas, and end up even more dissatisfied than you already are with the state of software today. You might …

Scaling symbolic evaluation for automated verification of systems code with Serval

Scaling symbolic evaluation for automated verification of systems code with Serval, Nelson et al., SOSP'19. Serval is a framework for developing automated verifiers of systems software. It makes an interesting juxtaposition to the approach Google took with Snap that we looked at last time out. I’m sure that Google engineers do indeed take extreme care …

Three key checklists and remedies for trustworthy analysis of online controlled experiments at scale

Three key checklists and remedies for trustworthy analysis of online controlled experiments at scale, Fabijan et al., ICSE 2019. Last time out we looked at machine learning at Microsoft, where we learned, among other things, that using an online controlled experiment (OCE) approach to rolling out changes to ML-centric software is important. Prior to that …

Automating chaos experiments in production

Automating chaos experiments in production, Basiri et al., ICSE 2019. Are you ready to take your system assurance programme to the next level? This is a fascinating paper from members of Netflix’s Resilience Engineering team describing their chaos engineering initiatives: automated controlled experiments designed to verify hypotheses about how the system should behave under gray …