Incremental consistency guarantees for replicated objects
Incremental consistency guarantees for replicated objects Guerraoui et al., OSDI 2016 We know that there's a price to be paid for strong consistency in terms of higher latencies and reduced throughput. We also know that there's a price to be paid for weaker consistency in terms of application correctness and/or programmer difficulty. Furthermore, …
Author: adriancolyer
The many faces of consistency
The many faces of consistency Aguilera & Terry, IEEE Data Engineering Bulletin, 2016 Update: Marko Vukolić posted a comment to point me to an ACM Survey paper he published together with Paolo Viotti last year that looks at 50 different consistency models for distributed non-transactional storage systems and puts them into a comprehensive …
Adaptive logging: optimizing logging and recovery costs in distributed in-memory databases
Adaptive logging: Optimizing logging and recovery costs in distributed in-memory databases Yao et al., SIGMOD 2016 This is a paper about the trade-offs between transaction throughput and database recovery time. Intuitively, for example, you can do a little more work on each transaction (lowering throughput) in order to reduce the time it takes to recover …
Shasta: Interactive reporting at scale
Shasta: Interactive Reporting At Scale Manoharan et al., SIGMOD 2016 You have vast database schemas with hundreds of tables, applications that need to combine OLTP and OLAP functionality, queries that may join 50 or more tables across disparate data sources, oh, and the user is waiting, so you'd better deliver the results online with low …
Apache Hadoop YARN: Yet another resource negotiator
Apache Hadoop YARN: Yet Another Resource Negotiator Vavilapalli et al., SoCC 2013 The opening section of Prof. Demirbas' reading list is concerned with programming the datacenter, aka 'the Datacenter Operating System', though I can't help but think of Mesosphere when I hear that latter phrase. There are four papers: in publication order these are …
“A Distributed Systems Seminar Reading List,” Spring 2017 edition
Update: links giving 404s were too confusing, so I've removed links to not-yet published posts and will add them back in at the end of the week! Last year we looked at Murat Demirbas' Distributed systems seminar reading list for Spring 2016. Now of course it's 2017 and Prof. Demirbas has a new list of papers …
Strategic attentive writer for learning macro-actions
Strategic attentive writer for learning macro-actions Vezhnevets et al. (Google DeepMind), NIPS 2016 Baldrick may have a cunning plan, but most Deep Q Networks (DQNs) just react to what's immediately in front of them and what has come before. That is, at any given time step they propose the best action to take there and …
Unsupervised learning of 3D structure from images
Unsupervised learning of 3D structure from images Rezende et al. (Google DeepMind), NIPS 2016 Earlier this week we looked at how deep nets can learn intuitive physics given an input of objects and the relations between them. If only there were some way to look at a 2D scene …
Learning to learn by gradient descent by gradient descent
Learning to learn by gradient descent by gradient descent Andrychowicz et al., NIPS 2016 One of the things that strikes me when I read these NIPS papers is just how short some of them are - between the introduction and the evaluation sections you might find only one or two pages! A general form is …
Matching networks for one shot learning
Matching networks for one shot learning Vinyals et al. (Google DeepMind), NIPS 2016 Yesterday we saw a neural network that can learn basic Newtonian physics. On reflection that's not totally surprising since we know that deep networks are very good at learning functions of the kind that describe our natural world. Alongside an intuitive understanding … Continue reading Matching networks for one shot learning