Quartet: Harmonizing Task Scheduling and Caching for Cluster Computing

Francis Deslauriers, Peter McCormick, George Amvrosiadis, Ashvin Goel, Angela Demke Brown

The 8th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage '16), Denver, Colorado, June 2016


Abstract

Cluster computing frameworks such as Apache Hadoop and Apache Spark are commonly used to analyze large data sets. The analysis often involves running multiple, similar queries on the same data sets. This data reuse should improve query performance, but we find that these frameworks schedule query tasks independently of each other and are thus unable to exploit the data sharing across these tasks. We present Quartet, a system that leverages information about cached data to co-schedule tasks that share data. Our preliminary results are promising, showing that Quartet can increase the cache hit rate of Hadoop and Spark jobs by up to 54%. Our results suggest a shift in the way we think about job and task scheduling today, as Quartet is expected to perform better as more jobs are dispatched on the same data.
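The core idea of cache-aware co-scheduling can be illustrated with a minimal sketch. This is not Quartet's actual implementation; the names (`pick_task`, `node_cache`, the task dictionaries) are hypothetical, and the heuristic shown — when a node asks for work, prefer a pending task whose input block is already in that node's page cache — is only the simplest form of the idea the abstract describes.

```python
# Hypothetical sketch of cache-aware task dispatch in the spirit of Quartet.
# All names and data structures here are illustrative, not from the system.

def pick_task(pending, node, node_cache):
    """Return a task for `node`, preferring one whose input block
    is already cached there; fall back to any pending task."""
    cached = node_cache.get(node, set())
    for task in pending:
        if task["block"] in cached:
            return task
    return pending[0] if pending else None

# Two pending tasks; node-A already caches block b3, so t2 is chosen
# even though t1 is first in the queue.
pending = [
    {"id": "t1", "block": "b9"},
    {"id": "t2", "block": "b3"},
]
node_cache = {"node-A": {"b3", "b7"}}
print(pick_task(pending, "node-A", node_cache)["id"])  # prints "t2"
```

A scheduler unaware of cache contents would dispatch t1 here and read b9 from disk; co-scheduling tasks with their cached inputs is what raises the hit rate.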


Manuscript

PDF


BibTeX

Bib