Federation in Cloud Data Management: Challenges and Opportunities

Gang Chen, Member, IEEE, H. V. Jagadish, Member, IEEE, Dawei Jiang, David Maier, Senior Member, IEEE, Beng Chin Ooi, Fellow, IEEE, Kian-Lee Tan, Member, IEEE, Wang-Chiew Tan, Member, IEEE

Abstract—Companies are increasingly moving their data processing to the cloud, for reasons of cost, scalability, and convenience, among others. However, hosting multiple applications and storage systems on the same cloud introduces resource-sharing and heterogeneous data processing challenges, due to the variety of resource usage patterns employed, the variety of data types stored, and the variety of query interfaces presented by those systems. Furthermore, real clouds are never perfectly symmetric: there often are differences between individual processors in their capabilities and connectivity. In this paper, we introduce a federation framework to manage such heterogeneous clouds. We then use this framework to discuss several challenges and their potential solutions.

Index Terms—Cloud Computing, Database Systems, Database Applications

1 INTRODUCTION

THERE is increasing interest for companies to deploy their business systems on the cloud. The typical expectation is that the cloud computing service provider will provision computing resources (e.g., processors, storage, software, and possibly even applications) on demand over the network. In the same way that the electricity distribution grid revolutionized business use of electricity by allowing companies to access electricity without investing in the infrastructure to produce it, cloud computing allows companies to access powerful computing resources without requiring large IT investments. Small and medium companies can deploy their business systems on the public cloud, hosted by service providers such as Google or Amazon, and thus eliminate the costs of ownership.
Large corporations can build an on-premise private cloud to consolidate computing resources, and thus reduce the overall cost of operations.

Gang Chen is with the College of Computer Science, Zhejiang University, Hangzhou 310027, China.
H. V. Jagadish is with the Department of Electrical Engineering and Computer Science, University of Michigan, USA 48109-2121.
Dawei Jiang, Beng Chin Ooi and Kian-Lee Tan are with the School of Computing, National University of Singapore, Singapore 117417.
David Maier is with the Department of Computer Science, Portland State University, Portland 97201.
Wang-Chiew Tan is with the Computer Science Department, University of California, Santa Cruz, USA 95064.

A typical first step for deploying a business system in the cloud is to move back-end data management to the cloud. Thus, cloud infrastructures have to support data management, for multiple applications and business units, in an efficient and scalable manner. The standard way to address this requirement is through an elastic scaling-out approach, namely, on-demand provision of more server instances for speeding up data processing tasks or handling increasing workloads. Therefore, a data management system is suitable for cloud deployment only if it is architecturally designed to leverage this elastic scale-out feature. Initially, only distributed file systems (e.g., GFS and HDFS) and massively parallel processing (MPP) analytical database systems could readily serve as the basis for cloud deployment. With advances in database sharding and NoSQL storage systems (such as BigTable, HBase and Cassandra), additional options for cloud deployment are now available. Today, there continues to be a great deal of effort towards the study of the design, architecture, and algorithms for building scalable data management systems.
We expect that in the future, more back-end scale-out possibilities will be developed, and hence, even more business systems will migrate to the cloud environment.

Notwithstanding the advantages of deploying business systems in the cloud, hosting multiple data storage systems in the same cloud introduces issues of resource sharing and heterogeneous data processing. The cloud has to manage the variety of storage resource usage patterns (e.g., large file-system block size vs. small block size), the variety of data managed (e.g., structured data, unstructured data, or media data such as digital maps and images), and the variety of programming interfaces (e.g., SQL, MapReduce, key-value retrieval) presented by different systems.
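The interface variety just described can be made concrete with a small sketch. The three toy classes below are hypothetical stand-ins (none of these names come from a real system API): a byte-oriented file-store worker, a key-value worker, and a SQL-style worker, each exposing the same logical record through a different access idiom that a federated cloud must bridge.

```python
# Illustrative sketch of three access idioms; all class and method names
# here are hypothetical, not real system APIs.

class FileStoreWorker:
    """Byte-oriented: clients read raw byte ranges, as in GFS/HDFS."""
    def __init__(self, blob: bytes):
        self.blob = blob
    def read(self, offset: int, length: int) -> bytes:
        return self.blob[offset:offset + length]

class KeyValueWorker:
    """Key-value: clients get/put records by key, as in NoSQL stores."""
    def __init__(self):
        self.table = {}
    def put(self, key, value):
        self.table[key] = value
    def get(self, key):
        return self.table.get(key)

class SqlWorker:
    """Relational: clients express predicates; the worker evaluates them."""
    def __init__(self, rows):
        self.rows = rows          # list of dicts, one per tuple
    def select(self, predicate):
        return [r for r in self.rows if predicate(r)]

# The same logical record, reached three different ways:
record = b"ticket:42=confirmed"
fs = FileStoreWorker(record)
kv = KeyValueWorker(); kv.put("ticket:42", "confirmed")
sql = SqlWorker([{"ticket": 42, "status": "confirmed"}])

by_bytes = fs.read(10, 9)                    # byte range of the value
by_key = kv.get("ticket:42")                 # point lookup by key
by_query = sql.select(lambda r: r["ticket"] == 42)[0]["status"]
```

All three calls return the same value; the point is that each worker requires different client code, which is the heterogeneity a federation layer must absorb.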
This article explores the challenges and opportunities in the design and implementation of data management systems on the cloud that can handle the variety in storage usage patterns, the variety of data managed, and the variety of programming interfaces. We first consider the shared-nothing architecture, the architecture currently followed by data management systems for cloud deployment. We then explore the challenges of hosting multiple architecturally similar data management systems on the cloud and suggest directions towards possible solutions. Finally, we introduce epiC, a system that supports transparent resource management and heterogeneous data processing for federated cloud data management systems through a framework design.

The rest of the paper is organized as follows. Section 2 presents the physical characteristics of the cloud environment and the shared-nothing architecture for data management systems hosted on the cloud. Section 3 presents the issues of deploying multiple data management systems on the cloud and the desired features for such federated data management. We also discuss why such features are either missing or incompletely supported in current data management systems. Section 4 overviews the epiC system, followed by the conclusion in Section 5.

2 CHARACTERISTICS OF CLOUD DATA MANAGEMENT SYSTEMS

In this section, we first present the physical characteristics of cloud computing environments and the architecture of a data management system that is suitable for cloud deployment. Afterwards, we discuss the desired features in providing federated data management service in the cloud.

2.1 Characteristics of Cloud Computing

Under the hood, the cloud computing service provider (e.g., Google, Amazon, or the IT department of an organization) manages a large number of computers interconnected by the network for the on-demand provisioning presented later. Those computers are hosted either inside a single data center or among multiple data centers connected by a wide-area network (WAN).
The service provider models a single data center as a "big" virtual computer and multiple data centers as a networked virtual computer. The virtual computer (big or networked) is then virtualized as a set of virtual server instances of different types based on their computing power, such as CPU and storage capacity. For example, the Amazon EC2 cloud offering provides six different instance types. The most powerful instance type is equipped with 15 GB of memory and 8 virtual cores.

The service provider serves its customers with virtual server instances based on requests for the instance type and instance number. Usually, the service provider offers a fixed number of instance types to choose from. However, there is no a priori limit on the maximum number of virtual server instances that customers can request. The actual limit (called a soft limit) depends on the capacity of the cloud and varies over time.

2.2 The Architecture of Scalable Cloud Data Management Systems

There are two approaches to scaling data management systems deployed on the cloud: scale up and scale out. In the scale-up approach, the company deploys the database system on a single virtual server instance. When the workload increases, the business upgrades the virtual server instance to a more powerful type. In the scale-out approach, the company deploys the database on a set of virtual server instances of the same instance type. Scaling involves requesting more virtual server instances for processing the increased workload. Since the service provider poses a fixed limit on the instance types that one can choose and merely a soft limit on the number of virtual server instances that one can request, the scale-out approach is usually preferred. We will focus on scale-out in this paper.

Parallelism has been studied extensively in the database community. A standard approach to parallelism is called shared-nothing, and this has turned out to be suitable for building scale-out data management systems.
The shared-nothing architecture decomposes the data management system into two components: master and worker. A typical shared-nothing data management system consists of a single master server and any number (at least theoretically) of worker servers. Worker servers are independent from each other. Each worker server independently serves read and write requests to its local data without requiring help from other worker servers. The master server coordinates the worker servers to give users the illusion of data distribution transparency. Figure 1 presents a typical shared-nothing architecture.

The shared-nothing architecture has two main advantages. First, it can leverage the elastic scale-out capability provided by cloud services to handle increased workload by adding more worker servers to the system without adversely affecting the higher-level applications. Second, since worker servers are independent of each other, the whole data management system (also called a cluster) is highly resilient to worker failures. When a worker fails, only data stored on that worker need to be recovered; data stored on other workers are not affected. Furthermore, one can speed up the recovery process by involving several "healthy" workers, with each healthy worker recovering a portion of the data stored on the failed worker.

Fig. 1. The Shared-nothing Architecture.

Fig. 2. The HDFS Architecture. The NameNode handles metadata operations (e.g., createFile(), deleteFile()), load balancing, replication, etc.; F1, F2 and F3 are files replicated across DataNodes.

As an example, GFS (and its corresponding open-source implementation, HDFS) is a typical shared-nothing storage system. The GFS system provides a service of reading and writing files of arbitrary size. The system consists of a single master and a number of slaves (i.e., workers). A file is stored as a number of fixed-size blocks, which are placed on the slaves. Each slave is capable of reading and writing the file blocks stored on that slave. The master is responsible for maintaining the meta-information (e.g., file name, file size, block locations) of each file. Figure 2 shows the architecture of the HDFS system.

The shared-nothing architecture is not only useful for building scalable storage systems but is also suitable for building a distributed data processing system. Similar to a storage system, a distributed data processing system consists of a single master and any number of independent workers. Each worker is capable of performing a query or a data processing task independently. For a data processing task, the master coordinates the workers to complete it.

Shared-nothing architectures are commonly used in cloud data management, and we will focus on these for this paper. In other words, we assume that we have a number of independent workers serving local requests and a master that coordinates those workers.

3 FEDERATION CHALLENGES IN CLOUD DATA MANAGEMENT

A cloud system comprises a set of perfectly reliable, identical, independent workers, collectively dedicated to the completion of one task of interest. While this simple picture is useful in many contexts, it is inaccurate when it comes to actual deployment of a cloud system. Relaxing each of the simplifying assumptions is a challenge, as we describe in this section. In the next three subsections, we respectively consider the challenges due to workers not being identical, due to their having to accomplish a heterogeneous collection of tasks, and due to their being less than perfectly reliable.

3.1 Asymmetric Hardware Capability

It is typical to assume, in scale-out systems, that all nodes are identical in capability. While this situation may be true for a short while following a new cloud installation, it is unlikely to remain true over time. It is often the case that a cloud owner will add services to the cloud over time, and in doing so, s/he will purchase new hardware. If even only a few months have gone by since the previous purchase, technology will likely have progressed, and the new nodes may have better clock speeds, more cache or memory, and so on. It is likely the cloud owner will prefer to purchase the newer models of hardware available at the time of purchase, rather than try to replicate a previous purchase of older hardware. At the same time, the cloud owner is unlikely to retire the older nodes until their useful life is over. Hence, we should not expect that individual nodes in a cloud are identical in practice.

In the classic map-reduce paradigm, there is a synchronization point at the reduce phase, which requires that all mappers complete their tasks before the reduce phase begins. If the nodes implementing the map phase are heterogeneous in their capabilities, then this heterogeneity must be taken into account when distributing tasks across the nodes so as to optimize the overall completion time for the map and reduce phases.

3.2 Dynamic Resource Sharing

It is well recognized that there are two major classes of data management applications: transactional data applications and analytical data applications. Companies develop transactional data applications for their business needs (e.g., airline ticket reservations). Transactional data applications tend to have a large number of database reads and writes.
Even though the total number of database reads and writes may be large, each read tends to process only a small number of records (a thousand records per read is considered large in most companies), and each write typically inserts only a single record or a small number of records into the database. Companies build analytical data applications to derive insights from the business data for decision making. Such applications favor periodic reads and writes performed in batches. Each query or loading operation can involve the retrieval or insertion of a huge number of records (millions or billions).
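The contrast between the two access patterns can be sketched in a few lines. The toy table, index, and function names below are hypothetical, purely to illustrate the shapes of the two workloads: a transactional read touches one record through an index, while an analytical query sweeps the whole table in one batch pass.

```python
# Toy contrast of the two workload shapes described above (illustrative only;
# the table, index, and function names are hypothetical, not from any system).
table = [{"id": i, "amount": i % 100} for i in range(10_000)]
index = {row["id"]: row for row in table}   # hash index enabling point access

def transactional_read(key):
    """OLTP-style: fetch one record via the index, as in a ticket lookup."""
    return index[key]

def analytical_scan():
    """OLAP-style: a single query sweeps every record in one batch pass."""
    return sum(row["amount"] for row in table)

one_row = transactional_read(4242)   # touches 1 of 10,000 records
total = analytical_scan()            # touches all 10,000 records
```

The storage layouts that serve these two shapes well are very different, which is the conflict the rest of this section examines.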
To support the two kinds of data applications, two kinds of data management systems are built: transactional and analytical. To achieve good performance, the two types of data management systems persist data to the underlying file systems or storage devices (e.g., hard disks) in different ways. Analytical data management systems favor sequential reads and writes of a large number of records, and thus tend to employ a large file-system block size (typically tens or hundreds of MB) for excellent throughput. On the other hand, transactional data management systems favor random reads and writes of a small number of records, and thus usually adopt a small file-system block size (tens of KB at most) to reduce data transmission time and latency.

Traditionally, the needs of these two different classes of applications are deemed so disparate that they cannot be run efficiently on the same machine. For the same reasons, deploying both transactional and analytical data management systems on the same cloud introduces similar conflicts in storage resource usage. In the shared-nothing architecture, if one allows both kinds of data management systems to share the same set of workers by employing a small file-system block size, then one can potentially achieve good resource utilization (space freed by one system can be reclaimed by another system). However, the performance of the analytical data management system will be suboptimal. This is because the large number of insertions, deletions, and updates introduced by the transactional database will fragment the data files stored on hard disks over time, and therefore prevent the analytical data management system from claiming large consecutive storage blocks. On the other hand, if one separates the two data management systems by placing them on different workers, better performance can be achieved, as the data management systems are effectively isolated from each other.
However, the overall resource utilization is likely to be low, since the storage space released by one system cannot be reused by the other.

Given that different kinds of data management systems favor different resource usage patterns, companies must deploy different file systems on the cloud for different data-persistence requirements. Each file system is dedicated to a specific storage system. Companies must also employ a resource-management system for dynamic resource sharing between those file systems. The resource-management system should balance two conflicting goals: performance isolation (ensuring that each file system operates on a dedicated storage space without being affected by other file systems) and resource utilization (ensuring that storage space released by one file system can be reclaimed by another file system).

There are a few systems that support dynamic resource sharing in cloud environments. Cluster-management systems such as Mesos guarantee performance isolation on computing devices (e.g., CPU and memory). However, their performance-isolation capabilities on storage devices (e.g., hard disks, SSDs) are only in the experimental stages. The project most closely related to this topic is HDFS federation. This system allows companies to deploy multiple HDFS file systems on the same cloud. It allows the same data node (i.e., a worker server of HDFS storing file blocks) to be shared among multiple HDFS file systems. However, the system does not provide performance isolation: a single system cannot gain exclusive access to the underlying storage device.

In addition to sharing a compute node between different storage systems, there is another kind of resource sharing in an enterprise cloud environment: sharing a compute node between a computation framework such as MapReduce and a storage system such as GFS.
In the latter case, a compute node contributes a fixed number of computing units (called slots) to the computation framework (e.g., MapReduce) for running tasks, and a set of storage devices (e.g., disks) to the storage system for storing and retrieving data.

It is well known that data retrieval or scanning performance dominates the performance of large-scale data analytical jobs. Thus, the aforementioned resource-sharing scheme introduces the issue of how to efficiently fetch data from the storage system to the computation framework for processing. The current state-of-the-art approach to this problem is to increase data locality during data processing, namely, launching as many tasks as possible on the compute nodes that hold those tasks' input. Since data is primarily stored on disks, data locality is usually realized at the disk level (called disk locality). However, due to advances in networking technology, the disk-locality approach may not work well in an in-house business cloud environment. Recent studies have shown that, with improved commodity networks, the read bandwidth from a local disk is only slightly (about 8%) higher than the read bandwidth from a remote disk hosted in the same rack. Furthermore, when a data center employs a full bisection-bandwidth topology, the inter-rack read bandwidth is equal to the intra-rack read bandwidth. As a result, the performance of reading remote disks will be comparable to the performance of reading local disks, meaning the disk-locality approach may not deliver better data retrieval performance.

However, the principle of data locality to improve data retrieval performance still applies. Instead of focusing on disk locality, we need to focus on memory locality.
That is, we want to read data from local memory instead of from remote memory and disks, since reading data from local memory is at least two orders of magnitude faster than reading data from disks (local or remote) or remote memory, and this performance gap is unlikely to be affected by advances in network technology.
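A back-of-envelope calculation makes the magnitude of this gap concrete. The bandwidth figures below are rough, assumed values for commodity hardware (they are not measurements from this paper); only the ~8% local-vs.-remote disk difference echoes the studies cited above.

```python
# Rough time to read one 64 MB block from different storage tiers.
# Bandwidths are assumed, round numbers for commodity hardware.
BLOCK_MB = 64

ASSUMED_BANDWIDTH_MB_S = {
    "local memory": 10_000,  # ~10 GB/s DRAM
    "local disk": 100,       # ~100 MB/s spinning disk
    "remote disk": 92,       # local disk minus the ~8% penalty cited above
}

def read_time_ms(tier: str) -> float:
    """Milliseconds to transfer one block at the tier's assumed bandwidth."""
    return BLOCK_MB / ASSUMED_BANDWIDTH_MB_S[tier] * 1000.0

for tier in ASSUMED_BANDWIDTH_MB_S:
    print(f"{tier:>12}: {read_time_ms(tier):8.1f} ms")
```

Under these assumptions, a local-memory read is 100x faster than a local-disk read (the two-orders-of-magnitude gap), while local and remote disk reads differ by only a few percent, which is why disk locality buys so little.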
To achieve memory locality, two problems need to be resolved. First, can data be effectively held in the memory of the cluster? Second, how can we schedule tasks to increase memory locality? To solve the first problem, a recent proposal suggests a RAMCloud approach, namely using memory as the primary storage device to store all input data and using disks only for backup or archival purposes. Unfortunately, given that companies produce data at a tremendous rate these days and the capacity of disks in today's enterprise cloud environment is several orders of magnitude larger than the size of memory, it is not viable to entirely replace disks with memory. For the second problem, a recent work attempts to adopt a dynamic data repartitioning technique to reduce inter-memory data transmission for repeated jobs.

In this article, we argue that a better and more feasible approach for handling the memory-capacity problem is to use memory as a cache. Instead of holding all input data in memory at all times (as suggested by RAMCloud), the caching scheme caches data (both input data and intermediate data produced by analytical jobs) in memory as much as possible, and employs an eviction strategy to make space for newly arrived or computed data. The cache is adaptive and designed to facilitate data scheduling. Based on the query workload, the cache-management system actively redistributes the cached data to increase memory locality, ensuring that most tasks can read data from local memory instead of remote memory or disks. The caching scheme presents several challenges, including scalable metadata management (i.e., how to effectively maintain location information for cached data, as the data may be distributed) and an eviction strategy designed to achieve better job-completion performance rather than a high hit ratio.
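A deliberately simplified sketch of such a block cache appears below. It uses a plain LRU policy as a placeholder for the job-completion-aware eviction strategy argued for above, and all names in it are hypothetical, not drawn from an existing system.

```python
# Minimal in-memory block cache with LRU eviction. LRU is only a placeholder
# policy; the text argues a job-completion-aware policy is the real goal.
from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()          # block_id -> data, LRU order

    def get(self, block_id):
        if block_id not in self.blocks:
            return None                      # miss: caller fetches from disk/remote
        self.blocks.move_to_end(block_id)    # mark as most recently used
        return self.blocks[block_id]

    def put(self, block_id, data):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used

cache = BlockCache(capacity_blocks=2)
cache.put("b1", "block-1-bytes")
cache.put("b2", "block-2-bytes")
cache.get("b1")                   # touch b1, so b2 becomes the eviction victim
cache.put("b3", "block-3-bytes")  # capacity exceeded: b2 is evicted
```

Swapping the `popitem(last=False)` line for a workload-driven choice is exactly where the open eviction-strategy challenge lives; the distributed-metadata challenge (which node caches which block) is not modeled here at all.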
These challenges have not been well studied so far.

3.2.1 Diversity in Node Query Interfaces

Given that cloud storage systems employ a shared-nothing architecture, data access requests are actually performed by storage workers. Different storage workers present different query interfaces. For example, a distributed file system worker exposes a byte-oriented interface for users to read and write bytes from file blocks, and a NoSQL store worker exposes a key-value interface for accessing key-value pairs. To facilitate programming, application programmers often wrap these native low-level query interfaces into a high-level data processing interface. In this section, we describe the desired features that such a high-level interface should support if the local data is expected to be processed by a distributed data processing system.

Two kinds of query interfaces have been proposed so far: record-oriented and set-oriented. A record-oriented query interface presents users with a set of interfaces (i.e., functions) with well-defined semantics. The input of each function is a single record. Users specify the data processing logic in the body of the function. The return value of the function can be a record, a scalar value, or a list of records, depending on the semantics of that function. A set-oriented query interface presents a collection of pre-defined operators to users. Each operator takes a single set or a fixed number of sets of records as input and produces a set of records as output. Users specify their data processing tasks by composing those operators. Here, the definition of a set is similar to the definition of a set in mathematics, except that duplicate records are permitted in some systems.

The advantage of the record-oriented approach is that the query interface allows users to implement arbitrary data-processing logic.
There is no hard limit. A disadvantage of the record-oriented approach is that the query interface is not operationally closed. A function is operationally closed if its inputs and outputs are of the same type. For example, the integer arithmetic operations such as plus and minus are operationally closed, since both the inputs and outputs of those operators are all integers. If query interfaces are operationally closed, one can easily compose an arbitrary number of functions by piping the output of one function to another, and thus write queries and data processing tasks concisely. Furthermore, with operational closure, a query can be optimized by a query planner to automatically generate a good evaluation order. This capability further reduces the burden on users to optimize query performance.

The advantage of the set-oriented approach is that the query interface is operationally closed: the inputs and output of each operator are all sets. However, a disadvantage of the set-oriented approach is that the queries and data-processing tasks that users can express are limited by the operators available. For example, if one wants to partition a dataset for parallel processing and the partition operator is missing from the interface, then the user has no systematic way to perform that task and must resort to an ad hoc solution. Another disadvantage of the set-oriented approach is that the programming model does not provide a way for users to customize the behavior of pre-defined operators. For example, if the built-in filtering operator is deemed inappropriate for an application's filtering logic, one may want to implement his or her own filtering logic to replace the default filtering operator.
Such modification is typically not allowed, or difficult to do, in set-oriented querying systems.

Given that there is an unbounded number of potentially useful operators, we argue that the ideal query interface for per-node query processing should be a mixture of the record-oriented and set-oriented styles, with capabilities for expanding the built-in operator set. The query interface should provide three collections of APIs: a record-oriented API for customizing the behavior of the built-in operators, an
extension API for introducing user-defined operators to the system, and a set-oriented API for composition of both built-in and user-defined operators. To evaluate a query, the optimizer should be able to generate query plans for queries composed of a mixture of built-in operators and user-defined operators.

The desired hybrid query interface and query optimization technique are missing in current systems. Existing systems offer only a single query interface, which is either record-oriented or set-oriented. The MapReduce system employs a record-oriented query interface. Each local query processing task is performed in either a map or a reduce function. The signatures of the two functions are as follows:

map(k1, v1) -> list(k2, v2)
reduce(k2, list(v2)) -> list(v2)

It is clear that the interface is not operationally closed: the input and output of the map and reduce functions are of different types. Therefore, it is impossible to directly feed the output of map functions to reduce functions. Indeed, the major service offered by the MapReduce runtime system (i.e., sorting, shuffling, and merging) is to transform the output format of mappers into the input format of reducers. The runtime system thus implicitly enforces an evaluation order between mappers and reducers. Complex queries are performed by chaining a number of MapReduce jobs. This evaluation strategy often results in suboptimal performance.

All shared-nothing parallel databases employ a set-oriented query interface for local data processing. Even though the idea of employing user-defined functions for parameterizing built-in operators such as selections and aggregations is well discussed in the database community, such a feature is still missing in most database systems. In addition, support for user-defined functions is incomplete in all commercial database systems we are familiar with. Additionally, not every operator is allowed to be parameterized.
For example, to the best of our knowledge, no commercial database system supports a user-defined join function, even though it is allowed in theory. Furthermore, no commercial database system supports user-defined composable operators.

3.2.2 Diversity in the System-Level Query Interface

As discussed previously, in addition to structured data stored in relational data management systems, companies also produce unstructured data stored in distributed file systems and other business data stored in NoSQL storage systems. These storage systems present fundamentally different data access methods, ranging from simple byte reading (file systems) and key-value retrieval (NoSQL stores) to relational queries (SQL databases). If one designs the data processing system to be general, and thus only assumes that the underlying storage system provides a simple byte-oriented or key-value retrieval interface, the resulting system may lose the chance to leverage the advanced query processing provided by SQL database systems. As a result, users may need to manually perform query processing at the application level. If one designs the data processing system to use an advanced query interface for data retrieval, it will be difficult for the system to retrieve data from file systems or NoSQL stores, as many advanced query features are missing in those systems. As mentioned previously, to process or analyze data stored in the cloud, a shared-nothing distributed data processing system is used. To protect the investment of businesses in building analytical data applications, the distributed data processing system should be able to process data stored inside a single storage system (called intra-system data processing) and within a set of storage systems (called inter-system data processing). The ability to support both intra- and inter-system query processing allows all business analytical data applications to be built under the same framework.
Thus, the knowledge and experience of tuning and programming the data processing systems can be preserved, captured, and shared among developers. Since both the data processi