褚福銀+張林+何坤鵬
Abstract: With the development of Internet technology, the volume of data generated by humans is growing exponentially, and Hadoop, as a common tool in the big data field, plays a vital role in modern life. Hive is a data warehouse tool built on Hadoop; its queries and statistical analyses are ultimately translated into MapReduce programs that run on the Hadoop platform, so query efficiency[5] degrades as data volume grows. This paper proposes a scheme combining Hive with Spark: Hive queries are submitted as Spark jobs to a Spark cluster for computation, exploiting Spark's characteristics to improve Hive query performance. The paper first explains the working mechanisms of Hive and Spark, then introduces the principle of Hive_Spark, and finally compares and analyzes experimental results to verify that Hive_Spark improves query efficiency, providing a useful reference for large-scale data processing.
Keywords: Hadoop; Hive; Spark; query; massive data
CLC number: TP31    Document code: A    Article ID: 1009-3044(2016)21-0003-03
1 Introduction
With the arrival of the big data era, the rapid growth of data volume and the urgent demand for real-time data queries make it difficult for traditional data warehouse engines to meet enterprises' needs for big data storage and analysis. Hadoop[3-4], an open-source framework, has begun to replace traditional data warehouses[8] thanks to its low cost, scalability, and high fault tolerance; its MapReduce programming model can effectively partition massive data and allocate the work sensibly. Hive is a data warehouse tool built on Hadoop that provides a SQL-like query interface, but because Hive's[13] execution engine compiles SQL into a series of MapReduce jobs, its performance cost is high. This paper proposes a Hive_Spark query mode: Spark performs iterative computation in memory, and this characteristic is exploited to improve Hive query performance[12].
2 Hive
2.1 Hive System Architecture
Hive is a data warehouse infrastructure built on Hadoop[11]. It provides a series of tools for extract-transform-load (ETL) and offers a mechanism for storing, querying, and analyzing large-scale data stored in Hadoop. Hive defines a simple SQL-like query language called HQL, which allows users familiar with SQL to query data, while also letting developers familiar with MapReduce plug in custom mappers and reducers to perform complex analyses that the built-in mappers and reducers cannot handle. Hive is essentially a SQL parsing engine: it translates SQL statements into MapReduce jobs and executes them on Hadoop.
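To make this translation concrete, the following toy Python sketch (not Hive's actual implementation; the table and column names are invented for illustration) shows how an HQL aggregation such as `SELECT word, COUNT(*) FROM logs GROUP BY word` conceptually decomposes into the map, shuffle, and reduce phases that Hive's generated MapReduce job performs:

```python
from collections import defaultdict

def map_phase(rows):
    """Like the generated mapper: emit a (group-key, 1) pair per input row."""
    for row in rows:
        yield (row["word"], 1)

def shuffle(pairs):
    """Like the MapReduce framework: group emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Like the generated reducer for COUNT(*): aggregate each group."""
    return {key: sum(values) for key, values in groups.items()}

rows = [{"word": "spark"}, {"word": "hive"}, {"word": "spark"}]
result = reduce_phase(shuffle(map_phase(rows)))
print(result)  # {'spark': 2, 'hive': 1}
```

Each phase here runs in one process, whereas Hadoop distributes the map and reduce phases across cluster nodes; the data flow, however, is the same.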
1) There are three main user interfaces: CLI, Client, and WUI. The most commonly used is the CLI; starting the CLI also starts a Hive instance. Client is Hive's client mode, in which the user connects to a Hive Server; when starting in Client mode, the node on which Hive Server runs must be specified, and Hive Server must be started on that node. WUI accesses Hive through a browser.
2) Hive stores its metadata in a relational database such as MySQL or Derby. Hive metadata includes table names, table columns, partitions and their attributes, table attributes (e.g., whether a table is external), and the directory in which each table's data resides[10].
3) The interpreter, compiler, and optimizer carry out lexical analysis, syntax analysis, compilation, optimization, and query plan generation for an HQL statement. The generated query plan is stored in HDFS and subsequently executed by MapReduce calls.
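The stages above can be sketched as a pipeline of functions. The following is a purely illustrative Python toy (not Hive's real compiler; the tokenizer, "AST", and job-counting rule are all drastically simplified assumptions) showing how each stage's output feeds the next until a plan is handed off for execution:

```python
def lexical_analysis(hql):
    # Lexical analysis: split the statement into tokens.
    return hql.replace(",", " , ").split()

def syntax_analysis(tokens):
    # Syntax analysis: build a minimal "AST" recording the clause keywords found.
    keywords = {"SELECT", "FROM", "WHERE", "GROUP", "BY"}
    return {"clauses": [t for t in tokens if t.upper() in keywords]}

def compile_and_optimize(ast):
    # Compilation/optimization: emit a simplified plan. A real optimizer decides
    # how many MapReduce jobs are needed; here we just require at least one.
    n_jobs = max(1, ast["clauses"].count("GROUP"))
    return {"mapreduce_jobs": n_jobs}

def execute(plan):
    # In real Hive the stored plan is handed to Hadoop; here we only report it.
    return f"submitting {plan['mapreduce_jobs']} MapReduce job(s)"

hql = "SELECT word, COUNT(*) FROM logs GROUP BY word"
print(execute(compile_and_optimize(syntax_analysis(lexical_analysis(hql)))))
# submitting 1 MapReduce job(s)
```

The point of the sketch is the staged structure, one stage's output as the next stage's input, which is exactly the shape of the HQL-to-MapReduce path described above.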