
[Spark] Integrating Spark Streaming with Flume and Kafka

1. Integrating Spark Streaming with Flume

Flume is a framework for real-time log collection and can be connected to the Spark Streaming real-time processing framework: Flume produces data continuously, and Spark Streaming processes it in real time.

There are two ways for Spark Streaming to connect to Flume NG:
one is for Flume NG to push messages to Spark Streaming;
the other is for Spark Streaming to poll (pull) data from Flume. The rest of this post uses the poll mode; a minimal push-mode sketch follows below for comparison.
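The following is only an illustrative sketch of the push mode, not part of the original walkthrough. The hostname node1, port 9999, and the object name are placeholders; in push mode, Flume would need an avro sink pointing at the machine and port where this receiver runs.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.ReceiverInputDStream
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
import org.apache.spark.streaming.{Seconds, StreamingContext}

//Sketch only: push mode, where Flume's avro sink pushes events to this receiver
object SparkStreamingFlumePush {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparkStreamingFlumePush").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))
    //createStream registers a receiver on node1:9999 (placeholder address) that Flume pushes to
    val pushStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createStream(ssc, "node1", 9999)
    //Extract the event body as a string and print each batch
    pushStream.map(e => new String(e.event.getBody.array())).print()
    ssc.start()
    ssc.awaitTermination()
  }
}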

Poll mode

(1) Install Flume 1.6 or later

(2) Download the dependency jar

Place spark-streaming-flume-sink_2.11-2.0.2.jar into Flume's lib directory.

(3) Update the Scala library version in flume/lib

From the jars folder under the Spark installation directory, take scala-library-2.11.8.jar and use it to replace the scala-library-2.10.1.jar that ships in Flume's lib directory.

(4) Write the Flume agent. Note that since data is pulled, Flume only needs to write data to the machine it runs on; Spark Streaming will pull it from there.

(5) Write the flume-poll-spark.conf configuration file

a1.sources = r1
a1.sinks = k1
a1.channels = c1
#source
a1.sources.r1.channels = c1
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/data
a1.sources.r1.fileHeader = true
#channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 5000
#sinks
a1.sinks.k1.channel = c1
a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname = node1
a1.sinks.k1.port = 8888
a1.sinks.k1.batchSize = 2000

Start Flume:

bin/flume-ng agent -n a1 -c conf -f conf/flume-poll-spark.conf -Dflume.root.logger=INFO,console

Code implementation:

Add the dependency:

<!-- Spark Streaming + Flume integration -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>2.0.2</version>
</dependency>

Code:

import java.net.InetSocketAddress
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

//todo: integrate Spark Streaming with Flume in poll (pull) mode
//Startup order for poll mode: start Flume first, then the Spark Streaming program
object SparkStreamingFlumePoll {

  def main(args: Array[String]): Unit = {

    //1. Create SparkConf
    val sparkConf: SparkConf = new SparkConf().setAppName("SparkStreamingFlumePoll").setMaster("local[2]")
    //2. Create SparkContext
    val sc = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")
    //3. Create StreamingContext with a 5-second batch interval
    val ssc = new StreamingContext(sc, Seconds(5))
    //4. Read data from Flume (single agent: the host and port of the SparkSink)
    val pollingStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createPollingStream(ssc, "192.168.200.100", 8888)
    //A List of addresses can be used to poll data collected by several Flume agents
    val address = List(new InetSocketAddress("node1", 8888), new InetSocketAddress("node2", 8888), new InetSocketAddress("node3", 8888))
    //val pollingStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createPollingStream(ssc, address, StorageLevel.MEMORY_AND_DISK_SER_2)
    //5. An event is the smallest unit of data Flume transfers; its structure is {"headers":"xxxxx","body":"xxxxxxx"}
    val flume_data: DStream[String] = pollingStream.map(x => new String(x.event.getBody.array()))

    //6. Print each batch and start the streaming computation
    flume_data.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
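As an illustrative follow-up (not part of the original program), a per-batch word count over flume_data could replace the plain print() above. The sketch continues inside main() from the code block and assumes event bodies are space-separated text:

    //Hypothetical example: count words in each 5-second batch
    val wordCounts: DStream[(String, Int)] = flume_data
      .flatMap(_.split(" "))   //split each event body into words
      .map(word => (word, 1))  //pair each word with a count of 1
      .reduceByKey(_ + _)      //sum the counts within the batch
    wordCounts.print()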