
Unable to deliver event. Exception follows. — a configuration error (by cxy好好先生)


Today I was integrating an HBase-Kafka-Flume cluster: three nodes, three Kafka brokers, with the Flume agents on nodes 2 and 3 collecting data and forwarding it to the Flume agent on node 1, which then writes into both HBase and Kafka. After starting the collector agent on node 1, the moment I started the agents on nodes 2 and 3 they threw the error below.
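As a sketch of the topology, the Avro sink on nodes 2 and 3 points at the Avro source on node 1. The property names below follow Flume's standard Avro sink configuration, but the agent name, sink name, and channel name are illustrative assumptions — only the hostname hadoop01 and port 1234 come from the error log further down:

```properties
# On nodes 2 and 3: forward collected events to the node-1 agent over Avro RPC
# ("agent", "avroSink", and "fileChannel" are placeholder names)
agent.sinks.avroSink.type = avro
agent.sinks.avroSink.channel = fileChannel
agent.sinks.avroSink.hostname = hadoop01
agent.sinks.avroSink.port = 1234
```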

Unable to deliver event. Exception follows.

[ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:158)] Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: Failed to send events
    at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:389)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.flume.FlumeException: NettyAvroRpcClient { host: hadoop01, port: 1234 }: RPC connection error
    at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:181)
    at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:120)
    at org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:638)
    at org.apache.flume.api.RpcClientFactory.getInstance(RpcClientFactory.java:90)
    at org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:127)
    at org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:210)
    at org.apache.flume.sink.AbstractRpcSink.verifyConnection(AbstractRpcSink.java:270)
    at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:346)
    ... 3 more
Caused by: java.io.IOException: Error connecting to hadoop01/192.168.74.146:1234
    at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:261)
    at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:203)
    at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:152)
    at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:169)
    ... 10 more
Caused by: java.net.ConnectException: Connection refused: hadoop01/192.168.74.146:1234
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
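A Connection refused at the bottom of the trace simply means nothing was listening on hadoop01:1234 when the Avro sink tried to connect. Before digging into Flume itself, it can save time to probe reachability directly; here is a minimal sketch in Python (the host and port are whatever your Avro source is configured with):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, host unreachable, ...
        return False

# e.g. port_open("hadoop01", 1234) — if this is False, the node-1
# Avro source never actually bound its port, and the sinks on
# nodes 2 and 3 have nothing to connect to.
```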

(I actually ran into this kind of problem once before, when starting zkCli.sh under ZooKeeper, and I couldn't tell where things had gone wrong — it was a cluster-configuration problem. At the time I changed zoo.cfg on all three ZooKeeper machines to single-node standalone mode so I could test with zkCli.sh, and that finally showed the three-node setup was at fault; but no matter how I stared at each node's configuration, nothing looked wrong. I wasted a lot of time on it yesterday and got nowhere.

Today I went to the library planning to have another go at it — and it simply stopped erroring. I was floored. So the lesson: when something fails, don't rush straight into reading logs and changing configs; try a restart first, because an unclean shutdown may have left ports occupied or caused some problem you simply cannot see.


Update

This part is something I discovered this morning.

About the zkCli.sh startup error above: it failed because ZKFC (the ZooKeeper Failover Controller) was not running. ZKFC is launched as part of start-dfs.sh, and if you start HDFS before ZooKeeper, ZKFC gets killed because there is no ZooKeeper ensemble for it to coordinate with. So start ZooKeeper first, then HDFS — only then will zkCli.sh work.
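The startup order above can be sketched roughly like this — a sketch only, assuming zkServer.sh, start-dfs.sh, and zkCli.sh are on each node's PATH, passwordless ssh is set up, and the hostnames hadoop01/02/03 match this cluster; adjust for yours:

```shell
# 1. Start ZooKeeper on every node FIRST, so a quorum exists
for host in hadoop01 hadoop02 hadoop03; do
    ssh "$host" "zkServer.sh start"
done

# 2. Only then start HDFS; start-dfs.sh also launches ZKFC,
#    which dies immediately if no ZooKeeper quorum is up yet
start-dfs.sh

# 3. Now zkCli.sh can connect
zkCli.sh -server hadoop01:2181
```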


So I immediately rebooted my three VMs (after shutting down every service in the cluster first — shut them down properly, don't kill them forcibly, which easily causes exactly this kind of trouble). But I wasn't so lucky this time: the error was still there. Fine, time to look at the configuration.)

Connection refused?!

What? Connection refused? That couldn't be right — I had configured everything. I went back over the configurations on nodes 2 and 3 several times and found nothing wrong. And since node 1 had started successfully, I never thought to look at node 1's conf.

Only after restarting Flume over and over did I realize that node 1 was actually the one at fault — it was logging the error below.

[ERROR - org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:361)] Source execSource has been removed due to an error during configuration
java.lang.IllegalArgumentException: Required parameter bind must exist and may not be null
    at org.apache.flume.conf.Configurables.ensureRequiredNonNull(Configurables.java:61)
    at org.apache.flume.source.AvroSource.configure(AvroSource.java:169)
    at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
    at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:326)
    at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:101)
    at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:141)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)

It turned out the real error was much further up in the log. Flume prints a huge amount of output at startup — every jar it loads, every line of conf it validates — so the log gets very long. On the last restart I spotted it: after fixing enough bugs, a stack trace of that shape jumps out at you. I had to scroll up a long way to find it. Everything else was correct — all of it — except this one spot, and yet the agent still managed to come up. No wonder it was so hard to find.

agent.sources.execSource.type = avro
agent.sources.execSource.channels = kafkaChannel hbaseChannel
agent.sources.execSource.command = bind = 0.0.0.0
agent.sources.execSource.port = 1234
agent.sources.execSource.selector.type = replicating

I went straight to the conf file in question and found the mistake.

Originally it read:

agent.sources.execSource.command = bind = 0.0.0.0

When I edited this line, I forgot to remove the old command prefix, so the two properties got mashed into one: the line defined a command property whose value was "bind = 0.0.0.0", and the bind property the Avro source requires was never set at all.

I changed it to:

agent.sources.execSource.bind = 0.0.0.0
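For reference, the full Avro source section after the fix would look something like this — the type, channel names, and port are taken from the snippet above; the sink and channel definitions for the rest of the agent are omitted:

```properties
agent.sources.execSource.type = avro
agent.sources.execSource.channels = kafkaChannel hbaseChannel
agent.sources.execSource.bind = 0.0.0.0
agent.sources.execSource.port = 1234
agent.sources.execSource.selector.type = replicating
```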

OK — it started successfully. With Flume up, I appended data to the target file, and sure enough the rows appeared in HBase and the Kafka consumer consumed the messages as well.

The HBase-Kafka-Flume cluster integration was a success.


