Caused by: java.net.ConnectException: Connection refused: master/192.168.3.129:7077
Published: 2019-06-12


1. Start the Spark shell. spark-shell is the interactive shell that ships with Spark; it is a convenient way to program interactively, letting you write Spark programs in Scala directly at the command line.

When the Spark shell was started, the following error appeared:

[root@master spark-1.6.1-bin-hadoop2.6]# bin/spark-shell --master spark://master:7077 --executor-memory 512M --total-executor-cores 2
18/02/22 01:42:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/22 01:42:10 INFO SecurityManager: Changing view acls to: root
18/02/22 01:42:10 INFO SecurityManager: Changing modify acls to: root
18/02/22 01:42:10 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/22 01:42:11 INFO HttpServer: Starting HTTP Server
18/02/22 01:42:11 INFO Utils: Successfully started service 'HTTP class server' on port 52961.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) Client VM, Java 1.7.0_65)
Type in expressions to have them evaluated.
Type :help for more information.
18/02/22 01:42:15 INFO SparkContext: Running Spark version 1.6.1
18/02/22 01:42:15 INFO SecurityManager: Changing view acls to: root
18/02/22 01:42:15 INFO SecurityManager: Changing modify acls to: root
18/02/22 01:42:15 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/22 01:42:15 INFO Utils: Successfully started service 'sparkDriver' on port 43566.
18/02/22 01:42:16 INFO Slf4jLogger: Slf4jLogger started
18/02/22 01:42:16 INFO Remoting: Starting remoting
18/02/22 01:42:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.3.129:43806]
18/02/22 01:42:16 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 43806.
18/02/22 01:42:16 INFO SparkEnv: Registering MapOutputTracker
18/02/22 01:42:16 INFO SparkEnv: Registering BlockManagerMaster
18/02/22 01:42:16 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-7face114-24b5-4f0e-adb6-8a104e387c78
18/02/22 01:42:16 INFO MemoryStore: MemoryStore started with capacity 517.4 MB
18/02/22 01:42:16 INFO SparkEnv: Registering OutputCommitCoordinator
18/02/22 01:42:16 INFO Utils: Successfully started service 'SparkUI' on port 4040.
18/02/22 01:42:16 INFO SparkUI: Started SparkUI at http://192.168.3.129:4040
18/02/22 01:42:17 INFO AppClient$ClientEndpoint: Connecting to master spark://master:7077...
18/02/22 01:42:17 WARN AppClient$ClientEndpoint: Failed to connect to master master:7077
java.io.IOException: Failed to connect to master/192.168.3.129:7077
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:216)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:167)
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:200)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:183)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: master/192.168.3.129:7077
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    ... 1 more
(the identical "Failed to connect to master master:7077" warning and stack trace are printed again for each retry at 01:42:37 and 01:42:57)
18/02/22 01:43:17 INFO AppClient$ClientEndpoint: Connecting to master spark://master:7077...
18/02/22 01:43:17 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
18/02/22 01:43:17 INFO AppClient$ClientEndpoint: Connecting to master spark://master:7077...
18/02/22 01:43:17 WARN SparkDeploySchedulerBackend: Application ID is not initialized yet.
18/02/22 01:43:17 WARN AppClient$ClientEndpoint: Failed to connect to master master:7077
(same java.io.IOException / java.net.ConnectException stack trace as above)
18/02/22 01:43:17 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50513.
18/02/22 01:43:17 INFO NettyBlockTransferService: Server created on 50513
18/02/22 01:43:17 INFO BlockManagerMaster: Trying to register BlockManager
18/02/22 01:43:17 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.3.129:50513 with 517.4 MB RAM, BlockManagerId(driver, 192.168.3.129, 50513)
18/02/22 01:43:17 INFO BlockManagerMaster: Registered BlockManager
18/02/22 01:43:17 INFO SparkUI: Stopped Spark web UI at http://192.168.3.129:4040
18/02/22 01:43:17 INFO SparkDeploySchedulerBackend: Shutting down all executors
18/02/22 01:43:17 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
18/02/22 01:43:17 WARN AppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
18/02/22 01:43:17 ERROR MapOutputTrackerMaster: Error communicating with MapOutputTracker
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1038)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
    at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:110)
    at org.apache.spark.MapOutputTracker.sendTracker(MapOutputTracker.scala:120)
    at org.apache.spark.MapOutputTrackerMaster.stop(MapOutputTracker.scala:462)
    at org.apache.spark.SparkEnv.stop(SparkEnv.scala:93)
    at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1755)
    at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:127)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
    at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
18/02/22 01:43:17 ERROR Utils: Uncaught exception in thread appclient-registration-retry-thread
org.apache.spark.SparkException: Error communicating with MapOutputTracker
    at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:114)
    at org.apache.spark.MapOutputTracker.sendTracker(MapOutputTracker.scala:120)
    at org.apache.spark.MapOutputTrackerMaster.stop(MapOutputTracker.scala:462)
    at org.apache.spark.SparkEnv.stop(SparkEnv.scala:93)
    at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1755)
    at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:127)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
    at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1038)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
    at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:110)
    ... 18 more
18/02/22 01:43:17 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/02/22 01:43:17 INFO SparkContext: Successfully stopped SparkContext
18/02/22 01:43:17 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[appclient-registration-retry-thread,5,main]
org.apache.spark.SparkException: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
    at org.apache.spark.scheduler.TaskSchedulerImpl.error(TaskSchedulerImpl.scala:438)
    at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:124)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint.markDead(AppClient.scala:264)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2$$anonfun$run$1.apply$mcV$sp(AppClient.scala:134)
    at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163)
    at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anon$2.run(AppClient.scala:129)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
18/02/22 01:43:17 INFO DiskBlockManager: Shutdown hook called
18/02/22 01:43:17 INFO ShutdownHookManager: Shutdown hook called
18/02/22 01:43:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-bf09944d-9867-4256-89c6-e8b415c9c315/userFiles-12e582c1-0438-490f-a8a2-64264d764463
18/02/22 01:43:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-7c648867-90b8-4d3c-af09-b1f3d16d1b30
18/02/22 01:43:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-bf09944d-9867-4256-89c6-e8b415c9c315

2. Solution: the Spark standalone cluster must be running before you start the Spark shell. Start the cluster first, then launch the shell:

On the master node, start the Spark cluster as follows:

[root@master spark-1.6.1-bin-hadoop2.6]# sbin/start-all.sh
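Before retrying the Spark shell, it is worth confirming that the master daemon really is up and listening on port 7077. A quick sanity check on the master node (jps ships with the JDK; the netstat options shown are common on CentOS and may differ on other systems):

[root@master spark-1.6.1-bin-hadoop2.6]# jps                        # should list a Master process (and Worker on the worker nodes)
[root@master spark-1.6.1-bin-hadoop2.6]# netstat -tlnp | grep 7077  # the Master should be listening on port 7077

If no Master process shows up, check the master log under logs/ in the Spark installation directory for the reason.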

Then start the Spark shell again; the error above is gone:

[root@master spark-1.6.1-bin-hadoop2.6]# bin/spark-shell --master spark://master:7077 --executor-memory 512M --total-executor-cores 2

Key lines to note in the startup output (lines 33, 47, and 119 of the original numbered listing): the shell now connects to the cluster ("Connected to Spark cluster with app ID ..."), the Spark context is created ("Spark context available as sc."), and at the end the SQL context is created ("SQL context available as sqlContext.").

[root@master spark-1.6.1-bin-hadoop2.6]# bin/spark-shell --master spark://master:7077 --executor-memory 512M --total-executor-cores 2
18/02/22 01:51:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/22 01:51:00 INFO SecurityManager: Changing view acls to: root
18/02/22 01:51:00 INFO SecurityManager: Changing modify acls to: root
18/02/22 01:51:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/22 01:51:00 INFO HttpServer: Starting HTTP Server
18/02/22 01:51:00 INFO Utils: Successfully started service 'HTTP class server' on port 58729.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) Client VM, Java 1.7.0_65)
Type in expressions to have them evaluated.
Type :help for more information.
18/02/22 01:51:06 INFO SparkContext: Running Spark version 1.6.1
18/02/22 01:51:06 INFO SecurityManager: Changing view acls to: root
18/02/22 01:51:06 INFO SecurityManager: Changing modify acls to: root
18/02/22 01:51:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/22 01:51:06 INFO Utils: Successfully started service 'sparkDriver' on port 45298.
18/02/22 01:51:06 INFO Slf4jLogger: Slf4jLogger started
18/02/22 01:51:06 INFO Remoting: Starting remoting
18/02/22 01:51:07 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.3.129:36868]
18/02/22 01:51:07 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 36868.
18/02/22 01:51:07 INFO SparkEnv: Registering MapOutputTracker
18/02/22 01:51:07 INFO SparkEnv: Registering BlockManagerMaster
18/02/22 01:51:07 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-3b5e312e-7f6b-491d-8539-4a5f38d3839a
18/02/22 01:51:07 INFO MemoryStore: MemoryStore started with capacity 517.4 MB
18/02/22 01:51:07 INFO SparkEnv: Registering OutputCommitCoordinator
18/02/22 01:51:07 INFO Utils: Successfully started service 'SparkUI' on port 4040.
18/02/22 01:51:07 INFO SparkUI: Started SparkUI at http://192.168.3.129:4040
18/02/22 01:51:07 INFO AppClient$ClientEndpoint: Connecting to master spark://master:7077...
18/02/22 01:51:08 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20180222015108-0000
18/02/22 01:51:08 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46282.
18/02/22 01:51:08 INFO NettyBlockTransferService: Server created on 46282
18/02/22 01:51:08 INFO BlockManagerMaster: Trying to register BlockManager
18/02/22 01:51:08 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.3.129:46282 with 517.4 MB RAM, BlockManagerId(driver, 192.168.3.129, 46282)
18/02/22 01:51:08 INFO BlockManagerMaster: Registered BlockManager
18/02/22 01:51:08 INFO AppClient$ClientEndpoint: Executor added: app-20180222015108-0000/0 on worker-20180222174932-192.168.3.130-39616 (192.168.3.130:39616) with 1 cores
18/02/22 01:51:08 INFO SparkDeploySchedulerBackend: Granted executor ID app-20180222015108-0000/0 on hostPort 192.168.3.130:39616 with 1 cores, 512.0 MB RAM
18/02/22 01:51:08 INFO AppClient$ClientEndpoint: Executor added: app-20180222015108-0000/1 on worker-20180222174932-192.168.3.131-58163 (192.168.3.131:58163) with 1 cores
18/02/22 01:51:08 INFO SparkDeploySchedulerBackend: Granted executor ID app-20180222015108-0000/1 on hostPort 192.168.3.131:58163 with 1 cores, 512.0 MB RAM
18/02/22 01:51:09 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
18/02/22 01:51:09 INFO SparkILoop: Created spark context..
Spark context available as sc.
18/02/22 01:51:10 INFO AppClient$ClientEndpoint: Executor updated: app-20180222015108-0000/1 is now RUNNING
18/02/22 01:51:10 INFO AppClient$ClientEndpoint: Executor updated: app-20180222015108-0000/0 is now RUNNING
18/02/22 01:51:16 INFO HiveContext: Initializing execution hive, version 1.2.1
18/02/22 01:51:17 INFO ClientWrapper: Inspected Hadoop version: 2.6.0
18/02/22 01:51:17 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
18/02/22 01:51:21 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/02/22 01:51:22 INFO ObjectStore: ObjectStore, initialize called
18/02/22 01:51:26 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
18/02/22 01:51:26 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/02/22 01:51:26 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/02/22 01:51:30 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/02/22 01:51:35 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
18/02/22 01:51:38 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/02/22 01:51:38 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/02/22 01:51:39 INFO SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (slaver2:55056) with ID 1
18/02/22 01:51:39 INFO BlockManagerMasterEndpoint: Registering block manager slaver2:57607 with 146.2 MB RAM, BlockManagerId(1, slaver2, 57607)
18/02/22 01:51:39 INFO SparkDeploySchedulerBackend: Registered executor NettyRpcEndpointRef(null) (slaver1:47165) with ID 0
18/02/22 01:51:40 INFO BlockManagerMasterEndpoint: Registering block manager slaver1:38278 with 146.2 MB RAM, BlockManagerId(0, slaver1, 38278)
18/02/22 01:51:40 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/02/22 01:51:40 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/02/22 01:51:40 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
18/02/22 01:51:40 INFO ObjectStore: Initialized ObjectStore
18/02/22 01:51:41 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
18/02/22 01:51:42 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
Java HotSpot(TM) Client VM warning: You have loaded library /tmp/libnetty-transport-native-epoll870809507217922299.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
18/02/22 01:51:44 INFO HiveMetaStore: Added admin role in metastore
18/02/22 01:51:44 INFO HiveMetaStore: Added public role in metastore
18/02/22 01:51:44 INFO HiveMetaStore: No user is added in admin role, since config is empty
18/02/22 01:51:45 INFO HiveMetaStore: 0: get_all_databases
18/02/22 01:51:45 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_all_databases
18/02/22 01:51:45 INFO HiveMetaStore: 0: get_functions: db=default pat=*
18/02/22 01:51:45 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_functions: db=default pat=*
18/02/22 01:51:45 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
18/02/22 01:51:46 INFO SessionState: Created HDFS directory: /tmp/hive/root
18/02/22 01:51:46 INFO SessionState: Created local directory: /tmp/root
18/02/22 01:51:46 INFO SessionState: Created local directory: /tmp/afacd186-3b65-4cf9-a9b3-dad36055ed80_resources
18/02/22 01:51:46 INFO SessionState: Created HDFS directory: /tmp/hive/root/afacd186-3b65-4cf9-a9b3-dad36055ed80
18/02/22 01:51:46 INFO SessionState: Created local directory: /tmp/root/afacd186-3b65-4cf9-a9b3-dad36055ed80
18/02/22 01:51:46 INFO SessionState: Created HDFS directory: /tmp/hive/root/afacd186-3b65-4cf9-a9b3-dad36055ed80/_tmp_space.db
18/02/22 01:51:46 INFO HiveContext: default warehouse location is /user/hive/warehouse
18/02/22 01:51:46 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
18/02/22 01:51:46 INFO ClientWrapper: Inspected Hadoop version: 2.6.0
18/02/22 01:51:46 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
18/02/22 01:51:47 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/02/22 01:51:47 INFO ObjectStore: ObjectStore, initialize called
18/02/22 01:51:47 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
18/02/22 01:51:47 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/02/22 01:51:47 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/02/22 01:51:48 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/02/22 01:51:49 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
18/02/22 01:51:51 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/02/22 01:51:51 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/02/22 01:51:51 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
18/02/22 01:51:51 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
18/02/22 01:51:51 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
18/02/22 01:51:51 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
18/02/22 01:51:51 INFO ObjectStore: Initialized ObjectStore
18/02/22 01:51:51 INFO HiveMetaStore: Added admin role in metastore
18/02/22 01:51:51 INFO HiveMetaStore: Added public role in metastore
18/02/22 01:51:51 INFO HiveMetaStore: No user is added in admin role, since config is empty
18/02/22 01:51:52 INFO HiveMetaStore: 0: get_all_databases
18/02/22 01:51:52 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_all_databases
18/02/22 01:51:52 INFO HiveMetaStore: 0: get_functions: db=default pat=*
18/02/22 01:51:52 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_functions: db=default pat=*
18/02/22 01:51:52 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
18/02/22 01:51:52 INFO SessionState: Created local directory: /tmp/e209230b-e230-4688-9b83-b04d182b952d_resources
18/02/22 01:51:52 INFO SessionState: Created HDFS directory: /tmp/hive/root/e209230b-e230-4688-9b83-b04d182b952d
18/02/22 01:51:52 INFO SessionState: Created local directory: /tmp/root/e209230b-e230-4688-9b83-b04d182b952d
18/02/22 01:51:52 INFO SessionState: Created HDFS directory: /tmp/hive/root/e209230b-e230-4688-9b83-b04d182b952d/_tmp_space.db
18/02/22 01:51:52 INFO SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala>
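Once the scala> prompt appears and the log shows "Spark context available as sc.", running a tiny job is a good way to confirm that the shell, the master, and the workers are really talking to each other. A minimal sketch (the README.md path is just an example; any file on HDFS, or a local path that exists on every node, will do):

scala> val lines = sc.textFile("file:///root/spark-1.6.1-bin-hadoop2.6/README.md")  // build an RDD from a text file (example path)
scala> lines.count()                                   // runs a job on the executors and returns the number of lines
scala> lines.filter(_.contains("Spark")).count()       // same, but counting only lines that mention "Spark"

If these calls return numbers without errors, the cluster connection is working end to end.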

 

Reposted from: https://www.cnblogs.com/biehongli/p/8459702.html
