java.net.ConnectException: Call From slaver1/192.168.19.128 to slaver1:8020 failed on connection exception
Date: 2022-05-06
1: While practicing Spark, I read a file from HDFS. Because Spark evaluates lazily, nothing failed at first; the error below only surfaced when I ran an action to view the contents. The error is minor, but worth recording, because it was caused by unfamiliarity with the command syntax:
scala> text.collect
java.net.ConnectException: Call From slaver1/192.168.19.128 to slaver1:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy36.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy37.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
    at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
    at $iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
    at $iwC$$iwC$$iwC.<init>(<console>:43)
    at $iwC$$iwC.<init>(<console>:45)
    at $iwC.<init>(<console>:47)
    at <init>(<console>:49)
    at .<init>(<console>:53)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 84 more
2: The cause of the error:
I had read the HDFS file with scala> var text = sc.textFile("hdfs://slaver1:/input.txt"); and then ran text.collect to view the contents; it was this action that raised the error above. The cause is that the URI is missing the NameNode's port number. With no port given, the Hadoop client falls back to its default NameNode RPC port, 8020 (which is why the stack trace says "slaver1:8020" even though no port was typed), but this cluster's NameNode listens on 9000, so the connection is refused.
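The authoritative value for the port is the fs.defaultFS property in the cluster's core-site.xml (the property name is standard Hadoop; the value below is this cluster's, where the NameNode RPC address uses 9000 instead of the 8020 default):

```xml
<!-- core-site.xml -->
<property>
  <!-- NameNode RPC endpoint; explicit URIs like hdfs://slaver1:9000/... must match it -->
  <name>fs.defaultFS</name>
  <value>hdfs://slaver1:9000</value>
</property>
```

Since fs.defaultFS is set, a scheme-less path such as sc.textFile("/input.txt") should also resolve against it, sidestepping the port question entirely.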
Changing the command to include the port fixes it:
scala> var text = sc.textFile("hdfs://slaver1:9000/input.txt");
scala> text.collect
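The port-defaulting behavior can be seen with plain java.net.URI, no Hadoop needed; this sketch (hostname and paths taken from the failing command above) shows that the stray colon in hdfs://slaver1:/input.txt parses as "no port at all", which is what pushes the HDFS client onto its 8020 default:

```scala
import java.net.URI

// The URI from the failing command: a colon after the host, but no port.
val bad = new URI("hdfs://slaver1:/input.txt")
// java.net.URI reports an undefined port as -1; the HDFS client then
// substitutes its default NameNode RPC port (8020), which nothing on
// this cluster is listening on -- hence "Connection refused".
println(bad.getPort)  // -1

// The corrected URI names the port the NameNode actually uses.
val good = new URI("hdfs://slaver1:9000/input.txt")
println(good.getPort) // 9000
println(good.getHost) // slaver1
```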