java.io.IOException: Server returned HTTP response code: 411 for URL

Today I called an HTTP interface that takes POST submissions. It worked fine when called by IP in the test environment, but when called through the domain name in production it kept failing with the following error:

java.io.IOException: Server returned HTTP response code: 411 for URL

After searching on Baidu I found that adding the following two lines at the call site fixes it; I am writing it down here for the record:

   /* fix for 411 */
     httpConnection.setRequestProperty("Content-Length", "0");
     DataOutputStream os = new DataOutputStream(httpConnection.getOutputStream());
   /* fix for 411 */

Solution link: http://blog.csdn.net/pfyuit/article/details/8137777
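For context, here is a minimal sketch of where those two lines sit in a plain HttpURLConnection POST. The endpoint URL is a placeholder and the request body is assumed to be empty; adapt both to the actual interface.

    import java.io.DataOutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Scanner;

    public class Post411Fix {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint; substitute the real production URL.
            URL url = new URL("https://example.com/api/endpoint");
            HttpURLConnection httpConnection = (HttpURLConnection) url.openConnection();
            httpConnection.setRequestMethod("POST");
            httpConnection.setDoOutput(true); // must be set before getOutputStream()

            /* fix for 411: declare a Content-Length and open the output stream
               so the POST carries an explicit (empty) body */
            httpConnection.setRequestProperty("Content-Length", "0");
            DataOutputStream os = new DataOutputStream(httpConnection.getOutputStream());
            os.flush();
            os.close();

            System.out.println("HTTP status: " + httpConnection.getResponseCode());
            try (Scanner sc = new Scanner(httpConnection.getInputStream(), "UTF-8")) {
                while (sc.hasNextLine()) {
                    System.out.println(sc.nextLine());
                }
            }
            httpConnection.disconnect();
        }
    }

Presumably the server (or a proxy) behind the domain name insists on an explicit Content-Length, which is what 411 (Length Required) indicates; opening the output stream makes the request a real POST with a declared body length instead of one with no body at all.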

Posted: 2024-09-29 03:36:39

Related articles for java.io.IOException: Server returned HTTP response code: 411 for URL

On the java.io.IOException: Server returned HTTP response code: 400 for URL error and the string.getBytes() character set

400 Bad Request: the server could not understand the request because of malformed syntax. In general, this error can have many causes; the one described here is a 400 caused by character-set encoding. Key code: the JSON parameter was sent to the server with out.write(json.getBytes()), which uses the operating system's default character set and breaks when it differs from the character set the deployed server expects; change it to out.writeChars(json); or pass the server's charset to getBytes(), as in line 16 of the code below. 1 //create the connection 2 URL url = new URL(u);
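As a hedged illustration of that fix (the endpoint URL and JSON payload below are placeholders), naming the charset explicitly keeps the bytes on the wire independent of the OS default:

    import java.io.DataOutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class Post400Charset {
        public static void main(String[] args) throws Exception {
            String json = "{\"name\":\"test\"}";            // placeholder payload
            URL url = new URL("https://example.com/api");   // placeholder endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json; charset=UTF-8");

            try (DataOutputStream out = new DataOutputStream(conn.getOutputStream())) {
                // json.getBytes() alone would use the OS default charset;
                // spelling it out matches what the request header declares.
                out.write(json.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }

Note that DataOutputStream.writeChars(json) writes each character as two big-endian bytes (UTF-16), so whether that variant works depends on what the server decodes; naming the charset on getBytes() is usually the clearer choice.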

Java server side reports an error when fetching the 360 token: Server returned HTTP response code: 400 for URL


Exception: http://www.ly.com/news/visa.html: java.io.IOException: unzipBestEffort returned null

Runtime exception in nutch: http://www.ly.com/news/visa.html: java.io.IOException: unzipBestEffort returned null Reference: http://www.tuicool.com/articles/faUB73 This page is delivered with chunked transfer encoding, while the nutch crawler defaults to non-chunked handling, so the GZIP stream is assembled incorrectly and the subsequent GZIP decompression fails. Whether a response is chunked can be seen in the HTTP headers; if it is, they contain: transfe
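A quick way to confirm from the headers whether a page is served chunked, independent of nutch (the URL is the one from the post; any page can be substituted):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ChunkedCheck {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://www.ly.com/news/visa.html");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.connect();

            // A chunked response carries "Transfer-Encoding: chunked"
            // instead of a Content-Length header.
            System.out.println("Transfer-Encoding: " + conn.getHeaderField("Transfer-Encoding"));
            System.out.println("Content-Length: " + conn.getHeaderField("Content-Length"));
            conn.disconnect();
        }
    }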

Fixing the hiveserver2 error: java.io.IOException: Job status not available - Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

The SQL the user ran: select count( distinct patient_id ) from argus.table_aa000612_641cd8ce_ceff_4ea0_9b27_0a3a743f0fe3; Different tests follow: 1. beeline -u jdbc:hive2://0.0.0.0:10000 -e "select count( distinct patient_id ) from argus.table_aa000612_641cd8ce_ceff_4ea0_9b27_

Hadoop error: FATAL org.apache.hadoop.hdfs.server.namenode.NameNode Exception in namenode join java.io.IOException There appears to be a gap in the edit log

Error: FATAL org.apache.hadoop.hdfs.server.namenode.NameNode Exception in namenode join java.io.IOException There appears to be a gap in the edit log Cause: the namenode metadata has been corrupted and needs to be repaired. Fix: recover the namenode with hadoop namenode -recover, choose c at every prompt, and it is usually fine afterwards.

Caused by: java.io.EOFException: Can not read response from server.

1. Error description: The last packet successfully received from the server was 76,997 milliseconds ago. The last packet sent successfully to the server was 78,995 milliseconds ago. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.ref
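This is the typical MySQL Connector/J wording for a connection the server has already closed, for example after it sat idle longer than wait_timeout. As a hedged sketch (the JDBC URL and credentials are placeholders, and this is not necessarily the original article's fix), a long-idle connection can be validated before reuse:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class StaleConnectionCheck {
        private static final String JDBC_URL = "jdbc:mysql://localhost:3306/test"; // placeholder
        private static final String USER = "root";       // placeholder
        private static final String PASSWORD = "secret"; // placeholder

        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(JDBC_URL, USER, PASSWORD);

            // Before reusing a connection that may have sat idle past the server's
            // wait_timeout, check that it is still alive and reopen it if not.
            if (!conn.isValid(2)) {
                conn.close();
                conn = DriverManager.getConnection(JDBC_URL, USER, PASSWORD);
            }
            // ... run queries with conn ...
            conn.close();
        }
    }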

Error when hive executes a query statement: org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.io.IOException:

hive> select product_id, track_time from trackinfo limit 5; Total MapReduce jobs = 1 Launching Job 1 out of 1 Number of reduce tasks is set to 0 since there's no reduce operator org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.io.IOEx

Caused by: java.io.IOException: An established connection was aborted by the software in your host machine.

Exception details: 2017-07-16 10:55:26,218 ERROR [500.jsp] - java.io.IOException: An established connection was aborted by the software in your host machine. org.apache.catalina.connector.ClientAbortException: java.io.IOException: An established connection was aborted by the software in your host machine. at org.apache.catalina.connector.OutputBuffer.realWriteBytes(Output
