Spark SQL Error Summary

Error 1:

After starting spark-shell and querying a table in Hive, an error is thrown:

$SPARK_HOME/bin/spark-shell
spark.sql("select * from student.student ").show()
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
    at org.apache.hadoop.hive.metastore.RetryingMetaSto

    Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException:
    The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the
    CLASSPATH. Please check your CLASSPATH specification,
    and the name of the driver.

Cause:

Spark could not connect to the MySQL database that holds the Hive metastore, because Spark does not have the mysql-connector jar, i.e. the MySQL JDBC driver, on its classpath.
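
You can also confirm the missing driver from inside spark-shell by trying to load the class by hand (a plain JVM check, nothing Spark-specific; the class name is the one from the error above):

scala> Class.forName("com.mysql.jdbc.Driver")

If the jar is absent this throws java.lang.ClassNotFoundException; once the driver is on the classpath it returns the Class object instead.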

Solution:

Some people will say: just cp the jar into $SPARK_HOME/jars. Sorry, but that is absolutely not acceptable in production, because not every Spark program needs the MySQL driver. Instead, specify --jars when submitting the job, separating multiple jars with commas. (Note: my MySQL version is 5.1.73.)

[hadoop@hadoop003 spark]$ spark-shell --jars ~/softwares/mysql-connector-java-5.1.47.jar
19/05/21 08:02:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://hadoop003:4040
Spark context available as 'sc' (master = local[*], app id = local-1558440185051).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  ‘_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.2
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_131)
Type in expressions to have them evaluated.
Type :help for more information.

scala>  spark.sql("select * from student.student ").show()
19/05/21 08:04:42 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/home/hadoop/app/spark-2.4.2-bin-hadoop-2.6.0-cdh5.7.0/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/home/hadoop/app/spark/jars/datanucleus-core-3.2.10.jar."
19/05/21 08:04:42 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/home/hadoop/app/spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/home/hadoop/app/spark-2.4.2-bin-hadoop-2.6.0-cdh5.7.0/jars/datanucleus-api-jdo-3.2.6.jar."
19/05/21 08:04:42 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/home/hadoop/app/spark-2.4.2-bin-hadoop-2.6.0-cdh5.7.0/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/home/hadoop/app/spark/jars/datanucleus-rdbms-3.2.9.jar."
19/05/21 08:04:45 ERROR metastore.ObjectStore: Version information found in metastore differs 1.1.0 from expected schema version 1.2.0. Schema verififcation is disabled hive.metastore.schema.verification so setting version.
19/05/21 08:04:46 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
+------+--------+--------------+--------------------+
|stu_id|stu_name| stu_phone_num|           stu_email|
+------+--------+--------------+--------------------+
|     1|   Burke|1-300-746-8446|ullamcorper.velit...|
|     2|   Kamal|1-668-571-5046|[email protected]|
|     3|    Olga|1-956-311-1686|Aenean.eget.metus...|
|     4|   Belle|1-246-894-6340|vitae.aliquet.nec...|
|     5|  Trevor|1-300-527-4967|[email protected]|
|     6|  Laurel|1-691-379-9921|[email protected]|
|     7|    Sara|1-608-140-1995|[email protected]|
|     8|  Kaseem|1-881-586-2689|[email protected]|
|     9|     Lev|1-916-367-5608|[email protected]|
|    10|    Maya|1-271-683-2698|accumsan.convalli...|
|    11|     Emi|1-467-270-1337|        [email protected]|
|    12|   Caleb|1-683-212-0896|[email protected]|
|    13|Florence|1-603-575-2444|[email protected]|
|    14|   Anika|1-856-828-7883|[email protected]|
|    15|   Tarik|1-398-171-2268|[email protected]|
|    16|   Amena|1-878-250-3129|[email protected]|
|    17| Blossom|1-154-406-9596|Nunc.commodo.auct...|
|    18|     Guy|1-869-521-3230|senectus.et.netus...|
|    19| Malachi|1-608-637-2772|[email protected]|
|    20|  Edward|1-711-710-6552|[email protected]|
+------+--------+--------------+--------------------+
only showing top 20 rows

Solved.
At this point we can also check the Spark UI to confirm that the MySQL driver jar was indeed added to this application.


It was indeed added successfully.
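
Besides the UI, the added jars can also be listed from the shell itself: in Spark 2.x the SparkContext keeps track of everything added through --jars (a quick sanity check against the 2.4.2 session above):

scala> sc.listJars()

The result should contain an entry ending in /jars/mysql-connector-java-5.1.47.jar, served from the driver.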

Error 2:

Command:

Starting spark-sql produces an error.

spark-sql

Log:

19/05/21 08:54:14 ERROR Datastore.Schema: Failed initialising database.
Unable to open a test connection to the given database.
JDBC url = jdbc:mysql://192.168.1.201:3306/hiveDB?createDatabaseIfNotExist=true, username = root. Terminating connection pool
(set lazyInit to true if you expect to start your database after your app).
Original Exception: ------
java.sql.SQLException: No suitable driver found for jdbc:mysql://192.168.1.201:3306/hiveDB?createDatabaseIfNotExist=true

Caused by: java.sql.SQLException: No suitable driver found for
jdbc:mysql://192.168.1.201:3306/hiveDB?createDatabaseIfNotExist=true

Cause:

The driver has no MySQL driver on its classpath, so it cannot reach the Hive metastore database.

Solution:

Checking my launch command, I saw that --jars was already used to point at the MySQL driver jar.
And according to spark-sql --help, jars specified this way should be added to both the driver and executor classpaths:

--jars JARS                 Comma-separated list of jars to include on the driver and executor classpaths.

But the driver's classpath still did not contain the MySQL driver. Why? I don't know for certain yet. (A likely explanation: Java's DriverManager only accepts JDBC drivers visible to the classloader that opens the connection; jars added with --jars end up in a separate application classloader, so the metastore's DriverManager lookup cannot see them, whereas --driver-class-path puts the jar on the driver's own classpath.) So I tried launching again with the driver classpath added explicitly:

 spark-sql --jars softwares/mysql-connector-java-5.1.47.jar --driver-class-path softwares/mysql-connector-java-5.1.47.jar 
19/05/21 09:19:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/05/21 09:19:31 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
19/05/21 09:19:31 INFO metastore.ObjectStore: ObjectStore, initialize called
19/05/21 09:19:31 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
19/05/21 09:19:31 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
19/05/21 09:19:33 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
19/05/21 09:19:34 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
19/05/21 09:19:34 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
19/05/21 09:19:34 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
19/05/21 09:19:34 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
19/05/21 09:19:34 INFO DataNucleus.Query: Reading in results for query "[email protected]" since the connection used is closing
19/05/21 09:19:34 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
19/05/21 09:19:34 INFO metastore.ObjectStore: Initialized ObjectStore
19/05/21 09:19:35 INFO metastore.HiveMetaStore: Added admin role in metastore
19/05/21 09:19:35 INFO metastore.HiveMetaStore: Added public role in metastore
19/05/21 09:19:35 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
19/05/21 09:19:35 INFO metastore.HiveMetaStore: 0: get_all_databases
19/05/21 09:19:35 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr  cmd=get_all_databases
19/05/21 09:19:35 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
19/05/21 09:19:35 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr  cmd=get_functions: db=default pat=*
19/05/21 09:19:35 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
19/05/21 09:19:35 INFO metastore.HiveMetaStore: 0: get_functions: db=g6_hadoop pat=*
19/05/21 09:19:35 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr  cmd=get_functions: db=g6_hadoop pat=*
19/05/21 09:19:35 INFO metastore.HiveMetaStore: 0: get_functions: db=student pat=*
19/05/21 09:19:35 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr  cmd=get_functions: db=student pat=*
19/05/21 09:19:36 INFO session.SessionState: Created local directory: /tmp/b5ddbc6f-e572-4331-8a56-815dca0eaf1f_resources
19/05/21 09:19:36 INFO session.SessionState: Created HDFS directory: /tmp/hive/hadoop/b5ddbc6f-e572-4331-8a56-815dca0eaf1f
19/05/21 09:19:36 INFO session.SessionState: Created local directory: /tmp/hadoop/b5ddbc6f-e572-4331-8a56-815dca0eaf1f
19/05/21 09:19:36 INFO session.SessionState: Created HDFS directory: /tmp/hive/hadoop/b5ddbc6f-e572-4331-8a56-815dca0eaf1f/_tmp_space.db
19/05/21 09:19:36 INFO spark.SparkContext: Running Spark version 2.4.2
19/05/21 09:19:36 INFO spark.SparkContext: Submitted application: SparkSQL::192.168.1.203
19/05/21 09:19:36 INFO spark.SecurityManager: Changing view acls to: hadoop
19/05/21 09:19:36 INFO spark.SecurityManager: Changing modify acls to: hadoop
19/05/21 09:19:36 INFO spark.SecurityManager: Changing view acls groups to:
19/05/21 09:19:36 INFO spark.SecurityManager: Changing modify acls groups to:
19/05/21 09:19:36 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
19/05/21 09:19:36 INFO util.Utils: Successfully started service ‘sparkDriver‘ on port 51505.
19/05/21 09:19:36 INFO spark.SparkEnv: Registering MapOutputTracker
19/05/21 09:19:36 INFO spark.SparkEnv: Registering BlockManagerMaster
19/05/21 09:19:36 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/05/21 09:19:36 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/05/21 09:19:36 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-b61694d9-76da-4d85-8797-e9e1403b4596
19/05/21 09:19:36 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
19/05/21 09:19:36 INFO spark.SparkEnv: Registering OutputCommitCoordinator
19/05/21 09:19:37 INFO util.log: Logging initialized @8235ms
19/05/21 09:19:37 INFO server.Server: jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown
19/05/21 09:19:37 INFO server.Server: Started @8471ms
19/05/21 09:19:37 INFO server.AbstractConnector: Started [email protected]{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
19/05/21 09:19:37 INFO util.Utils: Successfully started service ‘SparkUI‘ on port 4040.
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/jobs,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/jobs/json,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/jobs/job,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/jobs/job/json,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/stages,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/stages/json,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/stages/stage,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/stages/stage/json,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/stages/pool,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/stages/pool/json,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/storage,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/storage/json,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/storage/rdd,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/storage/rdd/json,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/environment,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/environment/json,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/executors,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/executors/json,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/executors/threadDump,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/executors/threadDump/json,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/static,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/api,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/jobs/job/kill,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/stages/stage/kill,null,AVAILABLE,@Spark}
19/05/21 09:19:37 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://hadoop003:4040
19/05/21 09:19:37 INFO spark.SparkContext: Added JAR file:///home/hadoop/softwares/mysql-connector-java-5.1.47.jar at spark://hadoop003:51505/jars/mysql-connector-java-5.1.47.jar with timestamp 1558444777501
19/05/21 09:19:37 INFO executor.Executor: Starting executor ID driver on host localhost
19/05/21 09:19:37 INFO util.Utils: Successfully started service ‘org.apache.spark.network.netty.NettyBlockTransferService‘ on port 43564.
19/05/21 09:19:37 INFO netty.NettyBlockTransferService: Server created on hadoop003:43564
19/05/21 09:19:37 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/05/21 09:19:37 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, hadoop003, 43564, None)
19/05/21 09:19:37 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop003:43564 with 366.3 MB RAM, BlockManagerId(driver, hadoop003, 43564, None)
19/05/21 09:19:37 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, hadoop003, 43564, None)
19/05/21 09:19:37 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, hadoop003, 43564, None)
19/05/21 09:19:37 INFO handler.ContextHandler: Started [email protected]{/metrics/json,null,AVAILABLE,@Spark}
19/05/21 09:19:38 INFO scheduler.EventLoggingListener: Logging events to hdfs://ruozeclusterg6:8020/g6_direcory/local-1558444777553
19/05/21 09:19:38 INFO internal.SharedState: loading hive config file: file:/home/hadoop/app/spark-2.4.2-bin-hadoop-2.6.0-cdh5.7.0/conf/hive-site.xml
19/05/21 09:19:38 INFO internal.SharedState: Setting hive.metastore.warehouse.dir (‘null‘) to the value of spark.sql.warehouse.dir (‘file:/home/hadoop/spark-warehouse‘).
19/05/21 09:19:38 INFO internal.SharedState: Warehouse path is ‘file:/home/hadoop/spark-warehouse‘.
19/05/21 09:19:38 INFO handler.ContextHandler: Started [email protected]{/SQL,null,AVAILABLE,@Spark}
19/05/21 09:19:38 INFO handler.ContextHandler: Started [email protected]{/SQL/json,null,AVAILABLE,@Spark}
19/05/21 09:19:38 INFO handler.ContextHandler: Started [email protected]{/SQL/execution,null,AVAILABLE,@Spark}
19/05/21 09:19:38 INFO handler.ContextHandler: Started [email protected]{/SQL/execution/json,null,AVAILABLE,@Spark}
19/05/21 09:19:38 INFO handler.ContextHandler: Started [email protected]{/static/sql,null,AVAILABLE,@Spark}
19/05/21 09:19:38 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
19/05/21 09:19:38 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is file:/home/hadoop/spark-warehouse
19/05/21 09:19:38 INFO hive.metastore: Mestastore configuration hive.metastore.warehouse.dir changed from /user/hive/warehouse to file:/home/hadoop/spark-warehouse
19/05/21 09:19:38 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
19/05/21 09:19:38 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr  cmd=Shutting down the object store...
19/05/21 09:19:38 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
19/05/21 09:19:38 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr  cmd=Metastore shutdown complete.
19/05/21 09:19:38 INFO metastore.HiveMetaStore: 0: get_database: default
19/05/21 09:19:38 INFO HiveMetaStore.audit: ugi=hadoop  ip=unknown-ip-addr  cmd=get_database: default
19/05/21 09:19:38 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
19/05/21 09:19:38 INFO metastore.ObjectStore: ObjectStore, initialize called
19/05/21 09:19:38 INFO DataNucleus.Query: Reading in results for query "[email protected]" since the connection used is closing
19/05/21 09:19:38 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
19/05/21 09:19:38 INFO metastore.ObjectStore: Initialized ObjectStore
19/05/21 09:19:39 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
Spark master: local[*], Application Id: local-1558444777553
19/05/21 09:19:40 INFO thriftserver.SparkSQLCLIDriver: Spark master: local[*], Application Id: local-1558444777553
spark-sql (default)>

It started successfully. So the official help text is not necessarily the whole story; you still have to test it yourself. A small pitfall.
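
If a node always needs the metastore driver for spark-sql, both flags can be made permanent in $SPARK_HOME/conf/spark-defaults.conf instead of being typed every time (a sketch only, reusing the jar path from the logs above; spark.driver.extraClassPath is the configuration equivalent of --driver-class-path, and spark.jars of --jars):

spark.driver.extraClassPath  /home/hadoop/softwares/mysql-connector-java-5.1.47.jar
spark.jars                   /home/hadoop/softwares/mysql-connector-java-5.1.47.jar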

Original article: https://blog.51cto.com/14309075/2398215

