
PySpark error: java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver

Date: 2023-07-17 06:20:54


The full error message is as follows:

Traceback (most recent call last):
  File "<stdin>", line 6, in <module>
  File "/home/appleyuchi/bigdata/spark-2.3.1-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 703, in save
    self._jwrite.save()
  File "/home/appleyuchi/bigdata/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/home/appleyuchi/bigdata/spark-2.3.1-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/home/appleyuchi/bigdata/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o92.save.
: java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:45)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:79)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:79)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:79)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:60)
	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)

Solution:

mv mysql-connector-java-8.0.20.jar $SPARK_HOME/jars/

The driver jar mysql-connector-java-8.0.20.jar was downloaded from the Maven repository:

/artifact/mysql/mysql-connector-java/8.0.20
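If you prefer not to copy the jar into $SPARK_HOME/jars/, a frequently cited alternative is to point spark.jars at the connector from code. Treat the following as a minimal sketch rather than a guaranteed fix: whether the driver JVM actually sees the jar this way depends on how and in which mode the session is launched, which is exactly the mode-dependence discussed below.

from pyspark.sql import SparkSession

# Minimal sketch: register the MySQL connector jar via spark.jars at session
# creation time. The path is the same local jar used elsewhere in this
# article; adjust it to your environment.
spark = (SparkSession.builder
         .appName("PythonTest")
         .config("spark.jars",
                 "/home/appleyuchi/bigdata/apache-hive-3.0.0-bin/lib/"
                 "mysql-connector-java-8.0.20.jar")
         .getOrCreate())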

Note: before applying any fix for this error, work out which mode your current Spark job actually runs in. If you blindly copy settings from Stack Overflow or Baidu that target a different mode, you will find they have no effect.

This conclusion is based on the table in [1].

Set the following in spark-defaults.conf. spark.driver.extraClassPath puts the jar on the driver JVM's classpath, spark.executor.extraClassPath puts it on each executor's classpath, and spark.jars ships the jar out with the application:

spark.driver.extraClassPath=/home/appleyuchi/bigdata/apache-hive-3.0.0-bin/lib/mysql-connector-java-8.0.20.jar
spark.executor.extraClassPath=/home/appleyuchi/bigdata/apache-hive-3.0.0-bin/lib/mysql-connector-java-8.0.20.jar
spark.jars=/home/appleyuchi/bigdata/apache-hive-3.0.0-bin/lib/mysql-connector-java-8.0.20.jar

Test it as follows (a quick driver-side sanity check is also sketched after the two commands):

① pyspark --master yarn (then type the test code interactively in the shell)

② spark-submit --master yarn --deploy-mode cluster 源码.py
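For the sanity check, you can ask the driver JVM to load the class directly from the pyspark shell. This relies on py4j's _jvm handle, which is a private attribute rather than a public API, so treat it as a debugging trick; it also says nothing about the executors' classpath.

# Run inside the pyspark shell, where `spark` already exists.
# Raises the same java.lang.ClassNotFoundException if the jar is not on the
# driver's classpath; returns a Java class reference if it is.
spark.sparkContext._jvm.java.lang.Class.forName("com.mysql.cj.jdbc.Driver")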

#---------------------------------------------------------- Appendix ----------------------------------------------------------

源码.py

from pyspark.sql import SparkSession

def map_extract(element):
    # element is a (file_path, content) pair from wholeTextFiles;
    # the year is encoded in the last four characters of the file name.
    file_path, content = element
    year = file_path[-8:-4]
    return [(year, i) for i in content.split("\n") if i]

spark = SparkSession\
    .builder\
    .appName("PythonTest")\
    .getOrCreate()

res = spark.sparkContext.wholeTextFiles('hdfs://Desktop:9000/user/mercury/names', minPartitions=40) \
    .map(map_extract) \
    .flatMap(lambda x: x) \
    .map(lambda x: (x[0], int(x[1].split(',')[2]))) \
    .reduceByKey(lambda x, y: x + y)

df = res.toDF(["key", "num"])  # rename the columns to match the target MySQL table
df.printSchema()
df.show()

df.write.format("jdbc").options(
    url="jdbc:mysql://Desktop:3306/leaf",
    driver="com.mysql.cj.jdbc.Driver",
    dbtable="spark",
    user="appleyuchi",
    password="appleyuchi").mode('append').save()
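To confirm the fix end to end, you can read the table back with the same driver. A minimal sketch, assuming the same MySQL instance, table, and credentials as 源码.py above:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("MySQLReadCheck").getOrCreate()

# Read back the rows appended by 源码.py; any remaining classpath problem
# with the MySQL driver would surface here as the same ClassNotFoundException.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:mysql://Desktop:3306/leaf")
      .option("driver", "com.mysql.cj.jdbc.Driver")
      .option("dbtable", "spark")
      .option("user", "appleyuchi")
      .option("password", "appleyuchi")
      .load())

df.show()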

References:

[1] Spark Shell Add Multiple Drivers/Jars to Classpath using spark-defaults.conf

[2] Spark Configuration

[3] py4j.protocol.Py4JJavaError: An error occurred while calling o90.save (reported as an upstream bug, unresolved at the time of writing)
