1. Spark can read from many kinds of data sources; this example reads from MySQL.
2. Prerequisites:
Scala, IDEA, and mysql-connector-java; look up the artifact versions at https://mvnrepository.com/
3. Code example:
object WordFreq {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder()
      .master("local")
      .appName("getDatafromMysql")
      .config("spark.sql.shuffle.partitions", 1)
      .getOrCreate()

    // JDBC connection properties
    val properties: Properties = new Properties()
    properties.setProperty("user", "root")
    properties.setProperty("password", "root")
    properties.setProperty("driver", "com.mysql.jdbc.Driver")

    // Method 1: read a whole table
    val person: DataFrame = spark.read.jdbc("jdbc:mysql://localhost:3306/acc", "ttt", properties)
    person.show()

    // Method 2: read the result of a query (the subquery must be aliased, here as T)
    spark.read.jdbc("jdbc:mysql://localhost:3306/acc", "(select * from ut_tt) T", properties).show()
  }
}
Required imports:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.SparkSession
import java.util.Properties
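Spark's DataFrameReader also exposes an option-based JDBC API that performs the same read. The following is a minimal sketch (not from the original post), assuming the same local acc database, ttt table, and root/root credentials used above:

import org.apache.spark.sql.{DataFrame, SparkSession}

object JdbcOptionRead {
  def main(args: Array[String]): Unit = {
    // Local SparkSession, mirroring the settings used in the example above
    val spark: SparkSession = SparkSession.builder()
      .master("local")
      .appName("getDatafromMysqlOptions")
      .getOrCreate()

    // Same table read as "Method 1", expressed through the option-based JDBC reader
    val df: DataFrame = spark.read
      .format("jdbc")
      .option("url", "jdbc:mysql://localhost:3306/acc")
      .option("dbtable", "ttt")
      .option("user", "root")
      .option("password", "root")
      .option("driver", "com.mysql.jdbc.Driver")
      .load()

    df.show()
    spark.stop()
  }
}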
In the pom.xml, add the mysql-connector-java dependency:
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>*****</version>
</dependency>
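The project also needs the Spark SQL artifact on the classpath for SparkSession and DataFrame. A sketch of the extra pom entry, assuming a Scala 2.12 build (match the _2.12 suffix and the version to your environment):

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.12</artifactId>
    <version>*****</version>
</dependency>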
4. Run result: both reads print the contents of the queried table via show().