Reading Data From SQL Tables in Spark

SQL databases, or relational databases, have been around for decades, and many systems still store their data in an RDBMS. Often we have to connect Spark to one of these relational databases and process that data. In this article, we are going to learn about reading data from SQL tables into Spark data frames. I am using MySQL here, but you can use any relational database in the same way.

Setting Up MySQL Connector

When we want Spark to communicate with an RDBMS, we need a compatible connector. For MySQL, you can download its connector at this link: MySQL Connector. Once you download it, we have to pass the jar to Spark when we create the SparkSession.
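Here is a minimal sketch of creating a SparkSession with the connector jar, assuming PySpark. The jar path is an assumption; point it at wherever you saved the downloaded connector.

```python
from pyspark.sql import SparkSession

# Create a SparkSession and hand it the MySQL connector jar.
# The path below is hypothetical -- replace it with your local jar location.
spark = (
    SparkSession.builder
    .appName("read-from-mysql")
    .config("spark.jars", "/path/to/mysql-connector-j-8.0.33.jar")
    .getOrCreate()
)
```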

If this does not work for you, you can also use the method below to pass the connector jar.
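One alternative sketch is to let Spark resolve the connector from Maven instead of pointing at a local jar. The Maven coordinate below is an example; use the connector version that matches your MySQL server.

```python
from pyspark.sql import SparkSession

# Ask Spark to download the connector from Maven at startup
# instead of supplying a local jar file.
spark = (
    SparkSession.builder
    .appName("read-from-mysql")
    .config("spark.jars.packages", "mysql:mysql-connector-java:8.0.33")
    .getOrCreate()
)
```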

Once we initialize Spark correctly, we can communicate with the MySQL server and read table data.

Reading Table From MySQL using Spark

Let us see how to read an entire table from MySQL and create a data frame from it in Spark. I have an employees database with an employees table on my MySQL server.

In Spark, we can use the read format “jdbc” along with the database URL, username, and password to read the same table.
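The sketch below reads the full employees table over JDBC. The host, port, credentials, and driver class are assumptions; adjust them for your own setup.

```python
# Read the employees table into a Spark data frame over JDBC.
employees_df = (
    spark.read
    .format("jdbc")
    .option("url", "jdbc:mysql://localhost:3306/employees")
    .option("dbtable", "employees")
    .option("user", "root")          # assumed username
    .option("password", "password")  # assumed password
    .option("driver", "com.mysql.cj.jdbc.Driver")
    .load()
)

employees_df.printSchema()
# Compare this with SELECT COUNT(*) FROM employees on the MySQL side.
print(employees_df.count())
```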

We can notice that the data frame built on the employees table returns the same row count as the table itself in MySQL.

Reading Data from SQL Query

Spark does not limit us to reading an entire table at a time. We can also pass any query to the Spark read function and get the query result back as a data frame.

Remember to wrap the query in parentheses and give it an alias so that it is treated as a subquery. Otherwise it will not work.
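Here is a sketch of reading a query result instead of a whole table. The column names follow the standard MySQL employees sample database, and the connection details are the same assumptions as before; note the parentheses and the alias around the query.

```python
# The query is passed as a parenthesized subquery with an alias via "dbtable".
query = (
    "(SELECT emp_no, first_name, last_name "
    "FROM employees WHERE hire_date > '1999-01-01') AS emp_subquery"
)

filtered_df = (
    spark.read
    .format("jdbc")
    .option("url", "jdbc:mysql://localhost:3306/employees")
    .option("dbtable", query)
    .option("user", "root")          # assumed username
    .option("password", "password")  # assumed password
    .option("driver", "com.mysql.cj.jdbc.Driver")
    .load()
)

filtered_df.show(5)
```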

Reading From Database in Parallel

When we are reading a large table, we would like to read it in parallel, which can dramatically improve read performance. We can pass the “numPartitions” option to the Spark read function, which decides the parallelism of the read.
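The sketch below shows a parallel read. Note that “numPartitions” on its own only caps the number of connections; Spark also needs a numeric partition column with lower and upper bounds to actually split the read into ranges. The emp_no column and the bound values here are assumptions based on the sample employees data.

```python
# Parallel JDBC read: numPartitions plus a numeric partition column and its bounds.
parallel_df = (
    spark.read
    .format("jdbc")
    .option("url", "jdbc:mysql://localhost:3306/employees")
    .option("dbtable", "employees")
    .option("user", "root")          # assumed username
    .option("password", "password")  # assumed password
    .option("driver", "com.mysql.cj.jdbc.Driver")
    .option("numPartitions", 10)
    .option("partitionColumn", "emp_no")  # assumed numeric primary key
    .option("lowerBound", 10001)          # assumed bounds
    .option("upperBound", 500000)
    .load()
)

# Check how many partitions Spark actually created for the read.
print(parallel_df.rdd.getNumPartitions())
```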

In our case, the data frame may still show only one partition. This happens when we do not have enough data (or a wide enough bound range) to split into 10 different partitions, and it will always happen if “numPartitions” is passed without a partition column and bounds.

Conclusion

In this article, we connected Spark to an RDBMS and created data frames from its tables and queries. This is a powerful feature that allows us to process SQL data easily. You can find the code for this article on GitHub. I hope you find this useful. See you in the next blog.
