I have a SparkSQL connection to an external database:
from pyspark.sql import SparkSession
spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .getOrCreate()
If I know the name of a table, it's easy to query.
users_df = spark \
    .read.format("jdbc") \
    .options(dbtable="users", **db_config) \
    .load()
But is there a good way to list/discover tables?
I want the equivalent of SHOW TABLES in mysql, or \dt in postgres.
I'm using pyspark v2.1, in case that makes any difference.
The answer to this question isn't actually Spark-specific. You'll just need to load the information_schema.tables view.
The information schema consists of a set of views that contain information about the objects defined in the current database. The information schema is defined in the SQL standard and can therefore be expected to be portable and remain stable, unlike the system catalogs, which are specific to each RDBMS and are modelled after implementation concerns.
I'll be using MySQL for my code snippet; the instance contains an enwiki database whose tables I want to list:
# read the information_schema.tables view over JDBC
(spark.read.format('jdbc')
    .options(
        url='jdbc:mysql://localhost:3306/',  # database url (local or remote)
        dbtable='information_schema.tables',
        user='root',
        password='root',
        driver='com.mysql.jdbc.Driver')
    .load()
    .filter("table_schema = 'enwiki'")  # filter on a specific database
    .show())
# +-------------+------------+----------+----------+------+-------+----------+----------+--------------+-----------+---------------+------------+----------+--------------+--------------------+-----------+----------+---------------+--------+--------------+-------------+
# |TABLE_CATALOG|TABLE_SCHEMA|TABLE_NAME|TABLE_TYPE|ENGINE|VERSION|ROW_FORMAT|TABLE_ROWS|AVG_ROW_LENGTH|DATA_LENGTH|MAX_DATA_LENGTH|INDEX_LENGTH| DATA_FREE|AUTO_INCREMENT| CREATE_TIME|UPDATE_TIME|CHECK_TIME|TABLE_COLLATION|CHECKSUM|CREATE_OPTIONS|TABLE_COMMENT|
# +-------------+------------+----------+----------+------+-------+----------+----------+--------------+-----------+---------------+------------+----------+--------------+--------------------+-----------+----------+---------------+--------+--------------+-------------+
# | def| enwiki| page|BASE TABLE|InnoDB| 10| Compact| 7155190| 115| 828375040| 0| 975601664|1965031424| 11359093|2017-01-23 08:42:...| null| null| binary| null| | |
# +-------------+------------+----------+----------+------+-------+----------+----------+--------------+-----------+---------------+------------+----------+--------------+--------------------+-----------+----------+---------------+--------+--------------+-------------+
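If you want the result as a plain Python list rather than printed output, you can collect the TABLE_NAME column from the same DataFrame. A minimal sketch along those lines (the tables_df and table_names variable names are just illustrative):

# same query as above, kept as a DataFrame instead of calling .show()
tables_df = (spark.read.format('jdbc')
    .options(
        url='jdbc:mysql://localhost:3306/',
        dbtable='information_schema.tables',
        user='root',
        password='root',
        driver='com.mysql.jdbc.Driver')
    .load()
    .filter("table_schema = 'enwiki'"))

# collect just the table names into a Python list -- the SHOW TABLES equivalent
table_names = [row['TABLE_NAME'] for row in tables_df.select('TABLE_NAME').collect()]
print(table_names)  # e.g. ['page', ...]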
Note: The same approach works in Scala and Java, subject to each language's syntax.
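And since the information schema is standardised, the same pattern should cover the \dt case from the question on PostgreSQL. A sketch with hypothetical connection details (url, credentials); you'd need the PostgreSQL JDBC driver (org.postgresql.Driver) on the classpath:

# the same pattern against PostgreSQL; only the url and driver change
# (connection details below are hypothetical -- adjust to your own setup)
pg_tables_df = (spark.read.format('jdbc')
    .options(
        url='jdbc:postgresql://localhost:5432/enwiki',
        dbtable='information_schema.tables',
        user='postgres',
        password='postgres',
        driver='org.postgresql.Driver')
    .load()
    .filter("table_schema = 'public'"))  # \dt typically lists the public schema

pg_tables_df.select('table_name').show()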