When creating an external table in hive can I point the location to specific files in a directory?

Tags: hive, external

I have defined a table as such:

create external table PageViews (Userid string, Page_View string)
partitioned by (ds string)
row format delimited fields terminated by ','
stored as textfile
location '/user/data';

I do not want all the files in the /user/data directory to be used as part of the table. Is it possible for me to do the following?

location 'user/data/*.csv'

asked Jun 29 '12 by George TeVelde


People also ask

What is the location of a Hive external table?

By default, it is /user/hive/warehouse directory. For instance, a table named students will be located at /user/hive/warehouse/students.

What happens if we create an external table with data in the Hive warehouse location?

When you create an external table without a location, the data will be stored in the Hive default location.

What is the location in Hive table creation?

A Hive table consists of multiple columns and records. A table created in any database is stored in a sub-directory of that database. The default location where databases are stored on HDFS is /user/hive/warehouse.
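
For instance, you can inspect where any existing table actually lives (the students table name here is just an example):

-- Shows the table's Location field among other metadata
DESCRIBE FORMATTED students;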


3 Answers

What kmosley said is true. As of now, you can't selectively choose certain files to be a part of your Hive table. However, there are 2 ways to get around it.

Option 1: You can move all the csv files into another HDFS directory and create a Hive table on top of that. If it works better for you, you can create a subdirectory (say, csv) within your present directory that houses all CSV files. You can then create a Hive table on top of this subdirectory. Keep in mind that any Hive tables created on top of the parent directory will NOT contain the data from the subdirectory.
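
As a rough sketch of Option 1, reusing the table definition from the question (the csv subdirectory name is just an example):

-- Assuming the CSV files were first moved into a subdirectory, e.g. with:
--   hdfs dfs -mkdir /user/data/csv
--   hdfs dfs -mv /user/data/*.csv /user/data/csv/
create external table PageViews (Userid string, Page_View string)
partitioned by (ds string)
row format delimited fields terminated by ','
stored as textfile
location '/user/data/csv';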

Option 2: You can change your queries to make use of a virtual column called INPUT__FILE__NAME.

Your query would look something like:

SELECT 
   *
FROM
   my_table
WHERE
   INPUT__FILE__NAME LIKE '%csv';

The ill-effect of this approach is that the Hive query will have to churn through all the data in the directory, even though you only care about specific files. The query doesn't prune files based on the predicate on INPUT__FILE__NAME; it merely filters out the records that don't match the predicate during the map phase (consequently discarding all records from particular files), but the mappers still run over the unnecessary files. It will give you the correct result, with some, probably minor, performance overhead.

The benefit of this approach is that you can use the same Hive table when you have multiple files in it and you want the ability to query all the files from that table (or its partition) in some queries and only a subset of the files in others. You can make use of the INPUT__FILE__NAME virtual column to achieve that. As an example, if a partition in your HDFS directory /user/hive/warehouse/web_logs/ looked like:

/user/hive/warehouse/web_logs/dt=2012-06-30/
   /user/hive/warehouse/web_logs/dt=2012-06-30/00.log
   /user/hive/warehouse/web_logs/dt=2012-06-30/01.log
   .
   .
   .
   /user/hive/warehouse/web_logs/dt=2012-06-30/23.log

Let's say your table definition looked like:

CREATE EXTERNAL TABLE IF NOT EXISTS web_logs_table (col1 STRING)
PARTITIONED BY (dt STRING)
LOCATION '/user/hive/warehouse/web_logs';
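
The partitions themselves would need to be registered before any data shows up; a minimal sketch for one day, using the layout above:

-- Register one day's directory as a partition of the external table
ALTER TABLE web_logs_table ADD PARTITION (dt='2012-06-30')
LOCATION '/user/hive/warehouse/web_logs/dt=2012-06-30';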

After adding the appropriate partitions, you could query all logs in the partition using a query like:

SELECT
   *
FROM
   web_logs_table w
WHERE
   dt='2012-06-30';

However, if you only cared about the logs from the first hour of the day, you could restrict the query to just that hour's file:

SELECT
   *
FROM
   web_logs_table w
WHERE
   dt = '2012-06-30'
   -- INPUT__FILE__NAME contains the full file path, so match on the suffix
   AND INPUT__FILE__NAME LIKE '%/00.log';

Another similar use case could be a directory that contains web logs from different domains and various queries need to analyze logs on different sets of domains. The queries can filter out domains using the INPUT__FILE__NAME virtual column.
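
For example, if the log files were named after their domains (a hypothetical naming scheme), such a query could look like:

SELECT
   *
FROM
   web_logs_table w
WHERE
   dt = '2012-06-30'
   -- keep only records read from files whose names mention the domain
   AND INPUT__FILE__NAME LIKE '%example.com%';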

In both of the above use cases, having a sub-partition for hour or domain would solve the problem as well, without having to use the virtual column (a sketch follows). However, there may exist design trade-offs that require you not to create sub-partitions. In that case, arguably, the INPUT__FILE__NAME virtual column is your best bet.
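
For comparison, the sub-partitioned alternative for the hourly case would look roughly like this (the hr column is an assumed addition to the earlier definition):

CREATE EXTERNAL TABLE IF NOT EXISTS web_logs_table (col1 STRING)
PARTITIONED BY (dt STRING, hr STRING)
LOCATION '/user/hive/warehouse/web_logs';

-- Hour filtering then becomes an ordinary partition predicate:
SELECT * FROM web_logs_table WHERE dt = '2012-06-30' AND hr = '00';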

Deciding between the 2 options:

It really depends on your use case. If you would never care about the files you are trying to exclude from the Hive table, using Option 2 is probably overkill; you should fix up the directory structure and create a Hive table on top of the directory containing only the files you care about.

If the files you are presently excluding follow the same format as the other files (so they can all be part of the same Hive table) and you could see yourself writing a query that would analyze all the data in the directory, then go with Option 2.

answered Oct 24 '22 by Mark Grover


I came across this thread when I had a similar problem to solve. I was able to resolve it by using a custom SerDe, adding SerDe properties that specify which regex to apply to the file name patterns for any particular table.

A custom SerDe might seem like overkill if you are only dealing with standard CSV files; I had a more complex file format to deal with. Still, this is a very viable solution if you don't shy away from writing some Java. It is particularly useful when you are unable to restructure the data in your storage location and you are looking for a very specific file pattern among a disproportionately large file set.

CREATE EXTERNAL TABLE PageViews (Userid string, Page_View string)
ROW FORMAT SERDE 'com.something.MySimpleSerDe'
WITH SERDEPROPERTIES ("input.regex" = ".*\\.csv")
LOCATION '/user/data';

answered Oct 24 '22 by NG Algo


No, you cannot currently do that. There is a JIRA ticket open to allow regex selection of included files for Hive tables (https://issues.apache.org/jira/browse/HIVE-951).

For now your best bet is to create a table over a different directory and just copy in the files you want to query.


answered Oct 24 '22 by kmosley