 

Pivot for Redshift database

I know this question has been asked before, but none of the answers met my requirements, so I'm asking it in a new thread.

In Redshift, how can I pivot the data into one row per unique dimension set, e.g.:

id         Name               Category         count
8660     Iced Chocolate         Coffees         105
8660     Iced Chocolate         Milkshakes      10
8662     Old Monk               Beer            29
8663     Burger                 Snacks          18

to

id        Name              Coffees  Milkshakes  Beer  Snacks
8660    Iced Chocolate       105       10        0      0
8662    Old Monk             0         0        29      0
8663    Burger               0         0         0      18

The categories listed above keep changing. Redshift does not support the PIVOT operator, and a CASE expression alone does not seem like much help (if it is, please suggest how to do it).

How can I achieve this result in redshift?

(The above is just an example; we would have 1000+ categories, and these categories keep changing.)

asked Mar 09 '17 by ankitkhanduri

People also ask

Is Pivot supported in Redshift?

Amazon Redshift now supports PIVOT and UNPIVOT SQL operators that can help you transpose rows into columns and vice versa with high performance, for data modeling, data analysis, and data presentation.

How do I convert rows to columns in Redshift?

In relational databases, pivot is used to convert rows to columns and vice versa. Many relational databases support a pivot function, but Amazon Redshift did not provide one until recently. You can use CASE or DECODE to convert rows to columns, or columns to rows.

Why Redshift is faster than MySQL?

In database parlance, Redshift is read-optimized while MySQL is (comparatively) write-optimized. MySQL can effectively load small volumes of data more frequently. In contrast, Redshift is more efficient at loading large volumes of data less frequently.


4 Answers

I don't think there is an easy way to do that in Redshift.

Also, since you say you have more than 1000 categories and the number is growing, you need to take into account that there is a limit of 1600 columns per table;

see http://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_usage.html

You can use CASE, but then you need to write one CASE expression per category:

select id,
       name,
       sum(case when Category='Coffees' then count else 0 end) as Coffees,
       sum(case when Category='Milkshakes' then count else 0 end) as Milkshakes,
       sum(case when Category='Beer' then count else 0 end) as Beer,
       sum(case when Category='Snacks' then count else 0 end) as Snacks
from my_table
group by 1,2

Another option is to export the table, for example to R, where you can use the cast function:

cast(data, name ~ category)

and then load the data back into S3 or Redshift.
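The same out-of-database reshape can also be done in plain Python, as a rough sketch (the sample rows below are the ones from the question; in practice you would pull them with UNLOAD/COPY):

```python
from collections import defaultdict

# Rows as (id, name, category, count) tuples, e.g. exported from Redshift.
rows = [
    (8660, "Iced Chocolate", "Coffees", 105),
    (8660, "Iced Chocolate", "Milkshakes", 10),
    (8662, "Old Monk", "Beer", 29),
    (8663, "Burger", "Snacks", 18),
]

# Discover the output columns from the data itself, since categories change.
categories = sorted({r[2] for r in rows})

# One output row per (id, name), every category defaulting to 0.
pivot = defaultdict(lambda: dict.fromkeys(categories, 0))
for id_, name, cat, cnt in rows:
    pivot[(id_, name)][cat] += cnt  # sum duplicates within a category
```

After this, `pivot[(8660, "Iced Chocolate")]` holds 105 for Coffees, 10 for Milkshakes, and 0 for Beer and Snacks, matching the desired output.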

answered Sep 26 '22 by user3600910


We do a lot of pivoting at Ro, so we built a Python-based tool for autogenerating pivot queries. The tool allows for the same basic options you'd find in Excel, including specifying aggregation functions as well as whether you want overall aggregates.
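A minimal sketch of that idea, assuming you first fetch the distinct categories (e.g. via SELECT DISTINCT) since SQL needs the output columns known up front; the helper name is hypothetical, and the table/column names are the ones from the question:

```python
def build_pivot_sql(table, row_cols, pivot_col, value_col, categories, agg="sum"):
    """Generate a Redshift-compatible pivot query using CASE expressions.

    NOTE: category names are spliced into the SQL as string literals, so in
    real use they must be escaped/validated to avoid breaking the query.
    """
    case_exprs = [
        "{agg}(case when {p} = '{c}' then {v} else 0 end) as \"{c}\"".format(
            agg=agg, p=pivot_col, v=value_col, c=cat)
        for cat in categories
    ]
    cols = ", ".join(list(row_cols) + case_exprs)
    groups = ", ".join(str(i + 1) for i in range(len(row_cols)))
    return "select {cols}\nfrom {table}\ngroup by {groups}".format(
        cols=cols, table=table, groups=groups)

sql = build_pivot_sql("my_table", ["id", "name"], "Category", "count",
                      ["Coffees", "Milkshakes", "Beer", "Snacks"])
```

Regenerating the query from the current category list sidesteps the "categories keep changing" problem, as long as you stay under Redshift's 1600-column limit.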

answered Sep 26 '22 by Sami Yabroudi


If you will typically want to query specific subsets of the categories from the pivot table, a workaround based on the approach linked in the comments might work.

You can populate your "pivot_table" from the original like so:

insert into pivot_table (id, Name, json_cats) (
    select id, Name,
        '{' || listagg(quote_ident(Category) || ':' || count, ',')
               within group (order by Category) || '}' as json_cats
    from to_pivot
    group by id, Name
)

And access specific categories this way:

select id, Name,
    nvl(json_extract_path_text(json_cats, 'Snacks')::int, 0) Snacks,
    nvl(json_extract_path_text(json_cats, 'Beer')::int, 0) Beer
from pivot_table

Using varchar(max) for the JSON column type gives 65535 bytes, which should be room for a couple thousand categories.
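Since the wanted categories change per query, the access statement itself can be generated in the same spirit (a hypothetical Python helper; table and column names are the ones from this answer):

```python
def build_json_pivot_sql(categories, table="pivot_table"):
    """Build a SELECT pulling chosen categories out of the json_cats column.

    Category names become JSON path literals in the SQL, so they must not
    contain single quotes (escape or validate them in real use).
    """
    cols = ",\n    ".join(
        "nvl(json_extract_path_text(json_cats, '{c}')::int, 0) as \"{c}\"".format(c=c)
        for c in categories
    )
    return "select id, Name,\n    {cols}\nfrom {table}".format(cols=cols, table=table)

sql = build_json_pivot_sql(["Snacks", "Beer"])
```

This keeps the stored table stable while the set of extracted columns stays flexible.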

answered Sep 24 '22 by systemjack


@user3600910 is right with the approach; however, 'END' is required in each CASE expression, else a '500310' invalid operation error would occur.

select id,
       name,
       sum(case when Category='Coffees' then count END) as Coffees,       
       sum(case when Category='Milkshakes' then count END) as Milkshakes,
       sum(case when Category='Beer' then count END) as Beer,
       sum(case when Category='Snacks' then count END) as Snacks
from my_table
group by 1,2

answered Sep 23 '22 by Anshul Tak