 

Mass update in Azure Table Storage tables

Is there a way to mass-update Azure Table Storage entities?

Say I want to rename all the Clients that have "York" in the City field to "New-York".

Are there tools to do this directly (without having to write code)?

asked Feb 02 '18 by serge

People also ask

What is the difference between Azure Table storage and Cosmos DB?

Azure Table Storage supports a single region with an optional read-only secondary region for availability. Cosmos DB supports distribution from 1 to more than 30 regions with automatic failovers worldwide. You can easily manage this from the Azure portal and define the failover behavior.

How fast is Azure Table storage?

With Azure Table Storage, your throughput is limited to 20k operations per second, while Cosmos DB supports throughput of up to 10 million operations per second.

Can two entities in same Table storage contain different collection of properties of different types?

An entity has a primary key and a set of properties. A property is a name, typed-value pair, similar to a column. The Table service does not enforce any schema for tables, so two entities in the same table may have different sets of properties.

What is the maximum size of an entity in Azure Table Storage?

An entity in Azure Storage can be up to 1MB in size. An entity in Azure Cosmos DB can be up to 2MB in size. Properties: A property is a name-value pair. Each entity can include up to 252 properties to store data.


2 Answers

You could use Microsoft Azure Storage Explorer to achieve this.

First, locate the entities with the City field in Storage Explorer.

Then click the Export button to export all your entities to a .csv file. Open the file in a text editor, press Ctrl + F and switch to Replace (Ctrl + H in most editors), fill in the search and replacement values, and choose Replace All.

Finally, go back to Storage Explorer, click the Import button, and choose the .csv file you edited.
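If the exported file is too large for a GUI editor, the find-and-replace step can also be sketched with sed (the file name and column layout below are illustrative assumptions, not what Storage Explorer necessarily produces):

```shell
# Illustrative sample of an exported .csv (column layout is an assumption):
printf 'PartitionKey,RowKey,City\nclients,1,York\nclients,2,Yorkshire\n' > clients-export.csv

# Replace the exact City value "York" with "New-York".
# Anchoring on ",York$" assumes City is the last column, so partial
# matches such as "Yorkshire" stay untouched.
sed 's/,York$/,New-York/' clients-export.csv > clients-updated.csv
```

The edited file can then be imported back through Storage Explorer's Import button as described above.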

answered Oct 06 '22 by Joey Cai


I wanted to use the export/import trick, but it's a no-go when you have millions of records. I exported all the records and ended up with a ~5 GB file, which Azure Storage Explorer couldn't handle (on my PC with an i7 and 32 GB of RAM).

If someone else is struggling with a similar issue, you can do the following:

  1. Export the records to a .csv file.
  2. Remove the lines you don't want to modify (if needed). You can use grep "i_want_this_phrase" myfile > mynewfile, or the -v option to keep everything that doesn't match the given phrase. If the file is too large, split it with a command such as cat bigFile.csv | parallel --header : --pipe -N999 'cat >file_{#}.csv'
  3. Remove everything except the RowKey column.
  4. Prepare an az cli command similar to az storage entity merge --connection-string 'XXX' --account-name your_storage -t your_table -e PartitionKey=your_pk MyColumn=false MyColumn@odata.type=Edm.Boolean RowKey=. Remember the odata.type part: at first I did the update without it, and my bools were switched to strings. Luckily it was easy to fix.
  5. Open the file in VS Code, select all with Ctrl + A, then press Shift + Alt + I to put a cursor at the end of every line and paste the previously prepared az cli command. This gives you a list of az cli updates, one per RowKey.
  6. Add #!/bin/bash at the beginning of the file, save it as a .sh file, make it executable with chmod +x yourfile.sh, and run it.

Of course, if you prefer, you can write a bash script that reads the file line by line and executes the az command for each row. I did it my way because it was simpler for me; I'm not very experienced with bash, so developing and testing such a script would have taken me a while.
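For readers who do want the loop variant, a rough sketch might look like this. Everything here is a placeholder taken from the answer ('XXX' connection string, your_storage, your_table, your_pk, MyColumn), and it has not been run against a real storage account:

```shell
#!/bin/bash
# Illustrative RowKey list (one key per line), as produced by steps 1-3:
printf 'r1\nr2\n' > rowkeys.txt

# `echo` makes this a dry run that only prints the commands;
# remove it to actually execute each merge.
while IFS= read -r rowkey; do
  echo az storage entity merge \
    --connection-string 'XXX' \
    --account-name your_storage \
    -t your_table \
    -e PartitionKey=your_pk "RowKey=$rowkey" \
       MyColumn=false "MyColumn@odata.type=Edm.Boolean"
done < rowkeys.txt
```

This produces the same one-merge-per-RowKey behavior as the multi-cursor approach in step 5, just generated in a loop instead of pasted by hand.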

answered Oct 06 '22 by Adrian