Split file with 800,000 columns

Tags:

bash

unix

awk

cut

I want to split a space-delimited file of genomic data, 800,000 columns by 40,000 rows and 118 GB in total, into a series of files with 100 columns each.

I am currently running the following bash script, in 15 parallel instances:

infile="$1"
start=$2
end=$3
step=$(($4-1))

for((curr=$start, start=$start, end=$end; curr+step <= end; curr+=step+1)); do
  cut -f$curr-$((curr+step)) "$infile" > "${infile}.$curr" -d' '
done
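
For illustration, the 15 parallel runs are launched roughly like this (the script name and the exact column boundaries below are placeholders, not my real values):

# each job covers a contiguous block of columns that is a multiple of 100 wide
./split.sh bigfile.txt 1 53400 100 &
./split.sh bigfile.txt 53401 106800 100 &
# ... 13 more jobs covering the remaining columns up to 800000 ...
wait   # block until all background jobs finish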

However, judging by the script's current progress, it will take about 300 days to complete the split?!

Is there a more efficient way to split a space-delimited file column-wise into smaller chunks?

Parsa asked Nov 23 '25 16:11


1 Answer

Try this awk script:

awk -v cols=100 '{
  f = 1
  for (i = 1; i <= NF; i++) {
    # print the field, followed by OFS within a block and ORS at the end of a block (or of the line)
    printf "%s%s", $i, (i % cols && i < NF ? OFS : ORS) > (FILENAME "." f)
    # move on to the next output file after every "cols" fields
    f = int(i / cols) + 1
  }
}' largefile

I expect it to be faster than the shell script in the question: each cut invocation reads the entire 118 GB file just to extract 100 columns, so the input is scanned roughly 8,000 times in total, whereas this awk command makes a single pass and writes all of the output files as it goes.
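
One caveat worth noting: with 800,000 columns split 100 per file, this keeps roughly 8,000 output files open at once. GNU awk will usually juggle more open files than the OS limit allows, but other awk implementations may abort with a "too many open files" error. Below is a minimal sketch of a more conservative variant, assuming the same largefile input, that closes each output file after every row; it is slower, since files are reopened constantly, but it never needs more than one output descriptor at a time:

awk -v cols=100 '{
  for (start = 1; start <= NF; start += cols) {
    # output file number for this block of columns
    out = FILENAME "." (int((start - 1) / cols) + 1)
    stop = start + cols - 1
    if (stop > NF) stop = NF
    # join the fields of this block with OFS and append them as one line
    line = $start
    for (i = start + 1; i <= stop; i++) line = line OFS $i
    print line >> out
    close(out)
  }
}' largefile

Note that ">>" appends, so any leftover largefile.N files from a previous run should be removed first.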

user000001 answered Nov 26 '25 18:11