On a Linux desktop (RHEL4) I want to extract a range of bytes (typically less than 1000) from within a large file (>1 GB). I know the offset into the file and the size of the chunk.
I can write code to do this, but is there a command-line solution?
Ideally, something like:
magicprogram --offset 102567 --size 253 < input.binary > output.binary
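(For reference, a rough equivalent can be sketched with standard tools, assuming GNU `tail` and `head`; note that `tail -c +N` is 1-based, so the offset is shifted by one:)
tail -c +102568 input.binary | head -c 253 > output.binary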
Try `dd`:
dd skip=102567 count=253 if=input.binary of=output.binary bs=1
The option `bs=1` sets the block size, making `dd` read and write one byte at a time; the default block size is 512 bytes. The value of `bs` also affects the behavior of `skip` and `count`, since the numbers given to `skip` and `count` are the numbers of blocks that `dd` will skip and read/write, respectively.
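As a quick sanity check of those semantics, here is a minimal sketch on a throwaway file (the names sample.bin and part.bin are just illustrative):
printf 'ABCDEFGHIJ' > sample.bin
dd skip=3 count=4 if=sample.bin of=part.bin bs=1
cat part.bin   # prints DEFG: 3 one-byte blocks skipped, 4 one-byte blocks copied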
This is an old question, but I'd like to add another version of the `dd` command that is better suited for large chunks of bytes:
dd if=input.binary of=output.binary skip=$offset count=$bytes iflag=skip_bytes,count_bytes
where `$offset` and `$bytes` are numbers in byte units.
The difference from Thomas's accepted answer is that `bs=1` does not appear here. `bs=1` sets the input and output block size to 1 byte, which makes the transfer terribly slow when the number of bytes to extract is large.
This means we leave the block size (`bs`) at its default of 512 bytes. Using `iflag=skip_bytes,count_bytes`, we tell `dd` to interpret the values of `skip` and `count` as byte counts instead of block counts.
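To illustrate on a small file (a minimal sketch; `skip_bytes` and `count_bytes` are GNU `dd` extensions, so this assumes a reasonably recent GNU coreutils):
printf 'ABCDEFGHIJ' > sample.bin
dd if=sample.bin of=part.bin skip=3 count=4 iflag=skip_bytes,count_bytes
cat part.bin   # prints DEFG, the same bytes as before but without bs=1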