
How to compute hash of a large file chunk?

I want to be able to compute the hashes of arbitrarily sized chunks of a file in C#.

E.g.: compute the hash of the 3rd gigabyte of a 4 GB file.

The main problem is that I don't want to load the entire file into memory, as there could be several files and the offsets could be quite arbitrary.

AFAIK, HashAlgorithm.ComputeHash lets me pass either a byte buffer or a stream. A stream would let me compute the hash efficiently, but for the entire file, not just for a specific chunk.

I was thinking of creating an alternate FileStream object and passing it to ComputeHash, overriding the FileStream methods so that it reads only a certain chunk of the file.

Is there a better solution than this, preferably using the built-in C# libraries? Thanks.

asked Dec 19 '22 by xander

2 Answers

You should pass in either:

  • A byte array containing the chunk of data to compute the hash from
  • A stream that restricts access to the chunk you want to compute the hash from
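The first option is straightforward as long as the chunk fits comfortably in memory (a 1 GB chunk means a 1 GB allocation). A minimal sketch — the `HashChunk` helper and the choice of SHA-256 are illustrative, not part of the question:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class ChunkHashExample
{
    // Seek to the chunk, read it fully into a byte array, then hash the array.
    public static byte[] HashChunk(string path, long offset, int count)
    {
        using (var stream = File.OpenRead(path))
        using (var sha = SHA256.Create())
        {
            stream.Seek(offset, SeekOrigin.Begin);
            var buffer = new byte[count];
            int read = 0;
            // Stream.Read may return fewer bytes than requested; loop until
            // the whole chunk is in the buffer.
            while (read < count)
            {
                int n = stream.Read(buffer, read, count - read);
                if (n == 0)
                    throw new EndOfStreamException("Chunk extends past end of file");
                read += n;
            }
            return sha.ComputeHash(buffer);
        }
    }
}
```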

The second option isn't all that hard either; here's a quick LINQPad program I threw together. Note that it lacks quite a bit of error handling, such as checking that the chunk is actually available (i.e., that the position and length you pass in describe a range that exists and doesn't fall off the end of the underlying stream).

Needless to say, if this should end up as production code I would add a lot of error handling, and write a bunch of unit-tests to ensure all edge-cases are handled correctly.

You would construct the PartialStream instance for your file like this:

const long gb = 1024 * 1024 * 1024;
using (var fileStream = new FileStream(@"d:\temp\too_long_file.bin", FileMode.Open))
using (var chunk = new PartialStream(fileStream, 2 * gb, 1 * gb))
{
    var hash = hashAlgorithm.ComputeHash(chunk);
}

Here's the LINQPad test program:

void Main()
{
    var buffer = Enumerable.Range(0, 256).Select(i => (byte)i).ToArray();
    using (var underlying = new MemoryStream(buffer))
    using (var partialStream = new PartialStream(underlying, 64, 32))
    {
        var temp = new byte[1024]; // too much, ensure we don't read past window end
        partialStream.Read(temp, 0, temp.Length);
        temp.Dump();
        // should output 64-95 and then 0's for the rest (64-95 = 32 bytes)
    }
}

public class PartialStream : Stream
{
    private readonly Stream _UnderlyingStream;
    private readonly long _Position;
    private readonly long _Length;

    public PartialStream(Stream underlyingStream, long position, long length)
    {
        if (!underlyingStream.CanRead || !underlyingStream.CanSeek)
            throw new ArgumentException("Stream must be readable and seekable", "underlyingStream");

        _UnderlyingStream = underlyingStream;
        _Position = position;
        _Length = length;
        _UnderlyingStream.Position = position;
    }

    public override bool CanRead
    {
        get
        {
            return _UnderlyingStream.CanRead;
        }
    }

    public override bool CanWrite
    {
        get
        {
            return false;
        }
    }

    public override bool CanSeek
    {
        get
        {
            return true;
        }
    }

    public override long Length
    {
        get
        {
            return _Length;
        }
    }

    public override long Position
    {
        get
        {
            return _UnderlyingStream.Position - _Position;
        }

        set
        {
            _UnderlyingStream.Position = value + _Position;
        }
    }

    public override void Flush()
    {
        // No-op: the stream is read-only, so there is nothing to flush.
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        switch (origin)
        {
            case SeekOrigin.Begin:
                return _UnderlyingStream.Seek(_Position + offset, SeekOrigin.Begin) - _Position;

            case SeekOrigin.End:
                return _UnderlyingStream.Seek(_Position + _Length + offset, SeekOrigin.Begin) - _Position;

            case SeekOrigin.Current:
                return _UnderlyingStream.Seek(offset, SeekOrigin.Current) - _Position;

            default:
                throw new ArgumentOutOfRangeException("origin");
        }
    }

    public override void SetLength(long length)
    {
        throw new NotSupportedException();
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        long left = _Length - Position;
        if (left <= 0)
            return 0; // already at or past the end of the window
        if (left < count)
            count = (int)left;
        return _UnderlyingStream.Read(buffer, offset, count);
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        throw new NotSupportedException();
    }
}
answered Dec 22 '22 by Lasse V. Karlsen

You can use TransformBlock and TransformFinalBlock directly. That's pretty similar to what HashAlgorithm.ComputeHash does internally.

Something like:

using (var hashAlgorithm = new SHA256Managed())
using (var fileStream = File.OpenRead(...))
{
    fileStream.Position = ...;
    long bytesToHash = ...;

    var buf = new byte[4 * 1024];
    while (bytesToHash > 0)
    {
        var bytesRead = fileStream.Read(buf, 0, (int)Math.Min(bytesToHash, buf.Length));
        if (bytesRead == 0)
            throw new InvalidOperationException("Unexpected end of stream");
        hashAlgorithm.TransformBlock(buf, 0, bytesRead, null, 0);
        bytesToHash -= bytesRead;
    }
    hashAlgorithm.TransformFinalBlock(buf, 0, 0);
    var hash = hashAlgorithm.Hash;
    return hash;
}
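On newer frameworks (.NET Core, or .NET Framework 4.7.2+), IncrementalHash wraps the same TransformBlock/TransformFinalBlock bookkeeping behind a simpler API. A sketch of the same loop — the `HashChunk` method name and buffer size are my own choices:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class IncrementalChunkHash
{
    public static byte[] HashChunk(string path, long offset, long count)
    {
        using (var hash = IncrementalHash.CreateHash(HashAlgorithmName.SHA256))
        using (var stream = File.OpenRead(path))
        {
            stream.Position = offset;
            var buf = new byte[4 * 1024];
            while (count > 0)
            {
                // Read at most one buffer's worth, never past the chunk end.
                int read = stream.Read(buf, 0, (int)Math.Min(count, buf.Length));
                if (read == 0)
                    throw new InvalidOperationException("Unexpected end of stream");
                hash.AppendData(buf, 0, read);
                count -= read;
            }
            return hash.GetHashAndReset();
        }
    }
}
```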
answered Dec 22 '22 by CodesInChaos