 

Upload file using a virtual path provider and Amazon S3 SDK

Tags:

c#

amazon-s3

The background to this question is a virtual file system I'm developing. The concept is virtual path providers for different types of storage, i.e. the local file system, Dropbox and Amazon S3. My base class for a virtual file looks like this:

public abstract class CommonVirtualFile : VirtualFile {
    public virtual string Url {
        get { throw new NotImplementedException(); }
    }
    public virtual string LocalPath {
        get { throw new NotImplementedException(); }
    }
    public override Stream Open() {
        throw new NotImplementedException();
    }
    public virtual Stream Open(FileMode fileMode) {
        throw new NotImplementedException();
    }
    protected CommonVirtualFile(string virtualPath) : base(virtualPath) { }
}

The implementation of the second Open method is what my question is all about. My implementation for the local file system, i.e. saving a file on disk, looks like this:

public override Stream Open(FileMode fileMode) {
    return new FileStream("The_Path_To_The_File_On_Disk", fileMode);
}

If I wanted to save a file on the local file system, it would look something like this:

    const string virtualPath = "/assets/newFile.txt";
    var file = HostingEnvironment.VirtualPathProvider.GetFile(virtualPath) as CommonVirtualFile;
    if (file == null) {
        var virtualDir = VirtualPathUtility.GetDirectory(virtualPath);
        var directory = HostingEnvironment.VirtualPathProvider.GetDirectory(virtualDir) as CommonVirtualDirectory;
        file = directory.CreateFile(VirtualPathUtility.GetFileName(virtualPath));
    }
    // File.ReadAllBytes reads the whole file reliably; a single
    // FileStream.Read call is not guaranteed to fill the buffer.
    byte[] fileContent = File.ReadAllBytes(@"c:\temp\fileToCopy.txt");
    // write the content to the local file system
    using (Stream stream = file.Open(FileMode.Create)) {
        stream.Write(fileContent, 0, fileContent.Length);
    }

What I want is that if I switch to my Amazon S3 virtual path provider, this code should work directly without any changes. So, to sum things up: how can I solve this using the Amazon S3 SDK, and how should I implement my Open(FileMode fileMode) method in my Amazon S3 virtual path provider?

asked Nov 13 '22 by marcus

1 Answer

Hey, I faced this problem too, and I solved it by implementing a stream.

Here is the way I did it; maybe it helps:

public static Stream OpenStream(S3TransferUtility transferUtility, string key)
{
    byte[] buffer = new byte[Buffersize + Buffersize / 2];

    S3CopyMemoryStream s3CopyStream =
        new S3CopyMemoryStream(key, buffer, transferUtility)
            .WithS3CopyFileStreamEvent(CreateMultiPartS3Blob);

    return s3CopyStream;
}

My stream overrides the Close and Write(array, offset, count) methods of MemoryStream and uploads the stream to Amazon S3 in parts.

public class S3CopyMemoryStream : MemoryStream
{
    public S3CopyMemoryStream WithS3CopyFileStreamEvent(StartUploadS3CopyFileStreamEvent doing)
    {
        S3CopyMemoryStream s3CopyStream =
            new S3CopyMemoryStream(this._key, this._buffer, this._transferUtility);

        s3CopyStream.StartUploadS3FileStreamEvent = doing;

        return s3CopyStream;
    }

    public S3CopyMemoryStream(string key, byte[] buffer, S3TransferUtility transferUtility)
        : base(buffer)
    {
        if (buffer.LongLength > int.MaxValue)
            throw new ArgumentException("The length of the buffer may not be longer than int.MaxValue", "buffer");

        InitiatingPart = true;
        EndOfPart = false;
        WriteCount = 1;
        PartETagCollection = new List<PartETag>();

        _buffer = buffer;
        _key = key;
        _transferUtility = transferUtility;
    }
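If the whole file fits in memory, the pattern described above can be reduced to a minimal sketch: a MemoryStream whose Close override pushes the buffered bytes to S3 in one call instead of in parts. This assumes TransferUtility from the AWS SDK for .NET; the class, field and bucket/key names are illustrative, not from the original answer:

```csharp
using System.IO;
using Amazon.S3.Transfer;

public class S3UploadOnCloseStream : MemoryStream
{
    private readonly TransferUtility _transferUtility;
    private readonly string _bucketName;
    private readonly string _key;

    public S3UploadOnCloseStream(TransferUtility transferUtility, string bucketName, string key)
    {
        _transferUtility = transferUtility;
        _bucketName = bucketName;
        _key = key;
    }

    public override void Close()
    {
        // Rewind and hand the buffered content to the SDK before
        // the stream is released.
        Seek(0, SeekOrigin.Begin);
        _transferUtility.Upload(this, _bucketName, _key);
        base.Close();
    }
}
```

The trade-off against the multipart version above is memory: this sketch buffers the entire file before uploading, which is fine for small files but defeats the purpose of streaming large ones.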

The event StartUploadS3FileStreamEvent invokes the calls that initiate the multipart upload, upload each part, and complete the upload.
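The answer doesn't show those three calls, but as a hedged sketch, they correspond to the low-level multipart API of the AWS SDK for .NET (InitiateMultipartUpload, UploadPart, CompleteMultipartUpload); the method and parameter names below are illustrative, and a real implementation would loop over parts as the stream fills:

```csharp
using System.Collections.Generic;
using System.IO;
using Amazon.S3;
using Amazon.S3.Model;

public static void UploadInParts(IAmazonS3 client, string bucketName, string key, Stream part)
{
    // 1. Initiate the multipart upload and remember the upload id.
    var initResponse = client.InitiateMultipartUpload(new InitiateMultipartUploadRequest
    {
        BucketName = bucketName,
        Key = key
    });

    // 2. Upload each part (only one shown here) and collect its ETag.
    var partETags = new List<PartETag>();
    var uploadResponse = client.UploadPart(new UploadPartRequest
    {
        BucketName = bucketName,
        Key = key,
        UploadId = initResponse.UploadId,
        PartNumber = 1,
        InputStream = part
    });
    partETags.Add(new PartETag(1, uploadResponse.ETag));

    // 3. Complete the upload by handing back all collected part ETags.
    client.CompleteMultipartUpload(new CompleteMultipartUploadRequest
    {
        BucketName = bucketName,
        Key = key,
        UploadId = initResponse.UploadId,
        PartETags = partETags
    });
}
```

This is what the PartETagCollection in the stream's constructor is for: every UploadPart response contributes an ETag that must be handed back in the final CompleteMultipartUpload call.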

Alternatively you could implement a FileStream which is much easier because you can use

TransferUtilityUploadRequest request =
    new TransferUtilityUploadRequest()
        .WithAutoCloseStream(false)
        .WithBucketName(transferUtility.BucketName)
        .WithKey(key)
        .WithPartSize(stream.PartSize)
        .WithInputStream(stream) as TransferUtilityUploadRequest;

transferUtility.Upload(request);

in the Close method of the overridden FileStream. The disadvantage is that you have to write all the data to disk first, and only then can you upload it.
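As a sketch of that alternative: a FileStream subclass that buffers everything to a temporary file and uploads the file when it is closed. This again assumes TransferUtility from the AWS SDK for .NET; the class name and temp-file handling are illustrative:

```csharp
using System.IO;
using Amazon.S3.Transfer;

public class S3UploadOnCloseFileStream : FileStream
{
    private readonly TransferUtility _transferUtility;
    private readonly string _bucketName;
    private readonly string _key;
    private readonly string _tempPath;

    public S3UploadOnCloseFileStream(TransferUtility transferUtility, string bucketName, string key)
        : this(transferUtility, bucketName, key, Path.GetTempFileName()) { }

    private S3UploadOnCloseFileStream(TransferUtility transferUtility, string bucketName, string key, string tempPath)
        : base(tempPath, FileMode.Create, FileAccess.ReadWrite)
    {
        _transferUtility = transferUtility;
        _bucketName = bucketName;
        _key = key;
        _tempPath = tempPath;
    }

    public override void Close()
    {
        // Flush and release the local file first, then let the SDK
        // upload it and clean up the temporary copy.
        base.Close();
        _transferUtility.Upload(_tempPath, _bucketName, _key);
        File.Delete(_tempPath);
    }
}
```

A stream like this could be returned directly from the questioner's Open(FileMode fileMode) in the S3 provider, so the calling code from the question runs unchanged.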

answered Nov 15 '22 by zirbel