 

GZipStream - write not writing all compressed data even with flush?

Tags:

c#

gzipstream

I've got a pesky problem with GZipStream targeting .NET 3.5. This is my first time working with GZipStream; I have modeled my code after a number of tutorials, including here, and I'm still stuck.

My app serializes a DataTable to XML and inserts it into a database, storing the compressed data in a varbinary(max) field along with the original length of the uncompressed buffer. Later, when I need it, I retrieve this data, decompress it, and recreate the DataTable. The decompression is what seems to fail.

EDIT: Sadly, after changing GetBuffer to ToArray as suggested, my issue remains. Code updated below.

Compress code:

DataTable dt = new DataTable("MyUnit");
//do stuff with dt
//okay...  now compress the table
using (MemoryStream xmlstream = new MemoryStream())
{
    //instead of stream, use xmlwriter?
    System.Xml.XmlWriterSettings settings = new System.Xml.XmlWriterSettings();
    settings.Encoding = Encoding.GetEncoding(1252);
    settings.Indent = false;
    System.Xml.XmlWriter writer = System.Xml.XmlWriter.Create(xmlstream, settings);
    try
    {
        dt.WriteXml(writer);
        writer.Flush();
    }
    catch (ArgumentException)
    {
        //likely an encoding issue...  okay, base64 encode it
        var base64 = Convert.ToBase64String(xmlstream.ToArray());
        xmlstream.Write(Encoding.GetEncoding(1252).GetBytes(base64), 0, Encoding.GetEncoding(1252).GetBytes(base64).Length);
    }

    using (MemoryStream zipstream = new MemoryStream())
    {
        GZipStream zip = new GZipStream(zipstream, CompressionMode.Compress);
        log.DebugFormat("Compressing commands...");
        zip.Write(xmlstream.GetBuffer(), 0, xmlstream.ToArray().Length);
        zip.Flush();
        float ratio = (float)zipstream.ToArray().Length / (float)xmlstream.ToArray().Length;
        log.InfoFormat("Resulting compressed size is {0:P2} of original", ratio);

        using (SqlCommand cmd = new SqlCommand())
        {
            cmd.CommandText = "INSERT INTO tinydup (lastid, command, compressedlength) VALUES (@lastid,@compressed,@length)";
            cmd.Connection = db;
            cmd.Parameters.Add("@lastid", SqlDbType.Int).Value = lastid;
            cmd.Parameters.Add("@compressed", SqlDbType.VarBinary).Value = zipstream.ToArray();
            cmd.Parameters.Add("@length", SqlDbType.Int).Value = xmlstream.ToArray().Length;
            cmd.ExecuteNonQuery();

        }
    }
}

Decompress Code:

/* This is an encapsulation of what I get from the database
 public class DupUnit{
    public uint lastid;
    public uint complength;
    public byte[] compressed;
}*/
  //I have already retrieved my list of work to do from the database in a List<Dupunit> dupunits
foreach (DupUnit unit in dupunits)
{
    DataSet ds = new DataSet();
    //DataTable dt = new DataTable();
    //uncompress and extract to original datatable
    try
    {
        using (MemoryStream zipstream = new MemoryStream(unit.compressed))
        {
            GZipStream zip = new GZipStream(zipstream, CompressionMode.Decompress);
            byte[] xmlbits = new byte[unit.complength];
            //WHY ARE YOU ALWAYS 0!!!!!!!!
            int bytesdecompressed = zip.Read(xmlbits, 0, unit.compressed.Length);
            MemoryStream xmlstream = new MemoryStream(xmlbits);
            log.DebugFormat("Uncompressed XML against {0} is: {1}", m_source.DSN, Encoding.GetEncoding(1252).GetString(xmlstream.ToArray()));
            try{
               ds.ReadXml(xmlstream);
            }catch(Exception)
            {
                //it may have been base64 encoded...  decode first.
               ds.ReadXml(Encoding.GetEncoding(1254).GetString(
                 Convert.FromBase64String(
                 Encoding.GetEncoding(1254).GetString(xmlstream.ToArray())))
                 );
            }
            xmlstream.Dispose();
        }
    }
    catch (Exception e)
    {
        log.Error(e);
        Thread.Sleep(1000);//sleep a sec!
        continue;
    }
}

Note the comment above... bytesdecompressed is always 0. Any ideas? Am I doing it wrong?

EDIT 2:

So this is weird. I added the following debug code to the decompression routine:

   GZipStream zip = new GZipStream(zipstream, CompressionMode.Decompress);
   byte[] xmlbits = new byte[unit.complength];
   int offset = 0;
   while (zip.CanRead && offset < xmlbits.Length)
   {
       while (zip.Read(xmlbits, offset, 1) == 0) ;
       offset++;
   }

When debugging, sometimes that loop would complete, but other times it would hang. When I'd stop the debugging, it would be at byte 1600 out of 1616. I'd continue, but it wouldn't move at all.

EDIT 3: The bug appears to be in the compress code. For whatever reason, it is not saving all of the data. When I try to decompress the data using a third party gzip mechanism, I only get part of the original data.

I'd start a bounty, but I really don't have much reputation to give as of now :-(

asked Jul 01 '14 by longofest

2 Answers

Finally found the answer. The compressed data wasn't complete because GZipStream.Flush() does absolutely nothing to ensure that all of the data is out of the buffer: you need to call GZipStream.Close(), as pointed out here. Of course, once you have a bad compress, it all goes downhill; if you try to decompress it, Read() will always return 0.
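A minimal sketch of the corrected round trip (class, method names, and the sample payload are illustrative, not from the question): wrapping the GZipStream in a using block forces Close(), which writes the final compressed block and the gzip trailer before the backing MemoryStream is read.

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class GZipRoundTrip
{
    static byte[] Compress(byte[] data)
    {
        using (MemoryStream zipstream = new MemoryStream())
        {
            // Disposing the GZipStream (via using) calls Close(), which
            // flushes the final compressed block and writes the gzip trailer.
            using (GZipStream zip = new GZipStream(zipstream, CompressionMode.Compress))
            {
                zip.Write(data, 0, data.Length);
            }
            // ToArray still works on a closed MemoryStream, so this is safe
            // even though disposing the GZipStream also closed zipstream.
            return zipstream.ToArray();
        }
    }

    static byte[] Decompress(byte[] compressed, int originalLength)
    {
        byte[] result = new byte[originalLength];
        using (MemoryStream ms = new MemoryStream(compressed))
        using (GZipStream zip = new GZipStream(ms, CompressionMode.Decompress))
        {
            int offset = 0, read;
            // Read may return fewer bytes than requested; loop until done.
            while (offset < result.Length &&
                   (read = zip.Read(result, offset, result.Length - offset)) > 0)
            {
                offset += read;
            }
        }
        return result;
    }

    static void Main()
    {
        byte[] original = Encoding.GetEncoding(1252).GetBytes("<MyUnit>sample</MyUnit>");
        byte[] compressed = Compress(original);
        byte[] restored = Decompress(compressed, original.Length);
        Console.WriteLine(Encoding.GetEncoding(1252).GetString(restored));
    }
}
```

Note that .NET 3.5 has no leaveOpen overload on the GZipStream constructor, so disposing it also disposes the underlying MemoryStream; MemoryStream.ToArray is documented to work even after the stream is closed, which is why the pattern above is safe.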

answered Oct 16 '22 by longofest

I'd say this line, at least, is the most wrong:

cmd.Parameters.Add("@compressed", SqlDbType.VarBinary).Value = zipstream.GetBuffer();

MemoryStream.GetBuffer:

Note that the buffer contains allocated bytes which might be unused. For example, if the string "test" is written into the MemoryStream object, the length of the buffer returned from GetBuffer is 256, not 4, with 252 bytes unused. To obtain only the data in the buffer, use the ToArray method.
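The difference is easy to see in isolation (a standalone sketch, not part of the code above):

```csharp
using System;
using System.IO;
using System.Text;

class BufferVsToArray
{
    static void Main()
    {
        using (MemoryStream ms = new MemoryStream())
        {
            byte[] test = Encoding.ASCII.GetBytes("test");
            ms.Write(test, 0, test.Length);

            // GetBuffer returns the entire internal buffer, including
            // allocated-but-unused capacity (typically 256 here).
            Console.WriteLine(ms.GetBuffer().Length);

            // ToArray copies out only the bytes actually written.
            Console.WriteLine(ms.ToArray().Length); // 4
        }
    }
}
```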

It should be noted that the zip format works by first locating data stored at the end of the file - so if you've stored more bytes than were actually written, the required entries at the "end" of the file are no longer where a reader expects them.


As an aside, I'd also recommend a different name for your compressedlength column - despite your narrative, I initially took it to store the length of the compressed data (and wrote part of my answer to address that). Maybe originalLength would be a better name?

answered Oct 16 '22 by Damien_The_Unbeliever