 

How to deal with allocations in constrained execution regions?

Tags:

c#

.net

pinvoke

Constrained Execution Regions (CERs) are a feature of C# / .NET that let the developer keep the 'big three' asynchronous exceptions out of critical regions of code: OutOfMemoryException, StackOverflowException, and ThreadAbortException.

CERs achieve this by postponing thread aborts, by preparing all methods in the call graph ahead of time (so no JIT compilation, which can cause allocations, has to occur while the region runs), and by ensuring enough stack space is available for the ensuing call stack.

A typical uninterruptible region might look like:

public static void GetNativeFlag()
{
    IntPtr nativeResource = new IntPtr();
    int flag;

    // Remember, only the finally block is constrained; try is normal.
    RuntimeHelpers.PrepareConstrainedRegions();
    try
    { }
    finally
    {
        NativeMethods.GetPackageFlags( ref nativeResource );

        if ( nativeResource != IntPtr.Zero ) {
            flag = Marshal.ReadInt32( nativeResource );
            NativeMethods.FreeBuffer( nativeResource );
        }
    }
}

The above is all well and good because none of the rules are broken inside the CER: all managed allocations happen outside the CER, Marshal.ReadInt32() has a compatible ReliabilityContract, and we're presuming my NativeMethods are tagged similarly so that the VM can properly account for them when preparing the CER.
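For reference, here is a minimal sketch of what that tagging might look like. The DLL name, entry points, and signatures below are illustrative assumptions, not the actual NativeMethods from this code:

```csharp
using System;
using System.Runtime.ConstrainedExecution;
using System.Runtime.InteropServices;

// Hypothetical P/Invoke stubs tagged with ReliabilityContract so the CLR
// can account for them when preparing a CER. The DLL name and signatures
// are assumptions for illustration only.
internal static class NativeMethods
{
    [DllImport( "nativelib.dll" )]
    [ReliabilityContract( Consistency.WillNotCorruptState, Cer.Success )]
    internal static extern void GetPackageFlags( ref IntPtr nativeResource );

    [DllImport( "nativelib.dll" )]
    [ReliabilityContract( Consistency.WillNotCorruptState, Cer.Success )]
    internal static extern void FreeBuffer( IntPtr nativeResource );
}
```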

So with all of that out of the way, how do you handle situations where an allocation has to happen inside of a CER? Allocations break the rules, since it's very possible to get an OutOfMemoryException.

I've run into this problem when querying a native API (SSPI's QuerySecurityPackageInfo) that forces me to break these rules. The native API does perform its own (native) allocation, but if that fails I just get an empty result, so no big deal there. However, in the structure that it allocates, it stores a few C-strings of unknown size.

When it gives me back a pointer to the structure it allocated, I have to copy the entire thing, allocating room to store those C-strings as .NET string objects. After all of that, I'm supposed to tell it to free its allocation.

However, since I'm performing .Net allocations in the CER, I'm breaking the rules and possibly leaking a handle.

What is the right way to deal with this?

For what it's worth, this is my naive approach:

internal static SecPkgInfo GetPackageCapabilities_Bad( string packageName )
{
    SecPkgInfo info;

    IntPtr rawInfoPtr;

    rawInfoPtr = new IntPtr();
    info = new SecPkgInfo();

    RuntimeHelpers.PrepareConstrainedRegions();
    try
    { }
    finally
    {
        NativeMethods.QuerySecurityPackageInfo( packageName, ref rawInfoPtr );

        if ( rawInfoPtr != IntPtr.Zero )
        {
            // This performs allocations as it makes room for the strings contained in the result.
            Marshal.PtrToStructure( rawInfoPtr, info );

            NativeMethods.FreeContextBuffer( rawInfoPtr );
        }
    }

    return info;
}

Edit

I should mention that 'success' for me in this case means I never leak the handle; it's fine if an allocation fails, as long as the handle is released and I return an error to my caller indicating that the allocation failed. I just can't leak handles.

Edit to respond to Frank Hileman

We don't have much control over the memory allocations required when we perform interop calls.

Depends on what you mean - memory that might be allocated to perform the call invocation, or the memory created by the invoked call?

We have perfect control over the memory allocated to perform the invocation - that's memory created by the JIT to compile the involved methods, and the memory needed by the stack to perform the invocation. The JIT compile memory is allocated during the preparation of the CER; if that fails, the whole CER is never executed. The CER preparation also calculates how much stack space is needed in the static call graph performed by the CER, and aborts the CER preparation if there's not enough stack.

Coincidentally, this includes stack-space preparation for any try-catch-finally frames, even nested ones, that participate in the CER. Nesting try-catch-finally blocks inside a CER is perfectly reasonable, because the JIT can calculate the stack memory needed to record each try-catch-finally context and abort the CER preparation all the same if too much is needed.

The call itself may do some memory allocations outside the .net heap; I am surprised native calls are allowed inside a CER at all.

If you meant native memory allocations performed by the invoked call, then that's also not a problem for CERs. Native memory allocations either succeed or return a status code. OOMs are not generated by native memory allocations. If a native allocation fails, presumably the native API I'm invoking handles it by returning a status code, or a null pointer. The call is still deterministic. The only side effect is that it may cause subsequent managed allocations to fail due to increased memory pressure. However, if we either never perform allocations, or can deterministically handle failed managed allocations, then it's still not a problem.

So the only kind of allocation that is bad in a CER is a managed allocation, since it may cause the 'asynchronous' OOM exception. The question then becomes: how do I deterministically handle a failed managed allocation inside a CER?

But that's completely possible. A CER can have nested try-catch-finally blocks. All calls in a CER, and all stack space needed by a CER, even for recording the context of a nested try-finally inside a CER's finally, can be deterministically calculated during the preparation of the entire CER, before any of my code actually executes.
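Here is a minimal, self-contained sketch of that idea, using Marshal.AllocHGlobal as a stand-in for the native API's allocation (the method and its names are assumptions for illustration): the managed allocation sits in an inner try whose finally frees the native buffer, so the buffer is released even if the allocation throws.

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

internal static class NestedCerSketch
{
    // Copies `size` bytes out of a native buffer into a managed array.
    // The inner try-finally guarantees the native buffer is freed even
    // if the managed allocation inside the CER throws OutOfMemoryException.
    internal static byte[] CopyNativeBuffer( int size )
    {
        byte[] managedCopy = null;
        IntPtr nativeBuffer = IntPtr.Zero;

        RuntimeHelpers.PrepareConstrainedRegions();
        try
        { }
        finally
        {
            // Stand-in for the native API's own allocation.
            nativeBuffer = Marshal.AllocHGlobal( size );

            if ( nativeBuffer != IntPtr.Zero )
            {
                try
                {
                    // Managed allocation; may throw OutOfMemoryException.
                    managedCopy = new byte[size];
                    Marshal.Copy( nativeBuffer, managedCopy, 0, size );
                }
                finally
                {
                    // Runs even if the allocation above failed.
                    Marshal.FreeHGlobal( nativeBuffer );
                }
            }
        }

        return managedCopy;
    }
}
```

(On modern .NET, PrepareConstrainedRegions is a no-op kept for compatibility; the nested try-finally cleanup pattern still holds.)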

asked Jun 27 '14 by antiduh

1 Answer

It is possible to perform managed allocations inside of a CER, so long as the CER is prepared to handle the failed allocation.

First, this is the broken code:

SecPkgInfo info;
SecurityStatus status = SecurityStatus.InternalError;
SecurityStatus freeStatus;

IntPtr rawInfoPtr;

rawInfoPtr = new IntPtr();
info = new SecPkgInfo();

RuntimeHelpers.PrepareConstrainedRegions();
try
{ }
finally
{
    status = NativeMethods.QuerySecurityPackageInfo( packageName, ref rawInfoPtr );

    if ( rawInfoPtr != IntPtr.Zero  )
    {
        if ( status == SecurityStatus.OK )
        {
            // *** BWOOOP **** BWOOOP ***
            // This performs allocations as it makes room for the strings contained 
            // in the SecPkgInfo class. That means that we're performing managed 
            // allocation inside of a CER. This CER is broken and may cause a leak because
            // it never calls FreeContextBuffer if an OOM is caused by the Marshal.
            Marshal.PtrToStructure( rawInfoPtr, info );
        }

        freeStatus = NativeMethods.FreeContextBuffer( rawInfoPtr );
    }
}

Since try-catch-finally blocks can be nested, and any extra stack space needed by a nested try-catch-finally is precalculated during the CER's preparation, we can use a try-finally inside our CER's main finally to ensure that our FreeContextBuffer call is never skipped:

SecPkgInfo info;
SecurityStatus status = SecurityStatus.InternalError;
SecurityStatus freeStatus;

IntPtr rawInfoPtr;

rawInfoPtr = new IntPtr();
info = new SecPkgInfo();

RuntimeHelpers.PrepareConstrainedRegions();
try
{ }
finally
{
    status = NativeMethods.QuerySecurityPackageInfo( packageName, ref rawInfoPtr );

    if ( rawInfoPtr != IntPtr.Zero  )
    {
        try
        {
            if ( status == SecurityStatus.OK )
            {
                // This may fail but the finally will make sure we always free the native pointer.
                Marshal.PtrToStructure( rawInfoPtr, info );
            }
        }
        finally
        {
            freeStatus = NativeMethods.FreeContextBuffer( rawInfoPtr );
        }
    }
}

I've also put together a demo program, available at http://www.antiduh.com/tests/LeakTest.zip. It has a little custom native DLL that keeps track of allocations, and a managed app that invokes that DLL. It shows how a CER, using nested try-finally's can still deterministically release unmanaged resources even when part of the CER causes an OOM exception.

answered Nov 04 '22 by antiduh