Monday, January 7, 2013

Null MessageBodyMember for WCF Stream Response

If you are using a WCF MessageContract that contains a MessageBodyMember of type Stream, as shown below, the Stream can never be null.

[MessageContract(WrapperNamespace = "Learnsomething.com/Internal/DataContracts/Responses/Certificates")]
public class CreateCertificateResponse : BaseResponseMessageContract, IDisposable
{

    [MessageHeader(Namespace = "Learnsomething.com/Internal/DataContracts/Responses/Certificates")]
    public long FileLength { get; set; }

    [MessageBodyMember(Order = 1, Namespace = "Learnsomething.com/Internal/DataContracts/Responses/Certificates")]
    public Stream CertificatePDF { get; set; }

    public void Dispose()
    {
        if (this.CertificatePDF != null)
        {
            this.CertificatePDF.Dispose();
            this.CertificatePDF = null;
        }
    }
}

If it is null when the response is sent back, the connection will be aborted and the client will receive a "The socket connection was aborted" error message.

If you must support an empty Stream in the response, you can make use of the System.IO.Stream.Null value. That value is an instance of the internal NullStream class, which inherits from Stream and represents an empty, readable stream.
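A minimal sketch of the workaround: fall back to Stream.Null whenever the stream you would normally assign turns out to be null (pdfStream is a hypothetical variable here, standing in for whatever produced the certificate).

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        Stream pdfStream = null;                // e.g. no certificate was generated

        // Never assign null to the Stream MessageBodyMember; use Stream.Null instead.
        Stream body = pdfStream ?? Stream.Null;

        Console.WriteLine(body.Length);         // 0 -- Stream.Null is empty
        Console.WriteLine(body.CanRead);        // True
    }
}
```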


Sunday, September 30, 2012

Queue up jQuery requests before it is loaded

If you are loading jQuery just before the closing body tag (if not, stop and read this now), you may be wondering how you can use jQuery before it has been loaded further down the page. Maybe it forced you to invent some crazy script ordering scheme, or maybe you just gave up and included it somewhere else in the body or even in the head.

In a CMS world, each widget can emit JavaScript that relies on jQuery existing on the page, but since jQuery isn't loaded until just before the closing body tag, we need a way to fake a reference to jQuery and queue up the calls temporarily until jQuery has been loaded.

I came across this post that seemed to do exactly what we needed. Before jQuery is loaded, the script will queue the jQuery function calls. Once loaded, the functions are dequeued and executed. Hooking up the script is simple and only involves adding a few small scripts to your page.

First you have to add an inline script to your head tag. The script will create a fake $ reference on the window if jQuery hasn't yet loaded which pushes each function onto an array. It then defines a new function that will be used to execute each function after jQuery eventually loads.

<script type="text/javascript">
(function (a) {
    if (a.$ || a.jQuery) return;
    var b = [];
    a.$ = function (c) { b.push(c); };
    a.defer$ = function () { var f; while ((f = b.shift())) $(f); };
})(window);
</script>

Next include jQuery just before the closing body tag.
<script type="text/javascript" src="/scripts/jquery.min.js"></script>

Finally, after the jQuery include, call the defer function that we defined in the header. This will loop through each function that was deferred and execute them now that jQuery has been loaded.

<script type="text/javascript">
defer$();
</script>

This is a pretty cool trick, and while I can't take credit for it, I wanted to put the information out there for others to learn from.


Thursday, June 14, 2012

Working around RavenDB and Safe-By-Default

I have seen these questions from time to time.
"How can I get all of the documents in RavenDB?"
"How can I force RavenDB to return more documents than the "safe-by-default" limits allow?"
The questions above usually stem from someone needing to work around the "safe-by-default" protection due to some sort of special circumstance. So if you find yourself needing these techniques more often than not, then you are doing it wrong.

You could change the RavenDB configuration by increasing the default PageSize, but that is a global change. Instead you need to do two things.
  1. Start a new DocumentSession whenever you reach the MaxNumberOfRequestsPerSession limit.
  2. Page the query with respect to the RavenDB PageSize (1024 by default).
The RavenDB paging mechanism will not override the configured maximum number of items allowed per page. So if you try to Take() more than the configured limit, you will not get an error. Instead, RavenDB's safe-by-default behavior kicks in and imposes the configured page size on your results.

Below is an example that gets all document id values that match the predicate.

IDocumentSession session = null;
try
{
    session = MvcApplication.DocumentStore.OpenSession();

    List<string> ravenPageIds = new List<string>();
    const int RAVEN_MAX_PAGE_SIZE = 1024;
    int currentPageNumber = 0;
    List<string> tempRavenPageIds;

    do
    {
        tempRavenPageIds = (from p in session.Query<Page>()
                            where p.IsActive == true
                            select p.Id)
                            .Skip(currentPageNumber++ * RAVEN_MAX_PAGE_SIZE)
                            .Take(RAVEN_MAX_PAGE_SIZE).ToList();

        ravenPageIds.AddRange(tempRavenPageIds);

        // Start a fresh session before hitting the per-session request limit.
        if (session.Advanced.NumberOfRequests >= session.Advanced.MaxNumberOfRequestsPerSession)
        {
            session.Dispose();
            session = MvcApplication.DocumentStore.OpenSession();
        }
    } while (tempRavenPageIds.Count == RAVEN_MAX_PAGE_SIZE);
}
finally
{
    if (session != null)
        session.Dispose();
}

We had code similar to this that needed to run when an MVC application started (in order to build a page routing table). Since then, we have been able to refactor the code so that we no longer need this technique. Rather than simply throw it away, I figured others might find it useful.




Thursday, May 24, 2012

MVC3 Razor and Preprocessor Directives Deficiency

You may have used preprocessor directives in ASP.NET WebForms to conditionally include code segments in markup or in the code-behind files. I have used them in master pages and aspx pages to do things like conditionally including Google Analytics, or including compressed versus verbose CSS and JavaScript.

Example including Google Analytics when DEBUG is not defined.
<% #if !DEBUG %>
<script type="text/javascript" src="/scripts/ga.min.js"></script>
<% #endif %>

Example including verbose CSS versus minified CSS.
<% #if DEBUG %>
<link rel="stylesheet" href="/Styles/Default.css" type="text/css" />
<% #else %>
<link rel="stylesheet" href="/Styles/Default.min.css" type="text/css" />
<% #endif %>

If you are like me, you may have tried to use this technique in MVC3 Razor. But what you may not know is that MVC Razor views are always compiled in DEBUG mode! That's right: Razor does not respect the debug attribute on your web.config compilation element! This is a huge issue, and frankly I am surprised that it has not yet been fixed.

In trying to find answers, I found this response on this thread which confirms the issue.

There is hope though! We have a few ways that we can conditionally include code in our Razor views.

Use IsDebuggingEnabled

Instead of using preprocessor directives in Razor, you can use the following property to test whether the debug attribute is set on the web.config compilation element (reference: Stack Overflow).
HttpContext.Current.IsDebuggingEnabled

That property will check your compilation element's debug property in web.config.
<system.web><compilation debug="true">...</compilation></system.web>
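For example, a Razor view could branch on that property to reproduce the verbose-vs-minified CSS trick from the WebForms example above (a sketch; the stylesheet paths are the ones used earlier in this post):

```cshtml
@if (HttpContext.Current.IsDebuggingEnabled)
{
    <link rel="stylesheet" href="/Styles/Default.css" type="text/css" />
}
else
{
    <link rel="stylesheet" href="/Styles/Default.min.css" type="text/css" />
}
```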

Create a Static Extension Method that uses preprocessor directives

Shawn on Stack Overflow posted an example of an extension method that can be used in conjunction with preprocessor directives.
    public static bool IsDebug(this HtmlHelper htmlHelper)
    {
#if DEBUG
      return true;
#else
      return false;
#endif
    }

Then in your view, just check IsDebug().
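Usage in a view might look like the following sketch, mirroring the Google Analytics example above (it assumes the extension method's namespace is in scope for the view):

```cshtml
@if (!Html.IsDebug())
{
    <script type="text/javascript" src="/scripts/ga.min.js"></script>
}
```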


IMHO, this issue should be addressed in MVC4. There is no reason for the code bloat that Razor causes by not respecting the DEBUG compilation flag. If anyone can find the Microsoft connect issue, please comment and I'll add in my 2 cents!


Friday, October 28, 2011

Classes, Structs and LLBLGen Pro

I just found an interesting bug that has trickled into some of our code.

Can you identify what is wrong with the following code (don’t cheat by reading below):
Guid? enrollmentId = (from e in metaData.Enrollment
                      where e.LearnerId == learnerId 
                      select e.Id).FirstOrDefault();

The problem comes from the way that FirstOrDefault() operates. In LLBLGen's case, that method will either take the first value, or it will return the default value if it does not find a match. This is where it is important to know what you are selecting. In this case, the query is selecting e.Id, which is a Guid. A Guid can never be null because it is a struct (a value type), not a class.
Other things that are structs include: bool, int, short, long, byte, DateTime, KeyValuePair, etc...
If you are selecting a value that is a class, the default value will be NULL because classes are reference types.
If you are selecting a value that is a struct, the default value will never be NULL because structs are value types.
Side Note: The default value for a struct can be determined generically using the following (where T is a struct type):
T defaultValueForType = default(T);
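To see the difference, here is a quick sketch comparing struct and class defaults:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Structs (value types) default to a zeroed-out value, never null.
        Console.WriteLine(default(Guid));            // 00000000-0000-0000-0000-000000000000
        Console.WriteLine(default(int));             // 0

        // Classes (reference types) default to null.
        Console.WriteLine(default(string) == null);  // True
    }
}
```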
So how do we select a struct using FirstOrDefault()?
You can work around this by selecting a class instead. The best way to do it is to use an Anonymous Type as shown below.
var enrollmentId = (from e in metaData.Enrollment
                    where e.LearnerId == learnerId 
                    select new { e.Id }).FirstOrDefault();

Anonymous Types are classes, therefore they can be null. By using an anonymous type in our query, the enrollmentId variable will be NULL if there are no records found.


Sunday, May 1, 2011

.NET 4 Concurrent Dictionary Gotchas

While the .NET ConcurrentDictionary<TKey, TValue> is thread-safe, not all of its methods are atomic. Microsoft points out (http://msdn.microsoft.com/en-us/library/dd997369.aspx) that the GetOrAdd and AddOrUpdate overloads that take a delegate invoke that delegate outside of the locking mechanism used by the dictionary.

Because the lock is not held while the delegate runs, the delegate could be executed more than once. I wrote some sample code and posted it to a new Stack Exchange community called Code Review to get feedback on an approach that uses the delegate overloads while maintaining atomicity. I found that by writing an extension method I could use the .NET Lazy<T> class as a wrapper for the value in the dictionary. Because a Lazy<T> value is only evaluated once, the new methods are effectively atomic.
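A minimal sketch of the idea (GetOrAddAtomic is a hypothetical name; the Code Review post has the full version). The dictionary stores Lazy<TValue> wrappers: even if two racing threads each construct a wrapper, only the wrapper that wins the GetOrAdd race has its factory executed when .Value is read.

```csharp
using System;
using System.Collections.Concurrent;

public static class ConcurrentDictionaryExtensions
{
    public static TValue GetOrAddAtomic<TKey, TValue>(
        this ConcurrentDictionary<TKey, Lazy<TValue>> dictionary,
        TKey key,
        Func<TKey, TValue> valueFactory)
    {
        // The Lazy wrapper defers the factory until .Value is read, and
        // Lazy<T> guarantees the factory runs at most once, so the
        // factories created by losing racers are never executed.
        Lazy<TValue> lazy = dictionary.GetOrAdd(
            key, k => new Lazy<TValue>(() => valueFactory(k)));
        return lazy.Value;
    }
}
```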

Pinned below is the CodeReview post.

Tuesday, January 4, 2011

.NET 4 Concurrent Dictionary Collection

In ASP.NET we use a few static generic dictionaries that store data which needs to be accessed very quickly. Generic dictionaries are not thread-safe for concurrent writes, so we had to place locks around all access to the collection.

Example using locks to add thread-safety when manipulating a generic dictionary.
private static readonly object cachedAnonymousIdentitiesLock = new object();
private static readonly Dictionary<Guid, Identity> cachedAnonymousIdentities = 
    new Dictionary<Guid, Identity>();

private Identity GetAnonymousIdentity(Guid orgId)
{
    Identity id = null;
    lock (cachedAnonymousIdentitiesLock)
    {
        if (cachedAnonymousIdentities.ContainsKey(orgId))
            id = cachedAnonymousIdentities[orgId];
        else
        {
            id = new Identity(orgId);
            cachedAnonymousIdentities.Add(orgId, id);
        }
    }
    return id;
}

The code above uses a mutual exclusion lock to provide thread-safe access to the generic dictionary which contains an immutable Identity object. While this code works fine, it does take a performance hit because of the blocking mutual exclusion lock. If a thread becomes blocked because of the lock, extra overhead is encountered because a context switch in the OS will occur. We really only need to maintain a lock for a very small amount of time while a new item is added to the collection (and truthfully a spin lock would be more efficient here).

Now in case you haven't heard, there is a new namespace in .NET 4 for concurrent collections. When I found out about this I immediately got the refactor itch. The thread-safe collections are discussed on MSDN. Looking around I found the System.Collections.Concurrent.ConcurrentDictionary<TKey, TValue> class. This class uses fine-grained locking (and lock-free reads) instead of a single blocking lock for thread-safe access. Additionally, the class provides a few friendly methods that make adding a new item to the dictionary easier. The GetOrAdd method is a welcome addition that lets you either get an item or add it in a single call. Below is the same code refactored to use the ConcurrentDictionary.

private static readonly ConcurrentDictionary<Guid, Identity> cachedAnonymousIdentities =
    new ConcurrentDictionary<Guid, Identity>();

private Identity GetAnonymousIdentity(Guid orgId)
{
    return cachedAnonymousIdentities.GetOrAdd(orgId, key => new Identity(orgId));
}
The code is easy to follow and the locking is done automatically! I ran a quick unit test to double check the efficiency of the ConcurrentDictionary over our previous blocking method. The results show that in our scenario we do achieve a performance boost. Running the example below on a multi-core machine results in the blocking example executing in about 450 milliseconds while the concurrent example executes in about 350 milliseconds.

private readonly static ConcurrentDictionary<Guid, string> myThreadsafeObjects = new ConcurrentDictionary<Guid, string>();
private readonly static Dictionary<Guid, string> myObjects = new Dictionary<Guid, string>();
private readonly static object lockObj = new object();

[TestMethod]
public void LockingTest()
{
    int iterations = 1000000;
    string value = "hello world";
    int keyCount = 10000;
    List<Guid> keys = new List<Guid>(keyCount);
    for (int i = 0; i < keyCount; i++)
        keys.Add(Guid.NewGuid());
   
    Stopwatch lockSw = new Stopwatch();
    lockSw.Start();
    Thread t1 = new Thread(() =>
    {
        for (int i = 0; i < iterations; i++)
        {
            lock (lockObj)
            {
                if (!myObjects.ContainsKey(keys[i % keyCount]))
                    myObjects.Add(keys[i % keyCount], value);
                string itemValue = myObjects[keys[i % keyCount]];
            }
        }
    });
    t1.Start();
    for (int i = 0; i < iterations; i++)
    {
        lock (lockObj)
        {
            if (!myObjects.ContainsKey(keys[i % keyCount]))
                myObjects.Add(keys[i % keyCount], value);
            string itemValue = myObjects[keys[i % keyCount]];
        }
    }
    t1.Join();
    lockSw.Stop();
    Trace.WriteLine(string.Format("Blocking Dictionary lock test: {0} milliseconds", lockSw.ElapsedMilliseconds));

    Stopwatch concurrentSw = new Stopwatch();
    concurrentSw.Start();
    Thread t2 = new Thread(() =>
    {
        for (int i = 0; i < iterations; i++)
        {
            // No explicit lock needed; ConcurrentDictionary handles the synchronization.
            string itemValue = myThreadsafeObjects.GetOrAdd(keys[i % keyCount], value);
        }
    });
    t2.Start();
    for (int i = 0; i < iterations; i++)
    {
        string itemValue = myThreadsafeObjects.GetOrAdd(keys[i % keyCount], value);
    }
    t2.Join();
    concurrentSw.Stop();
    Trace.WriteLine(string.Format("Concurrent Dictionary lock test: {0} milliseconds", concurrentSw.ElapsedMilliseconds));
}

I'll take the simpler code and the performance gain. Great job MS on this new .NET class!

In an upcoming post I'll look into the use of the new BlockingCollection class in the Concurrent namespace. The class implements the Producer-Consumer pattern, which is a pattern we rely upon in a few scenarios and have previously implemented ourselves. I am hoping this new collection will again simplify our code while improving efficiency.
