SharePoint 2010 Internals - Series 1


Introduction

This series will help SharePoint developers, reviewers and architects choose the correct design elements and coding structures, troubleshoot issues, fix bugs and tune performance.

AllUserData Table

Within a content database, SharePoint uses a single table, AllUserData, to store the items of every list and document library. All fields from every list and library, in every site, are stored in this table. Application-defined metadata for documents in document libraries also resides in AllUserData and is accessed via joins with the Docs view.

To query list items using the SharePoint object model we can use an object of type SPList. SPList provides an Items property that returns an SPListItemCollection object. The Items property queries all items from the content database for the current SPList, and it does so every time we access the property; the retrieved items are not cached. For example, the code below iterates over the first 100 items in the current SPList object:

SPList activeList = SPContext.Current.List;
for (int i = 0; i < 100 && i < activeList.Items.Count; i++)
{
    // Each access to activeList.Items issues a fresh query against the content database
    SPListItem listItem = activeList.Items[i];
    htmlWriter.Write(listItem["Title"]);
}

In the loop above, the Items property is accessed twice per iteration: once to retrieve the Count and once to access the actual item by its index. Analyzing the actual database activity of that loop shows the following interesting result:

[Figure: 200 SQL statements get executed when iterating through SPList.Items]

The better approach is to store the SPListItemCollection object returned by the Items property in a variable and use that variable in the loop.

// Query once, then work with the in-memory collection
SPListItemCollection items = SPContext.Current.List.Items;
for (int i = 0; i < 100 && i < items.Count; i++)
{
    SPListItem listItem = items[i];
    htmlWriter.Write(listItem["Title"]);
}

The above code queries the database only once and works on an in-memory collection of all retrieved items.

There are other ways to reduce this overhead as well.

Query only the data that you really need using the SPQuery object. SPQuery allows you to:

  1. Limit the number of returned items. For example:

    SPQuery query = new SPQuery();
    query.RowLimit = 100; // we want to retrieve 100 items
    // continue from the position reached by a previously retrieved collection (prevItems)
    query.ListItemCollectionPosition = prevItems.ListItemCollectionPosition;
    SPListItemCollection items = SPContext.Current.List.GetItems(query);
    // now iterate through the items collection
  2. Limit the number of returned columns. For example, use the ViewFields property to retrieve only the ID, Text Field and XYZ columns:

    SPQuery query = new SPQuery();
    query.ViewFields = "<FieldRef Name='ID'/><FieldRef Name='Text Field'/><FieldRef Name='XYZ'/>";
     
  3. Query specific items using CAML (Collaborative Application Markup Language), as shown in the sketch after this list.
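
For instance, a minimal sketch combining a CAML filter with a row limit might look like the following. The 'Status' field and its value are illustrative assumptions, not part of the original example:

SPQuery query = new SPQuery();
// CAML filter: only items whose (hypothetical) Status field equals "Active"
query.Query = "<Where><Eq><FieldRef Name='Status'/><Value Type='Text'>Active</Value></Eq></Where>";
query.RowLimit = 100; // return at most 100 items
query.ViewFields = "<FieldRef Name='ID'/><FieldRef Name='Title'/><FieldRef Name='Status'/>";
SPListItemCollection items = SPContext.Current.List.GetItems(query);
foreach (SPListItem item in items)
{
    htmlWriter.Write(item["Title"]);
}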

SharePoint 2010 COM Components

SharePoint uses COM components for some of its core features. The managed SPSite and SPWeb objects access the content databases through COM. Internally, every SPSite instance holds a reference to the unmanaged world by means of a field member of type SPRequest, called m_Request. SPRequest is a very important internal class in the SharePoint object model and is the bridge to the unmanaged COM layer described below. The following picture explains the call flow:

[Figure: call flow from the managed object model through SPRequest]

The internal SPRequest class holds an unmanaged reference to a COM object called SP.SPRequest, with a ClassID of BDEADEE2-C265-11D0-BCED-00A0C90AB50F, which is implemented in and exposed by OWSSVR.DLL.

The SP.SPRequest COM object exposes almost 400 basic operations, and almost everything you do with the managed Microsoft .NET SharePoint object model that reads from or writes to the content database (data, fields, content types, list schemas, and so on) goes through this unmanaged COM object. Moreover, OWSSVR.DLL is actually an ISAPI extension registered in IIS, and its methods can be called directly via an HTTP request to /_vti_bin/owssvr.dll. Many Office applications (Word, Excel, InfoPath, SharePoint Designer, etc.) use HTTP calls to OWSSVR directly in order to integrate with a remote SharePoint server. It is the COM component OWSSVR.DLL that internally manipulates the content database; it is responsible for building the SQL queries at runtime and calling the stored procedures in the SharePoint content database.
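
As an illustration, OWSSVR can be called with a plain HTTP request. The sketch below uses the commonly documented Cmd=Display operation to return a list's items as XML; the server URL and list GUID are placeholders, not values from this article:

using System;
using System.IO;
using System.Net;

class OwssvrDemo
{
    static void Main()
    {
        // Placeholder site URL and list GUID - replace with real values
        string url = "http://sharepoint/sites/demo/_vti_bin/owssvr.dll" +
                     "?Cmd=Display&List={LIST-GUID}&XMLDATA=TRUE";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Credentials = CredentialCache.DefaultCredentials; // integrated Windows authentication

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // raw XML rowset of list items
        }
    }
}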

The following diagram shows at a high level the relationship between the various objects working with SPRequests:

[Figure: relationship between the various objects working with SPRequest]

The implication of this is that unmanaged objects are being created all the time, and those objects cannot be cleaned up automatically by the .NET garbage collector. This is why it is so important to call Dispose() or Close() on these unmanaged resources when we have finished using them. If we don't, they will cause unmanaged memory leaks and will severely slow down the server.

There are exceptions, though. If the SPSite and SPWeb objects are obtained through SPContext.Current, which is managed by SharePoint itself, then Dispose need not (and should not) be called.
SPSite and SPWeb objects are also not thread safe. As discussed in the earlier section, the items retrieved from an SPList are not cached; this is because the SPListItemCollection object contains an embedded SPWeb object that is not thread safe and should not be cached. Caching SharePoint objects that are not thread safe can make the application fail, or its behavior might become unpredictable.
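
The standard guidance is to wrap objects you create yourself in using blocks so the underlying SPRequest is released deterministically. A minimal sketch (the site URL and list name are placeholders):

using (SPSite site = new SPSite("http://sharepoint/sites/demo"))
using (SPWeb web = site.OpenWeb())
{
    SPList list = web.Lists["Documents"];
    // work with the list here...
}   // the unmanaged SPRequest references are released when the using blocks end

// Objects obtained from the current context are managed by SharePoint itself - do not dispose them:
SPWeb contextWeb = SPContext.Current.Web;   // no using / Dispose here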

There is a free tool called SPDisposeCheck that helps developers and administrators check custom SharePoint solutions that use the SharePoint object model against Microsoft's known dispose best practices. The tool can be installed as a Visual Studio add-in or run from the command prompt.

Event Receivers

Both synchronous and asynchronous event receivers execute in the same process that made the change that caused the event receivers to be called. For example, if you add a new file to a document library from a web part, the event receivers will execute in the w3wp.exe process of the application pool of the web application. If you add the file from a rich client (MyWinFormsApp.exe), the event receivers, including the asynchronous ones, will execute in the MyWinFormsApp.exe process.

Asynchronous event receivers run on their own threads from the System.Threading.ThreadPool. A new work item is queued with every call from the unmanaged SPRequest class. This means that if you add, say, 50 files to a document library, this results in 50 calls to EnqueueItemEventReceivers, one per item, which creates and enqueues 50 threads, each of which runs, one by one, all asynchronous event receivers for the corresponding file. If some of those event receivers take several times longer to complete than it takes to add a file, you will end up with an enormous number of running threads. We need to let these threads complete rather than paying for the resource-intensive switching between them.

One way to manage asynchronous event receivers is to add a short sleep of a certain number of milliseconds, or to change the asynchronous event receiver to a synchronous one. This is well documented on the MSDN blogs at http://blogs.msdn.com/b/unsharepoint/archive/2010/11/10/sharepoint-event-receivers-making-asynchronous-event-synchronous-to-avoid-save-conflict-error.aspx
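
If you choose to make the receiver synchronous, SharePoint 2010 lets you set this per registration. A hedged sketch using the object model (the list name, assembly and class names below are placeholders):

using (SPSite site = new SPSite("http://sharepoint/sites/demo"))
using (SPWeb web = site.OpenWeb())
{
    SPList library = web.Lists["Documents"];
    SPEventReceiverDefinition receiver = library.EventReceivers.Add();
    receiver.Name = "DocumentAddedReceiver";
    receiver.Type = SPEventReceiverType.ItemAdded;
    receiver.Assembly = "MyReceivers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef";
    receiver.Class = "MyReceivers.DocumentReceiver";
    // ItemAdded is asynchronous by default; force it to run synchronously (SharePoint 2010 and later)
    receiver.Synchronization = SPEventReceiverSynchronization.Synchronous;
    receiver.Update();
}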

Application Pools

When creating a web application in SharePoint 2010, we are required to specify the application pool it will run in. For a single physical Web Front End (WFE) server, Microsoft recommends a maximum of 10 application pools, with the caveat that this number can change with the amount of RAM the WFE has. This section explains why this limit exists and how it affects performance.

Each application pool is given its own set of server resources. To see the application pool properties, right-click any application pool and select Properties. You will see the application pool's properties sheet, as shown below.

[Figure: application pool properties sheet]

Each application pool running on the server consumes a minimum of about 100 MB of RAM, so 10 application pools consume at least 1 GB. Each application pool also has its own cache, and application pools build up a lot of information in that cache. We therefore need to cap the application pool: set the maximum virtual memory limit to an appropriate level so that if one application hogs memory it does not bring down the others.

There is an option to add more worker processes to an application pool, as shown below in the Performance tab.

[Figure: Performance tab showing the worker processes setting]

But more worker processes (w3wp.exe) waiting for requests means more data being cached, and RAM will be consumed quickly.

The Performance tab is designed to keep the application pool running efficiently. The first option on the page shuts down the worker process after the site has been idle for 20 minutes. This helps to give the server processing power and memory resources that it can use for other things until the worker process is needed again.

The next option allows limiting the inbound request queue length. By doing so, one can make sure that the site doesn't get slammed with more requests than it can handle.

The next portion of the Performance tab has to do with CPU monitoring. CPU monitoring allows one to prevent a demanding Web application from hogging the server's CPU resources. We can set the maximum percentage of CPU time that the worker process is allowed to use. If this value is exceeded, we can recycle the worker process.

So ideally these settings depend on the RAM size available on each WFE.
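
If you prefer to apply the equivalent limits from code on IIS 7.x (which SharePoint 2010 runs on) rather than through the properties sheet, a minimal sketch using Microsoft.Web.Administration might look like the following; the pool name and threshold values are illustrative assumptions:

using System;
using Microsoft.Web.Administration;

class PoolTuning
{
    static void Main()
    {
        using (ServerManager manager = new ServerManager())
        {
            ApplicationPool pool = manager.ApplicationPools["SharePoint - 80"];

            pool.ProcessModel.IdleTimeout = TimeSpan.FromMinutes(20);   // shut down idle worker processes
            pool.QueueLength = 2000;                                    // cap the inbound request queue
            pool.ProcessModel.MaxProcesses = 1;                         // avoid a web garden that multiplies cache memory
            pool.Recycling.PeriodicRestart.PrivateMemory = 800 * 1024;  // recycle the worker at roughly 800 MB (value in KB)

            manager.CommitChanges();
        }
    }
}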

Content Enumeration

There are two parts to gathering the information from a content source.

  1. The first part is the enumeration of the content items that should be crawled. The content gatherer pipeline inside mssearch.exe invokes a search component called the filter daemon to load the protocol handler or indexing connector. The connectors connect to the content source and walk the URL addresses of the content items, depositing each URL in the MSSCrawlQueue table.
  2. The second part begins when a crawl is started: a new process named mssdmn.exe is spawned under mssearch.exe. It is under this process that the content source is connected to, fetched, enumerated and parsed.

    [Figure: the mssearch.exe and mssdmn.exe crawl processes]

After a sufficient number of URLs are placed in the MSSCrawlQueue, another mssdmn.exe is spawned. This process goes back to the content source, connecting to each URL in batches as determined by the crawler impact rules, opening each document at each URL in the current batch, and downloading first the metadata about the document and then the document's contents. Both of these data streams are run through several components of the processing pipeline, such as mapping crawled properties to managed properties, file format disambiguation, linguistic processing using word breakers and stemming, and property extraction; the content is then placed in the full-text index while the metadata is placed in the SQL database property store.

One important point about crawlers in SharePoint 2010 is that they are stateless. This means that they do not store a physical, complete copy of the index or index partition. Although they do use a temporary location when building up batches to submit, if a crawl server fails, its crawl job is assigned to another crawl server to complete by the central Search Administration component.

The following figure contains the sequence used to crawl all items in a content source.

[Figure: sequence used to crawl all items in a content source]

This article is an attempt to glimpse into SharePoint 2010 internals. There are many other components and objects that I could write about, probably in the next part of this series.
