Always check Page.IsValid when using Validation Controls
Always check Page.IsValid before processing a form that uses validation controls. Client-side validation can be bypassed or fail to run, so the server must confirm that all validators passed.
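As a minimal sketch (the handler and processing method are illustrative), the check looks like this:

protected void SubmitButton_Click(object sender, EventArgs e)
{
    // Client-side validation can be bypassed, so re-check on the server.
    if (!Page.IsValid)
    {
        return;
    }

    SaveOrder();  // hypothetical form-processing method
}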
Use Paging and Choose the Right Grid
Most web applications need to show data in tabular format, using a grid such as GridView, DataGrid, jqGrid, Telerik Grid or Kendo UI Grid. Choose a lightweight grid for better performance: the ASP.NET GridView relies on server-side processing and makes the page heavy, whereas jqGrid is faster because it does its work on the client side. Use paging to display data on demand instead of pulling a huge amount of data and showing it in a grid all at once. Prefer a Repeater control to a GridView, DataGrid or DataList: it is efficient, customizable and programmable, and renders faster than the other three.
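As a hedged sketch, server-side paging with a Repeater might look like this (the data source, page index and control names are illustrative):

// Bind only one page of data to a lightweight Repeater control.
var paged = new PagedDataSource
{
    DataSource = orders,           // hypothetical data source (ideally already paged by the database)
    AllowPaging = true,
    PageSize = 25,
    CurrentPageIndex = pageIndex   // hypothetical current page number
};
OrdersRepeater.DataSource = paged;
OrdersRepeater.DataBind();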
Make Ajax calls instead of ASP.NET code-behind postbacks
Call a web service from JavaScript or jQuery instead of going through server-side code, and call web methods asynchronously.
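For instance, an ASP.NET page method can be exposed to client script and invoked with jQuery's $.ajax, avoiding a full postback (the method and repository names are illustrative):

// Server side: a static page method callable from JavaScript.
// The client posts JSON to Page.aspx/GetOrderStatus.
[System.Web.Services.WebMethod]
public static string GetOrderStatus(int orderId)
{
    return OrderRepository.GetStatus(orderId);  // hypothetical data access call
}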
Store your content by using caching
ASP.NET caching stores rendered pages, page fragments or application data in memory so that later requests can be served without re-running the full page life cycle or hitting the database again. Use output caching for pages whose content changes infrequently, and the Cache API for expensive data.
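A minimal sketch of the Cache API (the cache key, the Product type and the loader method are illustrative):

var products = HttpContext.Current.Cache["ProductList"] as List<Product>;
if (products == null)
{
    products = LoadProductsFromDatabase();  // hypothetical, expensive call
    HttpContext.Current.Cache.Insert(
        "ProductList", products, null,
        DateTime.UtcNow.AddMinutes(10),                  // absolute expiration
        System.Web.Caching.Cache.NoSlidingExpiration);
}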
Choose the cheapest authentication mode that meets your needs
Authentication can also have an impact on the performance of your application. For example, Passport authentication is slower than forms-based authentication, which in turn is slower than Windows authentication.
Minimize the number of web server controls
Web server controls increase the response time of your application because they must be processed on the server before they are rendered on the client. One way to minimize their number is to use plain HTML elements wherever they suffice, for example to display static text.
Avoid using unmanaged code
Calls into unmanaged code involve costly marshalling operations. Try to reduce the number of transitions between managed and unmanaged code, and consider doing more work in each call rather than making frequent calls to do small tasks. When a type wraps an unmanaged resource, release it deterministically with a using block.
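For the resource side of this tip, a using block guarantees deterministic release:

// Dispose runs even if an exception is thrown inside the block.
using (var stream = new System.IO.FileStream("data.bin", System.IO.FileMode.Open))
{
    // ... read from the stream ...
}   // the underlying file handle (an unmanaged resource) is released here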
Avoid making frequent calls across processes
If you are working with distributed applications, each call carries the additional overhead of negotiating network and application-level protocols, and network speed itself can become a bottleneck. Try to accomplish as much work as possible in as few calls over the network as you can.
Design with Value Types
Use simple structs when you can, and when you don't do much boxing and unboxing. Value types are far less flexible than objects and end up degrading performance if used incorrectly. Be very careful about treating them like objects: each such conversion adds boxing and unboxing overhead to your program and can end up costing more than if you had stuck with classes.
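For example, boxing occurs whenever a value type is treated as an object:

int n = 42;
object boxed = n;           // boxing: allocates a heap object
int unboxed = (int)boxed;   // unboxing: copies the value back out

// Non-generic collections box every value type they store:
var untyped = new System.Collections.ArrayList();
untyped.Add(n);             // boxed

// Generic collections avoid boxing entirely:
var typed = new System.Collections.Generic.List<int>();
typed.Add(n);               // no boxing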
Remove unused HTTP modules
There are several ASP.NET default Http Modules that sit in the request pipeline and intercept each and every request. For example, SessionStateModule intercepts each request, parses the session cookie and then loads the proper session in the HttpContext. Not all of these modules are always necessary for processing in the page execution life cycle.
For example, if you aren't using the Membership and Profile providers or forms authentication, you don't need the FormsAuthentication module; remove it from the application's web.config file.
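As a sketch, the removal looks like this in web.config (classic <httpModules> section; under IIS 7 integrated mode the equivalent section is <system.webServer>/<modules>):

<httpModules>
  <remove name="FormsAuthentication" />
</httpModules>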
Minimize assemblies
Minimize the number of assemblies you use to keep your working set small. If you load an entire assembly just to use one method, you're paying a tremendous cost for very little benefit. See if you can duplicate that method's functionality using code that you already have loaded.
Use reflection wisely, if needed
Avoid reflection when there is no functional need for it. Activator.CreateInstance() in particular is expensive: I have noticed many times in the ANTS profiler that dynamically creating an instance with Activator.CreateInstance (with two constructor parameters) takes a noticeable amount of time. If you use reflection in a core module, measure it in a profiler and understand the performance hit before using it in any UI application.
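Where dynamic instantiation is unavoidable in a hot path, one option is to build a factory delegate once and reuse it (a sketch; Widget is a hypothetical type, and the code needs System.Linq.Expressions):

// Compile the factory once; each subsequent call is close to "new Widget()".
Func<Widget> factory = Expression.Lambda<Func<Widget>>(
    Expression.New(typeof(Widget))).Compile();

Widget fast = factory();                                         // cheap per call
Widget slow = (Widget)Activator.CreateInstance(typeof(Widget));  // slower per call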
Encode Using ASCII When You Don't Need UTF
By default, ASP.NET is configured to encode requests and responses as UTF-8. If ASCII is all your application needs, eliminating the UTF-8 overhead can reclaim a few cycles. Note that this can only be done on a per-application basis.
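The switch is a one-line change in web.config (a sketch; be aware this disables Unicode in requests and responses):

<system.web>
  <globalization requestEncoding="us-ascii" responseEncoding="us-ascii" />
</system.web>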
Avoid Recursive Functions / Nested Loops
This is general advice for any programming language: deeply nested loops and unbounded recursion consume significant CPU and memory, so avoid them wherever a simpler formulation exists.
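A trivial illustration of trading recursion for iteration:

// Deep recursion risks a StackOverflowException for large n...
static long SumRecursive(int n)
{
    return n == 0 ? 0 : n + SumRecursive(n - 1);
}

// ...while the iterative form runs in constant stack space.
static long SumIterative(int n)
{
    long total = 0;
    for (int i = 1; i <= n; i++) total += i;
    return total;
}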
Minimize the Use of Format()
When you can, use ToString() instead of String.Format(). In most cases it provides the functionality you need with much less overhead.
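For example:

// string.Format must parse the format string; ToString converts directly.
int count = 42;
string slower = string.Format("{0}", count);
string faster = count.ToString();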
Make JavaScript and CSS External
Using external files generally produces faster pages because the JavaScript and CSS files are cached by the browser. Inline JavaScript and CSS increases the HTML document size but reduces the number of HTTP requests. With cached external files, the size of the HTML is kept small without increasing the number of HTTP requests, thus improving the performance.
Use multiple threads when calling multiple operations
The problem arises when single-threaded code gets stuck on a long-running operation. When one method must call several independent services, invoke them on separate threads rather than sequentially; using multiple threads makes the application more responsive.
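A hedged sketch using the Task Parallel Library (.NET 4 and later; the two service calls are hypothetical):

// Start both independent calls, then wait for both to finish.
// Requires System.Threading.Tasks.
var inventoryTask = Task.Factory.StartNew(() => CallInventoryService());
var pricingTask   = Task.Factory.StartNew(() => CallPricingService());
Task.WaitAll(inventoryTask, pricingTask);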
Do Minification
Recent versions of Visual Studio and ASP.NET can bundle styles and scripts together to reduce their size. Minify your style sheets and script files to reduce the number of bytes sent to the browser.
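A sketch using the System.Web.Optimization bundling API (ASP.NET 4.5 project templates call this from App_Start; the bundle names and paths are illustrative):

public static void RegisterBundles(BundleCollection bundles)
{
    // Each bundle combines several files into one minified response.
    bundles.Add(new ScriptBundle("~/bundles/site").Include(
        "~/Scripts/jquery-{version}.js",
        "~/Scripts/site.js"));
    bundles.Add(new StyleBundle("~/Content/css").Include(
        "~/Content/site.css"));
    BundleTable.EnableOptimizations = true;  // minify even in debug builds
}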
Do compression at IIS Level
Using compression is the single most effective way to reduce page load times. The pages the server sends to the browser consist of HTML, which is highly compressible by algorithms such as gzip. Because of this, modern web servers (including IIS 5 and later) can compress outgoing files, and modern browsers can decompress incoming files.
Use Ngen.exe to optimize managed code performance
The Native Image Generator (Ngen.exe) is a tool that improves the performance of managed applications. Ngen.exe creates native images that are files containing compiled processor-specific machine code and installs them into the native image cache on the local computer. The runtime can use native images from the cache instead of using the Just-In-Time (JIT) compiler to compile the original assembly.
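For example, the following command, run from a Visual Studio command prompt, pre-compiles an assembly and its dependencies into native images (the assembly name is illustrative):

ngen install MyWebApp.dll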
Do a load test at the end of the development cycle
Visual Studio provides load testing options that let us test the application as development nears completion. We can simulate any number of users, set the test duration and run the test; doing so quickly reveals which pages have performance issues, or whether the application as a whole is performing badly. Load testing should be part of the development process, because fixing problems early costs far less than waiting for someone to report performance issues in production.
I have observed that teams that follow strict code reviews have far fewer performance issues reported, because they avoid these problems at an early stage. When we work on a customer-facing website with heavy transaction volumes, there should be a proper process, proper guidelines and proper planning during the development phase.
Script rendering order and cleaning up HTML code
If possible, move the <script> tags to the very bottom of the page. This matters because when the browser comes across a <script> tag during rendering, it stops to process the script before proceeding. If you put the script tags at the bottom of the page, the HTML renders faster and the scripts execute after the DOM elements have loaded.
Sometimes moving a script to the bottom of the page is not possible, since some DOM elements or CSS may depend on it in order to render. In such cases, move those scripts further up the page. As a rule of thumb, though, keep scripts as close to the bottom as possible.
Positioning the <script> tag towards the bottom of the page is not the only option to defer the load of script files. There are other ways too, for example, you can use the defer attribute.
Image “optimization” does not mean reducing the quality of the image; rather, it rearranges pixels and palettes to make the overall file size smaller. The web definition of image optimization is: “This term is used to describe the process of image slicing and resolution reduction. This is done to make file sizes smaller so images will load faster.”
Style sheets, by contrast, belong at the top. When style sheets are placed near the bottom of the HTML, most browsers stop rendering to avoid redrawing elements whose styles might change, which hurts page performance. So always place style sheets in the header. Also minimize the number of iframes and the amount of DOM access, use div elements instead of tables, and use a single CSS style sheet and a single script file for the entire website.
Remove unused view engines from the ASP.NET MVC pipeline
By default, the ASP.NET runtime registers two view engines in ASP.NET MVC applications: the Web Forms (ASPX) engine and the Razor engine. Unused view engines should be removed from the runtime. For example, if you use only Razor views, add the following code to global.asax.cs so that only the RazorViewEngine remains.
// In Application_Start (global.asax.cs):
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new RazorViewEngine());
Do not put C# code in your MVC view
Your ASP.NET MVC views are compiled at run time, not at compile time. If you include too much C# code in them, that code will not be compiled into your DLL files ahead of time. Not only does that damage the testability of your software, it also makes your site slower, because every view takes longer to display (it must be compiled on first use). Another downside of adding code to the views is that it cannot run asynchronously: if you build your site on the Task-based Asynchronous Pattern (TAP), you won't be able to take advantage of asynchronous methods and actions in the views.
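As a sketch, keep the shaping of data in the controller so the view only renders (the service and model are hypothetical; requires System.Linq):

public ActionResult Index()
{
    // All filtering happens here, not in the .cshtml file.
    var model = _orderService.GetRecentOrders()
                             .Where(o => o.Total > 100m)
                             .ToList();
    return View(model);
}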
Use high-performance libraries
Recently I was diagnosing the performance issues of a web site and came across a hotspot where JSON messages from a third-party web service had to be deserialized several times. Those messages were being deserialized by Newtonsoft.Json, and it so happened that Newtonsoft.Json was not the fastest library for this workload. We replaced Json.NET with a faster library (ServiceStack.Text, for example) and got much better results.
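For reference, the two calls look like this (a sketch; Order is a hypothetical type, and both NuGet packages are assumed to be installed):

// Newtonsoft.Json:
var a = Newtonsoft.Json.JsonConvert.DeserializeObject<Order>(json);

// ServiceStack.Text:
var b = ServiceStack.Text.JsonSerializer.DeserializeFromString<Order>(json);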
Tips for Database Operations
Profile the database and check for high response times on any page
Run SQL Profiler against the solution's database while hitting all the key web pages. Identify all SQL operations with high durations or CPU values and review them with an eye to optimizing them. Also identify how many SQL operations are involved in rendering each page and see if any of them can be coalesced. Aim for at most one SQL call to render any page.
Return Multiple Resultsets
If the database code has request paths that go to the database more than once, then these round-trips decrease the number of requests per second your application can serve. Return multiple resultsets in a single database request, so that you can cut the total time spent communicating with the database. You'll be making your system more scalable, too, since you'll reduce the work the database server is doing managing requests.
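A sketch with ADO.NET (the connection string, statements and column names are hypothetical; a stored procedure returning two result sets works the same way; requires System.Data.SqlClient):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT Id, Name FROM Customers; SELECT Id, Total FROM Orders", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* first result set: customers */ }
        reader.NextResult();   // advance to the second result set, no extra round-trip
        while (reader.Read()) { /* second result set: orders */ }
    }
}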
Connection Pooling and Object Pooling
Connection pooling is a useful way to reuse connections for multiple requests, rather than paying the overhead of opening and closing a connection for each request. It is done implicitly, but you get one pool per unique connection string. Be sure to call Close or Dispose on a connection as soon as possible; when pooling is enabled, this returns the connection to the pool instead of closing the underlying database connection. Account for the following issues when pooling is a part of your design (a sketch follows the list):
- Share connections.
- Avoid per-user logons to the database.
- Do not vary connection strings.
- Do not cache connections.
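A minimal sketch of the pattern (the connection string is illustrative):

// With pooling on (the default), Dispose returns the connection to the pool.
using (var conn = new SqlConnection(connectionString))  // identical strings share one pool
{
    conn.Open();
    // ... execute commands ...
}   // returned to the pool here, not physically closed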
Use SqlDataReader instead of DataSet wherever possible
If you are reading a table sequentially, use a DataReader rather than a DataSet. A DataReader object creates a read-only, forward-only stream of data that increases your application's performance because only one row is in memory at a time.
Keep Your Datasets Lean
Remember that a DataSet stores all of its data in memory, and the more data you request, the longer it takes to transmit across the wire. Therefore, put only the records you need into the DataSet.
Avoid Inefficient queries
Queries that process and then return more columns or rows than necessary waste processing cycles that could best be used for servicing other requests. Too much data in your results is usually the result of inefficient queries. The SELECT * query often causes this problem. You do not usually need to return all the columns in a row. Also, analyze the WHERE clause in your queries to ensure that you are not returning too many rows. Try to make the WHERE clause as specific as possible to ensure that the least number of rows are returned. Queries that do not take advantage of indexes may also cause poor performance.
Too many open connections
Connections are an expensive and scarce resource that should be shared between callers by using connection pooling. Opening a connection for each caller limits scalability. To ensure the efficient use of connection pooling, avoid keeping connections open and avoid varying connection strings.
Avoid Transaction misuse
If you select the wrong type of transaction management, you may add latency to each operation. Additionally, if you keep transactions active for long periods of time, the active transactions may cause resource pressure. Transactions are necessary to ensure the integrity of your data, but you need to ensure that you use the appropriate type of transaction for the shortest duration possible and only where necessary.
Avoid Over-Normalized tables
Over-normalized tables may require excessive joins for simple operations. These additional steps may significantly affect the performance and scalability of your application, especially as the number of users and requests increases.
Reduce Serialization
DataSet serialization is implemented more efficiently in .NET Framework version 1.1 than in version 1.0, but it still often introduces performance bottlenecks. You can reduce the impact in a number of ways:
- Use column name aliasing.
- Avoid serializing multiple versions of the same data.
- Reduce the number of DataTable objects that are serialized.
Do Not Use CommandBuilder at Run Time
CommandBuilder objects, such as SqlCommandBuilder and OleDbCommandBuilder, are useful when you are designing and prototyping your application. However, you should not use them in production applications: the processing required to generate the commands affects performance. Manually create stored procedures for your commands, or use the Visual Studio .NET design-time wizard and customize them later if necessary.
Use Stored Procedures Whenever Possible
Stored procedures are highly optimized tools that deliver excellent performance when used effectively. Set up stored procedures to handle inserts, updates and deletes with the data adapter. Stored procedures do not need to be interpreted, compiled or transmitted from the client, so they cut down on both network traffic and server overhead. Be sure to use CommandType.StoredProcedure instead of CommandType.Text.
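A sketch of calling a stored procedure through ADO.NET (the procedure and parameter names are hypothetical):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("usp_GetOrdersByCustomer", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;   // not CommandType.Text
    cmd.Parameters.AddWithValue("@CustomerId", customerId);
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* consume rows */ }
    }
}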
Avoid Auto-Generated Commands
When using a data adapter, avoid auto-generated commands: they require additional trips to the server to retrieve metadata and give you a lower level of control over the interaction. Although auto-generated commands are convenient, it's worth the effort to write the commands yourself in performance-critical applications.
Use Sequential Access as Often as Possible
With a data reader, use CommandBehavior.SequentialAccess wherever possible. It is essential when dealing with BLOB data types, since it allows data to be read off the wire in small chunks. Although you can only work with one piece of the data at a time, the latency of loading a large value disappears. If you don't need to work with the entire object at once, SequentialAccess will give you much better performance.
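A sketch of chunked BLOB reading (cmd and the destination stream are assumed from context; the column ordinal is illustrative):

using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
{
    while (reader.Read())
    {
        var buffer = new byte[8192];
        long offset = 0;
        long read;
        // GetBytes copies one chunk at a time instead of buffering the whole BLOB.
        while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
        {
            outputStream.Write(buffer, 0, (int)read);  // hypothetical destination stream
            offset += read;
        }
    }
}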
General database rules for high performance
Use Set NOCOUNT ON in Stored Procedures. If you turn on the NOCOUNT option, Stored Procedures will not return the row-count information to the client and this will prevent SQL Server from sending the DONE_IN_PROC message for each statement in the Stored Procedure.
Do not use the sp_ prefix for custom stored procedures. Microsoft recommends against the "sp_" prefix in user-created stored procedure names because SQL Server first searches the master database for any procedure whose name begins with "sp_".
Use the following rules:
- Create indexes based on use.
- Keep clustered index keys as small as possible.
- Consider range data for clustered indexes.
- Create an index on all foreign keys.
- Create highly selective indexes.
- Consider a covering index for often-used, high-impact queries.
- Use multiple narrow indexes rather than a few wide indexes.
- Create composite indexes with the most restrictive column first.
- Consider indexes on columns used in WHERE, ORDER BY, GROUP BY and DISTINCT clauses.
- Remove unused indexes.
- Use the Index Tuning Wizard.
- Consider table and row partitioning.
- Use RAID for better read performance.
WCF tips for performance improvement
Select the proper WCF binding
The choice of WCF binding also affects performance. There are many binding types available in WCF, each with a special purpose and security model, and we can select the proper one depending on our requirements. For example, suppose we create a WCF service that initially uses WSHttpBinding: this binding carries extra cost for security, reliable sessions and transaction flow. If we select BasicHttpBinding instead, performance improves dramatically.
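A sketch of selecting the cheaper binding in code (the service and contract types, address and port are hypothetical):

var host = new ServiceHost(typeof(OrderService));
host.AddServiceEndpoint(
    typeof(IOrderService),
    new BasicHttpBinding(),   // none of the WS-* overhead of WSHttpBinding
    "http://localhost:8080/orders");
host.Open();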
Throttling
Throttling of services is another key element of WCF performance tuning. WCF throttling provides the properties maxConcurrentCalls, maxConcurrentInstances and maxConcurrentSessions, which help us limit the number of instances or sessions created at the application level.
<serviceThrottling maxConcurrentCalls="16"
                   maxConcurrentSessions="100"
                   maxConcurrentInstances="10" />
Use data contract serialization
Serialization is the process of converting an object into a transferable format. XML and binary serialization are very useful when transferring objects over the network: XML serialization is popular for its interoperability, while binary serialization is used when transferring objects between two .NET applications.
Data contract serialization is about 10% faster than XML serialization, which can be significant when working with a large amount of data. The DataContractSerializer can serialize private and protected members as well as public ones.
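A minimal data contract (a sketch; the type and its members are hypothetical, and the attributes come from System.Runtime.Serialization):

[DataContract]
public class Order
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    private decimal total;   // private members can be serialized too
}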
Caching
External dependencies are another major problem for WCF service performance, and caching helps us avoid it. Caching allows us to store data in memory, or in some other store from which it can be retrieved quickly. We have two options: in-memory and external caching.
In-memory Caching
By default, WCF services do not have access to the ASP.NET cache. We can enable it through ASP.NET compatibility mode, by applying the AspNetCompatibilityRequirements attribute and switching compatibility on in configuration:
[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]

<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
External Caching
The problem with in-memory caching is that it is very difficult to expire an item from the cache when a user changes the underlying data on another server. Sticky sessions can help resolve this: all requests from the same source IP address are routed to the same server. We can also use Windows Server AppFabric as a dedicated cache server.
If your application is small (non-clustered, non-scalable and so on) and your service is not stateless (though you might want to make it stateless), consider in-memory caching; for large-scale applications, look at AppFabric, Coherence and similar products.
Compress data
Only serialize data that actually has to be sent across the network and that the end user really needs. In other words, avoid sending unneeded data over the wire.
Set the WCF transport and reader quota properties
WCF transport properties such as timeouts, memory allocation limits and collection size limits also help improve the performance of a service. Timeouts mitigate denial-of-service (DoS) attacks; memory allocation limits prevent a single connection from exhausting system resources and denying service to all other connections; collection size limits restrict resource consumption. The reader quota properties (MaxDepth, MaxStringContentLength, MaxArrayLength, MaxBytesPerRead and MaxNameTableCharCount) restrict message complexity to provide further protection from DoS attacks.
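For example, reader quotas are set on the binding in configuration (a sketch; the binding name and values are illustrative):

<basicHttpBinding>
  <binding name="guardedBinding">
    <readerQuotas maxDepth="32"
                  maxStringContentLength="8192"
                  maxArrayLength="16384"
                  maxBytesPerRead="4096"
                  maxNameTableCharCount="16384" />
  </binding>
</basicHttpBinding>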
Do Load Balancing and Server Addition
Load balancing should not be seen merely as a means of achieving scalability. While it definitely increases scalability, it often also improves the performance of web applications, because requests and users are distributed across multiple servers.
Write clean code during the development phase by using FxCop
FxCop is a static analysis tool from Microsoft that flags code likely to cause performance issues. If we write custom rules in FxCop and enforce them, the build will not succeed until the developer fixes the reported violations.
Some of these rules are:
- Avoid excessive locals
- Avoid uncalled private code
- Avoid uninstantiated internal classes
- Avoid unnecessary string creation
- Avoid unsealed attributes
- Review unused parameters
- Dispose methods should call SuppressFinalize
- Do not call properties that clone values in loops
- Do not cast unnecessarily
- Do not concatenate strings inside loops
- Do not initialize unnecessarily
- Initialize reference type static fields inline
- Override equals and operator equals on value types
- Prefer jagged arrays over multidimensional
- Properties should not return arrays
- Remove unused locals
- Test for empty strings using string length
- Use literals where appropriate
Tools used for performance tuning
These are tools used to monitor the performance of code. There are various performance counters for each .NET object, used to decide which area to focus on during performance tuning. Performance counters provide information about how well the operating system or an application, service or driver is performing, and the counter data can help determine system bottlenecks and fine-tune system and application performance (a programmatic sketch follows the list).
- .NET Memory Profiler
- AppDynamics
- Red Gate ANTS Profiler
- Fiddler
- Performance counters via PerfMon
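Counters can also be read from code; a hedged sketch with System.Diagnostics (the counter names are the standard processor counter set):

// Sample total CPU usage over a one-second interval.
var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
cpu.NextValue();                       // the first call only initializes the counter
System.Threading.Thread.Sleep(1000);
Console.WriteLine(cpu.NextValue());    // percentage over the sampled interval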
Conclusion: I have covered some tips for performance tuning. Performance tuning is not a one-day job; it takes iterative work to improve performance. Understanding the performance counters is fundamental to fixing any performance issue.
In the next article I will write how to deal with performance counters. Thanks for reading.
Reference:
Improving .NET Application Performance and Scalability