Welcome to the C# Performance Enhancement tutorial series. I have written about this topic in the previous articles of this series, so if you are interested in learning about performance enhancement in C#, please go through all of them.
OK, let's start article 4 of this performance improvement series. Here I will show a few ADO.NET tips too. I have tested this code on my computer, and since performance varies from one system configuration to another, you may see different numbers if you run it on yours. I don't expect the results to be reversed, though; they should only differ slightly.
Step 1: Check the Connection Pooling of your project
If you are not familiar with the concept of connection pooling, I suggest you go through the following few lines.
A connection object is one of the most resource-consuming objects in ADO.NET, so we need to manage it very carefully. Opening a new connection and closing it takes a lot of resources. So, rather than creating a connection object again and again, what if we keep a connection object frozen (yes, like vegetables) and reuse it later? That is exactly what connection pooling does: a closed connection is kept alive in a pool and handed back on the next Open() call instead of a brand-new one being established. In the following example I show the performance difference between the two approaches. In the ConnectionWithPool() function I have enabled Pooling in the connection string, but in ConnectWithoutPool() I have disabled it.
Connection Pooling
using System;
using System.Diagnostics;
using System.Data.SqlClient;

namespace Test1
{
    class TestConnection
    {
        public static void ConnectionWithPool()
        {
            // Pooling=True: Close() returns the connection to the pool,
            // so each subsequent Open() reuses it instead of reconnecting.
            SqlConnection con = new SqlConnection();
            con.ConnectionString = "Data Source=.\\SQL_INSTANCE2;Initial Catalog=dbPOSPRO;Integrated Security=True;Pooling=True";
            for (int i = 0; i < 100; i++)
            {
                con.Open();
                con.Close();
            }
        }

        public static void ConnectWithoutPool()
        {
            // Pooling=False: every Open() establishes a brand-new connection.
            SqlConnection con = new SqlConnection();
            con.ConnectionString = "Data Source=.\\SQL_INSTANCE2;Initial Catalog=dbPOSPRO;Integrated Security=True;Pooling=False";
            for (int i = 0; i < 100; i++)
            {
                con.Open();
                con.Close();
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Stopwatch sw = new Stopwatch();
            sw.Start();
            TestConnection.ConnectionWithPool();
            sw.Stop();
            Console.WriteLine("With Pooling: " + sw.ElapsedTicks);

            sw.Restart();
            TestConnection.ConnectWithoutPool();
            sw.Stop();
            Console.WriteLine("Without Pooling: " + sw.ElapsedTicks);
            Console.ReadLine();
        }
    }
}
Yes, the output shows that with the Pooling option enabled, this test ran about three times faster on my machine.
So, the single-line conclusion is: "Always use connection pooling to enhance ADO.NET code performance".
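Pooling is not just on or off; the pool itself can be tuned through connection-string keywords. The sketch below uses SqlConnectionStringBuilder to show the pooling-related keywords without needing a live server; the instance name, catalog, and size values are just placeholder assumptions.

```csharp
using System;
using System.Data.SqlClient;

class PoolTuning
{
    static void Main()
    {
        // Hypothetical values: "Min Pool Size" keeps warm connections ready,
        // "Max Pool Size" caps the pool (the default maximum is 100).
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = @".\SQL_INSTANCE2",
            InitialCatalog = "dbPOSPRO",
            IntegratedSecurity = true,
            Pooling = true,
            MinPoolSize = 5,
            MaxPoolSize = 50
        };

        // Connections built from this exact string share one pool.
        Console.WriteLine(builder.ConnectionString);
    }
}
```

One detail worth remembering: connections share a pool only when their connection strings match exactly, so it pays to build the string in a single place instead of concatenating it ad hoc.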
Step 2: Implement a "Using" block to manage resources properly
Nothing new here; even if you are inexperienced you have probably heard or read this advice a thousand times. I will give the same advice in this point, but with evidence (read the practical example).
In the first function I have wrapped the connection object in a using block to manage it properly, but in the second function I did not. Please go through the following code to understand it, and keep enough courage to look at the output screen (Ha.. Ha..).
using System;
using System.Diagnostics;
using System.Data.SqlClient;

namespace Test1
{
    class TestConnection
    {
        public static void WithUsing()
        {
            for (int i = 0; i < 10; i++)
            {
                // The using block guarantees Dispose() (which closes the
                // connection) even if an exception is thrown inside it.
                using (SqlConnection con = new SqlConnection("Data Source=.\\SQL_INSTANCE2;Initial Catalog=dbPOSPRO;Integrated Security=True;Pooling=false"))
                {
                    con.Open();
                }
            }
        }

        public static void WithoutUsing()
        {
            SqlConnection con = null;
            for (int i = 0; i < 10; i++)
            {
                // No using block: the connection is closed but never disposed.
                con = new SqlConnection("Data Source=.\\SQL_INSTANCE2;Initial Catalog=dbPOSPRO;Integrated Security=True;Pooling=false");
                con.Open();
                con.Close();
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Stopwatch sw = new Stopwatch();
            sw.Start();
            TestConnection.WithUsing();
            sw.Stop();
            Console.WriteLine("With Using: " + sw.ElapsedTicks);

            sw.Restart();
            TestConnection.WithoutUsing();
            sw.Stop();
            Console.WriteLine("Without Using: " + sw.ElapsedTicks);
            Console.ReadLine();
        }
    }
}
Yes, if you manage a resource properly with a using block, it can enhance performance dramatically; in my test the difference was about 78 times, although micro-benchmark ratios like this vary between runs.
Again, the single-line conclusion is: "Use using blocks to manage resources and improve performance".
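Why does using help with correctness as well as performance? The compiler expands a using block into a try/finally that always calls Dispose(). The minimal sketch below, using a toy IDisposable of my own rather than an ADO.NET type, shows that the cleanup runs even though we never call it explicitly.

```csharp
using System;

// A toy disposable resource for illustration (not an ADO.NET type).
class Resource : IDisposable
{
    public bool Disposed;
    public void Dispose() { Disposed = true; }
}

class UsingDemo
{
    static void Main()
    {
        var r = new Resource();
        using (r)
        {
            // work with the resource; no explicit cleanup here
        }
        // The using block compiles to roughly:
        //   try { ... } finally { ((IDisposable)r).Dispose(); }
        Console.WriteLine(r.Disposed);  // True
    }
}
```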
Step 3: Use local variables in heavy iteration
In a project, looping and heavy iteration is a common scenario. Generally we use a for, while, or do-while loop (yes, a foreach loop is internally converted to a for loop), and we initialize the loop counter variable (usually i or j; I also don't know why geeks chose i and j over the remaining letters (Ha.. Ha..). It is a bit like the class C IP range 192.168.*.*: no one knows why, but everyone uses it.)
OK, we are getting off-topic; let's return to the subject. Always make the loop counter (and any other working variable) local to the function performing the operation: locals can be kept in CPU registers by the JIT, while instance fields must be read and written through the object on every iteration. I have written the following code to show the difference practically. Please go through it.
class test
{
    public void loop1()
    {
        // Both the counter and the accumulator are locals.
        int j = 0;
        for (int i = 0; i < 250000; i++)
        {
            j = j + 1;
        }
    }

    // Here the counter and the accumulator are instance fields.
    int i;
    int j = 0;

    public void loop2()
    {
        for (i = 0; i < 250000; i++)
        {
            j = j + 1;
        }
    }
}
class Program
{
    static void Main(string[] args)
    {
        test t = new test();
        Stopwatch sw = new Stopwatch();
        sw.Start();
        t.loop1();
        sw.Stop();
        Console.WriteLine("With Loop1: " + sw.ElapsedTicks);

        sw.Restart();
        t.loop2();
        sw.Stop();
        Console.WriteLine("With Loop2: " + sw.ElapsedTicks);
        Console.ReadLine();
    }
}
Here is the output screen. We can see that when we define the variables as locals (within the function), performance improves.
Step 4: LINQ is much slower than other searching techniques
Hold your breath, I will show that soon. First of all, this point is not against LINQ; I know it is a very useful feature of C# and we can apply the same LINQ query to various data sources, and so on. But think twice before using LINQ in your next piece of performance-critical code.
I have implemented one searching scenario as a demonstration. I created an array of 1000 integers and filled it sequentially from 0 to n. Then in the LINQ() function I implemented a LINQ query to search for a number (70, in my case), and in the second function I implemented a simple sequential search for the same number. (Yes, you read that correctly: a sequential search is the most time-consuming searching algorithm, and yes, a binary search is far better than a sequential search.)
using System.Linq; // required for the LINQ query below

class test
{
    public void LINQ()
    {
        int[] a = new int[1000];
        for (int i = 0; i < a.Length; i++)
        {
            a[i] = i;
        }
        // First() forces the query to execute; without it the query is
        // only defined (deferred execution) and nothing is actually searched.
        int b = (from p in a where p == 70 select p).First();
    }

    public void Linear()
    {
        int[] a = new int[1000];
        for (int i = 0; i < a.Length; i++)
        {
            a[i] = i;
        }
        for (int i = 0; i < a.Length; i++)
        {
            if (a[i] == 70)
            {
                int b = a[i]; // read the value, then stop scanning
                break;
            }
        }
    }
}
class Program
{
    static void Main(string[] args)
    {
        test t = new test();
        Stopwatch sw = new Stopwatch();
        sw.Start();
        t.LINQ();
        sw.Stop();
        Console.WriteLine("With LINQ: " + sw.ElapsedTicks);

        sw.Restart();
        t.Linear();
        sw.Stop();
        Console.WriteLine("With Search: " + sw.ElapsedTicks);
        Console.ReadLine();
    }
}
Our POC result shows that even the simplest sequential search is about 17 times faster than LINQ here. Now, why is LINQ slower?
LINQ is developer-friendly but not always performance-friendly. A LINQ query is easy to read and write, but internally it is translated into iterator and delegate calls (Where, Select, and so on), and that extra machinery costs resources on every element.
So, our conclusion is: "I do not say never use LINQ, but use LINQ in the proper place. Don't use LINQ simply for the sake of using LINQ."
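As a footnote to the comparison above: since my test array is already sorted, the textbook alternative is a binary search, and .NET ships one. A minimal sketch on the same 0-to-999 data:

```csharp
using System;

class BinarySearchDemo
{
    static void Main()
    {
        int[] a = new int[1000];
        for (int i = 0; i < a.Length; i++) a[i] = i;

        // Array.BinarySearch requires sorted input and runs in O(log n):
        // on 1000 elements it inspects about 10 slots instead of 71.
        int index = Array.BinarySearch(a, 70);
        Console.WriteLine(index);  // 70
    }
}
```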
Step 5: Use the file system to save an image rather than a database.
Let me explain this point from my real experience. A few days ago we were trying to implement a project where file uploading was very important.
Now, there are two approaches to solve this:
- Keep the file in the file system
- Keep the file in a database (after converting it to binary format)
I searched various blogs, articles, and discussion forums. Some suggested the file system; a few were in favor of the database. I was getting confused, so I implemented both approaches, and my experience was:
"Keeping the file in the file system is much faster than keeping it in the database after converting it to binary." This article has already become bulky and I don't want to bore you any more, so in a future article I will try to demonstrate it.
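To make the file-system approach concrete, here is a minimal sketch of what I mean: write the bytes to disk and keep only the resulting path (a short string) in the database row. The folder and file names here are hypothetical examples.

```csharp
using System;
using System.IO;

class FileStore
{
    static string SaveImage(string folder, string fileName, byte[] data)
    {
        Directory.CreateDirectory(folder);          // no-op if it already exists
        string path = Path.Combine(folder, fileName);
        File.WriteAllBytes(path, data);
        return path;  // persist this path in the database, not the blob itself
    }

    static void Main()
    {
        byte[] fakeImage = { 1, 2, 3, 4 };          // stand-in for real image bytes
        string path = SaveImage("uploads", "photo.jpg", fakeImage);
        Console.WriteLine(File.Exists(path));       // True
    }
}
```

The database row then holds a short path instead of a multi-megabyte binary column, which keeps the table small and lets the web server stream the file directly from disk.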
Conclusion
If you have read up to this line, I hope you have enjoyed this article. All suggestions are welcome.
Special line: Again, like my other articles, this line is dedicated to my dear reader Sam. Sam, "I have nominated you as my best reader. I am waiting for your suggestions and valuable comments".