Overview

Technology moves fast, and keeping up with it is a challenge. Every time we catch up, technology changes again. It is a never-ending chase, and that is part of what makes software development so interesting.

The focus of this article is to give developers and project managers a quick view of many of the newer Microsoft technologies out there and to help them see the value in each. To be frank, the technologies covered here are not brand new; some of them were introduced to the IT world three to four years ago. But in relative terms they are still new in the Microsoft world, because real-world projects and products take time to adopt these changes.

Microsoft has done an extremely good job of delivering more mature and innovative tools and technologies in the past five years. It is amazing how much Microsoft has shipped to the IT industry in these few years, and at times it can feel a little overwhelming. My goal in this article is to explain the key benefits of these new technologies and help everyone get a quick hold of them.

Finally, the key point I would like to make before shifting focus to the technology itself is that we all need to know what the current tool and technology choices are, so that we can make the right decisions in projects when we need them. That is not the same as picking a technology first and deciding to use it in a specific scenario just because we believe it is good. Understanding the requirement and figuring out what fits best is the key, rather than jumping in because some technology looks cool and we want to do something with it.

All details provided in this document are my personal views based on hands-on experience with these tools and technologies. You may agree or disagree depending on how you see these technologies and how you have used them.

I would like to take this opportunity to thank all the people who share their knowledge and time and provide very valuable information on the Internet. It is those wonderful and useful articles, blogs, and videos contributed by everyone that make the difference. This article is a tiny attempt to give my share of that contribution, and I hope it helps.

Finally, the sample code shown in this document focuses mainly on demonstrating concepts and is not of production quality. In production-quality code, developers need to take care of error checking and exception handling, which is omitted here for simplicity.

.NET Language and Development Tools

C# 3.0: Specification

Following are a few key new features in C# 3.0 that you might be interested in:

Anonymous types

In short, developers can package results into an instance of a class without having to define that class up front.

Problem: Given a string array, pick only the strings that start with 'W' and build a key/value collection for the matching strings. The value collection should be the result of splitting the remainder of the string (excluding the key), and the results should include the total data length.

Solution: We could jump into writing nested loops and build additional types and collections up front to hold the results, using well-known approaches.

But try to think this through with LINQ, and you will appreciate both LINQ and the idea of anonymous types in this scenario.

The sample code below solves the problem in one line and also lets us hold the results in instances of a class that we never defined up front.

Sample code:

static void Main(string[] args)
{
    // INPUT:
    string[] searchStr = { "This is a no match",
                           "W1I have a match",
                           "W2Val1 Val2 Val3 Found It",
                           "O2 No Match" };

    // SOLUTION: One liner (multi-line only for readability)
    var result = from item in searchStr
                 where item[0] == 'W'
                 select new { Key = item.Substring(0, 2),
                              Values = item.Substring(2).Split(' '),
                              DataLength = item.Length };

    // Display result: Demo Only (Not part of solution)
    foreach (var item in result)
    {
        Console.WriteLine("Key: {0} length: {1}", item.Key, item.DataLength);
        foreach (string itemValue in item.Values)
        {
            Console.Write("{0} | ", itemValue);
        }
        Console.WriteLine();
    }

    // Wait for user input
    Console.ReadKey();
}


Details: In the sample above, we created objects with properties (Key, Values, DataLength) on the fly, avoiding the extra work of creating these types up front and losing focus.

Anonymous methods/Lambda expressions

Anonymous methods were introduced in C# 2.0 and are not new in 3.0. They are handy when we are adding an event handler or delegate and don't want the pain of explicitly creating a separate method for the implementation.

Following is sample code that uses a lambda expression to stop a form from closing:

this.FormClosing += (object sender, FormClosingEventArgs e) =>
{ e.Cancel = true; };

In some cases it is simpler and more readable to see the delegate implementation right next to the subscription, as shown above, instead of defining an additional method explicitly.

Operator "=>" Is referred to Lambda operator. Knowing that Lambda operator is assigning a "Method signature" to a "Method Body" is one way to understand it. What made me confusing is, when I see an expression like below:
x => x > 100;

It translates to: a method that takes a parameter x (of some type), whose body checks whether x is greater than 100 and returns true or false.

Problem: Get the list of dates that are later than today from a date collection.

Solution:                    

      // INPUT
      List<DateTime> list = new List<DateTime>();
      list.Add(DateTime.Today.AddDays(-1));
      list.Add(DateTime.Today);
      list.Add(DateTime.Today.AddDays(-2));
      list.Add(DateTime.Today.AddDays(1));
      list.Add(DateTime.Today.AddDays(2));

      // SOLUTION: One liner
      List<DateTime> result = list.FindAll(x => x > DateTime.Today);

Is it not cool how much code is reduced? In the old days, we had to write a predicate method that checked the condition and returned true or false, as shown in the comparison below. Lambda expressions simplify anonymous method usage further; technically, they are the same as anonymous methods.
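For comparison, here is a minimal sketch of how the same filter looks without lambda expressions: once with an explicitly defined predicate method and once with a C# 2.0 anonymous method. The LaterThanToday helper name is just for illustration; it is not part of any framework.

using System;
using System.Collections.Generic;

class PredicateComparison
{
    // The pre-lambda way: an explicitly defined predicate method
    static bool LaterThanToday(DateTime value)
    {
        return value > DateTime.Today;
    }

    static void Main()
    {
        List<DateTime> list = new List<DateTime>();
        list.Add(DateTime.Today.AddDays(-1));
        list.Add(DateTime.Today.AddDays(1));
        list.Add(DateTime.Today.AddDays(2));

        // Named method passed as a Predicate<DateTime>
        List<DateTime> result1 = list.FindAll(LaterThanToday);

        // C# 2.0 anonymous method: same effect, no separate named method
        List<DateTime> result2 = list.FindAll(delegate(DateTime x) { return x > DateTime.Today; });

        Console.WriteLine("{0} / {1} dates are later than today", result1.Count, result2.Count);
    }
}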

Extension methods

It is an interesting concept: we can add methods to existing classes without modifying the class code.

Problem: Allow developers to call an IsBusinessDay method on DateTime variables.

Solution:

    // POINT 1: Helper class has to be declared as static
    static class DateHelper
    {
        // POINT 2: First parameter starts with "this" plus the target type; the rest are normal parameters
        public static bool IsBusinessDay(this DateTime date, string clientCode)
        {
            // Weekend check covers both Saturday and Sunday
            return (clientCode == "USClient") &&
                   date.DayOfWeek != DayOfWeek.Saturday &&
                   date.DayOfWeek != DayOfWeek.Sunday;
        }
    }

    // USAGE
    bool isBusiness = DateTime.Today.IsBusinessDay("USClient");


What is the big deal? Yes, we can create static methods in a helper class and call them with DateHelper.IsBusinessDay syntax. But extension methods make it easy to call them on instances of the class they target. The best part is that IntelliSense works well with extension methods too when you are using an instance of the target class.

It feels like we can add methods to the .NET built-in classes, just like the Microsoft team can. Of course there are good reasons to use extension methods, and there are restrictions too.

Extension methods are useful if you are writing helper methods targeting a specific interface type or class type. If you want a generic helper method, you are better off without extension methods.

I feel extension methods were created in the first place to provide some implementation around interfaces without providing a concrete base class. Many times I have wanted to provide some concrete, reusable functionality for an interface without forcing developers to derive from a specific class. With extension methods we can achieve most of that: developers implement the interface, derive from any class they like, and still get the pre-defined methods on the interface. The sketch below illustrates the idea.
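Here is a minimal sketch of that idea, using a hypothetical IAuditable interface; the interface, method, and class names are made up for illustration and are not from any framework. Any class that implements the interface gets the helper behavior without having to inherit from a common base class.

using System;

// Hypothetical interface: implementers only promise to expose a last-modified date
interface IAuditable
{
    DateTime LastModified { get; }
}

// Reusable behavior attached to the interface itself
static class AuditableExtensions
{
    public static bool IsStale(this IAuditable item, TimeSpan maxAge)
    {
        return DateTime.Now - item.LastModified > maxAge;
    }
}

// Any class is free to pick its own base class and still gets IsStale()
class CustomerRecord : IAuditable
{
    public DateTime LastModified { get; set; }
}

class Demo
{
    static void Main()
    {
        CustomerRecord record = new CustomerRecord { LastModified = DateTime.Now.AddDays(-10) };
        Console.WriteLine(record.IsStale(TimeSpan.FromDays(7)));   // True
    }
}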

Automatic properties

This is probably the first thing many developers latched on to. Everyone wants to follow the guidelines by defining private fields and providing get/set accessor methods, but it is painful code to write, and I always wished I didn't have to. Now you don't have to.

Example:

        
    class Employee
    {
        public Employee(string name)
        {
            Name = name;
        }

        public string Name { get; private set; }
        public short Age { get; protected set; }
        public DateTime Joined { get; set; }
    }

    class Manager : Employee
    {
        Manager(string name, short age)
            : base(name)
        {
            Age = age;
        }
    }


The example above shows the different access levels we can control for the get and set accessors. So just to get a public getter with a private or protected setter, we no longer have to explicitly create a field and write the get/set methods ourselves.

Object initializer

Object initializers create and initialize an instance of a known or anonymous type in a single statement. If a class name is specified after new, an instance of that known type is created; otherwise an instance of an anonymous type is created.

Example code:

      
      Employee e1 = new Employee("Mike") { Joined = DateTime.Today };
      var e2 = new { Name = "Mike", Age = 40, Joined = DateTime.Today };

In the example above, e1 and e2 are of different types. For e1, the object initializer is combined with a constructor call (the Employee class defined earlier has no parameterless constructor, and Name and Age have non-public setters); for e2, an anonymous type is created because we did not specify a type.

What is the big deal? Object initializers simplify code in some scenarios: we don't have to declare multiple constructor overloads up front to cover every possible way of creating and initializing an instance, as the comparison below shows.
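As a small illustration, here is a sketch comparing the pre-C# 3.0 approach with an object initializer. The Contact class is a hypothetical example with public setters; it is not the Employee class shown earlier.

using System;

class Contact
{
    public string Name { get; set; }
    public string Email { get; set; }
    public DateTime Added { get; set; }
}

class Demo
{
    static void Main()
    {
        // Before C# 3.0: either add constructor overloads for every combination,
        // or create the object and assign the properties one by one.
        Contact c1 = new Contact();
        c1.Name = "Mike";
        c1.Email = "mike@example.com";
        c1.Added = DateTime.Today;

        // C# 3.0 object initializer: create and initialize in one statement,
        // without writing any extra constructors.
        Contact c2 = new Contact { Name = "Mike", Email = "mike@example.com", Added = DateTime.Today };

        Console.WriteLine("{0} / {1}", c1.Name, c2.Name);
    }
}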

Summary: Overall, these new C# features make code more interesting and let developers focus on the problem at hand instead of spending time typing boilerplate. If you don't enjoy typing, you will enjoy these shorthand techniques.

LINQ to Objects and LINQ to XML

LINQ (Language Integrated Query) is another very useful technology released with .NET 3.5. Initially I was not so sure about its usefulness, until a few scenarios proved very efficient with LINQ. The best part is that LINQ queries made the code compact and easy to read and maintain without hurting performance.

So far, from my reading and testing of LINQ to XML, I am convinced that LINQ to XML is comparable to XPath and even performs better in many scenarios. For a comparison of XPath and LINQ to XML, see: http://msdn.microsoft.com/en-us/library/bb675156.aspx. A small example is sketched below.
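For readers who have not tried it yet, here is a minimal LINQ to XML sketch. The XML structure (Orders/Order with Id and Total attributes) is made up for illustration.

using System;
using System.Linq;
using System.Xml.Linq;

class LinqToXmlDemo
{
    static void Main()
    {
        // Hypothetical XML document, for illustration only
        XDocument doc = XDocument.Parse(
            @"<Orders>
                <Order Id='1' Total='250' />
                <Order Id='2' Total='75'  />
                <Order Id='3' Total='410' />
              </Orders>");

        // Query the XML the same way we query objects
        var bigOrders = from order in doc.Descendants("Order")
                        where (decimal)order.Attribute("Total") > 100
                        select new
                        {
                            Id = (int)order.Attribute("Id"),
                            Total = (decimal)order.Attribute("Total")
                        };

        foreach (var order in bigOrders)
            Console.WriteLine("Order {0}: {1}", order.Id, order.Total);
    }
}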

VS 2008

It may feel too late to talk about Visual Studio 2008 when Visual Studio 2010 Beta 1 is already available. In my own opinion, Visual Studio 2008 is still the most powerful IDE around, and the Visual Studio Express editions are free and come with good functionality.
Some cool things in VS 2008, just to complete the picture:

  • If you are working on .NET 2.0 projects, don't stick with VS 2005; you can use VS 2008 and change the target framework to .NET 2.0. This is a really cool feature, as not everyone may be ready to move to .NET 3.5 right away but may still want to use the new IDE

  • Add Service Reference is useful for generating proxies for WCF services

    o It is similar to "Add Web Reference" in VS 2005, but it does more
    o There are advanced options to reuse a shared contract assembly instead of regenerating the types
    o There is also an option to use the older "Add Web Reference" if you are dealing with classic web services


NOTE: So far I have not been able to generate a proxy through VS 2008 for a WCF duplex (dual) contract; I ended up using SvcUtil instead. But I cannot say for sure whether this is a VS 2008 issue, as there are people who claim it works and I have no reason to doubt them. "Something wrong with my machine" is the easy answer.

Presentation Technologies/Tools

WPF: Windows Presentation Foundation


Windows Presentation Foundation (WPF) makes it easy to let business analysts create the user interface and styles while developers focus on providing functionality. With Expression Blend, business users can create user interfaces with a modest amount of training instead of relying on developers to build the UI. User interface creation is an art, and sometimes it is a burden for developers to deal with it, versus someone handing over an approved user interface along with markup that developers can use right away.

The concept of control templates and data templates in WPF is amazing. Beyond the rich user interface and richer animation, the ease of customizing almost any aspect of the UI through control styles and templates goes way beyond WinForms development. WPF and WinForms can also co-exist, and that is the best part: if you have an existing WinForms product and you are adding new features, you don't have to stick to WinForms; you can use WPF for the new development.

Learning curve: It is key to understand the concepts and guidelines before jumping into WPF. Without a deeper understanding of WPF concepts, WinForms developers asked to code in WPF may end up with a WPF application written like a WinForms one, making too little use of XAML in favor of code, or overusing control templates and styles without proper reuse.

The Pro WPF book from Apress (by Matthew MacDonald) is an excellent book that covers WPF in detail. There are many other WPF books that you might like.

Silverlight 2.0

Although I have not worked on Silverlight-based projects, trying a few samples and some reading so far has impressed me quite a bit. Silverlight 2.0 development is C# based, supports WCF, and shares many concepts with WPF. This makes it very natural for C# developers to become productive in Silverlight quickly.

The best part of Silverlight 2 is that it works in all major browsers and on multiple platforms (it is not Windows-specific, unlike Smart Clients). If you want to develop a Rich Internet Application and don't think you have the time or resources to do it with ASP.NET/AJAX, Silverlight 2 is the way to go.

I have not looked at Silverlight 3.0 and beyond, but I can only expect things to get better. I strongly believe demand for Silverlight developers will grow over the coming year or so. So if you are new to .NET development and looking for a good area in which to master your skills, I would suggest Silverlight is a good choice.

ASP.NET 3.5

A lot has been done in ASP.NET, and it is very different from what we were used to with ASP.NET 1.1. Most of ASP.NET 3.5 is based on ASP.NET 2.0, with a few new additions (the ListView control and a few other server controls, plus integrated AJAX support), although the AJAX toolkit can also be installed separately on ASP.NET 2.0.

If you are a master of ASP.NET 1.1 and are starting a new project in ASP.NET 3.5, don't assume you know it all; I would suggest some good reading on ASP.NET 3.5. It is very promising, with many good features such as Web Parts, user profile support, and so on.

There is a difference between intranet and Internet web application development, and it also depends on how heavily the site will be used. For most intranet applications, the purpose of the web application is either to manage data or to provide reports, so it may be most productive to use ASP.NET Web Forms and server controls the way we normally used them in the old days.

My suggestion is to always keep the business layer separate from your web application implementation. This will simplify future migration, make it easier to write unit tests, and keep the code more maintainable.

One word on web development: there are too many options in many areas, and data binding and data access is one such area. There are always controls or technology options that overlap. Developers need to look closely at the current requirement and the changes expected in the future, and pick what best addresses the need.

ASP.NET MVC


ASP.NET MVC helps us develop web applications using the MVC pattern. The key benefits of the MVC framework are layered separation and support for test-driven development. Having said that, it is not going to replace ASP.NET Web Forms, and it should not necessarily be seen as superior to Web Forms; ASP.NET MVC and ASP.NET Web Forms solve different problems. It looks like the Microsoft ASP.NET team is going to put equal focus on ASP.NET MVC and ASP.NET Web Forms.

Although I have not yet built a reasonably complex ASP.NET MVC website or portal (compared to ASP.NET Web Forms), there is another good reason why I think ASP.NET MVC might be a good choice for new web development. From what I have seen so far, it lets us emit cleaner, more compact, and more valid HTML. If the push for the semantic web and XHTML takes over, migrating to the next generation of web development could be a lot easier if we use ASP.NET MVC.
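To give a feel for the programming model, here is a minimal controller sketch; the ProductController name and its actions are hypothetical, and routing configuration and the views themselves are not shown.

using System.Web.Mvc;

// A controller action decides what to do with a request and which view to render.
// Routing maps a URL such as /Product/Details/5 to the Details action below.
public class ProductController : Controller
{
    public ActionResult Index()
    {
        // Renders the Index view under Views/Product, keeping markup out of the controller
        return View();
    }

    public ActionResult Details(int id)
    {
        ViewData["ProductId"] = id;   // pass data to the view
        return View();
    }
}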

jQuery

jQuery is a JavaScript library that helps us write clean, compact JavaScript code and takes care of cross-browser issues.

I like jQuery for the following reasons:

  • We can do a lot with a one-line script and avoid writing too much JavaScript
  • Easy to maintain
  • Works in multiple browsers and lets us easily write browser-independent code
  • Supported by Microsoft, and Microsoft will start shipping jQuery along with ASP.NET [that is what I heard at MDC 09]

Server-side technologies/Tools

WCF: Windows Communication Foundation

Windows Communication Foundation (WCF) is the way to build services, whether they are web-hosted or hosted in Windows services. It really is the unified technology Microsoft claims it to be, and I am impressed with WCF's extensibility and configurability.

In the past, developers used to debate .NET Remoting versus web services versus DCOM and pick one over the other. WCF takes the best of these technologies and makes them even better. So if you want to create an interface and expose it through a service, WCF is the way to go, with many choices for which protocol to use, where to host the service, and so on. The best part is that WCF can be driven by configuration, and the WCF configuration and trace tools are very useful. A minimal self-hosted service is sketched below.
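Here is a minimal sketch of a self-hosted WCF service; the IGreetingService contract, the address, and the binding choice are illustrative assumptions rather than recommendations, and in practice the endpoint usually lives in configuration.

using System;
using System.ServiceModel;

// The contract: just an interface plus attributes
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

// The implementation knows nothing about transports or protocols
public class GreetingService : IGreetingService
{
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}

class Program
{
    static void Main()
    {
        // Host the service in a console application for the demo
        using (ServiceHost host = new ServiceHost(typeof(GreetingService)))
        {
            host.AddServiceEndpoint(typeof(IGreetingService),
                                    new BasicHttpBinding(),
                                    "http://localhost:8000/GreetingService");
            host.Open();

            Console.WriteLine("Service is running. Press any key to stop.");
            Console.ReadKey();
        }
    }
}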

WF: Windows Workflow Foundation


If you have used SQL DTS or SQL Server Integration Services, you are familiar with using a designer to build a workflow and adding some custom coding tasks. If you are a BizTalk expert, I hear it is similar to BizTalk orchestration. I don't know much about BizTalk, but I have used a little bit of Workflow Foundation, so let me say a few words about it: so far I have not seen a really pressing need for Workflow Foundation.

Having said that, there were cases when building service infrastructure where we wanted a more pluggable infrastructure/architecture. Some modules required workflow-style flexibility, allowing each service request to be configured differently, and I can see the use of workflows in those scenarios.

Database Server/Technologies

This is another area where Microsoft is doing great work; they have come a long way. I have worked with Oracle and I like it very much too, so I am not trying to compare SQL Server with Oracle here, just SQL Server 2000 with what has changed recently.

SQL Express

The best part of SQL Express is that it is free; it comes with so much functionality that it is hard to believe. You can do both SQL CLR and Service Broker programming even with SQL Express. SQL Express is not the best choice for a real production database server, but it has a few key benefits. If you have an application server and want a more reliable local cache, you can install SQL Express on the application server and use it that way, as sketched below. Some people might argue against putting a database server on an application server, but if you limit its use to local cache storage there are many advantages.
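As a small illustration of the local-cache idea, here is a sketch that connects to a local SQL Express instance. The .\SQLEXPRESS instance name is the default, and the LocalCache database and CachedItems table are hypothetical.

using System;
using System.Data.SqlClient;

class LocalCacheDemo
{
    static void Main()
    {
        // Default named instance for SQL Express; database/table names are made up
        string connectionString =
            @"Data Source=.\SQLEXPRESS;Initial Catalog=LocalCache;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();

            using (SqlCommand command = new SqlCommand(
                "SELECT COUNT(*) FROM CachedItems", connection))
            {
                int cachedItems = (int)command.ExecuteScalar();
                Console.WriteLine("Items in local cache: {0}", cachedItems);
            }
        }
    }
}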

SQLCE (SQL Server Compact Edition)

So what if we want local cache support for Smart Client applications? SQL CE comes to the rescue. SQL CE is pretty good in terms of the stress/load it can take. The best part is that it is free and requires no installation; another nice part is that it can be deployed along with ClickOnce deployment.

Smart Client applications can use isolated storage or data-file-based storage, but when we need to support offline functionality for a Smart Client, SQL CE may be the better choice.

SQL Service Broker

SQL Service Broker is a really cool database technology. It provides database-based queues and allows a sender and a receiver to communicate through those queues.

There are many details around SQL Service Broker, and this article is not the place for them. But in short, Service Broker lets us build a queue in the database and attach a stored procedure to be invoked when there are messages in the queue. That is only one type of usage; we can also have two clients communicating actively through Service Broker.

SQL XML

When you are working with Service Broker, it is best to define a SQL XML schema and define message types against that schema. That way, clients sending messages to the broker queue are automatically forced to send valid data. SQL XML support in SQL Server is also very good: you can run XPath queries over XML, build XML-typed variables, and so on.

SQL CLR

There are many uses of SQL CLR, including calling a WCF service from SQL, or calling business logic or a complex formula implemented in a C# library. There are cases where we push all our logic into stored procedures and then get stuck because we want to reuse some .NET library or send information to a .NET service from SQL. In these scenarios SQL CLR comes to the rescue. A minimal sketch follows.
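As a minimal sketch, here is what a SQL CLR scalar function can look like in C#. The function name and logic are made up, and deployment (CREATE ASSEMBLY / CREATE FUNCTION, or Visual Studio's deploy step) is not shown.

using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class SqlClrFunctions
{
    // Exposed to T-SQL as a scalar function once the assembly is registered in the database
    [SqlFunction]
    public static SqlBoolean IsBusinessDay(SqlDateTime date)
    {
        if (date.IsNull)
            return SqlBoolean.Null;

        DayOfWeek day = date.Value.DayOfWeek;
        return day != DayOfWeek.Saturday && day != DayOfWeek.Sunday;
    }
}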

This is an area where developers need to pay attention to what they are doing in a SQL CLR method. It is important to justify the need for SQL CLR rather than using it just because the technology is cool. As much as possible, it is best to do things in SQL the SQL way; you will still see cases where SQL CLR is the best option, and it is better for developers to wait for that opportunity than to create one.

SQL Server Integration Services (SSIS)

DTS was a favorite of mine, so naturally I like SQL Server Integration Services. In many cases you can cut down on unwanted custom code and applications by using the out-of-the-box ETL functionality in SSIS.

Unit Testing Tools and Frameworks

Many developers love to write unit tests, and I am one of them. I don't believe that rapid development means there is no time to write unit tests; many times unit tests have saved me from unwanted bugs. Of course, we need to draw a line on how much effort a developer should put into a unit test while the functionality/code is still in flux.

Following are a few guidelines I follow:

  • Create unit tests for every interface exposed to clients
  • Create one test method for every method in the interface and verify the results as intelligently as possible
  • Combine Insert/Read/Update/Save/Delete methods in sequence to achieve some level of dynamic/integration testing

I am not against developers who write unit tests for all possible code paths, but personally I did not find it efficient, because unit test data tends to be hard-coded, is not easily usable from a nice GUI, and is not practical in some middleware/services test scenarios. So my solution is to create a more powerful multi-threaded test application using WinForms that does good multi-user, multi-threaded load/stress testing and complete functionality testing.

Visual Studio 2008 Unit Test Framework:

The Visual Studio 2008 unit test framework is a lot easier and more productive to use than NUnit. It lets you pick the projects and interfaces you need and generates the required unit test code, so it may encourage more developers to write and use unit tests by making the process much easier. A small example is sketched below.
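For anyone who has not used it, here is a minimal sketch of a test class using the VS 2008 unit test framework (MSTest). It assumes the DateHelper.IsBusinessDay extension method from the earlier sample is referenced and in scope; the dates are chosen so the day of week is known.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DateHelperTests
{
    [TestMethod]
    public void IsBusinessDay_ReturnsFalse_ForSaturday()
    {
        // Saturday, January 3, 2009
        DateTime saturday = new DateTime(2009, 1, 3);

        bool result = saturday.IsBusinessDay("USClient");

        Assert.IsFalse(result);
    }

    [TestMethod]
    public void IsBusinessDay_ReturnsTrue_ForWeekday()
    {
        // Monday, January 5, 2009
        DateTime monday = new DateTime(2009, 1, 5);

        Assert.IsTrue(monday.IsBusinessDay("USClient"));
    }
}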

Additional Integration Test Tools:

Although unit testing is a good start, custom integration test tools are very helpful for multi-user, multi-threaded test scenarios, full functionality testing, testing with more dynamic data, and stress/load testing. For these scenarios it is best to create a simple .NET WinForms application that allows more dynamic configuration options and takes user input on how many users and threads to simulate and how many times to repeat the tests.
Following are the key benefits of custom integration test tools:

  • Tests your code well across code paths and functionality
  • Covers load/stress testing
  • Useful for demonstrating the functionality of services/middle tier to clients
  • Helpful in regression testing
  • In cases where services are hosted at client locations, helps clients see how to consume the services (by providing the source code of the test application)
  • When there is a problem in a complex product with multiple components/services, a well-written test tool makes it easy to isolate the problem through different test configurations

Tips for testing and debugging

Following are a few tips that have helped me with testing and debugging:

In your testing tool: Always have the custom integration test tool create a log file that:

  • Records the input parameters for the test
  • Records how long operations took to complete (use the .NET Stopwatch)
  • Uses unique log file names that include the process ID and a date/time stamp
  • Uses a simple text file writer (System.IO.File) directly rather than the logging application block or log4net in this scenario (for regular application/service logging, the application block is good)

In your service/application:

  • Use Debug.WriteLine and similar methods to dump information that is very useful while the program is running (see the sketch after this list)
  • Debugger trace output is also handy for inspecting data at any point, rather than dumping large amounts of data into log files where it is difficult to manage or search
  • Log files are still good for DEBUG/information/error details, but some specific scenarios, like watching how a collection changes on certain events, are better handled through Debug.WriteLine. The best part is that it has no performance hit in release mode, unlike adding extra log statements for debugging these scenarios
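Here is a quick sketch of the Debug.WriteLine idea; the OrderCache class is hypothetical. Because the Debug methods are conditional on the DEBUG symbol, these calls are compiled away entirely in release builds.

using System.Collections.Generic;
using System.Diagnostics;

class OrderCache
{
    private readonly List<int> orderIds = new List<int>();

    public void Add(int orderId)
    {
        orderIds.Add(orderId);

        // Shows up in the debugger's Output window (or any attached trace listener)
        // in DEBUG builds only; removed completely from release builds.
        Debug.WriteLine(string.Format("OrderCache now holds {0} items (added {1})",
                                      orderIds.Count, orderId));
    }
}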

Performance and Memory Profiling

Performance Profiling

So far, of the different tools I have used, I have found AQTime and ANTS Profiler very useful. What I like most about ANTS Profiler is that you can see the CPU curve and locate the specific code behind any part of it.
If you are doing crude performance timing in code, use Stopwatch rather than performance counters: Stopwatch is easier to use and gives the same level of accuracy (see the sketch below).
Using Performance Monitor and watching specific performance counters is also useful in certain scenarios.
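A minimal Stopwatch sketch for crude timing; the operation being measured is just a placeholder.

using System;
using System.Diagnostics;
using System.Threading;

class TimingDemo
{
    static void Main()
    {
        Stopwatch stopwatch = Stopwatch.StartNew();

        // Placeholder for the operation being measured
        Thread.Sleep(250);

        stopwatch.Stop();
        Console.WriteLine("Operation took {0} ms ({1} ticks)",
                          stopwatch.ElapsedMilliseconds, stopwatch.ElapsedTicks);
    }
}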

Memory Profiling:

Fortunately, .NET has a free memory profiler, although it may take some time to get to know the tool. So far I find the ANTS memory profiler useful, but it is available only with the professional edition.
If you are writing services or a UI that stays open for a long time, it is good practice to keep an eye on memory usage. This is another area where custom integration test tools are very useful: you can set the repeat count to 20K or so and leave the test running overnight to check for memory leaks.
In general, pay attention to how objects are created, and use finally blocks to clean up, or a using statement on objects that implement Dispose (see the sketch below). It is not always true that we no longer need to worry about memory in .NET: if an application is written without following .NET guidelines, or developers overlook some issues, it can consume too much memory, and you may not notice unless you do enough load/stress testing.
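A small sketch of the using/Dispose point, showing deterministic cleanup of a file handle; the file name is arbitrary.

using System;
using System.IO;

class CleanupDemo
{
    static void Main()
    {
        // "using" guarantees Dispose() runs even if an exception is thrown,
        // releasing the file handle deterministically instead of waiting for the GC.
        using (StreamWriter writer = new StreamWriter("load-test.log"))
        {
            writer.WriteLine("Run started at {0}", DateTime.Now);
        }

        // Equivalent try/finally pattern, spelled out
        StreamReader reader = null;
        try
        {
            reader = new StreamReader("load-test.log");
            Console.WriteLine(reader.ReadToEnd());
        }
        finally
        {
            if (reader != null)
                reader.Dispose();
        }
    }
}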

Deployment and Administration

ClickOnce Deployment

ClickOnce is a very useful deployment model for Windows Forms or WPF-based applications. As long as the application is designed and developed following Smart Client guidelines, it makes sense to go with ClickOnce. ClickOnce deployment also has many options to override the default update behavior.

Developers can use the ClickOnce deployment API directly to update the application at any time. All it takes is the following few lines of code, which perform the actual update of a ClickOnce application and restart it if it was updated:

        
      // If you are testing the Smart Client in the debugger, this will be false
      if (System.Deployment.Application.ApplicationDeployment.IsNetworkDeployed)
      {
          if (ApplicationDeployment.CurrentDeployment.Update())
              Application.Restart(); // Change this API for WPF accordingly
      }


.NET Framework Client Profile: Many times we hear complaints that Smart Client applications require users to install the .NET Framework. The .NET Framework Client Profile lets users run Smart Client applications without installing the full .NET 3.5. However, this may not be a suitable option for all scenarios, as the Client Profile is a subset of .NET 3.5 and there are other restrictions. You can read more about this at http://msdn.microsoft.com/en-us/library/cc656912.aspx

Windows PowerShell

It is normal for every developer to write some script/WMI combination to do administrative work at some point. Some developers like Perl, VBScript, or Windows Script; while all of them have their own pros and cons, PowerShell looks like the new direction for writing administrative scripts.

There are a few useful free PowerShell IDEs with IntelliSense, making it a lot easier to write admin scripts. You can find more useful information at: http://blogs.msdn.com/PowerShell/
