Praveen K

Design help needed: Requirement to process a large volume of data efficiently

Apr 23 2008 4:41 AM

Hi All,


We have a requirement to process a large volume of data efficiently, without any idle time. The data we need to process is related: each set of 'n' related records must be processed together by one DLL or thread. That is, we need to process the data in sets.


Currently we implement this using the following approach:

  • We have a Windows service and 'n' DLLs (loaded via Reflection). The service finds 'n' sets of data and assigns those 'n' sets to the 'n' DLLs for processing.
  • Once the DLLs have finished processing, the service assigns them the next sets of data.

The algorithm is like this:

  • The service finds the DLLs that are currently free (idle); say 20 DLLs are free at a particular point in time.
  • The service fetches 20 sets of data and assigns one set to each of the 20 DLLs.
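To make the dispatch loop above concrete, here is a minimal sketch (in Python for illustration only, since the actual system is a .NET Windows service and DLLs). The `Worker` class and `fetch_data_sets` callback are hypothetical stand-ins for the real DLLs and data source:

```python
class Worker:
    """Stands in for one worker DLL; 'busy' mimics its idle/processing state."""
    def __init__(self, worker_id):
        self.worker_id = worker_id
        self.busy = False

    def assign(self, data_set):
        # In the real system this would hand the data set to the DLL.
        self.busy = True

def dispatch_round(workers, fetch_data_sets):
    """One round of the dispatch loop: find idle workers, fetch that many
    data sets, and assign one set per idle worker, in sequence."""
    idle = [w for w in workers if not w.busy]      # step 1: find the idle DLLs
    data_sets = fetch_data_sets(len(idle))         # step 2: fetch as many sets
    for worker, data_set in zip(idle, data_sets):  # step 3: assign one each
        worker.assign(data_set)
    return len(idle)
```

Note that the assignments inside `dispatch_round` happen one after another, which is exactly where the wait described below comes from.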

The issue we are facing is as follows:


Say the service is assigning a set of data to the 15th of the 20 free DLLs. In the meantime, some DLLs may already have finished processing, and they sit idle until the service has assigned data to the remaining 5 DLLs. So the idle DLLs have to wait until the service finishes one full round of assignment.


We need to avoid this wait time in order to process the data efficiently.


Could anyone suggest a better way to solve this problem? We are in a position to redesign the system (i.e., the service and the 'n' DLLs). We are also thinking of doing the same thing with threads, but the issue there is that CPU usage is very high while using threading.
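One common way to remove the assignment wait is a producer/consumer design: instead of the service pushing a set to each idle worker in turn, each worker pulls its next set from a shared blocking queue the moment it finishes, so no worker waits for a dispatch round to complete. Blocking on the queue also avoids busy-waiting, which keeps idle CPU usage low. Below is a minimal sketch in Python (the same idea maps to a synchronized queue and worker threads in .NET); the per-set processing here is a placeholder, not the real work:

```python
import queue
import threading

def worker(work_queue, results):
    # Each worker blocks on the queue (no busy-wait while idle) and pulls
    # its next data set the instant it becomes free.
    while True:
        data_set = work_queue.get()
        if data_set is None:            # sentinel: no more work, shut down
            work_queue.task_done()
            return
        results.append(sum(data_set))   # stand-in for the real per-set processing
        work_queue.task_done()

def process(data_sets, worker_count=4):
    work_queue = queue.Queue()
    results = []
    threads = [threading.Thread(target=worker, args=(work_queue, results))
               for _ in range(worker_count)]
    for t in threads:
        t.start()
    for data_set in data_sets:          # the service only enqueues sets
        work_queue.put(data_set)
    for _ in threads:                   # one sentinel per worker
        work_queue.put(None)
    for t in threads:
        t.join()
    return results
```

Because each idle worker takes the next set itself, there is no sequential assignment phase during which finished workers sit idle.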


Is there a design pattern that would serve this purpose better? Can anyone suggest a better design?



Praveen Nair

Answers (3)