360,000 records per year is not really a big issue. That volume is fine.
There are a lot of ways to improve performance:
1) You can create indexes. Keep in mind that every extra index can slow down inserts, so think before creating multiple indexes. If the table has a primary key, a clustered index is created on it automatically. After the indexes are created, they can become fragmented over time, so you may need to rebuild or reorganize them periodically (see the index sketch after this list).
2) You can create some views; an indexed view can even materialize an aggregate (see the sketch below).
3) You can archive data based on some date or year (see the archiving sketch below).
4) You can split one table's data into multiple tables, for example by moving rarely used columns into a side table (see the last sketch below).
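Here is a minimal index sketch, assuming a table dbo.Orders with an OrderDate column (both names are made up for illustration):

    -- A nonclustered index to speed up date-range queries.
    CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
        ON dbo.Orders (OrderDate);

    -- Periodic maintenance: rebuild the index when it is heavily
    -- fragmented, or reorganize it when fragmentation is moderate.
    ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;
    ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;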
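For the views suggestion, note that a plain view only simplifies queries; an indexed view actually stores its result. A sketch, assuming dbo.Orders has a CustomerID column and a non-nullable Amount column (certain SET options must be on, which they usually are by default):

    -- Indexed views require SCHEMABINDING, two-part table names,
    -- and COUNT_BIG(*) when GROUP BY is used.
    CREATE VIEW dbo.vOrderTotals
    WITH SCHEMABINDING
    AS
    SELECT CustomerID,
           COUNT_BIG(*) AS OrderCount,
           SUM(Amount)  AS TotalAmount  -- Amount assumed NOT NULL
    FROM dbo.Orders
    GROUP BY CustomerID;
    GO

    -- The unique clustered index is what persists the view's data.
    CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
        ON dbo.vOrderTotals (CustomerID);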
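For archiving by date, one common pattern is to copy old rows into an archive table and then delete them from the main table, all in one transaction. A sketch, assuming dbo.Orders_Archive already exists with the same structure as dbo.Orders:

    BEGIN TRANSACTION;

    -- Move rows older than the cutoff date into the archive table.
    INSERT INTO dbo.Orders_Archive
    SELECT * FROM dbo.Orders
    WHERE OrderDate < '20200101';

    DELETE FROM dbo.Orders
    WHERE OrderDate < '20200101';

    COMMIT TRANSACTION;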
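And for splitting, a vertical split keeps the frequently used columns in the main table and moves wide, rarely read columns into a side table joined by the key. A sketch, assuming dbo.Orders has an OrderID primary key:

    CREATE TABLE dbo.OrdersExtended
    (
        OrderID INT NOT NULL
            PRIMARY KEY
            REFERENCES dbo.Orders (OrderID),
        Notes   NVARCHAR(MAX)  NULL,  -- large, infrequently read columns
        RawData VARBINARY(MAX) NULL
    );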
HTH.
The important question is how much data there is in total, not how many records there are. If each record is 100 bytes, then 360,000 records is 360,000 × 100 = 36,000,000 bytes, which is about 35,156 KB, or roughly 34 MB. That is not very big at all. If the records are larger, then it might be possible to reduce the size of each record, but if not, then you must do what the application needs to do.
Perhaps, if the records are large, the table can be split into two or more tables, with the frequently used data separate from the infrequently used data.
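Rather than estimating, you can ask SQL Server how much space the table actually uses; the table name here is an assumption:

    -- Reports row count, reserved space, data size, and index size.
    EXEC sp_spaceused N'dbo.Orders';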
Vijay,
There is no such thing as a table crash.
We can say the database crashed or the hard disk crashed; those are two different scenarios.
It all depends on the server's hard disk, RAM, the space/size allocated during database creation, and so on.
You can overcome your problem by using the Partitioned Tables feature (a sketch follows).
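A minimal partitioning sketch by year; the function, scheme, table names, and boundary dates are all made up for illustration:

    CREATE PARTITION FUNCTION pfOrdersByYear (DATE)
    AS RANGE RIGHT FOR VALUES ('20220101', '20230101', '20240101');

    CREATE PARTITION SCHEME psOrdersByYear
    AS PARTITION pfOrdersByYear ALL TO ([PRIMARY]);

    -- The partitioning column must be part of the clustered key.
    CREATE TABLE dbo.OrdersPartitioned
    (
        OrderID   INT  NOT NULL,
        OrderDate DATE NOT NULL,
        CONSTRAINT PK_OrdersPartitioned
            PRIMARY KEY CLUSTERED (OrderID, OrderDate)
    )
    ON psOrdersByYear (OrderDate);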
Also, look into the database Autogrow settings (see below).
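You can check and adjust autogrow like this; the database and logical file names are assumptions:

    -- Check the current growth settings for each file.
    SELECT name, growth, is_percent_growth
    FROM sys.master_files
    WHERE database_id = DB_ID('MyDatabase');

    -- Prefer a fixed growth increment over a small percentage.
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 256MB);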
OK, but can 360,000 records or more crash the table? Is that right?