Hello.
I'm trying to optimize a process that synchronizes some proprietary objects to a database. Currently, this synchronization is done via ADO using SQL statements. There are two different operations: the initial load, which issues a large number of INSERTs into different tables, and an incremental update, which inserts/updates/removes objects one by one as they change.
Our objects are stored across multiple tables because they have a hierarchical structure.
My question is about the performance of the first operation: what is the best way to achieve a large number of inserts across multiple tables in the shortest time? We currently use ADO with one INSERT statement per row, which is obviously not the preferred method.
Thanks.

One method you can try is: define clustered indexes on each table such that new inserts are distributed across multiple pages, not only on the last page. Multiple inserts on the last page will slow down your speed.

Thanks, but I was thinking about client-side methods to improve the performance of the inserts, such as batches. I have no knowledge of advanced methods, though. The actual problem is that one request is sent per INSERT of a row, and we have roughly 1 million inserts to do.
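One common client-side technique for this kind of bulk load is to wrap many inserts in a single explicit transaction, so the server flushes the log once per batch instead of once per row. A minimal T-SQL sketch of the idea (the table and column names here are hypothetical):

```sql
-- Hypothetical table standing in for one of the hierarchy tables.
BEGIN TRANSACTION;

INSERT INTO dbo.Objects (Id, ParentId, Name) VALUES (1, NULL, 'root');
INSERT INTO dbo.Objects (Id, ParentId, Name) VALUES (2, 1, 'child-a');
INSERT INTO dbo.Objects (Id, ParentId, Name) VALUES (3, 1, 'child-b');
-- ... repeat for the rest of the batch, then:

COMMIT TRANSACTION;
```

Sent through ADO as one command batch, this also cuts the number of network round trips from one per row to one per batch. Batches of a few hundred to a few thousand rows are a typical starting point; keeping each transaction reasonably small avoids holding locks for too long.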
Originally posted by avneesh
One method you can try is: define clustered indexes on each table such that new inserts are distributed across multiple pages, not only on the last page. Multiple inserts on the last page will slow down your speed.

1. SET NOCOUNT ON
This removes the useless "(1 row(s) affected)" messages (server feedback).
2. Create stored procedures for the inserts.
Stored procedures are precompiled and therefore faster.
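Taken together, the two suggestions above might look like this in T-SQL (the procedure, table, and column names are hypothetical):

```sql
-- Hypothetical stored procedure for the per-row insert path.
CREATE PROCEDURE dbo.InsertObject
    @Id       INT,
    @ParentId INT,
    @Name     VARCHAR(64)
AS
BEGIN
    SET NOCOUNT ON;  -- suppress the "(1 row(s) affected)" message for each call
    INSERT INTO dbo.Objects (Id, ParentId, Name)
    VALUES (@Id, @ParentId, @Name);
END
```

The client then issues `EXEC dbo.InsertObject @Id = 1, @ParentId = NULL, @Name = 'root'` (for example through an ADO Command object with parameters) instead of sending the full INSERT text each time.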
Friday, March 9, 2012
Inserts performance