Friday, March 9, 2012
Inserts....
I have a problem...
I have a temporary table and I'm trying to do this...
create table #tmp
( valor varchar(100),
valor1 varchar(100),
valor2 varchar(100)
)
declare @cmd varchar(500)
select @cmd = 'Select valor, valor2 from mytable)
insert into #tmp(valor, valor2)
exec(@cmd)
select valor, valor2 from #tmp
but I have an error... 'Invalid object name #tmp'
why??|||Other than a few typos, everything looked good. Try this:
if object_id('mytable') is not null drop table mytable
if object_id('tempdb..#tmp') is not null drop table #tmp
create table mytable(
valor varchar(100),
valor1 varchar(100),
valor2 varchar(100))
insert into mytable values('A','B','C')
insert into mytable values('D','E','F')
insert into mytable values('G','H','I')
insert into mytable values('J','K','L')
create table #tmp(
valor varchar(100),
valor1 varchar(100),
valor2 varchar(100))
declare @cmd varchar(500)
select @cmd = 'Select valor, valor2 from mytable'
insert into #tmp(valor, valor2)
exec(@cmd)
select valor, valor2 from #tmp|||If you have this code inside of a stored proc it should work; otherwise it won't, because your temporary table is local, not global. (Local temporary tables are visible only in the current session.)
Try using:
CREATE TABLE ##tmp
or
CREATE TABLE tempdb..tmp
see "tempoary tables" in BOL for more details
Inserts with and without a clustered index...
I created 2 identical tables with just 12 records each. Then I created a clustered index on one table, but not on the other. I then tried inserting 2 identical records into both tables. I guessed that in the table without the index the insertion would be faster, because there is no overhead of re-ordering the table based on the clustered index. I found that the insertion is actually faster on the table with the clustered index. How is this possible? Aren't inserts supposed to be faster on heap files?|||A while ago I looked into the same matter but found that inserting rows into a table with or without indexes took an equal amount of time; that is, the overhead of updating indexes didn't change anything. I had quite a bit more rows than 12, though.
Having said that, I'm not exactly sure how to interpret these kinds of results anyway. There could be a lot of influences affecting the SQL processes running in the background that in turn influence the select or insert results. The only thing I concluded was that, in _my_ case, it didn't matter that much.|||Well, if your table structure is like this:
create table dbo.t1 (
f1 int identity(1,1) not null primary key clustered,
f2 char(1) null)
then there is no difference. But if it's like this:
create table dbo.t2 (
f1 uniqueidentifier not null primary key clustered,
f2 char(1) null)
then your INSERTs are going to be slower than if you remove CLUSTERED from your PK.
In addition, it's not the presence of the clustered index that imposes overhead; it's non-clustered indexes that require any action query to be accompanied by an implicit update of index pages.|||Thanks guys for the replies. But doesn't having a clustered index mean that the records in the table are physically sorted? Therefore, even if there is enough space on the page which should contain the record being inserted, the DBMS would still have to search for that page and insert the record, as opposed to simply appending it at the end as in the case of a heap file (unindexed file).
Maybe I see this in my example because the table is too small and everything fits into one disk page. How do you think my observations would have been if the table had a million records?|||It still goes to the last extent allocated to the table but does a simple check: is the new value greater than the last value? If the answer is YES, that's the end of the story. If not, then we are going to be talking about the overhead of the clustered index.
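A rough way to watch that behaviour yourself - a sketch, assuming a throwaway test database and an ascending key (drop the index to repeat the run against a heap):
create table dbo.ci_test (id int not null, filler char(100) null)
create clustered index cix_ci_test on dbo.ci_test (id)
set nocount on
declare @i int, @start datetime
select @i = 0, @start = getdate()
while @i < 100000
begin
insert into dbo.ci_test (id) values (@i) -- ascending key: each row lands on the last page
set @i = @i + 1
end
select datediff(ms, @start, getdate()) as ms_elapsed
With an ever-increasing key, the timings for the heap and the clustered table come out very close, which matches the observation above.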
Inserts to tables as one or two users.
Background: C++ program accesses SQL via ODBC doing
massive inserts to two different tables. The data goes
either to one table or the other, but not both.
Problem: Is it more efficient to have the program access
the database as one or two SQL users?
1. user DOG inserts to table XXX "and" table YYY.
or
2. user DOG inserts to table XXX,
user CAT inserts to table YYY?
Thanks for your help,
Don|||Unless you need to load them in parallel there isn't any reason to use two
users. Adding connections has a slight overhead that you probably don't
need for this type of operation.
--
Andrew J. Kelly
SQL Server MVP
"Don" <ddachner@.yahoo.com> wrote in message
news:0bf401c38126$d4174120$a001280a@.phx.gbl...
> SQL 7.0 SP4
> Background: C++ program accesses SQL via ODBC doing
> massive inserts to two different tables. The data goes
> either to one table or the other, but not both.
> Problem: Is it more efficient to have the program access
> the database as one or two SQL users?
> 1. user DOG inserts to table XXX "and" table YYY.
> or
> 2. user DOG inserts to table XXX,
> user CAT inserts to table YYY?
> Thanks for your help,
> Don|||No, it wouldn't make any difference in performance; it
will only increase the work for you to manage permissions for
two users.
Also, consider using stored procedures rather than direct
insert statements. This will give a slight performance gain
and better management of SQL code.
>--Original Message--
>SQL 7.0 SP4
>Background: C++ program accesses SQL via ODBC doing
>massive inserts to two different tables. The data goes
>either to one table or the other, but not both.
>Problem: Is it more efficient to have the program access
>the database as one or two SQL users?
>1. user DOG inserts to table XXX "and" table YYY.
>or
>2. user DOG inserts to table XXX,
> user CAT inserts to table YYY?
>Thanks for your help,
>Don
>.
>
Inserts taking longer time
I have SQL Server 2005 installed on my machine and I am running the following query to insert 1500 records into a simple table having just one column.
Declare @i int
Set @i=0
While (@i<1500)
Begin
Insert into test2 values (@i)
Set @i=@i+1
End
Here goes the table definition,
CREATE TABLE [dbo].[test2](
[i] [int] NULL
) ON [PRIMARY]
Now the problem with this is that on one of my servers this query takes just 500ms to run, while on my production and other test servers it takes more than 25 seconds.
The same problem occurs with updates. I have checked the configurations of both servers and found them to be the same. Also, there are no indexes defined on either of the tables. I was wondering what the possible reason for this could be. If any of you has any pointers regarding this, it will be really useful.
Thanks in advance,
Mitesh
Although this might not be the cause, which recovery models are you using on the databases?
Jens K. Suessmeyer.
http://www.sqlserver2005.de
|||Hey Jens,
Thanks for the pointer, but we are using the Full Recovery model on both of my databases. I think there is some problem with the server configuration, as I have tried this Insert on multiple databases of my server (where the time taken is more) and have got the same output.
|||
Your machine is probably caching disk writes so the commits are not actually hardening. Servers are usually configured not to do that, so your data is not at risk.
You should wrap your loop with a BEGIN TRAN and COMMIT TRAN.
Declare @i int
Set @i=0
Begin Tran
While (@i<1500)
Begin
Insert into test2 values (@i)
Set @i=@i+1
End
Commit Tran
Hi Mitesh,
I am having a very similar problem:
* I have a crappy testing machine (P3 at 0.5 GHz on Windows NT Server) with SQL 7,
* I have a testing environment running on a Virtual Machine on a pretty decent laptop (Laptop = Celeron @ 1.59 GHz on Windows XP - sorry, low budget -). The Virtual Machine runs Windows 2003 with SQL 2005 Express, with 400 MBytes assigned in RAM.
* The database I work with is created running an SQL script. It has about 15 tables, and as a whole is a quite normalized database.
* I have this application that I did, that updates the database through ODBC.
Now here is the interesting data:
* On my SQL 7, it takes between 2 to 3 seconds to update the database (we are talking about 50 to 100 insert statements) the first time I click on the "save" button. After that it is just a matter of a little less than a second to update the database every time I click on the "save" button.
* However, on my SQL 2005 environment, it takes about 1 minute. The amount of time does not change much the next time I click "save".
Notes aside:
* Both are empty databases. They only have the table structure.
* The tables are quite normalized. It is ok; it is not a database that is intended to grow beyond 100 Mbytes.
* "De-normalization" is not an option.
* I changed the size of the database log file from 1 Mbyte to 20 to see if my problem was excessive "check points"... but the behaviour was just the same.
I learned to live with it, after all, Windows XP does not support VM's. In fact, I was rather impressed by the Celeron chip.
But the fact that caught my attention is that, as time went by and I continued using my Virtual Machine, the time got reduced as my database grew in size. After about 100 "save" operations, it now takes around 10 seconds to save. This suggests to me that there is something going on with the indexes...
It is ok, after all, it is just a testing environment.
Hope it helps!!!
Tark Ik
PS: This posting has been posted "AS IS"
|||Hey Peter, thanks a ton for your reply. There would have been no way in the world I could have figured this out.

Inserts resulting in Exclusive Key Lock
Thanks,
John|||Well, put the clustered index back...
It's locking the row and/or page..
Did 7 have row? And was it a 2k or 8k page?
I can't remember any more,
BUT, you need to keep your transactions brief...like lightning brief...
What's the code do?
Is this a sproc?|||Yes, 7 has row locking; not sure about the page size. I am converting an old Btrieve app and trying to keep modifications to the code to a minimum. This is actually happening with only one user in. Within a transaction it's reading a table, inserting a row, then cycling again - multiple times. After the first insert it's not able to read the table again. Before I went in and checked whether the code could be changed, I wanted to make sure there isn't something else I could do - aside from changing the isolation level.
Thanks
Originally posted by Brett Kaiser
Well put the clustered index back...
It's locking the row and/or page..
Did 7 have row? And was it a 2k or 8k page?
I can't remember any more,
BUT, you need to keep your transactions brief...like lightning brief...
What's the code do?
Is this a sproc?|||You mean looping?
You're probably blocking yourself...
Are you doing INSERT INTO the table you're reading from?
Why not write to a temp table, then if all the executions are fine, insert the temp to the final destination table...
can you post the code?|||Yes, I am blocking myself...
Would that be a user temp table? I'll give that a try. Can't really post the code as it's being generated from an application called Magic.
Even with the temp table, when inserting from the temp to the final destination, won't that still result in exclusive key locks? - although in another transaction.
Is this just the way insert locks work - has the behavior changed in 2000?
Originally posted by Brett Kaiser
You mean looping?
You're probably blocking yourself...
Are you doing INSERT INTO the table you're reading from?
Why not write to a temp table, then if all the executions are fine, insert the temp to the final destination table...
can you post the code?|||Magic...that's funny...
Let's make some Magic
CREATE TABLE #myTable99 (Col list)
DECLARE @x int
SELECT @x = 0
WHILE @x < 10
BEGIN
SELECT @localvariable1 = Col1, etc FROM TABLE WHERE Predicate = guarantee 1 row
INSERT INTO #myTable99 (Col1, etc) SELECT @localvariable1, etc
SELECT @x = @x + 1
END
INSERT INTO FinalDataTable (Col list)
SELECT Col list FROM #myTable99
Like that...or are you using a cursor?
If it is a cursor, you can probably do without it...
Hey, I know, I'll create an app and call it Miracle
Good Luck|||Oh for Pete's sake!...|||What's wrong, my advice or the miracle thing?
Some say it would be a miracle if I ever get this third party extract done...
They keep changing their mind...
I'm on revision 12 now...purely painful...for a simple (ok, well not so simple) extract...|||Hey Brett, you rock! I just don't understand why people are trying to write a script to send a rocket to Mars while doing an insert?! ;)|||Sounds like a 4GL that generates the code...
Ever see SQR?
Painful...
I guess you can use it and write good code, but it leads down the cursor path..
I once saw a "program" that ran 7 hours..
8 levels of nested cursors...
Rewrote it in pure SQL...
5 minutes...|||LOL
So in actuality, it really was my advice
:D
Inserts performance
I'm trying to optimize a process that synchronizes some proprietary objects to a database. Currently, this synchronization is done via ADO using SQL statements. There are two different actions: the first initialization is done by a lot of INSERTs into different tables, and the second operation is inserting/updating/removing objects one by one when they have changed.
Our objects are stored across multiple tables because they have a hierarchical structure.
My question is about performance for the first operation: what is the best way to achieve multiple inserts across multiple tables in the shortest time? We actually use ADO and an INSERT statement for each row, which is obviously not the preferred method.
Thanks.|||One method you can try: define clustered indexes on each table such that new inserts are distributed over multiple pages, not only on the last page.
Multiple inserts on the last page will slow down your speed.|||Thanks, but I was thinking about client-side methods to improve the performance of inserts, such as batches. But I've got no knowledge of advanced methods. The actual problem is that one request is sent per INSERT of a row, and we have roughly 1 million inserts to do.
Originally posted by avneesh
One method you can try: define clustered indexes on each table such that new inserts are distributed over multiple pages, not only on the last page.
Multiple inserts on the last page will slow down your speed.|||1. SET NOCOUNT ON
removes useless "(1 row(s) affected)" messages (server feedback)
2. Create SPs for the insert
SPs are precompiled and therefore faster.
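A minimal sketch combining both suggestions (the table and column names are hypothetical):
create procedure dbo.usp_insert_item
@valor varchar(100),
@valor2 varchar(100)
as
set nocount on -- suppress the "(1 row(s) affected)" feedback on every call
insert into dbo.items (valor, valor2) values (@valor, @valor2)
go
The client then calls the procedure once per row instead of sending an ad-hoc INSERT statement each time.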
Inserts per Second
What rate of inserts per second could be achieved
on modern desktop (1 CPU/3 GHz/1 GB) hardware?
How does this compare to dedicated server hardware?
And the most important question:
In a scenario where an application writes received data to a database,
which insert method would give the fastest results?
And which method do you recommend for actual use?
(bulk insert, asynchronous inserts, packaged inserts, or something else)
Thanks,
Mitja Semolic|||Your question is impossible to answer. There are too many variables.
Your "on a modern desktop" question --
Do you mean that you would have SQL Server installed on this "modern
desktop" and you would be inserting data locally?
Or do you mean that a client application would be running on this desktop
and inserting data to a SQL Server ON THE NETWORK?
How is your application written? How are your stored procedures or insert
statements? Do you SET NOCOUNT ON or do you receive the "n rows affected"
from your insert statement?
Did you drop the non-clustered indexes on your tables before running the
inserts?
If you want a super fast "just get the data in there" routine, BULK INSERT
is probably your fastest bet. If you want to be able to create an error
file you will need to use BCP. If you want to insert your data one row at a
time, read the file and call an insert stored procedure or build the T-SQL
within your application to perform the insert.
Keith Kratochvil
"Mitja Semolic" <mitja.semolic@.ensico.si> wrote in message
news:yTDcg.3320$oj5.1032258@.news.siol.net...
> SQL Server 2005;
> What rate of inserts per second could be achieved
> on modern desktop (1 CPU/3 GHz/1 GB) hardware?
> How does this compare to dedicated server hardware?
> And the most important question:
> In a scenario where an application writes received data to a database,
> which insert method would give the fastest results?
> And which method do you recommend for actual use?
> (bulk insert, asynchronous inserts, packaged inserts, or something else)
> Thanks,
> Mitja Semolic
>|||I was expecting an answer like that,
so I wondered if it was worth posting.
I don't want to confuse you with a simple question.
But are there any concrete benchmark results
that could showcase SQL Server performance
in different scenarios and on different hardware?
Let me ask you a similar question:
How fast can a car go; what is its track time?
How many frames per second can a graphics card draw for a specific game?
Are these questions also impossible to answer?
Some things should be left simple.
Mitja Semolic
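For what it's worth, a crude way to measure an insert rate on a given box - a sketch, assuming a throwaway table and a single wrapping transaction:
set nocount on
create table dbo.rate_test (id int not null)
declare @i int, @start datetime
select @i = 0, @start = getdate()
begin tran
while @i < 100000
begin
insert into dbo.rate_test (id) values (@i)
set @i = @i + 1
end
commit tran
-- rows per second for this particular setup
select 100000.0 / (datediff(ms, @start, getdate()) / 1000.0) as rows_per_second
The number it produces is only meaningful for that hardware, that table, and that insert method, which is exactly Keith's point.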
Inserts not committing
A user entered 20 debit memos thru the application. For each db memo, 1 - the
data is inserted into two SQL Server 2000 tables, 2 - a commit is issued
(Begin Trans / Commit Trans), 3 - a Select is used to retrieve data to
display on the db memo thru Crystal Reports and is printed. For each of the
20 entered, a db memo was printed, but when the application closed, only data
for the first four db memos was saved / committed in the database. The other
16 had not been saved / committed in the database. We know the data for the
16 were inserted or else the Select would not have retrieved the data to
display and print the DB memo. There were no error messages and the
application did not crash. This happened only three times in 18 months and we
cannot duplicate the error. The commits are all processed in the same VB form
flow code. Anyone ever have a similar issue? Anyone have an idea on what
might have occurred? Thanx.|||Are you printing inside the transaction?
AMB
"Inserts not committing" wrote:
> A user entered 20 debit memos thru the application. For each db memo, 1 - the
> data is inserted into two SQL Server 2000 tables, 2 - a commit is issued
> (Begin Trans / Commit Trans), 3 - a Select is used to retrieve data to
> display on the db memo thru Crystal Reports and is printed. For each of the
> 20 entered, a db memo was printed, but when the application closed, only data
> for the first four db memos was saved / committed in the database. The other
> 16 had not been saved / committed in the database. We know the data for the
> 16 were inserted or else the Select would not have retrieved the data to
> display and print the DB memo. There were no error messages and the
> application did not crash. This happened only three times in 18 months and we
> cannot duplicate the error. The commits are all processed in the same VB form
> flow code. Anyone ever have a similar issue? Anyone have an idea on what
> might have occurred? Thanx.
Inserts into table that has a Primary key/Unique constraint
We do a lot of inserts into a table that has a primary key on an identity column. Also the key is defined as a unique constraint.
Theory would call for some additional latency as it has to check for
uniqueness, but since it's an identity column, can we safely remove that
unique constraint and that way speed up the inserts?|||You may ignore this thread. I guess the primary key has a unique constraint
to it.
"Hassan" <hassan@.test.com> wrote in message
news:ur1FBJpQIHA.2208@.TK2MSFTNGP06.phx.gbl...
> We do a lot of inserts into a table that has a primary key on an identity
> column. Also the key is defined as a unique constraint.
> Theory would call for some additional latency as it has to check for
> uniqueness, but since its an identity column, can we safely remove that
> unique constraint and that way, we can speed up the inserts ?
>
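For the record, the primary key is itself enforced through a unique index, so a separate UNIQUE constraint on the same identity column is redundant and can be dropped - a sketch with a hypothetical table:
create table dbo.t (
id int identity(1,1) not null constraint pk_t primary key,
val varchar(50) null)
exec sp_helpindex 't' -- shows the single unique index that backs the PK; no second constraint needed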
Inserts gradually slow down
I'm inserting 2 million+ records from a C# routine which starts out very fast and gradually slows down. Each insert is through a stored
procedure with no transactions involved.
If I stop and restart the process it immediately speeds up and then
gradually slows down again. But closing and re-opening the connection
every 10000 records didn't help.
Stopping and restarting the process is obviously clearing up some
resource on SQL Server (or DTC??), but what? How can I clean up that
resource manually?|||BTW, the tables have no indexes or constraints. Just simple tables
that are dropped and recreated each time the process is run.|||(andrewbb@gmail.com) writes:
> I'm inserting 2 million+ records from a C# routine which starts out
> very fast and gradually slows down. Each insert is through a stored
> procedure with no transactions involved.
> If I stop and restart the process it immediately speeds up and then
> gradually slows down again. But closing and re-opening the connection
> every 10000 records didn't help.
> Stopping and restarting the process is obviously clearing up some
> resource on SQL Server (or DTC??), but what? How can I clean up that
> resource manually?
If I understand this correctly, every time you restart the process
you also drop the tables and recreate them. So that is the "resource"
you clear up.
One reason could be autogrow. It might be an idea to extend the database
to reasonable size before you start loading. If you are running with
full recovery, this also includes the transaction log.
You could also consider adding a clustered index that is aligned with
the data that you insert. That is, you insert the data in foo-order,
you should have a clustered index on foo.
But since 2 million rows is quite a lot, you should probably examine
more efficient methods to load them. The fastest method is bulk-load,
but ADO .Net 1.1 does not have a bulk-load interface. But you could
run command-line BCP.
You could also build XML strings with your data and unpack these
with OPENXML in the stored procedure.
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||In my experience 9 times out of 10 the reason for the problem is a file
that keeps growing by 10%.
Either pre-allocate a big file as Erland suggests or change the
filegrowth from 10% (the default, which by the way is a bad default) to
something in the region of 20 mb or so.|||You're right, I removed the drop and re-create and it's definitely
slower when data already exists. So how would you suggest loading this
data?
The text file contains 11 different types of Rows. Each type of row
goes to a separate table, so I need to read each line, determine its
type, parse it and insert into the appropriate table.
Can BCP handle this? DTS? Or your XML idea?
Thanks
Erland Sommarskog wrote:
> (andrewbb@gmail.com) writes:
> > I'm inserting 2 million+ records from a C# routine which starts out
> > very fast and gradually slows down. Each insert is through a stored
> > procedure with no transactions involved.
> > If I stop and restart the process it immediately speeds up and then
> > gradually slows down again. But closing and re-opening the connection
> > every 10000 records didn't help.
> > Stopping and restarting the process is obviously clearing up some
> > resource on SQL Server (or DTC??), but what? How can I clean up that
> > resource manually?
> If I understand this correctly, every time you restart the process
> you also drop the tables and recreate them. So that is the "resource"
> you clear up.
> One reason could be autogrow. It might be an idea to extend the database
> to a reasonable size before you start loading. If you are running with
> full recovery, this also includes the transaction log.
> You could also consider adding a clustered index that is aligned with
> the data that you insert. That is, you insert the data in foo-order,
> you should have a clustered index on foo.
> But since 2 million rows is quite a lot, you should probably examine
> more efficient methods to load them. The fastest method is bulk-load,
> but ADO .Net 1.1 does not have a bulk-load interface. But you could
> run command-line BCP.
> You could also build XML strings with your data and unpack these
> with OPENXML in the stored procedure.
> --
> Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp|||I tried adjusting the settings... both different %'s and specific MBs,
but the slow down is the same in all cases.
It's dramatically slower to insert records into a 100,000 row table
than an empty one I guess.|||
andrewbb@gmail.com wrote:
> I tried adjusting the settings... both different %'s and specific MBs,
> but the slow down is the same in all cases.
> It's dramatically slower to insert records into a 100,000 row table
> than an empty one I guess.
So these are existing tables? Do these tables have indexes already? If you
had a clustered index, for instance, and were inserting data in a random
order, there would be a lot of inefficiency and index maintenance. The
fastest way to get that much data in, would be to sort it by table,
and in the order you'd want the index, and BCP it in, to tables with no
indices, and then create your indexes.|||(andrewbb@gmail.com) writes:
> You're right, I removed the drop and re-create and it's definitely
> slower when data already exists. So how would you suggest loading this
> data?
I can't really give good suggestions about data that I don't know anything
about.
> The text file contains 11 different types of Rows. Each type of row
> goes to a separate table, so I need to read each line, determine its
> type, parse it and insert into the appropriate table.
> Can BCP handle this? DTS? Or your XML idea?
If the data has a conformant appearance, you could load the lot in a staging
table and then distribute the data from there.
You could also just write new files for each table and then bulk-load
these tables.
It's possible that a Data Pump task in DTS could do all this out
of the box, but I don't know DTS.
The XML idea would require you parse the file, and build an XML document
of it. You wouldn't have to build 11 XML documents, though. (Although
that might be easier than building one big one.)
It's also possible to bulk-load from variables, but not in C# with
ADO .Net 1.1.
So how big did you make the database before you started loading? With
two million records, you should have at least 100 MB for both data and
log.
By the way, how do you call the stored procedure? You are using
CommandType.StoredProcedure, aren't you?
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Thanks for the help. I found the fastest way to do it in a single SP:
- BULK INSERT all data into a 2 column staging table using a BCP Format
file. (One column is the RowType, the rest is the data to be parsed)
- create an index on RowType
- call 11 different SELECT INTO statements based on RowType
2,000,000 rows loaded in 1.5 minutes.
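A sketch of that staging approach (the file path, format file, and row-type values are hypothetical; the per-type parsing depends on the actual record layouts):
create table dbo.staging (RowType char(2) not null, RawData varchar(8000) null)
bulk insert dbo.staging
from 'C:\load\data.txt'
with (formatfile = 'C:\load\data.fmt')
create index ix_staging_rowtype on dbo.staging (RowType)
-- one SELECT INTO per row type (11 of them in the case above)
select RawData into dbo.type01 from dbo.staging where RowType = '01'
select RawData into dbo.type02 from dbo.staging where RowType = '02'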
Thanks a lot, BCP works very well.|||(andrewbb@gmail.com) writes:
> Thanks for the help. I found the fastest way to do it in a single SP:
> - BULK INSERT all data into a 2 column staging table using a BCP Format
> file. (One column is the RowType, the rest is the data to be parsed)
> - create an index on RowType
> - call 11 different SELECT INTO statements based on RowType
> 2,000,000 rows loaded in 1.5 minutes.
> Thanks a lot, BCP works very well.
Hmmm! It's always great to hear when things work out well!
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
INSERTs given me the BLUES
cstring = cstring + "VALUES('%" + txtWatchID.Text + "%','%" + txtcenter + "%'" & _
cstring = "INSERT INTO tblNEW (watch_id, service_center_num, repair_envelope, store_number"
cstring = cstring + "date_purchase, transaction_num, cust_fname, cust_lname, product_code"
cstring = cstring + "value_watch, failure_date, service_date, failure_code, repair_code"
cstring = cstring + "service_request, store_number_senditem, register_number, street_address"
cstring = cstring + "city, state, zip_code, area_code, phone_num, product_desc, service_center"
cstring = cstring + " work_to_bdone, auth_num, labor_cost, parts_cost, tax_cost, total_cost"
cstring = cstring + "notes, client_number)"
cstring = cstring + "VALUES('%" + txtWatchID.Text + "%','%" + txtcenter + "%'" & _
This is the error I get, but I did the same thing in a select statement and it works fine... Do I need to add something to the string or what? I am kinda confused, and help would be great.....
Operator '+' is not defined for types 'String' and 'System.Windows.Forms.TextBox'.
It should probably be "txtcenter.Text" and not "txtcenter". It is the second control you use to concatenate the value with.
|||Well, this is really not a SQL Server question, but...
my guess is that:
"%','%" + txtcenter + "%'" & _
txtcenter is a textbox and you must do something more like:"%','%" + txtcenter.text + "%'" & _
Also, you seem to have ignored what everyone told you earlier about injection attacks and text entry, because you aren't using quotename (or Pull_Quotes from your example code).
Louis
|||This is a function I wrote that takes care of the quotes, but what do you mean about injection attacks? And where can I read about what such an attack is? I seem to have missed that part of SQL. Do you have any links for that topic? And this is how I have the INSERT now; will it work this way, or was I better off keeping it the other way?
Public Function PrepareStr(ByVal strValue As String) As String
If strValue.Trim() = "" Then
Return "NULL"
Else
Return "'" & strValue.Trim() & "'"
End If
End Function
cstring = "INSERT INTO tblNEW (watch_id, service_center_num, repair_envelope, store_number"
cstring = cstring + "date_purchase, transaction_num, cust_fname, cust_lname, product_code"
cstring = cstring + "value_watch, failure_date, service_date, failure_code, repair_code"
cstring = cstring + "service_request, store_number_senditem, register_number, street_address"
cstring = cstring + "city, state, zip_code, area_code, phone_num, product_desc, service_center"
cstring = cstring + " work_to_bdone, auth_num, labor_cost, parts_cost, tax_cost, total_cost"
cstring = cstring + "notes, client_number)"
cstring = cstring + "VALUES PrepareStr(txtWatchID.Text),PrepareStr(txtCenter.Text),PrepareStr(txtenvelope.Text),PrepareStr(txtSenditem.Text)" & _
cstring = cstring + "PrepareStr(dtDateofPur.Text), PrepareStr(txtTrans.Text), PrepareStr(txtfname.Text), PrepareStr(txtlname.Text),PrepareStr(txtprdcode.Text)" & _
cstring = cstring + "PrepareStr(txtvalue.Text), PrepareStr(datefail.Text), PrepareStr(dtshipdate.Text), PrepareStr(txtregisternum.Text), "
|||check here:
http://www.sommarskog.se/dynamic_sql.html#Security2
Amazingly good coverage of the topic and then look at the quote name explanation. Better than I could explain it here :)
|||An unhandled exception of type 'System.InvalidCastException' occurred in microsoft.visualbasic.dll
Additional information: Cast from string "INSERT INTO tblNEW (watch_id, se" to type 'Boolean' is not valid.
Where does the Boolean value come in? I thought it was a string... My type in the DB is not boolean. Where is this coming from? Any help?
I get this exception when I am trying to execute this query, and I even added my Pull_Quotes function... Here is the query.....
cstring = "INSERT INTO tblNEW (watch_id, service_center_num, repair_envelope, store_number"
cstring = cstring + "date_purchase, transaction_num, cust_fname, cust_lname, product_code"
cstring = cstring + "value_watch, failure_date, service_date, failure_code, repair_code"
cstring = cstring + "service_request, store_number_senditem, register_number, street_address"
cstring = cstring + "city, state, zip_code, area_code, phone_num, product_desc, service_center"
cstring = cstring + " work_to_bdone, auth_num, labor_cost, parts_cost, tax_cost, total_cost"
cstring = cstring + "notes, client_number)"
cstring = cstring + "VALUES Pull_Quotes(txtWatchID.Text),Pull_Quotes(txtCenter.Text),Pull_Quotes(txtenvelope.Text),Pull_Quotes(txtSenditem.Text)" & _
cstring = cstring + "Pull_Quotes(dtDateofPur.Text), Pull_Quotes(txtTrans.Text), Pull_Quotes(txtfname.Text), Pull_Quotes(txtlname.Text),Pull_Quotes(txtprdcode.Text)" & _
cstring = cstring + "Pull_Quotes(txtvalue.Text), Pull_Quotes(datefail.Text), Pull_Quotes(dtshipdate.Text), Pull_Quotes(txtregisternum.Text)" & _
cstring = cstring + "Pull_Quotes(txtaddress.Text), Pull_Quotes(txtcity.Text),(txtstate.Text), (txtzip.Text), Pull_Qoutes(txtareacode.Text), Pull_Quotes(txtphonenum.Text)" & _
cstring = cstring + "Pull_Quotes(txtproductdesc.Text), Pull_Quotes(txtworkbdone.Text), Pull_Quotes(txtauthnumber),(txtlabor.Text), (txtPart.Text)" & _
cstring = cstring + " Pull_Quotes(txttaxcost.Text), Pull_Quotes(txtTotal.Text), Pull_Quotes(txtNotes.Text), Pull_Quotes(txtClient.Text)"
|||Just like last time, for help in this group, we need to see the SQL statement that is being sent to the server:
INSERT into tblNEW.. etc. For errors like 'System.InvalidCastException' there isn't much we can do.
I can guess that the & _ is probably not right in this statement:
cstring = cstring + "Pull_Quotes(txtproductdesc.Text), Pull_Quotes(txtworkbdone.Text), Pull_Quotes(txtauthnumber),(txtlabor.Text), (txtPart.Text)" & _
cstring = cstring + " Pull_Quotes(txttaxcost.Text), Pull_Quotes(txtTotal.Text), Pull_Quotes(txtNotes.Text), Pull_Quotes(txtClient.Text)"
But really, no idea. If you can do the message box trick again and post the INSERT statement, then you will get useful help.
|||Ok, I got rid of the exception - I was using too many line continuations, that's why I got it - but here is what the insert statement is sending to the server:
INSERT INTO tblNEW(watch_id, service_center_num, repair_envelope, store_numberdate_purchase, transaction_num, cust_fname, cust_lname, product_codevalue_watch, failure_date, service_date, failure_code, repair_codeservice_request, store_number_senditem, register_number, street_addresscity, state, zip_code, area_code, phone_num, product_desc, service_centerwork_to_bdone, auth_num, labor_cost, parts_cost, tax_cost, total_costnotes, client_number, service_ship)VALUES '10000269%','%23432%','%234123%','%2341%''%3/22/2006%','%2412%', '%Demetrius%', '%Powers%','%2341%''%25,000.00%', '%3/22/2006%', '%3/22/2006%', '%34234%''%43534dfggsdg%', '%sdgsdg%','%fg%', '%42342%','%453%', '%435345%''%fgsdfgsd%','%2343%', '%gsdgsdg%', '%345345%',(txtlabor.Text), (txtPart.Text)'%534534%', '%45345%', '%sdgsdgdfgfger%', '%4234%', '%3/22/2006%'
I know some of it is garble, but you know the saying: functionality first, then make it pretty. But this is the statement... and it does not work. Is it something with the ((txtlabor.Text or txtPart.Text))? Should I add the Pull_Quotes function for all of them, or what? Kinda lost; any help would be fantastic! Thanks
|||I do not think you should have all these % signs when you insert a column into the table.
They make sense when searching for something inside a column.
insert into TestTable (sometext) values ('this is a test')
To find a row in TestTable that contains 'test' you do
select * from TestTable where sometext like '%test%'
|||Ok, the first thing that is wrong is:
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near '10000269%'.
This is because you need parentheses around the stuff in the values clause.
Second:
You need a comma between these parts:
'%2341%''%25,000.00%'
Finally:
for the txtLabor.text stuff you need to look at how your quotes are in your statement. You probably have something like:
Pull_Quotes("(txtClient.Text)")
Or something along those lines, but that is a VB question, and I don't know.
|||Yeah, just completely missed that fact. I was so focused on the syntax that I missed the values being inserted :)|||Well here is what is being sent to the server from my insert statement
WatchTracker
INSERT INTO tblNEW(watch_id, service_center_num, repair_envelope, store_numberdate_purchase, transaction_num, cust_fname, cust_lname, product_codevalue_watch, failure_date, service_date, failure_code, repair_codeservice_request, store_number_senditem, register_number, street_addresscity, state, zip_code, area_code, phone_num, product_desc, service_centerwork_to_bdone, auth_num, labor_cost, parts_cost, tax_cost, total_costnotes, client_number, service_ship)VALUES '0010000269','345234','342523','3453','3/22/2006','53534', 'Demetrius', 'Powers','45345','25,000.00', '3/22/2006', '3/22/2006', '453245','45324gdsfgsgfd', 'dfgsdgsdg','rt', '43545','454', '43534534','sdfasfdasa','34544', 'sdfasfasd', '345345','43543', '3453','345', '3453', 'ghkweklfklasdfklneklrnkl', '435345', '3/22/2006'
OK
I am still not getting the results I need. Is there a tool that comes with the SQL Server 2000 management studio? Or is the code syntax still incomplete?
|||Those results? Are you getting an error? Yes, I know you are:
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near '0010000269'.
At the very least you need parens, and you have no comma between repair_code and service_request, or store_number_send and item, or street_address and city, or service_center and work_to_bdone:
INSERT INTO tblNEW(watch_id, service_center_num, repair_envelope, store_numberdate_purchase, transaction_num, cust_fname, cust_lname, product_codevalue_watch, failure_date, service_date, failure_code, repair_code, service_request, store_number_send,item, register_number, street_address, city, state, zip_code, area_code, phone_num, product_desc, service_centerwork_to_bdone, auth_num, labor_cost, parts_cost, tax_cost, total_costnotes, client_number, service_ship)
VALUES ('0010000269','345234','342523','3453','3/22/2006','53534', 'Demetrius', 'Powers','45345','25,000.00', '3/22/2006', '3/22/2006', '453245','45324gdsfgsgfd', 'dfgsdgsdg','rt', '43545','454', '43534534','sdfasfdasa','34544', 'sdfasfasd', '345345','43543', '3453','345', '3453', 'ghkweklfklasdfklneklrnkl', '435345', '3/22/2006')
Once you get your statement into this form use the query tool to execute the statement until you get a result that works. Fix your code and then test again. Wrap the statement in a transaction and the database won't be affected:
BEGIN TRANSACTION
test statement
ROLLBACK TRANSACTION
|||Why are you performing a direct INSERT from the client code? It has security implications and in most cases is bad practice. Create a stored procedure that performs the insert and then call it from the client.|||Thanks, here is the final query that worked for me. Thanks for the help!! It was great!!!
INSERT INTO tblNEW " & _
"VALUES (" & _
"'" & Pull_Quotes(txtWatchID.Text) & _
"','" & Pull_Quotes(txtcenter.Text) & _
"','" & Pull_Quotes(txtenvelope.Text) & _
"','" & Pull_Quotes(txtSenditem.Text) & _
"','" & Pull_Quotes(dtDateofPur.Text) & _
"','" & Pull_Quotes(txtTrans.Text) & _
"','" & Pull_Quotes(txtfname.Text) & _
"','" & Pull_Quotes(txtlname.Text) & _
"','" & Pull_Quotes(txtprdcode.Text) & _
"','" & Pull_Quotes(txtValue.Text) & _
"','" & Pull_Quotes(datefail.Text) & _
"','" & Pull_Quotes(dtServiceRecieve.Text) & _
"','" & Pull_Quotes(txtfailurecode.Text) & _
"','" & Pull_Quotes(txtrepaircode.Text) & _
"','" & Pull_Quotes(txtservice.Text) & _
"','" & Pull_Quotes(txtSenditem.Text) & _
"','" & Pull_Quotes(txtregisternum.Text) & _
"','" & Pull_Quotes(txtaddress.Text) & _
"','" & Pull_Quotes(txtcity.Text) & _
"','" & Pull_Quotes(txtstate.Text) & _
"','" & Pull_Quotes(txtzip.Text) & _
"','" & Pull_Quotes(txtareacode.Text) & _
"','" & Pull_Quotes(txtphonenum.Text) & _
"','" & Pull_Quotes(txtproductdesc.Text) & _
"','" & Pull_Quotes(dtshipdate.Text) & _
"','" & Pull_Quotes(txtworkbdone.Text) & _
"','" & Pull_Quotes(txtauthnumber.Text) & _
"'," & (txtLabor.Text) & _
"," & (txtPart.Text) & _
"," & (txtTaxcost.Text) & _
"," & (txtTotal.Text) & _
",'" & Pull_Quotes(txtNotes.Text) & _
"','" & Pull_Quotes(txtClient.Text) & _
"','" & Pull_Quotes(dtshipdate.Text) & "')"
Inserts failed sometimes ...
We have installed a package developed by another company, and sometimes
(when the server is under a heavy transaction rate) we are seeing
the following messages in the package debug file:
2004-04-22 14:23:27 3056:
------------------
2004-04-22 14:23:27 3056: rlm03000: ls_PutOneRecord: Failed inserting
[04113SO07236054]
2004-04-22 14:23:27 3056: ODBCSetCursorName[3056]: Failed specifying
cursor concurrency to statement handle 1 for 'ls_rep_add'
2004-04-22 14:23:27 3056: dbc01003: ODBCSetCursorName[3056]:
ls_rep_add generated SQL error state: 24000
2004-04-22 14:23:27 3056: dbc01002: [Microsoft][ODBC Driver Manager]
Invalid cursor state
2004-04-22 14:23:27 3056: odbcFilterConnectErrors[3056]: set the
iState to ODBC_DISCONNECT for <24000>
2004-04-22 14:23:27 3056: Function Return Code [00001] ODBC_ERROR
2004-04-22 14:23:27 3056: Operation [00000] ls_PutOneRecord
2004-04-22 14:23:27 3056: Primary Return Code [00000] 00000
2004-04-22 14:23:27 3056: Secondary Return Code [0000000001] 00000
2004-04-22 14:23:27 3056:
------------------
When this message appears in the app debug file we are losing some
inserts that the application does in the database.
The MS SQL server configuration is:
4 Pentium 760Mhz 4 GB RAM
SQL 2000 Standard Edition 8.00.760 SP 4
MDAC 2.7 driver ODBC SQLSRV32.dll 2000.81.9031.14 15/11/2002
Any tip will be welcome,
Thanks in advance,
Reis|||Robson Reis (reisrf@yahoo.com) writes:
> We have installed a package developed by another company and sometimes
> (when the server is under a heavy transaction rate) we are seeing
> the following messages in the package debug file:
I'm afraid that this is not an issue that is easily resolved over a
newsgroup. The best would of course be to open a support case with
that vendor. If you don't have a support contract or that vendor
has gone bust, you may have to find someone who can reverse-engineer
the package, or who can dig up the source code.
Judging from the log and your mentioning of heavy stress, I would
guess that there is poor error handling in conjunction with deadlocks.
A poor workaround could be to run this package against an idle
environment, and then replicate from that environment in your live
environment.
--
Erland Sommarskog, SQL Server MVP, sommar@algonet.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||"Robson Reis" <reisrf@yahoo.com> wrote in message
news:33bf8ee4.0405061045.7cab35af@posting.google.com...
> We have installed a package developed by another company and sometimes
> (when the server is under a heavy transaction rate) we are seeing
> the following messages in the package debug file:
>
> 2004-04-22 14:23:27 3056:
> ------------------
> 2004-04-22 14:23:27 3056: rlm03000: ls_PutOneRecord: Failed inserting
> [04113SO07236054]
> 2004-04-22 14:23:27 3056: ODBCSetCursorName[3056]: Failed specifying
> cursor concurrency to statement handle 1 for 'ls_rep_add'
> 2004-04-22 14:23:27 3056: dbc01003: ODBCSetCursorName[3056]:
> ls_rep_add generated SQL error state: 24000
> 2004-04-22 14:23:27 3056: dbc01002: [Microsoft][ODBC Driver Manager]
> Invalid cursor state
> 2004-04-22 14:23:27 3056: odbcFilterConnectErrors[3056]: set the
> iState to ODBC_DISCONNECT for <24000>
> 2004-04-22 14:23:27 3056: Function Return Code [00001] ODBC_ERROR
> 2004-04-22 14:23:27 3056: Operation [00000] ls_PutOneRecord
> 2004-04-22 14:23:27 3056: Primary Return Code [00000] 00000
> 2004-04-22 14:23:27 3056: Secondary Return Code [0000000001] 00000
> 2004-04-22 14:23:27 3056:
> ------------------
> When this message appears in the app debug file we are losing some
> inserts that the application does in the database.
> The MS SQL server configuration is:
> 4 Pentium 760Mhz 4 GB RAM
> SQL 2000 Standard Edition 8.00.760 SP 4
> MDAC 2.7 driver ODBC SQLSRV32.dll 2000.81.9031.14 15/11/2002
>
> Any tip will be welcome,
> Thanks in advance,
> Reis
As I understand it, the error messages you've listed come from your
application log, not from SQL Server, so the first place to go for support
should be the application vendor.
You might get more information in microsoft.public.sqlserver.odbc or
.jdbcdriver, but without more knowledge of what the application is, what the
code looks like, and what the app is doing when this error occurs, it's
almost impossible to give more help.
Simon
Inserts and Updates question (newbie)...should be simple to explain
Here is the situation:
We have an old proprietary database that is used for daily tasks. Not much we can do with it, but we have to use it.
I can create basically a read only connection to it through ODBC.
I would like to, on a timed interval, copy certain data for reporting from this slow thing to my SQL server, so that I can learn to program and create some cool reports etc. without having to wait on this server all day.
So here is what I don't quite understand.
I had originally planned on just deleting the contents of the tbl on my SQL server just before I populated it each time, but found out that my AutoNumber field will continue to increase, and I'm assuming that eventually I'm going to run into a problem as a result.
Should I be doing some kind of update instead? If so, do I need to first CHECK if the record exists: if not, do an insert; if so, do an update type thing?
Or is there a way to basically do it in one command?
I hope this makes sense. I would show you some code but there really isn't much to show you other than my insert statement :->
Thanks for any advice!
Josh
Well, the quickest and easiest answer will solve the first problem that you mentioned. If you use TRUNCATE TABLE yourTableName instead of DELETE FROM yourTableName, you'll reseed the identity value and it'll start all over from the beginning. As an added bonus, this performs better, with less overhead than using DELETE.
|||
This "AutoNumber field" is better known as an identity field. It is used in conjuction with a data-type such as int, or bigint. It automates the unique identification of records in a table. It is usually associated with the Primary Key field of a table. If you want to truely replicate the data across different instances, then you should preserve this number. This is especially important if you have other tables that reference these records, which you are also trying to replicate now or later. You can do this by updating the fields of the records based on this key field.
By the way... with an integer (SQL 2000) data-type you have about 2 billion records until you'll need to worry about running out of numbers. If you're getting that much data in, then you should look into data-warehousing concepts.
Example of update statement: update tablename set field2 = infield2 where keyfield1 = inkeyfield1, ...
Adam
Xanderno wrote:
Well, the quickest and easiest answer will solve the first problem that you mentioned. If you use TRUNCATE TABLE yourTableName instead of DELETE FROM yourTableName, you'll reseed the identity value and it'll start all over from the beginning. As an added bonus, this performs better, with less overhead than using DELETE.
Just be careful with TRUNCATE TABLE. This is a minimally logged operation. If you are doing something like transaction log shipping this will throw a major wrench into it. This is likely not an issue for you, but I have run into the problem so I thought I'd mention it.
The alternative is a DELETE with a subsequent DBCC CHECKIDENT to reset the identity seed value. As Xander says, this will incur more overhead than a simple TRUNCATE TABLE.
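A sketch of that logged alternative (the table name is hypothetical):
delete from dbo.report_copy
dbcc checkident ('dbo.report_copy', reseed, 0) -- the next insert gets identity value 1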
|||Additionally, if you have records in your table which are related to other tables in the database by that identity value, the best course of action is an in-place UPDATE, with an INSERT for new records and a DELETE of records that have disappeared from the data source. FWIW I actually converted a daily import procedure from the "delete all records and insert fresh" approach to the UPDATE/INSERT/DELETE approach and was surprised how quick it still ran (for about 6000 records). (This was the same stored procedure in which I tried the TRUNCATE TABLE, incurring the wrath of the DBA.)
|||Really, I have to wonder if you need the identity column at all. If it's only there for looks (and I'd imagine that it is, since you're asking about how large it'll get rather than how to maintain it for a given row across imports), why not do away with it entirely? If it's just a static lookup table, and the only reason for importing it into SQL Server is to reduce overhead, I'd suggest that the SQL Server table not contain *anything* other than what you have in the original table.
As to how you get it into SQL Server, I'll leave it to you to decide whether truncating the table or updating/inserting new and changed rows will be most efficient, but do look at both options and keep the caveats that Terri mentioned in mind.
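For the update-in-place route the question asked about, one statement per step does the job on SQL Server 2000 (which has no single upsert command); a sketch with hypothetical table and key names:
-- 1. update rows that already exist
update dst
set dst.col1 = src.col1
from dbo.report_copy as dst
join dbo.staging as src on src.id = dst.id
-- 2. insert rows that are new
insert into dbo.report_copy (id, col1)
select src.id, src.col1
from dbo.staging as src
where not exists (select * from dbo.report_copy as dst where dst.id = src.id)
-- 3. delete rows that disappeared from the source
delete dst
from dbo.report_copy as dst
where not exists (select * from dbo.staging as src where src.id = dst.id)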
inserts and updates issue using sp
Hi, I have the following SP. I'm using VS.NET 2005 and SQL Server 2005.
Now, how do I implement this: using datasets, an exception is thrown when the row has been modified, and I get to inform the user and reload the modified row.
How do I get the same functionality using an SP? Is that possible? How?
set ANSI_NULLS on
set QUOTED_IDENTIFIER on
go
ALTER Procedure [dbo].[sp_Insert_Or_Update_Bill]
@BillID_All uniqueidentifier,
@BillID uniqueidentifier,
@Pono nvarchar(25),
@Date_ smalldatetime,
@SupplierCode nvarchar(25),
@Reference nvarchar(25),
@AmountType nchar(10),
@BillType nvarchar(25),
@TypeCode nchar(10),
@AmountFC decimal(18,2),
@ROE decimal(9,2),
@Currency nchar(5),
@LinkBill uniqueidentifier,
@multiplicityID uniqueidentifier,
@payment_or_bill smallint
as
If Exists (Select * from bill where billid_all = @BillID_All and amounttype = @AmountType)
update bill
SET [BillID_All] = @.BillID_All
,[BillID] = @.BillID
,[Pono] = @.Pono
,[Date_] = @.Date_
,[SupplierCode] = @.SupplierCode
,[Reference] = @.Reference
,[AmountType] = @.AmountType
,[BillType] = @.BillType
,[TypeCode] = @.TypeCode
,[AmountFC] = @.AmountFC
,[ROE] = @.ROE
,[Currency] = @.Currency
,[LinkBill] = @.LinkBill
,[multiplicityID] = @.multiplicityID
,[payment_or_bill] = @.payment_or_bill
where billid_all= @.BillID_All and amounttype = @.amounttype
end
else
if not @AmountFC = 0
begin
begin
INSERT INTO [Costing].[dbo].[Bill]
([BillID_All]
,[BillID]
,[Pono]
,[Date_]
,[SupplierCode]
,[Reference]
,[AmountType]
,[BillType]
,[TypeCode]
,[AmountFC]
,[ROE]
,[Currency]
,[LinkBill]
,[multiplicityID]
,[payment_or_bill])
VALUES
(@BillID_All
,@BillID
,@Pono
,@Date_
,@SupplierCode
,@Reference
,@AmountType
,@BillType
,@TypeCode
,@AmountFC
,@ROE
,@Currency
,@LinkBill
,@multiplicityID
,@payment_or_bill)
end
end
This is known as optimistic locking, and it requires you to check to see if the copy of the data you originally fetched still exists using one of a few methods:
1. Pass all of the non-modified data you have in the table to the procedure and check to see if it has changed. That is probably how your app does it now. Not very efficient, and won't work with text.
2. Add a rowversion (commonly called timestamp) column to the table. Then check to see if it has changed. rowversions manage themselves, so every modification to the table will spawn a change to the rowversion.
ALTER TABLE test add row_version rowversion
Then in your update or delete:
...
,[payment_or_bill] = @payment_or_bill
where billid_all = @BillID_All
and amounttype = @AmountType
and row_version = @row_version --value fetched along with the row by the client
if @@rowcount <> 1 --not what you expected
begin
--check whether a row with the key you are updating still exists at all
if not exists (select * from bill where billid_all = @BillID_All and amounttype = @AmountType)
raiserror ('The row has been deleted', 16, 1)
else
raiserror ('The row has been modified', 16, 1)
end
3. Add an update_date column to tables and check this. The downside here is that you have to manage the change values yourself with a trigger, or even in the stored procedures (I always use triggers). Do the same sort of check as for the rowversion.
It takes some work, but the last two are far more efficient than number 1...
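Putting option 2 together for the procedure above - a sketch showing only a couple of columns; @row_version is the value the client read along with the row (binary(8) is used here because it compares directly against a rowversion column):
create procedure dbo.sp_Update_Bill
@BillID_All uniqueidentifier,
@AmountType nchar(10),
@AmountFC decimal(18,2),
@row_version binary(8)
as
update bill
set AmountFC = @AmountFC
where billid_all = @BillID_All
and amounttype = @AmountType
and row_version = @row_version
if @@rowcount <> 1
begin
if not exists (select * from bill where billid_all = @BillID_All and amounttype = @AmountType)
raiserror ('The row has been deleted', 16, 1)
else
raiserror ('The row has been modified', 16, 1)
end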
Inserts and updates sqlserver
I have a multithreaded application. VC6++, VB6 and OLEDB. Every thread
has its own session to the database.
If I do all the inserts and do updates in one thread it works fine.
All my data is in the database.
If I split the work to be done on more than one thread it shows in the
SQL Profiler the it does the inserts and updates, but when I look in
the database not all the data was added. There were no errors logged.
Does anyone have an idea what could cause this?
Thanks
GertGert wrote:
> Hi
> I have a multithreaded application. VC6++, VB6 and OLEDB. Every thread
> has its own session to the database.
> If I do all the inserts and do updates in one thread it works fine.
> All my data is in the database.
> If I split the work to be done on more than one thread it shows in the
> SQL Profiler the it does the inserts and updates, but when I look in
> the database not all the data was added. There were no errors logged.
Do you insert the data in the transaction? Maybe you do not call COMMIT at
the end of it and the transaction is rolled back after the timeout.
sincerely,
--
Sebastian K. Zaklada
Skilled Software
http://www.skilledsoftware.com
This posting is provided "AS IS" with no warranties, and confers no rights.
insertion scheduled job in SQL
I want to make a scheduled job that inserts 10 user-defined rows each minute, continuously, in Enterprise Manager.|||I'd write a stored procedure to do the insertion and then use SQL Agent to schedule it to run every minute.
But, what do you mean by "10 user defined row"?
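For example, a sketch of such a procedure (the table and the values inserted are hypothetical); schedule it with a SQL Server Agent job on a one-minute interval:
create procedure dbo.usp_insert_ten_rows
as
set nocount on
declare @i int
set @i = 0
while @i < 10
begin
insert into dbo.my_table (created_at, batch_pos) values (getdate(), @i)
set @i = @i + 1
end
go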