Monday, March 19, 2012

Installation - How do you get it to run?

I have downloaded the SQLce package from Microsoft and have run the .msi file.

How do I actually start the product and create tables and data?

Thanks,
Fred

You can't. It only really runs as part of your code (or someone else's).

If you want to create tables, etc., you're probably going to want to use SQL Server Management Studio; the Express version is free.
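For example, here's a minimal sketch of what you might run once Management Studio is connected to a Compact database file; the table and column names are just for illustration:

create table Customers (
Id int identity(1,1) primary key,
Name nvarchar(100) not null)

insert into Customers (Name) values (N'Fred')

select * from Customers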

Friday, March 9, 2012

Inserts with and without a clustered index...

Hi,
I created 2 identical tables with just 12 records each. Then I created a clustered index on one table, but not on the other. I then tried inserting 2 identical records into both tables. I guessed that the insertion would be faster in the table without the index, because there is no overhead of re-ordering the table based on the clustered index. Instead, I found that the insertion is actually faster on the table with the clustered index. How is this possible? Aren't inserts supposed to be faster on heap files?|||A while ago I looked into the same matter but found that inserting rows into a table with and without indexes took an equal amount of time; that is, the overhead of updating indexes didn't change anything. I had quite a bit more rows than 12, though.

That said, I'm not exactly sure how to interpret these kinds of results anyway. There could be a lot of influences affecting the SQL processes running in the background that in turn influence the select or insert results. The only thing I concluded was that, in _my_ case, it didn't matter that much.|||Well, if your table structure is like this:

create table dbo.t1 (
f1 int identity(1,1) not null primary key clustered,
f2 char(1) null)

then there is no difference. But if it's like this:

create table dbo.t2 (
f1 uniqueidentifier not null primary key clustered,
f2 char(1) null)

then your INSERTs are going to be slower than if you remove CLUSTERED from your PK.

In addition, it's not the presence of the clustered index that imposes overhead; it's non-clustered indexes that require every action query to be accompanied by an implicit update of index pages.|||Thanks guys for the replies. But doesn't having a clustered index mean that the records in the table are physically sorted? Therefore, even if there is enough space on the page that should contain the record being inserted, the DBMS would still have to search for that page and insert the record, as opposed to simply appending it at the end, as in the case of a heap file (unindexed file).
Maybe I see this in my example because the table is too small and everything fits on one disk page. How do you think my observations would have changed if the table had a million records?|||It still goes to the last extent allocated to the table but does a simple check: is the new value greater than the last value? If the answer is YES, that's the end of the story. Otherwise, we are talking about the overhead of the clustered index.
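To see the difference the posters describe, you could compare two hypothetical tables, one clustered on an ever-increasing identity and one clustered on a random GUID; a rough sketch (timings will vary):

create table dbo.t_seq (
f1 int identity(1,1) not null primary key clustered,
f2 char(1) null)

create table dbo.t_rnd (
f1 uniqueidentifier not null primary key clustered,
f2 char(1) null)

-- monotonic key: each new row lands at the end of the last page
insert into dbo.t_seq (f2) values ('a')

-- random key: newid() scatters rows across pages and eventually forces page splits
insert into dbo.t_rnd (f1, f2) values (newid(), 'a')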

Inserts to tables as one or two users.

SQL 7.0 SP4
Background: C++ program accesses SQL via ODBC doing
massive inserts to two different tables. The data goes
either to one table or the other, but not both.
Problem: Is it more efficient to have the program access
the database as one or two SQL users?
1. user DOG inserts to table XXX "and" table YYY.
or
2. user DOG inserts to table XXX,
user CAT inserts to table YYY?
Thanks for your help,
Don|||Unless you need to load them in parallel, there isn't any reason to use two
users. Adding connections has a slight overhead that you probably don't
need for this type of operation.
--
Andrew J. Kelly
SQL Server MVP
"Don" <ddachner@.yahoo.com> wrote in message
news:0bf401c38126$d4174120$a001280a@.phx.gbl...
> SQL 7.0 SP4
> Background: C++ program accesses SQL via ODBC doing
> massive inserts to two different tables. The data goes
> either to one table or the other, but not both.
> Problem: Is it more efficient to have the program access
> the database as one or two SQL users?
> 1. user DOG inserts to table XXX "and" table YYY.
> or
> 2. user DOG inserts to table XXX,
> user CAT inserts to table YYY?
> Thanks for your help,
> Don|||No, it wouldn't make any difference in performance; it
will only increase the work for you to manage permissions for
two users.
Also, consider using stored procedures rather than direct
insert statements. This will give a slight performance gain
and better management of the SQL code (a sketch follows the quoted message below).
>--Original Message--
>SQL 7.0 SP4
>Background: C++ program accesses SQL via ODBC doing
>massive inserts to two different tables. The data goes
>either to one table or the other, but not both.
>Problem: Is it more efficient to have the program access
>the database as one or two SQL users?
>1. user DOG inserts to table XXX "and" table YYY.
>or
>2. user DOG inserts to table XXX,
> user CAT inserts to table YYY?
>Thanks for your help,
>Don
>.
>
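A minimal sketch of the stored-procedure suggestion above; the procedure, table, and column names are hypothetical:

create procedure dbo.usp_InsertXXX
@col1 int,
@col2 varchar(50)
as
set nocount on
insert into dbo.XXX (col1, col2)
values (@col1, @col2)
go

-- the C++ client would then call it through ODBC as:
-- {call dbo.usp_InsertXXX(?, ?)}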

Inserts not committing

A user entered 20 debit memos thru the application. For each db memo, 1 - the
data is inserted into two SQL Server 2000 tables, 2 - a commit is issued
(Begin Trans / Commit Trans), 3 - a Select is used to retrieve data to
display on the db memo thru Crystal Reports and is printed. For each of the
20 entered, a db memo was printed, but when the application closed, only data
for the first four db memos was saved / committed in the database. The other
16 had not been saved / committed in the database. We know the data for the
16 were inserted, or else the Select would not have retrieved the data to
display and print the db memo. There were no error messages and the
application did not crash. This happened only three times in 18 months and we
cannot duplicate the error. The commits are all processed in the same VB form
flow code. Anyone ever have a similar issue? Anyone have an idea on what
might have occurred? Thanx.|||Are you printing inside the transaction?
AMB
"Inserts not committing" wrote:

> A user entered 20 debit memos thru the application. For each db memo, 1 - the
> data is inserted into two SQL Server 2000 tables, 2 - a commit is issued
> (Begin Trans / Commit Trans), 3 - a Select is used to retrieve data to
> display on the db memo thru Crystal Reports and is printed. For each of the
> 20 entered, a db memo was printed but when the application closed, only data
> for the first four db memos was saved / committed in the database. The other
> 16 had not been saved / committed in the database. We know the data for the
> 16 were inserted or else the Select would not have retrieved the data to
> display and print the DB memo. There were no error messages and the
> application did not crash. This happened only three times in 18 months and we
> cannot duplicate the error. The commits are all processed in the same VB form
> flow code. Anyone ever have a similar issue? Anyone have an idea on what
> might have occurred? Thanx.
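If the print step runs inside the transaction, a transaction left open and later rolled back would explain these symptoms. A sketch of the safer ordering, with hypothetical table names:

begin transaction
insert into dbo.DebitMemoHeader (MemoNo, CustomerId) values (1001, 42)
insert into dbo.DebitMemoDetail (MemoNo, Amount) values (1001, 99.95)
commit transaction

-- only after the commit, read the data back for the Crystal Reports print:
select h.MemoNo, h.CustomerId, d.Amount
from dbo.DebitMemoHeader h
join dbo.DebitMemoDetail d on d.MemoNo = h.MemoNo
where h.MemoNo = 1001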

Insertion failed

I have a service that writes a lot of records to many
tables in SQL Server. Recently, the service failed in
writing records to any of the tables, but retrieving
records still worked. By examining the application
log file, it seems that SQL Server started to
deteriorate by not allowing insertion into one table at a
time, until eventually all tables rejected insertions.
I ended up re-booting the system and everything went back
to normal. After the reboot, I checked the SQL log file
and noticed that about 18,000 records were rolled forward
during SQL Server start-up.
Does anybody have any idea as to why the insertions would
fail? And why was a checkpoint not executed even though there
were about 18,000 committed transactions?|||Mike,
Perhaps the database space filled up as one table attempted to grow again.
Tables that still had some space left in them could take a few more rows,
then they were filled up as well and would no longer take any more rows.
Etc. Etc.
If this is the case, either allocate more space or turn on autogrow for the
database in question.
With regard to your closing comment, checkpoint is not equal to a commit.
Without knowing the period covered by the 18,000 records and something more
about your server it is not possible to offer details. However, I would
think that checkpointing was working just fine.
Russell Fields
"mike" <mike@.ftsl.com> wrote in message
news:5c0401c3e5e3$cb4094d0$a101280a@.phx.gbl...
> I have a service that writes a lot of records to many
> tables in SQL server. Recently, the service failed in
> writing record to any of the tables. But, the retrieving
> of the records is still OK. By examining the application
> log file, it seems that the SQL server started to
> deteriorate by not allowing insertion in one table at a
> time and eventually all tables are rejected for insertion.
> I ended up re-booting the system and everything went back
> to normal. After the reboot, I checked the SQL log file
> and noticed that about 18,000 records were rolled forward
> during SQL Server start-up.
> Does anybody have any idea as why the insertion would
> fail? And why the checkpoint was not executed even there
> are about 18,000 committed transactions?
>|||Russell - Thank you for your reply.
I was also thinking about a possible disk space problem,
but I don't think that's the case here. There was about 10 GB
left on the disk when this happened. At the same time, the
database files (data & transaction) are set to auto-grow.
If disk space were the problem, I would think rebooting the
server couldn't solve the problem. Based on my
calculation, in a 5-minute period roughly 3,500 records would
have been inserted into the table.
>--Original Message--
>Mike,
>Perhaps the database space filled up as one table
attempted to grow again.
>Tables that still had some space left in them could take
a few more rows,
>then they were filled up as well and would no longer take
any more rows.
>Etc. Etc.
>If this is the case, either allocate more space or turn
on autogrow for the
>database in question.
>With regard to your closing comment, checkpoint is not
equal to a commit.
>Without knowing the period covered by the 18,000 records
and something more
>about your server it is not possible to offer details.
However, I would
>think that checkpointing was working just fine.
>Russell Fields
>"mike" <mike@.ftsl.com> wrote in message
>news:5c0401c3e5e3$cb4094d0$a101280a@.phx.gbl...
>> I have a service that writes a lot of records to many
>> tables in SQL server. Recently, the service failed in
>> writing record to any of the tables. But, the retrieving
>> of the records is still OK. By examining the application
>> log file, it seems that the SQL server started to
>> deteriorate by not allowing insertion in one table at a
>> time and eventually all tables are rejected for
insertion.
>> I ended up re-booting the system and everything went
back
>> to normal. After the reboot, I checked the SQL log file
>> and noticed that about 18,000 records were rolled
forward
>> during SQL Server start-up.
>> Does anybody have any idea as why the insertion would
>> fail? And why the checkpoint was not executed even there
>> are about 18,000 committed transactions?
>
>.
>|||Mike,
Sorry that I do not have a better idea.
We had problems at one time when another process besides SQL Server was
using the same disk. It wrote LOTS of data into an operating system file
eating up the disk, then either deleted it again or ran out of space itself,
causing the file write to abort. Since you have 10 GB free, it is hard
to believe that a similar thing is happening to you.
Regarding checkpoints, this is set using the "recovery interval" option.
You might read up on this and discover that the timing of checkpoints is
more complicated than you would have thought.
Russell Fields
"mike" <mike@.ftsl.com> wrote in message
news:635a01c3e5ef$46160ae0$a501280a@.phx.gbl...
> Russell - Thank you for your reply.
> I was also thinking about the possible disk space problem
> but I don't think it's the case here. There was about 10G
> left in the disk when this happened. At the same time, the
> database files (data & transaction) are set to auto-grow.
> If disk space was the problem, I would think rebooting the
> server couldn't solve the problem. Based on my
> calculation, in 5 mins period, properly 3500 record would
> have been inserted to the table.
> >--Original Message--
> >Mike,
> >
> >Perhaps the database space filled up as one table
> attempted to grow again.
> >Tables that still had some space left in them could take
> a few more rows,
> >then they were filled up as well and would no longer take
> any more rows.
> >Etc. Etc.
> >
> >If this is the case, either allocate more space or turn
> on autogrow for the
> >database in question.
> >
> >With regard to your closing comment, checkpoint is not
> equal to a commit.
> >Without knowing the period covered by the 18,000 records
> and something more
> >about your server it is not possible to offer details.
> However, I would
> >think that checkpointing was working just fine.
> >
> >Russell Fields
> >"mike" <mike@.ftsl.com> wrote in message
> >news:5c0401c3e5e3$cb4094d0$a101280a@.phx.gbl...
> >> I have a service that writes a lot of records to many
> >> tables in SQL server. Recently, the service failed in
> >> writing record to any of the tables. But, the retrieving
> >> of the records is still OK. By examining the application
> >> log file, it seems that the SQL server started to
> >> deteriorate by not allowing insertion in one table at a
> >> time and eventually all tables are rejected for
> insertion.
> >> I ended up re-booting the system and everything went
> back
> >> to normal. After the reboot, I checked the SQL log file
> >> and noticed that about 18,000 records were rolled
> forward
> >> during SQL Server start-up.
> >>
> >> Does anybody have any idea as why the insertion would
> >> fail? And why the checkpoint was not executed even there
> >> are about 18,000 committed transactions?
> >>
> >
> >
> >.
> >|||Hi Mike,
Thank you for using the newsgroup; it is my pleasure to help you with
your issue.
As I understand your problem: when inserting a large number of records
into many tables, the inserts began to be rejected table by table. You
rebooted the system and everything seemed OK again, and you found in the
SQL log that SQL Server rolled the inserts forward during startup. You
wonder why the inserts failed and where the roll forward came from, right?
Let me explain the roll forward first. The log records for a
transaction (whether a begin transaction...commit structure or a single
T-SQL statement) are written to disk before the commit acknowledgement is
sent to the client process, but the actual changed data might not have
been physically written out to the data pages. That is, writes to data
pages need only be posted to the operating system, and SQL Server can
check later to see that they were completed. They don't have to complete
immediately because the log contains all the information needed to redo
the work, even in the event of a power failure or system crash before the
write completes.
When you reboot your system and start SQL Server, recovery performs both
redo (roll forward) and undo (rollback) operations. In a redo operation,
the log is examined and each change is verified as being already
reflected in the database. (After a redo, every change made by the
transaction is guaranteed to have been applied.) If a change doesn't
appear in the database, it is performed again from the information in the
log. Every database page has an LSN in the page header that uniquely
identifies it, by version, as rows on the page are changed over time.
This page LSN reflects the location in the transaction log of the last
log entry that modified a row on this page. During a redo operation, the
LSN of each log record is compared to the page LSN of the data page that
the log entry modified; if the page LSN is less than the log LSN, the
operation indicated in the log entry should be redone, that is, rolled
forward. The 18,000 records rolled forward during SQL Server startup are
the modified records that had been recorded in the transaction log but
had not yet been written to the data file.
When you commit a transaction, the data modifications are made a
permanent part of the database; no roll forward or roll back is needed.
As for the checkpoint, it flushes dirty data and log pages from the
buffer cache of the current database, minimizing the number of
modifications that have to be rolled forward during a recovery. So those
18,000 record inserts were active operations recorded in the transaction
log after the last checkpoint occurred. For more information on
checkpoints, when they occur, and what SQL Server does when a checkpoint
comes, please refer to 'checkpoint' in the SQL Server Books Online. You
could also refer to 'recovery models'.
As for your question of how the record inserts came to be rejected by
tables one by one, it would be very helpful if you could provide the
detailed error message, the error log, and the T-SQL you were executing
at that time for further analysis. Thanks.
Best regards
Baisong Wei
Microsoft Online Support
----
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only. Thanks.
|||Hi Baisong,
Thank you for your reply; I understand the description
provided. Here is more background regarding my problem:
1. We use an ADO command object to execute single T-SQL
statements, with a time-out value set to one minute.
2. On average, we execute approximately 36,960 T-SQL
statements per hour.
3. Database and transaction files are set to auto-grow at
a rate of 10%.
4. Unfortunately, we don't know what error message was
generated by SQL Server when the records failed to be
written.
5. All the non-SQL processes appeared to be running
normally at the time of the restart. This information was
gathered using Windows Task Manager.
6. No rolled-forward information was logged in SQL Server
log files in previous restarts.
7. Prior to the restart, new information could not be
inserted into the database, but old records could be retrieved.
8. The largest table in the database had 25,000,000
records at the time of the problem.
Question:
What could be happening on the server that would result in
1) the failed inserts, and 2) the roll forward of 18,000
records at restart?|||Hi Mike,
Thank you for using the newsgroup; it is my pleasure to help you with
your issue.
In your first post you mentioned that 'By examining the application log
file, it seems that the SQL server started to deteriorate by not allowing
insertion in one table at a time and eventually all tables are rejected
for insertion', and in your last post you mentioned that 'we don't know
what the error message was generated by SQL Server when the records
failed to be written'. I wonder if you could provide the part of the log
from which you made this judgement: what made you think the insertions
were rejected when no error message indicated anything abnormal? Was
there any lock blocking, or any information from your application side?
This information is helpful for our analysis. Second, as I mentioned in
my last post, when an operation is executed it is written to the log but
not necessarily to the data (.MDF) file; when a checkpoint comes, the
changes are written to the .MDF file. I think the 18,000 records'
inserts had been written into the log file but not to the .MDF file, so
when SQL Server restarted it compared the LSNs of the pages and the log
to decide what to roll forward and what to roll back. When there is no
difference, as in your past restarts, no such action is taken. Besides
the log message, could you please provide the recovery model of your
database?
As for the ADO time-out option, I suggest you set it to zero, which
represents no limit on the time-out.
Looking forward to your reply, and thanks.
Best regards
Baisong Wei
Microsoft Online Support
----
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only. Thanks.|||Baisong,
What I meant by 'By examining the application log
file,...' is checking the SQL server tables that the
application is writing to. All the written records
contain a time stamp; therefore I can tell when the
insertion started to fail on these tables.
'we don't know what the error message was generated by SQL
Server when the records failed' simply means that the
application did not log any error returned by the ADO
command object.
The recovery model that the database uses is Full.
Mike|||Hi Mike,
Thank you for your update.
Yes, from the time stamps in your tables indicating when each record was
inserted, we can figure out when the inserting stopped. However, that
only tells us when the insert failures happened; only from the error log
and the other information collected could we figure out the cause of the
problem. From the information I have now, I suspect there were blocks,
and later deadlocks, that caused the insertions to stall and then caused
all the inserts to fail. As I mentioned in my previous reply, before the
deadlocks happened some of the inserts succeeded, and those inserts were
logged in the transaction log file but not in the data file. Before a
checkpoint came, you rebooted the system, and SQL Server restarted with
it. SQL Server compared the LSNs of the data pages and the log, then
decided which operations to roll forward and which to roll back. The
operations already in the transaction log file were rolled forward, such
as the 18,000 records in your case.
However, this is just an assumption based on my experience. To judge
what happened at that time, we need the corresponding error log for
analysis. Also, if this problem happens again, you can collect useful
information with the following commands (a sketch follows this post):
1) sp_lock: displays the active locking information
2) sp_who, sp_who2: display the current users and processes
You could also refer to this part of the SQL Server Books Online:
Troubleshooting Deadlocks
Hope this helps!
Best regards
Baisong Wei
Microsoft Online Support
----
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only. Thanks.
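A sketch of running those diagnostics while the inserts are stalled; sp_who2 and sp_lock are the commands suggested above, and DBCC OPENTRAN (an addition, not mentioned in the thread) reports the oldest active transaction:

-- who is connected, and who is blocking whom (see the BlkBy column)?
exec sp_who2

-- what locks are currently held or requested?
exec sp_lock

-- is a long-open transaction holding locks? (database name is a placeholder)
dbcc opentran ('YourDatabase')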

Wednesday, March 7, 2012

Inserting/Updating and locking

We are inserting and updating large amounts of rows and find that users are
complaining that their SELECT queries on the same tables are being blocked (not
finishing) until our INSERT or UPDATE finishes.
Is there any way to tell an INSERT or UPDATE not to lock the whole table, so
that SELECTs can still be run on the table?|||Mike W wrote:
> We are inserting and updating large amounts of rows and find that
> users are complaining that their SELECT queries on the same tables
> are blocking (not finishing) until our INSERT or UPDATE finishes.
> Is there any way to tell an INSERT or UPDATE to not lock the whole
> table so that SELECT's can still be run on the table?
That's probably not the problem. While it's likely that SQL Server is
escalating row locks to page locks, and even possibly to a table lock if
enough rows are affected by the insert/update, while that transaction is
running there are exclusive locks on those rows/pages/table.
When a page has an exclusive lock, no other readers or writers can touch
the page. They are blocked until the update/insert transaction
completes. That is, unless they use the read uncommitted or NOLOCK table
hint on the tables in the Select statements. But since you are updating
information, is it OK for your users to read dirty data? I don't know.
That's up to you to determine. Dirty data arises when data is
updated/inserted in a transaction and another user reads the data using
the read uncommitted isolation level. If the update transaction then rolls
back the changes, the user that selected the data is staring at data that
no longer exists in the database.
The other option is to keep the insert/update transactions as short as
possible. Use batches if you need to. That will keep the outstanding
locks to a minimum.
SQL Server 2005 offers a method for readers to see the original data
even while it's being updated by another user, but this option will likely
introduce overhead in the database, because the data is temporarily
written to tempdb so it's available for other users to read. And it's a
database-wide setting.
For SQL 2000, it's either dirty reads or blocking, but short
transactions mitigate most of these problems.
David Gugick
Imceda Software
www.imceda.com
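A sketch of the NOLOCK hint mentioned above, against a hypothetical Leads table; remember that this permits dirty reads:

-- read uncommitted: this select will not block behind the bulk insert/update
select LeadId, CustomerName
from dbo.Leads with (nolock)
where CreatedDate >= '20041201'

|||Mike W wrote: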
> We are inserting and updating large amounts of rows and find that
> users are complaining that their SELECT queries on the same tables
> are blocking (not finishing) until our INSERT or UPDATE finishes.
> Is there any way to tell an INSERT or UPDATE to not lock the whole
> table so that SELECT's can still be run on the table?
Another thing to consider is where the newly inserted rows are going.
For example, if you have a clustered index on an IDENTITY column, then
inserting new rows will have less of an effect on existing data because
the new most of the rows are inserted on new pages. Updated rows will
still cause problems.
What is your clustered index on? What data are you inserting? Can you
insert and update in different transactions? Can you also update and
insert in small amounts, say 1,000 rows at a time, rather than all at
once?
David Gugick
Imceda Software
www.imceda.com
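A sketch of the batching idea on SQL 2000, using SET ROWCOUNT to update a hypothetical Leads table 1,000 rows at a time; each batch commits on its own, so locks are held only briefly:

set rowcount 1000
while 1 = 1
begin
update dbo.Leads
set Status = 'Processed'
where Status = 'New'
if @@rowcount = 0 break
end
set rowcount 0 -- always reset

|||Thanks for your responses. I believe the table that is being updated does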
have a Clustered index on the Primary key field, but it's not an Identity
field.
The data we are inserting is customer lead information.
I will ask about the last 2 questions since it is not me that is doing the
updates.
Thanks again.
"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:On30MJv4EHA.1596@.tk2msftngp13.phx.gbl...
> Mike W wrote:
>> We are inserting and updating large amounts of rows and find that
>> users are complaining that their SELECT queries on the same tables
>> are blocking (not finishing) until our INSERT or UPDATE finishes.
>> Is there any way to tell an INSERT or UPDATE to not lock the whole
>> table so that SELECT's can still be run on the table?
> Another thing to consider is where the newly inserted rows are going. For
> example, if you have a clustered index on an IDENTITY column, then
> inserting new rows will have less of an effect on existing data because
> the new most of the rows are inserted on new pages. Updated rows will
> still cause problems.
> What is your clustered index on? What data are you inserting? Can you
> insert and update in different transactions? Can you also update and
> insert in small amounts, say 1,000 rows at a time, rather than all at
> once?
>
> --
> David Gugick
> Imceda Software
> www.imceda.com|||Mike W wrote:
> Thanks for your responses. I believe the table that is being updated
> does have a Clustered index on the Primary key field, but it's not an
> Identity field.
> The data we are inserting is customer lead information.
> I will ask about the last 2 questions since it is not me that is
> doing the updates.
> Thanks again.
>
If the clustered index is on something other than a date or an identity,
you have a few potential issues:
1 - The clustered index is probably causing page splits as new rows are
inserted. This is a very expensive operation because it requires that a page
be split, a new one created, and rows moved around. While this operation
is going on, both pages are locked by SQL Server, adding to the locking
overhead of this operation.
2 - The clustered index is likely requiring that the disk heads move all
around the physical disk to locate the page to update. There is nothing
slower than random disk access for a database, further slowing down the
operation.
You can mitigate some of the problems here (not the physical disk heads
moving around) by leaving space in your clustered index using a fill
factor. However, this requires you rebuild the clustered index as needed
to maintain the free space before a bulk update of data. Using a fill
factor will leave a percentage of space available in each row and help
prevent page splitting. However, it will make the table a percentage
larger, but this may not be a problem if the free space is going to be
filled with new data anyway.
And the rebuilding of the clustered index will likely affect concurrency
for that table while the rebuild occurs.
If your clustered index is, in fact, on a column or set of columns that
are causing these problems, you may need to consider changing the index
to something that keeps all new data at the end of the table. The other
option is to temporarily remove the clustered index during the load and
rebuild when complete, but this will cause an automatic rebuild of all
non-clustered indexes as well and may be more expensive than what you
want.
A clustered index on an identity can prevent a host of problems and
speed the load process significantly in this case. It's worth a test to
see if it helps the load and if it affects other queries run on that
table.
David Gugick
Imceda Software
www.imceda.com
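A sketch of the fill factor approach described above, on SQL 2000; the table and index names are hypothetical:

-- rebuild the clustered index leaving 20% free space on each page
dbcc dbreindex ('dbo.Leads', 'PK_Leads', 80)

Rebuilding with a fill factor of 80 before each bulk load leaves room on every page for new rows, which reduces page splits at the cost of a somewhat larger table.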

Inserting/Updating and locking

We are inserting and updating large amounts of rows and find that users are
complaining that their SELECT queries on the same tables are blocking (not
finishing) until our INSERT or UPDATE finishes.
Is there any way to tell an INSERT or UPDATE to not lock the whole table so
that SELECT's can still be run on the table?
Mike W wrote:
> We are inserting and updating large amounts of rows and find that
> users are complaining that their SELECT queries on the same tables
> are blocking (not finishing) until our INSERT or UPDATE finishes.
> Is there any way to tell an INSERT or UPDATE to not lock the whole
> table so that SELECT's can still be run on the table?
That's probably not the problem. It's likely that SQL Server is
escalating row locks to page locks, and possibly even a table lock if
enough rows are affected by the insert/update; while that transaction is
running, there are exclusive locks on those rows/pages/table.
When a page has an exclusive lock, no other readers or writers can touch
the page. They are blocked until the update/insert transaction
completes. That is, unless they use the read uncommitted or NOLOCK table
hint on the tables in the SELECT statements. But since you are updating
information, is it OK for your users to read dirty data? I don't know.
That's up to you to determine. Dirty data arises when data is
updated/inserted in a transaction and another user reads the data using
the read uncommitted isolation level. If the update transaction then rolls
back the changes, the user that selected the data is staring at data that
no longer exists in the database.
The other option is to keep the insert/update transactions as short as
possible. Use batches if you need to. That will keep the outstanding
locks to a minimum.
SQL Server 2005 offers a method for readers to see the original data
even while it's being updated by another user, but this option will likely
introduce overhead in the database because the data is temporarily
written to tempdb so it's available for other users to read. And it's a
database-wide setting.
For SQL 2000, it's either dirty reads or blocking. But short
transactions mitigate most of these problems.
David Gugick
Imceda Software
www.imceda.com
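
As a hedged illustration of both suggestions (table and column names such as Leads and Processed are assumptions, not from the original posts): readers can opt into dirty reads with the NOLOCK hint, and a large update can be broken into short batches with SET ROWCOUNT on SQL 2000:

-- Dirty read: not blocked by in-flight inserts/updates, but may see uncommitted data
SELECT * FROM Leads WITH (NOLOCK)

-- Batched update: each short transaction keeps outstanding locks to a minimum
SET ROWCOUNT 1000
WHILE 1 = 1
BEGIN
    UPDATE Leads SET Processed = 1 WHERE Processed = 0
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0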
|||Mike W wrote:
> We are inserting and updating large amounts of rows and find that
> users are complaining that their SELECT queries on the same tables
> are blocking (not finishing) until our INSERT or UPDATE finishes.
> Is there any way to tell an INSERT or UPDATE to not lock the whole
> table so that SELECT's can still be run on the table?
Another thing to consider is where the newly inserted rows are going.
For example, if you have a clustered index on an IDENTITY column, then
inserting new rows will have less of an effect on existing data because
most of the new rows are inserted on new pages. Updated rows will
still cause problems.
What is your clustered index on? What data are you inserting? Can you
insert and update in different transactions? Can you also update and
insert in small amounts, say 1,000 rows at a time, rather than all at
once?
David Gugick
Imceda Software
www.imceda.com
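
A minimal sketch of the layout David describes (table and column names are assumptions): with the clustered index on an IDENTITY column, new rows land on fresh pages at the end of the table:

CREATE TABLE Leads (
    LeadID int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,  -- new rows go to the end
    MembershipNo int NULL,
    CreatedOn datetime NOT NULL DEFAULT GETDATE()
)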
|||Thanks for your responses. I believe the table that is being updated does
have a Clustered index on the Primary key field, but it's not an Identity
field.
The data we are inserting is customer lead information.
I will ask about the last 2 questions since it is not me that is doing the
updates.
Thanks again.
"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:On30MJv4EHA.1596@.tk2msftngp13.phx.gbl...
> Mike W wrote:
> Another thing to consider is where the newly inserted rows are going. For
> example, if you have a clustered index on an IDENTITY column, then
> inserting new rows will have less of an effect on existing data because
> most of the new rows are inserted on new pages. Updated rows will
> still cause problems.
> What is your clustered index on? What data are you inserting? Can you
> insert and update in different transactions? Can you also update and
> insert in small amounts, say 1,000 rows at a time, rather than all at
> once?
>
> --
> David Gugick
> Imceda Software
> www.imceda.com
|||Mike W wrote:
> Thanks for your responses. I believe the table that is being updated
> does have a Clustered index on the Primary key field, but it's not an
> Identity field.
> The data we are inserting is customer lead information.
> I will ask about the last 2 questions since it is not me that is
> doing the updates.
> Thanks again.
>
If the clustered index is on something other than a date or an identity,
you have a few potential issues:
1- The clustered index is probably causing page splits as new rows are
inserted. A page split is a very expensive operation: a new page must be
allocated and rows moved between the two pages. While this operation
is going on, both pages are locked by SQL Server, adding to the locking
overhead of the operation.
2- The clustered index is likely requiring the disk heads to move all
around the physical disk to locate the page to update. There is nothing
slower than random disk access for a database, further slowing down the
operation.
You can mitigate some of the problems here (not the physical disk heads
moving around) by leaving space in your clustered index using a fill
factor. However, this requires you to rebuild the clustered index as needed
to maintain the free space before a bulk update of data. Using a fill
factor leaves a percentage of space available on each page and helps
prevent page splitting. It will make the table a percentage
larger, but this may not be a problem if the free space is going to be
filled with new data anyway.
And the rebuilding of the clustered index will likely affect concurrency
for that table while the rebuild occurs.
If your clustered index is, in fact, on a column or set of columns that
are causing these problems, you may need to consider changing the index
to something that keeps all new data at the end of the table. The other
option is to temporarily remove the clustered index during the load and
rebuild when complete, but this will cause an automatic rebuild of all
non-clustered indexes as well and may be more expensive than what you
want.
A clustered index on an identity can prevent a host of problems and
speed the load process significantly in this case. It's worth a test to
see if it helps the load and if it affects other queries run on that
table.
David Gugick
Imceda Software
www.imceda.com
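
A hedged sketch of the fill-factor approach on SQL 2000 (the table and index names are assumptions); the rebuild has to be rerun before each bulk load to restore the free space:

-- Rebuild the clustered index leaving 20% free space on each page
DBCC DBREINDEX ('dbo.Leads', 'PK_Leads', 80)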

Inserting Values Into Primary Key and Foreign Key Tables

I have two tables that I would like to insert values into at the same time
and would like help with the SQL statement to do so. One table (Member_Info)
has a PK and the other (Image_Info) a FK. The relationship the two tables
share is through the (E_Mail) column. Example structure:
Member_Info table columns:
First_Name
Last_Name
Birthday
E_Mail (PK)
Image_Info table columns:
E_Mail (FK)
Use
Name
Please Help! Thanks!
Use an INSERT trigger on the Member_Info table.
"Willie Davis via webservertalk.com" wrote:

> I have two tables that I would like to insert values into at the same time
> and would like help with the SQL statement to do so. One table (Member_Info)
> has a PK and the other (Image_Info) a FK. The relationship the two tables
> share is through the (E_Mail) column. Example structure:
> Member_Info table columns:
> First_Name
> Last_Name
> Birthday
> E_Mail (PK)
> Image_Info table columns:
> E_Mail (FK)
> Use
> Name
>
> Please Help! Thanks!
>|||Since you haven't said which table will be filled by your code and which table
has to be filled "automagically", the following is based on my guesswork.
CREATE TRIGGER TrgIns ON Member_Info
FOR INSERT
AS
BEGIN
INSERT INTO Image_Info (E_Mail, Name)
SELECT E_Mail, COALESCE(First_Name,'') + ' ' + COALESCE(Last_Name,'') FROM
Inserted
END
I don't know whether your Name column will store the first and the last name or
just one of them, but you should consider an UPDATE trigger to update this
information if it's changed in the primary key table. (Though you probably won't
need it if it just stores the name of the member, since that is already
stored in the PK table.)
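
A minimal sketch of such an UPDATE trigger, assuming the same column names as above and that the E_Mail key itself does not change:

CREATE TRIGGER TrgUpd ON Member_Info
FOR UPDATE
AS
BEGIN
UPDATE img
SET Name = COALESCE(i.First_Name,'') + ' ' + COALESCE(i.Last_Name,'')
FROM Image_Info AS img
INNER JOIN Inserted AS i ON img.E_Mail = i.E_Mail
END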
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
--
"Willie Davis via webservertalk.com" <forum@.nospam.webservertalk.com> schrieb im
Newsbeitrag news:4183b2e7baef434899249fa71b414bf6@.SQ
webservertalk.com...
>I have two tables that I would like to insert values into at the same time
> and would like help with the SQL statement to do so. One table
> (Member_Info)
> has a PK and the other (Image_Info) a FK. The relationship the two tables
> share is through the (E_Mail) column. Example structure:
> Member_Info table columns:
> First_Name
> Last_Name
> Birthday
> E_Mail (PK)
> Image_Info table columns:
> E_Mail (FK)
> Use
> Name
>
> Please Help! Thanks!|||Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, datatypes, etc. in your
schema are. Sample data is also a good idea, along with clear
specifications.
>> I have two tables that I would like to insert values into at the same time <<
SQL works with one table at a time. Put your code into a stored
procedure, do two inserts, and depend on DRI actions to maintain
integrity.
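
A minimal sketch of such a procedure, assuming the table and column names from the original post (column types and lengths are assumptions):

CREATE PROCEDURE InsertMemberWithImage
    @First_Name varchar(50),
    @Last_Name varchar(50),
    @Birthday datetime,
    @E_Mail varchar(100),
    @Use varchar(80),
    @Name varchar(80)
AS
BEGIN
    BEGIN TRANSACTION
    -- Parent row first, so the FK on Image_Info.E_Mail is satisfied
    INSERT INTO Member_Info (First_Name, Last_Name, Birthday, E_Mail)
    VALUES (@First_Name, @Last_Name, @Birthday, @E_Mail)
    INSERT INTO Image_Info (E_Mail, [Use], Name)
    VALUES (@E_Mail, @Use, @Name)
    COMMIT TRANSACTION
END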

Inserting values into multiple tables

Hi. I have a primary key field in one table that is a foreign key field in another table. How do I write an INSERT statement that puts the value in both tables? Can this even be done?
On Thu, 20 May 2004 20:56:02 -0700, twright wrote:

>Hi. I have a primary key field in one table that is a foreign key field in another table. How do I write an INSERT statement that puts the value in both tables? Can this even be done?
Hi twright,
Not in one statement. You have to use two INSERT statements. If you want
to be sure that either both or none are executed, whatever happens,
enclose them in a transaction:
BEGIN TRANSACTION
INSERT FirstTable (KeyColumn)
VALUES (17)
INSERT SecondTable (OtherKeyColumn, ForeignKeyColumn)
VALUES (231, 17)
COMMIT TRANSACTION
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)

inserting values for alias names within tables?

hi everyone

I have a database table in which one of the fields is an alias for the identity field. That is, the alias field self-references the table and takes its value from the "id" field of that table,
as follows:
system_id int
system_name varchar(20)
sys_alias int
sys_pro varchar(80)
sys_values varchar(80)
So sys_alias takes whatever value the system assigns to the identity field, system_id, when values are inserted into the table.
Now my problem is that I have to insert records into this table from other tables using INSERT INTO...SELECT FROM statements, but since the id values are generated by the server during insertion, I don't know how to supply values for the alias.

e.g. insert into my_table(system_name,sys_alias,sys_pro,sys_values)
select x,(how to give this field),y,z
from another_table
where...

Anyone who understands my problem and can help, please post a reply!
thanks :)

shuchi

Sounds like you'd have to do this in a trigger.|||what is the expression for the "alias" column? Is it exactly the same value, or is it something like a character prefix plus the value?

perhaps you might consider using a view instead

rudy|||hi...
It's exactly the same value, just like a copy of the identity column.
I tried using max(syb_identity) and the @@identity function, but they don't work, especially since I put them in a subquery, so they are evaluated just once and insert the same identity value for all the other inserted records!
Can anyone tell me what sort of view I should create?
I am doing this whole process through a Perl script, so whatever SQL needs to run is run from an SQL file invoked by the script. Maybe some sort of trigger would work that automatically inserts the newest identity value into the alias field as records are inserted?
I really need help here, so any ideas would be great :)
Thanks for all the suggestions :)

-shuchi|||if this "alias" column is to have exactly the same values, then you don't really need it

just select it twice in any query --

select system_id
     , system_name
     , system_id as sys_alias
     , sys_pro
     , sys_values
  from yourtable

rudy|||Thanks a ton for all the help... managed it :)

-shuchi
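
As a hedged illustration of rudy's view suggestion (the table name my_table is an assumption), the identity column can simply be exposed a second time under the alias name:

CREATE VIEW my_table_with_alias
AS
SELECT system_id,
       system_name,
       system_id AS sys_alias,  -- same value, no second column needed
       sys_pro,
       sys_values
FROM my_table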

Inserting unique values into a different table if they don't exist already

Hi

I am trying to insert values from one table into another where they don't exist yet. My problem is that, because the query is joined to the other table, it keeps selecting values that I don't want.

Code Snippet

SET NOCOUNT ON

INSERT INTO _MemberProfileLookupValues (MemberID, OptionID, ValueID)
SELECT M.MemberID, '6',
       CASE M.MaritalStatusID WHEN 1 THEN '7'
                              WHEN 2 THEN '8'
                              WHEN 3 THEN '9'
                              WHEN 4 THEN '10'
       END
FROM Members M
INNER JOIN _MemberProfileLookupValues ML
        ON M.MemberID = ML.MemberID
WHERE M.Active = 1
  AND OptionID <> 6

When I execute that code it returns all the values. Say OptionID = 3 (smoking) already exists in the _MemberProfileLookupValues table for a member; the query will still select that person's MemberID.

I want to insert only values for members that aren't already in _MemberProfileLookupValues. (I think it is the join in my code that causes this, but I don't know how to select only members that aren't in the table; I have a few other very similar queries that insert different values.) So ultimately:

ONLY INSERT the MemberID, the value 6, and the StatusID of X if it is not in the table already.

Any ideas / help will be greatly appreciated. Please help.

Kind Regards

Carel Greaves

The following query inserts the new members for whom OptionID 6 does not yet exist in the _MemberProfileLookupValues table:

Code Snippet

SET NOCOUNT ON

INSERT INTO _MemberProfileLookupValues (MemberID, OptionID, ValueID)
SELECT M.MemberID,
       '6',
       CASE M.MaritalStatusID WHEN 1 THEN '7'
                              WHEN 2 THEN '8'
                              WHEN 3 THEN '9'
                              WHEN 4 THEN '10'
       END
FROM Members M
WHERE NOT EXISTS
      (
        SELECT 1 FROM _MemberProfileLookupValues ML
        WHERE M.MemberID = ML.MemberID AND OptionID = 6
      )
  AND M.Active = 1

|||

Thanks a lot.

That is exactly what I was looking for.

Kind Regards

Carel Greaves

Inserting to table with dashes in fieldnames

Hello,

I have had a request, one of our tables is used as a report and I have been
asked that all fieldnames for months have dashes in them,
like Jan-05 instead of Jan05 and so on...

Now what we have is a CURSOR which loops through all values in another table
generating these fieldnames, like 'Jan-05', Feb-05' etc..
Then the table definition is modified so these months display as fieldnames.

That is fine,
Except later it calls another stored procedure which inserts values from
another table into this one.
INSERT INTO TableName
SELECT *
FROM Tablename

Obviously this is failing because the newly amended table now has dashes in
the fieldnames.

Now as the fieldnames are created dynamically I cannot simply do INSERT
Tablename ([Jan-05], [Feb-05]) etc., as one would normally do,

How could I do this bearing in mind its not always going to be 12 months we
use,

Basically I am asking how can I modify this INSERT statement so it takes
into account the fieldnames with dashes in them
Bearing in mind I don't know what the fieldnames are called as they are
created dynamically by a CURSOR in a different routine.
And its not always 12 months.

have you any ideas?
and the dashes are required

Jayne

Dynamically built field names are a bit unusual. But you will most likely
have to dynamically build the INSERT command and then execute it. Use the
INFORMATION_SCHEMA.COLUMNS view to define a cursor of the column names you need.

Ray
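
A hedged sketch of that approach on SQL 2000 (TableName and SourceTable are placeholder names): concatenate the bracketed column names from INFORMATION_SCHEMA.COLUMNS, then execute the built INSERT:

DECLARE @cols varchar(8000), @sql varchar(8000)

-- Build a bracketed, comma-separated column list, so dashes in names are harmless
SELECT @cols = COALESCE(@cols + ', ', '') + '[' + COLUMN_NAME + ']'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TableName'

SET @sql = 'INSERT INTO TableName (' + @cols + ') SELECT ' + @cols + ' FROM SourceTable'
EXEC (@sql)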

"Little PussyCat" <SPAMSPAM@.NOSPAM.com> wrote in message
news:5s5ug2-nc1.ln1@.tiger.sphynx...
> Hello,
> I have had a request, one of our tables is used as a report and I have
> been
> asked that all fieldnames for months have dashes in them,
> like Jan-05 instead of Jan05 and so on...
> Now what we have is a CURSOR which loops through all values in another
> table
> generating these fieldnames, like 'Jan-05', Feb-05' etc..
> Then the table definition is modified so these months display as
> fieldnames.
> That is fine,
> Except later it calls another stored procedure which inserts values from
> another table into this one.
> INSERT INTO TableName
> SELECT *
> FROM Tablename
> Obviously this is failing because the newly amended table now has dashes
> in
> the fieldnames now.
> Now as the fieldnames are created dynamically I cannot simply do INSERT
> Tablename ([Jan-05], [feb-5]) etc, as one would normally do,
> How could I do this bearing in mind its not always going to be 12 months
> we
> use,
> Basically I am asking how can I modify this INSERT statement so it takes
> into account the fieldnames with dashes in them
> Bearing in mind I don't know what the fieldnames are called as they are
> created dynamically by a CURSOR in a different routine.
> And its not always 12 months.
> have you any ideas?
> and the dashes are required
> Jayne|||Little PussyCat (SPAMSPAM@.NOSPAM.com) writes:
> I have had a request, one of our tables is used as a report and I have
> been asked that all fieldnames for months have dashes in them, like
> Jan-05 instead of Jan05 and so on...
> Now what we have is a CURSOR which loops through all values in another
> table generating these fieldnames, like 'Jan-05', Feb-05' etc.. Then the
> table definition is modified so these months display as fieldnames.

This is a design that I would strongly discourage the use of. Month data
is best stored as rows in a subtable. Then if someone wants to look
at this with the rows as columns, you define a view for him, but leave
the underlying tables be.
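
A hedged sketch of that design (all names and the two sample months are assumptions): monthly amounts stored as rows, exposed as columns through a view:

CREATE TABLE MonthValues (
    ReportID   int      NOT NULL,
    MonthStart datetime NOT NULL,
    Amount     money    NULL,
    PRIMARY KEY (ReportID, MonthStart)
)
GO
CREATE VIEW MonthReport
AS
SELECT ReportID,
       SUM(CASE WHEN MonthStart = '20050101' THEN Amount END) AS [Jan-05],
       SUM(CASE WHEN MonthStart = '20050201' THEN Amount END) AS [Feb-05]
FROM MonthValues
GROUP BY ReportID
GO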

> That is fine,
> Except later it calls another stored procedure which inserts values from
> another table into this one.
> INSERT INTO TableName
> SELECT *
> FROM Tablename
> Obviously this is failing because the newly amended table now has dashes
> in the fieldnames now.

There are two more cases of violations of good practice here:

1) Always list columns explicitly with INSERT.
2) Don't use SELECT * in production code.

Anyway, if you let this poor table be and define that view instead, the
problem will go away.

This may not be the answer you were looking for, but creating columns
dynamically is definitely not what relational databases are designed for,
and if you go there, it becomes very painful.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||> This is a design that I would strongly discourage use of. Month data
> is best stored as rows in a subtable. Then if someone want to looks
> at this with the rows as column, you define a view for him, but let
> the underlying tables be.

I have recently taken over from someone else in a job so for the time being
I am having to work with how the original code is written, plus I am
learning how all the systems hook together.
Yes, in my opinion I would have done it in a View but until I feel 100% sure
of how all the systems work I would rather work this way.

Oh, I solved my problem. The stored procedure was failing at the ALTER TABLE
command in that CURSOR.
It was failing because of the dashes in the fieldname, so all I have to do
is surround the fieldnames with square brackets when it's creating the
column, like this:

Exec ('ALTER TABLE TableName ADD [' + @ColName + '] Money NULL')

And yes I know having dashes in fieldnames is a big No No, and normally I
wouldn't do it, later on when I feel more comfortable in my job I will
change this to a View then I can simply create Aliases for the Month Names

Thanks for your help anyway :-)

Regards,

Jayne|||Little PussyCat (SPAMSPAM@.NOSPAM.com) writes:
> I have recently taken over from someone else in a job so for the time
> being I am having to work with how the original code is written, plus I
> am learning how all the systems hook together. Yes, in my opinion I
> would have done it in a View but until I feel 100% sure of how all the
> systems work I would rather work this way.

I think that is a fallacy. You only run the risk of being led out onto
lonely winding roads where you have no idea where they lead, or
even what is behind the next curve.

> And yes I know having dashes in fieldnames is a big No No,

I wouldn't say so. After all, SQL is designed to accommodate this
possibility. I don't use it myself, as all those brackets or quotes
make the code more bulky. I'd say the dashes are the smallest of your
problems.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Friday, February 24, 2012

Inserting to multiple tables in SQL Server 2005 that use identity specification

Hi, I am having a bit of hassle with trying to enter details to multiple tables in SQL Server 2005.
I have four tables, an

Attendance Table (AttendanceID(PK Identity specific), MembershipNo, Date)
Resistance Table (ResistId(PK Identity specific), Weight , Reps, Sets)
Aerobics Tables(AerobicsID(PK Identity specific), MachineID, Intensity, Time)
and a linking table for all of them... ExerciseMaster(AttendanceID, ResistanceID, AerobicsID)

My problem is that I can insert data into each specific table by itself using separate insert statements, e.g....

//insert an attendance record to the attendance table
string userID;

userID = Session["User"].ToString();

SqlDataSource pgpDataSource = new SqlDataSource();
pgpDataSource.ConnectionString = ConfigurationManager.ConnectionStrings["ConnectionStringLogin"].ToString();

pgpDataSource.InsertCommandType = SqlDataSourceCommandType.Text;
pgpDataSource.InsertCommand = "INSERT INTO [Attendance] ([MembershipNo], [Date]) VALUES (@.MembershipNo, @.Date)";
pgpDataSource.InsertParameters.Add("MembershipNo", userID);
pgpDataSource.InsertParameters.Add("Date", txtVisitDate.Text);

int RowsAffected = 0;

try
{
RowsAffected = pgpDataSource.Insert();
}

catch (Exception ex)
{
Server.Transfer("~/Problem.aspx");
}

finally
{
pgpDataSource = null;
}

//insert an aerobics record into the aerocibs table

SqlDataSource pgpDataSource = new SqlDataSource();
pgpDataSource.ConnectionString = ConfigurationManager.ConnectionStrings["ConnectionStringLogin"].ToString();

pgpDataSource.InsertCommandType = SqlDataSourceCommandType.Text;
pgpDataSource.InsertCommand = "INSERT INTO [Aerobics] ([MachineID], [Intensity], [ExerciseTime]) VALUES (@.MachineID, @.Intensity, @.ExerciseTime)";

pgpDataSource.InsertParameters.Add("MachineID", rower.ToString());
pgpDataSource.InsertParameters.Add("Intensity", txtRowerLevel.Text);
pgpDataSource.InsertParameters.Add("ExerciseTime", txtRowerTime.Text);

int RowsAffected = 0;

try
{
RowsAffected = pgpDataSource.Insert();
}

catch (Exception ex)
{
Server.Transfer("~/Problem.aspx");
}

finally
{
pgpDataSource = null;
}
//same code as above for the resistance table

However, I am facing the problem that this does not populate the link table (ExerciseMaster) with any information, as I am unable to write into it the relevant IDs that have been auto-generated by SQL Server for each of the sub-tables.
I have read several forums where they recommend using something called @@IDENTITY, but I have no idea how or where to use this in order to fill my exercise table...
Any help would be so much appreciated... Also, hopefully what I have said all makes sense and someone will be able to help me... oh, and one more thing: this is an ASP.NET page coded in C#
Cheers
Scotty

I think the problem is in your design. I'm thinking you should dump the ExerciseMaster table.

If you have a one-to-one relationship between attendance, Resistance, and Aerobics, then a single table will do. If you have a one-to-many relationship (for a given attendance, there may be zero or more resistance sessions and zero or more aerobics sessions) then this schema should work for you:

1. Attendance Table (AttendanceID(PK Identity specific), MembershipNo, Date)
2. Resistance Table (ResistId(PK Identity specific), AttendanceID(FK), Weight , Reps, Sets)
3. Aerobics Table(AerobicsID(PK Identity specific), AttendanceID(FK), MachineID, Intensity, Time)
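
In DDL terms, a hedged sketch of that schema (column types are assumptions):

CREATE TABLE Attendance (
    AttendanceID int IDENTITY(1,1) PRIMARY KEY,
    MembershipNo int NOT NULL,
    [Date] datetime NOT NULL
)

CREATE TABLE Resistance (
    ResistID int IDENTITY(1,1) PRIMARY KEY,
    AttendanceID int NOT NULL REFERENCES Attendance (AttendanceID),  -- FK to the parent visit
    [Weight] smallint NULL,
    Reps smallint NULL,
    Sets smallint NULL
)

CREATE TABLE Aerobics (
    AerobicsID int IDENTITY(1,1) PRIMARY KEY,
    AttendanceID int NOT NULL REFERENCES Attendance (AttendanceID),  -- FK to the parent visit
    MachineID smallint NULL,
    Intensity smallint NULL,
    ExerciseTime int NULL
)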

You can insert the attendance record and then either recover the ID created using @@Identity or switch to GUID identifiers and let your interface create the ID. Then use that ID when you insert the Resistance and Aerobics records.|||Thanks Blindman, I see where you are coming from about the tables and I agree with what you have suggested. However, I am still a bit lost as to what the actual code would be to use the @@IDENTITY...
eg.
Suppose I have changed my database design to reflect your suggestions and want to add a record to the Aerobics table: how do I take the AttendanceID that was generated in the Attendance table and enter it into the AttendanceID in the Aerobics table? I have no idea how or where to use @@IDENTITY in my code...

//insert an attendance record to the attendance table
string userID;

userID = Session["User"].ToString();

SqlDataSource pgpDataSource = new SqlDataSource();
pgpDataSource.ConnectionString = ConfigurationManager.ConnectionStrings["ConnectionStringLogin"].ToString();

pgpDataSource.InsertCommandType = SqlDataSourceCommandType.Text;
pgpDataSource.InsertCommand = "INSERT INTO [Attendance] ([MembershipNo], [Date]) VALUES (@.MembershipNo, @.Date)";
pgpDataSource.InsertParameters.Add("MembershipNo", userID);
pgpDataSource.InsertParameters.Add("Date", txtVisitDate.Text);

int RowsAffected = 0;

try
{
RowsAffected = pgpDataSource.Insert();
}

catch (Exception ex)
{
Server.Transfer("~/Problem.aspx");
}

finally
{
pgpDataSource = null;
}

//insert an aerobics record into the aerocibs table

SqlDataSource pgpDataSource = new SqlDataSource();
pgpDataSource.ConnectionString = ConfigurationManager.ConnectionStrings["ConnectionStringLogin"].ToString();

pgpDataSource.InsertCommandType = SqlDataSourceCommandType.Text;
pgpDataSource.InsertCommand = "INSERT INTO [Aerobics] ([AttendanceID], [/COLOR][MachineID], [Intensity], [ExerciseTime]) VALUES (@.MachineID, @.Intensity, @.ExerciseTime)";

pgpDataSource.InsertParameters.Add("MachineID", rower.ToString());
pgpDataSource.InsertParameters.Add("Intensity", txtRowerLevel.Text);
pgpDataSource.InsertParameters.Add("ExerciseTime", txtRowerTime.Text);

int RowsAffected = 0;

try
{
RowsAffected = pgpDataSource.Insert();
}

catch (Exception ex)
{
Server.Transfer("~/Problem.aspx");
}

finally
{
pgpDataSource = null;
}|||Well, the other thing you need to do is to stop issuing dynamic SQL code from your application. It is more complicated, less efficient, and insecure.

Write a stored procedure called "InsertAttendance" that takes MembershipNo and Date as parameters. The stored procedure will insert the new record (after checking to make sure it does not already exist), and then call @@Identity (better yet, use SCOPE_IDENTITY()...) to get the ID that was just created. The sproc can pass the new ID back as an output parameter.

Or, as I said, you can switch to using GUIDs rather than identity integers as your primary key. Then you can actually create the new GUID within your application code, and you never have to query the database to find out what ID was assigned.|||Is there any chance you could provide some sample code as to how I would call @@IDENTITY, where I would call it from, and how I would use it to actually write the information to another table? I'm a bit lost as I'm very new to all this...
Thanks|||set nocount on
--Create a dummy table
create table SampleTable (IdentityColumn int identity, DummyData varchar(50))
go

--Create a dummy sproc
create procedure InsertSample (@DummyValue varchar(50), @NewIdentity int output)
as
begin
insert into SampleTable (DummyData) values(@DummyValue)
set @NewIdentity = Scope_Identity()
end
go

--Run some insert statements and check the ID values retrieved
declare @NewID int
exec InsertSample 'Fubar1', @NewIdentity = @NewID output
select @NewID
exec InsertSample 'Fubar2', @NewIdentity = @NewID output
select @NewID
exec InsertSample 'Fubar3', @NewIdentity = @NewID output
select @NewID

--Cleanup
drop procedure InsertSample
drop table SampleTable
go|||Thanks for that. I have it working, but only when I execute the stored procedure from the database. Unfortunately, I am still unable to actually use it within the application...

string userID;
const int macId = 3;

userID = Session["User"].ToString();

SqlDataSource pgpDataSource = new SqlDataSource();
pgpDataSource.ConnectionString = ConfigurationManager.ConnectionStrings["ConnectionStringLogin"].ToString();
pgpDataSource.SelectCommandType = SqlDataSourceCommandType.Text;

pgpDataSource.InsertCommandType = SqlDataSourceCommandType.StoredProcedure;
pgpDataSource.InsertCommand = "InsertAttendance";

pgpDataSource.InsertParameters.Add("MembershipNo", userID);
pgpDataSource.InsertParameters.Add("Date", txtVisitDate.Text);
//I think this may be the problem, both here and again below where
// I have the line of code commented out again...
//pgpDataSource.InsertParameters.Add("AttendanceID", newID.ToString());
pgpDataSource.InsertParameters.Add("MachineID", macId.ToString());
pgpDataSource.InsertParameters.Add("Intensity", txtRowerLevel.Text );
pgpDataSource.InsertParameters.Add("ExerciseTime", txtRowerTime.Text);
//pgpDataSource.InsertParameters.Add("AttendanceID", newID.ToString());
pgpDataSource.InsertParameters.Add("Reps", txtReps.Text);
pgpDataSource.InsertParameters.Add("Sets", txtSets.Text);

int RowsAffected = 0;

try
{
RowsAffected = pgpDataSource.Insert();
}

catch (Exception ex)
{
Server.Transfer("~/Problem.aspx");
}

finally
{
pgpDataSource = null;
}

if (RowsAffected != 1)
{
Response.Redirect("~/Problem.aspx");
}
else
{
Response.Redirect("~/Log.aspx");
}
}

This is the stored procedure I created to fill all three tables, which works, but not when I try to use it in the application...

ALTER PROCEDURE InsertAttendance
@NewIdentity int output,
@MembershipNo smallint,
@Date datetime,
@Reps smallint,
@Sets smallint,
@Weight smallint,
@MachineID smallint,
@Intensity smallint,
@ExerciseTime int
AS
BEGIN

INSERT Attendance (MembershipNo, Date)
VALUES (@MembershipNo, @Date)

INSERT Aerobics (AttendanceID, MachineID, Intensity, ExerciseTime)
VALUES (@NewIdentity, @MachineID, @Intensity, @ExerciseTime)

INSERT Resistance (AttendanceID, Reps, Sets, Weight)
VALUES (@NewIdentity, @Reps, @Sets, @Weight)
Set @NewIdentity = Scope_Identity()

END|||How you call this stored procedure from your application code depends upon your programming platform. I don't code interfaces, so I can't help you with that. But I'd have a hard time believing that any respectable programming language would not have a method of calling a stored procedure and retrieving an output parameter.
I suggest you ask about this on a forum specific to your development platform.|||Are you sure this works properly? I would have thought that you would have to set the value of @NewIdentity before you used it in the INSERT statements for the Aerobics and Resistance tables.

INSERT Attendance (MembershipNo, Date)
VALUES (@MembershipNo, @Date)

INSERT Aerobics (AttendanceID, MachineID, Intensity, ExerciseTime)
VALUES (@NewIdentity, @MachineID, @Intensity, @ExerciseTime)

INSERT Resistance (AttendanceID, Reps, Sets, Weight)
VALUES (@NewIdentity, @Reps, @Sets, @Weight)
Set @NewIdentity = Scope_Identity()

END
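
A hedged sketch of the fix that reply suggests: capture SCOPE_IDENTITY() immediately after the Attendance insert, before the child inserts need it (otherwise the same assumptions as the posted procedure):

ALTER PROCEDURE InsertAttendance
@NewIdentity int output,
@MembershipNo smallint,
@Date datetime,
@Reps smallint,
@Sets smallint,
@Weight smallint,
@MachineID smallint,
@Intensity smallint,
@ExerciseTime int
AS
BEGIN
INSERT Attendance (MembershipNo, Date)
VALUES (@MembershipNo, @Date)

-- Capture the new AttendanceID before the child inserts use it
SET @NewIdentity = Scope_Identity()

INSERT Aerobics (AttendanceID, MachineID, Intensity, ExerciseTime)
VALUES (@NewIdentity, @MachineID, @Intensity, @ExerciseTime)

INSERT Resistance (AttendanceID, Reps, Sets, Weight)
VALUES (@NewIdentity, @Reps, @Sets, @Weight)
END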

inserting to multiple fields using a button

I have multiple textboxes on a page. How do I make them insert their values into multiple fields in multiple tables using a button?

You can just execute some SqlCommands to insert data to different tables in the Click event of a button, for example:

protected void Button2_Click(object sender, EventArgs e)
{
using (SqlConnection conn = new SqlConnection(@"Data Source=labsh96223\iori2000;Integrated Security=SSPI;Database=tempdb"))
{
conn.Open();
string insertSql = @"Insert into myTbl_1 select @id, @name";
SqlCommand myCommand = new SqlCommand(insertSql, conn);
myCommand.Parameters.Add("@id", SqlDbType.Int);
myCommand.Parameters.Add("@name", SqlDbType.VarChar, 100);
myCommand.Parameters["@id"].Value = Int32.Parse(TextBox1.Text);
myCommand.Parameters["@name"].Value = TextBox2.Text;
int i = myCommand.ExecuteNonQuery();
myCommand.CommandText = @"Insert into myTbl_2 select @id, @Description";
myCommand.Parameters.Clear();
myCommand.Parameters.Add("@id", SqlDbType.Int);
myCommand.Parameters.Add("@Description", SqlDbType.VarChar, 1000);
myCommand.Parameters["@id"].Value = Int32.Parse(TextBox3.Text);
myCommand.Parameters["@Description"].Value = TextBox4.Text;  // was "@name"; the cleared collection only contains @id and @Description
i = myCommand.ExecuteNonQuery();
//execute other commands
}
}
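
If the two inserts should succeed or fail together, the statements could also be sent as one transactional batch in a single parameterized command; a hedged T-SQL sketch using the same assumed tables and parameters:

BEGIN TRANSACTION
INSERT INTO myTbl_1 SELECT @id, @name
INSERT INTO myTbl_2 SELECT @id, @Description
COMMIT TRANSACTION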