Showing posts with label insertion. Show all posts

Friday, March 9, 2012

Insertion with single quotes problem

I have a problem with inserting a string with single quotes. For instance,

string testme = "we don't have anything";

insert into tableone (buff) values ("'" + testme + "'");

I get an error on the word "don't" because of the single quote. But if I delete the single quote ("dont") then it is okay. Is it a bug in SQL Server 2005? Please help. Thanks.

blumonde

The best bet is to use parameters. Building the SQL String as you are can cause all sorts of problems.

Absent that, you need to "escape" the apostrophe. So, do the following...

string testme = "we don''t have anything";

Note there are 2 apostrophes in a row...
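Douglas's "use parameters" advice deserves a concrete sketch. The thread's code is ADO.NET/C#, but as an illustration (not the poster's actual stack) the same idea looks like this with Python's built-in sqlite3 driver: hand the value to the driver through a placeholder, and the apostrophe never needs escaping.

```python
import sqlite3

# In-memory stand-in for the poster's tableone.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableone (buff TEXT)")

testme = "we don't have anything"

# The ? placeholder binds the value outside the SQL text, so the
# apostrophe in "don't" cannot break the statement.
conn.execute("INSERT INTO tableone (buff) VALUES (?)", (testme,))

row = conn.execute("SELECT buff FROM tableone").fetchone()
print(row[0])  # we don't have anything
```

In ADO.NET the equivalent is a named placeholder such as @buff in the command text plus a matching entry in SqlCommand.Parameters.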

|||

douglas.reilly:

The best bet is to use parameters. Building the SQL String as you are can cause all sorts of problems.

Absent that, you need to "escape" the apostrophe. So, do the following...

string testme = "we don''t have anything";

Note there are 2 apostrophes in a row...

Hi Douglas,

The problem is that end-users write those statements and hit insert. And they don't type two apostrophes. I can't "escape" it. Thanks.

blumonde

|||

You then have two choices.

1. Use parameters.

2. Search the string for ' and replace it with '' (two apostrophes) in your code before building the SQL; of course users are not going to type two apostrophes themselves.
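Option 2 can be sketched in a few lines; shown here in Python purely for illustration (the poster is on ADO.NET), and note that parameters remain the safer choice, since hand-escaping only covers the apostrophe case.

```python
def escape_apostrophes(value: str) -> str:
    """Double each apostrophe so it survives inside a quoted SQL literal."""
    return value.replace("'", "''")

testme = "we don't have anything"
sql = "insert into tableone (buff) values ('" + escape_apostrophes(testme) + "')"
print(sql)  # insert into tableone (buff) values ('we don''t have anything')
```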

|||

douglas.reilly:

You then have two choices.

1. Use parameters.

2. Search the string for ' and replace it with '' (two apostrophes) in your code before building the SQL; of course users are not going to type two apostrophes themselves.

I will try using parameters first. Thanks.

blumonde

|||

Parameters did it for me. Thanks.

blumonde

Insertion With Explicit IDENTITY

Hi,
I'm writing a database interface to someone else's code and am getting passed INSERT statements that look something like this:

INSERT INTO my_table(my_id, my_val)
VALUES (%s, 5)

My code is expected to replace the %s with a suitable value for an ID. Now, my_table looks like this:

CREATE TABLE my_table (
my_id INT IDENTITY(1,1) PRIMARY KEY,
my_val INT
)

and I'm trying to get a suitable value for the '%s' such that the insert will work correctly. I understand that I need to 'SET IDENTITY_INSERT my_table ON' prior to the insert, but what can I replace '%s' with? I can't put a separate SELECT statement in there, and @@IDENTITY is too global to be useful (it's a multithreaded app with a lot of inserts across multiple tables). Hacking the input string to remove the 'my_id, ' and '%s, ' completely is not allowed (unfortunately).

I've tried NULL in the hope that SQLServer will work it out but it complains. I don't want to do a 'SELECT IDENT_CURRENT('my_table')' prior to the INSERT due to the overhead and potential concurrency problems. Is there some special keyword or similar I can put in here?

DB is SQLServer2000 SP2. Any help is greatly appreciated.

Cheers,
Jim.

Sounds like you have an application that was designed to run against different vendor databases OR someone has missed the design boat. I don't understand why you would define a table with an identity attribute AND have a requirement that you pass a value for the identity column.

To my knowledge there is no way to pass a placeholder in the fashion you have described.|||Originally posted by Paul Young
Sounds like you have an application that was designed to run against different vendor databases OR someone has missed the design boat. I don't understand why you would define a table with an identity attribute AND have a requirement that you pass a value for the identity column.

The reason is that the app talks to a number of different back-end databases and they all have different ways of doing an insert on a table with a unique ID. For instance, Postgres expects to have nextval('my_table_f_id_seq') or similar in place of the %s. I'm just stuck (really stuck, as it turns out) with coding to the supplied spec.

Thanks for the reply.|||SET IDENTITY_INSERT my_table ON

INSERT INTO my_table(my_id,my_val)
VALUES (IDENT_CURRENT('my_table') + 1, 5)

SET IDENTITY_INSERT my_table OFF

You may want to replace the + 1 with a call to IDENT_INCR('my_table').

Tim S|||Tim, that works a treat. Thanks!
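Tim's trick (computing the next identity value inline in the INSERT) can be mimicked outside SQL Server; the sketch below uses Python's sqlite3 for illustration, with COALESCE(MAX(id), 0) + 1 standing in for IDENT_CURRENT + IDENT_INCR. Like the original, it is not safe under concurrent writers, which is worth remembering given Jim's multithreaded app.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (my_id INTEGER PRIMARY KEY, my_val INTEGER)")

# Compute the id inline, the way IDENT_CURRENT('my_table') + 1 does in
# Tim's answer. Two concurrent sessions could pick the same id.
next_id_insert = (
    "INSERT INTO my_table (my_id, my_val) "
    "VALUES ((SELECT COALESCE(MAX(my_id), 0) + 1 FROM my_table), ?)"
)
conn.execute(next_id_insert, (5,))
conn.execute(next_id_insert, (7,))

rows = conn.execute("SELECT my_id, my_val FROM my_table ORDER BY my_id").fetchall()
print(rows)  # [(1, 5), (2, 7)]
```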

insertion scheduled job in SQL

Hi guys,
i want to make a scheduled job that inserts 10 user-defined rows each minute, continuously, in Enterprise Manager.

I'd write a stored procedure to do the insertion and then use SQL Agent to schedule it to run every minute.
But, what do you mean by "10 user-defined rows"?
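A sketch of that suggestion in T-SQL, assuming a stored procedure named dbo.InsertTenRows already wraps the ten user-defined INSERT statements (the procedure and database names here are made up for illustration; check the msdb job procedures' parameters against Books Online before relying on them):

```sql
-- Assumes dbo.InsertTenRows in database mydb performs the ten inserts.
USE msdb;
EXEC sp_add_job @job_name = N'InsertTenRowsEveryMinute';
EXEC sp_add_jobstep
    @job_name  = N'InsertTenRowsEveryMinute',
    @step_name = N'Run insert proc',
    @subsystem = N'TSQL',
    @command   = N'EXEC mydb.dbo.InsertTenRows;';
EXEC sp_add_jobschedule
    @job_name = N'InsertTenRowsEveryMinute',
    @name     = N'Every minute',
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,       -- subday units are minutes
    @freq_subday_interval = 1;   -- every 1 minute
EXEC sp_add_jobserver @job_name = N'InsertTenRowsEveryMinute';
```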

Insertion of Data

I am using VS 2005 and SQL Server 2000. My problem is I have 5 checkboxes and some textboxes. The user selects the checkboxes and textboxes dynamically: one, two, or three of them, or, when the user selects the header checkbox, all checkboxes are selected. In my database I made a single column for the group of checkboxes. So how should I insert the data into the database?

dim con as new sqlconnection("userid=sa;pwd=; initial catalog=test")
dim cmd as new sqlcommand
Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click
cmd.CommandText = "insert into check values(" ... what should I write here? ... "'" & text.text & "','" & text2.text & "'" ... ")"
con.open()
cmd.executenonquery()
con.close()
End Sub

Your insert will majorly depend on the structure of the database... how are you storing it in the database, as bits or as strings? Be a little more elaborate about the database structure.|||

It looks like you have one column for the check boxes, and then one column each for all of the text fields? This depends on what you want to put in the checkbox column, but if you are inserting text to a field for each of the textbox columns and those correspond to a certain checkbox, you will be able to search your table for which textboxes have text for an entry and decide from that which boxes were checked, so you might even be able to just use the checkbox column as your identity (primary key). i.e., data structure like so:

Columns: checkbox / text1 / text2 / text3 / text4

Data: "some number" / "" / "some text" / "" / "some text"

If you were to query that data row above, you would be able to see that the check boxes in question are the ones without null text field rows. However, if you allow null text values you'd have to re-think this method a bit.

Anyway, the checkbox field can be whatever you want, and that will just depend on whatever logic you come up with. You could do it the long way and make a reference table that holds a unique value for each combination of textboxes, but this will get long and messy. Using 1 column to signify a checkbox group might not be the best way to do things, but that's not up to me since it's your data. Anyway, with that setup an insert query something like this might work for you:

"INSERT INTO check (checkBoxField, TextField1, TextField2, TextField3...) VALUES (" & checkboxNumber & ", '" & text1.text & "', '" & text2.text & "', '" & text3.text & "')"
...where *check* is the table name, and checkboxNumber is whatever you decide to put in the checkbox field.
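One concrete way to keep the "single column for the group of checkboxes" is a bitmask; the sketch below is Python with sqlite3 rather than the poster's VB.NET (an illustrative assumption), and it also binds the values as parameters so the text fields need no hand-quoting.

```python
import sqlite3

def checkbox_mask(states):
    """Pack a list of booleans into one integer; bit i represents checkbox i."""
    mask = 0
    for i, checked in enumerate(states):
        if checked:
            mask |= 1 << i
    return mask

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checks (mask INTEGER, text1 TEXT, text2 TEXT)")

# First and third boxes checked.
states = [True, False, True, False, False]
conn.execute(
    "INSERT INTO checks (mask, text1, text2) VALUES (?, ?, ?)",
    (checkbox_mask(states), "first box's text", "third box's text"),
)

mask = conn.execute("SELECT mask FROM checks").fetchone()[0]
print(mask, [bool(mask & (1 << i)) for i in range(5)])
# 5 [True, False, True, False, False]
```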

insertion into sql server table failed

Hi ,
At SQL server's end , i have put my windows id as a readonly for that
database.
however from MS Access when i specify the ODBC i used the sa id & pwd
from Ms Access , i linked to the SQL server db using ODBC connection ,when i
tried to insert some data into SQL server i was not allowed
why is that so as i was using sa pwd from the odbc connection
Could anyone kindly advise ?
tks & rdgs

Don't think it has much to do with the security context -
more likely related to how the tables are linked or an ODBC
issue depending on the details of the error you receive.
Make sure you have defined a primary key or unique
identifier for the tables you are linking.
-Sue
On Tue, 6 Sep 2005 18:10:03 -0700, "maxzsim"
<maxzsim@.discussions.microsoft.com> wrote:
>Hi ,
> At SQL server's end , i have put my windows id as a readonly for that
>database.
> however from MS Access when i specify the ODBC i used the sa id & pwd
>from Ms Access , i linked to the SQL server db using ODBC connection ,when i
>tried to insert some data into SQL server i was not allowed
> why is that so as i was using sa pwd from the odbc connection
> Could anyone kindly advise ?
>tks & rdgs|||Hi
From the local table in MS Access, I have added a primary key and tried
inserting it into the table in SQL Server in Production, but I have got a key
violation error.
However, when I insert into a local SQL Server it's OK.
tks & rdgs
"Sue Hoegemeier" wrote:
> Don't think it has much to do with the security context -
> more likely related to how the tables are linked or an ODBC
> issue depending on the details of the error you receive.
> Make sure you have defined a primary key or unique
> identifier for the tables you are linking.
> -Sue
> On Tue, 6 Sep 2005 18:10:03 -0700, "maxzsim"
> <maxzsim@.discussions.microsoft.com> wrote:
> >Hi ,
> >
> > At SQL server's end , i have put my windows id as a readonly for that
> >database.
> >
> > however from MS Access when i specify the ODBC i used the sa id & pwd
> >
> >from Ms Access , i linked to the SQL server db using ODBC connection ,when i
> >tried to insert some data into SQL server i was not allowed
> >
> > why is that so as i was using sa pwd from the odbc connection
> >
> > Could anyone kindly advise ?
> >
> >tks & rdgs
>

Insertion failed

I have a service that writes a lot of records to many
tables in SQL server. Recently, the service failed in
writing record to any of the tables. But, the retrieving
of the records is still OK. By examining the application
log file, it seems that the SQL server started to
deteriorate by not allowing insertion in one table at a
time and eventually all tables are rejected for insertion.
I ended up re-booting the system and everything went back
to normal. After the reboot, I checked the SQL log file
and noticed that about 18,000 records were rolled forward
during SQL Server start-up.
Does anybody have any idea as why the insertion would
fail? And why the checkpoint was not executed even there
are about 18,000 committed transactions?

Mike,
Perhaps the database space filled up as one table attempted to grow again.
Tables that still had some space left in them could take a few more rows,
then they were filled up as well and would no longer take any more rows.
Etc. Etc.
If this is the case, either allocate more space or turn on autogrow for the
database in question.
With regard to your closing comment, checkpoint is not equal to a commit.
Without knowing the period covered by the 18,000 records and something more
about your server it is not possible to offer details. However, I would
think that checkpointing was working just fine.
Russell Fields
"mike" <mike@.ftsl.com> wrote in message
news:5c0401c3e5e3$cb4094d0$a101280a@.phx.gbl...
quote:

> I have a service that writes a lot of records to many
> tables in SQL server. Recently, the service failed in
> writing record to any of the tables. But, the retrieving
> of the records is still OK. By examining the application
> log file, it seems that the SQL server started to
> deteriorate by not allowing insertion in one table at a
> time and eventually all tables are rejected for insertion.
> I ended up re-booting the system and everything went back
> to normal. After the reboot, I checked the SQL log file
> and noticed that about 18,000 records were rolled forward
> during SQL Server start-up.
> Does anybody have any idea as why the insertion would
> fail? And why the checkpoint was not executed even there
> are about 18,000 committed transactions?
>
|||Russell - Thank you for your reply.
I was also thinking about the possible disk space problem
but I don't think it's the case here. There was about 10G
left in the disk when this happened. At the same time, the
database files (data & transaction) are set to auto-grow.
If disk space was the problem, I would think rebooting the
server couldn't solve the problem. Based on my
calculation, in a 5-minute period, probably 3,500 records would
have been inserted into the table.
|||Mike,
Sorry that I do not have a better idea.
We had problems at one time when another process besides SQL Server was
using the same disk. It wrote LOTS of data into an operating system file
eating up the disk, then either deleted it again or ran out of space itself,
causing the file write to abort. Since you have 10 GB free, then it is hard
to believe that a similar thing is happening to you.
Regarding checkpoints, this is set using the "recovery interval" option.
You might read up on this and discover that the timing of checkpoints is
more complicated than you would have thought.
Russell Fields
"mike" <mike@.ftsl.com> wrote in message
news:635a01c3e5ef$46160ae0$a501280a@.phx.gbl...
> Russell - Thank you for your reply.
> I was also thinking about the possible disk space problem
> but I don't think it's the case here. There was about 10G
> left in the disk when this happened. At the same time, the
> database files (data & transaction) are set to auto-grow.
> If disk space was the problem, I would think rebooting the
> server couldn't solve the problem. Based on my
> calculation, in 5 mins period, properly 3500 record would
> have been inserted to the table.
|||Hi Mike,
Thank you for using the newsgroup and it is my pleasure to help you with
you issue.
As I understand your problem: you were inserting a large number of
records into many tables, but the inserts were rejected table by table.
You rebooted the system and everything seemed OK again, and you found in
the SQL log that SQL Server rolled the inserts forward during startup.
You wonder why the inserts failed and where the roll forward came from,
right?
Let me explain the roll forward first. The log records for a
transaction (either a transaction...commit structure or a single T-SQL
statement) are written to disk before the commit acknowledgement is sent to
the client process, but the actual changed data might not have been
physically written out to the data pages. That is, writes to data pages
need only be posted to the operating system, and SQL Server can check later
to see that they were completed. They don't have to complete immediately
because the log contains all the information needed to redo the work, even
in the event of a power failure or system crash before the write completes.
When you reboot your system and start SQL Server, recovery performs both
redo (rollforward) and undo (rollback) operations. In a redo operation, the
log is examined and each change is verified as being already reflected in
the database. (After a redo, every change made by the transaction is
guaranteed to have been applied.) If the change doesn't appear in the
database, it is again performed from the information in the log. Every
database page has an LSN in the page header that uniquely identifies it, by
version, as rows on the page are changed over time. This page LSN reflects
the location in the transaction log of the last log entry that modified a
row on this page. During a redo operation of transactions, the LSN of each
log record is compared to the page LSN of the data page that the log entry
modified; if the page LSN is less than the log LSN, the operation indicated
in the log entry should be redone, that is, rolled forward. As for
the 18,000 records inserted during the SQL Server start process, they are
the modified records that had been recorded in the transaction log but had
not yet been written to the database.
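The redo rule described in the paragraph above boils down to one comparison per log record; here is a toy model in Python (not SQL Server's actual recovery code) of deciding which logged changes get rolled forward:

```python
# Replay a log record only if the data page has not yet absorbed it,
# i.e. the page-header LSN is older than the log record's LSN.
def needs_redo(page_lsn: int, log_lsn: int) -> bool:
    return page_lsn < log_lsn

pages = {"p1": 100, "p2": 205}             # page-header LSNs found on disk
log = [("p1", 150, "insert row A"),         # (page, log record LSN, change)
       ("p2", 200, "insert row B"),
       ("p1", 90,  "an older change")]

replayed = [change for page, lsn, change in log if needs_redo(pages[page], lsn)]
print(replayed)  # ['insert row A']
```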
When you commit the transaction, the data modifications have been made a
permanent part of the database; no roll forward or roll back is needed.
A checkpoint flushes dirty data and log pages from the buffer cache of
the current database, minimizing the number of modifications that have to
be rolled forward during a recovery. So those 18,000 inserts were active
operations that had been recorded in the log file at or after the last
checkpoint. For more information on checkpoints, when they occur, and what
SQL Server does when a checkpoint comes, please refer to 'checkpoint' in
the SQL Server Books Online. You could also refer to 'recovery model'.
As for your question of how the inserts were rejected table by table, it
would be very helpful if you could provide the detailed error message, the
error log, and the T-SQL you were executing at that time for further
analysis. Thanks.
Best regards
Baisong Wei
Microsoft Online Support
----
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only. Thanks.
I have a service that writes a lot of records to many
tables in SQL server. Recently, the service failed in
writing record to any of the tables. But, the retrieving
of the records is still OK. By examining the application
log file, it seems that the SQL server started to
deteriorate by not allowing insertion in one table at a
time and eventually all tables are rejected for insertion.
I ended up re-booting the system and everything went back
to normal. After the reboot, I checked the SQL log file
and noticed that about 18,000 records were rolled forward
during SQL Server start-up.
Does anybody have any idea as why the insertion would
fail? And why the checkpoint was not executed even there
are about 18,000 committed transactions?|||Hi Baisong,
Thank you for your reply and I understand the description
provided. Here is more background regarding to my problem:
1. We use ADO command object to perform single T-SQL
statement with a time-out value set to one minute.
2. On average, we perform approximately 36,960 T-SQL
statements per hour.
3. Database and transaction files are set to auto-grow at
a rate of 10%.
4. Unfortunately, we don't know what error message was
generated by SQL Server when the records failed to be
written.
5. All the non-SQL processes appeared to be running
normally at the time of the restart. This information was
gathered using Windows Task Manager.
6. No rolled-forward information was logged in SQL Server
log files in previous restarts.
7. Prior to the restart, new information could not be
inserted into the database, but old records could be retrieved.
8. The largest table in the database had 25,000,000
records at the time of the problem.
Question:
What could be happening on the server that would result in
1) the failed inserts, and 2) the roll forward of 18,000
records at restart?|||Hi Mike,
Thank you for using the newsgroup and it is my pleasure to help you with
you issue.
In your first post, you mentioned that 'By examining the application log
file, it seems that the SQL server started to deteriorate by not allowing
insertion in one table at a time and eventually all tables are rejected for
insertion.' In your last post, you mentioned that 'we don't know what
error message was generated by SQL Server when the records failed to be
written'. I wonder if you could provide the part of the log from which you
made this judgement: what made you think the inserts were rejected when no
error message indicated anything abnormal? Any blocking
locks or any information from your application side? This information is
helpful for our analysis. Second, as I mentioned in my last post, when an
operation is executed it is written to the log but not necessarily to the
.MDF data file; when a checkpoint comes, the changes are written to the
.MDF file. I think the 18,000 records' inserts had been written to the log
file but not to the .MDF file, so when SQL Server restarted, it compared
the LSNs of the pages and the log to decide whether to roll forward or
roll back; when there is no difference, no such action is taken during SQL
Server startup, as in your past restarts. Besides the log message, could
you please provide the recovery mode of your database?
As for the ADO time-out option, I suggest you set it to zero, which
represents no time-out limit.
Looking forward to your reply, and thanks.
Best regards
Baisong Wei
Microsoft Online Support
----
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only. Thanks.|||Baisong,
What I meant by 'By examining the application log
file,...' is checking the SQL server tables that the
application is writing to. All the written records
contain a time stamp; therefore I can tell when the
insertion started to fail on these tables.
'we don't know what the error message was generated by SQL
Server when the records failed' simply means that the
application did not log any error returned by the ADO
command object.
The recovery mode that the database uses is 'Full Recovery
Mode'
Mike

Insertion failed

I have a service that writes a lot of records to many
tables in SQL Server. Recently, the service failed to
write records to any of the tables, although retrieving
records still worked. By examining the application
log file, it seems that the SQL Server started to
deteriorate by refusing inserts into one table at a
time, until eventually all tables rejected insertion.
I ended up re-booting the system and everything went back
to normal. After the reboot, I checked the SQL log file
and noticed that about 18,000 records were rolled forward
during SQL Server start-up.
Does anybody have any idea why the insertion would
fail? And why was no checkpoint executed even though there
were about 18,000 committed transactions?|||Mike,
Perhaps the database space filled up as one table attempted to grow again.
Tables that still had some space left in them could take a few more rows,
then they were filled up as well and would no longer take any more rows.
Etc. Etc.
If this is the case, either allocate more space or turn on autogrow for the
database in question.
With regard to your closing comment, checkpoint is not equal to a commit.
Without knowing the period covered by the 18,000 records and something more
about your server it is not possible to offer details. However, I would
think that checkpointing was working just fine.
Russell Fields
"mike" <mike@.ftsl.com> wrote in message
news:5c0401c3e5e3$cb4094d0$a101280a@.phx.gbl...
> I have a service that writes a lot of records to many
> tables in SQL server. Recently, the service failed in
> writing record to any of the tables. But, the retrieving
> of the records is still OK. By examining the application
> log file, it seems that the SQL server started to
> deteriorate by not allowing insertion in one table at a
> time and eventually all tables are rejected for insertion.
> I ended up re-booting the system and everything went back
> to normal. After the reboot, I checked the SQL log file
> and noticed that about 18,000 records were rolled forward
> during SQL Server start-up.
> Does anybody have any idea as why the insertion would
> fail? And why the checkpoint was not executed even there
> are about 18,000 committed transactions?
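Russell's point that "checkpoint is not equal to a commit" is the crux of the 18,000-record question. The following toy model (plain Python, not SQL Server internals; all names are illustrative) sketches the write-ahead idea: a committed change is hardened in the log immediately, while the data file only catches up at the next checkpoint.

```python
# Toy write-ahead-logging model (an illustration, not SQL Server internals):
# a commit hardens the log record at once; the changed data page is only
# flushed to the data file later, when a checkpoint runs.
log = []           # the transaction log (always written first)
data_file = {}     # the on-disk data pages
buffer_pool = {}   # dirty in-memory pages

def insert(key, value):
    log.append(("insert", key, value))  # write-ahead: log before data
    buffer_pool[key] = value            # page changed in memory only

def checkpoint():
    data_file.update(buffer_pool)       # flush dirty pages to the data file

insert("row1", "a")
insert("row2", "b")
# Both inserts are now committed (present in the log) but absent from the
# data file; a crash at this point is repaired by rolling the log forward.
```

A crash between the inserts and the next checkpoint is exactly the situation in which startup recovery has committed work to roll forward.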
>|||Russell - Thank you for your reply.
I was also thinking about a possible disk space problem,
but I don't think that's the case here. There was about 10 GB
free on the disk when this happened. At the same time, the
database files (data & transaction log) are set to auto-grow.
If disk space were the problem, I wouldn't expect rebooting the
server to solve it. Based on my calculations, roughly 3,500 records
would have been inserted into the table in a 5-minute period.
>--Original Message--
>Mike,
>Perhaps the database space filled up as one table attempted to grow again.
>Tables that still had some space left in them could take a few more rows,
>then they were filled up as well and would no longer take any more rows.
>Etc. Etc.
>If this is the case, either allocate more space or turn on autogrow for the
>database in question.
>With regard to your closing comment, checkpoint is not equal to a commit.
>Without knowing the period covered by the 18,000 records and something more
>about your server it is not possible to offer details. However, I would
>think that checkpointing was working just fine.
>Russell Fields
>"mike" <mike@.ftsl.com> wrote in message
>news:5c0401c3e5e3$cb4094d0$a101280a@.phx.gbl...
>> I have a service that writes a lot of records to many
>> tables in SQL server. Recently, the service failed in
>> writing record to any of the tables. But, the retrieving
>> of the records is still OK. By examining the application
>> log file, it seems that the SQL server started to
>> deteriorate by not allowing insertion in one table at a
>> time and eventually all tables are rejected for insertion.
>> I ended up re-booting the system and everything went back
>> to normal. After the reboot, I checked the SQL log file
>> and noticed that about 18,000 records were rolled forward
>> during SQL Server start-up.
>> Does anybody have any idea as why the insertion would
>> fail? And why the checkpoint was not executed even there
>> are about 18,000 committed transactions?
>
>.
>|||Mike,
Sorry that I do not have a better idea.
We had problems at one time when another process besides SQL Server was
using the same disk. It wrote LOTS of data into an operating system file
eating up the disk, then either deleted it again or ran out of space itself,
causing the file write to abort. Since you have 10 GB free, then it is hard
to believe that a similar thing is happening to you.
Regarding checkpoints, this is set using the "recovery interval" option.
You might read up on this and discover that the timing of checkpoints is
more complicated than you would have thought.
Russell Fields
"mike" <mike@.ftsl.com> wrote in message
news:635a01c3e5ef$46160ae0$a501280a@.phx.gbl...
> Russell - Thank you for your reply.
> I was also thinking about the possible disk space problem
> but I don't think it's the case here. There was about 10G
> left in the disk when this happened. At the same time, the
> database files (data & transaction) are set to auto-grow.
> If disk space was the problem, I would think rebooting the
> server couldn't solve the problem. Based on my
> calculation, in 5 mins period, properly 3500 record would
> have been inserted to the table.
> >--Original Message--
> >Mike,
> >
> >Perhaps the database space filled up as one table attempted to grow again.
> >Tables that still had some space left in them could take a few more rows,
> >then they were filled up as well and would no longer take any more rows.
> >Etc. Etc.
> >
> >If this is the case, either allocate more space or turn on autogrow for the
> >database in question.
> >
> >With regard to your closing comment, checkpoint is not equal to a commit.
> >Without knowing the period covered by the 18,000 records and something more
> >about your server it is not possible to offer details. However, I would
> >think that checkpointing was working just fine.
> >
> >Russell Fields
> >"mike" <mike@.ftsl.com> wrote in message
> >news:5c0401c3e5e3$cb4094d0$a101280a@.phx.gbl...
> >> I have a service that writes a lot of records to many
> >> tables in SQL server. Recently, the service failed in
> >> writing record to any of the tables. But, the retrieving
> >> of the records is still OK. By examining the application
> >> log file, it seems that the SQL server started to
> >> deteriorate by not allowing insertion in one table at a
> >> time and eventually all tables are rejected for insertion.
> >> I ended up re-booting the system and everything went back
> >> to normal. After the reboot, I checked the SQL log file
> >> and noticed that about 18,000 records were rolled forward
> >> during SQL Server start-up.
> >>
> >> Does anybody have any idea as why the insertion would
> >> fail? And why the checkpoint was not executed even there
> >> are about 18,000 committed transactions?
> >>
> >
> >
> >.
> >|||Hi Mike,
Thank you for using the newsgroup; it is my pleasure to help you with
your issue.
As I understand your problem: while inserting a large number of records
into many tables, the inserts started failing, table by table. You
rebooted the system and everything seemed OK again, and you then found in
the SQL log that SQL Server rolled forward many inserts during startup.
You wonder why the inserts failed and where the roll-forward came from,
right?
Let me explain the roll-forward first. The log records for a transaction
(whether an explicit BEGIN TRANSACTION ... COMMIT block or a single T-SQL
statement) are written to disk before the commit acknowledgement is sent
to the client process, but the actual changed data might not yet have been
physically written to the data pages. That is, writes to data pages need
only be posted to the operating system, and SQL Server can check later
that they completed. They don't have to complete immediately, because the
log contains all the information needed to redo the work, even in the
event of a power failure or system crash before the write completes.
When you reboot the system and start SQL Server, recovery performs both
redo (roll-forward) and undo (rollback) operations. In a redo operation,
the log is examined and each change is verified as already being reflected
in the database. (After a redo, every change made by the transaction is
guaranteed to have been applied.) If a change doesn't appear in the
database, it is performed again from the information in the log. Every
database page has an LSN in its page header that uniquely identifies it,
by version, as rows on the page change over time. This page LSN reflects
the location in the transaction log of the last log entry that modified a
row on the page. During a redo operation, the LSN of each log record is
compared to the page LSN of the data page that the log entry modified; if
the page LSN is less than the log LSN, the operation indicated in the log
entry must be redone, that is, rolled forward. The 18,000 inserts applied
during SQL Server startup were modifications that had been recorded in the
transaction log but had not yet been written to the data file.
When you commit a transaction, the data modifications are made a permanent
part of the database; no roll-forward or rollback is then needed.
As for the checkpoint: it flushes dirty data and log pages from the buffer
cache of the current database, minimizing the number of modifications that
have to be rolled forward during a recovery. So those 18,000 inserts were
active operations recorded in the transaction log after the last
checkpoint occurred. For more information about checkpoints, when they
occur, and what SQL Server does when one comes, please refer to
'Checkpoints' in the SQL Server Books Online.
As for your question of how inserts were rejected by the tables one by
one, it would be very helpful if you could provide the detailed error
message, the error log, and the T-SQL you were executing at the time for
further analysis. Thanks.
Best regards
Baisong Wei
Microsoft Online Support
----
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only. Thanks.
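The LSN comparison described above can be reduced to a one-line predicate. Here is a toy version (plain Python, not SQL Server code; the LSN values are hypothetical) of the redo decision made for each log record during recovery.

```python
# Toy version of the redo decision described above (not SQL Server code):
# each page header stores the LSN of the last log record applied to it, and
# during recovery a log record is redone only if the page has not seen it.
def needs_redo(page_lsn: int, log_record_lsn: int) -> bool:
    return page_lsn < log_record_lsn

page_lsn = 100  # page last hardened at LSN 100 (hypothetical number)
# Log records 95..109 are replayed against this page; only those newer than
# the page's LSN (101..109) are rolled forward, the rest are skipped.
to_redo = [lsn for lsn in range(95, 110) if needs_redo(page_lsn, lsn)]
```

In Mike's case the 18,000 rolled-forward records are simply the log entries whose LSNs were newer than the LSNs of the pages on disk at restart.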
|||Hi Baisong,
Thank you for your reply; I understand the description
you provided. Here is more background on my problem:
1. We use an ADO command object to execute single T-SQL
statements, with a time-out value set to one minute.
2. On average, we execute approximately 36,960 T-SQL
statements per hour.
3. The database and transaction log files are set to auto-grow
at a rate of 10%.
4. Unfortunately, we don't know what error message was
generated by SQL Server when the records failed to be
written.
5. All the non-SQL processes appeared to be running
normally at the time of the restart. This information was
gathered using Windows Task Manager.
6. No rolled-forward information was logged in the SQL Server
log files during previous restarts.
7. Prior to the restart, new information could not be
inserted into the database, but old records could be retrieved.
8. The largest table in the database had 25,000,000
records at the time of the problem.
Question:
What could be happening on the server that would result in
1) failed inserts, and 2) 18,000 records rolled forward
at restart?|||Hi Mike,
Thank you for using the newsgroup; it is my pleasure to help you with
your issue.
In your first post you mentioned that 'By examining the application log
file, it seems that the SQL server started to deteriorate by not allowing
insertion in one table at a time and eventually all tables are rejected
for insertion,' yet in your last post you mentioned that 'we don't know
what the error message was generated by SQL Server when the records failed
to be written.' Could you provide the part of the log on which you based
that judgement? What made you conclude that insertions were being rejected
when there was no error message indicating anything abnormal? Was there
any lock blocking, or any other information from your application side?
That information would help our analysis. Second, as I mentioned in my
last post, when an operation executes it is written to the transaction log
but not necessarily to the data file; the changed pages are flushed to the
data file when a checkpoint occurs. I believe the 18,000 inserts had been
written to the log file but not yet to the data file, so when SQL Server
restarted it compared the LSN of each page with the LSN in the log to
decide whether to roll forward or roll back. In your previous restarts
there was no difference to reconcile, so no such action was taken. Besides
the log messages, could you please also tell us the recovery model of your
database?
As for the ADO time-out option, I suggest setting it to zero, which means
no time-out limit.
Looking forward to your reply, and thanks.
Best regards
Baisong Wei
Microsoft Online Support
----
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only. Thanks.|||Baisong,
What I meant by 'examining the application log
file' is checking the SQL Server tables that the
application writes to. All the written records
contain a time stamp; therefore I can tell when the
insertions started to fail on these tables.
'We don't know what error message was generated by SQL
Server when the records failed' simply means that the
application did not log any error returned by the ADO
command object.
The database uses the Full recovery model.
Mike|||Hi Mike,
Thank you for your update.
Yes, from the time stamps in your tables indicating when each record was
inserted, we can tell when the inserting stopped. However, that only tells
us when the failure happened; only from the error log and other collected
information could we determine its cause. From the information I have so
far, I suspect there was blocking, and later deadlocks, that first stalled
insertion into one table and then caused all the inserts to fail. As I
mentioned in my previous reply, before the deadlocks occurred some inserts
succeeded and were recorded in the transaction log file, but not yet in
the data file. You rebooted the system before the next checkpoint, so SQL
Server also restarted; on startup it compared the LSN of each data page
with the log and decided what to roll forward and what to roll back. The
operations already in the transaction log, such as the 18,000 records in
your case, were rolled forward.
However, this is just an assumption based on my experience. To determine
what actually happened we need the corresponding error log for analysis.
If this problem happens again, you can collect useful information with the
following commands:
1) sp_lock: displays the active lock information
2) sp_who, sp_who2: display the current users and processes
You could also refer to 'Troubleshooting Deadlocks' in the SQL Server
Books Online.
Hope this helps!
Best regards
Baisong Wei
Microsoft Online Support
----
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only. Thanks.

Insertion data via Stored Procedure [URGENT]!

Hi all;

Question:
=======
Q1) How can I insert a record into a "parent" table and get back its ID (its PK, which is an Identity "auto count" column) via one stored procedure?

Q2) How can I insert a record into a "child" table whose FK is the PK of the "parent" table, via another stored procedure?

Example:
----
I have two tables, "Customer" and "CustomerDetails".

SP1: should insert all "Customer" data and return the value of the Identity column (I will use it later in SP2).

SP2: should insert all "CustomerDetails" data in the record whose ID (the returned value from SP1) matches the ID in the "Customer" table.

FYI:
--
MS SQL Server 2000
VS.NET EA 2003
Win XP SP1a
VB.NET/ASP.NET :)

Thanks in advance!|||There are a couple of ways to get the last inserted identity value, but I prefer to use IDENT_CURRENT('table_name') to get the ID from the last inserted record of the specified table.


INSERT INTO Customer (val1,val2,etc) VALUES (@Val1,@Val2,etc)

INSERT INTO CustomerDetails (ID,Val1,Val2, etc) VALUES (IDENT_CURRENT('Customer'),@Val1,@Val2,etc)
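The parent-then-child pattern can be sketched outside SQL Server too. Below is a minimal illustration using Python's sqlite3 (not SQL Server; `Cursor.lastrowid` plays roughly the role that SCOPE_IDENTITY() plays in T-SQL, and the table and column names are hypothetical).

```python
# Illustration only: sqlite3 stands in for SQL Server, and lastrowid plays
# roughly the role of SCOPE_IDENTITY(). All names here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer "
             "(ID INTEGER PRIMARY KEY AUTOINCREMENT, Name TEXT)")
conn.execute("CREATE TABLE CustomerDetails (CustomerID INTEGER, Note TEXT)")

# Insert the parent row and capture the identity value it was assigned.
cur = conn.execute("INSERT INTO Customer (Name) VALUES (?)", ("blumonde",))
new_id = cur.lastrowid

# Use that identity value as the foreign key for the child row.
conn.execute("INSERT INTO CustomerDetails (CustomerID, Note) VALUES (?, ?)",
             (new_id, "first order"))
conn.commit()
```

Note that in T-SQL, SCOPE_IDENTITY() is generally safer than IDENT_CURRENT(), since the latter reports the last identity generated for the table by any session, not just your own.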

|||OK, what if I want to do this process in two SPs?

I want to get the IDENTITY value from the first "INSERT INTO" statement, because I need this value in my source code as well as in other stored procedure(s).

Thanks in advance!|||Insert Blah...;
Return SCOPE_IDENTITY()|||Thanks, guys.

I think my problem is how to get the returned value (the IDENTITY value) in my VB.NET code (in other words, how to extract it in VB.NET/ADO.NET code)?

Note:
--
VB.NET/ADO.NET
or
C#.NET/ADO.NET

Either is OK, if you would like to demonstrate in your reply. (I want the answer!)

Thanks again.|||YES!!

The problem was from my ADO.NET part.

Thanks for your help, guys.

Insertion and Updates on 20.000.000 tuples table.

Hi,
I have a table with 20.000.000 tuples.
I have been monitoring the performance of inserts and updates,
and it does not convince me at all.
The table has 30 columns, 12 of which are calculated columns.

The tests that I did were these:

1. One insert with all the columns, computing the calculated columns inside the insert statement itself.

2. One insert with all the calculated columns precomputed into @vars.

3. One insert with only the basic fields, followed by 10 updates.

And the result was that the last test performed best.

What is your opinion?|||Andrix (elkpichico@.gmail.com) writes:
> I have a table with 20.000.000 of tuples.
> I have been monitoring the performance of the insertion and updates,
> but not convince me at all.
> The table have 30 columns, what and 12 of it, are calcultated column.
> The test that i do was this:
> 1 Insertion with all the columns and calculing the calcultated columns
> in the insertion sentence.
> 1 insertion and all the columns calculated in @.vars..
> 1 insertion with the basic fields, and 10 updates.
> And the result was that the last test was the most performant.
> What is your opinion?

That your posting is not very clear. I would take "calculated columns"
to mean "computed columns", but since you can't insert explicit values
in computed columns that does not really fit.

Why not post the code you used, so it's easier to understand what you
are talking about.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Hi.

The tests that I did were these:
select @PK = ....
INSERT INTO shared_calc
VALUES (@PK,10,20,223,'calculo trivial',...... ,@imp * @other, ...)

I mean that in the same INSERT statement, I do all the calculations
needed for the insert into the table.

The other was:
select @calc1 = @imp * @other
select @calc2 = @imp * @other - @umbral

select @PK = ....
INSERT INTO shared_calc
VALUES (@PK,10,20,223,'calculo trivial',...... ,@calc1,@calc2, ...)

and the last one was:
select @PK = ....
INSERT INTO shared_calc
VALUES (@PK,10,20,223,'calculo trivial',...... )

select @calc1 = @imp * @other
select @calc2 = @imp * @other - @umbral

update shared_calc_imp
set calc1 = @calc1
where pk = @PK

update shared_calc_imp
set calc2 = @calc2
where pk = @PK

thanks!

Andrix.

Erland Sommarskog wrote:
> Andrix (elkpichico@.gmail.com) writes:
> > I have a table with 20.000.000 of tuples.
> > I have been monitoring the performance of the insertion and updates,
> > but not convince me at all.
> > The table have 30 columns, what and 12 of it, are calcultated column.
> > The test that i do was this:
> > 1 Insertion with all the columns and calculing the calcultated columns
> > in the insertion sentence.
> > 1 insertion and all the columns calculated in @.vars..
> > 1 insertion with the basic fields, and 10 updates.
> > And the result was that the last test was the most performant.
> > What is your opinion?
> That your posting is not very clear. I would take "calculated columns"
> to mean "computed columns", but since you can't insert explicit values
> in computed columns that does not really fit.
> Why not post the code you used, so it's easier to understand what you
> are talking about.
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server 2005 at
> http://www.microsoft.com/technet/pr...oads/books.mspx
> Books Online for SQL Server 2000 at
> http://www.microsoft.com/sql/prodin...ions/books.mspx|||Do not store calculations in a table. You can do the calculations in a
VIEW, in the application or in computed columns (a proprietary
shorthand) for a VIEW. This will save you disk space of course. It
will also run faster, since reading from a disk is very slow compared
to math done in main storage. But the real benefit is data integrity,
which your approach will destroy.|||Andrix (elkpichico@.gmail.com) writes:
> The test that i do, were this:
> select @.PK = ....
> INSERT INTO shared_calc
> VALUES (@.PK,10,20,223,"calculo trivial",...... ,@.imp * @.ohter, ...)
> I mean that in the same Insert sentence, i do all the calcs to insert
> in the table.
> the other,
> was
> select @.cal1 = @.imp * @.other
> select @.cal2 = @.imp * @.other - @.umbral
> select @.PK = ....
> INSERT INTO shared_calc
> VALUES (@.PK,10,20,223,"calculo trivial",...... ,@.calc1,@.calc2, ...)
> and the last one was:
> select @.PK = ....
> INSERT INTO shared_calc
> VALUES (@.PK,10,20,223,"calculo trivial",...... )
> select @.cal1 = @.imp * @.other
> select @.cal2 = @.imp * @.other - @.umbral
> update shared_calc_imp
> set calc1 = @.calc1
> where pk = @.PK
> update shared_calc_imp
> set calc2 = @.calc2
> where pk = @.PK

And you are saying that the last test had the best performance?
Surprising.

But how many times did you run each test? Did you ensure that there
was no other load on the server? Did you keep working with the same
table, just adding to it each time, or did you recreate the table for
each test? And what were the results in numbers? Were there huge
differences?

Running performance tests requires care to avoid sources of error. One
very important rule is that any measurement below 50 ms contains too much
white noise to be reliable.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
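The "compute in a view instead of storing" advice from this thread can be shown concretely. Here is a minimal sketch using Python's sqlite3 standing in for SQL Server; the names (imp, other, umbral, calc1, calc2) echo the thread's variables but the schema is otherwise hypothetical.

```python
# Minimal sketch of "do the calculations in a VIEW instead of storing them":
# sqlite3 stands in for SQL Server here, and the column names echo the
# thread (@imp, @other, @umbral) but the schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shared "
             "(pk INTEGER PRIMARY KEY, imp REAL, other REAL, umbral REAL)")

# The derived values are defined once, in the view, never stored in a row.
conn.execute("""
    CREATE VIEW shared_calc AS
    SELECT pk, imp, other, umbral,
           imp * other          AS calc1,
           imp * other - umbral AS calc2
    FROM shared
""")

conn.execute("INSERT INTO shared VALUES (1, 10.0, 20.0, 5.0)")
row = conn.execute("SELECT calc1, calc2 FROM shared_calc WHERE pk = 1").fetchone()
```

Because the base insert carries only the raw fields, there is nothing to recompute on update, and the derived values can never drift out of sync with the inputs.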

Wednesday, March 7, 2012

Insertion / updation problem in SSIS

“I have a scenario where I am trying to insert 200,000 records and update 200,000 records in a destination table using an SSIS package (SQL Server 2005), but I am not able to either update or insert the records. While executing, the package does not show any error either. What could be the problem?”

Our package has business logic to 1) insert new records and 2) update existing records, using the following data flow:

For the update we are using an OLE DB Command; for the insert we are using an OLE DB Destination.

We are using a Merge Join for splitting records into inserts and updates.

Perhaps there is blocking on the destination table. This can often happen if you're attempting 2 operations simultaneously.

Execute sp_who2 to see if there's any blocking going on.

-Jamie

|||

Is there any other solution to this problem? Is it not possible to achieve both insertion and updating in the same package? If I run the package with fewer records it succeeds; if I run it with more records, the same problem comes back again and again.

Thanks & Regards

S.Nagarajan

|||

In that case I am even more sure that blocking is a problem. Did you bother to execute sp_who2 like I advised?

There is an easy fix to this problem. Continue to do the insert, but push the data to be updated into a raw file. You can then use the contents of the raw file in another data-flow in order to do the update.

-Jamie
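The two-pass idea above can be sketched abstractly. This toy (plain Python, not SSIS; all names are illustrative) shows the shape of it: new rows go straight into the destination, while rows needing an update are staged (the role the raw file plays) and applied in a separate second pass, so the two operations never hit the destination at the same time.

```python
# Toy version of the staged-update idea: instead of inserting into and
# updating the destination simultaneously (which can self-block), new rows
# go straight in and update rows are staged (like the raw file) for a
# second pass. All names here are illustrative.
destination = {1: "old-a", 2: "old-b"}     # existing destination rows
incoming = [(2, "new-b"), (3, "new-c")]    # source rows from the data flow

staged_updates = []
for key, value in incoming:                # pass 1: insert new, stage the rest
    if key in destination:
        staged_updates.append((key, value))
    else:
        destination[key] = value

for key, value in staged_updates:          # pass 2: apply the staged updates
    destination[key] = value
```

In SSIS terms, pass 1 is the data flow with the OLE DB Destination plus a Raw File Destination for the update branch, and pass 2 is a second data flow reading that raw file.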

|||I ran sp_who2 and found the blocking. How can I remove the blocking; is there any query to remove it? I couldn't follow your solution clearly; please brief me on your alternate solution.|||

Do you know what raw files are? If not, go away and study them. When you have finished read #1 here: http://blogs.conchango.com/jamiethomson/archive/2006/02/17/2877.aspx

It describes a different scenario for using raw files but the usage is the same.

-Jamie

|||

We have got as far as moving the update data into destination files using raw files. Could you please help us move this file data (for the update) into the database? What logic do we need to use? Is it required to add another data flow after the raw file destination component?

|||

Please stop writing the same question in multiple threads simultaneously. People are here to help and don't want to waste their time clicking through and reading the same thing more than once.

-Jamie