Wednesday, March 21, 2012
Install Error - 'NOLOCK'
I get the following error when installing SQL Server Express SP2 on Windows
XP SP2 with all current updates. The only other thing installed is .NET 2.0.
Does anyone know how to resolve the issue?
[Microsoft][SQL Native Client][SQL Server]Could not continue scan with NOLOCK
due to data movement
Thanks,
Message posted via droptable.com
http://www.droptable.com/Uwe/Forums.aspx/sql-server-msde/200706/1
jwbutler,
I do not know, but I would ask: is your SQL Express engine still
running when you apply SP2?
The error reported is one that SQL Server throws when SELECTing with NOLOCK
and the data is deleted during a narrow window of time, as described here:
http://support.microsoft.com/kb/815008. Could that be happening to you? If
so, shut down SQL Express and try the update.
RLF
"jwbutler via droptable.com" <u16619@.uwe> wrote in message
news:7455d61651947@.uwe...
>I get the following error when installing SQL Server Express SP2 on Windows
> XP SP2 with all current updates. The only other thing installed is .NET
> 2.0.
> Does anyone know how to resolve the issue.
> [Microsoft][SQL Native Client][SQL Server]Could not continue scan with
> NOLOCK
> due to data movement
> Thanks,
> --
> Message posted via droptable.com
> http://www.droptable.com/Uwe/Forums.aspx/sql-server-msde/200706/1
>
Monday, March 19, 2012
Installation
I'm using the most current version of MSDE, and I've run all my updates on a
Windows 2000 Pro machine.
Someone else did the install, and I'm receiving an error message that SQL is
not installed properly or has been tampered with, please uninstall and
reinstall. So I went to Add/Remove Programs, uninstalled MSDE, and
reinstalled, but I'm still receiving the same error message. I go to
Services to start it and it won't start.
So that's where I'm at, thanks, Craig
Hi Craig,
Thank you for using Microsoft NewsGroup!
Since the installation issue might be related to many possibilities, please
collect the following information to help me analyze this issue, thank you!
1. Take a screenshot of the error message, and send it to me, my email:
v-warnw@.microsoft.com
Create a screenshot
=================
a) When the message appears in the window, press PrScrn (the key right of
F12).
b) Click Start menu -> All Programs -> Accessories -> Paint.
c) Press Ctrl+V to paste the screenshot into Paint, and save the
picture as a GIF file.
2. Collect and send me the related MPSReport.
1). Please go to the following link, and download MPSRPT_SQL.exe.
http://www.microsoft.com/downloads/d...c7c-7ca5-408f-
88b7-f9c79b7306c0&displaylang=en
2). Double-click to open this file, and click Yes to start collecting the
information.
3). When the tool is done you will see an Explorer window open up. In the
Explorer window there should be a .CAB and an .NFO file that you need to send
to me.
If there is anything unclear, please feel free to let me know. Thank you!
Sincerely,
Warner Wang
Microsoft Online Partner Support
Get Secure! - www.microsoft.com/security
=====================================================
When responding to posts, please "Reply to Group" via
your newsreader so that others may learn and benefit
from your issue.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
Thanks, Warner,
It appears that the user had a bad hard drive.
Thanks for your time, Craig
"Warner wang" <v-warnw@.microsoft.com> wrote in message
news:nRchyXgMGHA.128@.TK2MSFTNGXA01.phx.gbl...
> Hi Craig,
> Thank you for using Microsoft NewsGroup!
> Since the installation issue might be related to many possibilities,
> please
> collect the following information to help my analyze this issue, thank
> you!
> 1. Take a screenshot of the error message, and send it to me, my email:
> v-warnw@.microsoft.com
> Create a screenshot
> =================
> a) When the message appears in the window, press PrScrn (the key right of
> F12).
> b) Click start menu->all programs->accessories->paint.
> c) Press Ctrl+V to paste the screen shot into the paint, and save the
> picture in GIF file type.
> 2. Collect related MPSREPORT to me.
> 1). Please go to the following link, and download the MPSRPT SQL.exe.
> http://www.microsoft.com/downloads/d...c7c-7ca5-408f-
> 88b7-f9c79b7306c0&displaylang=en
> 2). Double click to open this file, click Yes to start to collect the
> information.
> 3). When the tool is done you will see an Explorer Window open up. In the
> Explorer window should be a .CAB and a .NFO file that you need to send to
> me.
> If there is anything unclear, please feel free to let me know. Thank you!
> Sincerely,
> Warner Wang
> Microsoft Online Partner Support
> Get Secure! - www.microsoft.com/security
> ================================================== ===
> When responding to posts, please "Reply to Group" via
> your newsreader so that others may learn and benefit
> from your issue.
> ================================================== ===
> This posting is provided "AS IS" with no warranties, and confers no
> rights.
>
Hi Craig,
I am glad to know you figured this out. If you need further assistance,
please feel free to let me know by posting to this newsgroup. Thank you very much!
Sincerely,
Warner Wang
Microsoft Online Partner Support
Get Secure! - www.microsoft.com/security
=====================================================
When responding to posts, please "Reply to Group" via
your newsreader so that others may learn and benefit
from your issue.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
Friday, March 9, 2012
Inserts and Updates question (newbie)...should be simple to explain
Here is the situation:
We have an old proprietary database that is used for daily tasks. There is not much we can do with it, but we have to use it.
I can create basically a read only connection to it through ODBC.
I would like to copy, on a timed interval, certain data for reporting from this slow thing to my SQL Server, so that I can learn to program and create some cool reports without having to wait on this server all day.
So here is what I don't quite understand.
I had originally planned on just deleting the contents of the table on my SQL Server just before I populated it each time, but found out that my AutoNumber field will continue to increase, and I'm assuming that eventually I'm going to run into a problem as a result.
Should I be doing some kind of update instead? If so, do I need to first check whether the record exists; if not, do an insert, and if so, do an update?
Or is there a way to basically do it in one command?
I hope this makes sense. I would show you some code but there really isn't much to show you other than my insert statement :->
Thanks for any advice!
Josh
Well, the quickest and easiest answer will solve the first problem that you mentioned. If you use TRUNCATE TABLE yourTableName instead of DELETE FROM yourTableName, you'll reseed the identity value and it'll start all over from the beginning. As an added bonus, this performs better, with less overhead than using DELETE.
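In code, the two variants look like this (the table name is made up for illustration):

```sql
-- DELETE is fully logged and leaves the identity seed where it was,
-- so the next insert continues counting upward.
DELETE FROM StagingReport;

-- TRUNCATE TABLE deallocates the data pages with minimal logging
-- and resets the identity seed back to its original value.
TRUNCATE TABLE StagingReport;
```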
This "AutoNumber field" is better known as an identity field. It is used in conjuction with a data-type such as int, or bigint. It automates the unique identification of records in a table. It is usually associated with the Primary Key field of a table. If you want to truely replicate the data across different instances, then you should preserve this number. This is especially important if you have other tables that reference these records, which you are also trying to replicate now or later. You can do this by updating the fields of the records based on this key field.
By the way, with an integer (SQL 2000) data type you have about 2 billion records before you'll need to worry about running out of numbers. If you're getting that much data in, then you should look into data-warehousing concepts.
Example of an update statement: UPDATE tablename SET field2 = infield2, ... WHERE keyfield1 = inkeyfield1
Adam
Xanderno wrote:
Well, the quickest and easiest answer will solve the first problem that you mentioned. If you use TRUNCATE TABLE yourTableName instead of DELETE FROM yourTableName, you'll reseed the identity value and it'll start all over from the beginning. As an added bonus, this performs better, with less overhead than using DELETE.
Just be careful with TRUNCATE TABLE. This is a non-logged operation. If you are doing something like transaction log shipping, this will throw a major wrench into it. This is likely not an issue for you, but I have run into the problem, so I thought I'd mention it.
The alternative is a DELETE with a subsequent DBCC CHECKIDENT to reset the identity seed value. As Xander says, this will incur more overhead than a simple TRUNCATE TABLE.
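Sketched out (the table name is made up; reseeding to 0 makes the next inserted identity value 1):

```sql
-- Fully logged alternative to TRUNCATE TABLE:
DELETE FROM StagingReport;

-- Reset the identity seed so numbering starts over:
DBCC CHECKIDENT ('StagingReport', RESEED, 0);
```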
Additionally, if you have records in your table which are related to other tables in the database by that identity value, the best course of action is an in-place UPDATE, with an INSERT for new records and a DELETE of records that have disappeared from the data source. FWIW, I actually converted a daily import procedure from the "delete all records and insert fresh" approach to the UPDATE/INSERT/DELETE approach and was surprised how quickly it still ran (for about 6000 records). (This was the same stored procedure in which I tried the TRUNCATE TABLE, incurring the wrath of the DBA.)
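The UPDATE/INSERT/DELETE approach might be sketched like this (all table and column names here are invented for illustration):

```sql
-- 1. Update rows that exist in both the source and the target.
UPDATE t
SET    t.col1 = s.col1,
       t.col2 = s.col2
FROM   ReportCopy AS t
JOIN   SourceData AS s ON s.id = t.id;

-- 2. Insert rows that are new in the source.
INSERT INTO ReportCopy (id, col1, col2)
SELECT s.id, s.col1, s.col2
FROM   SourceData AS s
WHERE  NOT EXISTS (SELECT * FROM ReportCopy AS t WHERE t.id = s.id);

-- 3. Delete rows that have disappeared from the source.
DELETE t
FROM   ReportCopy AS t
WHERE  NOT EXISTS (SELECT * FROM SourceData AS s WHERE s.id = t.id);
```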
Really, I have to wonder if you need the identity column at all. If it's only there for looks (and I'd imagine that it is, since you're asking about how large it'll get rather than how to maintain it for a given row across imports), why not do away with it entirely? If it's just a static lookup table, and the only reason for importing it into SQL Server is to reduce overhead, I'd suggest that the SQL Server table not contain *anything* other than what you have in the original table.
As to how you get it into SQL Server, I'll leave it to you to decide whether truncating the table or updating/inserting new and changed rows will be most efficient, but do look at both options and keep the caveats that Terri mentioned in mind.
Inserts and updates issue using SP
Hi, I have the following SP. I'm using VS.NET 2005 and SQL Server 2005.
Now, how do I implement functionality like this: using datasets, an exception is thrown when the row has been modified, and I get to inform the user and reload the modified row.
How do I get the same functionality using an SP? Is that possible? How?
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
go
ALTER PROCEDURE [dbo].[sp_Insert_Or_Update_Bill]
@BillID_All uniqueidentifier,
@BillID uniqueidentifier,
@Pono nvarchar(25),
@Date_ smalldatetime,
@SupplierCode nvarchar(25),
@Reference nvarchar(25),
@AmountType nchar(10),
@BillType nvarchar(25),
@TypeCode nchar(10),
@AmountFC decimal(18,2),
@ROE decimal(9,2),
@Currency nchar(5),
@LinkBill uniqueidentifier,
@multiplicityID uniqueidentifier,
@payment_or_bill smallint
as
IF EXISTS (SELECT * FROM bill WHERE billid_all = @BillID_All AND amounttype = @AmountType)
begin
update bill
SET [BillID_All] = @BillID_All
,[BillID] = @BillID
,[Pono] = @Pono
,[Date_] = @Date_
,[SupplierCode] = @SupplierCode
,[Reference] = @Reference
,[AmountType] = @AmountType
,[BillType] = @BillType
,[TypeCode] = @TypeCode
,[AmountFC] = @AmountFC
,[ROE] = @ROE
,[Currency] = @Currency
,[LinkBill] = @LinkBill
,[multiplicityID] = @multiplicityID
,[payment_or_bill] = @payment_or_bill
where billid_all = @BillID_All and amounttype = @AmountType
end
else
if not @AmountFC = 0
begin
begin
INSERT INTO [Costing].[dbo].[Bill]
([BillID_All]
,[BillID]
,[Pono]
,[Date_]
,[SupplierCode]
,[Reference]
,[AmountType]
,[BillType]
,[TypeCode]
,[AmountFC]
,[ROE]
,[Currency]
,[LinkBill]
,[multiplicityID]
,[payment_or_bill])
VALUES
(@BillID_All
,@BillID
,@Pono
,@Date_
,@SupplierCode
,@Reference
,@AmountType
,@BillType
,@TypeCode
,@AmountFC
,@ROE
,@Currency
,@LinkBill
,@multiplicityID
,@payment_or_bill)
end
end
This is known as optimistic locking, and it requires you to check to see if the copy of the data you originally fetched still exists using one of a few methods:
1. Pass all of the non-modified data you have in the table to the procedure and check to see if it has changed. That is probably how your app does it now. Not very efficient, and won't work with text.
2. Add a rowversion (commonly called timestamp) column to the table. Then check to see if it has changed. rowversions manage themselves, so every modification to the table will spawn a change to the rowversion.
ALTER TABLE bill ADD row_version rowversion
Then in your update or delete:
...
,[payment_or_bill] = @payment_or_bill
where billid_all = @BillID_All
and amounttype = @amounttype
and row_version = @row_version -- value fetched along with the row to the client
if @@rowcount <> 1 -- not what you expected
begin
    -- Check whether a row with the key you are trying to update still exists at all:
    if not exists (select * from bill
                   where billid_all = @BillID_All and amounttype = @amounttype)
        raiserror ('The row has been deleted', 16, 1)
    else
        raiserror ('The row has been modified', 16, 1)
end
3. Add an update_date column to tables and check this. The downside here is that you have to manage the change values yourself with a trigger, or even in the stored procedures (I always use triggers). Do the same sort of check as for the rowversion.
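A sketch of option 3, using the bill table and the two-column key from the procedure above (the trigger name and the column default are my own invention):

```sql
-- Add the change-tracking column:
ALTER TABLE bill ADD update_date datetime NOT NULL DEFAULT GETDATE();
GO
-- Maintain it with a trigger so every UPDATE stamps the modified rows:
CREATE TRIGGER trg_bill_update_date ON bill
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE b
    SET    update_date = GETDATE()
    FROM   bill AS b
    JOIN   inserted AS i ON i.billid_all = b.billid_all
                        AND i.amounttype = b.amounttype;
END
```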
It takes some work, but the last two are far more efficient than number 1...
Inserts and updates in SQL Server
I have a multithreaded application: VC++ 6, VB6 and OLE DB. Every thread
has its own session to the database.
If I do all the inserts and do updates in one thread it works fine.
All my data is in the database.
If I split the work to be done across more than one thread, SQL Profiler
shows that it does the inserts and updates, but when I look in
the database not all the data was added. There were no errors logged.
Does anyone have an idea what could cause this?
Thanks,
Gert
Gert wrote:
> Hi
> I have a multithreaded application. VC6++, VB6 and OLEDB. Every thread
> has its own session to the database.
> If I do all the inserts and do updates in one thread it works fine.
> All my data is in the database.
> If I split the work to be done on more than one thread it shows in the
> SQL Profiler the it does the inserts and updates, but when I look in
> the database not all the data was added. There were no errors logged.
Do you insert the data within a transaction? Maybe you do not call COMMIT at
the end of it, and the transaction is rolled back after the timeout.
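In other words, each thread's session has to commit its own work; a minimal sketch (the table and values are invented for illustration):

```sql
BEGIN TRANSACTION;

INSERT INTO WorkItems (item) VALUES ('from thread 1');
UPDATE WorkItems SET item = 'updated' WHERE id = 1;

-- Without this COMMIT, the statements still show up in Profiler,
-- but the changes are rolled back when the session times out or closes.
COMMIT TRANSACTION;
```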
sincerely,
--
Sebastian K. Zaklada
Skilled Software
http://www.skilledsoftware.com
This posting is provided "AS IS" with no warranties, and confers no rights.
Insertion and updates on a 20,000,000-tuple table
I have a table with 20,000,000 tuples.
I have been monitoring the performance of the insertions and updates, but
the results do not convince me at all.
The table has 30 columns, and 12 of them are calculated columns.
The tests that I did were these:
One insertion with all the columns, calculating the calculated columns
in the insert statement itself.
One insertion with all the columns calculated into @vars first.
One insertion with the basic fields, followed by 10 updates.
And the result was that the last test was the most performant.
What is your opinion?
Andrix (elkpichico@.gmail.com) writes:
> I have a table with 20.000.000 of tuples.
> I have been monitoring the performance of the insertion and updates,
> but not convince me at all.
> The table have 30 columns, what and 12 of it, are calcultated column.
> The test that i do was this:
> 1 Insertion with all the columns and calculing the calcultated columns
> in the insertion sentence.
> 1 insertion and all the columns calculated in @.vars..
> 1 insertion with the basic fields, and 10 updates.
> And the result was that the last test was the most performant.
> What is your opinion?
That your posting is not very clear. I would take "calculated columns"
to mean "computed columns", but since you can't insert explicit values
in computed columns that does not really fit.
Why not post the code you used, so it's easier to understand what you
are talking about.
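As background on Erland's point: a computed column is declared as an expression over the other columns, and SQL Server derives its value itself, which is why an INSERT cannot supply one explicitly. A small sketch (the table and column names are made up for illustration):

```sql
CREATE TABLE shared_calc_demo (
    pk    int           NOT NULL PRIMARY KEY,
    imp   decimal(18,2) NOT NULL,
    other decimal(18,2) NOT NULL,
    calc1 AS (imp * other)   -- computed column: derived, never inserted directly
);

-- Only the base columns are listed; calc1 is computed when read.
INSERT INTO shared_calc_demo (pk, imp, other) VALUES (1, 2.50, 4.00);

SELECT calc1 FROM shared_calc_demo;  -- returns 10.00
```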
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
Hi.
The tests that I did were these:
select @PK = ....
INSERT INTO shared_calc
VALUES (@PK, 10, 20, 223, 'calculo trivial', ...... , @imp * @other, ...)
I mean that in the same insert statement, I do all the calculations to insert
into the table.
The other was:
select @cal1 = @imp * @other
select @cal2 = @imp * @other - @umbral
select @PK = ....
INSERT INTO shared_calc
VALUES (@PK, 10, 20, 223, 'calculo trivial', ...... , @cal1, @cal2, ...)
and the last one was:
select @PK = ....
INSERT INTO shared_calc
VALUES (@PK, 10, 20, 223, 'calculo trivial', ...... )
select @cal1 = @imp * @other
select @cal2 = @imp * @other - @umbral
update shared_calc_imp
set calc1 = @cal1
where pk = @PK
update shared_calc_imp
set calc2 = @cal2
where pk = @PK
thanks!
Andrix.
Do not store calculations in a table. You can do the calculations in a
VIEW, in the application or in computed columns (a proprietary
shorthand) for a VIEW. This will save you disk space of course. It
will also run faster, since reading from a disk is very slow compared
to math done in main storage. But the real benefit is data integrity,
which your approach will destroy.
Andrix (elkpichico@.gmail.com) writes:
> The tests that I did were these:
> select @PK = ....
> INSERT INTO shared_calc
> VALUES (@PK, 10, 20, 223, 'calculo trivial', ...... , @imp * @other, ...)
> I mean that in the same insert statement, I do all the calculations to
> insert into the table.
> The other was:
> select @cal1 = @imp * @other
> select @cal2 = @imp * @other - @umbral
> select @PK = ....
> INSERT INTO shared_calc
> VALUES (@PK, 10, 20, 223, 'calculo trivial', ...... , @cal1, @cal2, ...)
> and the last one was:
> select @PK = ....
> INSERT INTO shared_calc
> VALUES (@PK, 10, 20, 223, 'calculo trivial', ...... )
> select @cal1 = @imp * @other
> select @cal2 = @imp * @other - @umbral
> update shared_calc_imp
> set calc1 = @cal1
> where pk = @PK
> update shared_calc_imp
> set calc2 = @cal2
> where pk = @PK
And you are saying that the last test had the best performance?
Surprising.
But how many times did you run each test? Did you ensure that there
was no other load on the server? Did you work with the same table
that you just added to each time? Or did you recreate the table for
each test? And what were the results in numbers? Were there huge
differences?
Running performance tests requires care to avoid sources of error. One
very important one is that any measurement below 50 ms contains too much
white noise to be reliable.
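One way to get numbers above that noise floor is to let SQL Server report its own timings and to repeat each variant many times; a rough sketch:

```sql
-- Optional: clear the buffer cache between runs for cold-cache numbers
-- (requires sysadmin rights; only appropriate on a test server).
DBCC DROPCLEANBUFFERS;

SET STATISTICS TIME ON;   -- prints CPU and elapsed time per statement
-- ... run the insert/update variant under test here, ideally in a loop ...
SET STATISTICS TIME OFF;
```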
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx