
Thursday, March 29, 2012

About an RS version and prerequisites for use

I own a Windows Small Business Server 2003 license which includes SQL Server 2000
Standard Edition and additionally came with a version of Reporting Services.
I want to learn and use Reporting Services as a beginner, but when I
try to install it a message appears indicating that I first need to install or
configure two products:
a) Visual Studio .NET 2003
b) IIS 5.0
Do I need both of 'em just to begin doing simple reports?
I expected something simple, like what I could get with Crystal Reports 7.0
or similar.
Please, help me.
Probably next year we will migrate to a new version of Microsoft SBS.
Is it worth the effort to work with the versions I own now, or not?
Thanks a lot in advance.
--
sanpetus

The RS 2000 report designer requires a copy of VS 2003 to be installed. In the
past, VB.NET 2003 was the cheapest way to do this (about $100); I don't know
about now. Note that VB 2005 will not work for this.
RS 2005 comes with a version of VS 2005, so no additional purchase is
necessary.
RS is an ASP.NET application and as such it needs IIS. IIS comes with all
server editions; it might need to be configured, though.
Bruce Loehle-Conger
MVP SQL Server Reporting Services

Sunday, March 11, 2012

A Smart Way to Design a Database for Companies, People, Addresses?

I have a database design question. I am dealing with a bunch of
standard content categories: people, addresses and companies. I don't
know whether I should have a table for each type of data and associate
addresses (via join tables) with people or companies or both; or
whether the companies table and the people tables should both include
address info. How far does one usually go to normalize these standard
types of data?
Sure, you could say that it depends on the specific business logic,
but I am curious if anyone out there has a way of normalizing this
type of data that has proven effective for most applications. I'm not
sure how our app will grow over time and therefore I want to
accommodate unforeseen requirements upfront without wasting untold
hours on far edge cases. Should I just bite the bullet and assume
we'll need a many-to-many relationship for each?
Any advice or suggested reading would be much appreciated.

>> How far does one usually go to normalize these standard types of data?
First of all, there are no "standard types" of data. Every piece of data one
deals with during logical modeling depends on the conceptual model and
business rules. Of course, you can find certain aspects of commonality
among models, but relying on them as fixed patterns is often a bad
approach.
In general, normalization beyond 1NF is the process of reducing certain
types of redundancy and removing database bias. In the process, certain
known problems like update/delete anomalies are eliminated. Overall, this
leads to a better-quality design. Therefore the answer to your question is:
you go as far as needed to keep your database redundancy-free.
>> Sure, you could say that it depends on the specific business logic, but I
>> am curious if anyone out there has a way of normalizing this type of data
>> that has proven effective for most applications.
There are many who claim this is possible (universal models, business
patterns, etc.; for more, search for David Hay or Len Silverston), but many
recognized scholars and experts in the database field are not in favor of
such approaches.
>> I'm not sure how our app will grow over time and therefore I want to
>> accommodate unforeseen requirements upfront without wasting untold hours
>> on far edge cases. Should I just bite the bullet and assume we'll need a
>> many-to-many relationship for each?
Expecting the worst is often the wisest policy :-)
--
Anith

For efficiency's sake it is common practice to design such tables with the
address (and often contact) information of the entity (Business, Employee,
Customer, etc) contained within the main entity table.
> For efficiency's sake it is common practice to design such tables with the
> address (and often contact) information of the entity (Business, Employee,
> Customer, etc.) contained within the main entity table.
But this quickly changes if the app requires separate addresses for mailing,
shipping, billing, physical, etc. And then when a company can have multiple
branches the schema becomes even more complex.
The same is true for an employee or a customer: if multiple addresses are
possible, then keeping them in the main entity table is out of the question.
Consider Amazon or PayPal, where you can store as many addresses as you like.
I'm not sure what their upper bound is (and haven't tried), but I think I have
5 on PayPal, and it's highly doubtful they have a table like this:
CREATE TABLE dbo.Customers
(
FirstName,
LastName,
PrimaryAddress1,
PrimaryAddress2,
PrimaryCity,
PrimaryRegion,
PrimaryZip,
SecondaryAddress1,
...,
SecondaryZip,
TertiaryAddress1,
...,
TertiaryZip,
/*
etc.
etc.
*/
);
A

>> Same is true for an employee, or a customer, if multiple addresses are
>> possible then the main entity table is out of the question. Consider
>> Amazon or PayPal, where you can store as many addresses as you like.
That is almost a universal requirement these days with many financial
institutions too. While it is hard to generalize, one pattern for a commonly
occurring entity class like customer is to have 3 core m:n relationship
designations in the schema:
-- the core customer entity:
name, ssn, passport_nbr, drivers_license etc.
-- the locality:
address, state, zip, region, country etc.
-- the contact/communication information:
phone_nbr, email_address, icq/mkt/linkedin etc.
One book I recently read, Data Model Patterns by D. Hay, emphasizes this
approach. His line of thought mostly follows the Zachman Framework to arrive
at this common pattern, and he provides some reasonable justifications for
adopting it as a baseline. The UDM folks also have a similar approach:
http://www.univdata.com/images/sep98752-fig6.gif
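A minimal DDL sketch of that three-way pattern, assuming surrogate keys (all names and types here are illustrative, not taken from Hay's book or the UDM diagram):

```sql
-- Core entity and the two satellite entity classes
CREATE TABLE Customer (CustomerId int NOT NULL PRIMARY KEY,
                       Name varchar(100) NOT NULL, Ssn char(11));
CREATE TABLE Locality (LocalityId int NOT NULL PRIMARY KEY,
                       Address varchar(200) NOT NULL, State varchar(50),
                       Zip varchar(10), Country varchar(50));
CREATE TABLE Contact  (ContactId int NOT NULL PRIMARY KEY,
                       PhoneNbr varchar(20), EmailAddress varchar(100));

-- m:n designations linking the core entity to each satellite
CREATE TABLE CustomerLocality (
    CustomerId int NOT NULL REFERENCES Customer(CustomerId),
    LocalityId int NOT NULL REFERENCES Locality(LocalityId),
    PRIMARY KEY (CustomerId, LocalityId));
CREATE TABLE CustomerContact (
    CustomerId int NOT NULL REFERENCES Customer(CustomerId),
    ContactId  int NOT NULL REFERENCES Contact(ContactId),
    PRIMARY KEY (CustomerId, ContactId));
```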
--
Anith

>Should I just bite the bullet and assume we'll need a many-to-many
>relationship for each?
Yes. Some years back I fell into the "put the address in the base table"
trap.
People -< PeopleCompany >- Companies
      |                        |
      ^                        ^
PeopleAddresses        CompanyAddresses
      Y                        Y
      |                        |
  Addresses >-- AddressTypes
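The schema sketched above might translate into DDL roughly as follows; this is a sketch with column choices of my own, not a schema from the thread:

```sql
CREATE TABLE People        (PersonId int NOT NULL PRIMARY KEY,
                            FirstName varchar(50), LastName varchar(50));
CREATE TABLE Companies     (CompanyId int NOT NULL PRIMARY KEY,
                            CompanyName varchar(100));
CREATE TABLE AddressTypes  (AddressTypeId int NOT NULL PRIMARY KEY,
                            Description varchar(30));  -- mailing, billing, shipping, ...
CREATE TABLE Addresses     (AddressId int NOT NULL PRIMARY KEY,
                            AddressTypeId int NOT NULL REFERENCES AddressTypes(AddressTypeId),
                            Line1 varchar(100), City varchar(50), Zip varchar(10));

-- The three many-to-many join tables from the diagram
CREATE TABLE PeopleCompany    (PersonId  int NOT NULL REFERENCES People(PersonId),
                               CompanyId int NOT NULL REFERENCES Companies(CompanyId),
                               PRIMARY KEY (PersonId, CompanyId));
CREATE TABLE PeopleAddresses  (PersonId  int NOT NULL REFERENCES People(PersonId),
                               AddressId int NOT NULL REFERENCES Addresses(AddressId),
                               PRIMARY KEY (PersonId, AddressId));
CREATE TABLE CompanyAddresses (CompanyId int NOT NULL REFERENCES Companies(CompanyId),
                               AddressId int NOT NULL REFERENCES Addresses(AddressId),
                               PRIMARY KEY (CompanyId, AddressId));
```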
On Fri, 05 Oct 2007 12:16:06 -0700, Chris
<christopher.b.lewis@gmail.com> wrote:
>I have a database design question. I am dealing with a bunch of
>standard content categories: people, addresses and companies. I don't
>know whether I should have a table for each type of data and associate
>addresses (via join tables) with people or companies or both; or
>whether the companies table and the people tables should both include
>address info. How far does one usually go to normalize these standard
>types of data?
The more time you spend with databases, the more you tend to factor
things into 3NF or better as a default. Break out the addresses!
>Sure, you could say that it depends on the specific business logic,
>but I am curious if anyone out there has a way of normalizing this
>type of data that has proven effective for most applications. I'm not
>sure how our app will grow over time and therefore I want to
>accommodate unforeseen requirements upfront without wasting untold
>hours on far edge cases. Should I just bite the bullet and assume
>we'll need a many-to-many relationship for each?
>Any advice or suggested reading would be much appreciated.
Here's my advice: don't be afraid of the join.
Josh

> Here's my advice: don't be afraid of the join.
Depends on what your objective is: maximum flexibility or maximum
performance.
>> > Here's my advice: don't be afraid of the join.
>> Depends on what your objective is. Maximum flexibility or maximum
>> performance.
Except that one does not always have to sacrifice one to gain the other. And
a logical operation like a join has little to do with either anyway.
Otherwise, anyone whose objective is maximum performance would be better off
avoiding joins altogether by having a single table :-)
--
Anith

On Sun, 7 Oct 2007 19:52:32 -0500, "Anith Sen"
<anith@bizdatasolutions.com> wrote:
Well, by having a lot of redundant data pre-digested (joined and/or
aggregated) for common queries.
J.

Thursday, March 8, 2012

A simple varchar comparison

First, hi to all; I'm new to this forum.
Second, I don't know if standard SQL covers stored procedures, but I'll post my question anyway.
I have a cheesy little procedure to get a password from a login; login is a varchar.

CREATE PROCEDURE getPassw
@login as varchar
AS
SELECT T_Worker.pass
FROM T_Worker
WHERE T_Worker.login = @login

If I try " exec getPassw 'abc' ", I never get anything. The data exists in the tables. If I do something like

CREATE PROCEDURE getPassw
/*@login as varchar*/
AS
SELECT T_Worker.pass
FROM T_Worker
WHERE T_Worker.login = 'abc'

The password shows up : '456' .
If I remove the '@' from the WHERE query, all columns are returned... ?!?!? :confused:
I'm running the commands on a Microsoft SQL Server. Thank you for your attention. Any help would be seriously appreciated.

This post really belongs in the Microsoft SQL (http://www.dbforums.com/f7/) forum.

I think the only problem is that you didn't cut yourself enough rope... You need to make the parameter longer, like:

CREATE PROCEDURE getPassw
@login as varchar(50)
AS

SELECT T_Worker.pass
FROM T_Worker
WHERE T_Worker.login = @login

RETURN
-PatP

AH!!! Something that simple... But it did work... Sorry for not posting this in the right place :D
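For the record, the underlying cause: in T-SQL, a varchar declared without a length defaults to varchar(1) in variable and parameter declarations, so the 'abc' argument was silently truncated to 'a' and matched nothing. A quick way to see it:

```sql
DECLARE @login varchar;  -- no length given: this is varchar(1)
SET @login = 'abc';
SELECT @login;           -- returns 'a', so WHERE T_Worker.login = @login finds no rows
```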

A significant part of sql server process memory has been paged out

On a SQL Server 2005 x64 Standard Edition cluster I get the error listed below, and then the SQL Server service restarts. SQL Server is unavailable for 5-10 minutes during that time. Any ideas?

Error:

A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 647 seconds. Working set (KB): 11907776, committed (KB): 28731732, memory utilization: 41%.

This means that Windows has decided to move SQL Server memory out of RAM and store it on disk until it's needed again. This article has some more information on how to avoid this:

http://support.microsoft.com/kb/918483/en-us

You also mentioned that SQL Server has restarted after this. Could you please post the last 10 lines or so of errorlog when this occurred?

Thanks,

Fabricio.

|||

I did look at that article but I don't think that will work for SQL Standard Edition.

Note SQL Server 2005 Enterprise Edition is the only edition that is designed to use lock pages in memory.

|||

In that case, try to identify what's causing Windows to page SQL Server out to disk. SQL Server tries to fill up the available memory on the machine. When you have other apps running on the same machine, they may use a lot of memory and push SQL Server out to the swap file. SQL Server will detect this and trim its memory, but in some rare cases it can't release enough memory or can't release fast enough and may end up being swapped out. This message is to notify you that such event happened and that SQL Server's memory is being fetched from disk, causing some temporary slowness.

Please, let me know if you need help figuring out what app is using memory on the machine.
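If the pressure turns out to come from SQL Server's own allocations rather than another app, one hedged starting point on SQL Server 2005 is the memory-clerks DMV (the column names below are the 2005-era ones; later versions renamed them):

```sql
-- Top consumers of memory inside the SQL Server process (SQL Server 2005)
SELECT TOP (10)
       type,
       SUM(single_pages_kb + multi_pages_kb) AS total_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY total_kb DESC;
```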

Thanks,

Fabricio.

|||

I believe the KB article is incorrect in stating it's only valid for x64 Enterprise Edition. I encountered the same problem you reported, and the Lock Pages In Memory policy worked just fine with x64 Standard Edition.

|||

This article explains the problem I was having:

http://blogs.technet.com/askperf/archive/2007/05/18/sql-and-the-working-set.aspx

The system was running Windows 2003 SP1. Have upgraded to Windows 2003 SP2 and have not had the problem since.

|||

Kmoskal,

The KB article is correct. Only SQL Server Enterprise will lock pages in memory. You may set the privilege with any other edition, but it will have no effect in SQL Server. I guess you have just been lucky (and SQL Server has been doing a great job keeping itself in RAM) :-)

Thanks,

Fabricio.

Saturday, February 11, 2012

a group of sp_MSins stored procs has 2 members?

I decided to check with the forum because I haven't had any luck elsewhere.
Here is the transactional replication between two SQL 2000 Standard Edition
+ SP3 servers I inherited a few months ago. On the subscriber, when I pushed
a new subscription there was an error "Cannot insert duplicate key row in
object bla bla with unique index 'bla bla'". Well, the errors are easy to
understand and fix. To my surprise, I noticed that the replication insert
stored proc has two stored procs defined in it. It looks like this:
create procedure "sp_MSins_tablename" @c1 smalldatetime,@c2 int,@c3
smallint,@c4 money,@c5 money,@c6 int,@c7 char(3),@c8 int
AS
BEGIN
insert into "smfPrice"(
"PriceDate", "SecurityId", "SecurityVersion", "AdjustedPrice", "Price",
"AuditInformationId", "IsoCurrencyCode", "DataSourceId"
)
values (
@c1, @c2, @c3, @c4, @c5, @c6, @c7, @c8
)
END
GO
create procedure "sp_MSins_tablename";2 @c1 smalldatetime,@c2 int,@c3
smallint,@c4 money,@c5 money,@c6 int,@c7 char(3),@c8 int
as
if exists ( select * from "tablename"
where "PriceDate" = @c1 and "SecurityId" = @c2 and "SecurityVersion" = @c3
)
begin
update "tablename" set "AdjustedPrice" = @c4,"Price" =
@c5,"AuditInformationId" = @c6,"IsoCurrencyCode" = @c7,"DataSourceId" = @c8
where "PriceDate" = @c1 and "SecurityId" = @c2 and "SecurityVersion" = @c3
end
else
begin
insert into "tablename" (
"PriceDate","SecurityId","SecurityVersion","AdjustedPrice","Price","AuditInformationId","IsoCurrencyCode","DataSourceId" ) values (
@c1,@c2,@c3,@c4,@c5,@c6,@c7,@c8 )
end
GO
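For readers unfamiliar with the ";2" suffix: this is SQL Server's numbered-procedure syntax (long deprecated), where procedures sharing a name form a group and callers pick a member by number. A minimal illustration with a made-up name:

```sql
CREATE PROCEDURE demo_group;1 AS SELECT 'member one'
GO
CREATE PROCEDURE demo_group;2 AS SELECT 'member two'
GO
EXEC demo_group;2          -- runs the second member of the group
GO
DROP PROCEDURE demo_group  -- drops every member of the group at once
GO
```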
I think the second version is better written than the first one and can
suppress the said error. But obviously SQL Server replication is not able to
pick the right one. When I swapped the procs' titles while leaving the
definitions unchanged, the error went away.
My puzzle is: What event triggers the generation of the second set of procs?
What internal mechanism guarantees that replication uses the correct version?
I'm thinking of removing the first instance of the procs because the second
one already covers it. But for the future, I want to find out why it happens
and potentially come up with a better solution. I hope this is not a bug.
I'd like to hear your thoughts.
Richard
The second proc is a custom proc generated by a user. I suspect each proc
has a different owner. You should be able to check this by doing an sp_help
sp_MSins_tablename, or by browsing them in EM.
These two procs are actually in one definition, except they have a suffix.
And they all belong to dbo.
I am able to reproduce what has happened: by checking the "do not block"
option on and off on the snapshot tab in publication properties, I could see
two different versions of the procs being generated. I can also see the
difference is in the sp_addpublication @sync_method variable. One is 'native',
which creates one sp_MSins proc; when I check the said option, SQL uses
'concurrent' and creates two procs in one set.
To clarify, two procs in one set means two "create proc" statements
in the GUI when you double-click on the proc, one of which is tagged with
";2" as in sp_MSinstablename;2
99% of my publications are created this way and they have been running well
with no problems. That means SQL Server replication knows when to use which
proc to do the job. My current error indicates that replication somehow lost
the judgement it used to have: it takes the default proc (the top one)
instead of the second (bottom one). So far, I've had no luck in finding
out any useful hints.
Microsoft developers, possible to shed some light on this?
Richard
I've made a bit more progress:
It appears sp_script_reconciliation_insproc created sp_MSins;2.
sp_scriptinsproc created sp_MSins.
sp_depends shows nobody depends on sp_script_reconciliation_insproc.
Looks like two separate processes triggered each stored proc individually,
with sp_scriptinsproc doing it first.
Is this normal behavior?