Hey friends, I often get questions and concerns like "how do I shrink this database?", "the log file is so big", "please help me".
How do I shrink a database?
DBCC SHRINKDATABASE('databasename', 10)
-- 10 is the target percentage of free space to leave in the database after the shrink operation
How do I shrink a transaction log?
DBCC SHRINKFILE(2, 10)
-- 2 is the file_id of the second file of the database, usually the transaction log
-- 10 is the target size of the file in MB after the shrink operation
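Not sure which file_id the transaction log actually has? This little query (nothing fancy, just the standard catalog view) lists the files of the current database:
SELECT file_id, name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files
-- type_desc = 'LOG' marks the transaction log file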
How do I get stats about the transaction log?
These two queries might be of help...
select * from sys.dm_db_log_stats(db_id())
select * from sys.dm_db_log_info(db_id())
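And if the log refuses to shrink, sys.databases can tell you what is holding truncation up (LOG_BACKUP is the usual suspect):
SELECT name, log_reuse_wait_desc
FROM sys.databases
-- LOG_BACKUP here usually means: take a log backup first, then shrink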
12/11/22
10/28/22
Just a sample script for refreshing indexes in the current database
-- To rebuild or reorganize just the "few" fragmented ones of all the indexes
-- in the current database...
-- And remember, this is just a sample of how you could do it.
-- See this as inspiration rather than the "best way"
DECLARE iterator CURSOR
FOR
SELECT
'ALTER INDEX [' + INDEX_NAME + '] ON [' + SCHEMA_NAME + '].[' + TABLE_NAME + '] ' +
CASE
WHEN avg_fragmentation_in_percent < 30 AND page_count > 200 THEN 'REORGANIZE'
ELSE 'REBUILD'
END stmt
FROM (
SELECT
c.name SCHEMA_NAME,
t.name TABLE_NAME,
i.name INDEX_NAME,
s.avg_fragmentation_in_percent,
s.page_count,
i.index_id
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) s
INNER JOIN sys.tables t ON s.object_id = t.object_id
INNER JOIN sys.indexes i ON s.object_id = i.object_id AND s.index_id = i.index_id
INNER JOIN sys.schemas c ON t.schema_id = c.schema_id
WHERE s.avg_fragmentation_in_percent >= 10 AND i.index_id > 0) m
ORDER BY m.index_id ASC
DECLARE
@stmt varchar(max)
OPEN iterator
FETCH NEXT FROM iterator INTO @stmt
WHILE @@FETCH_STATUS = 0
BEGIN
-- PRINT 'EXECUTING: ' + @stmt -- Just if you want some output
EXEC (@stmt)
FETCH NEXT FROM iterator INTO @stmt
END
CLOSE iterator
DEALLOCATE iterator
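If your edition supports online index rebuilds, they hurt the workload less; a hedged variant of the CASE expression above (assuming Enterprise or another edition with online rebuild):
CASE
WHEN avg_fragmentation_in_percent < 30 AND page_count > 200 THEN 'REORGANIZE'
ELSE 'REBUILD WITH (ONLINE = ON)'
END stmt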
9/23/22
Data Modelling, why is it important to understand logical and physical...
Coming from a 200K+ IT consultants organisation, and with at least 25 years of solving "performance issues" after "stuff went into production", I'd say it depends, but usually the projects lacked a data(base) architect during the architectural design and early design/implementation phases.
I usually say it's a database, not an objectbase. When we model data for a solution we do it in multiple layers. Usually it starts with business objects like Customers, Suppliers, Products, Orders, and then it comes down to classes that need "storage" via an entity model (yes, any ORM would do, but as long as EF is without "Core" I'm happy). This logical data model needs physical support, thus a database. Basically every logical data model object needs a storage object, at the lowest level a table, which gives us a one-to-one relation between entity and table.
And this is where it usually ends. I'm not saying this is right, only that this is where it usually ends up.
Back to my saying this is a database, not an objectbase. Databases have data storage objects (tables) and data access objects (procedures and views; I'm leaving out the functions here). The application wants something to feed the entities from the tables, and this is where we use views as proxies between the logical and physical model, as we probably want to remodel the data (this is what most people would call normalizing data; I use the terms normalize, generalize, and denormalize, and care less about which xNF we get. My goal is the most optimal physical data model for performance, data integrity, and size) to support the R in CRUD (views for -R--). Then add procedures for the C-UD in CRUD. The basic rule is one table and three procedures per entity: call the view by the entity name, and the procedures ins-, upd-, and delEntityName, and then add them as methods on the entities (EF supports this, EF Core doesn't!). And then add INSTEAD OF triggers on the views, launching the procedures, for developers not using the entity model.
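A minimal sketch of the pattern, with a hypothetical Customer entity (schema and column names are mine, just for illustration):
-- Storage object (physical model)
CREATE SCHEMA Tables;
GO
CREATE TABLE Tables.Customer
(
CustomerID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
Name nvarchar(255) NOT NULL
);
GO
-- Data access objects (logical model): the view covers the R in CRUD
CREATE VIEW dbo.Customer
AS
SELECT CustomerID, Name FROM Tables.Customer;
GO
-- ...and three procedures cover C, U, and D
CREATE PROCEDURE dbo.insCustomer @Name nvarchar(255), @CustomerID int OUTPUT
AS
BEGIN
INSERT INTO Tables.Customer(Name) VALUES (@Name);
SET @CustomerID = SCOPE_IDENTITY();
END
GO
CREATE PROCEDURE dbo.updCustomer @CustomerID int, @Name nvarchar(255)
AS
UPDATE Tables.Customer SET Name = @Name WHERE CustomerID = @CustomerID;
GO
CREATE PROCEDURE dbo.delCustomer @CustomerID int
AS
DELETE FROM Tables.Customer WHERE CustomerID = @CustomerID;
GO
-- INSTEAD OF trigger so direct DML against the view still lands in the table
-- (shown set-based for brevity; the full pattern would route through insCustomer)
CREATE TRIGGER dbo.trgCustomerInsert ON dbo.Customer
INSTEAD OF INSERT
AS
INSERT INTO Tables.Customer(Name)
SELECT Name FROM inserted;
GO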
So what is the problem with this approach? Developers cannot change entity models without understanding what happens with the database objects. So if they want to add attributes to an entity after a release, they need to change a table to add a column, change the view to handle the column, change three procedures, and then change the triggers. This would suck big time... To mitigate that problem the developer can create a supporting class for the minor release, think patch. In practical terms: create a new class with the keys and the additional attributes, get a table for that, and run CRUD against that table. Then, when the product reaches a major release, merge them together. But with this in mind, how often does a data model actually change after the release? I'd say not often, small changes, really seldom...
And finally, what are the benefits? The first thing people get in mind is performance (I always get "Premature Optimization" in my face every time I bring this up), and we get data integrity, both in terms of data security and data quality. With security we can make sure that only the right user can reach the right data, all the way down to row/column level, actually all the way down to every single data point we might have. But we also get higher quality in the data, as we have a chance of applying a "higher level of normalization": we can normalize, generalize, and denormalize data with the focus on data, not business objects.
As an example, take a Contacts table containing whatever is needed for sending postcards, covering everyone in the world. We would need attributes like this: ContactID, FirstName, MiddleName, LastName, Address, Zipcode, City, County, State, Country, Continent. See this as a conceptual model, it doesn't need to be perfect...
Let us estimate a logical row to something between 20 and 500 bytes depending on the strings. Say that the average string is 9 characters and we go by Unicode, giving us an average of 18 bytes per string, and that we have 10 strings and one key. The table needs to support at least 8.000.000.000 individuals and 2.000.000.000 companies, that is 10.000.000.000 rows. SQL Server has a storage unit called the data page that limits how many rows can be fetched through one IO operation. A data page is 8 kB and can store a single physical row of up to 8060 B.
To calculate the rows per page we simply divide 8060 bytes per page by the bytes per row, giving us rows per page. To get the size we divide rows per table by rows per page, giving us pages per table. And as every page is 8 kB we just multiply by 8 and get kB per table.
Legend: B = Bytes, kB = Kilobytes, Br = Bytes per Row, Bp = Bytes per Page, kBp = Kilobytes per Page, rp = Rows per Page, rt = Rows per Table, pt = Pages per Table, Bt = Bytes per Table, kBt = Kilobytes per Table
Formulas:
Bp / Br = rp (round down)
rt / rp = pt (round up)
pt * kBp = kBt
10.000.000.000 rows per table and 20 bytes per row: 403 rows per page, 24.813.896 pages per table, 193.859 MB in the table
10.000.000.000 rows per table and 100 bytes per row: 80 rows per page, 125.000.000 pages per table, 976.563 MB in the table
10.000.000.000 rows per table and 500 bytes per row: 16 rows per page, 625.000.000 pages per table, 4.882.813 MB in the table
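If you want to play with the numbers yourself, here is a quick sketch of the formulas in T-SQL (integer division does the rounding down; the MB figure comes out slightly truncated):
DECLARE @rt bigint = 10000000000, @Br int = 100
DECLARE @rp int = 8060 / @Br -- rows per page, rounded down
DECLARE @pt bigint = (@rt + @rp - 1) / @rp -- pages per table, rounded up
SELECT @rp AS rows_per_page, @pt AS pages_per_table, @pt * 8 / 1024 AS mb_in_table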
Breaking this table up into Contacts, Streets, Cities, and Regions:
Regions(RegionID, State, Country, Region)
Cities(CityID, City, County, RegionID)
Streets(StreetID, Street, Zipcode, CityID)
Contacts(ContactID, Names, StreetID, StreetNumber)
probably brings this down to 60% of the size.
This is normalization!
There are a lot of strings, so create a string table and assign StringIDs to the strings in the entity tables. Let us do some generalization! I add some rough estimates:
Strings(StringID, String), 5.000.000 unique strings, avg row 20 B, total of 97 MB
Regions(RegionID, StateStringID, CountryStringID, RegionStringID), 10.000 states, always 20 B per row, total of 200 kB
Cities(CityID, CityStringID, CountyStringID, RegionID), 250.000 cities, always 20 B per row, total of 5 MB
Streets(StreetID, StreetStringID, Zipcode, CityID), 25.000.000 streets, always 25 B per row, total of 607 MB
Contacts(ContactID, FirstNameStringID, MiddleNameStringID, LastNameStringID, StreetID, StreetNumber), 10.000.000.000 contacts, always 38 B per row, total of 368.514 MB
Adding this together gives us a model of 368 GB. Comparing 976 GB to 368 GB, the new model is less than 38% of the original model.
And then we have a really meshy model, so let us denormalize some…
Strings(StringID, String), this is the same as before, 97 MB.
Addresses(StreetID, StreetStringID, Zipcode, CityStringID, CountyStringID, StateStringID, CountryStringID, RegionStringID), always 42 B per row, total of 1023 MB.
Contacts(ContactID, FirstNameStringID, MiddleNameStringID, LastNameStringID, StreetID, StreetNumber), same as before, 368.514 MB.
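As DDL, the final model could look something like this (a sketch; the column types are my assumptions, the text above only names the columns):
CREATE TABLE dbo.Strings
(
StringID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
String nvarchar(255) NOT NULL UNIQUE
)
CREATE TABLE dbo.Addresses
(
StreetID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
StreetStringID int NOT NULL REFERENCES dbo.Strings(StringID),
Zipcode varchar(10) NOT NULL,
CityStringID int NOT NULL REFERENCES dbo.Strings(StringID),
CountyStringID int NOT NULL REFERENCES dbo.Strings(StringID),
StateStringID int NOT NULL REFERENCES dbo.Strings(StringID),
CountryStringID int NOT NULL REFERENCES dbo.Strings(StringID),
RegionStringID int NOT NULL REFERENCES dbo.Strings(StringID)
)
CREATE TABLE dbo.Contacts
(
-- bigint: 10.000.000.000 rows do not fit in an int key
ContactID bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
FirstNameStringID int NOT NULL REFERENCES dbo.Strings(StringID),
MiddleNameStringID int NULL REFERENCES dbo.Strings(StringID),
LastNameStringID int NOT NULL REFERENCES dbo.Strings(StringID),
StreetID int NOT NULL REFERENCES dbo.Addresses(StreetID),
StreetNumber nvarchar(15) NOT NULL
)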
This final model is probably somewhere in the 35-40% range of the original model, giving us approximately three times as many business objects in the same amount of RAM in cache on the SQL Server. And we will have a slower, linear growth in the database when we add content, instead of an exponential growth.
And then we have the views and procedures making the model work from the business layer.
Long reply, adding to your story. Hoping to get the developers to understand the importance of the difference between an entity and/or class in the business layer and a table in the database. There is no relation at all between them, as the business layer works with business objects, and in the database we work with data.
9/7/22
dbo.digits() - a sample In-line TVF using CTE
I found a question in one of the online forums where a user wanted to generate digits, like for a counter, but control how many digits and the range of allowed digits.
E.g.: I want 2 digits, and only the digits between 3 and 5, giving me a result like this.
digits
33
34
35
43
44
45
53
54
55
I came up with this solution...
CREATE FUNCTION dbo.digits(
@len int,
@min int,
@max int
)
RETURNS TABLE
AS
RETURN
WITH
digits (digit) AS (
SELECT '0' UNION ALL SELECT '1' UNION ALL SELECT '2' UNION ALL
SELECT '3' UNION ALL SELECT '4' UNION ALL SELECT '5' UNION ALL
SELECT '6' UNION ALL SELECT '7' UNION ALL SELECT '8' UNION ALL
SELECT '9'
),
selection AS (
SELECT
digit
FROM digits
WHERE
CAST(digit AS int) BETWEEN @min AND @max
),
result AS (
SELECT
0 AS position,
CAST(digit as varchar(max)) digit
FROM selection
UNION ALL
SELECT
r.position + 1,
CONCAT(s.digit,r.digit)
FROM result r
CROSS JOIN selection s
WHERE r.position < @len
)
SELECT
digit AS digits
FROM result
WHERE position = @len - 1
You then execute the TVF with this line:
SELECT * FROM dbo.digits(2, 3, 5) ORDER BY 1
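One thing to keep in mind: the recursive CTE is subject to the default MAXRECURSION limit of 100 levels, and the hint can't live inside the function itself. The recursion depth here is @len levels, so for a hypothetically large @len you would add the hint to the outer query:
SELECT * FROM dbo.digits(2, 3, 5) ORDER BY 1
OPTION (MAXRECURSION 0) -- 0 removes the recursion limit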
7/24/22
The script (the identifiers are Swedish: Tabeller = tables, Adresser = addresses, Namn = name, Förnamn = first name, Efternamn = last name, Gata = street, Gatunummer = street number, Postnummer = postal code, Ort = city, skapaNamn = createName, hämtaNamn = getName, uppdateraAdress = updateAddress)
USE minaAdresser;
GO
CREATE SCHEMA Tabeller;
GO
CREATE SCHEMA Adresser;
GO
CREATE TABLE Tabeller.Namn
(
NamnID int IDENTITY(1,1) NOT NULL
CONSTRAINT pk_Tabeller_Namn
PRIMARY KEY CLUSTERED,
Namn nvarchar(255) NOT NULL
CONSTRAINT uq_Tabeller_Namn_Namn
UNIQUE
)
CREATE TABLE Tabeller.Adresser_Adress
(
AdressID int IDENTITY(1,1) NOT NULL
CONSTRAINT pk_Adresser_Adress
PRIMARY KEY CLUSTERED,
Förnamn int NOT NULL
CONSTRAINT fk_Adresser_Adress_Förnamn
FOREIGN KEY REFERENCES Tabeller.Namn(NamnID),
Efternamn int NOT NULL
CONSTRAINT fk_Adresser_Adress_Efternamn
FOREIGN KEY REFERENCES Tabeller.Namn(NamnID),
Gata int NOT NULL
CONSTRAINT fk_Adresser_Adress_Gata
FOREIGN KEY REFERENCES Tabeller.Namn(NamnID),
Gatunummer nchar(15) NOT NULL
CONSTRAINT df_Adresser_Adress_Gatunummer
DEFAULT (''),
Postnummer char(6) NOT NULL
CONSTRAINT ck_Adresser_Adress_Postnummer
CHECK (Postnummer LIKE '[1-9][0-9][0-9] [0-9][0-9]'),
Ort int NOT NULL
CONSTRAINT fk_Adresser_Adress_Ort
FOREIGN KEY REFERENCES Tabeller.Namn(NamnID)
)
GO
CREATE VIEW Adresser.Adress
AS
SELECT
a.AdressID,
fn.Namn AS Förnamn,
en.Namn AS Efternamn,
g.Namn AS Gata,
a.Gatunummer,
a.Postnummer,
o.Namn AS Ort
FROM Tabeller.Adresser_Adress a
INNER JOIN Tabeller.Namn fn ON a.Förnamn = fn.NamnID
INNER JOIN Tabeller.Namn en ON a.Efternamn = en.NamnID
INNER JOIN Tabeller.Namn g ON a.Gata = g.NamnID
INNER JOIN Tabeller.Namn o ON a.Ort = o.NamnID
GO
CREATE VIEW Adresser.Utskrift
AS
SELECT
a.AdressID,
fn.Namn + ' ' + en.Namn AS Mottagare,
g.Namn + ' ' + a.Gatunummer AS Postadress,
a.Postnummer + ' ' + UPPER(o.Namn) AS Postort
FROM Tabeller.Adresser_Adress a
INNER JOIN Tabeller.Namn fn ON a.Förnamn = fn.NamnID
INNER JOIN Tabeller.Namn en ON a.Efternamn = en.NamnID
INNER JOIN Tabeller.Namn g ON a.Gata = g.NamnID
INNER JOIN Tabeller.Namn o ON a.Ort = o.NamnID
GO
CREATE FUNCTION Adresser.hämtaNamn(@NamnID int)
RETURNS nvarchar(255)
AS
BEGIN
RETURN(SELECT Namn FROM Tabeller.Namn WHERE NamnID = @NamnID)
END
GO
CREATE FUNCTION Adresser.hämtaNamnID(@Namn nvarchar(255))
RETURNS int
AS
BEGIN
RETURN(SELECT NamnID FROM Tabeller.Namn WHERE Namn = @Namn)
END
GO
CREATE PROCEDURE Adresser.skapaNamn @Namn nvarchar(255), @NamnID int OUTPUT
AS
BEGIN
-- Return code: +10 if the name was NULL (replaced by ''), +1 if a new row was inserted
DECLARE @Retur int = 0
IF @Namn IS NULL
BEGIN
SET @Namn = ''
SET @Retur += 10
END
SET @NamnID = Adresser.hämtaNamnID(@Namn)
IF @NamnID IS NULL
BEGIN
INSERT INTO Tabeller.Namn(Namn) VALUES(@Namn)
SET @NamnID = SCOPE_IDENTITY()
SET @Retur += 1
END
RETURN @Retur
END
GO
CREATE PROCEDURE Adresser.uppdateraAdress
@Förnamn nvarchar(255),
@Efternamn nvarchar(255),
@Gata nvarchar(255),
@Gatunummer nvarchar(255),
@Postnummer nvarchar(255),
@Ort nvarchar(255),
@AdressID int OUTPUT,
@updateAdressID int = -1
AS
BEGIN
DECLARE
@Retur int = 0,
@Exec int = 0,
@FörnamnID int,
@EfternamnID int,
@GataID int,
@OrtID int
-- Negative return codes signal errors; positive digits pack the skapaNamn results per name part
IF @Postnummer NOT LIKE '[1-9][0-9][0-9] [0-9][0-9]' SET @Retur = -3
ELSE IF @Gatunummer IS NULL SET @Retur = -2
ELSE
BEGIN
EXEC @Exec = Adresser.skapaNamn @Förnamn, @FörnamnID OUTPUT
IF @exec != 0
BEGIN
SET @Retur += @Exec * 10
SET @Exec = 0
END
EXEC @Exec = Adresser.skapaNamn @Efternamn, @EfternamnID OUTPUT
IF @exec != 0
BEGIN
SET @Retur += @Exec * 1000
SET @Exec = 0
END
EXEC @Exec = Adresser.skapaNamn @Gata, @GataID OUTPUT
IF @exec != 0
BEGIN
SET @Retur += @Exec * 100000
SET @Exec = 0
END
EXEC @Exec = Adresser.skapaNamn @Ort, @OrtID OUTPUT
IF @exec != 0
BEGIN
SET @Retur += @Exec * 10000000
SET @Exec = 0
END
END
IF @Retur >= 0
BEGIN
IF @updateAdressID = -1
BEGIN
SELECT @AdressID = AdressID FROM Tabeller.Adresser_Adress
WHERE
Förnamn = @FörnamnID
AND Efternamn = @EfternamnID
AND Gata = @GataID
AND Gatunummer = @Gatunummer
AND Postnummer = @Postnummer
AND Ort = @OrtID
IF @AdressID IS NULL
BEGIN
INSERT INTO Tabeller.Adresser_Adress(Förnamn, Efternamn, Gata, Gatunummer, Postnummer, Ort)
VALUES(@FörnamnID, @EfternamnID, @GataID, @Gatunummer, @Postnummer, @OrtID)
SET @AdressID = SCOPE_IDENTITY()
SET @Retur += 1
END
END
ELSE
BEGIN
DECLARE @updated TABLE (AdressID int)
UPDATE Tabeller.Adresser_Adress
SET
Förnamn = @FörnamnID,
Efternamn = @EfternamnID,
Gata = @GataID,
Gatunummer = @Gatunummer,
Postnummer = @Postnummer,
Ort = @OrtID
OUTPUT inserted.AdressID INTO @updated
WHERE AdressID = @updateAdressID
SELECT @AdressID = AdressID FROM @updated WHERE AdressID = @updateAdressID
SET @Retur += 2
END
END
RETURN @Retur
END
GO
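A hypothetical test drive of the script (the sample values are mine):
DECLARE @id int, @rc int
EXEC @rc = Adresser.uppdateraAdress
@Förnamn = N'Anna',
@Efternamn = N'Svensson',
@Gata = N'Storgatan',
@Gatunummer = N'12',
@Postnummer = '123 45',
@Ort = N'Stockholm',
@AdressID = @id OUTPUT
SELECT @rc AS ReturnCode, @id AS AdressID
SELECT * FROM Adresser.Utskrift
GO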
USE master
GO
7/4/22
Really, is the backup running?
Yes, and now you launched all those backup statements, and you wonder when they are done...
This simple Query might help you a bit.
SELECT
session_id,
request_id,
command,
d.name as database_name,
start_time,
DATEDIFF(SECOND, start_time, GETDATE()) AS elapsed_time_in_seconds,
percent_complete AS estimated_percent_complete,
DATEADD(SECOND, ROUND(DATEDIFF(SECOND, start_time, GETDATE()) / NULLIF(percent_complete, 0) * 100.0, 0), start_time) AS estimated_finish_time -- NULLIF avoids a divide-by-zero right after start
FROM sys.dm_exec_requests r
INNER JOIN sys.databases d ON r.database_id = d.database_id
WHERE command LIKE '%BACKUP%' OR command LIKE '%RESTORE%'
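And if you want something to watch while testing, a hypothetical backup with progress messages (the database name and path are just examples):
BACKUP DATABASE mydb TO DISK = N'C:\Backup\mydb.bak' WITH STATS = 10 -- prints a message every 10 percent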
5/24/22
Finally, just minutes ago, CTP2 of Microsoft SQL Server 2022 launched. Get it now and try it!
It is with the greatest satisfaction that I can proudly introduce Microsoft SQL Server 2022, CTP2. Read more about the new version here: What's new in SQL Server 2022 - SQL Server | Microsoft Docs.
And download the Public Preview here: SQL Server 2022 Preview Trial (microsoft.com)
If you're not registered for msbuild and watching the news right now, jump over here: https://mybuild.microsoft.com/en-US/home.
I'm busy watching all the announcements at the moment, more will come.