It’s funny that I procrastinated so long about re-upholstering my couch and den chair, since the actual work took only about 12 hours. I have a couch with 6 cushions that were in sad shape – the seams were beginning to split and the seating surface had developed holes where the fabric simply fell apart. I repaired what I could of the original seats, just to keep it looking somewhat acceptable, but eventually I realized that the fabric was so worn out that it simply could not be repaired. The foam cushions had deteriorated to the point where anyone who sat on the couch felt like it was attempting to absorb them into the lower reaches of the seating area.
I decided that we either had to completely replace the couch or at least replace the cushions. I waited until there was a sale on foam sheets (foam is amazingly expensive), then purchased three 72”x36”x3” foam sheets. I measured the cushions and then marked cutting lines on the foam to match the sizes of the existing cushions. My husband used a kitchen knife to cut the foam along the lines.
I purchased about 15 yards of denim (using a 40% off coupon at JoAnn’s), then let it age for about 3 years in my stash until I finally decided that the couch looked so bad that it was embarrassing to have guests see it… I then purchased enough zipper by the yard to cover all of the cushions.
Cutting out the pieces was fairly simple: I put the existing cushion on the fabric yardage and wrapped the fabric around the cushion. I then folded over an appropriate amount for seam allowance / zipper space and cut to that size. The width was cut about 1/2” wider on each side of the cushion. I cut the side gussets by measuring the thickness and width of the cushion.
I serged around every piece I cut, then proceeded to assemble them. It was far easier than I thought it would be, since the zipper is attached first: attach it to one end of the fabric strip, right sides together, then fold over and topstitch the fabric to the edge of the zipper. Next, place the fabric from the other half of the main piece onto the zipper, right sides together, turn right side out, and topstitch along the second side of the zipper. On each end of the zipper, bar tack 3 times (once at the edge of the fabric, once about 1/4” in, and again just over 1/2” in) so the zipper is less likely to open up from the wrong end.
You now have a tube of fabric with a zipper in it. Pin a side gusset to one end of the fabric tube and sew around it. Before you repeat for the second side, be sure to open the zipper at least a few inches so that the cover can be turned right side out. Then sew the second gusset and turn right side out. Stuff with the new foam and then with the old foam (which is so battered that it easily crushes to less than half its original 6” size, thus filling the cushion to the originally intended size).
Place on couch. Done!
Yes, it really is that simple, so don’t procrastinate.
Monday, July 16, 2012
Thursday, April 19, 2012
SSIS - Speeding Up Package Load Time
Your Package Will Load...Someday
While working with SSIS, I was often frustrated by how long a package takes to load for editing. Frequently, I was simply interested in opening SSIS packages to see what they did and had no need to actually edit them. Sadly, if you try to open an SSIS package on a machine that either does not contain all of the databases referenced in the package or cannot connect to those databases, you often have to wait an incredible length of time for the package to open before you even have a chance to change the connections to something valid. I discovered a small trick which cures that dilemma: open the SSIS package in a text editor (the package content is actually XML), make some minor edits, save, then finally open the SSIS package in BIDS.
Delaying Package Validation Until Later
Open your package in a text editor and search for all instances of
"DelayValidation">0</DTS:
Replace those with
"DelayValidation">1</DTS:
This simple change will prevent BIDS from attempting to connect to and validate every object before allowing you to view the content of the SSIS package. It will still attempt to validate an object when you click on that object to edit it, but at least you won't have to wait for it to time out on its attempts to validate every item.
Correct Connection Strings
Open your package in a text editor and search for all instances of
<DTS:Property DTS:Name="ConnectionString">
Check the value for Data Source and Initial Catalog – are these correct for the environment you’re working in? Often the package was created with connection strings specific to the server it was built on, and that server either no longer exists or is no longer accessible. By correcting the connection string information before opening the package in BIDS, you can speed up the package load time and avoid the flood of validation errors that goes along with incorrect connection information.*
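A typical ConnectionString value looks something like the following (the server, database, and provider names here are placeholders, not values from any particular package):
Data Source=MyServer;Initial Catalog=MyDatabase;Provider=SQLNCLI10.1;Integrated Security=SSPI;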
=======
* Yes, I know there are some SSIS tricks for handling different connection strings, such as using a configuration file to hold the connection information; however, many SSIS packages that people inherit do not have configuration files for the connection information.
Wednesday, February 29, 2012
Adventures in SSIS: Data Migration
After escaping from technical support to the “greener fields” of Product Engineering, I thought that I would finally be in my element and have completely stress-free workdays.
So untrue.
So painfully untrue.
The new position offered enough stress to make even the most stable person ready to fling primary-colored, cartoon-like stuffed birds at an equally cartoon-like stuffed pig (who, for some unexplained reason, is lime green rather than pink, and seems to be just a head). Not that I do that frequently, but these poor birds have seen better days, as their ruffled feathers can attest. I suppose I could try to take my stress out on other things, but HR gets a wee bit upset if I throw chairs or coworkers (drat). Fortunately, they don’t seem to mind the occasional Angry Bird™ flying through the cube farm, so I fling away.
What is the current stressor?
SSIS.
I must admit that up until now I haven’t touched SSIS at all. I have plenty of experience working with DTS packages, but none with SSIS.
Until now.
There is no easy DTS-to-SSIS knowledge conversion. They’re completely different animals. They may do similar things, but nowhere near in the same manner. Books Online offers some information, but if you don’t already know SSIS, the documentation might as well be in Klingon, since it assumes you fully understand everything and are just using Books Online for a quick definition check.
Great. And it’s due by the end of February. Which, by the way, was yesterday.
Thank GOD it’s a leap year; otherwise I would have had to learn invisibility, instantly.
Some sections of the package I was able to copy from other migration packages I found in our source control (it’s Tortoise Subversion, BTW), but the rest had to be created by me, and quickly.
Some things I could figure out: get data from Source, compare to Destination using a Lookup, then insert the unmatched data into Destination. Works great unless you have some wiseguy who customizes the data and causes a primary key violation between Source and Destination. Simply inserting a new row without keeping the key value didn’t work because the Destination table does not have the identity property (aka “autonumber”) on the key. I searched everywhere for ideas—checking “SSIS Upsert” and equivalents in Bing and Google—to see if anyone had written code that could help me. Most articles assumed that the destination key was an identity column.
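For reference, the set-based equivalent of that lookup-and-insert pattern looks roughly like this (table and column names here are invented for illustration):
insert into Destination (KeyID, CustomerName)
select s.KeyID, s.CustomerName
from Source s
where not exists (select 1 from Destination d where d.KeyID = s.KeyID)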
Thankfully, I had the #SQLHelp hashtag on Twitter and received responses from some awesome people* who were able to provide the hints and guidance I needed to plow through this disaster waiting to happen. The saving tweet (thanks @EricWisdahl) contained a link to an extremely helpful article by Phil Brammer called “Generating Surrogate Keys”. It was exactly what I needed.
Well, mostly….
SSIS is extremely fussy about certain things—C# coding is case sensitive; datatypes must match exactly; math on certain numbers mysteriously changes the datatype. Things like that.
The issue I had was that the Source table uses int for its KeyID, and the Destination table uses tinyint. Most searches for an equivalent in SSIS won’t tell you the full story… but I finally found it.
First, several sources tell you that
tinyint is equivalent to DT_UI1 (single-byte unsigned integer),
but if you check the list of datatypes for variables, you’ll note that it’s missing (or, if not missing, named something that is NOT intuitively equivalent. What does DT_UI1 mean, anyway?). After many false starts (including learning that the C# scripting is painfully case sensitive), I discovered that
the Byte data type works for tinyint variables.
The other thing I found was that in SSIS scripts, when you add something to a value, it automatically forces a conversion to the int datatype, and it’s very difficult to undo that. What I did was use Eric’s script unchanged, except that to the final statement
Row.ClientKey = NextKey;
I added a conversion to byte:
Row.ClientKey = (byte)NextKey;
and it worked.
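For context, here is roughly where that cast lives in the script component (a minimal sketch, not the exact script: the method signature is the one SSIS generates for a script component, but the way NextKey is declared and seeded depends on how your package tracks the current maximum key):

int NextKey; // seeded before rows arrive, e.g. from the destination's current max key

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    NextKey = NextKey + 1;          // addition in C# yields an int here
    Row.ClientKey = (byte)NextKey;  // cast back down to match the tinyint column
}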
Now stress is greatly reduced and my Angry Birds and Green Pig are happily noshing on my cereal.
* HUGS! to @onpnt and @EricWisdahl for being “first responders”
Monday, January 16, 2012
January #Meme15 Assignment
<*Cough* *Cough*> It’s awfully dusty here, isn’t it? Yeah, I’ve been neglecting my blog. Terribly sorry about that…. Well, nothing like trying to get things going again at the start of a new year. Hopefully this year I’ll be more consistent about posting (and not quit halfway through the year).
The #Meme15 is a meme started by a group of people in the #SQLFamily who wanted to discuss how they use social networks to enhance their careers and professional development. The assignment for this month was posted on Jason Strate’s blog – talk about Twitter, answering “Why should average Jane or Joe professional consider using Twitter?” and “What benefit have you seen in your career because of Twitter?”
Let’s get started.
Why should average Jane or Joe professional consider using Twitter?
That’s exactly what I wondered when I first heard about Twitter – why bother slogging through countless random postings about useless things written by strangers who have too much spare time on their hands? I really don’t need to know that you’re taking your goat for a walk or that you ate sushi last night. Besides, I likely already saw your post on Facebook, LinkedIn, and Google+ about exactly the same thing. Sounds like a major time waster, right?

If that’s all there was to it, then it would probably have gone the way of the 8-track tape within a few months. But thankfully, following people on Twitter can offer far greater benefits, as I discovered at the 2009 SQL PASS Summit conference. About halfway through the first day, I found out that the majority of the SQL people I really admired were all using Twitter as their main means of keeping in touch with other SQL professionals. And they weren’t tweeting useless stuff – they were posting announcements of new blog posts, links to articles about SQL, free online training, and other SQL-related items.
For SQL Server professionals, Twitter definitely has benefits – just follow all of the awesome SQL gurus and the #SQLHelp and #SQLPass hashtags. For other professionals, it may or may not be helpful – it all depends on whether a significant number of people are tweeting about their profession.
What benefit have you seen in your career because of Twitter?
I’ve used the #SQLHelp hashtag several times to ask SQL-related questions and have received answers so quickly from SQL experts that it felt like they were right there with me, helping me along.

From Twitter, I’ve also been able to find out about free online webinars and more SQL articles and blogs than I have time to read in a day. Without Twitter, it would likely have taken me far longer to find the same information – or I would’ve completely missed seeing it at all.
Finally, the most important benefit of chatting on Twitter with all of these SQL professionals is that when I attend a SQL conference these people actually know me by name – which has made networking so much easier.
Friday, July 29, 2011
Are Your Database Statistics Fresh?
SQL Server maintains a vast amount of data – called statistics – about the distribution of data in each table and index in a database. Statistics become stale if they have not been updated recently, or if a large number of changes have occurred within the database. As the statistics become less accurate, query run times can increase dramatically.
In Production systems, statistics should be updated during the usual maintenance window to ensure that the metadata is fresh.
To see how fresh the statistics are for one object, run:
DBCC SHOW_STATISTICS ('CCAuditSessionType', CCAuditSessionType_PK)
If you need to see the statistics for every table in the current database, run this instead:
select STATS_DATE(o.object_id, s.stats_id) as StatsDate,
       o.name as TableName,
       s.name as StatsName,
       auto_created, user_created, no_recompute
from sys.stats s
join sys.objects o on s.object_id = o.object_id
where o.type = 'U'
To update statistics that are out of date, execute the command
exec sp_updatestats
on each database on the server that needs to have its statistics updated.
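If only one table needs fresh statistics, a per-table alternative (reusing the sample table name from above; FULLSCAN reads every row, so it is slower but more accurate than sampling) is:
UPDATE STATISTICS CCAuditSessionType WITH FULLSCAN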
Use the following query to generate a script that updates the statistics on all databases:
declare @db sysname
, @dbID int
, @sql varchar(max)

-- collect the user databases (database_id > 4 skips the system databases)
create table #t
(DbName sysname, databaseID int)
Insert #t (DbName, databaseID)
select [name], database_id
from sys.databases
where database_id > 4

-- walk the list and print one sp_updatestats batch per database
Select @dbID = MIN(databaseID)
from #t
While @dbID is not NULL
BEGIN
select @db=DbName
from #t
where databaseID=@dbID
set @sql = 'Use [' + @db + ']' + CHAR(13) + 'go ' + CHAR(13)
set @sql = @sql + 'exec sp_updatestats' + CHAR(13) + 'go '
PRINT @sql
Select @dbID=min(databaseID)
from #t
where databaseID>@dbID
END
drop table #t
Thursday, July 14, 2011
Adventures with Denali CTP3–Part 1
I usually only realize how slow downloads can be when I’m eager to begin working with the item being downloaded. The hour it took to download the AdventureWorks sample databases felt far longer than it actually was.
One thing that surprised me was that the downloads for the databases contained only the MDF (data) file – the log file was not included. After fiddling unsuccessfully with attaching it using the UI in Management Studio (no, I didn’t think of deleting the log file name from the file list in the UI – I’d assumed it was required and didn’t realize that omitting the log file makes SQL Server treat it as an ATTACH_REBUILD_LOG command), I finally decided that it would be sensible to actually read the instructions. TechNet provided a very simple query to attach the database:
CREATE DATABASE AdventureWorks2008R2
ON (FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorks2008R2_Data.mdf')
FOR ATTACH_REBUILD_LOG;
Worked like a charm.
I modified the query and attached the AdventureWorksDWDenali database in a similar manner then ran a few quick SELECT queries on various tables to see what they contained. I was pleasantly surprised to see that the OLAP database’s dimDate table contained English, French and Spanish day and month names.
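For example, a quick query along these lines shows the translated names (column names here follow the AdventureWorksDW schema from earlier releases, so verify them against your copy):
select top 7 EnglishDayNameOfWeek, FrenchDayNameOfWeek, SpanishDayNameOfWeek
from dbo.DimDate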
I then launched the BI Development Studio and opened the AdventureWorksDWMultidimensionalDenali project (provided in samples pages on Codeplex). I verified the connection information in the datasource and successfully deployed the cube.
If everyone knew how easy this was, I’d probably be out of a job.
Query to Pull Database File Information
This query will list the name, size and location of all files for all databases. This is handy for checking and documenting database server configuration to confirm whether the server follows our recommended best practices.
I’m sure that there are other ways to pull this data, however, in some environments your permissions may restrict you from using any method other than this to pull the data.
set nocount on
declare @sql varchar(max), @sql2 varchar(max), @name sysname

-- work table: one row per database file
if object_id('tempdb..#t') is not null drop table #t
create table #t (DbName sysname, LogicalName varchar(30), FileName varchar(100), sizeMB int, UsedSpaceMB int, growthMB int, is_percent_growth bit)

-- shared select list; each database supplies its own sys.database_files
set @sql = ' substring(name,1,30) as LogicalName, substring(physical_name,1,75) as FileName,'
set @sql = @sql + ' size * 8192./1024/1024 as SizeMB, sum(a.total_pages * 8192./1024/1024) as UsedSpaceMB, '
set @sql = @sql + ' growth * 8192./1024/1024 as growthMB, is_percent_growth from '

-- loop over the user databases (database_id > 4 skips the system databases)
declare c cursor for
select name from sys.databases where database_id > 4
open c
fetch next from c into @name
while @@fetch_status = 0
begin
set @sql2 = @sql + '[' + @name + '].sys.database_files df left join '
set @sql2 = @sql2 + '[' + @name + '].sys.allocation_units a on df.data_space_id=a.data_space_id left join '
set @sql2 = @sql2 + '[' + @name + '].sys.partitions p on p.partition_id = a.container_id '
set @sql2 = @sql2 + 'group by df.name, df.physical_name, growth, is_percent_growth, df.size'
-- the try/catch lets the loop continue past databases that are offline or inaccessible
begin try exec ('insert #t select ''' + @name + ''' as DbName, ' + @sql2 ) end try begin catch end catch
fetch next from c into @name
end
close c
deallocate c
select * from #t