
Thursday, May 12, 2022

3 Cloud Seeking Awesome Azure People

My company is hiring!!

3Cloud is a technology services firm helping clients transform their business through the power of the cloud. We leverage Microsoft Azure to help clients speed up innovation, operate more efficiently and be more responsive to their client’s needs. As a Microsoft partner, we specialize in Azure migration, cloud-scale custom application development, Internet of Things, analytics, and DevOps achieved through Azure-enabled infrastructure automation. 3Cloud is headquartered in Chicago with additional offices in Dallas and Tampa. 
Get your Azure on and join an awesome team with people like me. 


These are remote roles based in any US city.


Use this link to see which roles are open for hiring: https://grnh.se/a6fd7f562us





Friday, March 25, 2022

Converting Views to Tables in SQL


Every once in a while, I have a data conversion project in which a client wants to move data to another server and change a number of legacy views into tables. This is most common when the views use data from linked servers and we have to migrate everything to Azure SQL Database, which doesn't allow cross-database queries. 

One could manually write the CREATE TABLE scripts for each, but that's a lot of work. We can leverage the system tables to help us create the scripts from the views. 

Here's my rendition of the script: 

SELECT CONCAT('CREATE TABLE [', s.name, '].[', v.name, ']
(', STRING_AGG(CONCAT(
    '[', c.name, '] ', t.name
    , CASE WHEN t.name IN ('varchar','nvarchar','varbinary') AND c.max_length = -1
           THEN '(max)'
           WHEN t.name IN ('nvarchar','nchar')
           THEN CONCAT('(', CONVERT(VARCHAR(50), c.max_length / 2), ')') -- max_length is in bytes
           WHEN t.name IN ('varbinary','varchar','char','binary')
           THEN CONCAT('(', CONVERT(VARCHAR(50), c.max_length), ')')
           WHEN t.name IN ('decimal','numeric')
           THEN CONCAT('(', CONVERT(VARCHAR(50), c.precision), ',', CONVERT(VARCHAR(50), c.scale), ')')
           WHEN t.name = 'datetime2'
           THEN CONCAT('(', CONVERT(VARCHAR(50), c.scale), ')')
           ELSE ''
      END
    , ' '
    , CASE WHEN c.is_nullable = 1 THEN 'NULL' ELSE 'NOT NULL' END
    ), ',') WITHIN GROUP (ORDER BY c.column_id), ')
GO ') AS tablename
FROM sys.all_columns c
JOIN sys.types t ON c.user_type_id = t.user_type_id -- join on user_type_id to avoid duplicate rows for types like sysname
JOIN sys.views v ON c.object_id = v.object_id
JOIN sys.schemas s ON v.schema_id = s.schema_id
WHERE v.name IN ('myView1','myView2') -- put your view names here
GROUP BY s.name, v.name
ORDER BY s.name, v.name;

The advantage of my script is that it aggregates all of the text for each table into a single row in the results. The STRING_AGG gracefully handles skipping the comma after the last column, so nothing has to be deleted from the script when copying it to a query window to run, and the CREATE TABLE statement is formatted pretty close to the way most people format their table scripts. I also included the square brackets around the schema name, table name, and column name in case the legacy views used reserved words in any of those places.  
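For illustration, if "myView1" had a few simple columns, the generated output would look something like this (column names invented for the example):

CREATE TABLE [dbo].[myView1]
([CustomerId] int NOT NULL,[CustomerName] varchar(100) NULL,[Balance] decimal(18,2) NULL)
GO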

I'm sure this will come in handy in the future since this is the second project where I've needed this script and I forgot to save it last time. 


Friday, November 5, 2021

Data Community Fun: A visit with Paul Turley

 


A few weeks ago, Paul Turley, his wife Shirley, their two spunky dogs, and their anti-social cat pulled up at the "Letourneau Resort" for a 3-day visit. We spent some time working together on the patio while his dogs raced around my property. The cat hid in the Data Bus and only went out when forced to do so. We shared several meals and spent many hours working at our computers (he visited Thursday through Saturday, and we both had to work Thursday and Friday). 



"Fancy" patio set-up. Just the computer, but we do have internet out there and the weather was perfect for sitting outside enjoying the fresh air while also getting a lot of work done. 

View from patio when you look up from computer. :) 

On the last day, Saturday, my boss at 3Cloud, Kathi Vick, and her husband joined us. Paul has a blog called "Data On The Road", so he interviewed both of us before he got back on the road heading eastward. 


Guy loaned Paul his "Director's clapper" - it was the award from the Sakuracon 2005 AMV competition and was signed by a bunch of the convention guests - for the filming. The interviews took place on my patio couch. The clapper was definitely a fun way to start the videos.


I got to put the "Arizona" sticker on their "Places We've Been" map attached to the side of the Data Bus.

In early 2020, Paul and I had planned on being co-presenters at the Phoenix SQL Saturday, so I made cloud shirts for both of us so we'd match. Unfortunately, COVID came along and the Phoenix SQL Saturday never happened. 

For the interview with me, Paul and I donned our matching shirts to demonstrate how cool it would have been if we had presented with matching cloud shirts. They look pretty spiffy, eh? And the cloud earrings are of course a necessary accessory for any cloud data professional!


The above is a screenshot of the video posted to Paul's blog. I recommend you go see the video on his page, and also check out the other interviews he did with a number of other wonderful people in the data community. He also has a schedule posted telling you where he'll be and when. If he's coming to your area, reach out to him and say, "hi". 
 
I can't wait to see the Data Bus again when it next passes through the Phoenix area. 

Thursday, June 24, 2021

Moving Azure SQL Database from One Subscription to Another

Multiple Environments

Many companies like to separate Production from Development / UAT environments. I have frequently seen Dev / UAT deployed to one subscription, and Production to a different subscription. This works well for the Finance team because the billing for each environment is clearly separated. However, having the environments in different subscriptions inserts a speed bump in a common practice among DBAs - restoring a copy of the Production database in the UAT and Dev environments to ensure that code is synchronized across all environments. 

Azure SQL Database Restore Database 



In Azure SQL Database, when you select "Restore", your options are limited to restoring to the same server. Notice that the options for changing the subscription, resource group, and server are greyed out. Only the database name can be changed. 

If you need to restore a copy of your Azure SQL database on the same server so that you can copy an accidentally deleted table back into your database, then this works fine. 

The restore operation will not allow restoring over the existing database; however, you can change the name of a database at any time. 

Rename Azure SQL Database

In SQL Server Management Studio, run the following command on your Azure SQL server: 

ALTER DATABASE OriginalDatabaseName MODIFY NAME = NewDatabaseName;
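Since the restore can't overwrite the existing database, a same-server refresh usually ends up as a rename swap. A sketch, with hypothetical database names:

-- Restore the point-in-time copy under a new name (e.g. MyDb_Restored), then:
ALTER DATABASE MyDb MODIFY NAME = MyDb_Old;
ALTER DATABASE MyDb_Restored MODIFY NAME = MyDb;
-- Once the new copy checks out:
-- DROP DATABASE MyDb_Old;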

Azure SQL Database Copy Database



The next option would be to COPY the database. This option looks promising because it allows the database to be copied to another server, and if a different Azure SQL server does not exist, a new one may be created without having to leave the Copy dialog and create one separately. 



Although the New Server option looks promising, it does not provide the desired options: changing the subscription and resource group. 

All is not lost; a solution exists. Since an Azure SQL server can move from one subscription to another, the database can be copied to a Migration server, moved to the other subscription, then copied onto the existing server to refresh the existing database. 

It's Easier than It Sounds.

You must have at least Contributor rights in both subscriptions to successfully perform this operation.
  • Copy the Azure SQL Database to a new migration Azure SQL server using the "Copy database" blade as shown above. This server will exist only for the duration of the database transfer and can be deleted immediately following the completion of the process. 
  • Go to the Migration Azure SQL Server's Overview page. 
 

  • Click on "Move" and select "Move to another subscription" in the drop-down list.

In the above example, the user does not have access to move resources to the selected resource group. 
  • With appropriate permissions, the next step is validation of resources to move. 
 


(The above image is linked to the image in the Microsoft Docs article "Move resources to a new subscription" because my account doesn't have rights to perform this action, so I cannot get a good screenshot. The validation dialog should show only the Migration Azure SQL server in this step.) 

  • Upon completion of Validation, the Review screen pops up.


  • Confirm Source and Target subscriptions, resource groups, and number of resources to move then check the box and click "Move". 
  • Azure will pop up a notification when the move has completed.
  • Go to the subscription and resource group where the Migration server has moved.
  • Go to the overview page of your Azure SQL Database on the Migration server.
  • Click on "Copy"
  • Select your UAT or Dev Azure SQL Server and set the database name. 
  • Click on "Review + Create"
  • Click on "Create" to complete the process
Upon successful completion of the deployment, the migration server may be deleted. 




Monday, June 21, 2021

Your Synapse Moment: Adding Synapse Workspace User

Just a quick script for Azure Synapse. Let's say your Synapse workspace is named "dev-synapse-workspace", and you've already set up the Managed Identity.


To allow Synapse to actually interact with your SQL pool user databases, run the following script in each database (put your Synapse workspace name in place of the example name):

CREATE USER [dev-synapse-workspace] FROM EXTERNAL PROVIDER;

exec sp_addrolemember 'db_owner', 'dev-synapse-workspace';
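Since sp_addrolemember is deprecated in newer versions, the same role grant can also be written with ALTER ROLE (same example workspace name):

ALTER ROLE db_owner ADD MEMBER [dev-synapse-workspace];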

After that, your pipelines should be able to easily access your Synapse SQL Pools. 

This same code works in an Azure SQL database if you need to access one from Azure Synapse. 

Happy coding!

Monday, April 8, 2019

I'm Speaking at Phoenix SQL Saturday!

Just a quick post to let you know that I'll be speaking at the Phoenix SQL Saturday on May 4, 2019.

SQLSaturday is a free training event for Microsoft Data Platform professionals and those wanting to learn about SQL Server, Business Intelligence and Analytics. This event will be held on May 04 2019 at Chandler-Gilbert Community College Pecos Campus, 2626 East Pecos Road, Chandler, Arizona, 85225, United States

Hope to see you there!!





Thursday, March 1, 2018

SQL Saturday Phoenix - 17 March 2018


Just wanted to say that I'll be speaking at SQL Saturday Phoenix on 17 March 2018 on "Power BI: The Magic of M." My presentation will be in the first set of sessions (10 AM), so be sure to arrive early! I'll be debuting my super special (yellow & black) "Power BI" dress, too.

The abstract is:

Power BI: The Magic of M

As the popularity of Power BI increases, knowledge of M, the language used in Power Query and queries in Power BI, is becoming more necessary. Although the Query Editor in Power BI generates most of the M needed for basic transformations, it's just the beginning of its capabilities. In this session we will look under the hood at the M query and demonstrate some of the magic of this powerful language.


SQL Saturday Phoenix will be held at Grand Canyon University, 3300 West Camelback Road, Phoenix, Arizona, 85017, United States

Thursday, June 9, 2016

Better SQL Syntax to Test for Existence Before Dropping



For the longest time, I've been using code like this

IF EXISTS (
    SELECT 1
    FROM sys.objects
    WHERE object_id = OBJECT_ID(N'[dbo].[FakeProc]')
      AND type IN (N'P',N'PC')
    )
  DROP PROCEDURE [dbo].[FakeProc]
GO

IF EXISTS (
    SELECT [name]
    FROM sysobjects
    WHERE [name] = 'trFakeTrigger'
      AND type = 'TR'
    )
BEGIN
    DROP TRIGGER trFakeTrigger;
END
GO

to test for the existence of a procedure or trigger prior to dropping it. The challenge inherent in the above syntax was remembering the abbreviation used to define each object type.
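(If you ever need to look up those abbreviations, sys.objects lists them all:

SELECT DISTINCT type, type_desc
FROM sys.objects
ORDER BY type;

'P' is a stored procedure, 'TR' a trigger, and so on.)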

Later on, I found that I could use

IF OBJECT_ID('FakeProc') IS NOT NULL
BEGIN
    DROP PROCEDURE FakeProc
END;
GO

IF OBJECT_ID('dbo.fn_FakeFunction') IS NOT NULL
BEGIN
    DROP FUNCTION dbo.fn_FakeFunction
END;
GO


This was certainly more memorable than the earlier version but was still not the best choice. I needed a template that could be easily copied multiple times and modified with minimal issues.

I then discovered the most efficient trick: surround the drop command with a TRY...CATCH block.

BEGIN TRY DROP TRIGGER FakeTrigger END TRY BEGIN CATCH END CATCH;

BEGIN TRY DROP TABLE #Test END TRY BEGIN CATCH END CATCH;

BEGIN TRY ALTER TABLE MyTable DROP CONSTRAINT FakeConstraint END TRY BEGIN CATCH END CATCH;

BEGIN TRY DROP PROCEDURE FakeProc END TRY BEGIN CATCH END CATCH;

The best part is that MSDN has already written the code for a TRY-CATCH snippet. This snippet doesn't come built into SSMS, but the page provides clear and simple instructions for creating and importing the snippet. Give it a try and Happy Coding!
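Worth mentioning: SQL Server 2016 (and Azure SQL Database) introduced DROP ... IF EXISTS, which collapses each of these into a single statement, though the TRY-CATCH trick still works on older versions:

DROP TRIGGER IF EXISTS FakeTrigger;
DROP TABLE IF EXISTS #Test;
ALTER TABLE MyTable DROP CONSTRAINT IF EXISTS FakeConstraint;
DROP PROCEDURE IF EXISTS FakeProc;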



Tuesday, April 19, 2016

Mutually INCLUSIVE Sets In SQL

SQL is your best friend when you need to find mutually exclusive sets of data – the more conditions you add to your query, the fewer records are returned. But how do you handle mutually INCLUSIVE sets where you want the content of each group to expand as more criteria are met?

Most people, when faced with a request like that, will respond much like our Mikos (on the left) – with the "You want what? When?" look.

I had the person requesting this code demonstrate twice how he calculated the mutually inclusive sets in Excel before I took on the project, so I could be certain I understood the logic he was using to calculate them.

The first demonstration included some “black magic”. During the second demonstration, I asked him to slow down and explain why he was doing what he did.

Terminology

Essentially, they had a many-to-many relationship between Project Milestones and Contracts for each Project, and they needed to be able to accurately report the revenue for these items.
Normally, there's a one-to-one or one-to-many relationship between Milestones and Contracts for Projects (at least in the software that I have worked with), so this was to handle the exceptional relationships so they could calculate Project costs and revenue appropriately.
Project – A project – like “build a house”
Project Milestone – Completion of certain tasks for a project – like Painting or Install Carpeting
Contract – the agreement between Customer & the subcontractor doing the work – the painter who also installs carpeting
Reporting Groups – Mutually inclusive sets related to the Milestone & Contract combinations – one reporting group contains all milestones for each Contract as well as all Contracts for each milestone

Algorithm

Determining the correct Reporting Group value is a multi-step process calculated over all Contracts and Milestones associated to the Project.
  1. Sort by Milestone
  2. Assign First-pass group value using DENSE_RANK() OVER (ORDER BY Milestone)
  3. Sort By Contract
  4. For each Contract with multiple Milestones, if the grouping values differ, assign all instances of the grouping value to the lowest value (Example: In Item 16, there are two Milestones. One has Grouping Value 4 & other has Grouping Value 6. Set all Grouping Value 6 instances = 4. Repeat for each Contract where a similar situation exists)
  5. Sort by Grouping Value
  6. Update Grouping Values so that any gaps in the sequence are closed (Example: Grouping Value 6 was removed in an earlier step, so all items with Grouping Value 7 become 6, then all the 8s become 7, etc. so that the Grouping Values are consecutive.)

Code

I found that the only way to reproduce the results was to loop through the data sets similar to the demonstrated method, which includes suspiciously redundant code required to produce the "correct-according-to-the-customer" results.
The following may not be ideal SQL, but I found that it returned results exactly matching several examples provided by the Customer. Yes, the seemingly redundant updates are necessary because without them the final output does not match his final results. I have not found any alternative method to replicate the same results more efficiently.
Please note that I’ve simplified the content:
  • I only included what was directly relevant to the calculation of the mutually inclusive data sets
  • The sample dataset entered into the @ContractMilestones table is nowhere near the 100+ line examples given to me
  • I have excluded any direct references to proprietary tables
Please note that the final insert has been commented out since the "FinalReportingGroups" table is not defined in this code; it exists within the Customer's database to store the Reporting Groups thus calculated. I'm sure that readers of this post understand that they only need to view the final results of the calculations, since updating the final table would depend upon their own database schema.

    DECLARE @ContractMilestones TABLE (
        Co TINYINT
        ,Project VARCHAR(10)
        ,[Contract] VARCHAR(16)
        ,Milestone VARCHAR(20)
        )
       
INSERT @ContractMilestones (Co, Project, [Contract], Milestone)
VALUES
     ( 102,'Test', '5','493')
    ,( 102,'Test', '1.01','111')
    ,( 102,'Test', '1.01', '211')
    ,( 102,'Test', '1.01', '311')
    ,( 102,'Test', '2.02', '111')
    ,( 102,'Test', '3.01', '211')

    DECLARE @ReportingGroups TABLE (
        Co TINYINT
        ,Project VARCHAR(10)
        ,Milestone VARCHAR(20)
        ,[Contract] VARCHAR(16)
        ,MilestoneRptGroup INT
        ,ContractGrouping INT
        ,FinalGroup INT
        );

    WITH preranked (
        Co
        ,Project
        ,Milestone
        ,[Contract]
        ,MilestoneRptGroup
        )
    AS (
        SELECT Co
            ,Project
            ,Milestone
            ,[Contract]
            ,DENSE_RANK() OVER (
                PARTITION BY Co
                ,Project ORDER BY Co
                    ,Project
                    ,Milestone
                ) AS MilestoneRptGroup
        FROM @ContractMilestones
        )
        ,contractgroupd (
        Co
        ,Project
        ,Milestone
        ,[Contract]
        ,MilestoneRptGroup
        ,ContractGrouping
        )
    AS (
        SELECT Co
            ,Project
            ,Milestone
            ,[Contract]
            ,MilestoneRptGroup
            ,min(MilestoneRptGroup) OVER (
                PARTITION BY [Contract] ORDER BY [Contract] DESC
                ) AS ContractGrouping
        FROM preranked
        )
    INSERT @ReportingGroups (
        Co
        ,Project
        ,Milestone
        ,[Contract]
        ,MilestoneRptGroup
        ,ContractGrouping
        ,FinalGroup
        )
    SELECT Co
        ,Project
        ,Milestone
        ,[Contract]
        ,MilestoneRptGroup
        ,ContractGrouping
        ,DENSE_RANK() OVER (
            ORDER BY Co
                ,Project
                ,ContractGrouping
            ) AS FinalGroup
    FROM contractgroupd
    ORDER BY FinalGroup

    DECLARE @ItemGrouping INT
    DECLARE @MilestoneGroup TABLE (MilestoneRptGroup INT)
    DECLARE @ContractGroup TABLE (ContractRptGroup INT)

    SELECT @ItemGrouping = min(FinalGroup)
    FROM @ReportingGroups

    WHILE @ItemGrouping IS NOT NULL
    BEGIN
        INSERT @MilestoneGroup (MilestoneRptGroup)
        SELECT DISTINCT MilestoneRptGroup
        FROM @ReportingGroups
        WHERE FinalGroup = @ItemGrouping

        UPDATE @ReportingGroups
        SET FinalGroup = @ItemGrouping
            ,MilestoneRptGroup = @ItemGrouping
        WHERE MilestoneRptGroup IN (
                SELECT MilestoneRptGroup
                FROM @MilestoneGroup
                )

        DELETE
        FROM @MilestoneGroup

        INSERT @ContractGroup (ContractRptGroup)
        SELECT DISTINCT ContractGrouping
        FROM @ReportingGroups
        WHERE FinalGroup = @ItemGrouping

        UPDATE @ReportingGroups
        SET FinalGroup = @ItemGrouping
            ,ContractGrouping = @ItemGrouping
        WHERE ContractGrouping IN (
                SELECT ContractRptGroup
                FROM @ContractGroup
                )

        DELETE
        FROM @ContractGroup

        INSERT @MilestoneGroup (MilestoneRptGroup)
        SELECT DISTINCT MilestoneRptGroup
        FROM @ReportingGroups
        WHERE FinalGroup = @ItemGrouping

        UPDATE @ReportingGroups
        SET FinalGroup = @ItemGrouping
            ,MilestoneRptGroup = @ItemGrouping
        WHERE MilestoneRptGroup IN (
                SELECT MilestoneRptGroup
                FROM @MilestoneGroup
                )

        DELETE
        FROM @MilestoneGroup

        INSERT @ContractGroup (ContractRptGroup)
        SELECT DISTINCT ContractGrouping
        FROM @ReportingGroups
        WHERE FinalGroup = @ItemGrouping

        UPDATE @ReportingGroups
        SET FinalGroup = @ItemGrouping
            ,ContractGrouping = @ItemGrouping
        WHERE ContractGrouping IN (
                SELECT ContractRptGroup
                FROM @ContractGroup
                )

        DELETE
        FROM @ContractGroup

        SELECT @ItemGrouping = min(FinalGroup)
        FROM @ReportingGroups
        WHERE FinalGroup > @ItemGrouping
    END;

-- This is the final table that holds the data we've retrieved
-- insert commented out so we can do a simple select.   
  --  INSERT FinalReportingGroups (
  --      Co
  --     ,Project
  --     ,[Contract]
  --     ,ReportingGroup
  --     ,Milestone
  --     )
    SELECT Co
        ,Project
        ,[Contract]
        ,DENSE_RANK() OVER (
            ORDER BY Co
                ,Project
                ,ContractGrouping
            ) AS ReportingGroup
        ,Milestone
    FROM @ReportingGroups
    ORDER BY Co
        ,Project
        ,[Contract]
        ,Milestone
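For the sample rows above, the final select should produce something like the following (if I've traced the logic correctly): Contracts 1.01, 2.02, and 3.01 chain together through the shared Milestones 111 and 211, while Contract 5 stands alone.

Co   Project  Contract  ReportingGroup  Milestone
102  Test     1.01      1               111
102  Test     1.01      1               211
102  Test     1.01      1               311
102  Test     2.02      1               111
102  Test     3.01      1               211
102  Test     5         2               493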



Wednesday, November 13, 2013

Pivoting SQL Data for Instant Reporting

I always get a bit of geeky enjoyment out of finding an excuse to take advantage of features introduced in newer versions of SQL Server. Today, a coworker asked me to help him pivot some data from an issue tracking table he was using. He didn't know what "pivot" was, but that is what he wanted.
Here’s the table with sample data:
CREATE TABLE [dbo].[Issues](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [WorkItem] [nvarchar](50) NOT NULL,
    [DateSubmitted] [date] NOT NULL,
    [DateFixed] [date] NULL,
PRIMARY KEY CLUSTERED (
    [Id] ASC
)
)

INSERT [dbo].[Issues](
    [WorkItem],
    [DateSubmitted],
    [DateFixed]
)
VALUES
('Test', '20130310', '20130310'),
('Test1', '20130310', NULL),
('Test2', '20130311', '20130313'),
('Test3', '20130311', '20130311'),
('Test4', '20130311', '20130320'),
('Test5', '20130312', '20130314'),
('Test6', '20130314', '20130315'),
('Test7', '20130314', '20130315'),
('Test8', '20130316', '20130319'),
('Test9', '20130319', '20130330'),
('Test10', '20130320', NULL)

He asked me to go through the table, and for each day count how many issues were submitted and how many were fixed. When he manually counted the values, he got the following results:
Date        Submitted  Fixed
3/10/2013       2        1
3/11/2013       3        1
3/12/2013       1        0
3/13/2013       0        1
3/14/2013       2        1
3/15/2013       0        2
3/16/2013       1        0
3/19/2013       1        1
3/20/2013       1        1
3/30/2013       0        1
After staring at the tables for a few moments, I realized that the easiest way to transform the table into the results was to use PIVOT.
SELECT [Date]
    , ISNULL([Submitted], 0) AS [Submitted]
    , ISNULL([Fixed], 0) AS [Fixed]
FROM (
    SELECT DateFixed AS [Date], COUNT(Id) AS ct, 'Fixed' AS md
    FROM dbo.[Issues]
    WHERE DateFixed IS NOT NULL
    GROUP BY DateFixed
    UNION ALL
    SELECT DateSubmitted AS [Date], COUNT(Id) AS ct, 'Submitted' AS md
    FROM dbo.[Issues]
    WHERE DateSubmitted IS NOT NULL
    GROUP BY DateSubmitted
    ) AS SourceTable
PIVOT (
    SUM(ct)
    FOR md IN ([Submitted], [Fixed])
    ) AS pvt
ORDER BY [Date]
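For comparison, the same result can be produced without PIVOT using plain conditional aggregation over the same UNION ALL source; a quick sketch:

SELECT [Date]
    , SUM(CASE WHEN md = 'Submitted' THEN ct ELSE 0 END) AS [Submitted]
    , SUM(CASE WHEN md = 'Fixed' THEN ct ELSE 0 END) AS [Fixed]
FROM (
    SELECT DateSubmitted AS [Date], COUNT(Id) AS ct, 'Submitted' AS md
    FROM dbo.[Issues]
    GROUP BY DateSubmitted
    UNION ALL
    SELECT DateFixed, COUNT(Id), 'Fixed'
    FROM dbo.[Issues]
    WHERE DateFixed IS NOT NULL
    GROUP BY DateFixed
    ) AS SourceTable
GROUP BY [Date]
ORDER BY [Date]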

And there you go. Fun day with new PIVOT-y toys.

Monday, September 30, 2013

Easy Trace File Query

Whenever a customer has an issue with slowness, we request a trace file to help us figure out where the problem may be. The trace allows us to see which procs are called most often, and which are taking up the most time.

I used to import the trace file into a temporary table, then run my own queries on it to pinpoint issues. While that works well, it requires me to make additional copies of the customer's data, which all has to be tracked due to in-house security rules. Since I don't like having to do excessive paperwork, I found a way to easily read the data from where it sits rather than importing it into SQL Server:

SELECT ObjectName
    , SUBSTRING(DatabaseName, 1, 18) AS dbname
    , AVG(cpu) AS cpuuse
    , AVG(reads) AS avgreads
    , AVG(writes) AS avgwrites
    , COUNT(EndTime) AS instances
    , AVG(duration) AS avgduration
    , COUNT(EndTime) * AVG(reads) AS cumexpense
FROM fn_trace_gettable('C:\Temp\MyTraceFile.trc', DEFAULT)
GROUP BY ObjectName, SUBSTRING(DatabaseName, 1, 18)
ORDER BY avgreads DESC

The above query orders the items in the trace by the average number of reads. Next, order the list by average CPU, then by average duration. Any queries that are consistently at the top of these lists are the first candidates for tuning.
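For the follow-up passes, only the sort changes; for example, the average-CPU pass might look like this (trimmed to the relevant columns):

SELECT ObjectName
    , SUBSTRING(DatabaseName, 1, 18) AS dbname
    , AVG(cpu) AS cpuuse
    , AVG(duration) AS avgduration
FROM fn_trace_gettable('C:\Temp\MyTraceFile.trc', DEFAULT)
GROUP BY ObjectName, SUBSTRING(DatabaseName, 1, 18)
ORDER BY cpuuse DESC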

I've also used the following to zero in on data regarding specific procs:

SELECT DatabaseName, ObjectName, TextData, CPU, Reads, Writes
FROM fn_trace_gettable('C:\Temp\MyTraceFile.trc', DEFAULT)
WHERE ObjectName = 'MyStoredProc' OR ObjectName = 'YourStoredProc'

Have fun analyzing your trace files.

Tuesday, October 23, 2012

Preparing for SQL PASS-Part 5

Since I've attended SQL Pass Summit a couple of times, this year I volunteered to help out with the "Summit First-Timers" program. This program is designed specifically to help people who are attending SQL Pass Summit for the first time to get the most out of their experience at Summit. I've decided to post a series of blog entries here to not only help out my flock of first-timers, but to also ensure that the information is shared with as many other first-time (and alumni) attendees as possible. This is the fifth posting in the series. Here are the links for Part 1, Part 2, Part 3, and Part 4.

I hope most of you watched Denny Cherry's "SQL PASS First-Timers" webcast. I attended it and discovered that they have already moved the location of the breakfast & lunch room from its traditional location in cozy little 4B to the cavernous 4E-F. Check out the map to see the conference center layout. The 4th and 6th floors are where most of the activities will be held.

First-Timers Networking Sessions

On Tuesday afternoon, there are several 45-minute networking sessions led by Don Gabor. These are exclusively for First-timers and are invite only. All first-timers should have received an email with links to RSVP to the session of their choice. If you are a first-timer and have not received the email, please contact PASS, and let them know. Any sessions not filled before the end of this week will be opened to other attendees.

First-Timers' Orientation

Lady Runa's Flock
Tuesday evening, at 5:15 is the First-Timers' Orientation. This is where First-Timers will first meet their alumni Mentor. If I am your Alumni Mentor, you'll receive a sticker of the graphic to the left of this paragraph to put on your badge to make it easier for you to identify the other "official" members of my flock. If you aren't part of my flock, I hope that the person who is your mentor has his own set of stickers to brand his herd with. (I printed twice as many stickers as I need, so if you REALLY want to be a part of the flock, I can be bribed convinced to give you one). Further details about this event, and the schedule from there will be available later - generally speaking, shortly after the orientation, we'll be heading down to the Welcome Reception.

Don't forget to bring along business cards to hand out to people you meet!

Homework

Thought I'd let you off easy, eh? Well, since SQL PASS Summit is only 2 weeks away, I have homework for you to do:

  • Ensure you have a login for SQL PASS (free registration link here).
  • Log in to the Schedule Builder and select sessions you wish to attend*. If you're having difficulty choosing sessions to attend, you may consider following the advice from the Expert Picks provided by PASS Virtual Chapter leaders and community Experts.
  • If you will be present in Seattle on Monday and/or Tuesday, consider signing up for one of the pre-conference sessions. These are full day in-depth sessions that are well worth the additional $395 each to attend.
*PASS uses the data from the Schedule Builder to plan session room sizes - the most popular sessions are moved to larger rooms, and less popular sessions to smaller ones - ensuring that the sessions you want to attend will have enough room available for you to squeeze in.

Wednesday, October 17, 2012

Preparing For SQL Pass Summit–Part Four

Since I've attended SQL Pass Summit a couple of times, this year I volunteered to help out with the "Summit First-Timers" program. This program is designed specifically to help people who are attending SQL Pass Summit for the first time to get the most out of their experience at Summit. I've decided to post a series of blog entries here to not only help out my flock of first-timers, but to also ensure that the information is shared with as many other first-time (and alumni) attendees as possible. This is the fourth posting in the series – Part 1 can be found here, Part 2 is here, and Part 3 is here.

Answers to Questions from my Flock

I was happy to see that several of my First-Timers responded to my emails with some interesting questions. Since the answers to these questions could also help out others, I thought I'd include my answers here.

Q: What’s Twitter? Why do we want it?

Twitter (http://www.twitter.com) is a social media forum that functions like a micro-blogging site. It’s used for social messaging, news reporting, media marketing, and learning about SQL!
If you follow mostly SQL Server people – yes, they occasionally post silly stuff – but frequently they tweet extremely useful things like:
  • Links to blog posts about SQL that are important
  • Information on SQL PASS activities
  • Links to free SQL training and books
In addition, you can even request help on something SQL-related – simply include #sqlhelp in your tweet. For example, one person asked: "Is it possible to remove a static IP from an availability group listener? If so, how? #sqlhelp" … and within a few minutes, Denny Cherry, Brent Ozar, and Allan Hirt responded with helpful information that allowed the asker to go on with his work. I've posted a few #SQLHelp questions myself whenever I've gotten stuck on something. Usually someone answers within minutes. It's like having a mentor available at your beck and call. Of course, not all answers are perfect or correct, but they are helpful.
Brent Ozar has published an eBook called "The Simple Twitter Book" that explains Twitter and how it can help you in your career. Here's another article you may want to examine. 
 

Q: Is Pre-Registration required for Sessions?

Only for the pre-conference sessions, and this is because you have to pay an additional $395 to attend them. The cost for the regular sessions is included in your Summit registration. As long as you are wearing your badge, you can attend any of the sessions you choose. For a session that you believe is particularly interesting, you may wish to line up early to ensure you can grab a good seat.
 

Q: How many people are attending SQL PASS Summit?

I don’t know for sure, but I’ve heard numbers from 4000 – 6000.
 

Q: What do you do with four pet goats?

Years ago, we would go hiking with them, and they would get to carry all of our stuff. This meant that we could hike for greater distances because we wouldn't get tired from hauling all the food and water that we needed to bring along. The goats would get tired, but not us.
Now, they’re just pets. They eat the grass, weeds, blackberries, rose bushes and leftovers from dinner. They are especially fond of eating the leftover potato chips and Doritos following parties. Sometimes they provide entertainment by running around chasing each other and butting heads.
Besides, you’ve got to admit they are rather cute.
 

Random Reminders

  • Be sure to vote (if you’re registered to vote in the United States) before coming to SQL PASS Summit, which happens during the week of Election Day (Nov 6). Most states allow voters to sign up for absentee voting (usually this means you vote by mail a few weeks prior to Election Day – check your state’s government page for details on how to vote absentee) and some states like Oregon and Washington simply have vote by mail, which allows voters to cast their votes any time within a month prior to election day.
  • Purchase the SQL PASS Summit Session Recordings. Why? Because it's impossible for one person to attend all of the sessions offered. In fact, this year the schedule lists 15 sessions in each of 4 time slots per day (5 time slots on Friday). The recordings are less expensive if you order them when you register, but if you haven't ordered them ahead of time, you can order them while at SQL PASS Summit. If you attend a pre-con seminar, be sure to order the recordings from the pre-con sessions – there is a separate charge for this set of recordings.
  • If this is your first time attending SQL PASS Summit, try to attend as many of the First-Timer Events as you can.
  • If you are attending SQL Pass Summit with coworkers
    • Plan on attending different sets of sessions to maximize the variety of training that everyone in your work group receives.
    • Sit at separate tables during meals to expand networking opportunities.
    • Compare sets of vendor information that each coworker picked up since you may be able to eliminate taking home duplicate pamphlets.
  • Bring business cards listing your name and contact information to facilitate networking.
  • After Summit, be sure to connect to the people you met via Linked In and Twitter.
  • If you didn’t join PASS when registering for Summit, please do so now.
  • Bring “Emergen-C” drink mix to add to your water bottle. Each packet provides 1000 mg Vitamin C, plus a mix of electrolytes, antioxidants, and vitamins to help keep you alert. It also tastes better than plain water.
  • Get your flu shot a few weeks prior to attending. Getting sick is no fun. Take care of yourself.

Important SUMMIT links for you

  • Official Schedule (just released on 9/26/2012)
  • Seattle 101 (thanks Kendra!) – an excellent guide to what’s in Seattle and how to get around town. Definitely something to read if you will be in Seattle for more days than the days SQL PASS Summit is running.
  • The list of people attending SQL PASS Summit includes attendee names as well as their twitter handles. You may want to follow several of them on twitter.
  • Workstations loaded with Microsoft Hands-on Labs will be at SQL PASS Summit for you to try (and Microsoft people will provide assistance to you if you need it during the labs)
  • Several Community Bloggers have posted additional articles that should be helpful for SQL PASS First-Timers.










