
Understanding and Using PowerShell Support in SQL Server 2008

SQL Server 2008

by Allen White, SQL Server MVP and Trainer/Consultant. Visit Allen’s blog at http://sqlblog.com/blogs/allen_white/default.aspx.

Introduction

In this paper we’ll walk through an introduction to Windows PowerShell and its key elements.  We’ll then look at the special PowerShell executable program provided with SQL Server 2008, sqlps.exe.  Sqlps.exe incorporates a set of cmdlets specific to using PowerShell with SQL Server and adds a PowerShell drive (PSDrive) that allows us to navigate SQL Server as though it were a file system.  We’ll explore these features, then look at some scripts we can use to administer SQL Server as well as to extract data as needed.

Introduction to the PowerShell language

To provide a framework for understanding the basics of PowerShell we’ll look at cmdlets, variables, the pipeline, flow control, functions, error handling, and security.

Using Cmdlets

PowerShell’s cmdlets provide the core functionality necessary for any shell environment.  To provide consistency they’re named using a standard verb-noun naming convention.  So, to get a list of the processes running on the system use the Get-Process cmdlet.  For example:

PS> Get-Process
Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
    780       6     2532       5052    31   117.91    328 csrss
     97       4      544       3052    21    44.94   3716 ctfmon
    313      11     8388      13112    61   247.17   3632 explorer
      0       0        0         16     0               0 Idle
    479      30     9524      13128    89    34.73   1192 inetinfo
    729      27     8592       9312    44    36.47    412 lsass
    150       6     1744       4132    23     1.73   1012 msdtc
    159       4    12256      13320   104     1.95   1256 MsDtsSrvr
    716      46    36820      28084   211   149.56   1352 msmdsrv
    152      10     2268       5564    37     1.26   1756 pop3svc
    201       5    24396      21128   126     2.17   2736 powershell

To stop a service running on the system, use the Stop-Service cmdlet.  Help is available on any cmdlet using the Get-Help cmdlet followed by the name of the cmdlet.  To get help on the Get-Help command type this:

PS> Get-Help Get-Help
NAME
    Get-Help
SYNOPSIS
    Displays information about Windows PowerShell cmdlets and concepts.
SYNTAX
    Get-Help [[-name] <string>] [-component <string[]>] [-functionality <string[]>] [-role <string[]>] [-category <stri
    ng[]>] [-full] [<CommonParameters>]
    Get-Help [[-name] <string>] [-component <string[]>] [-functionality <string[]>] [-role <string[]>] [-category <stri
    ng[]>] [-detailed] [<CommonParameters>]
    Get-Help [[-name] <string>] [-component <string[]>] [-functionality <string[]>] [-role <string[]>] [-category <stri
    ng[]>] [-examples] [<CommonParameters>]
    Get-Help [[-name] <string>] [-component <string[]>] [-functionality <string[]>] [-role <string[]>] [-category <stri
    ng[]>] [-parameter <string>] [<CommonParameters>]
DETAILED DESCRIPTION
    The Get-Help cmdlet displays information about Windows PowerShell cmdlets and concepts. You can also use "help {<cmdlet-name> | <topic-name>}" or "<cmdlet-name> /?". "Help" displays the help topics one page at a time. The "/?" displays help for cmdlets on a single page.
RELATED LINKS
    Get-Command
    Get-PSDrive
    Get-Member
REMARKS
    For more information, type: "get-help Get-Help -detailed".
    For technical information, type: "get-help Get-Help -full".

As you can see there are a number of options available, including detailed help (-detailed), technical information (-full), and, one of the most useful, examples of how to use the cmdlet (-examples).  Among the related links listed in our example is the Get-Command cmdlet, which returns a list of all the available cmdlets.
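Get-Command also accepts wildcards, so you can narrow the list to just the cmdlets you care about; for example:

```powershell
# List every cmdlet whose noun is Service
PS> Get-Command *-Service

# List only the cmdlets that use the Get verb
PS> Get-Command -Verb Get
```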

Storing Variables

No language is complete without the ability to store interim results in some sort of variable, and PowerShell provides this capability as well.  Variables are defined by using the dollar sign character as the first character of the variable name.  Variables are not just a reference to a stored value, though, they store an object.  Objects are used extensively in the .NET Framework, and are well supported in PowerShell.  Objects have methods and properties so casting a variable as a string object, for example, enables all the methods and properties of a string object for that variable.  A quick example follows:

PS> $lit = 'Cleveland Rocks!'
PS>
PS> $lit
Cleveland Rocks!
PS> $lit.Length
16
PS>

The Length property for the literal value assigned to the $lit variable contains the value 16.  You can find out the methods and properties for any variable by piping the variable to the Get-Member cmdlet like this (results are condensed):

PS> $lit | Get-Member
   TypeName: System.String
Name             MemberType            Definition
----             ----------            ----------
Clone            Method                System.Object Clone()
...
Length           Property              System.Int32 Length {get;}

There are times when PowerShell may not choose the correct type of object for a variable assignment.  A case in point is when a numeric value is assigned to a variable but the variable needs to be a string object.  To control this you can cast the variable as the type you need, like this:

PS> $strval = [string]'7'
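You can confirm the cast took effect with the GetType method, and the same syntax converts in the other direction as well:

```powershell
PS> $strval = [string]'7'
PS> $strval.GetType().Name
String
PS> $intval = [int]'7'
PS> $intval + 1
8
```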

Another benefit of objects is the ability to group objects into collections of objects.  Collections are like an array without a predefined limit.  You can create a collection like this:

PS> $col = 1,3,4,6,7,9

The individual members of the collection can be accessed by their ordinal number, where an ordinal number of 0 represents the first member of the collection.  So, to retrieve the value of 4 out of our collection we can use this command:

PS> $col[2]
4

The power of collections will become evident as we examine flow control.  One really useful cmdlet that creates a collection is Get-Content.  This cmdlet makes it easy to load a text file into a variable, which then contains a collection of string objects:

PS> $servers = Get-Content 'servers.txt'

This will load the contents of the servers.txt file in the local directory into the $servers variable. 
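A collection like this is handy for looping over a list of servers.  As a minimal sketch (assuming servers.txt contains one server name per line), you could then process each entry in turn:

```powershell
$servers = Get-Content 'servers.txt'
foreach ($server in $servers) {
    # Each $server is a string object, so string methods are available
    Write-Output ('Checking server: ' + $server.Trim())
    }
```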

The Pipeline

In the introduction we discussed using the pipeline to send the results of one cmdlet to the next, and demonstrated that briefly in looking at the Get-Member cmdlet.  The pipeline, invoked by using the vertical bar character (‘|’), takes the results from one cmdlet and sends them to the next.  This ability provides the power that Unix administrators are used to from the shell scripting environments on that platform.  An example that provides some interesting results when examining our systems for performance problems is this:

PS> get-process | sort-object workingset -descending | select-object -first 10
Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
    637      82   163668     157312  1228   232.92   2132 sqlservr
    535      80   120208     117256  1225   261.53   1344 sqlservr
    562      18    99972      77364   357   457.14   3580 Ssms
    598      11    52048      50352   179    57.86   4012 powershell
    308      73    61612      45740  1155   156.54    728 sqlservr
    602      17    57452      37956   255   298.60   1400 ReportingServicesService
    494      10    26636      33144   155     5.93   3308 SQLPS
    713      46    36704      27984   210   241.31   1264 msmdsrv
   1011      42    12872      19556    80   144.29    808 svchost
    158       4    12248      13272   104     2.22   1204 MsDtsSrvr

This takes the output of the Get-Process cmdlet we examined earlier and pipes the output to the sort-object cmdlet, which allows us to sort the results in a particular order.  In this case we’re sorting the results on the WorkingSet property in descending order, so the process using the most memory will be returned first, followed by the process using the next most amount of memory, and so on.  The results of this are then piped into the select-object cmdlet, which allows us to limit the resultset, and in this case we’re selecting just the first 10.  So, we’re returning the top ten processes in terms of memory use on our system.

To make things more convenient, PowerShell provides aliases for most of the cmdlets.  The aliases use well-known command names for certain functions.  For example, to view the items in a directory PowerShell provides the Get-ChildItem cmdlet, but this cmdlet is aliased as “dir”, which is familiar to Windows users, and as “ls”, which is familiar to Unix users; both provide the same functionality.  You can find all the defined aliases by using the Get-Alias cmdlet.  Another great feature in PowerShell is that either the slash (/) or the backslash (\) can be used when specifying directory trees, making PowerShell a more comfortable experience for both Windows and Unix admins.

Where Unix shell scripts pipe text from one command to another, PowerShell pipes objects from one cmdlet to the next.  This eliminates the need to parse the output of one command before sending it to the next.  Thus, our sort-object cmdlet understands that WorkingSet is a property in the resultset of the Get-Process cmdlet.

Flow Control

Flow control is important in any language to allow conditional and iterative processing, and PowerShell provides the constructs necessary for flow control.  Comparison operators are a necessary component of flow control, and PowerShell supports this set:

Operator   Description
--------   -----------
-lt        less than
-le        less than or equal to
-gt        greater than
-ge        greater than or equal to
-eq        equal to
-ne        not equal to
-like      like wildcard pattern matching
-and       logical and
-or        logical or

Table 1: PowerShell Comparison Operators
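These operators can be tried directly at the command line; each comparison returns a Boolean value:

```powershell
PS> 5 -lt 10
True
PS> 'SQL Server' -like 'SQL*'
True
PS> (5 -gt 1) -and (5 -lt 3)
False
```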

Flow control is then handled using this set of commands:

Control     Example Code
-------     ------------
If          if ($val -eq "target") {
                #work
                }

For         for ($i=0; $i -lt 10; $i++) {
                #work
                }

ForEach     foreach ($obj in $objects) {
                #work
                }

Switch      switch ($val) {
                "Val1" { #work }
                "Val2" { #work }
                }

Do Until    do {
                #work
                } until ($val -eq "target")

Do While    do {
                #work
                } while ($val -eq "target")

While       while ($val -eq "target") {
                #work
                }

Table 2: Flow Control Operators and Examples

Notice that each of these constructs is followed by a pair of braces ({}).  This set of braces, along with the commands contained within them, is referred to as a scriptblock.  Scriptblocks can appear anywhere on a command line or in a script, and can be nested.
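For example, a foreach loop whose scriptblock contains an if statement (with its own nested scriptblock) looks like this:

```powershell
foreach ($i in 1,2,3,4,5) {
    if ($i -gt 3) {
        # Runs only for the members greater than 3
        Write-Output $i
        }
    }
```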

In addition, PowerShell provides these cmdlets for controlling flow:

Control Cmdlet    Description
--------------    -----------
ForEach-Object    Executes once for each member in the collection
Where-Object      Filters objects based on conditions
Select-Object     Pipes only the specified properties
Sort-Object       Sorts the objects
Tee-Object        Sends the objects in two directions

Table 3: Flow Control Cmdlets
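These cmdlets shine in the pipeline: Where-Object filters the objects flowing through, and ForEach-Object acts on each one.  A small illustration (the services returned will depend on your machine):

```powershell
# Show the display name of each service whose name starts with MSSQL
Get-Service | Where-Object { $_.Name -like 'MSSQL*' } |
    ForEach-Object { $_.DisplayName }
```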

Assembling calls to cmdlets into a script you can run whenever needed lets you automate the processes you perform regularly, making you more efficient.  Scripts can be built to perform any number of administrative functions, from creating a database to extracting HR data and importing it into Active Directory Domain Services (AD DS) to keep your organization running smoothly.

Functions

Sets of script code you may need to run multiple times within the script can be grouped together into a function.  Functions can be defined with zero or more parameters to provide flexibility.  The basic form of a function is as follows:

       Function MyFunction {
              #work
              }

Within the scriptblock you can write the code necessary for your script to perform its desired function.  You can add parameters in one of two ways.  The first uses a single parameter, like this:

       Function MyFunction ($param) {
              #work
              }

The second, and preferred, way is like this:

       Function MyFunction {
              param (
                   [int]$x = 7,
                   [int]$y = 9
                   )
              #work
              }

Because PowerShell is an interpreted language, functions must appear in the script before they’re called in the main part of the script.  For this reason it’s a best practice to put all of your functions at the beginning of the script.
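Putting it together, here’s a minimal sketch (the function name, Add-Numbers, is hypothetical): the function is defined first, then called with positional or named parameters:

```powershell
Function Add-Numbers {
    param (
        [int]$x = 7,
        [int]$y = 9
        )
    $x + $y
    }

Add-Numbers 3 4   # positional parameters; returns 7
Add-Numbers -y 1  # named parameter; $x keeps its default of 7, so this returns 8
```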

Error Handling

Errors in PowerShell 1.0 are best handled using a trap statement.  Here’s an example:

Function Err_Handler {
  $err = "Error Category: " + $error[0].CategoryInfo.Category
  $err = $err + ". Error Object: " + $error[0].TargetObject
  $err = $err + " Error Message: " + $error[0].Exception.Message
  $err = $err + " Error Message: " + $error[0].FullyQualifiedErrorId
  $log = New-Object System.Diagnostics.EventLog('Application')
  $log.set_source("MyScript")
  $log.WriteEntry($err)
  }
Trap {
  # Handle the error
  Err_Handler
  # End the program.
  break;
  }
# Your Code Here …

The error handler writes the error to the Windows Application event log, which an administrator can view to diagnose the problem.

Security

One of the important issues in any application is security.  PowerShell 1.0, as installed by default, doesn’t allow scripts to run.  The Get-ExecutionPolicy cmdlet returns the current security setting for PowerShell; by default it’s set to Restricted.  To allow scripts on the local machine to run, use the Set-ExecutionPolicy cmdlet to set the policy to RemoteSigned.  Local scripts will then run, but scripts from anywhere else must be digitally signed.  For more information about execution policy, get help on “about_signing”. Also, to prevent command “hijacking”, a script’s location must be qualified for it to run, so to run a script called myscript.ps1 in the current directory the command to use is:

PS> ./myscript.ps1

The full path can be used as well, and the slash or backslash character can be used in the path name.
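A typical first-run sequence therefore looks like this (changing the execution policy requires administrative rights):

```powershell
PS> Get-ExecutionPolicy
Restricted
PS> Set-ExecutionPolicy RemoteSigned
PS> ./myscript.ps1
```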

SQL Server PowerShell Extensions

Microsoft has incorporated PowerShell into its Common Engineering Criteria for server products, and SQL Server 2008 includes PowerShell in its management toolset.  PowerShell by design allows the inclusion of “snap-ins” that give an application its own PowerShell interface.  The most notable example is Microsoft Exchange 2007, which was rebuilt from the ground up with administrative interfaces designed for PowerShell, including hundreds of cmdlets for various functions; the Exchange administrative application itself executes PowerShell cmdlets.

Shell scripting environments address everything on a server as though it were a file system.  PowerShell Drives provide this capability in PowerShell, and the Get-PSDrive cmdlet lists the available drives in your session.  Besides the real file system drives, the PSDrives included in basic PowerShell include:

  • ENV: (Environment variables)
  • HKCU: (HKEY_CURRENT_USER Registry tree)
  • HKLM: (HKEY_LOCAL_MACHINE Registry tree)

You can actually navigate the Windows Registry using filesystem commands like cd (Set-Location cmdlet) and dir (Get-ChildItem).
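For example, this session browses the registry exactly as you would a disk (the key chosen is just an illustration):

```powershell
PS> Set-Location HKLM:\SOFTWARE\Microsoft
PS> Get-ChildItem          # or the familiar aliases: dir, ls
PS> cd ENV:                # switch to the environment variable drive
PS> dir
```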

The SQLSERVER: Drive

SQL Server 2008 adds its own PSDrive for the environment in the form of the SQLSERVER: drive.  (For detailed steps on loading the SQL PowerShell snap-ins into native PowerShell, visit Michiel Wories’ blog post: http://blogs.msdn.com/mwories/archive/2008/06/14/SQL2008_5F00_Powershell.aspx.) With the SQL Server snap-in you can navigate SQL Server objects through one of four directories under SQLSERVER:

  • The SQL folder accesses the database engine, SQL Server Agent, Service Broker, and Database Mail.
  • The SQLPolicy folder accesses Policy-Based Management.
  • The SQLRegistration folder accesses the Registered Servers and the new Central Management Server.
  • The DataCollection folder accesses the Data Collector objects provided with Management Data Warehouse.

These folders can be navigated like the filesystem, as is shown in this screen capture:


Figure 1: Navigating the SQLServer: PowerShell Drive

Now, by navigating to the Databases folder under an instance of SQL Server, you can use the following command to send the list of databases to a web page for anyone in your organization to view:

PS> dir | select name, CompatibilityLevel, RecoveryModel, Size, SpaceAvailable | convertto-html > c:\inetpub\wwwroot\databases.html

If you then navigate to the Tables folder under a specific database you can use the following command to send the list of tables in that database, in descending order by the number of rows in the table, showing the largest (in rows) tables first, to another web page for your management tool set.

PS> dir | select schema, name, rowcount, filegroup | sort rowcount -descending | convertto-html > c:\inetpub\wwwroot\DBTables.html

SQL Server Cmdlets

These are basic web pages, but they can be enhanced for your needs. The SQL Server PowerShell extensions also provide new cmdlets.  The cmdlets are:

  • Invoke-Sqlcmd
  • Invoke-PolicyEvaluation
  • Encode-SqlName
  • Decode-SqlName
  • Convert-UrnToPath

These cmdlets provide functionality not available through other means: 

  • Invoke-Sqlcmd takes a query in text form and sends it to SQL Server for processing.  The results are returned in object form and standard PowerShell functions can be used to manipulate the data as necessary.  It takes either a standard Transact-SQL query or an XQuery statement as input.
  • Invoke-PolicyEvaluation uses the SQL Server 2008 Policy-Based Management feature.  It evaluates policies defined for one or more servers to determine whether or not the servers are in compliance.  It can also reset object settings to comply with the policy.  Lara Rubbelke wrote a set of blog posts about this cmdlet at http://sqlblog.com/blogs/lara_rubbelke/archive/2008/06/19/evaluating-policies-on-demand-through-powershell.aspx.

There are characters acceptable for use within SQL Server that are not acceptable in PowerShell, such as the backslash between the server and instance names in defining a SQL Server instance.

  • The Encode-SqlName cmdlet converts a name acceptable to SQL Server (such as HOME\MyInstance) to one usable by PowerShell (HOME%5CMyInstance).
  • The Decode-SqlName cmdlet reverses that process.
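Using the example name from above, the round trip looks like this:

```powershell
PS> Encode-SqlName 'HOME\MyInstance'
HOME%5CMyInstance
PS> Decode-SqlName 'HOME%5CMyInstance'
HOME\MyInstance
```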

SMO uses Uniform Resource Names (URN) for its objects.

  • The Convert-UrnToPath cmdlet converts those URN values to path names.  The path name can then be used with the Set-Location cmdlet to navigate SQL Server.  For example, the URN for the Sales.Order table in AdventureWorks on HOME\MyInstance is

Server[@Name='HOME\MyInstance']\Database[@Name='AdventureWorks']\Table[@Name='Order' and @Schema='Sales']

That URN is converted by the cmdlet to:

SQLSERVER:\SQL\HOME\MyInstance\Databases\AdventureWorks\Tables\Sales.Order
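The conversion can be scripted directly; here the converted path is fed straight to Set-Location:

```powershell
$urn = "Server[@Name='HOME\MyInstance']\Database[@Name='AdventureWorks']\Table[@Name='Order' and @Schema='Sales']"
Set-Location (Convert-UrnToPath -Urn $urn)
```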

These five cmdlets, in conjunction with the objects in the SMO object library, provide full access to the management features in SQL Server Management Studio.

SQLPS.exe – The SQL Server Mini-Shell

To make things easier for administrators using PowerShell for SQL Server, Microsoft created an executable - a mini-shell - which automatically loads the SQL Server PowerShell snap-ins.  The program is called sqlps.exe and is included when you install SQL Server 2008.  Besides automatically loading the snap-ins the sqlps.exe environment enables script execution automatically by setting the execution policy to RemoteSigned.

This mini-shell can be invoked in a number of ways.  You can use the Windows Start, Run menu item, type in sqlps, and start it that way.  SQL Server Agent jobs can use the SQL Server Agent PowerShell subsystem to run a script by selecting the step type of PowerShell.  Each time a PowerShell step runs, it loads a separate copy of sqlps.exe into memory, each taking about 40 MB, so caution is advised when using this feature.

The most interesting method is within SQL Server Management Studio.  In Object Explorer you can right-click any object in the “tree” under an instance and select “Start PowerShell”; sqlps.exe starts, and the Set-Location cmdlet sets the current location in the shell window to the object you selected.  This provides a very nice way to do ad-hoc browsing of SQL Server objects as necessary.

Using the .NET Framework with SQL Server

There are two perspectives for working with SQL Server, administration and data access.  SQL Server Management Objects (SMO) contains the objects we’ll use to manage SQL Server, and ADO.NET allows us access to the data.

SQL Server Management Objects - SMO

SQL Server Management Objects (SMO) is the object library provided by Microsoft to perform administrative actions against SQL Server.  The library is built on top of the .NET Framework 2.0, so applications written using SMO run managed code and benefit from the features available within the Framework, including its security and cleanup features.  SMO was introduced with SQL Server 2005 and allows you to manage servers running SQL Server 2000, SQL Server 2005, and SQL Server 2008.

To use SMO within PowerShell (if you’re not using sqlps.exe) you’ll need to load the SMO DLLs.  This is easy to do at the beginning of your scripts using these commands:

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')  | out-null
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMOExtended')  | out-null

The Out-Null cmdlet at the end of the command simply ignores the version information returned when the DLL is loaded.

The SMO object library is best envisioned as a tree-type structure, starting with the Server object.  You can connect to a server using the following command:

$srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') 'HOME\MyInstance'

The new-object cmdlet creates an object of the type specified within parentheses, in this case a new Server object.  Once you’ve set a variable to the server object you can navigate through the various collections and properties available.  For example the Information collection contains properties about this instance, like Edition, Version, location of the system data and log files (MasterDBData and MasterDBLogData), and more.  The Settings collection contains useful information such as the BackupDirectory and default data and log directory for user databases (DefaultFile and DefaultLog).

Figure 2: SMO Server Information and Settings objects.
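A few lines at the prompt show these collections in action (the values returned will of course depend on your instance):

```powershell
$srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') 'HOME\MyInstance'
$srv.Information.Edition           # e.g. Developer Edition
$srv.Information.Version
$srv.Settings.BackupDirectory
$srv.Settings.DefaultFile
```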

The databases defined on the instance are identified in the Databases collection.  For example, details on the physical files can be accessed by the following structure:

Figure 3: SMO Database File and Log File objects.

So, how do we make use of this information?  Here’s a script to create a database, using the server’s default file and log locations:

#createsimpledb.ps1
#Creates a new database using defaults
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')  | out-null
$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') 'HOME\MyInstance'
$dbname = 'SMOSimple_DB'
$db = new-object ('Microsoft.SqlServer.Management.Smo.Database') ($s, $dbname)
$db.Create()

This creates a database with all the default sizes and locations, using the settings in the model database.  If you want to have more control over the creation of the database you’ll need to set the properties in the SMO objects defined for creating databases, as shown in this more complete script: 

#createdb.ps1
#Creates a new database using our specifications
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')  | out-null
$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') 'HOME\MyInstance'
$dbname = 'SMO_DB'
$syslogname = $dbname + '_SysData'
$applogname = $dbname + '_AppData'
$loglogname = $dbname + '_Log'
$fileloc = $s.Settings.DefaultFile
$logloc = $s.Settings.DefaultLog
if ($fileloc.Length -eq 0) {
    $fileloc = $s.Information.MasterDBPath
    }
if ($logloc.Length -eq 0) {
    $logloc = $s.Information.MasterDBLogPath
    }
$dbsysfile = $fileloc + '\' + $syslogname + '.mdf'
$dbappfile = $fileloc + '\' + $applogname + '.ndf'
$dblogfile = $logloc + '\' + $loglogname + '.ldf'
# Instantiate the database object and add the filegroups
$db = new-object ('Microsoft.SqlServer.Management.Smo.Database') ($s, $dbname)
$sysfg = new-object ('Microsoft.SqlServer.Management.Smo.FileGroup') ($db, 'PRIMARY')
$db.FileGroups.Add($sysfg)
$appfg = new-object ('Microsoft.SqlServer.Management.Smo.FileGroup') ($db, 'AppFG')
$db.FileGroups.Add($appfg)
# Create the file for the system tables
$dbdsysfile = new-object ('Microsoft.SqlServer.Management.Smo.DataFile') ($sysfg, $syslogname)
$sysfg.Files.Add($dbdsysfile)
$dbdsysfile.FileName = $dbsysfile
$dbdsysfile.Size = [double](5.0 * 1024.0)
$dbdsysfile.GrowthType = 'Percent'
$dbdsysfile.Growth = 25.0
$dbdsysfile.IsPrimaryFile = 'True'
# Create the file for the Application tables
$dbdappfile = new-object ('Microsoft.SqlServer.Management.Smo.DataFile') ($appfg, $applogname)
$appfg.Files.Add($dbdappfile)
$dbdappfile.FileName = $dbappfile
$dbdappfile.Size = [double](25.0 * 1024.0)
$dbdappfile.GrowthType = 'Percent'
$dbdappfile.Growth = 25.0
$dbdappfile.MaxSize = [double](100.0 * 1024.0)
# Create the file for the log
$dblfile = new-object ('Microsoft.SqlServer.Management.Smo.LogFile') ($db, $loglogname)
$db.LogFiles.Add($dblfile)
$dblfile.FileName = $dblogfile
$dblfile.Size = [double](10.0 * 1024.0)
$dblfile.GrowthType = 'Percent'
$dblfile.Growth = 25.0
# Create the database
$db.Create()
# Set the default filegroup to AppFG
$appfg = $db.FileGroups['AppFG']
$appfg.IsDefault = $true
$appfg.Alter()
$db.Alter()

This database now has two filegroups, PRIMARY and AppFG.  AppFG is now the default, so new objects aren’t placed in the PRIMARY filegroup with the database metadata.

Objects used in another key function, Backup, can be used to script regular backups.  Here are the objects:

Figure 4: SMO Backup Objects.

These are used in the following script to back up user databases:

#backup.ps1
#Performs a Full backup on all user databases
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SmoExtended') | out-null
$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') 'HOME\MyInstance'
$bkdir = $s.Settings.BackupDirectory
$dbs = $s.Databases
# Iterate through all databases and backup each user database
$dbs | foreach-object {
       $db = $_
       if ($db.IsSystemObject -eq $False) {
              $dbname = $db.Name
              $dt = get-date -format yyyyMMddHHmmss
              $dbbk = new-object ('Microsoft.SqlServer.Management.Smo.Backup')
              $dbbk.Action = 'Database'
              $dbbk.BackupSetDescription = "Full backup of " + $dbname
              $dbbk.BackupSetName = $dbname + " Backup"
              $dbbk.Database = $dbname
              $dbbk.MediaDescription = "Disk"
              $dbbk.Devices.AddDevice($bkdir + "\" + $dbname + "_db_" + $dt + ".bak", 'File')
              $dbbk.SqlBackup($s)
              }
       }

Notice that the Get-Date cmdlet is used to extract the current system date and incorporate that value into the name of the backup file, so you know exactly when the backup was created.

Books Online documents all of the SMO objects and provides many examples of basic administrative tasks, and the Get-Member cmdlet is useful for determining the properties and methods available for each SMO object.
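For example, piping an SMO Database object to Get-Member lists every property you can read on it:

```powershell
$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') 'HOME\MyInstance'
$s.Databases['AdventureWorks'] | Get-Member -MemberType Property
```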

SMO allows us to build scripts to manage SQL Server, but frequently we need to work with the data within our databases.  The Invoke-Sqlcmd cmdlet allows us to query the database from PowerShell, but a more powerful method to return data is through the use of ADO.NET.

Using ADO.NET for Queries

ADO.NET has two sets of objects, a connected set that allows you to connect with a data source, run queries and return result sets, and a disconnected set of objects that allow you to work with the data after it’s been collected.  Here’s a list of the objects.

ADO.NET Object       Description
--------------       -----------
Connection Object    A connection to the data source
Command Object       Can represent a query against a database, a call to a stored procedure, or a direct request to return the contents of a specific table
DataReader Object    Designed to return query results as quickly as possible
Transaction Object   Groups a number of changes to a database and treats them as a single unit of work; the Connection object has a BeginTransaction method that can be used to create Transaction objects
Parameter Object     Allows the specification of parameters for stored procedures or parameterized queries
DataAdapter Object   Acts as a bridge between the database and the disconnected objects in the ADO.NET object model

Table 4: ADO.NET Connected Objects

ADO.NET Object       Description
--------------       -----------
DataTable Object     Allows the examination of data through collections of rows and columns
DataColumn Object    Corresponds to a column in a table
Constraint Object    Defines and enforces column constraints
DataRow Object       Provides access to the DataTable's Rows collection
DataSet Object       The container for a number of DataTable objects
DataRelation Object  Defines the relations between DataTables in the DataSet object
DataView Object      Allows the examination of DataTable data in different ways

Table 5: ADO.NET Disconnected Objects

Here’s a script that returns a comma-separated list from the AdventureWorks database, based on the results of a query that returns product inventory counts where the current inventory level is below the reorder point:

#reorder.ps1
#This script pulls info from the Production Product and Inventory tables
# in AdventureWorks and presents it to the user
$cn = new-object system.data.SqlClient.SqlConnection("Data Source=HOME\MyInstance;Integrated Security=SSPI;Initial Catalog=AdventureWorks");
$ds = new-object "System.Data.DataSet" "dsInventoryData"
$q = "SELECT p.[ProductID]"
$q = $q + "      ,p.[Name]"
$q = $q + "      ,p.[ProductNumber]"
$q = $q + "      ,p.[SafetyStockLevel]"
$q = $q + "      ,p.[ReorderPoint]"
$q = $q + "      ,pi.QOH"
$q = $q + "  FROM [Production].[Product] p"
$q = $q + "  JOIN (SELECT [ProductID] ,Sum([Quantity]) as QOH"
$q = $q + "          FROM [Production].[ProductInventory]"
$q = $q + "          GROUP BY [ProductID]) pi"
$q = $q + "   ON pi.ProductID = p.ProductID"
$q = $q + "  WHERE pi.QOH < p.ReorderPoint"
$da = new-object "System.Data.SqlClient.SqlDataAdapter" ($q, $cn)
$da.Fill($ds)
$dtInventory = new-object "System.Data.DataTable" "dsInventoryData"
$dtInventory = $ds.Tables[0]
$dtInventory | FOREACH-OBJECT {[string]$_.ProductID + "," + $_.Name + "," + $_.ProductNumber + "," + $_.SafetyStockLevel + "," + $_.ReorderPoint + "," + $_.QOH
       }

To connect with SQL Server we need a standard connection string.  You can get an appropriate connection string for your script at http://www.connectionstrings.com/sql-server-2008.  Once the connection is established with the server you build your query and run it, using your query string and the connection object as parameters.  The script does this by creating a DataSet object to store the results, then, after building the query string, creating a DataAdapter object with the query and connection objects.  The Fill method of the DataAdapter object runs the query and loads the results into the DataSet.

The DataSet is an in-memory database of the results of the query, and contains DataTable objects.  Because we only submitted one query, the DataSet contains only one DataTable.  We create a DataTable object and load the table from the DataSet into our DataTable, and then pipe that to a ForEach-Object cmdlet to iterate through the results.  The $_ variable indicates the current object in the set we’re working through, so within the ForEach-Object scriptblock we’re building a string containing a comma-separated list of the results.  Here are the results from our script:

7
386,Hex Nut 1,HN-4402,1000,750,725
462,Lower Head Race,LR-8520,1000,750,701
853,Women's Tights, M,TG-W091-M,4,3,0
859,Half-Finger Gloves, M,GL-H102-M,4,3,0
876,Hitch Rack - 4-Bike,RA-H123,4,3,0
882,Short-Sleeve Classic Jersey, M,SJ-0194-M,4,3,0
910,HL Mountain Seat/Saddle,SE-M940,500,375,355

The 7 at the top of the results is not actually a data row; it's the value returned by the Fill method, which reports the number of rows loaded into the DataSet.
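If you don't want that row count appearing in your output, one common approach is to pipe the result of the Fill method to Out-Null, which discards it:

```powershell
# Fill returns the number of rows loaded; discard the value so it isn't displayed
$da.Fill($ds) | out-null
```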

We can achieve similar results with the new Invoke-Sqlcmd cmdlet: build the query in a variable (say $q) as we did in the script, pass it to Invoke-Sqlcmd, and pipe the results to the Export-Csv cmdlet.  Here's the updated script:

#reorder2.ps1
#This script pulls info from the Production Product and Inventory tables
# in AdventureWorks and exports it to a csv file
$q = "SELECT p.[ProductID]"
$q = $q + "      ,p.[Name]"
$q = $q + "      ,p.[ProductNumber]"
$q = $q + "      ,p.[SafetyStockLevel]"
$q = $q + "      ,p.[ReorderPoint]"
$q = $q + "      ,pi.QOH"
$q = $q + "  FROM [Production].[Product] p"
$q = $q + "  JOIN (SELECT [ProductID] ,Sum([Quantity]) as QOH"
$q = $q + "          FROM [Production].[ProductInventory]"
$q = $q + "          GROUP BY [ProductID]) pi"
$q = $q + "   ON pi.ProductID = p.ProductID"
$q = $q + "  WHERE pi.QOH < p.ReorderPoint"
invoke-sqlcmd -ServerInstance 'HOME\MyInstance' -Query $q -Database 'AdventureWorks' | export-csv reorder.csv

And the contents of the output file:

#TYPE System.Data.DataRow
ProductID,Name,ProductNumber,SafetyStockLevel,ReorderPoint,QOH
386,"Hex Nut 1",HN-4402,1000,750,725
462,"Lower Head Race",LR-8520,1000,750,701
853,"Women's Tights, M",TG-W091-M,4,3,0
859,"Half-Finger Gloves, M",GL-H102-M,4,3,0
876,"Hitch Rack - 4-Bike",RA-H123,4,3,0
882,"Short-Sleeve Classic Jersey, M",SJ-0194-M,4,3,0
910,"HL Mountain Seat/Saddle",SE-M940,500,375,355

As you can see, the results are almost identical, and this example shows that there are many different ways to accomplish a given task when using PowerShell with SQL Server.
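As a side note, the repeated string concatenation used to build $q in both scripts can also be written as a single PowerShell here-string, which preserves the query's layout; a sketch of the same query in that form:

```powershell
# The same query expressed as a here-string instead of repeated concatenation
$q = @"
SELECT p.[ProductID], p.[Name], p.[ProductNumber],
       p.[SafetyStockLevel], p.[ReorderPoint], pi.QOH
  FROM [Production].[Product] p
  JOIN (SELECT [ProductID], SUM([Quantity]) AS QOH
          FROM [Production].[ProductInventory]
          GROUP BY [ProductID]) pi
    ON pi.ProductID = p.ProductID
 WHERE pi.QOH < p.ReorderPoint
"@
```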

Putting it Together

We’ve already seen scripts that create and back up our databases.  Just to show one more example, we can use PowerShell to create a SQL Server Agent job to execute our backup.ps1 script.  The SMO objects needed for this script are as follows:

Figure 5: SMO SQL Server Agent Job Objects.

So, we can use the following script to execute the backup.ps1 script we created earlier:

#fullbackupjob.ps1
#This script creates a SQL Server Agent job that runs a PowerShell script once a day to back up user databases.
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') 'HOME\MyInstance'
$j = new-object ('Microsoft.SqlServer.Management.Smo.Agent.Job') ($s.JobServer, 'FullBackup')
$j.Description = 'Backup User Databases'
$j.Category = '[Uncategorized (Local)]'
$j.OwnerLoginName = 'sa'
$j.Create()
$jid = $j.JobID
$js = new-object ('Microsoft.SqlServer.Management.Smo.Agent.JobStep') ($j, 'Step 01')
$js.SubSystem = 'CmdExec'
$js.Command = 'powershell "& E:\Scripts\backup.ps1"'
$js.OnSuccessAction = 'QuitWithSuccess'
$js.OnFailAction = 'QuitWithFailure'
$js.Create()
$jsid = $js.ID
$j.ApplyToTargetServer($s.Name)
$j.StartStepID = $jsid
$j.Alter()
$jsch = new-object ('Microsoft.SqlServer.Management.Smo.Agent.JobSchedule') ($j, 'Sched 01')
$jsch.FrequencyTypes = 'Daily'
$jsch.FrequencySubDayTypes = 'Once'
$startts = new-object System.Timespan(2, 0, 0)
$jsch.ActiveStartTimeOfDay = $startts
$endts = new-object System.Timespan(23, 59, 59)
$jsch.ActiveEndTimeOfDay = $endts
$jsch.FrequencyInterval = 1
$jsch.ActiveStartDate = get-date
$jsch.Create()

This script runs the powershell.exe executable, not sqlps.exe.  If the server where this job is to run is running SQL Server 2008, you can change the JobStep's SubSystem property to 'PowerShell' and the Command property to 'e:\scripts\backup.ps1'.  The job is created by using the New-Object cmdlet to create a Job object and setting its properties.  Then a JobStep object is created and its properties are set as well.  Once the step has been created, its ID property is used to set the StartStepID property of the job, so SQL Server Agent knows where to start.  Finally, a schedule is needed; in this case the properties of the JobSchedule object are set so the job runs once daily at 2:00 AM, starting today.
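For a SQL Server 2008 target, the job step in the script above might instead be created with the PowerShell subsystem; a sketch of that variant:

```powershell
# Variant job step for a SQL Server 2008 server: use the PowerShell subsystem
# so the script runs under sqlps rather than an external powershell.exe call
$js = new-object ('Microsoft.SqlServer.Management.Smo.Agent.JobStep') ($j, 'Step 01')
$js.SubSystem = 'PowerShell'
$js.Command = 'e:\scripts\backup.ps1'
$js.OnSuccessAction = 'QuitWithSuccess'
$js.OnFailAction = 'QuitWithFailure'
$js.Create()
```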

Another powerful feature of PowerShell is the ability to easily retrieve information from Windows Management Instrumentation (WMI) objects.  SMO provides access to WMI, which allows you to manage the SQL Server services.  An example is the following script, which returns the TCP port number used by each SQL Server instance on the local system:

#tcpport.ps1
#Evaluates the SQL Server instances on a Windows server and returns the TCP port number used by each instance
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO")  | Out-Null
$m = New-Object ('Microsoft.SqlServer.Management.Smo.WMI.ManagedComputer') 'HOME'
$m.ServerInstances | ForEach-Object { $m.Name + '\' + $_.Name + ', ' +
       $m.ServerInstances[$_.Name].ServerProtocols['Tcp'].IPAddresses['IP1'].IPAddress.IPAddressToString + ':' +
       $m.ServerInstances[$_.Name].ServerProtocols['Tcp'].IPAddresses['IPAll'].IPAddressProperties['TcpDynamicPorts'].Value
       }

In addition to the SMO WMI objects, PowerShell provides the Get-WMIObject cmdlet (aliased as gwmi) to return information from these objects.  We can return information about our current system by executing the following statement:

gwmi -query "select * from Win32_ComputerSystem" | select Name, Model,
  Manufacturer, Description, DNSHostName, Domain, DomainRole,
  NumberOfProcessors, SystemType, TotalPhysicalMemory,
  UserName, Workgroup

This returns an object with the properties listed after the Select statement: basic information about the physical machine we're working with.

If we’re interested in finding out about the disk drives on our server, this query will help:

gwmi -query "select * from Win32_LogicalDisk where
  DriveType=3" | select Name, FreeSpace, Size

For each locally attached drive (DriveType=3), the properties returned are the drive letter (Name), the amount of free space, and the size of the drive, the last two in bytes.
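Because those values are in bytes, it's often convenient to convert them; for example, a sketch using calculated properties to report the sizes in gigabytes:

```powershell
# Report each local drive's size and free space in gigabytes
gwmi -query "select * from Win32_LogicalDisk where DriveType=3" |
  select Name,
    @{Name='SizeGB';      Expression={[math]::Round($_.Size / 1GB, 2)}},
    @{Name='FreeSpaceGB'; Expression={[math]::Round($_.FreeSpace / 1GB, 2)}}
```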

We can even retrieve performance counters, as we can see with this query:

gwmi -query "Select * from Win32_perfFormattedData_PerfOs_System" |
  select contextSwitchesPerSec, ProcessorQueueLength

There is a great deal of information available through WMI, and you can get the details on the WMI classes at this link: http://msdn.microsoft.com/en-us/library/aa394554(VS.85).aspx.

Conclusion

Scripting has long been a powerful tool for Unix administrators.  Until the introduction of PowerShell, Windows administrators had fewer and less capable options for automating administrative processes.

There are a great many sites providing quality information on using PowerShell to automate administrative tasks.  The addition of PowerShell support to the SQL Server 2008 tool set adds an extra dimension of manageability to the Windows Server environment.  You should review the available books and online material to develop your skills with this powerful tool, and by doing so automate and streamline the processes in your own environment.

About the author. Allen White is a SQL Server MVP based in Cleveland, Ohio.  He specializes in DBA Automation methods and enjoys teaching SQL Server classes and helping companies use SQL Server more effectively.
