Chapter 25
Running Scripts

In this chapter, you will learn about:

  • What Is a Script?
  • Executing a Script
  • Creating a Script
  • Scheduling a Script
  • Script Tips and Hints
  • Loading PowerCLI
  • Logging
  • Commenting Code
  • Passing Credentials
  • Getting Help

PowerShell is an incredibly flexible language that is capable of doing many different things, ranging from managing Active Directory to administering many storage arrays, and even providing configuration and change management using the Desired State Configuration feature. Many VMware administrators are primarily familiar with PowerShell from the perspective of managing virtual infrastructures, and consequently they aren’t necessarily familiar with all the intricacies of managing and executing scripts across many different environments.

All administrators can take advantage of PowerCLI scripts that have been written by others to help make tedious tasks easy to complete, and easy to repeat in a consistent manner. Reusing that code starts by placing it into a script file, which can be executed from the PowerShell command line, encapsulating the functionality in a convenient container. Let’s explore various aspects of scripts, how to create and execute them, and some best practices to follow when writing them.

What Is a Script?

You have encountered many different types of PowerShell code in this book so far. Many of them have been snippets and one-liners: quick-to-execute, easy-to-grasp bits of code that can be run directly from the PowerShell or PowerCLI command window. While created using a scripting language, they are not generally considered scripts.

A script is a text file that contains one or more PowerShell commands. It can be as simple as one line or it can contain many thousands of lines of code. The important factor is that it is a distinct file that is referenced in order to execute the code. Anyone can create a script file, and such files are particularly helpful when you want to reuse a series of commands in the future. By placing them into a script file, you can execute them at any time to take advantage of whatever clever bit of code you have created.

Most often the code is contained in a file that ends in .ps1. The .ps1 extension simply tells the Windows operating system that the file is a PowerShell script. Sometimes you will also see files with the .psm1 file extension. These are PowerShell script module files, which are meant to be referenced by other scripts to provide common functions, variables, aliases, and other reusable PowerShell constructs.

When should you use a script as opposed to a module? Well, the two types of files are very similar. Both contain PowerShell code written to be reused, and both can be used from the command line or from the PowerShell ISE. The difference between them is the intended usage. A PowerShell script (ending with .ps1) is intended to be executed, like a function or cmdlet, from the command line. Conversely, a PowerShell module (ending with .psm1) is meant to be included, or referenced, from a script or from the command line.
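
To make the distinction concrete, here is a minimal sketch; the file name VmTools.psm1 and the function it exports are purely illustrative:

# contents of a module file named VmTools.psm1 (illustrative name)
function Get-PoweredOnVm {
    # return only the virtual machines that are currently powered on
    Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" }
}

# make the function visible to anyone who imports the module
Export-ModuleMember -Function Get-PoweredOnVm

# from a script or the command line, reference the module...
Import-Module .\VmTools.psm1

# ...and the exported function becomes available
Get-PoweredOnVm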

Executing a Script

The reason you’re reading this book is that you want to take advantage of PowerCLI and its ability to automate your infrastructure. As you begin to explore the world of PowerCLI and PowerShell, you will encounter many snippets and scripts that you can take advantage of. However, you need to ensure that your environment is configured and ready to execute scripts before using them in production. This means you must ensure that the PowerShell execution policy is configured correctly to allow your scripts to execute.

The default execution policy is Restricted. This mode allows the execution of individual commands, for example from the command line, but does not allow scripts to run. To begin using PowerCLI scripts, you have to modify the policy to something less restrictive but still secure. The best option is the RemoteSigned value, which allows scripts you have created locally to run, while requiring scripts downloaded from the Internet to be signed by a trusted publisher.

To modify the execution policy, you must have administrator privileges on the desktop or server you are using. Find the PowerShell item on the Start menu, right-click, and select Run As Administrator.

Now that you have an elevated PowerShell prompt, you can modify the policy. Listing 25-1 shows the command and output of getting and setting the execution policy for the desktop or server you are using.

Listing 25-1: Modifying the PowerShell execution policy

Get-ExecutionPolicy
Restricted

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

Execution Policy Change
The execution policy helps protect you from scripts that you
do not trust. Changing the execution policy might expose you to
the security risks described in the about_Execution_Policies
help topic at http://go.microsoft.com/fwlink/?LinkID=135170. Do
you want to change the execution policy?
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): y

That’s all it takes to modify the policy. Now you are able to create scripts, import modules, and take advantage of the other benefits of PowerShell and PowerCLI without worry.
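
One additional note: if you cannot run PowerShell elevated, the policy can also be changed for your user account only, which does not require administrator rights. A quick sketch:

# change the policy for the current user only (no elevation required)
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

# list the effective policy at every scope to confirm the change
Get-ExecutionPolicy -List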

Creating a Script

Let’s take a series of commands that output the name, CPU count, and RAM amount of all the virtual machines managed by vCenter. Listing 25-2 shows the sample code.

Listing 25-2: Sample code that lists VM properties

Get-VM | Select Name,NumCpu,MemoryGB |
  Sort-Object -Property MemoryGB -Descending |
  Format-Table -AutoSize

Name               NumCpu  MemoryGB
----               ------  --------
VM1                     4        16
VM2                     2        12
VM3                     4        12
VM4                     2         6
VM5                     4         6
VM6                     4         4
VM7                     1         1
VM8                     1         1

This gives us a simple view of our virtual machines, sorted by RAM assignment from highest to lowest, and it is an easy example of a snippet of code that we may want to reuse, or execute, frequently. How do we turn this into a reusable script?

Easy! We save the code into a text file that has the .ps1 file extension (see Figure 25-1).


Figure 25-1: vm_report.ps1 text file
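
If you would rather stay in the console than open an editor, one way to create the same file is shown here; the PowerShell ISE or any text editor works equally well.

# write the report code from Listing 25-2 into vm_report.ps1
@'
Get-VM | Select Name,NumCpu,MemoryGB |
  Sort-Object -Property MemoryGB -Descending |
  Format-Table -AutoSize
'@ | Set-Content -Path .\vm_report.ps1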

Now that you have created your script file, let’s execute it:

.\vm_report.ps1

Name               NumCpu  MemoryGB
----               ------  --------
VM1                     4        16
VM2                     2        12
VM3                     4        12
VM4                     2         6
VM5                     4         6
VM6                     4         4
VM7                     1         1
VM8                     1         1

Notice that you got the exact same result as before. The code was executed from the script file as though you had typed it into the command line manually. This is quite handy, but let’s expand a bit by exploring PowerShell functions and reusing them across many scripts. Executing a script from a file works well, but it quickly gets cumbersome to manage a directory of all the code blocks that you want to reuse.

You have seen functions published throughout this book. They are an extremely convenient method of providing reusable code snippets across all of your scripts. Let’s create a simple function that will return all Windows virtual machines from our vCenter. Listing 25-3 shows the code for this example function.

Listing 25-3: Sample function that returns all Windows virtual machines

function Get-WindowsVm {
    Get-VM | Where-Object {$_.GuestId -like "*windows*"} |
        Select Name,NumCpu,MemoryGB |
        Sort-Object -Property MemoryGB -Descending
}

Saving this function to a PowerShell script file allows you to have it available at any time, but you can’t execute the function this way. Instead you need to dot-source the PowerShell script file. Dot-sourcing a file is different from simply executing it for one primary reason: when you execute a script, its functions and variables are destroyed and no longer accessible after the script ends, but when you dot-source the script, all of its functions and variables remain accessible to the session that sourced it. This means that after dot-sourcing the script file saved from Listing 25-3, our function is available to use. Listing 25-4 shows how to dot-source a script file and then execute the function contained in the file.

Listing 25-4: Dot-sourcing a script file

# dot source the file
. .\windows_vm_report.ps1

# execute the function
Get-WindowsVm
Name               NumCpu  MemoryGB
----               ------  --------
VM1                     4        16
VM4                     2         6
VM5                     4         6
VM7                     1         1
VM8                     1         1

Scheduling a Script

Sometimes you need an action to happen at a specific time, or after a certain event for which you may, or may not, be present. Maybe you want to run a report every morning before you arrive, or maybe you want to schedule a check each time the server reboots. Regardless, these tasks are accomplished using the Windows Task Scheduler service to execute PowerShell scripts at a configured interval.

To schedule a PowerShell script to be run by Windows, you create the task just like any other task. From the Computer Management console expand the Task Scheduler and browse to the Task Scheduler Library. Create a new basic task, give it a name, and set when it is executed. For our example, you’ll want to select Start A Program.

The program we want to execute is the PowerShell executable: powershell.exe. To specify which script you want to execute, in the Add Arguments field enter -File C:\path\script.ps1, where the latter part is the actual path to your script (see Figure 25-2).


Figure 25-2: Creating a scheduled task

Remember that you may need to adjust which user is executing the task to ensure that you have permissions to access the files and other resources that may be needed.

Alternatively, you can use PowerShell to create the scheduled task:

# The action is what the scheduled task will execute
$action = New-ScheduledTaskAction -Execute "powershell.exe" `
  -Argument "-File C:\path\script.ps1"

# execute the task every day at a specific time
$trigger = New-ScheduledTaskTrigger -Daily -At 4am

# Finally, create the scheduled task
Register-ScheduledTask -TaskName "My PowerCLI Script" `
  -Action $action -Trigger $trigger
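
On systems that include the ScheduledTasks module (Windows 8/Server 2012 and later), you can confirm the registration and trigger a test run on demand:

# confirm the task exists, then run it once to verify the script works
Get-ScheduledTask -TaskName "My PowerCLI Script"
Start-ScheduledTask -TaskName "My PowerCLI Script"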

Script Tips and Hints

The authors of this book have been writing PowerShell scripts individually for many years, and collectively for a lifetime. Over that time we have developed a number of best practices that make scripts easier to use and maintain as time goes on. These are recommendations we have frequently learned the hard way. You are not obligated to use any of these tips in your scripts, but we believe they will make your PowerShell and PowerCLI experience much easier, and that means more fun too!

Loading PowerCLI

Did you know that the PowerCLI window is special? When PowerCLI is started using the Desktop or Start menu shortcuts, it executes a series of commands to load the cmdlets, aliases, and other preferences that make using PowerCLI easier. Prior to PowerCLI version 6, the cmdlets were loaded using PSSnapins, which meant they had to be deliberately loaded when needed.

Unfortunately, this doesn’t happen with every PowerShell process that is started, and there is no guarantee that every time a script is executed it will be executed from a PowerCLI window. So, how do we fix this?

We have created a code listing (Listing 25-5) that you can place at the top of a script file; it checks for the PowerCLI modules and, if they are not present, loads the PSSnapins so that your script executes as expected regardless of how PowerShell was started. This listing is also helpful if you have scripts being executed on multiple hosts with multiple versions of PowerCLI and you want to ensure they are able to execute across all of them. Take note, though, that this code will not fix issues caused by using cmdlets from a newer PowerCLI version that do not exist in older versions. If cross-version compatibility is a concern, you must take special care to use only cmdlets that are available in all PowerCLI versions in your environment.

Listing 25-5: Loading the PSSnapins for a script

# include the following at the top of your script

# check to see if the modules exist, unloaded or loaded
if (! (Get-Module -ListAvailable VMware*) -and
    ! (Get-Module VMware*)
   ) {
    # no modules
    #check to see if the core PSSnapin is loaded
    if (! (Get-PSSnapin VMware.VimAutomation.Core `
             -ErrorAction SilentlyContinue)) {
        # no PSSnapin, load it
        Add-PSSnapin VMware.VimAutomation.Core
    }

    # check for the VDS PSSnapin
    if (! (Get-PSSnapin VMware.VimAutomation.Vds `
             -ErrorAction SilentlyContinue)) {
        # no PSSnapin, load it
        Add-PSSnapin VMware.VimAutomation.Vds
    }
}
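
As a side note, if every machine that runs your scripts is on a module-based PowerCLI release (roughly 6.5.1 and later), the check above can be reduced to a simple module import; the snippet in Listing 25-5 remains useful for mixed environments.

# on module-based PowerCLI releases, loading everything is one line
Import-Module VMware.PowerCLI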

Logging

Logging is one of the most frequently overlooked aspects of creating a script, but it is also one of the most important. Logging enables you to know what’s happening during execution; it provides real-time feedback of results and variable values and is invaluable for debugging and troubleshooting.

There are two ways to log data related to a script. The first is capturing the output of the script to a file so that it can be stored and reviewed at any time. The second is writing directly to a log file from inside the script to document its actions and progress.

Logging all output from a script is accomplished using the Start-Transcript and Stop-Transcript cmdlets. Just as the name describes, the start cmdlet begins writing everything that is output to the console to a file. The stop cmdlet ends this behavior. Let’s look at an example of using Start-Transcript and Stop-Transcript (see Listing 25-6).

Listing 25-6: Starting and stopping transcript recording

# Frequently it's helpful to log to the same location as the script.
# Using this path variable we will log to a file in the same directory,
# with the same name, as the invoking script but ending with ".log"
Start-Transcript -Path "$($MyInvocation.MyCommand.Definition).log"

# output something to the console
Get-VM | Where-Object {$_.PowerState -eq "PoweredOn"}

# stop writing to the transcript file
Stop-Transcript

If you check the contents of the file that was created, it will contain everything that was output by the command(s) executed. While logging console output via the transcript is helpful, sometimes you want more control over what’s being logged and where it is being logged. To help with this, we have provided a function, shown in Listing 25-7, that can be included in your scripts and used as a convenient helper for logging messages with different priorities.

Listing 25-7: The New-LogEntry function

function New-LogEntry {
    <#  .SYNOPSIS
        Creates log entries in a file and on the console.

        .DESCRIPTION
        Sends the log message provided to the log file and to the
        console using the specified message type.  Useful for quickly
        logging script progress, activity, and other messages to
        multiple locations.

        .EXAMPLE
        New-LogEntry -Log Warning -Message "Something bad happened."

        .EXAMPLE
        New-LogEntry -Message "This will output to the pipeline."

        .EXAMPLE
        New-LogEntry -Log Verbose -Message "Very descriptive events."

        .PARAMETER Log
        The type of log entry to make. Valid values are Output, Verbose,
        Warning, and Error. Default is Output.

        .PARAMETER Message
        The string message to send to the log file and the specified
        console output.

        .INPUTS
        None

        .OUTPUTS
        PSCustomObject
    #>
    [CmdletBinding()]
    Param(
        # the message to log
        [parameter(Mandatory=$true)]
        [String]$Message
        ,

        # set the default to output to the next command in the pipeline
        # or to the console
        [parameter(Mandatory=$false)]
        [ValidateSet('Output', 'Verbose', 'Warning', 'Error')]
        [String]$Log = 'Output'

    )
    process {
        # log to the same directory as the invoking script
        $logPath = "$($script:MyInvocation.MyCommand.Definition).log"

        # adding a time/date stamp to the log entry makes it easy to
        # correlate them against actions
        $formattedMessage = "$(Get-Date -Format s) [$($Log.ToUpper())] "
        $formattedMessage += $Message

        # write the message out to the log file
        $formattedMessage | Out-File -FilePath $logPath `
            -Encoding ascii -Append

        # write the message to the selected console location
        Switch ($Log) {
            "Output" { Write-Output $formattedMessage }
            "Verbose" { Write-Verbose $formattedMessage }
            "Warning" { Write-Warning $formattedMessage }
            "Error" { Write-Error $formattedMessage }
        }
    }
}

Using this function, you can easily control log messages using a single function and modify where they are sent (if they are sent to a file), or even just discard messages of a certain type if you decide to. Let’s look at how to use this function in our code, and the output:

if ((Get-VM $vmName).PowerState -eq "PoweredOn") {
    New-LogEntry "Virtual Machine is on."
} else {
    New-LogEntry "Virtual Machine is off." -Log Warning
}

This will result in output like that shown in Figure 25-3, depending on the status of the virtual machine.


Figure 25-3: Outputting log messages

It is rare to have too much logging, especially when you are debugging or tracking down errors. We highly recommend that when writing PowerCLI scripts you always log as much as needed to determine the status of the script, and this is particularly important when you are executing the script from a scheduled task. Scheduled tasks do not store the console output, so you must write those log messages to a location that can be accessed to verify the actions taken.

Commenting Code

Comments are a favor to the future. Writing scripts can be difficult. You are creating a script based on something that you know how to do right now, because you’ve been trying to do it for a few minutes, or hours, or even days. You know it completely, right now. So you create a script to be able to do it repeatedly, and distribute it to the other administrators in the organization so they can accomplish your automated task with ease.

Time goes on, things happen, and at some point you have to revisit your script. Maybe a new version of PowerCLI has been released, or an updated vSphere environment that necessitates an update to your script. Do you remember how it works? Do you remember why you wrote the code to do a particular action one way versus another?

Commenting code is tedious. Maybe you look at your script and say to yourself, “I know exactly what’s happening here; I’ll never forget that!” But time has a way of making all of us forget these things. Putting comments in code is possibly the most essential part of creating a script. It makes the script supportable and maintainable, and ensures that you (or anyone else) can understand exactly what’s happening and, more importantly, why it’s happening in your script.

There are two primary types of comments that you will see. The first is the single-line comment, which starts with a pound sign, or hash symbol (#). The second is the block comment, used for larger blocks of text: it opens with the character sequence less-than, hash (<#) and closes with the sequence hash, greater-than (#>). Listing 25-8 shows both.

Listing 25-8: Using comments in code

# this is an example of a single line comment

<# This is a multi-line comment.

   Each line does not have to start with a character, and
     inside the comment block the text can be styled however
     desired to make it readable.

This ends the comment block. #>

Comments are a simple thing to implement but so frequently ignored. We can’t emphasize enough how important they are to making scripts supportable and maintainable. Future you will thank current you for helping decode the past!

Passing Credentials

One of the most frequently asked questions we hear is how to receive, store, and retrieve credentials for scripts that are being run. There are a number of different ways to do this:

  • Pass a username and password on the command line
  • Pass a credential object on the command line
  • Store the password securely and retrieve when needed

Let’s look at each of these methods individually.

Pass Username and Password on the Command Line

This is probably the easiest method to implement, but it is by far the least secure. The username and password are both plain text, and they are probably being stored in plain text as well if you are scheduling the script to run automatically. Using this method, the script simply accepts username and password parameters that are then passed to a connection cmdlet, such as Connect-VIServer. Here is an example of how this might work:

<# Remember that a script file is very much like a function
   that is kept in a file.  It has parameters and begin,
   process, and end sections, just like a function.

   This is an abbreviated script used for example purposes only!
#>
param(
    [Parameter(Mandatory=$true)]
    [String]$Hostname,

    [Parameter(Mandatory=$true)]
    [String]$Username,

    [Parameter(Mandatory=$true)]
    [String]$Password
)
process {
    # connect to vCenter using plaintext credentials
    Connect-VIServer -Server $Hostname -User $Username -Password $Password

    # do an action
    Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" }

    # disconnect
    Disconnect-VIServer -Confirm:$false
}

The script can be executed as shown here and will return the expected result, but we want to reinforce that this is not the recommended way of passing credentials to your scripts.

.\Get-PoweredOnVms.ps1 -Hostname vcenter.domain -Username me -Password BadPW

Pass a Credential Object on the Command Line

A much more secure option is to pass a credential object on the command line. This can be done with the Get-Credential cmdlet before or during the invocation of a script.

# create a credential variable to pass along
$creds = Get-Credential

# connect to vCenter
Connect-VIServer -Server $hostname -Credential $creds

The Get-Credential cmdlet opens a dialog box, shown in Figure 25-4, that prompts for a username and password. These are stored securely so that they cannot be seen by others using the system. The downside to this implementation is that someone must be interactively using the system—you cannot use this method for scheduling a script to be run unattended.


Figure 25-4: Prompting for credentials

Store the Password Securely and Retrieve when Needed

This is, by far, the best method for storing credentials for scripts that will be executed via a scheduled task. The password is always kept in a secure format, preventing prying eyes from attempting to gain privileges they shouldn’t have. To further enhance security, the Windows Data Protection API used by these cmdlets ties the encrypted value to the user (and machine) that created it, so a user other than the original creator cannot decrypt the stored values.

The first step is to collect the credentials. This has to be done only once for each unique username and password combination. After the username and password are stored in a variable, you convert the password to a secure string object and write it to a file for later usage.

# collect credentials and store them in a variable
$credentials = Get-Credential

# convert the password to a secure string
$password = $credentials.Password | ConvertFrom-SecureString
# store the username and password in files
$credentials.UserName | Set-Content ".\username.txt"
$password | Set-Content ".\password.txt"

Now that the information has been securely stored, you can re-create the PSCredential object when needed.

# get the username from the file
$username = Get-Content ".\username.txt"

# get the secure password
$securePassword = Get-Content ".\password.txt"

# convert it to a secure string
$password = ConvertTo-SecureString $securePassword

# recreate the credential object
$credential = New-Object System.Management.Automation.PsCredential (
    $username,
    $password
  )

# use normally
Connect-VIServer -Server $hostname -Credential $credential

For a simple one-line implementation, you can use the Export-Clixml cmdlet. This makes for concise code that still stores the credential object securely.

# capture and store the credential
Get-Credential | Export-Clixml .\myCredential.xml

# retrieve the credential, after which the object is a standard PSCredential
$credential = Import-Clixml .\myCredential.xml

With only minimal setup, you can safely store the password for reuse in scripts without having to worry about it being read by trespassers.

Getting Help

We all need a little help now and then—there’s nothing wrong with that! There are, literally, thousands of cmdlets that enable you to automate nearly anything imaginable. It is simply impossible to remember all of them, their parameters, and how to use them in context. Fortunately, there are a number of ways in which we can leverage PowerShell to give us some help.

To view all of the cmdlets that are a part of the PowerCLI modules, you can execute the PowerShell cmdlet Get-Command and specify just the VMware modules.

# show all PowerCLI cmdlets
Get-Command -Module vm*

# show PowerCLI cmdlets for iSCSI
Get-Command -Module vm* -Name *iscsi*

The last bit of the snippet is helpful when you remember part of a cmdlet’s name, or if you are looking for all cmdlets related to a specific task—in the example, iSCSI.

The Get-Help Cmdlet

Using the Get-Help cmdlet is arguably the fastest way to get the information you want on how to use a particular cmdlet. By simply passing the name of the cmdlet you need help with to Get-Help, you are shown the syntax for executing it, a description, and much more, as shown in Figure 25-5.


Figure 25-5: Get-Help for a cmdlet

If examples are helpful, change the syntax slightly to pass the -Examples parameter (Figure 25-6).


Figure 25-6: Get-Help with examples

Finally, if you want to see everything available for a particular cmdlet, use the -Full parameter, which will return detailed information about the cmdlet’s parameters, inputs, outputs, and much more.
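
For quick reference, the common variations look like this (Get-VM is used here purely as an example):

# basic syntax and description
Get-Help Get-VM

# usage examples only
Get-Help Get-VM -Examples

# everything: parameters, inputs, outputs, notes, and more
Get-Help Get-VM -Full

# details for a single parameter
Get-Help Get-VM -Parameter Name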

You may have noticed that throughout the book we have included a large block of comments at the start of all functions. This block is known as “comment-based help,” and it enables you to use the Get-Help cmdlet to get syntax, parameters, examples, and everything else you would normally expect for cmdlets and functions from PowerShell.
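
For example, after dot-sourcing a script containing the New-LogEntry function from Listing 25-7 (the file name below is hypothetical), its comment-based help is displayed just like the help for any cmdlet:

# dot-source the script that defines New-LogEntry
. .\New-LogEntry.ps1

# display its comment-based help, including the examples
Get-Help New-LogEntry -Examples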
