SYSADMINTUTORIALS IT TECHNOLOGY BLOG

Automating The Installation Of VMware ESXi With PowerCLI


I saved a tweet a while back from William Lam about the ability to send keystrokes to a VM via PowerCLI. The first thing that popped into my mind was “automating a nested ESXi installation”. As you can imagine, this is extremely convenient when spinning up a new VMware lab.

Last weekend was the day for testing this out!

The script is built around a function that lets you specify keystrokes to send, via vCenter Server, directly to the VM. As you type a word, IP address, etc., the function loops through each character and sends them to the VM one by one. Let’s say you want to set the IP address for your ESXi host: you simply send 192.168.1.10 and, voila, it appears on the host.

Set-VMKeystrokes -VMName MYESXIHOST -StringInput "192.168.1.10"

The script function that William Lam provides is awesome in itself; however, I made a few tweaks to add additional functionality. What mods did I make, you ask?

The original script is great for single characters, but what if I need to send a function key such as F2, or press the ESC key?

Automating VMware ESXi Installations – Modifications to Original Function

In order to allow for these special keys, I changed the mandatory -StringInput parameter to optional. I also added another parameter called -SpecialKeyInput (see the example after this list), which includes:

  • All function keys (F1-F12)
  • Keyboard TAB, ESC, Backspace, Enter
  • Keyboard Up, Down, Left, Right
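
For example, with the modified function you can combine both parameter styles; the VM name and IP address below are placeholders:

Set-VMKeystrokes -VMName MYESXIHOST -SpecialKeyInput "F2"
Set-VMKeystrokes -VMName MYESXIHOST -StringInput "192.168.1.10" -ReturnCarriage $true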

With the modifications in place, it was time to create the script line by line. You will find the entire script below. It is important to note that you must do the following before running the script:

  • You will need PowerCLI – You can find the install instructions here
  • The script requires a minimum of ESXi 6.5
  • Set all your variables in the “Set your variables here” section of the script
  • There are 5-second sleep timers in the script. I found that in my lab some screens took 1-2 seconds to appear after pressing Enter
  • There are some PAUSEs in the script which allow you to double-check a setting or wait for the install. (Optionally, you can remove the PAUSE and set a long sleep timer, such as 5 minutes; see the snippet after this list)
  • Boot your ESXi VM to the installation screen (see screenshot below)
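
For reference, replacing an interactive PAUSE with a fixed five-minute wait is a one-liner:

Start-Sleep -Seconds 300 # wait 5 minutes instead of prompting for Enter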

Automating VMware ESXi Install with PowerCLI

Automating VMware ESXi with PowerCLI Demonstration

 

Automating ESXi Installation PowerCLI Script

Before running the script in your lab, please watch the video above so you can see what the whole process looks like and how the script will behave.

NOTE: Debug output is turned on for all Set-VMKeystrokes calls. You can disable debug output by changing the switch from -DebugOn $True to -DebugOn $False
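
For example:

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $False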

GITHUB: The script is also available on my Github page

<#
    ===========================================================================
    Main Script Created by: David Rodriguez
    Blog:                   www.sysadmintutorials.com
    Twitter:                @systutorials

    Set-VMKeystrokes Function
    Created by:    William Lam
    Organization:  VMware
    Blog:          www.virtuallyghetto.com
    Twitter:       @lamw 
    ===========================================================================
    .DESCRIPTION
    This Script will Automate your ESXi installation by sending keystrokes 
    directly to the VM.
    
    .SETTINGS
    Scroll down to, "Set your Variables here"
    
    .IMPORTANT
    When all your variables are set, you must boot ESXi so that it's waiting 
    at the installation prompt. Once at the installation prompt, you can 
    proceed to run this script and watch the magic happen
#>

# Connect to vCenter Server

$VcenterServer = "vmvcenter.vmlab.local" #vCenter Server that contains the nested ESXi host - replace with your vCenter Server address

Connect-VIServer $VcenterServer -ErrorAction Stop #Connect to vCenter Server

# Set your Variables here

$UseVlan = "No" #Use Yes/No here, if you would like to assign a vlan to your esxi host

$VlanID = "" #If using the above option, set VLAN ID here. If not leave blank ""

$VirtualMachine = Get-VM nsx-a-esxi4 #Set your nested ESXi VM Name here

$Ipv4Address = "192.168.1.89" #Set ESXi Host IP Address

$Ipv4Subnet = "255.255.255.0" #Set ESXi Host Subnet Mask
 
$Ipv4Gateway = "192.168.1.1" #Set ESXi Host Default Gateway

$PrimaryDNS = "192.168.1.101" #Set ESXi Host Primary DNS

$HostName = "nsx-a-esxi4.vmlab.local" #Set ESXi Host Name

$DNSSuffix = "vmlab.local" #Set ESXi DNS suffix

$RootPw = "vmware123!" #Set ESXi Root Password

sleep 5

# Functions
Function Set-VMKeystrokes {
<#
    .NOTES
    ===========================================================================
     Created by:    William Lam
     Organization:  VMware
     Blog:          www.virtuallyghetto.com
     Twitter:       @lamw
    ===========================================================================
    .DESCRIPTION
        This function sends a series of character keystrokes to a particular VM
    .PARAMETER VMName
		The name of a VM to send keystrokes to
	.PARAMETER StringInput
		The string of characters to send to VM
	.PARAMETER DebugOn
		Enable debugging which will output input characters and their mappings
    .EXAMPLE
        Set-VMKeystrokes -VMName $VM -StringInput "root"
    .EXAMPLE
        Set-VMKeystrokes -VMName $VM -StringInput "root" -ReturnCarriage $true
    .EXAMPLE
        Set-VMKeystrokes -VMName $VM -StringInput "root" -DebugOn $true
    ===========================================================================
     Modified by:   David Rodriguez
     Organization:  Sysadmintutorials
     Blog:          www.sysadmintutorials.com
     Twitter:       @systutorials
    ===========================================================================
    .MODS
        Made $StringInput Optional
        Added a $SpecialKeyInput - See PARAMETER SpecialKeyInput below
        Added description to write-hosts [SCRIPTINPUT] OR [SPECIALKEYINPUT]
    .PARAMETER StringInput
        The string of single characters to send to the VM
    .PARAMETER SpecialKeyInput
        All Function Keys i.e. F1 - F12
        Keyboard TAB, ESC, BACKSPACE, ENTER
        Keyboard Up, Down, Left, Right
    .EXAMPLE
        Set-VMKeystrokes -VMName $VM -SpecialKeyInput "F2"

#>
    param(
        [Parameter(Mandatory=$true)][String]$VMName,
        [Parameter(Mandatory=$false)][String]$StringInput,
        [Parameter(Mandatory=$false)][String]$SpecialKeyInput,
        [Parameter(Mandatory=$false)][Boolean]$ReturnCarriage,
        [Parameter(Mandatory=$false)][Boolean]$DebugOn
    )

    # Map subset of USB HID keyboard scancodes
    # https://gist.github.com/MightyPork/6da26e382a7ad91b5496ee55fdc73db2
    $hidCharacterMap = @{
		"a"="0x04";
		"b"="0x05";
		"c"="0x06";
		"d"="0x07";
		"e"="0x08";
		"f"="0x09";
		"g"="0x0a";
		"h"="0x0b";
		"i"="0x0c";
		"j"="0x0d";
		"k"="0x0e";
		"l"="0x0f";
		"m"="0x10";
		"n"="0x11";
		"o"="0x12";
		"p"="0x13";
		"q"="0x14";
		"r"="0x15";
		"s"="0x16";
		"t"="0x17";
		"u"="0x18";
		"v"="0x19";
		"w"="0x1a";
		"x"="0x1b";
		"y"="0x1c";
		"z"="0x1d";
		"1"="0x1e";
		"2"="0x1f";
		"3"="0x20";
		"4"="0x21";
		"5"="0x22";
		"6"="0x23";
		"7"="0x24";
		"8"="0x25";
		"9"="0x26";
		"0"="0x27";
		"!"="0x1e";
		"@"="0x1f";
		"#"="0x20";
		"$"="0x21";
		"%"="0x22";
		"^"="0x23";
		"&"="0x24";
		"*"="0x25";
		"("="0x26";
		")"="0x27";
		"_"="0x2d";
		"+"="0x2e";
		"{"="0x2f";
		"}"="0x30";
		"|"="0x31";
		":"="0x33";
		"`""="0x34";
		"~"="0x35";
		"<"="0x36";
		">"="0x37";
		"?"="0x38";
		"-"="0x2d";
		"="="0x2e";
		"["="0x2f";
		"]"="0x30";
		"\"="0x31";
		"`;"="0x33";
		"`'"="0x34";
		","="0x36";
		"."="0x37";
		"/"="0x38";
		" "="0x2c";
        "F1"="0x3a";
        "F2"="0x3b";
        "F3"="0x3c";
        "F4"="0x3d";
        "F5"="0x3e";
        "F6"="0x3f";
        "F7"="0x40";
        "F8"="0x41";
        "F9"="0x42";
        "F10"="0x43";
        "F11"="0x44";
        "F12"="0x45";
        "TAB"="0x2b";
        "KeyUp"="0x52";
        "KeyDown"="0x51";
        "KeyLeft"="0x50";
        "KeyRight"="0x4f";
        "KeyESC"="0x29";
        "KeyBackSpace"="0x2a";
        "KeyEnter"="0x28";
    }

    $vm = Get-View -ViewType VirtualMachine -Filter @{"Name"="^$($VMName)$"}

	# Verify we have a VM or fail
    if(!$vm) {
        Write-host "Unable to find VM $VMName"
        return
    }

    # Initialize the list of HID key events (shared by -StringInput and -SpecialKeyInput)
    $hidCodesEvents = @()

    # Code for -StringInput
    if ($StringInput) {
        foreach ($character in $StringInput.ToCharArray()) {
            # Check to see if we've mapped the character to HID code
            if ($hidCharacterMap.ContainsKey([string]$character)) {
                $hidCode = $hidCharacterMap[[string]$character]

                $tmp = New-Object VMware.Vim.UsbScanCodeSpecKeyEvent

                # Add leftShift modifier for capital letters and/or special characters
                if ( ($character -cmatch "[A-Z]") -or ($character -match "[!|@|#|$|%|^|&|(|)|_|+|{|}|||:|~|<|>|?|*]") ) {
                    $modifier = New-Object Vmware.Vim.UsbScanCodeSpecModifierType
                    $modifier.LeftShift = $true
                    $tmp.Modifiers = $modifier
                }

                # Convert to expected HID code format
                $hidCodeHexToInt = [Convert]::ToInt64($hidCode,"16")
                $hidCodeValue = ($hidCodeHexToInt -shl 16) -bor 0007

                $tmp.UsbHidCode = $hidCodeValue
                $hidCodesEvents += $tmp

                if ($DebugOn) {
                    Write-Host "[StringInput] Character: $character -> HIDCode: $hidCode -> HIDCodeValue: $hidCodeValue"
                }
            } else {
                Write-Host "[StringInput] The following character `"$character`" has not been mapped, you will need to manually process this character"
                break
            }
        }
    }

    # Code for -SpecialKeyInput
    if ($SpecialKeyInput) {
        if ($hidCharacterMap.ContainsKey([string]$SpecialKeyInput)) {
            $hidCode = $hidCharacterMap[[string]$SpecialKeyInput]

            $tmp = New-Object VMware.Vim.UsbScanCodeSpecKeyEvent

            # Convert to expected HID code format
            $hidCodeHexToInt = [Convert]::ToInt64($hidCode,"16")
            $hidCodeValue = ($hidCodeHexToInt -shl 16) -bor 0007

            $tmp.UsbHidCode = $hidCodeValue
            $hidCodesEvents += $tmp

            if ($DebugOn) {
                Write-Host "[SpecialKeyInput] Key: $SpecialKeyInput -> HIDCode: $hidCode -> HIDCodeValue: $hidCodeValue"
            }
        } else {
            Write-Host "[SpecialKeyInput] The following key `"$SpecialKeyInput`" has not been mapped, you will need to manually process this key"
            return
        }
    }
    
    # Add return carriage to the end of the string input (useful for logins or executing commands)
    if($ReturnCarriage) {
        # Convert return carriage to HID code format
        $hidCodeHexToInt = [Convert]::ToInt64("0x28","16")
        $hidCodeValue = ($hidCodeHexToInt -shl 16) + 7

        $tmp = New-Object VMware.Vim.UsbScanCodeSpecKeyEvent
        $tmp.UsbHidCode = $hidCodeValue
        $hidCodesEvents+=$tmp
    }

    # Call API to send keystrokes to VM
    $spec = New-Object Vmware.Vim.UsbScanCodeSpec
    $spec.KeyEvents = $hidCodesEvents
    Write-Host "Sending keystrokes to $VMName ...`n"
    $results = $vm.PutUsbScanCodes($spec)
}

# From this point forward, the ESXi configuration starts

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - Key Enter

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "F11" -DebugOn $True #ESXi Setup - Press F11 to accept EULA

Write-Host "Check that you are installing ESXi to local disk, if correct press Enter" -BackgroundColor DarkRed -ForegroundColor White

PAUSE

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - Key Enter

Write-Host "Setting US Default for Keyboard Layout" -ForegroundColor Yellow

SLEEP 5

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - Key Enter

Write-Host "Setting Root Password for ESXi" -ForegroundColor Yellow

SLEEP 5

Set-VMKeystrokes -VMName $VirtualMachine -StringInput "$RootPw" -ReturnCarriage $True -DebugOn $True #ESXi Setup - Root password

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "TAB" -DebugOn $True #ESXi Setup - Root password

Set-VMKeystrokes -VMName $VirtualMachine -StringInput "$RootPw" -ReturnCarriage $True -DebugOn $True #ESXi Setup - Root password

Write-Host "Beginning the Installation" -ForegroundColor Yellow

SLEEP 5

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "F11" -DebugOn $True #ESXi Setup - Begin Installation

Write-Host "Once the install has finished, press Enter to Reboot and Complete Installation" -BackgroundColor DarkGreen -ForegroundColor White

PAUSE

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - Key Enter

Write-Host "Once Host has rebooted, Press Enter to Begin ESXi setup" -BackgroundColor DarkRed -ForegroundColor White

PAUSE

#Start ESXi Configuration

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "F2" -DebugOn $True #ESXi Setup - F2 to login

SLEEP 5

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "TAB" -DebugOn $True #ESXi Setup - Tab to Password

Set-VMKeystrokes -VMName $VirtualMachine -StringInput "$RootPw" -ReturnCarriage $True -DebugOn $True #ESXi Setup - Root password

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - Key Enter

SLEEP 15

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down to Configure Management Network

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - Key Enter

#Use VLAN or Not

If ($UseVlan -eq "Yes")
    {
    Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down to VLAN

    Set-VMKeystrokes -VMName $VirtualMachine -StringInput " " -ReturnCarriage $True -DebugOn $True #ESXi Setup - Enter

    Set-VMKeystrokes -VMName $VirtualMachine -StringInput "$VlanID" -ReturnCarriage $True -DebugOn $True #ESXi Setup - Enter

    Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyESC" -DebugOn $True #ESXi Setup - ESC back

    Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down to IPv4 Configuration

    Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - Key Enter

    }
ELSE
    {
    Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down

    Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down to IPv4 Configuration

    Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - Key Enter

    }

#Set IPv4 Configuration

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down

Set-VMKeystrokes -VMName $VirtualMachine -StringInput " " -DebugOn $True #ESXi Setup - Space to select static IPv4 configuration

SLEEP 5

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyBackSpace" -DebugOn $True #ESXi Setup - Remove existing IPv4 Address

Set-VMKeystrokes -VMName $VirtualMachine -StringInput "$Ipv4Address" -DebugOn $True #ESXi Setup - Enter IPv4 Address

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyBackSpace" -DebugOn $True #ESXi Setup - Remove existing IPv4 Subnet

Set-VMKeystrokes -VMName $VirtualMachine -StringInput "$Ipv4Subnet" -DebugOn $True #ESXi Setup - Enter IPv4 Subnet

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyBackSpace" -DebugOn $True #ESXi Setup - Remove existing IPv4 Default Gateway

Set-VMKeystrokes -VMName $VirtualMachine -StringInput "$Ipv4Gateway" -DebugOn $True #ESXi Setup - Enter IPv4 Subnet

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - Remove existing IPv4 Default Gateway

SLEEP 5

#Disable IPv6

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - IPv6 Configuration

SLEEP 5

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyUp" -DebugOn $True #ESXi Setup - Key Down

Set-VMKeystrokes -VMName $VirtualMachine -StringInput " " -DebugOn $True #ESXi Setup - Select Disable IPv6

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - IPv6 Commit Changes

SLEEP 5

#Set DNS Configuration

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - DNS Configuration

SLEEP 5

Set-VMKeystrokes -VMName $VirtualMachine -StringInput "$PrimaryDNS" -DebugOn $True #ESXi Setup - Primary DNS IP Address

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyBackSpace" -DebugOn $True #ESXi Setup - Remove existing Hostname

Set-VMKeystrokes -VMName $VirtualMachine -StringInput "$HostName" -DebugOn $True #ESXi Setup -Hostname

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - DNS Commit Changes

SLEEP 5

#Set Custom DNS Suffixes

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyDown" -DebugOn $True #ESXi Setup - Key Down

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - DNS Commit Changes

SLEEP 5

Set-VMKeystrokes -VMName $VirtualMachine -StringInput "$DNSSuffix" -DebugOn $True #ESXi Setup -Hostname

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyEnter" -DebugOn $True #ESXi Setup - DNS Commit Changes

SLEEP 5

#Complete Setup and Reboot

Set-VMKeystrokes -VMName $VirtualMachine -SpecialKeyInput "KeyESC" -DebugOn $True #ESXi Setup - ESC back

SLEEP 5

Set-VMKeystrokes -VMName $VirtualMachine -StringInput "Y" -DebugOn $True #ESXi Setup - Apply Changes and reboot host

Write-Host "Your Host is now Ready!!" -BackgroundColor DarkGreen -ForegroundColor White

 



Cisco UCS FI 6332 Bootflash has unrecoverable error


Last week I was testing a Cisco UCS migration from the 6248UP fabric interconnects to the 6332. Straight out of the box, I noticed that the 6332s showed the following error:

ERROR: bootflash: has unrecoverable error; please do “format bootflash:”

Cisco UCS 6332 ERROR: bootflash: has unrecoverable error; please do “format bootflash”


I logged a ticket with Cisco TAC and we jumped onto a WebEx. They sent me the matching ucs-dplug file, which allows engineering to connect to the underlying Linux operating system (I was running Cisco UCS firmware 3.2(3g)).

Once in the Linux operating system, Cisco TAC was able to perform a file system check using the following commands:

  • umount /dev/sda3
  • e2fsck -y /dev/sda3
  • umount /dev/sda4
  • e2fsck -y /dev/sda4
  • umount /dev/sda5
  • e2fsck -y /dev/sda5
  • umount /dev/sda6
  • e2fsck -y /dev/sda6
  • umount /dev/mtdblock3
  • e2fsck -y /dev/mtdblock3
  • umount /dev/sda7
  • e2fsck -y /dev/sda7
  • umount /dev/sda8
  • e2fsck -y /dev/sda8

Upon completing the filesystem check, a reboot was performed which then showed the filesystem as consistent and error-free.

The Underlying Cisco UCS 6332 Issue

However, if the 6332 FI (Fabric Interconnect) has booted and power is then removed from the device (effectively a hard shutdown), the bootflash: error returns on the next power-up.

I raised this with Cisco TAC again; however, they are yet to resolve the issue or find a cause. To me, it is quite apparent that the filesystem is not unmounted cleanly, so when the bootflash runs its checks at the next boot, it is unable to repair itself.

We did not have this issue with the 6248UP FIs, only with the 6332.

Cisco UCS 6332 Consistent Bootflash Workaround

If you need to power down the FI, the only way I found to do this consistently is to connect to the console, reboot the switch, and, once the file system has been unmounted, repeatedly press CTRL+L until the switch lands at the Loader> prompt.

Once at the Loader> prompt, you can power off the switch, and the bootflash will be consistent the next time you start the switch.


How to Create VMware vSphere 6.7 Lab On Home PC


In this video tutorial, I walk you through how to set up a fully functional VMware vSphere 6.7 lab on your home PC using VMware Workstation.

This is the exact way I started with my home lab before I could afford some second-hand servers.

VMware vSphere 6.7 Lab Requirements

The lab requirements are:

  • VMware Workstation. You can download a trial from VMware and if you decide you like it you can purchase a copy online
  • Windows Active Directory Domain Controller – This is used primarily for DNS but can later be used for Active Directory authentication
  • VMware ESXi 6.7 and VMware vCenter 6.7 ISO file – I explain how you can download these in the video
  • Your home PC should have multiple CPU cores, at least 16 GB of RAM (the more RAM you have, the better), and an SSD of about 120 GB or more. You can run the lab on SATA disks; however, it will be extremely slow.

VMware vSphere 6.7 Lab Contents

I begin by showing you how to download a trial version of VMware Workstation and the VMware ESXi 6.7 and vCenter 6.7 ISOs. Once VMware Workstation is installed and the ISO files are downloaded, you should go ahead and create a Windows Active Directory Domain Controller. This will be used for DNS when it comes time to install VMware vCenter Server. The installation and setup of a Windows Domain Controller are not covered in this video; they are pretty straightforward, and if you have any questions, please leave a comment and we can help you out.

Next, we move on to creating our first VMware ESXi virtual machine within VMware Workstation. This includes setting up the VM and running through the installation.

Once we have VMware ESXi running, we access the GUI and create a nested VMware vCenter Server Appliance 6.7. There are 2 parts to this installation:

  • Part 1 – Deploying the virtual appliance
  • Part 2 – vCenter server setup

VMware vSphere 6.7 Home Lab

VMware vSphere 6.7 Lab Objective

By the end of this lab, you will have deployed a fully functional VMware vSphere 6.7 lab on your home PC. You will also have a solid vSphere foundation. You can then continue from here and create additional virtual machines, explore other VMware virtual appliances or test some of the more advanced features of vSphere. Because the best way to learn is to get in there and get your hands dirty.

VMware vSphere 6.7 Lab on your Home PC – Build a home lab

Thanks everyone for watching and feel free to post any comments below


NetApp Ontap 9.6 Simulator Upgrade


NetApp Ontap 9.6 RC2 was released today, and what better way to get familiar with it than to run it up in the lab as a simulator?

The simulator is not yet available via the NetApp Support Site; however, it is possible to upgrade an Ontap 9.5 simulator to 9.6 by following these simple steps.

Upgrading NetApp Ontap 9.5 Simulator to Ontap 9.6

Firstly, you’ll need to download the NetApp Ontap 9.6 image file from the support site. The direct link to the Ontap 9.6 image that can be used in the simulator is found here

Secondly, upgrading to Ontap 9.6 via the 9.5 GUI will not work; it actually crashed my simulator. Instead, we need to reboot the Ontap 9.5 simulator and press CTRL C at the boot menu.

NetApp Ontap 9.5 Boot Menu

Once you’ve pressed CTRL C, you will be presented with the Boot Menu Options. From within here, select option 7 – Install new software first.

The upgrade will be disruptive; if you agree, press Y to continue.

NetApp Ontap 9.5 Boot Menu Options

The next few steps require a temporary IP to be set up so the simulator can reach a server hosting the image. I use HTTP File Server (HFS). For my settings, I used e0c with IP 192.168.1.51, subnet 255.255.255.0, gateway 192.168.1.1, and the Ontap 9.6 file location http://192.168.1.110:8080/96RC2_q_image.tgz

Once all this information has been entered you can follow the prompts to upgrade the system.

When the system rebooted, I went back into the boot menu (shown above) and selected option 4 to Clean configuration and initialize all disks. This wiped the current Ontap 9.5 config and I started from a clean slate.

NetApp Ontap 9.6 System Manager

After logging into System Manager for the first time, you will notice a new button at the top – ‘Try the new experience’. This is the future System Manager; it is currently in read-only mode, but I’m guessing it will open up in future Ontap releases.

NetApp Ontap 9.6 new System Manager



How To Automate NetApp Installations With Ansible


Last month, I decided it was time to get my hands dirty and jump right into learning some Ansible Automation with NetApp.

I had bookmarked a blog series posted on NetApp.io (The Pub) that guides you through installing Ansible, updating the NetApp modules, understanding playbooks, creating your first playbook, and, lastly, a complete workflow example. The posts were written by one of NetApp’s Technical Marketing Engineers, David Blackwell.

The blog posts are well written, really easy to follow and within no time I had gone through the 5 sections and was ready to start creating my own Ansible Playbooks.

At the end of this blog post is my youtube video where you can see that in 51 seconds, I was able to have an entire cluster setup and ready to serve NFS volumes to VMware vSphere.

For this demonstration, I have prepared my NetApp Simulator by erasing the whole system (option 4 on the boot menu). This leaves us at the Cluster Setup Wizard.

Basic NetApp Cluster Setup in preparation for Ansible

First up, we will quickly run through the NetApp Cluster Setup and then create an Ansible user account with appropriate access within Ontap.

At the cluster setup wizard, the very first prompt asks you to enable AutoSupport. Type yes.

Netapp Cluster Setup

The NetApp simulator has 4 interfaces: e0a, e0b, e0c and e0d. The first 2 are used for the cluster interconnect. I’ll be making e0c my node management and cluster management port. Later on, we will be adding VLANs to e0d to create sub-interfaces.

You can see in the screenshot below I have selected e0c as my node management port, entered an IP address of 192.168.1.51, subnet mask of 255.255.255.0 and default gateway of 192.168.1.1

Once you have entered in the basic network information you have the option to continue the cluster setup via the web gui or via CLI. In this demo I will press Enter and continue the setup via the CLI.

Netapp Cluster Setup

Next up, I’ll type in ‘create’ to create a new cluster and when it asks if you would like to use this node as a single node cluster, I will type yes.

When asked for the private cluster network ports, make sure you select e0a and e0b. Anything shown in square brackets, for example [e0a,e0b], is the default.

I type yes to accept the defaults.

Netapp Cluster Setup

Enter in a password for the admin account.

Type a name for the cluster. In this demo my cluster name is CLUSTER96 (96 stands for Ontap 9.6)

Netapp Cluster Setup

Don’t worry about adding in feature license keys as I will be using Ansible to enter in all the NetApp Simulator license keys. You can simply press enter here.

Netapp Cluster Setup

In the screenshot below, you can see that the simulator wanted to place the cluster management IP on e0d, however, I have specified that I want it on e0c. My cluster management IP address is 192.168.1.50, subnet mask is 255.255.255.0 and the default gateway is 192.168.1.1.

I will enter the DNS domain name of my Windows Active Directory DNS server, which is vmlab.local

Netapp Cluster Setup+

The screenshot below shows that I entered the DNS domain name vmlab.local as well as the IP address of my DNS server, which is 192.168.1.101.

For the question ‘Where is the controller located’, you can type anything. I specified VMLAB.

Netapp Cluster Setup

Voila, with the above few simple steps, the basic NetApp setup is now complete. I can now browse to my cluster IP over HTTPS and log in to the cluster.

NetApp Cluster Setup

Now that we have our NetApp cluster up and running, we can move onto creating our Ansible playbook.

Preparing to Create My first Ansible Playbook

My NetApp simulator is running Ontap 9.6. With the NetApp Cluster Wizard now complete, it was time to think about all the necessary configuration steps in order to prepare the system for a VMware vSphere environment consisting of NFS datastores.

This is what I came up with:

  • Install NetApp Licenses
  • Set NTP
  • Set Timezone
  • Rename the Root Aggregate
  • Create and online a new Data Aggregate
  • Create a Vserver
  • Setup a VLAN
  • Creating a Broadcast Domain
  • Subnet Creation
  • Create an NFS Lif
  • Start NFS
  • Create NFS Export Rule
  • Add DNS Settings to Vserver
  • Create first NFS Volume
  • Create an additional NFS Volume

What Do I Need To Create My First NetApp Ansible Playbook

Ansible Playbooks are constructed using YAML (YAML Ain’t Markup Language). Having very little experience with YAML, I found it quite easy to get my head around how to construct one of these Ansible Playbook files. When I say easy, I mean it’s quite intuitive.

To be able to automate any system with Ansible, we need to use an Ansible module. For NetApp, there are plenty. You can find a list of storage modules here: https://docs.ansible.com/ansible/latest/modules/list_of_storage_modules.html

Basically, to start creating our playbook, we look for the module that fits each automation step. For example, in my list above, the first thing I want to do is install the NetApp simulator licenses. I browse the list of Ansible storage modules and click on the na_ontap_cluster module.

The Ansible NetApp cluster module documentation gives us a list of parameters to use in order to install a license. Underneath the parameters, it also gives some examples. I then pick which parameters I want to use and construct part of my playbook.

Let’s Dive In And Create Our First Ansible Playbook

Let’s take a look at my first Ansible Playbook which was to install NetApp simulator licenses. Below is the exact code I used:

#########################################################################################################################################
# -= Requirements =-
#
# 1. Make sure ansible user has been created
# 1a. security login create -vserver CLUSTER96 -role admin -application http -authentication-method password -user-or-group-name ansible
# 1b. security login create -vserver CLUSTER96 -role admin -application ontapi -authentication-method password -user-or-group-name ansible
##########################################################################################################################################

---
- hosts: localhost
  name: NetApp licensing
  vars:
    login: &login
      hostname: 192.168.1.50
      username: ansible
      password: Password123
      https: true
      validate_certs: false
    clustername: CLUSTER96
  tasks:
  - name: Install Licenses
    na_ontap_cluster:
      state: present
      cluster_name: "{{ clustername }}"
      license_code: "{{ item }}"
      <<: *login
    loop:
      - CAYHXPKBFDUFZGABGAAAAAAAAAAA
      - APTLYPKBFDUFZGABGAAAAAAAAAAA
      - WSKTAQKBFDUFZGABGAAAAAAAAAAA
      - CGVTEQKBFDUFZGABGAAAAAAAAAAA
      - OUVWXPKBFDUFZGABGAAAAAAAAAAA
      - QFATWPKBFDUFZGABGAAAAAAAAAAA
      - UHGXBQKBFDUFZGABGAAAAAAAAAAA
      - GCEMCQKBFDUFZGABGAAAAAAAAAAA
      - KYMEAQKBFDUFZGABGAAAAAAAAAAA
      - SWBBDQKBFDUFZGABGAAAAAAAAAAA
      - YDPPZPKBFDUFZGABGAAAAAAAAAAA
      - INIIBQKBFDUFZGABGAAAAAAAAAAA

If we break it down, anything starting with a # is a comment. As I didn’t want to use my NetApp admin credentials for my Ansible server, I documented, at the beginning of the file, the steps for creating a dedicated ansible user.

#########################################################################################################################################
# -= Requirements =-
#
# 1. Make sure ansible user has been created
# 1a. security login create -vserver CLUSTER96 -role admin -application http -authentication-method password -user-or-group-name ansible
# 1b. security login create -vserver CLUSTER96 -role admin -application ontapi -authentication-method password -user-or-group-name ansible
##########################################################################################################################################

Next, we always start a playbook with 3 dashes (---).

We then specify the host that runs Ansible; in this case, it is my localhost.

Before moving on to variables and tasks, we give this playbook a name. In my instance, I called this playbook NetApp Licensing.

---
- hosts: localhost
  name: NetApp licensing

It’s now time to start constructing our variables to be used as part of the playbook.

We start our variables with the vars: line and then list out all the variables we wish to use. You may be thinking that it’s unsafe to have the Ansible NetApp username and password contained within the file, and I 100% agree, but for a lab it’s fine. For production, you can encrypt the password into a password file.

I’m using a small trick that I learned from reading through David Blackwell’s blog posts. Instead of specifying the hostname, username and password in every task, you simply use <<: *login, which pulls in everything defined under login: &login below. You will see <<: *login being used in the next section when we move on to tasks.

vars:
  login: &login
    hostname: 192.168.1.50
    username: ansible
    password: Password123
    https: true
    validate_certs: false
  clustername: CLUSTER96

Tasks can consist of 1 or more actions. In this playbook, we only have the one task which is using the na_ontap_cluster module to install multiple licenses on a specific cluster.

The name of this task is ‘Install Licenses’

The Ansible module responsible for installing licenses to the cluster is called: na_ontap_cluster

When state is present, it means that the license keys should exist; if they don’t exist, Ansible installs them.

The cluster_name parameter calls the clustername variable we created under the vars: section of the playbook file.

When I first created this playbook file, it didn’t look like this: I could only install 1 license key per task. Coming from a PowerShell background, I wanted to use a loop. I spent some time researching how to do loops with Ansible, and the following section got updated to install multiple license keys under the one task.

The license_code parameter is set to item, which refers to one line under the loop: section, i.e. one license code. The task then loops through each license, checking whether it is installed on the system. If it is not installed, Ansible will go ahead and install it.

tasks:
- name: Install Licenses
  na_ontap_cluster:
    state: present
    cluster_name: "{{ clustername }}"
    license_code: "{{ item }}"
    <<: *login
  loop:
    - CAYHXPKBFDUFZGABGAAAAAAAAAAA
    - APTLYPKBFDUFZGABGAAAAAAAAAAA
    - WSKTAQKBFDUFZGABGAAAAAAAAAAA
    - CGVTEQKBFDUFZGABGAAAAAAAAAAA
    - OUVWXPKBFDUFZGABGAAAAAAAAAAA
    - QFATWPKBFDUFZGABGAAAAAAAAAAA
    - UHGXBQKBFDUFZGABGAAAAAAAAAAA
    - GCEMCQKBFDUFZGABGAAAAAAAAAAA
    - KYMEAQKBFDUFZGABGAAAAAAAAAAA
    - SWBBDQKBFDUFZGABGAAAAAAAAAAA
    - YDPPZPKBFDUFZGABGAAAAAAAAAAA
    - INIIBQKBFDUFZGABGAAAAAAAAAAA

Once I finished creating this file, I saved it as install_licenses.yml

To run the Ansible playbook, you simply type: ansible-playbook install_licenses.yml

Combining Multiple Tasks into One Ansible Playbook

The first bit was quite simple as I was only doing 1 task. Next I built an Ansible playbook for each of the following tasks:

  • Set NTP
  • Set Timezone
  • Rename the Root Aggregate
  • Create and online a new Data Aggregate
  • Create a Vserver
  • Setup a VLAN
  • Creating a Broadcast Domain
  • Subnet Creation
  • Creating an NFS Lif
  • Start NFS
  • Create NFS Export Rule
  • Add DNS Settings to Vserver
  • Create first NFS Volume
  • Create an additional NFS Volume

Following on from the multiple Ansible playbook files above, it was time to combine all these tasks into 1 Ansible playbook file. As a result, I then only need to execute 1 playbook.

I’m going to post this final code to my GitHub page which you can find by clicking here.

It’s a big file with lots of lines; don’t get scared. I’m hoping that after reading through David Blackwell’s blog above (Getting started with Ansible and NetApp), and reading through how I approached it, you will be able to understand the file, make changes and test it in your lab.

I have gone ahead and labeled all the variables within the YAML file so that you can easily identify which ones to change to suit your environment.

As an indication, manually configuring all the steps above can take around 1-2 hours. Executing the Ansible playbook with 14 tasks took me 51 seconds. You can see the live results of the playbook by viewing the video below.

If you have any questions in regards to any parts of the file, please leave a comment. If you’re an Ansible guru and have suggestions on how to make this playbook better, please also leave a comment.

Please take note that all of this testing was in my lab on the NetApp simulator.

Automating A NetApp Cluster Install in 51 Seconds with Ansible

The video below goes through what we spoke about in this blog post, so you can see it live. With a stopwatch displayed, you can experience first hand how long it takes to set up a NetApp system using Ansible.

 

 


Creating Multi-Part Ansible Playbook With Variables – NetApp and VMware


If you followed my last post where I created an Ansible Playbook to complete a full setup of a NetApp simulator, you would have seen that it was one gigantic playbook.

Working with one gigantic playbook can be a little overwhelming and a little messy, and as a result it is hard to reuse certain tasks in other playbooks.

I have been working this week on creating a multi-part Ansible playbook, whereby one main playbook calls multiple smaller playbooks. On top of this, I have also created a separate variables file that is referenced within the YAML files.

As a bonus, there is a new part of this automation whereby the NFS datastores we created get automatically mounted to each ESXi host within my VMware vSphere environment.

Multi-Part Ansible Playbook Files

Below is a list of files that I have created along with a description of what each one of them does. Later on, we’ll take a look inside and list the changes that have occurred since my previous post:

  • netapp_full_install_multi-part.yml (the main playbook that calls each of the other playbooks, 01 – 06)
  • variables.yml (contains all the variables that are called upon within each file below)
  • 01_install_licenses_setup_ntp.yml (NetApp Simulator: install NetApp licenses, set NTP and Timezone)
  • 02_create_aggregate.yml (NetApp Simulator: rename root aggregate, create and online data aggregate)
  • 03_create_svm.yml (NetApp Simulator: create SVM, start NFS, create NFS export rule, add DNS, create first NFS volume)
  • 04_network_setup.yml (Netapp Simulator: setup VLAN, create broadcast domain, create subnet, create NFS lif)
  • 05_create_volume.yml (NetApp Simulator: create additional NFS volume)
  • 06_mount_nfs_datastore.yml (VMware vSphere: Mount the 2 NetApp NFS volumes to ESXi)

Ansible Playbook Imports

Let’s take a look into the main file ‘netapp_full_install_multi-part.yml’. This file is used to call upon all playbooks labelled from 01 – 06

The way it calls each playbook is via a command called ‘import_playbook’. You can see from the code below that this approach is much neater and more structured than having all your commands in 1 big playbook. It also allows you to re-use some of these playbooks, such as 05_create_volume.yml for any subsequent volume creation, simply by changing the relevant components in the variables file.

##########################################################
# This Ansible Playbook calls multiple sub yml playbooks #
##########################################################

---
# Install Licenses and Setup NTP
- import_playbook: 01_install_licenses_setup_ntp.yml

# Rename Root Aggregate, Create and online new data aggregate
- import_playbook: 02_create_aggregate.yml

# Create SVM, start NFS, create NFS export rule, add DNS settings to SVM, create NFS volume
- import_playbook: 03_create_svm.yml

# Create NFS vlan, create broadcast-domain, create subnet, create NFS lif
- import_playbook: 04_network_setup.yml

# Create an additional NFS volume
- import_playbook: 05_create_volume.yml

# Mount NFS datastore to ESXi hosts
- import_playbook: 06_mount_nfs_datastore.yml esxihost=vmhost3.vmlab.local
- import_playbook: 06_mount_nfs_datastore.yml esxihost=vmhost4.vmlab.local
- import_playbook: 06_mount_nfs_datastore.yml esxihost=vmhost5.vmlab.local
- import_playbook: 06_mount_nfs_datastore.yml esxihost=vmhost6.vmlab.local

Ansible Playbook Variables File

If you remember, in my previous post we had a huge number of variables at the beginning of the file; if you don’t remember, it looked like this:

vars:
  login: &login
    hostname: 192.168.1.50 # NetApp Cluster IP
    username: ansible # Cluster User
    password: Password123 # Cluster Password
    https: true
    validate_certs: false
  clustername: CLUSTER96 # Cluster Name
  ntpservers: 192.168.1.101 # Time Server
  aggrrootoldname: aggr0_CLUSTER96_01 # Aggregate root name after Cluster Setup
  aggrrootnewname: aggr0_CLUSTER96_01_root # New Aggregate root name
  aggrdataname: aggr1_CLUSTER96_01_data # New Data Aggregate name
  diskcount: 26 # Number of disks to add to the Data Aggregate
  svmname: SVM1 # SVM or Vserver name
  rootvolname: SVM1_root # SVM root vol name
  rootvolaggr: aggr1_CLUSTER96_01_data # Which aggregate to place the SVM root vol
  rootvolsecurity: unix # SVM root vol security style
  allowedaggrs: aggr1_CLUSTER96_01_data # Allowed SVM data Aggregates
  allowedprotocols: nfs # Allowed SVM Protocols
  nfsclientmatchsubnet: 192.168.2.0/24 # Allow this subnet to access NFS
  svmdnsdomain: vmlab.local # SVM DNS Domain
  svmdnsservers: 192.168.1.101 # SVM DNS Servers
  nfsvolname: NFS_vol1 # First NFS Vol within your SVM
  nfsaggr: aggr1_CLUSTER96_01_data # Which Aggregate to place the NFS Vol on
  nfsvolsize: 100 # NFS Vol Size GB
  vlan: 5 # NFS VLAN
  parentinterface: e0d # Interface where VLAN will be created
  broadcastname: NFS # Create a new Broadcast Domain with this name
  broadcastports: ["CLUSTER96-01:e0d-5"] # Add ports here - multiple ports use comma
  subnetname: NFS-Subnet # NFS Subnet Name
  subnetnetwork: 192.168.2.0/24 # NFS Network Subnet
  subnetiprange: ["192.168.2.51-192.168.2.52"] # NFS LIF IP within the NFS subnet pool
  lifinterfacename: nfs_lif01 # SVM NFS Lif name
  lifhomeport: e0d-5 # Home port for SVM NFS Lif
  lifhomenode: CLUSTER96-01 # Home node for SVM NFS Lif
  lifaddress: 192.168.2.51 # SVM NFS Lif IP Address
  lifnetmask: 255.255.255.0 # SVM NFS Lif Subnet
  vservername: SVM1 # SVM or Vserver Name
  aggr: aggr1_CLUSTER96_01_data # Which Aggregate to create second NFS vol
  vol_name: ansibleVol # Second NFS vol name

I have now created a variables.yml file that each sub-playbook refers to. This approach is much neater and more user-friendly, and it makes it easy to re-use parts of the code.

This is what the new variables.yml file looks like:

##########################################################
# Variable File for 'netapp_full_install_multi-part.yml' #
##########################################################

# Cluster Login
clusterip: 192.168.1.50
user: ansible
pass: Password123
https_option: true
validate_certs_option: false

# Variables for '01_install_licenses_setup_ntp.yml'
clustername: CLUSTER96
ntpservers: 192.168.1.101

# Variables for '02_create_aggregate.yml'
aggrrootoldname: aggr0_CLUSTER96_01
aggrrootnewname: aggr0_CLUSTER96_01_root
aggrdataname: aggr1_CLUSTER96_01_data
diskcount: 26

# Variables for '03_create_svm.yml'
svmname: SVM1
rootvolname: SVM1_root
rootvolaggr: aggr1_CLUSTER96_01_data
rootvolsecurity: unix
allowedaggrs: aggr1_CLUSTER96_01_data
allowedprotocols: nfs
nfsclientmatchsubnet: 192.168.2.0/24
svmdnsdomain: vmlab.local
svmdnsservers: 192.168.1.101
nfsvolname1: NFS_vol1
nfsaggr: aggr1_CLUSTER96_01_data
nfsvolsize: 100

# Variables for '04_network_setup.yml'
clustername: CLUSTER96
vlan: 5 # NFS VLAN
parentinterface: e0d # Interface where VLAN will be created
broadcastname: NFS
broadcastports: ["CLUSTER96-01:e0d-5"] # Add ports here - multiple ports use comma
subnetname: NFS-Subnet
subnetnetwork: 192.168.2.0/24
subnetiprange: ["192.168.2.51-192.168.2.52"]
lifinterfacename: nfs_lif01
lifhomeport: e0d-5
lifhomenode: CLUSTER96-01
lifaddress: 192.168.2.51
lifnetmask: 255.255.255.0
vservername: SVM1

# Variables for '05_create_volume.yml'
aggr: aggr1_CLUSTER96_01_data
nfsvolname2: NFS_vol2
vservername: SVM1

# Variables: for '06_mount_nfs_datastore.yml'
vcenter_server: 192.168.1.104
vcenter_user: ansible@vsphere.local
vcenter_pass: Password123
voltype: nfs

Ansible Playbooks 01 – 06

I have taken all the tasks out of my previous gigantic playbook and created 6 smaller playbooks. Within each of these 6 playbooks, you will notice that nearly all values are variables, as opposed to hardcoded values. These variables are all controlled via the variables.yml file. The benefit of creating a variables.yml file is that there is no need to edit any of the playbooks labelled 01-06, which makes the playbooks more portable.

Below is the code for playbook ’01_install_licenses_setup_ntp.yml’. Nearly all the settings have a variable set as the value. To reference the variables.yml file, we need to add the following code to our playbooks:

vars_files:
- variables.yml

I have added a debug msg at the end of each playbook for your reference. This debug message appears once the playbook has completed its run:

- debug: msg="Licenses have been installed on {{ clustername }}"

Ansible Playbook Code 01_install_licenses_setup_ntp.yml

#########################################################################################################################################
# -= Requirements =-
#
# 1. Make sure ansible user has been created
# 1a. security login create -vserver CLUSTER96 -role admin -application http -authentication-method password -user-or-group-name ansible
# 1b. security login create -vserver CLUSTER96 -role admin -application ontapi -authentication-method password -user-or-group-name ansible
##########################################################################################################################################

---
- hosts: localhost
  gather_facts: false
  name: NetApp licensing
  vars:
    login: &login
      hostname: "{{ clusterip }}"
      username: "{{ user }}"
      password: "{{ pass }}"
      https: "{{ https_option }}"
      validate_certs: "{{ validate_certs_option }}"
  vars_files:
  - variables.yml
  tasks:
  - name: Install Licenses
    na_ontap_cluster:
      state: present
      cluster_name: "{{ clustername }}"
      license_code: "{{ item }}"
      <<: *login
    loop:
      - CAYHXPKBFDUFZGABGAAAAAAAAAAA
      - APTLYPKBFDUFZGABGAAAAAAAAAAA
      - WSKTAQKBFDUFZGABGAAAAAAAAAAA
      - CGVTEQKBFDUFZGABGAAAAAAAAAAA
      - OUVWXPKBFDUFZGABGAAAAAAAAAAA
      - QFATWPKBFDUFZGABGAAAAAAAAAAA
      - UHGXBQKBFDUFZGABGAAAAAAAAAAA
      - GCEMCQKBFDUFZGABGAAAAAAAAAAA
      - KYMEAQKBFDUFZGABGAAAAAAAAAAA
      - SWBBDQKBFDUFZGABGAAAAAAAAAAA
      - YDPPZPKBFDUFZGABGAAAAAAAAAAA
      - INIIBQKBFDUFZGABGAAAAAAAAAAA
  - name: Set NTP
    na_ontap_ntp:
      state: present
      version: auto
      server_name: "{{ ntpservers }}"
      <<: *login
  - name: Set Timezone
    na_ontap_command:
      command: ['cluster', 'date', 'modify', '-timezone', '{{ timezone }}'] # note: add a 'timezone' entry to variables.yml
      privilege: admin
      <<: *login
  - debug: msg="Licenses have been installed on {{ clustername }}"

Ansible Playbook 06_mount_nfs_datastore.yml

This playbook is a new addition to my build. It will take the 2 NFS volumes that we created on our NetApp simulator and mount them into my vSphere 6.7 environment.

Within the ‘netapp_full_install_multi-part.yml’ file you can see that I call this playbook 4 times. Each run sets the ‘esxihost’ variable to a different VMware ESXi host.

# Mount NFS datastore to ESXi hosts
- import_playbook: 06_mount_nfs_datastore.yml esxihost=vmhost3.vmlab.local
- import_playbook: 06_mount_nfs_datastore.yml esxihost=vmhost4.vmlab.local
- import_playbook: 06_mount_nfs_datastore.yml esxihost=vmhost5.vmlab.local
- import_playbook: 06_mount_nfs_datastore.yml esxihost=vmhost6.vmlab.local

Back to the ’06_mount_nfs_datastore.yml’ playbook. The loop below mounts the 2 NFS volumes to the VMware ESXi host set within the ‘esxihost’ variable.

---
- hosts: localhost
  name: Mount NetApp NFS Datastores
  gather_facts: false
  vars:
    login: &login
      hostname: "{{ vcenter_server }}"
      username: "{{ vcenter_user }}"
      password: "{{ vcenter_pass }}"
      validate_certs: no
  vars_files:
  - variables.yml
  tasks:
  - name: Mount NFS Datastores to ESXi Host
    vmware_host_datastore:
      state: present
      datastore_name: "{{ item.name }}"
      datastore_type: "{{ item.type }}"
      nfs_server: "{{ item.server }}"
      nfs_path: "{{ item.path }}"
      nfs_ro: no
      esxi_hostname: "{{ esxihost }}"
      <<: *login
    loop:
      - { 'name': '{{ nfsvolname1 }}', 'server': '{{ lifaddress }}', 'path': '/{{ nfsvolname1 }}', 'type': '{{ voltype }}'}
      - { 'name': '{{ nfsvolname2 }}', 'server': '{{ lifaddress }}', 'path': '/{{ nfsvolname2 }}', 'type': '{{ voltype }}'}
  - debug: msg="{{ nfsvolname1 }} & {{ nfsvolname2 }} datastores have been added to ESXi host {{ esxihost }}."

I tried to figure out a way to put all the ESXi hosts into a variable, as well as the datastores I wish to mount, and do 2 loops: one loop over the ESXi hosts and a second over the volumes. However, I’m yet to get that part working. When I do figure it out, I will update this post.
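
One pattern that should work here, although I have not verified it in my lab yet, is Jinja2’s product filter, which builds every host/volume pair so a single task can loop over both lists. The sketch below would replace the loop task inside 06_mount_nfs_datastore.yml; esxi_hosts and nfs_volumes are hypothetical list variables you would add to variables.yml:

# Hedged sketch only (untested). 'esxi_hosts' and 'nfs_volumes' are
# hypothetical variables, e.g. in variables.yml:
#   esxi_hosts: [vmhost3.vmlab.local, vmhost4.vmlab.local]
#   nfs_volumes: [NFS_vol1, NFS_vol2]
- name: Mount every NFS volume on every ESXi host
  vmware_host_datastore:
    state: present
    datastore_name: "{{ item.1 }}"
    datastore_type: "{{ voltype }}"
    nfs_server: "{{ lifaddress }}"
    nfs_path: "/{{ item.1 }}"
    nfs_ro: no
    esxi_hostname: "{{ item.0 }}"
    <<: *login
  loop: "{{ esxi_hosts | product(nfs_volumes) | list }}" # item.0 = host, item.1 = volume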

Ansible Playbooks on GitHub

I have copied all these playbooks up to my GitHub repository. If you want to copy this code, change some variables and try it out, just make sure all the files are in the same directory on your Ansible host.

If you have any questions or suggestion please let me know by leaving a comment below.

Ansible Playbook Netapp VMware


NetApp Auto Aggregate Volume Evacuation With PowerShell


Have you ever had to perform a large number of vol moves in order to evacuate or decommission an aggregate? This week I had to evacuate 50+ volumes from an aggregate that is being decommissioned. Instead of typing out the vol move command 50+ times, I decided to automate it with PowerShell.

If you have performed a large number of vol moves before, you will know that ONTAP has a limit on the maximum number of vol moves you can run at any one time. This is to prevent your system from running into I/O or throughput performance issues that could cause your production workloads to suffer.

We are going to break the script down into sections so I can explain what each part does, what you need to change for your environment, and a few other variables that you can optionally change. The script is available from my Github page by clicking here.

The requirements for this script are PowerShell and the NetApp PowerShell Toolkit (As of this blog post, toolkit 9.6 is the latest version)

NetApp Auto Aggregate Evacuation Script

This first part checks to see if the DataOntap PowerShell module is loaded. If it isn’t, then it will load the module

# Import Modules
IF (-not (Get-Module -Name DataOntap)) 
 {
 Import-Module DataOntap
 }

The second part prompts you to enter in your NetApp storage credentials and saves them into a variable

# Get-Credentials
$netappcreds = Get-Credential -Message "Enter your NetApp Storage Credentials"

Next, we have created a function to connect to the NetApp storage cluster. We use the credentials variable created above for authentication.

# Connect to storage system
Function connect-netapp
 {
 Connect-NcController 192.168.1.50 -credential $netappcreds
 }

This next part is a function that grabs the number of vol moves currently in-progress on the system. It outputs the volume name, destination aggregate, and percent complete. It also provides a counter number that is used in the script later on.

Function getvolmoves
 {
 # Get List of current volmoves
 $global:currentvolmoves = Get-NcVolMove | Where {$_.State -eq "healthy"}
 $global:counter = 0

 ForEach ($volmove in $currentvolmoves)
  {
  $global:counter++
  Write-Host $volmove.Volume "is still moving to" $volmove.DestinationAggregate "- Percent Complete =" $volmove.PercentComplete -ForegroundColor Yellow
  }
 }

Here we connect to the NetApp storage cluster by executing the function ‘connect-netapp’. The reason I put the NetApp connection in a function is to make the script neater. Within larger scripts, where you might need to connect to multiple NetApp clusters, I find it’s easier to reference a function. Lastly, if you need to make a change to the connection, you only need to make it in the function; you don’t need to search through the script looking for each connection line.

# Connect to storage system
connect-netapp

Now we have the source aggregate we wish to evacuate from and the destination aggregate we wish to vol move to. Change these to suit your environment.

# Aggregate to evacuate
$evacaggr = "aggr1_CLUSTER96_01_data"

# Destination aggregate
$destaggr = "aggr2_CLUSTER96_01_data"

The below command will get a list of volumes on the ‘source aggregate’ excluding any volume with the name ‘root’ in it. This is in case you have root load-sharing mirrors setup.

# Get list of volumes on aggregate
$listofvols = Get-NcVol | Where {$_.Aggregate -like $evacaggr -and $_.Name -notlike "*root*"}

This next part is the bread and butter of the script. The main loops.

First up we are going to start cycling through each volume within the variable $listofvols

I then get the latest ‘show vol move’ list with the command ‘Get-NcVolMove’ and look specifically for the volume we are currently working on in the loop. This is placed into the variable $volmovematch.

Now we call the function ‘getvolmoves’. The function will execute and provide a list of current vol moves in-progress as well as save the number of in-progress vol moves within the $counter variable.

Next we have a condition. If the vol move $counter is greater than or equal to 4, we are going to wait 10 mins and execute the getvolmoves function again. At the next run, if the vol move counter is still equal to or greater than 4, the script will repeat the 10min wait.

If the $volmovematch is empty, meaning that the volume to be moved is not in the list of current volume moves, then a vol move is triggered. BUT.. only if the $counter is less than 4.

ForEach ($vol in $listofvols)
 {

 # Look for vol match in list of current vols
 $volmovematch = Get-NcVolMove | Where {$_.Volume -eq $vol.Name}

 getvolmoves

 IF ($global:counter -ge 4)
  {
  Do
   {
   Write-Host "Do Loop ge 4, current counter = " $global:counter
   $date = (Get-Date).Tostring("HH:mm")
   Write-Host "$date - Vol move counter is greater than 4, sleeping 10 mins..." -ForegroundColor Yellow
   sleep 600
   getvolmoves
   }
  Until ($global:counter -lt 4)
  }

 IF (!$volmovematch -and $global:counter -lt 4)
  {
  Write-Host $vol.name "is now moving" -ForegroundColor Green
  Start-NcVolMove -DestinationAggregate $destaggr -Vserver $vol.Vserver -Name $vol.Name
  }
 }
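One thing the loop above does not do is wait for the final batch of vol moves to finish. If you want the script to end only once the aggregate is fully evacuated, a small drain loop can be appended after the ForEach, reusing the getvolmoves function. This is an optional addition of mine and is not part of the original script:

# Optional: wait until every remaining vol move has completed
getvolmoves
While ($global:counter -gt 0)
 {
 Write-Host "Waiting for the remaining vol moves to finish, sleeping 10 mins..." -ForegroundColor Yellow
 sleep 600
 getvolmoves
 }
Write-Host "Aggregate evacuation complete" -ForegroundColor Green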

NetApp Auto Aggregate Evacuation Script Screenshots

In this example, I need to migrate 8 CIFS volumes within my NetApp Ontap 9.6 simulator. The 8 volumes will be moving from source aggregate aggr1_CLUSTER96_01_data to destination aggregate aggr2_CLUSTER96_01_data

NetApp Auto Aggregate Evacuation

I use Windows PowerShell ISE to load and execute the script. In the screenshot below you can see that the script starts the vol moves for the first 4 volumes. It then realizes that there are 4 vol moves currently in progress, so it pauses for 10 mins before checking again.

NetApp Auto Aggregate Evacuation

After 10 minutes the script checks the status of the vol moves again. It realizes that there are still 4 vol moves in progress and sleeps for an additional 10 minutes. In the output, you can see the progress of the current vol moves.

NetApp Auto Aggregate Evacuation

In this next screenshot, notice that the last sleep time was at 15:11. At the next check, you can see that the vol move counter is set to 0. This means there are no vol moves in progress. Therefore, the script starts the remaining 4 vol moves. Since there are no more volumes to be moved, the script ends.

NetApp Auto Aggregate Evacuation

If we check back in our NetApp System Manager, we can see that all our CIFS volumes have moved to aggregate ‘aggr2_CLUSTER96_01_data’

NetApp Auto Aggregate Evacuation

The post NetApp Auto Aggregate Volume Evacuation With PowerShell appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.

Creating NetApp Root Load-Sharing Mirrors with Ansible

Who loves creating root load-sharing mirrors (LS-mirrors)? If you manage 1 Ontap system with 1 SVM you may enjoy it, but if you manage multiple systems with many SVMs then you may not enjoy it as much.

The above process can become quite repetitive in large environments. In today’s blog post I’m going to show you how to make these tasks a little less time consuming and more enjoyable.

Using Ansible along with NetApp modules we can automate the above tasks.

Ansible Playbooks for NetApp Root Load-Sharing Mirror

Similar to my previous Ansible posts I’ll be breaking down the code bit by bit so you can understand what is going on under the hood.

Steps involved in Creating a NetApp Root Load-Sharing Mirror

First up, let’s break down the steps required to create an LS-mirror and then we’ll look at the Ansible code.

  1. Create 2 DP volumes, each 1GB in size. If you have more than 1 HA pair in your cluster, these 2 volumes need to be created on an HA pair that does not contain the original SVM root volume.
  2. Create snapmirror relationships with type LS, and assign a schedule
  3. Initialize the snapmirror relationships
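For context, the manual ONTAP CLI equivalent of these three steps looks roughly like the following (the SVM, volume and aggregate names are taken from the lab used later in this post; treat this as an illustrative sketch rather than a copy-paste procedure):

volume create -vserver SVM1 -volume SVM1_root_m01 -aggregate aggr1_CLUSTER96_01_data -size 1g -type DP
volume create -vserver SVM1 -volume SVM1_root_m02 -aggregate aggr2_CLUSTER96_01_data -size 1g -type DP
snapmirror create -source-path SVM1:SVM1_root -destination-path SVM1:SVM1_root_m01 -type LS -schedule hourly
snapmirror create -source-path SVM1:SVM1_root -destination-path SVM1:SVM1_root_m02 -type LS -schedule hourly
snapmirror initialize-ls-set -source-path SVM1:SVM1_root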

Ansible Files for NetApp Root Load-Sharing Mirror Creation

The files I’ll be using to create my LS-mirrors are:

  • variables.yml (contains all my inputs)
  • 01_create_ls_vols.yml (Create 2 x DP volumes each 1 GB in size)
  • 02_create_ls_mirror.yml (Create and initialize LS-Mirror snapmirror relationships)
  • svm_ls_mirror_setup.yml (main Ansible playbook that calls each mini playbook above)

Variables.yml

The first part of this file contains our connection settings to the NetApp ONTAP cluster.

##################################################
# Variable File for 'netapp_ls-mirror_setup.yml' #
##################################################

# Cluster Login
clusterip: 192.168.1.50
user: ansible
pass: Password123
https_option: true
validate_certs_option: false

We then have our source cluster name. As this is an LS-Mirror the source and destination cluster name will be the same.

# Source Cluster Name
sourcecluster: CLUSTER96

The next variable is the SVM or Vserver name where we will be creating the LS-Mirror

# SVM Name
vservername: SVM1

This is the primary or source root volume that is created when you first create your SVM. It will be the source of our snapmirror LS relationships.

# LS Mirror Source Vol Variable
lsvolsrc: SVM1_root

The next group of variables contains the first and second root volume names, sizes and the aggregates they will be created on.

# LS Mirror Vol 1 Variable
aggr1: aggr1_CLUSTER96_01_data
lsvol1name: SVM1_root_m01
lsvol1size: 1 # Size is in GB

# LS Mirror Vol 2 Variable
aggr2: aggr2_CLUSTER96_01_data
lsvol2name: SVM1_root_m02
lsvol2size: 1 # Size is in GB

01_create_ls_vols.yml

First part of this playbook contains the connection information for my NetApp ONTAP cluster and also imports the variables.yml file.

---
- hosts: localhost
  name: Create LS Root Vols
  gather_facts: false
  vars:
    login: &login
     hostname: "{{ clusterip }}"
     username: "{{ user }}"
     password: "{{ pass }}"
     https: "{{ https_option }}"
     validate_certs: "{{ validate_certs_option }}"
  vars_files:
  - variables.yml

Next, we have 2 tasks. Each task creates a Data Protection (DP) volume by referencing the variables in the variables.yml file:

  tasks:
  - name: Volume 1 Create
    na_ontap_volume:
      state: present
      name: "{{ lsvol1name }}"
      vserver: "{{ vservername }}"
      aggregate_name: "{{ aggr1 }}"
      size: "{{ lsvol1size }}"
      size_unit: gb
      type: DP
      policy: default
      percent_snapshot_space: 0
      space_guarantee: none
      <<: *login
  - debug: msg="Volume {{ lsvol1name }} has been created."
  - name: Volume 2 Create
    na_ontap_volume:
      state: present
      name: "{{ lsvol2name }}"
      vserver: "{{ vservername }}"
      aggregate_name: "{{ aggr2 }}"
      size: "{{ lsvol2size }}"
      size_unit: gb
      type: DP
      policy: default
      percent_snapshot_space: 0
      space_guarantee: none
      <<: *login
  - debug: msg="Volume {{ lsvol2name }} has been created."

02_create_ls_mirror.yml

First part of this playbook contains the connection information for my NetApp ONTAP cluster and also imports the variables.yml file.

---
- hosts: localhost
  name: Create LS Mirror
  gather_facts: false
  vars:
    login: &login
     hostname: "{{ clusterip }}"
     username: "{{ user }}"
     password: "{{ pass }}"
     https: "{{ https_option }}"
     validate_certs: "{{ validate_certs_option }}"
  vars_files:
  - variables.yml

The next 2 tasks create snapmirror relationships with type LS. I have found that only the first snapmirror gets initialized at this stage.

  tasks:
  - name: Create First LS Mirror
    na_ontap_snapmirror:
      state: present
      source_volume: "{{ lsvolsrc }}"
      destination_volume: "{{ lsvol1name }}"
      source_vserver: "{{ vservername }}"
      destination_vserver: "{{ vservername }}"
      schedule: hourly
      relationship_type: load_sharing
      <<: *login
  - debug: msg="First LS Mirror Created."
  - name: Create Second LS Mirror
    na_ontap_snapmirror:
      state: present
      source_volume: "{{ lsvolsrc }}"
      destination_volume: "{{ lsvol2name }}"
      source_vserver: "{{ vservername }}"
      destination_vserver: "{{ vservername }}"
      schedule: hourly
      relationship_type: load_sharing
      <<: *login
  - debug: msg="Second LS Mirror Created."

I then added in a sleep command of 20 seconds which should be enough time for the first snapmirror relationship to initialize.

  - name: sleep for 20 seconds to allow for Volume 1 Snapmirror Initialization
    wait_for:
      timeout: 20

This task uses the na_ontap_command module which allows me to execute the same command as I would on an ONTAP cli shell. Here I’m initializing the second snapmirror relationship.

  - name: Initialize Snapmirror for Volume 2 # This is due to na_ontap_snapmirror not initializing second LS Mirror
    na_ontap_command:
      command: ['snapmirror', 'initialize', '-source-path', "{{ sourcecluster + '://' + vservername + '/' + lsvolsrc }}", '-destination-path', "{{ sourcecluster + '://' + vservername + '/' + lsvol2name }}"]
      privilege: admin
      <<: *login
  - debug: msg="Second LS Mirror initialized."

Svm_ls_mirror_setup.yml

This is the main Ansible playbook that I will execute in the next section. All this playbook does is run the 2 other playbooks in order.

##########################################################
# This Ansible Playbook calls multiple sub yml playbooks #
##########################################################

#########################################################################################################################################
# -= Requirements =-
#
# 1. Make sure ansible user has been created
# 1a. security login create -vserver CLUSTER96 -role admin -application http -authentication-method password -user-or-group-name ansible
# 1b. security login create -vserver CLUSTER96 -role admin -application ontapi -authentication-method password -user-or-group-name ansible
# 1c. security login create -vserver CLUSTER96 -role admin -application console -authentication-method password -user-or-group-name ansible
##########################################################################################################################################

---
# Create 2 x 1GB DP Volumes
- import_playbook: 01_create_ls_vols.yml

# Create SVM LS Mirror
- import_playbook: 02_create_ls_mirror.yml

Now it’s time to run our playbook and watch how quickly the LS-Mirror is created

Ansible NetApp LS-Mirror Creation

It’s probably a good time to mention now that I’m using Microsoft’s Visual Studio Code to connect to my Ansible server, manage files and run playbooks.

Within the terminal (which is the SSH connection to my Ansible server), I change to the directory containing the playbooks and run the following:

ansible-playbook svm_ls_mirror_setup.yml

Ansible NetApp LS-Mirror Creation

Once the playbook finishes its run, we can see a summary of changes in yellow:

Ansible NetApp LS-Mirror Creation

If I now return to my NetApp ONTAP cluster we can see that I have a healthy root LS-Mirror setup.

Ansible NetApp LS-Mirror Creation

Ansible Playbook Video

Ansible Code On GitHub

You can download and clone all the Ansible playbook files from my GitHub. The direct link to the code is here: https://github.com/sysadmintutorials/netapp-ansible-ls-mirrors

The post Creating NetApp Root Load-Sharing Mirrors with Ansible appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.


Getting Started with Terraform and VMware vCloud Director

If you have heard the announcements earlier this year and at VMworld 2019, you will know that VMware is embracing Terraform for VMware vCloud Director.

This means that your Automation for VMware vCloud Director just got a whole heap simpler saving you time and money.

What I’d like to demonstrate in this blog post is how I got started with Terraform, explaining what you’ll need, some of the code I use and give you a demonstration on how you can automate the creation of a new:

  • vCloud Organization
  • VDC
  • External Network
  • Organization vApp

What You Will Need for Terraform

The Terraform installation is super simple. In fact, it is just a single file. For Linux-based systems it is simply ‘terraform’ and for Windows-based systems it is terraform.exe

You can download the latest version of Terraform from the Hashicorp website using this direct link: https://www.terraform.io/downloads.html

I have used both CentOS and Windows, firstly starting with CentOS and then migrating over to Windows so that I can easily run Powershell and Terraform together.

For my CentOS system, I simply used the below command to download the Terraform executable into /opt/terraform. After downloading, simply unzip it by typing ‘unzip terraform_0.12.16_linux_amd64.zip’:

wget https://releases.hashicorp.com/terraform/0.12.16/terraform_0.12.16_linux_amd64.zip

For my Windows-based system, I simply downloaded the Terraform Windows x64 file into a folder I created called D:\Scripts\Terraform. I then added this folder to my PATH variable so that I can run terraform.exe from anywhere. This is done in System Properties – Advanced – Environment Variables
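If you prefer to script that step rather than clicking through the dialogs, something like this from an elevated PowerShell session should achieve the same result (this snippet is my addition; the folder path is the one created above):

# Append the Terraform folder to the machine-wide PATH
$terraformPath = "D:\Scripts\Terraform"
$machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine")
[Environment]::SetEnvironmentVariable("Path", "$machinePath;$terraformPath", "Machine")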

Windows Terraform Path

For this lab I’m running the following versions of software:

  • vCloud Director 9.7
  • vSphere 6.5U3
  • Terraform 0.12.16
  • Terraform provider.vcd 2.5.0
  • Visual Studio Code 1.40.1

Terraform vCloud Provisioning Example

We’ll now move into the exciting part of this post, which is examining the code used to provision a basic vCloud Organization, VDC, External Network and vApp. All code is up on my GitHub page which is located at: https://github.com/sysadmintutorials/terraform-vcloud-create-new-org-external-network

If you have been following my Ansible posts, you will see that I now use Microsoft Visual Studio Code to build and work with my automation scripts. It makes it way more simple than working with different programs and multiple windows. It’s a free download, check it out.

There are 3 main files that we’ll be working with:

  • variables.tf (defines the variable name as a variable with a short description)
  • terraform.tfvars (contains the actual variable values)
  • main.tf (contains providers and resources)

vCloud Terraform variables.tf

First off we’ll take a look at our variables.tf file. In this file we can define a few settings for each variable. For example, variable “vcd_user” is saying that anytime Terraform sees var.vcd_user (which we’ll see later on in the main.tf file), refer back to this variables.tf file, find the variable and check the following settings: type, default, description. In this demo we simply give a short description so we know what the variable is used for.

variable "vcd_user" {
    description = "vCloud user"
}

variable "vcd_pass" {
    description = "vCloud pass"
}

vCloud Terraform terraform.tfvars

Next, we’ll take a look at terraform.tfvars file. This is where we give value to the variables. For example org_name = “Terraform1”, means that anytime var.org_name is referenced in the main.tf file it will = Terraform1. Most variables below are self-explanatory but if you have any questions please leave a comment.

# vCloud Director Connection Variables

vcd_user = "terraform"
vcd_pass = "Password123"
vcd_url = "https://vcloud8director1.vmlab.local/api"
vcd_max_retry_timeout = "60"
vcd_allow_unverified_ssl = "true"

#vCloud Director External Network
extnet_name = "Terraform1-Lan"
extnet_description = "Terraform1 LAN - External VLAN"
extnet_gw = "192.168.10.1"
extnet_mask = "255.255.255.0"
extnet_dns1 = "8.8.8.8"
extnet_dns2 = "8.8.4.4"
extnet_suffix = "terraform1cust.local"
extnet_ip_pool_start = "192.168.10.16"
extnet_ip_pool_end = "192.168.10.31"
extnet_vcenter = "vcloud8vcenter" # vCenter Instance Name as it appears in vCloud Director

# vCloud Director Organization Variables
org_name = "Terraform1"
org_full_name = "My Terraform Organization"
org_description = "Terraform1 Create Org"

# vCloud Director Organization VDC Variables
vdc_alloc_model = "AllocationVApp" # Pay-As-You-Go
vdc_net_pool = "VMLAB pVDC A-VXLAN-NP"
vdc_pvdc_name = "VMLAB pVDC A"
vdc_name = "Terraform1-VDC-A"
vdc_description = "Terraform1 VDC Description"
vdc_storage_name = "Gold Storage Policy"
vdc_storage_limit = "102400"

vCloud Terraform main.tf

We now move onto the main.tf file. This is where we describe what we want our end environment to look like.

We do this by using a provider and multiple resources. The provider we are using in this demonstration is “vcd” (vCloud Director). The resources are then responsible for different parts of vCloud Director. For example “vcd_org” is responsible for creating, modifying or deleting an Organization.

Each resource contains multiple argument references. For example, the resource “vcd_org” will have an argument reference called name, where we define the name of the Organization. I have set most of the argument references to variables. Why did I do this? Because if I want to duplicate this terraform folder and use it to set up another vCloud Organization, I only need to change the values in the terraform.tfvars file. I don’t need to scroll through the main.tf file and make changes to each line, especially if the same value is used multiple times, such as Organization Name.

The first section specifies the provider to use, in this case, ‘vcd’. We then specify our connection settings to vCloud Director

# Connect VMware vCloud Director Provider
provider "vcd" {
  user                 = var.vcd_user
  password             = var.vcd_pass
  org                  = "System"
  url                  = var.vcd_url
  max_retry_timeout    = var.vcd_max_retry_timeout
  allow_unverified_ssl = var.vcd_allow_unverified_ssl
}

The second section creates an external network within vCloud Director. This is a VLAN-backed vSphere portgroup attached in the back-end. Later on, we will add this external network to our Organization VDC.

# Create new External Network

resource "vcd_external_network" "extnet" {
  name        = var.extnet_name
  description = var.extnet_description

    vsphere_network {
    name    = var.extnet_name
    type    = "DV_PORTGROUP"
    vcenter = var.extnet_vcenter
  }

  ip_scope {
    gateway    = var.extnet_gw
    netmask    = var.extnet_mask
    dns1       = var.extnet_dns1
    dns2       = var.extnet_dns2
    dns_suffix = var.extnet_suffix

    static_ip_pool {
      start_address = var.extnet_ip_pool_start
      end_address   = var.extnet_ip_pool_end
    }
  }
}

Terraform vCloud External Network

This next section creates a new vCloud Organization by specifying the name, full name, and description. You will notice there is a ‘depends_on’ setting. This means that this resource depends on the referenced resource completing before it executes. In this instance, before Terraform creates a new Organization, it must have completed the creation of the external network. This is extremely important as it prevents Terraform from executing resources in an arbitrary order.

# Create new vCloud Org
resource "vcd_org" "org-name" {
  name                = var.org_name
  full_name           = var.org_full_name
  description         = var.org_description
  is_enabled          = "true"
  delete_recursive    = "true"
  delete_force        = "true"
  can_publish_catalogs = "false"
  depends_on = [vcd_external_network.extnet]
}
Terraform vCloud Organization

I found a setting that is missing for the resource vcd_org, and that is policy leases. There isn’t any option to change the leases with Terraform at the moment. Therefore it sets the maximum runtime lease to 7 days and maximum storage lease to 30 days. I have an issue opened on the Terraform GitHub page which you can track with this link: https://github.com/terraform-providers/terraform-provider-vcd/issues/385

Terraform vCloud Organization

We now create our Organization VDC. When I write the code for Terraform, I think about the steps involved to create a VDC manually and then convert that to code.

To create a VDC we need to specify

  • Name
  • Description
  • Organization
  • Allocation Model
  • Network Pool
  • Provider VDC
  • Compute Capacity (i.e. CPU speed and guarantees)
  • Storage Profiles
  • Network Quota (how many networks the VDC can create)
  • Thin Provisioning (are the VMs thin provisioned)
  • Fast Provisioning (use a parent VM as the base, track changes in a new file for this VM)

Once we have this information, we then write the code for it. You will notice the ‘depends_on’ setting once again. In order to create an Organization VDC, you must have an Organization created, therefore in this resource, we depend on the creation of the Organization before trying to execute. The code for this section looks like this:

# Create new VDC

resource "vcd_org_vdc" "vdc-name" {
  name        = var.vdc_name
  description = var.vdc_description
  org         = var.org_name

  allocation_model = var.vdc_alloc_model
  network_pool_name = var.vdc_net_pool
  provider_vdc_name = var.vdc_pvdc_name

  compute_capacity {
    cpu {
      allocated = 0
    }
    memory {
      allocated = 0
    }
  }

  storage_profile {
    name = var.vdc_storage_name
    limit = var.vdc_storage_limit
    default = true
  }

  cpu_guaranteed = 0
  memory_guaranteed = 0
  cpu_speed = 2000
  network_quota = 10
  enabled = true
  enable_thin_provisioning = true
  enable_fast_provisioning = false
  delete_force = true
  delete_recursive = true 

  depends_on = [vcd_org.org-name]
}
Terraform vCloud Organization VDC

We’re almost done. In the second-last section, I add the external network we created earlier to our Organization VDC. This resource ‘depends_on’ the creation of the Organization VDC first.

# Org External Network
 resource "vcd_network_direct" "netdirect" {
   org = var.org_name
   vdc = var.vdc_name
   name = "Terraform1-Lan"
   external_network = "Terraform1-Lan"
   depends_on = [vcd_org_vdc.vdc-name]
 }
Terraform vCloud Organization External Network

Lastly, we have the creation of a Server vApp

# Org vApp - Servers
 resource "vcd_vapp" "vapp" {
   name = "Servers"
   org = var.org_name
   vdc = var.vdc_name
   depends_on = [vcd_network_direct.netdirect]
 }
Terraform vCloud Organization vApp

Running our Terraform Plan

This is where all our hard work in writing this code pays off. It’s time to execute our Terraform plan.

Within the same folder where we have our main.tf, terraform.tfvars and variables.tf files, type in ‘terraform init’; this will download the vcd provider. I then type in ‘terraform plan’. This command will go through your files and check the settings against your vCloud Director environment. It will then highlight with a green + (for additions) or red - (for removal) what settings will be changed, without actually making any changes.
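In short, the sequence from the working directory is:

terraform init
terraform plan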

Terraform Plan for vCloud Director

At the end of the plan, it will give a summary of how many additions, changes or deletions will be made

Terraform Plan for vCloud Director

Once you have reviewed all the changes being made, it’s time to run ‘terraform apply’. When you run this command, it will give a summary of all the changes being made.

Terraform Apply for vCloud Director

Below is a list of changes being made. Type ‘yes’ to apply.

Terraform Apply for vCloud Director

You will then get a summary of the resources being provisioned and how many additions, changes, and deletions occurred.

Terraform Apply for vCloud Director

Let’s say you want to make a change to the plan. For example, if we change the vApp name in main.tf from Servers to Web_Servers, we can simply run ‘terraform apply’ again; it will recognize there is a change and it will rename the vApp for you.

If you want to go ahead and delete the whole environment, let’s say this is a dev environment that was needed for a short period of time, then you can simply type in ‘terraform destroy’ and the whole environment will be removed.

Terraform Apply for vCloud Director

Terraform VMware vCloud Director Video Demonstration

The post Getting Started with Terraform and VMware vCloud Director appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.

Ansible Username Password Prompt with vars_prompt

If you have been following my Ansible demonstrations, you will have seen that, to authenticate to my NetApp simulator, I had the username and password embedded in the variables.yml file, like this:

# ONTAP Cluster Login
clusterip: 192.168.1.50
user: ansible
pass: Password123
https_option: true
validate_certs_option: false

Let’s face it, adding the username and password into the YAML file is almost like writing your password on a sticky note and hiding it under your keyboard.

This is ok for a lab environment; however, the other issue is running your playbook under the one account. How do you audit who did what?

Ansible vars_prompt

Today we are going to take a look at integrating vars_prompt into an existing playbook I wrote, which automated the creation of NetApp Root Load-Sharing Mirrors.

In the previous variables.yml file I have removed the ‘user: ansible’ and ‘pass: Password123’ variables.

I also collapsed all the main YAML files into 1 file named ’01_create_ls_mirror.yml’. This means that this file will create the DP volumes as well as the root LS snapmirrors.

If we take a look at the new YAML code, I have introduced the following ‘vars_prompt’ within ’01_create_ls_mirror.yml’:

---
- hosts: localhost
  name: Create LS Root Vols and LS-Mirror
  gather_facts: false
  vars_prompt:
    - name: clusteruser
      prompt: "Enter your {{ sourcecluster }} username ?"
      private: no
    - name: clusterpass
      prompt: "Enter your {{ sourcecluster }} password ?"
  vars:
    login: &login
     hostname: "{{ clusterip }}"
     username: "{{ clusteruser }}"
     password: "{{ clusterpass }}"
     https: "{{ https_option }}"
     validate_certs: "{{ validate_certs_option }}"
  vars_files:
  - variables.yml

Within ‘vars_prompt’ we have ‘name: clusteruser’. This part is setting a new variable called clusteruser.

Next is ‘prompt: "Enter your {{ sourcecluster }} username ?"’. This text will pop up on screen asking you to type in your username. We are going to see what this looks like shortly.

The last setting is ‘private: no’. This means the input you type in, in this case your username, is not hidden from view. The default setting of ‘private: yes’ is kept for your password.

You can then see, within the ‘vars:’ section, that username equals the clusteruser variable. We will enter our username before the playbook fully runs.

The screenshot below shows what the username and password prompt looks like. Once you enter in your credentials, they are checked against the NetApp cluster, authentication is granted and the rest of the Ansible playbook runs.

Ansible vars_prompt - Authentication Prompt

Ansible Code on GitHub

I’ve branched out with ‘update-version2’ from the original code. The direct link to the updated code can be found here: https://github.com/sysadmintutorials/netapp-ansible-ls-mirrors/tree/update-version2

The post Ansible Username Password Prompt with vars_prompt appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.

How to Upgrade Your NetApp Simulator to ONTAP 9.7

In this article I’m going to walk you through the steps I used to upgrade my NetApp Simulator from ONTAP 9.6 to the new ONTAP 9.7 RC1

Warning: This is strictly performed on the NetApp simulator. Do not follow this process for a production system. Always use NetApp Upgrade Advisor

I will perform the upgrade in the following order:

  • Shutdown current node and take a VM snapshot
  • Add additional disk to current root aggregate
  • Expand vol0 (this is so we can fit the new image)
  • Upload the Data ONTAP 9.7 software into the non-active image
  • Switch the non-active image to be default on next reboot

NetApp ONTAP 9.7 – Expanding vol0

The first thing we want to do is add an extra disk into our aggr0_CLUSTER96_01_root aggregate. As you can see there is only 167.41 MB remaining in this aggregate.

NetApp ONTAP 9.7 Upgrade

Right-click the aggr0_CLUSTER96_01_root aggregate and select Add Capacity.

NetApp ONTAP 9.7 Upgrade

Add in 1 spare disk, which will give me a new usable capacity of 6.86 GB. Click Add.

NetApp ONTAP 9.7 Upgrade

When asked ‘Are you sure you want to add capacity to the root aggregate?’ click Yes.

The aggregate might take a few minutes to show the new capacity while the new disk is zeroed. Hit refresh until you see the new Total Space

NetApp ONTAP 9.7 Upgrade

We’ll now log into the CLI. Establish an SSH session to the cluster. Once logged in type:

volume show

You can see in the screenshot below that my vol0 has only 617MB of free space

NetApp ONTAP 9.7 Upgrade

Within my Cluster Simulator, I only have 1 node. I’m going to drop into the node shell by typing:

node run localhost
NetApp ONTAP 9.7 Upgrade

The NetApp ONTAP 9.7 image is 2.37GB in size, so we need to make some room. To increase the size of vol0, type in:

vol size vol0 6g
NetApp ONTAP 9.7 Upgrade

If we drop out of the node shell by typing ‘exit’, we can see that we now have 3.41GB of free space on vol0

NetApp ONTAP 9.7 Upgrade

NetApp ONTAP 9.7 – CLI Upgrade

Now we are ready to start uploading the ONTAP 9.7 image. The latest ONTAP 9.7 release, at the time of this blog post, is RC1. You can retrieve the image here: https://mysupport.netapp.com/products/ontap9/9.7RC1/index.html

To upload the image to my NetApp cluster, I use a tool called HTTP File Server (HFS). You simply drag and drop the ONTAP 9.7 image into the HFS program and it will provide you with an HTTP link that we will paste into the NetApp Cluster SSH session later on.
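Any web server that can serve the image over HTTP will do the job here. For example, if you have Python 3 available, you could serve the directory containing the image with a one-liner (my alternative suggestion; the screenshots in this post use HFS):

python -m http.server 8080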

NetApp ONTAP 9.7 Upgrade

Returning to the SSH window, we enter advanced mode by typing:

set adv

We then take a look at the system images to determine which image is active. By typing in the command below, you can see that on my simulator image1 is both the default and the current image:

system image show
NetApp ONTAP 9.7 Upgrade

I’m going to be uploading and updating image2 since it is not ‘default’ and not ‘current’. To do that I type in the following command, keeping in mind that the -package URL is the URL provided by my HFS software.

system image update -node CLUSTER96-01 -package http://192.168.1.110:8080/97RC1_q_image.tgz -replace image2
NetApp ONTAP 9.7 Upgrade

Once the image has uploaded, the update will begin. It will take about 10 minutes.

NetApp ONTAP 9.7 Upgrade

If we now look at our system images, by typing ‘system image show’ we can see that image2 contains 9.7RC1

NetApp ONTAP 9.7 Upgrade

The last part of the upgrade is to set ‘image2’ to be the new default image. We can do that by typing:

system image modify -node CLUSTER96-01 -image image2 -isdefault true

We then verify that image2 has been set as default, by re-typing in ‘system image show’

NetApp ONTAP 9.7 Upgrade

Now I’ll reboot the node, and on the next bootup, we’ll see that the cluster has updated to 9.7RC1

reboot -node CLUSTER96-01
NetApp ONTAP 9.7 Upgrade

Once the node has rebooted, you can open a browser and type https://<cluster management IP>. You can then proceed to log in to System Manager

NetApp ONTAP 9.7 Upgrade

You will be presented with the brand new ONTAP System Manager. At the time of this blog post, there are still a few things not working. However, it is easy to revert to the previous version of ONTAP System Manager by clicking on ‘Return to classic version’ at the top right-hand side of the screen.

NetApp ONTAP 9.7 Upgrade

The post How to Upgrade Your NetApp Simulator to ONTAP 9.7 appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.

How To Join Existing vCenter Servers By Enhanced Linked Mode

Last month I was asked to join two existing VMware vCenter Servers each with embedded platform services controllers, in enhanced linked mode. Additionally, this will create one SSO domain and enable vCenter Enhanced Linked Mode.

The first thing I did was read VMware’s documentation which can be found by clicking here, and secondly, I created a lab with two vCenter 6.7 U3 appliances. Each VCSA was configured with its own embedded platform services controller and both use an SSO domain of vsphere.local.

It is worth noting that repointing an existing vCenter server from one domain to another is only supported in vCenter 6.7 U1 and above.

Current VMware vCenter Configuration

As I mentioned previously I have two VMware vCenter 6.7 U3 servers called VCLOUDPG-VC-A and VCLOUDPG-VC-B (VCLOUDPG stands for vCloud PlayGround, which is an area I do quite a bit of testing in). Each vCenter server has its own embedded platform services controller and has an SSO domain of vsphere.local.

vCenter Enhanced Linked Mode

Our objective is to join the two vCenter servers together to create an enhanced linked mode setup, which will look like this:

vCenter Enhanced Linked Mode

Steps to create vCenter Enhanced Linked Mode

The very first thing I did before making any changes was to shut down each vCenter server and create a VM snapshot. In addition to this, you can also create a vCenter backup via the vCenter Appliance Management page – Backup.

Once all the backups were sorted, I logged into VCLOUDPG-VC-B, as I want to join this vCenter to VCLOUDPG-VC-A. We run a pre-check to ensure that everything is OK and no conflicts are encountered before performing the actual repoint. The syntax of the domain-repoint command can be found by clicking here. The CLI command I enter on VCLOUDPG-VC-B is:

cmsso-util domain-repoint --mode pre-check --src-emb-admin administrator --replication-partner-fqdn vcloudpg-vc-a.vmlab.local --replication-partner-admin administrator --dest-domain-name vsphere.local

The 2 screenshots below are the output of the cmsso-util pre-check

vCenter Enhanced Linked Mode
vCenter Enhanced Linked Mode

As you can see in the screenshot above, in purple, ‘Conflict data, if any, can be found under /storage/domain-data/Conflict*.json’. That’s what we are going to do now: browse to that directory and check if we have any conflicts.

I enter into the vCenter shell, change directory to /storage/domain-data and then type ls to list the files. I can see that I have 1 Conflict file named Conflict_Roles.json. Let’s use vi to edit this file and take a look at what’s inside.

vCenter Enhanced Linked Mode

We can see that there are 2 roles, NoCryptoAdmin and Admin that have a conflict with the privilege Vsan.DataProtection.Management. The default action is to copy the roles across. I’ll exit vi by typing :q!

vCenter Enhanced Linked Mode

You can see the Roles, within the Web UI. In the screenshot below we are looking at the No Cryptography Administrator role. On the right hand side under vSAN, you can see the Data Protection Management Privilege.

vCenter Enhanced Linked Mode

It’s now time to perform the actual join. To do this we use the --mode execute option. The full command looks like the output in the following 2 screenshots:

cmsso-util domain-repoint --mode execute --src-emb-admin administrator --replication-partner-fqdn vcloudpg-vc-a.vmlab.local --replication-partner-admin administrator --dest-domain-name vsphere.local
vCenter Enhanced Linked Mode
vCenter Enhanced Linked Mode

Steps to check and verify vCenter Enhanced Linked Mode

Now that the domain re-join has completed successfully, we can check the replication partner via cli from VCLOUDPG-VC-B by entering into the shell, changing directory to /usr/lib/vmware-vmdir/bin and typing the following:

./vdcrepadmin -f showpartners -h localhost -u administrator -w VMware1

You will then see the output showing that this vCenter’s replication partner is VCLOUDPG-VC-A

vCenter Enhanced Linked Mode

Once we log into our vCenter server web UI, we will now see the 2 vCenter servers within the ‘single pane of glass’. Also, take note of the ‘Linked vCenter Server’ tab displaying the linked vCenter.

vCenter Enhanced Linked Mode

VIDEO DEMONSTRATION

The post How To Join Existing vCenter Servers By Enhanced Linked Mode appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.

How To Easily Create An AWS EC2 Linux Instance

This is a beginner’s guide to creating your first Amazon AWS EC2 Linux Instance in one of the world’s largest hyperscalers.

Before Getting Started

If you are just starting out with Amazon AWS, I would encourage you to have a read of the following posts, which will help you set up your free-tier account, enable multi-factor authentication and set up a billing alarm

Video Tutorial Contents

Within this video tutorial, I walk you through, step by step, how to create an AWS EC2 Linux instance, with a full description of each setting being used.

Not only will we be setting up a Linux server, but I’ll also be using some code that you can paste into the setup wizard to automate the setup of the webserver. When we browse to the webserver, a static page will display the internal IP address, the public IP address, and the public DNS name. We can do this by making use of the internal metadata URLs (which are explained within the video)
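For a sense of how those metadata URLs work, here are a few example queries you could run from a shell on the instance (these lines are illustrative; the full user-data script is linked further down):

curl http://169.254.169.254/latest/meta-data/local-ipv4
curl http://169.254.169.254/latest/meta-data/public-ipv4
curl http://169.254.169.254/latest/meta-data/public-hostname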

We will work through the tutorial together and go from this:

How to create an Amazon AWS EC2 Instance

To a brand new Amazon AWS EC2 t2.micro Linux Instance

How to create an Amazon AWS EC2 Instance

Once the server is fully initialized and running, we test the webserver component by browsing to the external DNS. You will then see the following contents within your browser

How to create an Amazon AWS EC2 Instance

Lastly, we create a new public/private key pair and use PuttyGen to extract the private key. I then import this key into Putty, where we establish an SSH session to the EC2 Linux instance.

Amazon AWS EC2 Video Tutorial

If you want to skip to a specific section of the video, you can click on the links below:

1. All ‘Instance Details’: 2:48

2. Providing a ‘user data’ script that will automatically make your Linux EC2 instance a web server (including 1 custom static index.html page displaying server network information): 10:50

3. Explanation and setup of the firewall rules: 14:45

4. Creating a public/private key pair: 17:12

5. Extracting the private key and importing it into Putty: 23:35

6. And finally, establishing an SSH session to the Linux EC2 Server: 25:18

AWS EC2 User-Data Script

The user-data script referenced in the video is not too long but not too short either, therefore I’ve added the code to my GitHub page where you can easily copy it.

3rd Party Software Needed

In order to extract the private key from the public key pair, as explained in the video, you will need to download Putty and PuttyGen.

The post How To Easily Create An AWS EC2 Linux Instance appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.

This Is How To Create An AWS Windows EC2 Instance

In last week’s post I went through how to create an Amazon AWS Linux EC2 Instance. Within that tutorial, we created a Linux web server and used an automated script to install the httpd service which then displayed a web page of the server’s internal IP, external IP and DNS name.

Today’s video tutorial guides you, step by step, through achieving the same end result as we did with the Linux server tutorial, but on Microsoft Windows Server.

Windows Server EC2 Instance

Windows EC2 Instance Pre-Reqs

If you are brand new and don’t have experience with Amazon AWS, don’t worry; I have written 3 getting-started articles that you can easily follow to get up and running with a free-tier account.

The 3 articles listed below should be followed in order:

However, if you already have an account and have some experience with AWS, you can jump straight into the tutorial.

Windows EC2 Instance Objective

I will guide you through using the AWS EC2 Instance creation wizard to:

  • Deploy Microsoft Windows Server 2016
  • Use a PowerShell script to automate the install of the IIS webserver, obtain AWS built-in meta-data such as internal IP address, external IP address and DNS name, and publish that information to a webpage called index.html (a minimal sketch of this idea follows this list)
  • Secure HTTP and RDP access so that it’s only allowed via a specific source IP
  • Create a new key-pair and use that key-pair to decrypt the Windows password
  • Test RDP and HTTP access to the server
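To give a rough idea of what such a user-data script can look like, here is a minimal PowerShell sketch of the same idea, assuming IMDSv1-style metadata access (the actual script I use is on my GitHub page, linked below, and is more complete than this):

# Install the IIS web server role
Install-WindowsFeature -Name Web-Server

# Query the instance metadata service for network details
$metadataBase = "http://169.254.169.254/latest/meta-data"
$localIp   = Invoke-RestMethod -Uri "$metadataBase/local-ipv4"
$publicIp  = Invoke-RestMethod -Uri "$metadataBase/public-ipv4"
$publicDns = Invoke-RestMethod -Uri "$metadataBase/public-hostname"

# Publish the details to the IIS default site
$html = "<html><body><h1>Windows EC2 Instance</h1>" +
        "<p>Internal IP: $localIp</p><p>Public IP: $publicIp</p>" +
        "<p>Public DNS: $publicDns</p></body></html>"
Set-Content -Path "C:\inetpub\wwwroot\index.html" -Value $html

In an actual EC2 launch, this body would be wrapped in <powershell></powershell> tags within the user-data field.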

When we browse to our new server, we will see the web page below:

Windows Server EC2 Instance

Windows Server Powershell Code

The Windows Server Powershell code that I paste into the user-data section of the EC2 creation wizard is quite long. Therefore I have made the code publicly accessible on my GitHub page. As a result, you can browse directly to the GitHub repository by clicking here.

Windows Server EC2 Instance Video

The post This Is How To Create An AWS Windows EC2 Instance appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.

The New Powerful Veeam V10 VM And NAS File Backups

Veeam V10 was recently released and I was keen to get this deployed in the lab and check out some of the new features this awesome product has to offer.

As a Cloud Service Provider, we heavily rely on Veeam to back up petabytes of data and we already have plans to roll version 10 into production in the next 1-2 weeks.

Veeam Backup and Replication V10

In today’s demo, we take a look at what features have been introduced when backing up a VMware vSphere environment. In addition, we also set up the brand new NAS file share backups.

To test the NAS file share backups I have deployed a NetApp Ontap 9.7P1 Simulator, configured for CIFS and serving out 1 file share with Veeam and NetApp documentation PDFs.

Veeam Lab Setup

For this lab, I have set up the following (this is explained in more detail within the video below):

  1. Windows 2016 Server running Veeam Backup And Replication V10
  2. Veeam Scale-Out Backup Repository (Containing 2 extent luns)
  3. VMware vSphere 6.7 with 2 Windows VM’s
  4. NetApp Ontap 9.7P1 Simulator as my SMB File Share
  5. Minio S3 Object Storage running on CentOS

Veeam Backup and Replication V10

Veeam Lab Objective

The objective of this lab is to perform a VM backup of the 2 virtual machines within my vSphere 6.7 environment and to take a look at some of the new features introduced with Veeam Version 10.

This includes creating a new Scale-Out backup repository and configuring it to copy backups to object storage (Capacity Tier).

During the VM Backup job creation, I display a screenshot on the left of Veeam v9.5 and a screenshot on the right of Veeam v10 so you can see the differences between the 2 versions.

The next objective is to test the new NAS file-level backup feature in Veeam Version 10. I add the CIFS share to the Veeam console and then create a file share backup job with encryption. Besides a standard backup to the Scale-Out Backup Repository, I also enable Archiving of files to object storage.

Veeam Backup and Replication V10 – VM and NAS File-Share Backups

The post The New Powerful Veeam V10 VM And NAS File Backups appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.


Veeam Backup and Replication V10 – VM and NAS File Restores

Previously, in part 1 of Veeam Backup and Replication V10, we created a virtual machine and NAS file-share backup.

The virtual machine backup was configured to back up to our ReFS scale-out backup repository. In addition, we also created identical copies of the backup files in our S3 object storage bucket. This utilizes one of many Veeam Backup and Replication V10 features, called "Copy backups to object storage as soon as they are created".

We then configured the new NAS file-share backup. Here we added a NetApp CIFS share to the Veeam Backup and Replication V10 console, set the backup job to keep all file versions for the last 1 day, and keep previous file versions within our S3 object storage (Minio) for 1 month.

Veeam Backup and Replication V10

Veeam Virtual Machine Restores

In part 2, we take a look at performing the following virtual machine restores:

  • delete a virtual machine (webserver16), and perform a full restore back to the original location
  • restore a virtual machine guest file (webserver16) back to the original location
  • delete a second virtual machine (webserver19), and perform an instant VM recovery directly from S3 Object Storage. We can see the VM powered on and running from our S3 bucket.

Veeam NAS File Share Restores

Following on from virtual machine restores, we then utilize the new Veeam V10 NAS file share backup and recovery to perform the following:

  • restore a modified CIFS file from the S3 object storage archive
  • restore a deleted CIFS file from the S3 object storage archive

Veeam V10 VM and NAS File Share Restore Video

If you expand the video description within Youtube, you can click on the time code which will then take you directly to the Veeam Backup and Replication V10 section of the video you are interested in.

If you have any questions please leave them in the comments below or within Youtube.

The post Veeam Backup and Replication V10 – VM and NAS File Restores appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.

Learn How To Configure Windows Server 2019 Active Directory

A few comments came in via my Youtube channel requesting a new Active Directory setup tutorial, especially with some of the VMware vCenter tutorials requiring some form of DNS.

As a result, here we are with what the followers have requested, utilizing Microsoft Windows Server 2019.

This video did take a little longer to make because I wanted to not only show you guys how to set up Active Directory via the GUI, but also how to accomplish the same task via Powershell.

Before diving into automation and using Powershell, it’s important that you understand all the working components of Active Directory and what it looks like setting it up in the GUI.

Microsoft Windows Server 2019 Active Directory

Active Directory Setup With The GUI

We start this tutorial with a brand new installation of Windows Server 2019, therefore we must set a few crucial components such as the hostname and a static IP address. Optionally we can also enable Remote Desktop and disable ‘IE Enhanced Security Configuration’

Within Server Manager we then install the necessary roles for Active Directory and enter some basic configuration.

This is followed by further configuration within DNS and lastly, Active Directory Sites and Services.

Automating an Active Directory Install with Powershell

Upon completion of the GUI setup, I then revert the server back to its original state via a previous snapshot.

Now comes the fun part. I’ve written 3 scripts that will automate the entire process we went through with the GUI method. Within the video below I walk you through, step by step, how the scripts work, and what variables you need to change to suit your environment.

By automating a task like this, the speed of deployment is much faster, but most importantly we can utilize the same scripts to configure more domain controllers. Because as they say: if you need to do anything more than once, automate it!!
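To give a flavour of the Powershell route, the core of promoting a first domain controller comes down to a couple of cmdlets from the ADDSDeployment module. The snippet below is a minimal sketch with placeholder lab values (the domain name, NetBIOS name and password are my assumptions), not the full 3-script setup covered in the video:

# Install the AD DS role and management tools
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

# Promote this server to the first domain controller of a new forest
# (domain name, NetBIOS name and password below are placeholders)
Install-ADDSForest `
    -DomainName "vmlab.local" `
    -DomainNetbiosName "VMLAB" `
    -InstallDns `
    -SafeModeAdministratorPassword (ConvertTo-SecureString "P@ssw0rd123" -AsPlainText -Force) `
    -Force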

Windows Server 2019 Active Directory Domain Controller Install

The post Learn How To Configure Windows Server 2019 Active Directory appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.

Creating NetApp Root Load-Sharing Mirrors with Ansible

$
0
0

Who loves creating root load-sharing mirrors (LS-mirrors) ? If you manage 1 Ontap system with 1 SVM you may enjoy it, but if you manage multiple systems with many SVM’s then you may not enjoy it as much.

The above process can become quite repetitive in large environments. In today’s blog post I’m going to show you how to make these tasks a little less time consuming and more enjoyable.

Using Ansible along with NetApp modules we can automate the above tasks.

Ansible Playbooks for NetApp Root Load-Sharing Mirror

Similar to my previous Ansible posts I’ll be breaking down the code bit by bit so you can understand what is going on under the hood.

Steps involved in Creating a NetApp Root Load-Sharing Mirror

First up, let’s break down the steps required to create an LS-mirror and then we’ll look at the Ansible code.

  1. Create 2 DP volumes, each 1GB in size. If you have more than 1 HA pair in your cluster, these 2 volumes need to be created on an HA pair that do not contain the original svm root volume.
  2. Create snapmirror relationships with type LS, and assign a schedule
  3. Initialize the snapmirror relationships

Ansbile Files for NetApp Root Load-Sharing Mirror Creation

The files I’ll be using to create my LS-mirrors are:

  • variables.yml (contains all my inputs)
  • 01_create_ls_vols.yml (Create 2 x DP volumes each 1 GB in size)
  • 02_create_ls_mirror.yml (Create and initialize LS-Mirror snapmirror relationships)
  • svm_ls_mirror_setup.yml (main Ansible playbook that calls each mini playbook above)

Variables.yml

The first part of this file contains our connection settings to the NetApp ONTAP cluster.

##################################################
# Variable File for 'netapp_ls-mirror_setup.yml' #
##################################################

# Cluster Login
clusterip: 192.168.1.50
user: ansible
pass: Password123
https_option: true
validate_certs_option: false

We then have our source cluster name. As this is an LS-Mirror the source and destination cluster name will be the same.

# Source Cluster Name
sourcecluster: CLUSTER96

Next variable is the SVM or Vserver name where we will be creating the LS-Mirror

# SVM Name
vservername: SVM1

This is the primary or source root volume that is created when you first create your SVM. It will be the source of our snapmirror LS relationships.

# SVM Name
vservername: SVM1

This is the primary or source root volume that is created when you first create your SVM. It will be the source of our snapmirror LS relationships.

# LS Mirror Source Vol Variable
lsvolsrc: SVM1_root

The next group of variables contain the first and second root volume name, size and the aggregates they will be created on.

# LS Mirror Vol 1 Variable
aggr1: aggr1_CLUSTER96_01_data
lsvol1name: SVM1_root_m01
lsvol1size: 1 # Size is in GB

# LS Mirror Vol 2 Variable
aggr2: aggr2_CLUSTER96_01_data
lsvol2name: SVM1_root_m02
lsvol2size: 1 # Size is in GB

01_create_ls_vols.yml

First part of this playbook contains the connection information for my NetApp ONTAP cluster and also imports the variables.yml file.

---
- hosts: localhost
  name: Create LS Root Vols
  gather_facts: false
  vars:
    login: &login
     hostname: "{{ clusterip }}"
     username: "{{ user }}"
     password: "{{ pass }}"
     https: "{{ https_option }}"
     validate_certs: "{{ validate_certs_option }}"
  vars_files:
  - variables.yml

Next, we have 2 tasks. Each task creates a Data Protection (DP) volume by referencing the variables in the variables.yml file:

tasks:
  - name: Volume 1 Create
    na_ontap_volume:
      state: present
      name: "{{ lsvol1name }}"
      vserver: "{{ vservername }}"
      aggregate_name: "{{ aggr1 }}"
      size: "{{ lsvol1size }}"
      size_unit: gb
      type: DP
      policy: default
      percent_snapshot_space: 0
      space_guarantee: none
      <<: *login
  - debug: msg="Volume {{ lsvol1name }} has been created."
  - name: Volume 2 Create
    na_ontap_volume:
      state: present
      name: "{{ lsvol2name }}"
      vserver: "{{ vservername }}"
      aggregate_name: "{{ aggr2 }}"
      size: "{{ lsvol2size }}"
      size_unit: gb
      type: DP
      policy: default
      percent_snapshot_space: 0
      space_guarantee: none
      <<: *login
  - debug: msg="Volume {{ lsvol2name }} has been created."

02_create_ls_mirror.yml

First part of this playbook contains the connection information for my NetApp ONTAP cluster and also imports the variables.yml file.

---
- hosts: localhost
  name: Create LS Mirror
  gather_facts: false
  vars:
    login: &login
     hostname: "{{ clusterip }}"
     username: "{{ user }}"
     password: "{{ pass }}"
     https: "{{ https_option }}"
     validate_certs: "{{ validate_certs_option }}"
  vars_files:
  - variables.yml

The next 2 tasks create snapmirror relationships with type LS. I have found that only the first snapmirror gets initialized at this stage.

  tasks:
  - name: Create First LS Mirror
    na_ontap_snapmirror:
      state: present
      source_volume: "{{ lsvolsrc }}"
      destination_volume: "{{ lsvol1name }}"
      source_vserver: "{{ vservername }}"
      destination_vserver: "{{ vservername }}"
      schedule: hourly
      relationship_type: load_sharing
      <<: *login
  - debug: msg="First LS Mirror Created."
  - name: Create Second LS Mirror
    na_ontap_snapmirror:
      state: present
      source_volume: "{{ lsvolsrc }}"
      destination_volume: "{{ lsvol2name }}"
      source_vserver: "{{ vservername }}"
      destination_vserver: "{{ vservername }}"
      schedule: hourly
      relationship_type: load_sharing
      <<: *login
  - debug: msg="Second LS Mirror Created."

I then added in a sleep command of 20 seconds which should be enough time for the first snapmirror relationship to initialize.

  - name: sleep for 20 seconds to allow for Volume 1 Snapmirror Initialization
    wait_for:
      timeout: 20

This task uses the na_ontap_command module which allows me to execute the same command as I would on an ONTAP cli shell. Here I’m initializing the second snapmirror relationship.

  - name: Initialize Snapmirror for Volume 2 # This is due to na_ontap_snapmirror not initializing second LS Mirror
    na_ontap_command:
      command: ['snapmirror', 'initialize', '-source-path', "{{ sourcecluster + '://' + vservername + '/' + lsvolsrc }}", '-destination-path', "{{ sourcecluster + '://' + vservername + '/' + lsvol2name }}"]
      privilege: admin
      <<: *login
  - debug: msg="Second LS Mirror initialized."

Svm_ls_mirror_setup.yml

This is the main Ansible playbook that I will execute in the next section. All this playbook does is run the 2 other playbook in order.

##########################################################
# This Ansible Playbook calls multiple sub yml playbooks #
##########################################################

#########################################################################################################################################
# -= Requirements =-
#
# 1. Make sure ansible user has been created
# 1a. security login create -vserver CLUSTER96 -role admin -application http -authentication-method password -user-or-group-name ansible
# 1b. security login create -vserver CLUSTER96 -role admin -application ontapi -authentication-method password -user-or-group-name ansible
# 1c. security login create -vserver CLUSTER96 -role admin -application console -authentication-method password -user-or-group-name ansible
##########################################################################################################################################

---
# Create 2 x 1GB DP Volumes
- import_playbook: 01_create_ls_vols.yml

# Create SVM LS Mirror
- import_playbook: 02_create_ls_mirror.yml

Now it’s time to run our playbook and watch how quick the LS-Mirror is created

Ansible NetApp LS-Mirror Creation

It’s probably a good time to mention now that I’m using Microsoft’s Visual Studio Code to connect to my Ansible server, manage files and run playbooks.

Within the terminal (which is the ssh connection to my Ansible server), I change to the directory and run the following:

svm_ls_mirror_setup.yml

Ansible NetApp LS-Mirror Creation

Once the playbook finishes its run, we can see a summary of changes in yellow:

Ansible NetApp LS-Mirror Creation

If I now return to my NetApp ONTAP cluster we can see that I have a healthy root LS-Mirror setup.

Ansible NetApp LS-Mirror Creation

Ansible Playbook Video

Ansible Code On GitHub

You can download and clone all the Ansbile playbook files from my github. The direct link to the code is here: https://github.com/sysadmintutorials/netapp-ansible-ls-mirrors

The post Creating NetApp Root Load-Sharing Mirrors with Ansible appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.

Getting Started with Terraform and VMware vCloud Director

$
0
0

If you have heard the announcements earlier this year and at VMworld 2019, you will know that VMware is embracing Terraform for VMware vCloud Director.

This means that your Automation for VMware vCloud Director just got a whole heap simpler saving you time and money.

What I’d like to demonstrate in this blog post is how I got started with Terraform, explaining what you’ll need, some of the code I use and give you a demonstration on how you can automate the creation of a new:

  • vCloud Organization
  • VDC
  • External Network
  • Organization vApp

What You Will Need for Terraform

The Terraform installation is super simple. In fact, it is just a single file. For Linux based system it is purely ‘terraform’ and for Windows based systems it is terraform.exe

You can download the latest version of Terraform from the Hashicorp website using this direct link: https://www.terraform.io/downloads.html

I have used both CentOS and Windows. Firstly starting with Centos and then migrating over to Windows so that I can easily run Powershell and Terraform together.

For my CentOS system, I used the command below to download the Terraform executable into /opt/terraform, then unzipped it by typing ‘unzip terraform_0.12.16_linux_amd64.zip’:

wget https://releases.hashicorp.com/terraform/0.12.16/terraform_0.12.16_linux_amd64.zip
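Putting the Linux install together end-to-end, a minimal sketch looks like this (the symlink is my own addition so the binary is on the PATH; any directory on your PATH works):

cd /opt/terraform
unzip terraform_0.12.16_linux_amd64.zip                   # extracts the single 'terraform' binary
ln -s /opt/terraform/terraform /usr/local/bin/terraform   # make it runnable from anywhere
terraform version                                         # verify the install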

For my Windows-based system, I downloaded the Terraform Windows x64 file into a folder I created at D:\Scripts\Terraform. I then added this folder to my PATH variable so that I could run terraform.exe from anywhere; this is done under System Properties – Advanced – Environment Variables.

Windows Terraform Path

For this lab I’m running the following versions of software:

  • vCloud Director 9.7
  • vSphere 6.5U3
  • Terraform 0.12.16
  • Terraform provider.vcd 2.5.0
  • Visual Studio Code 1.40.1

Terraform vCloud Provisioning Example

We’ll now move into the exciting part of this post, which is examining the code used to provision a basic vCloud Organization, VDC, External Network and vApp. All code is up on my GitHub page which is located at: https://github.com/sysadmintutorials/terraform-vcloud-create-new-org-external-network

If you have been following my Ansible posts, you will see that I now use Microsoft Visual Studio Code to build and work with my automation scripts. It makes things way simpler than juggling different programs and multiple windows. It’s a free download; check it out.

There are 3 main files that we’ll be working with:

  • variables.tf (defines the variable name as a variable with a short description)
  • terraform.tfvars (contains the actual variable values)
  • main.tf (contains providers and resources)

vCloud Terraform variables.tf

First off, we’ll take a look at our variables.tf file. In this file we can define a few settings for each variable. For example, variable “vcd_user” says that anytime Terraform sees var.vcd_user (which we’ll see later on in the main.tf file), it should refer back to this variables.tf file, find the variable, and check its settings: type, default, and description. In this demo we simply give a short description so we know what each variable is used for.

variable "vcd_user" {
    description = "vCloud user"
}

variable "vcd_pass" {
    description = "vCloud pass"
}

vCloud Terraform terraform.tfvars

Next, we’ll take a look at the terraform.tfvars file. This is where we give values to the variables. For example, org_name = “Terraform1” means that anytime var.org_name is referenced in the main.tf file, it will equal Terraform1. Most variables below are self-explanatory, but if you have any questions please leave a comment.

# vCloud Director Connection Variables

vcd_user = "terraform"
vcd_pass = "Password123"
vcd_url = "https://vcloud8director1.vmlab.local/api"
vcd_max_retry_timeout = "60"
vcd_allow_unverified_ssl = "true"

#vCloud Director External Network
extnet_name = "Terraform1-Lan"
extnet_description = "Terraform1 LAN - External VLAN"
extnet_gw = "192.168.10.1"
extnet_mask = "255.255.255.0"
extnet_dns1 = "8.8.8.8"
extnet_dns2 = "8.8.4.4"
extnet_suffix = "terraform1cust.local"
extnet_ip_pool_start = "192.168.10.16"
extnet_ip_pool_end = "192.168.10.31"
extnet_vcenter = "vcloud8vcenter" # vCenter Instance Name as it appears in vCloud Director

# vCloud Director Organization Variables
org_name = "Terraform1"
org_full_name = "My Terraform Organization"
org_description = "Terraform1 Create Org"

# vCloud Director Organization VDC Variables
vdc_alloc_model = "AllocationVApp" # Pay-As-You-Go
vdc_net_pool = "VMLAB pVDC A-VXLAN-NP"
vdc_pvdc_name = "VMLAB pVDC A"
vdc_name = "Terraform1-VDC-A"
vdc_description = "Terraform1 VDC Description"
vdc_storage_name = "Gold Storage Policy"
vdc_storage_limit = "102400"

vCloud Terraform main.tf

We now move on to the main.tf file. This is where we describe how we want our end environment to look.

We do this by using a provider and multiple resources. The provider we are using in this demonstration is “vcd” (vCloud Director). The resources are then responsible for different parts of vCloud Director. For example “vcd_org” is responsible for creating, modifying or deleting an Organization.

Each resource contains multiple argument references. For example, the resource “vcd_org” has an argument reference called name, where we define the name of the Organization. I have set most of the argument references to variables. Why did I do this? Because if I want to duplicate this Terraform folder and use it to set up another vCloud Organization, I only need to change the values in the terraform.tfvars file. I don’t need to scroll through the main.tf file and make changes to each line, especially where the same value is used multiple times, such as the Organization name.
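As a quick sketch of that workflow (the folder names here are hypothetical), standing up a second Organization becomes a folder copy plus one edited file:

cp -r terraform-org1 terraform-org2   # duplicate the entire configuration folder
cd terraform-org2
vi terraform.tfvars                   # change org_name, vdc_name, network values, etc.
terraform init && terraform plan      # initialize the new folder and review the changes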

The first section specifies the provider to use, in this case ‘vcd’. We then specify our connection settings to vCloud Director:

# Connect VMware vCloud Director Provider
provider "vcd" {
  user                 = var.vcd_user
  password             = var.vcd_pass
  org                  = "System"
  url                  = var.vcd_url
  max_retry_timeout    = var.vcd_max_retry_timeout
  allow_unverified_ssl = var.vcd_allow_unverified_ssl
}

The second section creates an external network within vCloud Director. This is backed by a VLAN-based vSphere port group in the back-end. Later on, we will attach this external network to our Organization VDC.

# Create new External Network

resource "vcd_external_network" "extnet" {
  name        = var.extnet_name
  description = var.extnet_description

    vsphere_network {
    name    = var.extnet_name
    type    = "DV_PORTGROUP"
    vcenter = var.extnet_vcenter
  }

  ip_scope {
    gateway    = var.extnet_gw
    netmask    = var.extnet_mask
    dns1       = var.extnet_dns1
    dns2       = var.extnet_dns2
    dns_suffix = var.extnet_suffix

    static_ip_pool {
      start_address = var.extnet_ip_pool_start
      end_address   = var.extnet_ip_pool_end
    }
  }
}

Terraform vCloud External Network

This next section creates a new vCloud Organization by specifying the name, full name, and description. You will notice there is a ‘depends_on’ setting, which means this resource waits for the referenced resource before executing. In this instance, before Terraform creates the new Organization, it must have completed the creation of the external network. This is important because, without an explicit dependency, Terraform may create resources in parallel and in no guaranteed order.

# Create new vCloud Org
resource "vcd_org" "org-name" {
  name                 = var.org_name
  full_name            = var.org_full_name
  description          = var.org_description
  is_enabled           = "true"
  delete_recursive     = "true"
  delete_force         = "true"
  can_publish_catalogs = "false"
  depends_on           = [vcd_external_network.extnet]
}
Terraform vCloud Organization

One setting I found missing from the vcd_org resource is policy leases. There isn’t any option to change the leases with Terraform at the moment, so the Organization is created with the defaults of a 7-day maximum runtime lease and a 30-day maximum storage lease. I have an issue open on the Terraform GitHub page, which you can track with this link: https://github.com/terraform-providers/terraform-provider-vcd/issues/385

Terraform vCloud Organization

Update March 2020: Terraform vcd_provider 2.7+ now includes the ability to set the vApp and vApp templates leases.

To do this, you must add the vapp_lease and vapp_template_lease blocks, as seen in the code below:

# Create new vCloud Org
resource "vcd_org" "org-name" {
  name                 = var.org_name
  full_name            = var.org_full_name
  description          = var.org_description
  is_enabled           = "true"
  delete_recursive     = "true"
  delete_force         = "true"
  can_publish_catalogs = "false"
  depends_on           = [vcd_external_network.extnet]

  vapp_lease {
    maximum_runtime_lease_in_sec          = "0" # 0 = lease never expires
    power_off_on_runtime_lease_expiration = true
    maximum_storage_lease_in_sec          = "0" # 0 = lease never expires
    delete_on_storage_lease_expiration    = false
  }

  vapp_template_lease {
    maximum_storage_lease_in_sec       = "0" # 0 = lease never expires
    delete_on_storage_lease_expiration = false
  }
}

We now create our Organization VDC. When I write the automation code, I think about the steps involved to create a VDC manually and then convert that to code.

To create a VDC, we need to specify:

  • Name
  • Description
  • Organization
  • Allocation Model
  • Network Pool
  • Provider VDC
  • Compute Capacity (i.e. CPU speed and guarantees)
  • Storage Profiles
  • Network Quota (how many networks the VDC can create)
  • Thin Provisioning (are the VMs thin provisioned)
  • Fast Provisioning (use a parent VM as the base and track changes in a new file for this VM)

Once we have this information, we then write the code for it. You will notice the ‘depends_on’ setting once again: in order to create an Organization VDC, you must have an Organization created, so in this resource we depend on the creation of the Organization before executing. The code for this section looks like this:

# Create new VDC

resource "vcd_org_vdc" "vdc-name" {
  name        = var.vdc_name
  description = var.vdc_description
  org         = var.org_name

  allocation_model = var.vdc_alloc_model
  network_pool_name = var.vdc_net_pool
  provider_vdc_name = var.vdc_pvdc_name

  compute_capacity {
    cpu {
      allocated = 0
    }
    memory {
      allocated = 0
    }
  }

  storage_profile {
    name = var.vdc_storage_name
    limit = var.vdc_storage_limit
    default = true
  }

  cpu_guaranteed = 0
  memory_guaranteed = 0
  cpu_speed = 2000
  network_quota = 10
  enabled = true
  enable_thin_provisioning = true
  enable_fast_provisioning = false
  delete_force = true
  delete_recursive = true 

  depends_on = [vcd_org.org-name]
}
Terraform vCloud Organization VDC

We’re almost done. In the second-last section, I add the external network we created earlier to our Organization VDC. This resource ‘depends_on’ the creation of the Organization VDC first.

# Org External Network
resource "vcd_network_direct" "netdirect" {
  org              = var.org_name
  vdc              = var.vdc_name
  name             = "Terraform1-Lan"
  external_network = "Terraform1-Lan"
  depends_on       = [vcd_org_vdc.vdc-name]
}
Terraform vCloud Organization External Network

Lastly, we have the creation of a Servers vApp:

# Org vApp - Servers
resource "vcd_vapp" "vapp" {
  name       = "Servers"
  org        = var.org_name
  vdc        = var.vdc_name
  depends_on = [vcd_network_direct.netdirect]
}
Terraform vCloud Organization vApp

Running Terraform Plan

This is where all our hard work in writing this code pays off. It’s time to execute our plan.

Within the same folder where we have our main.tf, terraform.tfvars and variables.tf files, type ‘terraform init’; this will download the vcd provider. I then type ‘terraform plan’. This command goes through your files and checks the settings against your vCloud Director environment. It then highlights with a green + (for additions) or a red - (for removals) what will be changed, without actually making any changes.
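Pulling those commands together, a typical first run looks like this (output omitted):

terraform init   # downloads and installs the vcd provider plugin
terraform plan   # dry run: shows additions (+) and removals (-) without changing anything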

Terraform Plan for vCloud Director

At the end of the plan, it will give a summary of how many additions, changes, or deletions will be made.

Terraform Plan for vCloud Director

Once you have reviewed the proposed changes, it’s time to run ‘terraform apply’. When you run this command, it will again give a summary of all the changes to be made.

Terraform Apply for vCloud Director

Below is a list of changes being made. Type ‘yes’ to apply.

Terraform Apply for vCloud Director

You will then get a summary of the resources being provisioned and how many additions, changes, and deletions occurred.

Terraform Apply for vCloud Director

Let’s say you want to make a change to the plan. For example, if we change the vApp name from Servers to Web_Servers, we can simply run ‘terraform apply’; it will recognize the change and rename the vApp for you.

If you want to delete the whole environment (say it was a dev environment only needed for a short period of time), you can simply type ‘terraform destroy’ and the whole environment will be removed.
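Both of these day-2 operations are one-liners, and each prompts for a ‘yes’ confirmation before touching anything:

terraform apply     # picks up edits to your .tf/.tfvars files and applies only the differences
terraform destroy   # removes every resource created by this configuration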

Terraform Apply for vCloud Director

Terraform VMware vCloud Director Video Demonstration

The post Getting Started with Terraform and VMware vCloud Director appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.

Ansible Username Password Prompt with vars_prompt


If you have been following my Ansible demonstrations, you will have seen that, to authenticate to my NetApp simulator, I had the username and password embedded in the variables.yml file, like this:

# ONTAP Cluster Login
clusterip: 192.168.1.50
user: ansible
pass: Password123
https_option: true
validate_certs_option: false

Let’s face it, adding the username and password into the YAML file is almost like writing your password on a sticky note and hiding it under your keyboard.

This is OK for a lab environment; however, another issue is running your playbook under a single account. How do you audit who did what?

Ansible vars_prompt

Today we are going to take a look at integrating vars_prompt into an existing playbook I wrote, which automated the creation of NetApp Root Load-Sharing Mirrors.

From the previous variables.yml file, I have removed the ‘user: ansible’ and ‘pass: Password123’ variables.

I also collapsed all the main YAML files into one file named ’01_create_ls_mirror.yml’. This means that this single file creates the DP volumes as well as the root LS SnapMirrors.

If we take a look at the new YAML code, I have introduced the following ‘vars_prompt’ within ’01_create_ls_mirror.yml’:

---
- hosts: localhost
  name: Create LS Root Vols and LS-Mirror
  gather_facts: false
  vars_prompt:
    - name: clusteruser
      prompt: "Enter your {{ sourcecluster }} username ?"
      private: no
    - name: clusterpass
      prompt: "Enter your {{ sourcecluster }} password ?"
  vars:
    login: &login
     hostname: "{{ clusterip }}"
     username: "{{ clusteruser }}"
     password: "{{ clusterpass }}"
     https: "{{ https_option }}"
     validate_certs: "{{ validate_certs_option }}"
  vars_files:
  - variables.yml

Within ‘vars_prompt’ we have ‘name: clusteruser’. This sets a new variable called clusteruser.

Next is ‘prompt: “Enter your {{ sourcecluster }} username ?”’. This text will pop up on screen asking you to type in your username. We are going to see what this looks like shortly.

The last setting is ‘private: no’. This means the input you type, in this case your username, is not hidden from view. The default of ‘private: yes’ applies to the password prompt, so the password is hidden as you type it.

You can then see, within the ‘vars:’ section, that username equals the clusteruser variable. We enter our username before the playbook fully runs.

The screenshot below shows what the username and password prompts look like. Once you enter your credentials, they are checked against the NetApp cluster; if authentication succeeds, the rest of the Ansible playbook runs.

Ansible vars_prompt - Authentication Prompt
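For reference, a rough text version of that run (a sketch; it assumes ‘sourcecluster’ resolves to CLUSTER96, the cluster name used in my lab) looks like this:

ansible-playbook 01_create_ls_mirror.yml
Enter your CLUSTER96 username ?: ansible
Enter your CLUSTER96 password ?: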

Ansible Code on GitHub

I’ve created a branch named ‘update-version2’ from the original code. The direct link to the updated code can be found here: https://github.com/sysadmintutorials/netapp-ansible-ls-mirrors/tree/update-version2
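To clone that branch directly (assuming you have git installed):

git clone --branch update-version2 https://github.com/sysadmintutorials/netapp-ansible-ls-mirrors.git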

The post Ansible Username Password Prompt with vars_prompt appeared first on SYSADMINTUTORIALS IT TECHNOLOGY BLOG.
