Channel: SYSADMINTUTORIALS IT TECHNOLOGY BLOG

New Netapp Simulator Tutorial: How To Create SSDs and Large Disks


This video tutorial walks you through creating SSDs, larger SAS disks and multiple shelves for your Netapp Simulator. Out of the box, the Netapp Simulator only allows a total aggregate size of 20GB.

Once you have finished the tutorial you will have over 110GB of usable space. You will also be able to simulate Flash Pool aggregates with the introduction of SSDs to the Netapp Simulator.

Netapp Simulator Large Disks and SSD’s – http://www.sysadmintutorials.com/tutorials/netapp/netapp-clustered-ontap/netapp-simulator-large-disks-and-ssd/


Netapp Cluster Mode Vserver Language Change


How to change Netapp Vserver and Volume Language

You may one day have the need to change your Netapp vserver language after you have already configured the vserver.

One example may be when trying to configure a snapmirror relationship between 2 volumes on 2 different vservers that don’t share the same language setting. Trying to do so will result in an error such as “Relationships not supported between volumes with different languages”.

Changing the Netapp Vserver Language

To view the current Vserver Language:

VMLABCLUSTER::> vserver show VMLAB -fields language

To change a Vserver Language:

VMLABCLUSTER::> vserver modify VMLAB -language C.UTF-8

Creating a Netapp Volume with a Different Language Setting than the Vserver

VMLABCLUSTER::> volume create -vserver VMLAB -volume VMDS2 -size 100GB -aggregate aggr1_vmlabcm1 -language C.UTF-8

Netapp Language Change Caveats

  • You cannot change a volume’s language. You must delete the volume and re-create it, either with the Vserver’s default language setting or by specifying the language on the CLI when you create the volume
  • Changing the language of a Vserver that contains existing volumes will not change the language of those volumes, but the new language will apply to any volumes created afterwards

Netapp Packet Capture


Netapp Packet Capture

In this post we look at running a packet capture on a Netapp controller running Clustered Data Ontap.

The Scenario

We want to capture packets from a VMware ESXi host running NFS on NODE1. The VMware ESXi host has a vmk IP of 192.168.1.10 and we will store the packet capture in /etc/crash.

CLUSTER::> set -priv diag

CLUSTER::*> node run -node NODE1 pktt start all -i 192.168.1.10 -d /etc/crash

Boot a virtual machine in VMware ESXi on the NFS datastore on NODE1 so that we can grab some traffic. The packet capture will store the files in /etc/crash as *.trc files

Now that we have captured some packets, let’s stop the packet trace:

CLUSTER::*> node run -node NODE1 pktt stop all

We can browse the /etc/crash directory in two ways. The first is via the web, which is very handy for downloading the files to your local computer and opening the *.trc files with tools such as Wireshark. The second is via the node shell.

Browsing via the Web

Open up a browser and type in the following URL:

https://192.168.0.5/spi/NODE1/etc/crash

Replace the IP address 192.168.0.5 with your cluster management IP address. Simply click on the *.trc file you wish to download.

Browsing via SystemShell

CLUSTER::*> systemshell -node NODE1

Login with your diag username and password

%> cd /mroot/etc/crash

%> ls
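
If you end up with a lot of captures, it can help to sort through them locally after downloading the /etc/crash contents. As a small illustration (a hypothetical helper, not part of Ontap), this Python sketch lists the *.trc files in a directory, newest first:

```python
import glob
import os

def list_trace_files(directory):
    """Return the *.trc packet-capture files in `directory`, newest first."""
    files = glob.glob(os.path.join(directory, "*.trc"))
    # Sort by modification time so the most recent capture comes first
    return sorted(files, key=os.path.getmtime, reverse=True)
```

Point it at the folder you downloaded the captures into, then open the first entry in Wireshark.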

Netapp disk ownership with already owned disks


Netapp Disk Ownership

I have come across this quite a few times already: a disk is inserted into a Netapp Clustered Ontap system and that disk already has disk ownership assigned to it.

What we must do is remove the disk ownership and re-assign this disk to the new node. To do this we use the command below:

VMLABSTORAGE::> set diag

VMLABSTORAGE::*> disk assign -disk NODE1:4c.10.0 -owner NODE1 -force-owner true

If this command does not work, or you get the error command failed: Failed to find node ID for node, you can use these 2 commands to remove or modify ownership:

VMLABSTORAGE::*> disk removeowner -disk NODE1:4c.10.0 -force true

VMLABSTORAGE::*> disk modify -disk NODE1:4c.10.0 -owner NODE1 -force-owner true

For more Netapp Clustered-Ontap CLI commands visit my Netapp Pocket Guide page – http://www.sysadmintutorials.com/tutorials/netapp/netapp-clustered-ontap/netapp-clustered-ontap-cli/

Deleting that annoying Netapp Snapmirror Snapshot


Netapp Snapmirror Snapshot

This happened quite a lot in 7-Mode and much less in c-Mode; however, there still might be the need to remove a snapmirror snapshot that just won’t go away via a snapmirror break or release.

Firstly the correct way to break a snapmirror and remove the snapshots on both source and destination is by typing:

VMLABSTORAGE::> snapmirror quiesce -source-path SOURCE:vol1 -destination-path VMLABSTORAGE:vol1

VMLABSTORAGE::> set -privilege admin

VMLABSTORAGE::> snapmirror break -source-path SOURCE:vol1 -destination-path VMLABSTORAGE:vol1 -delete-snapshots true

If you do forget to use the option -delete-snapshots true, you can then log into the source controller and type:

SOURCE::> snapmirror list-destinations -source-path SOURCE:vol1

This will give you a list of destinations along with a relationship-ID (UUID). Once you’ve worked out the correct relationship-ID you can use the following command to delete the snapshot on the source:

SOURCE::> snapmirror release -source-path SOURCE:vol1 -relationship-id <relationship-UUID>

However, you may occasionally not be able to match up a destination to the snapshot that is hanging about. The CLI or GUI will show that the snapshot is busy or snapmirror dependent. If you are ABSOLUTELY SURE that the snapshot you want to delete is not being used by any snapmirror relationship, you can use the -ignore-owners option to delete it:

SOURCE::> set -privilege diag

SOURCE::*> snapshot delete -vserver VMLAB -volume vol1 -snapshot <snapshot-name> -ignore-owners true
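
The relationship-ID is a standard UUID, so when the list-destinations output is long it can help to paste it through a small script. This Python sketch (a hypothetical convenience of mine, assuming the IDs appear in the usual 8-4-4-4-12 hex form) pulls every relationship ID out of copied CLI output:

```python
import re

# Standard UUID shape (8-4-4-4-12 hex groups), the form snapmirror
# relationship IDs take in CLI output.
UUID_RE = re.compile(
    r"\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b",
    re.IGNORECASE,
)

def extract_relationship_ids(cli_output):
    """Return every UUID-shaped relationship ID found in pasted CLI output."""
    return UUID_RE.findall(cli_output)
```

Paste the full `snapmirror list-destinations` output in, and you get just the IDs to compare against the stuck snapshot's name.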

Cisco Nexus 5000 Fabric is already Locked


Cisco Nexus 5000 – Operation Failed. Fabric is already locked

The other day I came across this problem where I could not update the device-alias database for an FCoE WWPN. When I tried to update the database I got the following error – Operation Failed. Fabric is Locked.

This appears when someone else or something else (such as software managing the switches) has made changes to the device-alias database but has not committed them. This is known as a session.

If you see the notification appear in the CLI that the fabric is locked, you can see which sessions are open by typing:

switch1(config)# show cfs lock

You will then see a table which shows you the switch WWN, IP Address, user that has the lock and the type of user:

Cisco Nexus 5000 Fabric Lock

First, check with this user that he or she is not currently making any changes; if they are, ask them to complete the changes and commit the database.

If the user is certain they are not in the middle of anything you can clear the lock by typing:

switch1(config)# clear device-alias session

Once the session is cleared, type show cfs lock again and make sure the table is now clear.

Veeam Enterprise Manager with NAT Setup


Veeam Enterprise Manager NAT Setup

We have been using Veeam Backup and Replication along with Veeam Enterprise Manager extensively over the past few months. Each setup is different, but one common setup, especially for service providers, is to have your Veeam Enterprise Manager located in a DMZ. We then need to add Veeam Backup and Replication Servers that are sitting behind various firewalls.

To do this we require the following firewall setup:

  1. Port Address Forwarding on the firewall to the Veeam Backup & Replication Server for TCP Port 9392
  2. Port Address Forwarding on the firewall to the Veeam Backup & Replication Server for TCP Port 9393
  3. Allow ICMP replies for the NAT’d internet address

Now that we have our ports opened, when Veeam Enterprise Manager tries to add in a Veeam Backup & Replication Server the following steps occur:

  1. Veeam Enterprise Manager communicates with the Veeam Backup & Replication Server on port 9392
  2. The Veeam Backup & Replication Server sends Veeam Enterprise Manager its own local IP address
  3. This is where the failure occurs, because the Veeam Backup & Replication Server will have a private IP address that the Veeam Enterprise Manager cannot route to if it’s sitting in a DMZ with a real-world IP.

To fix this we need to make some changes:

  1. Within the Veeam Backup & Replication Server, we will add 2 registry strings under HKLM\Software\Veeam\Veeam Backup and Replication
    1. Remoting_UseIPAddress = false
    2. Remoting_MachineName = YourFQDN (FQDN = your Veeam Backup and Replication Server’s FQDN, i.e. veeambr.vmlab.local)
  2. Now within your Veeam Enterprise Manager we need to edit the hosts file (located in C:\Windows\System32\Drivers\etc) and add the following entry:
    1. InternetIPAddress     veeambr.vmlab.local (Replace InternetIPAddress with the NAT’d IP address of your Veeam Backup and Replication Server)

Now you will be able to successfully add your Veeam Backup and Replication Server to your Veeam Enterprise Manager Server.
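
The hosts-file entry is just an IP-to-FQDN mapping, so it is easy to generate and sanity-check before editing the file by hand. A minimal Python sketch (my own illustration, reusing the example FQDN veeambr.vmlab.local and a made-up documentation NAT address) that validates the IP before emitting the line:

```python
import ipaddress

def hosts_entry(nat_ip, fqdn):
    """Build the hosts-file line mapping the NAT'd address to the
    Backup & Replication server's FQDN. Raises ValueError on a bad IP,
    so a typo never makes it into the hosts file."""
    ipaddress.ip_address(nat_ip)  # validate before writing anything
    return f"{nat_ip}\t{fqdn}"

# 203.0.113.10 is a placeholder (TEST-NET-3), not a real deployment address
print(hosts_entry("203.0.113.10", "veeambr.vmlab.local"))
```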

Thanks to Elliot, who originally posted this in the Veeam Forums; here is a link to the original post:

Making Veeam Enterprise Manager work over a WAN link

If you have any further questions on this setup, please leave a comment below.

Veeam Virtual Disk Size is not Multiple of 1KB


Veeam Backup and Replication Error

This error came up the other day during a backup of a client’s virtual machine – Error: Virtual disk size is not a multiple of 1KB

Veeam Error Not a Multiple of 1KB

The fix is quite simple: edit the virtual machine within VMware vCenter, select the hard disk, and increase its size. For example, if I have a 50GB Hard Disk 1, I would increase it to 51GB.

Then re-run the backup job and the backup will succeed.
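
To see why growing the disk clears the error: the complaint is simply that the disk’s size in bytes is not an exact multiple of 1 KiB, and resizing it snaps it to a whole number of KiB. A minimal Python sketch (my own illustration, not Veeam’s actual logic) that tests alignment and rounds a byte size up to the next 1 KiB boundary:

```python
KIB = 1024  # the error's "1KB" is 1024 bytes

def is_1kib_aligned(size_bytes):
    """True if the virtual disk size is an exact multiple of 1 KiB."""
    return size_bytes % KIB == 0

def next_1kib_multiple(size_bytes):
    """Round a size up to the next whole KiB (ceiling division)."""
    return -(-size_bytes // KIB) * KIB
```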


vCenter 6 Failed to download vCenter Server Support Bundle Logs


vSphere 6 – vCenter 6 Appliance Install Error

You may come across this error message when you attempt to install the vCenter 6 appliance – Firstboot script execution error. Failed to download vCenter Server support bundle logs.

vSphere 6 vCenter Appliance Install Error

This error comes about due to a DNS misconfiguration. During the vCenter 6 installation, the wizard asks for the system name as an FQDN or IP address:

vCenter 6 Appliance Install Wizard

As you can see in my example, I have set my FQDN to vmvcenter6.vmlab.local. Within my DNS server I need to ensure forward and reverse lookups are working correctly. The IP address for my vCenter 6 server will be 192.168.1.174.

In my Active Directory domain controller, I open up the DNS server console, add a Host (A) entry and tick Create associated pointer (PTR) record. This creates both a forward and a reverse lookup. You can then test that it is working correctly by opening a command prompt and typing ping vmvcenter6.vmlab.local, and to test the reverse lookup, ping -a 192.168.1.174.
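
The same forward and reverse checks can be scripted if you have several FQDNs to verify before deployment. A small Python sketch using the standard socket module (it queries whatever resolver the machine is configured with, so results depend entirely on your own DNS setup):

```python
import socket

def check_forward_lookup(fqdn):
    """Forward lookup: FQDN -> IP address, or None if it doesn't resolve."""
    try:
        return socket.gethostbyname(fqdn)
    except socket.gaierror:
        return None

def check_reverse_lookup(ip):
    """Reverse lookup: IP -> hostname from the PTR record, or None."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None
```

For the vCenter example you would confirm `check_forward_lookup("vmvcenter6.vmlab.local")` returns 192.168.1.174 and `check_reverse_lookup("192.168.1.174")` returns the FQDN.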

vCenter 6 Appliance DNS Setup

After verifying all the DNS is correctly configured and resolving, we can then re-deploy the vCenter 6 Appliance and get a successful installation:

vCenter 6 Appliance Install Success

Microsoft Windows Ghost Network Adapters in Device Manager


How to Remove Microsoft Windows Ghost Adapters

Within a virtual environment it is easy to add and remove network adapters on a Windows server. However, sometimes these network adapters do not get removed properly and can cause issues.

Simply going into Device Manager and selecting View – Show hidden devices does not always display the removed NIC.

1. First up open a new admin command prompt (run as administrator)

2. Type in C:\> set devmgr_show_nonpresent_devices=1

3. Now open up Device Manager (Start – Run – devmgmt.msc)

4. Click on the View Menu and select Show Hidden Devices

5. Expand Network Adapters and you will see the Network Cards that you previously removed


Windows Hidden Network Card

Netapp Could Not Complete Giveback Because of Non-CA Locks on Volume


Error: Could Not Complete Giveback Because of Non-CA Locks on Volume

I came across this error (Could Not Complete Giveback Because of Non-CA Locks on Volume) during a failover and giveback on an HA pair. Performing the manual failover was fine and all aggregates failed over correctly; however, on the giveback the root volumes were successful but some SFO aggregates returned an error.

First up I saw the basic error during a giveback using the command:

::> storage failover show

I then went to look into the logs by typing:

::> log show (the newest logs are always at the top of the list)

These are the exact messages from the log:

5/28/2015 21:47:21  NODE1        ERROR         sfo.giveback.failed: Giveback of aggregate aggr1 failed due to Giveback was vetoed..

5/28/2015 21:47:21  NODE1        ERROR         sfo.sendhome.subsystemAbort: The giveback operation of ‘aggr1′ was aborted by ‘lock_manager’.

5/28/2015 21:47:21  NODE1        ERROR         lmgr.gb.nonCA.locks.exist: Could not complete giveback because of non-CA locks on volume volume1@vserver:2345673-5898-11e3-83fb-123478563412.

The volume named volume1 appeared to have a locked cifs session, which was preventing the aggregate from failing back.

Non-CA Locks on Volume Fix

There are 3 options to fix this:

  1. Wait and then try to perform the giveback again
  2. Disable cifs on the vserver to remove the lock
  3. Use the option -override-vetoes true. Take note that using this option might drop the cifs sessions momentarily until the aggregate has failed back. The full command would be:

::> storage failover giveback -ofnode node1 -override-vetoes true

Monitor the giveback with the command:

::> storage failover show-giveback

Once the failback is successful, lastly check that the cluster ring is all in sync:

::> set diag

::*> cluster ring show

Find Virtual Machine MAC Address in VMware and Hyper-V


How to Find a MAC Address in VMware and Hyper-V

There might be a time when your network team asks you to check on a MAC address coming from your virtualization infrastructure; this could be for a number of reasons. Or you have the MAC address of a virtual machine and have no idea which virtual machine it belongs to. If your environment is small, it shouldn’t be too hard to do this via one of the GUI tools, VMware vCenter or Microsoft SCVMM.

However, if your environment is quite large, the above method can be very time consuming. The quickest way to find which virtual machine has a specific MAC address is to use VMware PowerCLI for vSphere or Microsoft PowerShell for Hyper-V:

Find a MAC Address within vSphere using VMware Powercli

PowerCLI C:\> Get-VM | Get-NetworkAdapter | Where {$_.MacAddress -like "00:50:56:aa:11:22"} | fl

The Parent Value in the output will be the virtual machine name

Find a MAC Address within Hyper-V using Microsoft Powershell

PS C:\> Get-SCVirtualMachine | Get-SCVirtualNetworkAdapter | Where {$_.MacAddress -like "00:15:5d:bb:22:33"}

The Name value in the output will be the virtual machine name
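
One wrinkle when searching across both platforms: VMware reports MACs colon-separated while Hyper-V tooling often uses dashes (and switches may use dotted-quad notation). A small Python sketch (my own helper, not part of PowerCLI or SCVMM) that normalizes any common MAC notation so addresses from different tools can be compared directly:

```python
import re

def normalize_mac(mac):
    """Canonicalize a MAC address to lowercase colon-separated form,
    so 00:50:56:AA:11:22, 00-50-56-aa-11-22 and 0050.56aa.1122
    all compare equal."""
    digits = re.sub(r"[^0-9a-f]", "", mac.lower())  # keep hex digits only
    if len(digits) != 12:
        raise ValueError(f"not a MAC address: {mac!r}")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))
```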

 

Netapp Clustered Ontap 8.2.x Cifs Create Fails


Netapp Clustered Ontap 8.2.x Cifs Create Fails

If your Netapp environment consists of one or more HA pairs and you attempt a cifs create, it could potentially fail, warning you that there is no access to a DNS server.

This could happen especially in a multi-tenant environment where your node management lifs and data lifs (cifs lifs) are on different networks and neither is routable to your Active Directory servers.

When you SSH to your Netapp cluster management lif, you will arrive on the node that owns the cluster management lif; let’s say in this case it is node1.

If I type cifs create for a particular vserver and go through the wizard, the node will attempt to use a data lif (such as a cifs lif) to establish a connection to your Active Directory server and DNS. If there are no data lifs on this node, the wizard will then attempt to establish a connection via the node management lif. If neither lif can establish a connection to the Active Directory and DNS server, the cifs create will fail.

You may see the following error messages:

  • Connect to 192.168.1.10 failed (Error: Operation timed out)
  • Cannot connect to DNS server ‘192.168.1.10’
  • Unable to connect to any of the provided DNS servers
  • FAILURE: Failed to find a domain controller

Cifs Create Pre-Requisites

  • Clustered Data Ontap requires Active Directory to create a cifs server, unlike 7-Mode where you could use local accounts
  • Create cifs data lifs per node with the correct routes to your active directory servers

For Netapp Clustered Ontap CLI commands, see my Pocket Guide by clicking here.

Netapp ACP Inactive No In-Band Connectivity


Netapp ACP Inactive No In-Band Connectivity

If you plug a new shelf into an existing Netapp system, you would first set the shelf ID, connect the SAS cables, ensure the system can see the disks and that the connectivity status is Multi-Path HA, and then begin to connect the ACP cables.

Once you cable the ACP connectivity and type the following command:

::> node run NODE1 storage show acp

You may see that the ACP Connectivity Status is Partial Connectivity and that the Shelf Module column shows NA. Looking further to the right, under the Status column, you will notice it may say inactive (no in-band connectivity).

Netapp ACP Inactive No In-Band Connectivity
To fix this, have a look at the Shelf Module column and determine which shelf and module is missing; in the example above it is shelf 23, module A.

As long as your system is cabled correctly and displays Multi-Path HA in the sysconfig output, locate shelf 23 and remove module A for about 2 minutes. Once 2 minutes have passed, re-plug the module back into the shelf.

If you then re-type the command:

::> node run NODE1 storage show acp

You should see that shelf 23 module A is now online and that the ACP Connectivity Status is Full Connectivity.

Netapp Motherboard Firmware Update Clustered Data Ontap 8.2.1


Netapp Motherboard Firmware Update for Clustered Data Ontap 8.2.1

If you are looking to upgrade your motherboard and bios firmware on your Netapp System, the first thing to do is to check the compatibility list and ensure the motherboard firmware is compatible with your version of Data Ontap.

You can download the firmware for your system from Netapp by following this link (you will be required to login with your Netapp account):

http://mysupport.netapp.com/NOW/cgi-bin/fw

If you are downloading the SP firmware, an engineer told me that it is much quicker to download the firmware via the SP rather than via the Data Ontap prompt, so I always select the SP download option for SP firmware.

To download the Motherboard and Bios firmware I select the Service Image (BIOS) For Use With clustered Data ONTAP 8.X

Next I will check the compatibility chart for my FAS system, Data Ontap version and Bios version to ensure it has the green tick of approval. The link for the compatibility chart is:

http://mysupport.netapp.com/NOW/download/tools/serviceimage/support/

The next step is to download the BIOS firmware and follow the instructions step-by-step to update the firmware.

Netapp Data Ontap Motherboard Firmware Workaround and Bug Report

In my instance I wanted to update the BIOS firmware on a FAS8040 running Data Ontap 8.2.1. The firmware I wanted to use was 9.3; however, the requirement was that I would have to be running at least Data Ontap 8.2.1P1.

Talking to our Netapp SAM (Support Account Manager), we were able to work around this. We could not perform a standard install but had to follow a specific procedure, which didn’t differ too much from the standard installation; it is outlined in this Netapp bug report article:

http://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=804869

Before doing any type of maintenance on a Netapp system, such as a Data Ontap upgrade or a major firmware upgrade, I usually pre-log a support ticket with Netapp in case something goes wrong and I need immediate assistance. That way the support ticket is ready, an engineer would usually have been assigned to it already, and it saves a lot of time.


Netapp Announcements for July


Netapp Announcements for July

Here are a few of the latest Netapp Announcements:

End of Availability of DS4243 and IOM3 Shelf Parts

Netapp will be recommending DS4246, DS4486 or DS2246 shelves for the future. Netapp will continue to support the DS4243 and IOM3 shelves for a limited time.

  • End of Availability 13-Dec-2015
  • Last Shipment 13-Jan-2016
  • End of Support Hardware 31-Jan-2021

Netapp End of Availability

End of Availability of Netapp FAS/V3220 and FAS/V3250 Orderable Spares

The last date for orderable spares for the above-mentioned systems is December 11, 2015.

End of Support Hardware 31-Jan-2020

Netapp End of Availability

Availability of Netapp OnCommand Unified Manager 6.3 RC 1

Release Notes

Software Download Page (Requires Netapp Login)

Netapp Releases OnCommand Performance Manager 2.0 RC1


Netapp OnCommand Performance Manager 2.0 RC1

Netapp OnCommand Performance Manager ties in with OnCommand Unified Manager and provides performance monitoring and alerting for your Netapp C-Mode systems running Data Ontap 8.3.x and 8.2.x.

New Features of OnCommand Performance Manager include:

  • New Graphical interface with additional performance pages
  • Can now create user defined thresholds
  • System-Defined thresholds to monitor common node and aggregate performance issues
  • Support for two node Netapp MetroCluster configurations
  • Can now monitor workloads on All Flash FAS systems
  • Supports the Netapp Cloud Ontap operating system
  • Interoperability with OnCommand Unified Manager 6.2 and 6.3
  • Support for IPv6
  • New backup and restore functionality

Supported Installations of OnCommand Performance Manager:

  • VMware ESXi 5.5 or 6.0
  • Red Hat Enterprise Linux 6.5 or 6.6

Reference the Netapp Interoperability Matrix Tool for a full list of supported platforms – Netapp IMT

To download Netapp OnCommand Performance Manager visit the Netapp Downloads Page – Netapp Software

Moved Hosting Provider


Hi everyone, just a quick post to let you all know that I have moved hosting provider today and you should now experience a much faster site.

With the old hosting provider we noticed timeouts and disconnects in the last 1-2 months due to lack of performance on the server and the growth of traffic to the site.

With the new provider we have doubled the resources and during my tests have noticed a dramatic increase in site performance.

Hope you all enjoy the faster experience and please leave any comments if you happen to come across any bugs or errors

 

Netapp Releases OnCommand Workflow Automation 3.1


OnCommand Workflow Automation 3.1

OnCommand Workflow Automation enables you to create simple or complex workflows for your Netapp storage environment, automating everyday tasks such as provisioning, migration and decommissioning.

OnCommand Workflow Automation also integrates into your favourite 3rd party orchestration tools such as VMware vCloud Automation Center, VMware vCenter Orchestration, HP Operations Orchestration and BMC Atrium Orchestration.

OnCommand Workflow Automation 3.1 New Features:

  • Support for Data Ontap 8.3
  • Support for Unified Manager 5.2.1 and Unified Manager 6.3
  • Storage Automation Store – contains Netapp certified workflow packs
  • Allows for the installation on Linux
  • Support for Perl
  • Designer Enhancements
  • High Availability and DR

OnCommand Workflow Automation 3.1 is available for download from the Netapp support site. You must have a login in order to download:

Netapp OnCommand Workflow Automation 3.1 for Windows

Netapp OnCommand Workflow Automation 3.1 for Linux

