An Introduction to SNIA – SFD16

The first session to kick off SFD16 was presented by SNIA (Storage Networking Industry Association). “The SNIA is a non-profit global organization dedicated to developing standards and education programs to advance storage and information technology.” – www.snia.org

This session was an introduction to SNIA and the role they play in creating technical standards and educational materials. SNIA as a whole works towards bringing vendors into a neutral zone of standards, simplifying technology and creating boundaries to work within. SNIA runs forums (approximately 50 in the last 3 years) where webinars and presentations allow anyone to learn about a particular storage technology. They also provide a plethora of educational items, from white papers, articles, blogs, IT training, conferences and certification courses, all free and run by SMEs from their own companies.

SNIA focuses on many areas, from physical storage, persistent memory, data security, network storage and backup, to much, much more. In the words of Dr J Metz, “Generally speaking, if you can think of it, SNIA has a project that’s working on it, or looking to promote it or educate about it.”

Having learnt more about SNIA and the great work they are doing to promote and educate about storage, I have gone and looked into a number of their education items, particularly the white papers. I encourage you to head over and check out their material too.

For more details, head to www.snia.org and check out the video from Storage Field Day 16.

 


Zerto – Not Just Short Term DR Retention Anymore

Last week I had the opportunity to participate in a session with Zerto at their global headquarters in Boston, MA, as part of Storage Field Day 16. This was a session I was really looking forward to, having been a partner for ~3 years and someone who really likes the technology.

The session started with the company’s Chief Marketing Officer, Gil Levonai, going over the core details of how the company has grown and how their block-based Continuous Data Protection technology has evolved over the years.
Zerto Virtual Replication (ZVR) is a disaster recovery product that uses block-based replication, allowing it to be hardware agnostic. This means you can use any underlying storage vendor between sites. Zerto is building out their cloud portfolio to allow replication across multiple hypervisors and public clouds, from vSphere and Hyper-V through to AWS and Azure, and beyond. There are two main components required at both sites for the replication to work: the Zerto Virtual Manager (ZVM) and the Zerto Virtual Replication Appliance (ZVRA). The ZVM is a Windows VM that connects to vCenter/Hyper-V Manager to run the management WebGUI and to present and coordinate the VMs and Virtual Protection Groups (VPGs) between sites. The ZVRAs are loaded onto each hypervisor as an appliance and are used to replicate the blocks across sites while compressing the data. One storage platform they do not currently support is VVOLs; however, they are a company that will develop for the technology as demand arises.
You can set your target RPO to a mere 10 seconds and retain your recoverable data in the short-term journal from 1 hour up to 30 days – meaning you can restore data from a specific point in time rather than from when the backup was last run.
The VPGs are groups of VMs you want to be part of a failover group. This is where you can create a group for, say, a 3-tier app where you need each VM to restart in a certain order at certain intervals.

You can see Gil’s talk here: https://vimeo.com/277582934

So, what was the technical discussion during this session? Mike Khusid (Product Management Leader) took us through their new Long Term Retention (LTR) piece that is currently under development to extend the capabilities of ZVR. This is due to be included in their next major release, Zerto 7. For many enterprises this requirement is driven by the need to meet compliance standards and to be able to retain data for 7 to 99 years. The benefit of this being included in Zerto’s Continuous Data Protection is that you will have an available copy of data created ~3 minutes prior to it being deleted, ensuring it will be recoverable within the set retention period.

This is certainly a great way for Zerto to extend their product set to be able to meet the compliance demands that many companies face. As a partner using Zerto, I know this will be a great piece to be able to pass on to our customers.

You can also catch Mike’s segment here: https://vimeo.com/277583291

Thank you Zerto for taking the time to present at Storage Field Day #16.

Storage Field Day 16 – I’m going on an Adventure!

***Update – Added NetApp session to Timetable.

This is a bit of a late post, however it is done. In less than a week now, I will be boarding my first ever international flight, heading over to Boston, MA, USA for 4 days. Why? I have been invited by the good folks at Gestalt IT and the Tech Field Day (TFD) team to attend as a delegate at Storage Field Day #16 (SFD16).

This is a great honour to be a part of: an opportunity to meet like-minded folk, discuss storage and technology in general while diving deep into the guts of the products, meet vendors and their staff and, most importantly of all, to learn and grow from the knowledge and experience that will come from attending.

What is Storage Field Day?
Well, as this is going to be my first Tech Field Day event, there is only so much I know at this point in time, however I will try and explain it as best I can.
Storage Field Day, along with the Cloud, Networking, Mobility and Data Field Days, is a 2-3 day event where a group of delegates selected by the TFD team are taken to multiple sessions presented by vendors on their technology. Each presenting vendor purchases a time slot in which they discuss their technology, whether their current offering or the latest and greatest coming to market, and possibly their roadmap as well. During the sessions, the delegates have an opportunity to ask the hard questions, discuss their views and experiences, and write up their thoughts on the information presented, all while being completely open and honest.

Storage Field Day #16
Storage Field Day 16 will be a 2 day event, travelling around the city and outskirts of Boston, MA, held between the 27th and 28th of June, 2018. There are currently 6 sponsors announced for the event, each purchasing a session or two to present on their choice of product. The sponsors and session times for #SFD16 are: (taken from the SFD16 page)

Wednesday, Jun 27 8:30 – 9:30 SNIA Presents NVMe Over Fabrics at Storage Field Day 16
Wednesday, Jun 27 10:00 – 12:00 StorONE Presents at Storage Field Day 16
Wednesday, Jun 27 13:15 – 17:15 Dell EMC Storage Presents at Storage Field Day 16
Thursday, Jun 28 8:00 – 9:00 Zerto Presents at Storage Field Day 16
Thursday, Jun 28 10:00 – 12:00 NetApp Presents at Storage Field Day 16
Thursday, Jun 28 13:00 – 15:00 INFINIDAT Presents at Storage Field Day 16
Thursday, Jun 28 16:00 – 18:00 Nasuni Presents at Storage Field Day 16

Each session will be streamed on the #SFD16 page for the viewers at home/office. 

What am I hoping to get out of attending?
I would be lying if I said that I wasn’t nervous, for a couple of reasons. The first is that I have never travelled internationally before and will have to wrangle customs at LAX (of all airports) in a 2.5 hour stopover. The second is the unknown of what happens at a Tech Field Day event. I have watched a number of streams and recordings from previous events, but that only shows so much, although it has certainly given me an idea of how the delegates contribute to the sessions.
I guess my nerves also stem a little from seeing the list of bright minds that will be there as delegates; the list is absolutely packed, and then there is me, but I see that as a good thing. I have a completely open mind about what to expect walking in, and the tips I have received from previous delegates all lead to the same thing: “You will walk away with a completely new outlook on everything in the vendor/technology space.” So I am excited to make the very most of this and hopefully do a good enough job to be invited back again.

Keep an eye on this blog, as there will be lots of content being produced over the next couple of weeks. Also check out #SFD16 on Twitter and make sure you catch the live streams and recordings.

**Disclaimer: All delegates have their airfares, accommodation and travel (and sometimes extra activities) paid for by the vendors presenting.

 

 

VMware Current Software Download and Release Notes

I haven’t blogged in a while, so I thought I would put together a quick list of the most current versions of VMware solutions available. Below you will find links to the downloads and the release notes. These are the current versions as of the date of this post. Hopefully someone will find this a useful reference.

**Please note you will require a valid login/contract to be able to access a number of these solutions for download.

Check out @texiwill’s Linux VMware Software Manager – only requires a my.vmware.com login (a great option if you can’t access downloads through the site):
https://github.com/Texiwill/aac-lib/tree/master/vsm

vCenter
6.0U3e Download
6.0U3e Release Notes

6.5U2 Download 
6.5U2 Release Notes

6.7.0a Download 
6.7.0a Release Notes

ESXi
6.0U3a Download
6.0U3a Release Notes

6.5U2 Download
6.5U2 Release Notes

6.7.0 Download
6.7.0 Release Notes 

NSX-V
6.3.6 Download 
6.3.6 Release Notes 

6.4.1 Download 
6.4.1 Release Notes

NSX-T
2.2 Download
2.2 Release Notes

Horizon
7.5 Download
7.5 Release Notes 

7.4 Download  
7.4 Release Notes 
 

PowerCLI
10.1 Download/Release Notes

PowerNSX
Download/release notes 

vRealize Automation
7.4 Download
7.4 Release Notes

vRealize Operations Manager
6.7 Download
6.7 Release Notes 

vRealize Log Insight 
4.6.1 Downloads
4.6.1 Release Notes  

Site Recovery Manager 
8.1 Download  
8.1 Release Notes  

PowerCLI: Import-vApp OVA: Hostname cannot be parsed.

The other day I was rebuilding my lab using William Lam’s vGhetto vSphere Automated Lab Deployment script for vSphere 6.5. In the past I have run the 6.0 script successfully. As part of the script, there is a nested ESXi appliance OVA that William has made for the deployment; this is used for the configuration of the nested hosts.

This particular time I came across an error right after starting the process, immediately after connecting to the nesting host. It was a bit of a strange error, pointing to the Import-vApp cmdlet but also saying, “Invalid URI: The hostname could not be parsed,” which sounded like a DNS issue. I spent a little bit of time going through my DNS settings, making sure that the computer from which I was running the script was able to resolve the hostname. I moved off my MacBook using PowerCLI Core and tested from my Windows machine using PowerCLI 10.0, and received the same error.

I did some quick research, found nothing related to the specific error message, and started to look at it piece by piece. I decided to pull apart the OVA file and try running just the OVF – SUCCESS! There appears to be an issue with the OVA and the Import-vApp cmdlet in both PowerCLI Core and PowerCLI 10.0. I am yet to test the OVA in vSphere via the Web Client, but I suspect it will work as it should.

To pull apart the OVA, I recommend using 7-Zip to open the .ova file and extract its contents:

  1. Download and install 7-Zip.
  2. Relaunch Explorer.
  3. Right-click the OVA file -> 7-Zip -> Extract to /<foldername>.
  4. Check that the VMDK, OVF and description files are all present.
  5. Change the $NestedESXiApplianceOVA variable in the script to point to the .ovf file (see the sketch below).
  6. Re-run the script.
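If you prefer to do the extraction from PowerShell rather than the 7-Zip GUI, here is a rough sketch. An OVA is just a tar archive, so the tar utility bundled with recent Windows 10 builds (and with macOS) can unpack it; the paths and file names below are placeholders for your own lab.

# Rough sketch: unpack the OVA and point the vGhetto script at the OVF inside it.
# The paths are placeholders for wherever you keep the appliance.
$ovaPath   = 'C:\Lab\Nested-ESXi-Appliance.ova'
$extractTo = 'C:\Lab\Nested-ESXi-Appliance'
New-Item -ItemType Directory -Path $extractTo -Force | Out-Null
tar -xf $ovaPath -C $extractTo
# Update the deployment script variable so Import-vApp uses the OVF instead of the OVA
$NestedESXiApplianceOVA = (Get-ChildItem -Path $extractTo -Filter '*.ovf').FullName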

Configure PowerCLI and PowerNSX on macOS

A couple of months back, PowerShell Core on Mac and Linux became mainstream after the success of its beta. This has allowed modules for many products out there to be extended to be cross-platform as well. The two main modules I want to cover are PowerCLI and PowerNSX, installing both from the PowerShell Gallery.

To get started, you will need to go to the PowerShell GitHub repo and download the PowerShell install package that is right for your system.

Once the package is installed, open up Terminal and type pwsh to launch PowerShell.

The next module you will need to install is PowerCLI 10.0, which is the full-feature install.

In your PS terminal, enter the following:

PS>Install-Module -Name VMware.PowerCLI -Scope CurrentUser

If you receive an invalid certificate error when connecting to a server, you can bypass this by using the below.

PS>Set-PowerCLIConfiguration -InvalidCertificateAction Ignore
To confirm the module is installed, you can run Get-Module -ListAvailable VMware.PowerCLI

Lastly, you will want to install PowerNSX. There is a whole site full of information regarding PowerNSX and how to use it.

 

The easiest way to install PowerNSX is to run:
PS>Install-Module PowerNSX
PS>Import-Module PowerNSX
Again, to confirm installation, run Get-Module and check that PowerNSX is listed in the output.
That’s it, PowerCLI and PowerNSX are now installed.
To keep the versions up to date, you can run the Update-Module cmdlet.
PS>Update-Module VMware.PowerCLI
PS>Update-Module PowerNSX
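From here, a quick hedged example of putting both modules to work: connect to vCenter with PowerCLI, register NSX through PowerNSX (which can discover the NSX Manager via vCenter), then run one command from each module as a smoke test. The server name below is a placeholder for your own lab.

# Hedged example only - vcsa.lab.local is a placeholder for your own vCenter
Connect-VIServer -Server vcsa.lab.local              # PowerCLI connection (prompts for credentials)
Connect-NsxServer -vCenterServer vcsa.lab.local      # PowerNSX connection via vCenter
Get-VMHost | Select-Object Name, ConnectionState     # PowerCLI smoke test
Get-NsxLogicalSwitch | Select-Object Name            # PowerNSX smoke test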

VMware vExpert 5th Year in a Row

4 years ago, I decided to take the plunge and apply for my first year as a vExpert. I thought I was just shooting into the open air, not thinking I would receive an award. I had only just started getting into virtualisation, having only done a small amount at work, but I was enjoying the technology so much I decided I would start blogging along my journey. Not long after starting that path, I started to attend our local VMUG chapter and then went on to be a leader for a couple of years. More and more I grew into the VMware virtualisation family.

It is a great honour today to accept my 5th year as a vExpert. This program has been running for 10 years now and is there to acknowledge those who give back to the VMware community. This program has given me so much in terms of resources and community support to get the most out of my virtualisation journey and to continue to grow and learn more each day.

Why is this program so special? I’m glad you asked! The program is not only designed to publicly acknowledge those who spend their time blogging about why you should have High Availability turned on, but also to use the vExperts as a valuable resource for testing betas for VMware and providing feedback to improve the GA versions. As mentioned in the previous paragraph, the program enables each vExpert to engage in the community as one, and this encourages one another to keep pushing the limits of their blogging, their knowledge and their skills. The team we have is a reliable and trusted group who, individually but also together, produce content to help the community in their own environments.

There are additional benefits we receive as vExperts, such as invites to internal VMware calls, private beta testing and licenses to be able to continue testing and producing content. These benefits only push you to work harder and create bigger and better content.

I love being part of this select group, and I want to thank Corey Romero and the vExpert/Community team at VMware for giving me and all of this year’s vExperts the opportunity to be a part of the program once again.

Configure ESXi 6.5 Autostart

I’ve recently rebuilt my homelab, and as part of bringing up a nested lab, I like to have my nested host VMs power on automatically, as I do for my VCSA. I configured (or at least I thought I had) the Autostart option on all 3 of the nested hosts. After powering on the physical host, I waited approximately 10 minutes for it all to boot up, which is about normal. Unfortunately, I could not connect to anything but the physical ESXi host, where I found all 3 VMs powered off, all with the Autostart option set on them.

As you can see above, all VMs have Autostart enabled on them with their start order, and yet they are all powered off. What I found was that there is a separate host-level Autostart setting that needs to be enabled before the start order will operate.

To enable:
Select Manage -> Autostart -> Edit Settings
Under Settings, select Enable = Yes -> Click Save

Once completed, restart your ESXi host to confirm the settings are operational.
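If you would rather script this than click through the host UI, a rough PowerCLI sketch is below. The host and VM names are placeholders for my lab; adjust the delay and start actions to suit yours.

# Rough PowerCLI sketch - esxi01.lab.local and nested-esxi-* are placeholder names
Connect-VIServer -Server esxi01.lab.local
# Enable the host-level Autostart setting (the piece I had missed)
Get-VMHostStartPolicy -VMHost esxi01.lab.local | Set-VMHostStartPolicy -Enabled:$true -StartDelay 120
# Make sure each nested host VM is set to power on with the host
Get-VM -Name 'nested-esxi-*' | Get-VMStartPolicy | Set-VMStartPolicy -StartAction PowerOn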

Extended Unstun Times with VVOLs and Veeam Proxy Fixed in 9.5 Update 3

Recently Veeam released Veeam Backup & Replication 9.5 Update 3. This update has brought a number of fixes and additional features that you can read about in Anthony Spiteri’s post, VEEAM BACKUP & REPLICATION 9.5 UPDATE 3 – TOP NEW FEATURES.

This particular release brings a welcome fix for backing up VVOL-backed VMs when using a proxy server. The symptoms occur when you back up a VM that is using VVOL storage through a proxy server with hot-add: the snapshot removal is attempted before the hot-added disk finishes its unbind process, and when this occurs the VM can freeze anywhere from a few seconds up to 80+ seconds. These issues were not present when the backup proxy was on the same host as the VM being backed up. The workaround prior to this release was to run in NBD mode, which uses the host as a proxy and is a slower method.

So, what am I looking for? The most obvious symptom is when your VM freezes and cannot perform any actions, even though the performance graphs, etc. all show a healthy VM. The other is in your VM log file, where you will find a line similar to the below. This is a standard line in your log; the difference is the length of time the process runs for. In this sample: 56 seconds.

Checkpoint_Unstun: vm stopped for 56223314 us
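If you want to scan a log rather than eyeball it, the short sketch below pulls out the stun entries and converts the microsecond values into seconds. The log path is a placeholder; point it at a copy of the VM's vmware.log.

# Rough sketch: find Checkpoint_Unstun entries and convert microseconds to seconds
Select-String -Path 'C:\temp\vmware.log' -Pattern 'Checkpoint_Unstun: vm stopped for (\d+) us' |
    ForEach-Object { '{0:N1} seconds stunned' -f ([int64]$_.Matches[0].Groups[1].Value / 1e6) }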

In Veeam B&R 9.5 U3, you can now add a registry value to set a wait time that allows the unbind from the proxy to complete before the snapshot is removed. To do this, open up your Veeam B&R server -> open RegEdit -> navigate to:

HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\

Create a new REG_DWORD: HotaddTimeoutAfterDetachSec
Using decimal, set your wait time (value) in seconds to however long you require.
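If you prefer to create the value from PowerShell on the Veeam server, a minimal sketch is below; the 180-second timeout is only an example value, so set whatever suits your environment.

# Minimal sketch: create the DWORD from an elevated PowerShell session (180 is an example value)
New-ItemProperty -Path 'HKLM:\SOFTWARE\Veeam\Veeam Backup and Replication' -Name 'HotaddTimeoutAfterDetachSec' -PropertyType DWord -Value 180 -Force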

Once added, restart your server/services for the settings to take effect. After testing overnight with a few backup jobs, I re-enabled all jobs to run through proxies and have not seen any issues yet.

 

You, Your Health and the Datacenter

Just yesterday I finally completed a marathon few months of downsizing a live environment in one of our datacenters. This was a huge project with a very ambitious deadline that still required time spent in the office each day doing BAU. To put my workload into perspective, BAU consists of customer support tickets that usually roll into the next day and so on; just keeping on top of that is a full-time job, and adding in a large datacenter consolidation with multiple parties involved more than doubles that workload.

Like any project, there are lessons that are learnt and usually incorporated into the next project. Some of the mistakes we make are ones that are obvious and just plain common sense, but due to our own determination (or should we say, stubbornness!) we tend to make them without realising.

One of the biggest mistakes I made during this project was not looking after my health. I pride myself on the fact that I don’t get sick (aside from the minor runny nose or cough), but the reality is that we are not invincible. A few weeks back, we had a nasty virus go around the office. I was just coming off a large stint of after-hours work and was physically drained, and surprise surprise, I got sick. I was so sick that I ended up taking days off, which is a big deal for me. I then had a week where I took it steady and paced myself, then came the last couple of weeks and I went in guns blazing, feeling on top of the world, to meet the deadline. Unfortunately, I pushed myself too hard: I did 60+ hours in 4 days, starting Sunday and finishing Thursday morning. I would go to the datacenter, get a large amount of work done, then go home for an hour’s sleep, get up, get my daughter ready for the day and go to work. I would then go home, have dinner, put my daughter to bed and go back out to the datacenter. I started to make simple mistakes, but pushed on. Come Friday, I got sick again, and over the next 4 days I lost 4kg, and I only weighed 70kg to start with.

This week I spent several days in the datacenter to complete the project, but this time I put my BAU on hold so I could pace my days and get a good night’s sleep, limiting the mistakes to almost none.

The project is now complete, and the biggest lesson I have taken away from it is to look after yourself and know your limits. We all strive to be the best we can; we want to show our peers that we can do almost anything to get the job done, but the risk we take is dangerous. The more tired we become, the more mistakes we will make, ranging from possible customer outages through to causing physical injury to yourself or others.

So, from my recent experience, I have compiled a list of things that I think are vital to try and keep yourself happy, healthy and on top of your game.

  • Take regular breaks and keep water intake up:
    When working in a datacenter, you are in a dry environment where you are constantly moving between cold and hot aisles. Ensure you keep your fluids up; you don’t want to suddenly collapse from dehydration in the middle of the datacenter.
  • Ensure you get plenty of sleep and pace yourself:
    No project or job is ever worth your life. When you are tired you will make mistakes that can either impact the company or may cause an accident where yourself or someone may get injured.
  • Don’t be afraid to ask for help:
    If you find yourself running out of time, being unable to complete all the tasks, or just needing a moment to take a breath, ask for help. There is no shame in needing assistance. We are all human.
  • Plan to spend time with the family:
    I cannot stress enough that spending time with family was a necessity to stay happy and to stop the mind focusing on the work that was ahead. Clearing the mind is essential for when you are back at the task and needing to focus.

If you can stick to these guidelines, you will not only succeed at your project, but you will be happier and healthier at the end of it. If you have a peer you are working with, take the time to remind them every few hours to take a quick 5 minute break; it could be the difference between working on another project with them again or not.