Tech Field Day 18 – Here we come!

That’s right, another year and another round of Tech Field Day events, and this year I am lucky enough to be heading back to the US for the title event – Tech Field Day 18! This month, TFD is landing in Austin, Texas, running for two days (7–8 Feb). Twelve delegates from around the world are flying in for this event. They will be travelling around to visit four different vendors to learn, discuss and publish content about technologies that are either currently on the market or soon to be released.

This is a great opportunity for the vendors to get real-world feedback from those who implement and manage various technologies in many different environments. It is also a chance for companies to get into the details of their products and do technical deep dives without the marketing spin. Tech Field Day is all about the tech.

We will hear from Datera, NetApp, SolarWinds and VMware. Each session will be live streamed on Facebook and on the Tech Field Day site. You can also follow the live feed on Twitter via the #TFD18 hashtag – and ask your own questions as well 🙂 .

For more information about the event or about Tech Field Day and GestaltIT, head over to the website and check out the links!

Veeam Backup & Replication 9.5 Install – Back to Basics.

Sometimes it is good to go back to basics: some things change between versions of software, and the installation process isn’t always the same. Sometimes you may have never installed that piece of software before, but the only installation guide is three versions out of date and the product has since introduced, say, a requirement for IIS for a new web portal. So here we go with the first post of my new Veeam series, “How to install Veeam Backup and Replication”.

  1. Download the latest Veeam VBR ISO from the support download page. 
  2. Depending on the version downloaded, you may need to extract the ISO from the ZIP.
  3. Mount the “VeeamBackup&Replication” ISO and run setup.exe.
  4. Click the install icon for Veeam Backup & Replication.
  • Read and accept the EULA (you can’t proceed if you don’t agree). 
  • If you have a license file, attach it. Otherwise, you will get a 30-day free trial.
  • Choose your components to install. 
    • Veeam Backup & Replication – This is the main application for configuring and running backup & replication tasks.
    • Veeam Backup Catalog – Used to index the guest OS files inside a backup for easy restoration. 
    • Veeam Backup & Replication Console – The console is the GUI used to perform tasks and configure Veeam Backup & Replication. 
  • The System Configuration check is used to ensure the correct components are available and installed ready for Veeam B&R to install and configure.
    If there are components showing as Failed, click on the “Install” button to get them installed.  
  • Once installed, re-run the check and ensure each component passes. Click “Next”. 
  • Review the default configuration; this includes directory locations, ports and SQL instances. You can select “Let me specify different settings” if you need to make any changes. Click Install to continue. 
  • Wait until installation completes. The installation will take approx. 10 minutes, and if there is an update as part of the install, you will see this occur towards the end of the process. 
  • You will be notified once completed. 
  • Double-click the Veeam icon on the desktop to open the console, enter your username and password and click Connect.
  • By default, the Component update will open up and require you to run the update on any components that require it. They will be listed. Select and apply the updates.
  • Under “Inventory” > “Virtual Infrastructure”, select “ADD SERVER”, then select the type of server (in this case, VMware vSphere). NB: You will need to run this process before you can set up your proxies.
  • Input your vCenter (or host) details for Veeam to connect to. 
  • Add your server credentials into the credential manager. 
  • Trust the certificate if it is Self-signed. Please see KB2806 regarding 9.5U3 self-signed Cert bug. 
  • Confirm settings are all correct and click “finish”. 
  • Confirm under “Virtual Infrastructure” > “VMware vSphere” that you can see your vCenter hierarchy. 
  • Under “Backup Infrastructure”, select “Backup Proxies”.

  • Add in your proxy server’s IP/hostname and a description
  • Add your credentials for the proxy server, or reuse pre-configured credentials.
  • Wait for components to all install. 
  • Confirm the Proxy service details
  • Once you click “Finish”, you will return to the VMware Proxy screen, where you can set your transport mode and datastores. 
  • Select “Choose” for Transport Mode. This will show you a number of options with descriptions to help you choose the correct transport mode to meet your infrastructure requirements. If unsure, select Automatic Selection.
  • Once you continue on, you will have the opportunity to set up traffic rules where you can create bandwidth restrictions. Here you can get granular and create policies for certain IPs. 
  • Once you have finished setting up your proxy, you will then need to set up your repository for storing your backups and/or replications. Select “Backup Repositories” and Set up a new repository. Start off by giving your repo a name. 
  • Veeam offers support for a number of different types of repositories. Select the best option for your infrastructure. (This tutorial will just be a Windows Server.) 
  • Under “Server”, choose the repository server from the list or click “Add New…”. Once added, click the “Populate” button to see the capacities and free space available. Once identified, select the disk you want to use. 
  • In your repository settings, you can set the path your backups will go to. Once set, click the “Populate” button again. Veeam also offers Load Control to assist with bandwidth and disk performance for your backups; use and adjust as required. 
  • With Windows Server 2016, Microsoft introduced ReFS, their new volume format that allows for greater capabilities. Veeam recognises these abilities and advises you of the benefits of using ReFS over NTFS. Proceeding without ReFS will not prevent you from using the datastore. NB: ReFS is reasonably resource heavy.
  • Under Mount Server, you can set which server will take the load when mounting a restore point with Instant VM Recovery, SureBackup or On-Demand Sandbox. If you have the ability to provide write caching for the mount server, you can enable vPower NFS Service to assist with those mount points.
  • Once all configured, the review stage will confirm if any of the additional components will need to be installed on the new Backup Repository. Once confirmed, you can apply and let Veeam set up the repository. 
  • During the apply process, you can confirm the completion of each step. 
  • Once your infrastructure is configured, it is down to business and time to create and test your first backup job. Under “Backups”, right-click and select “New Backup Job.” Set a name, then select the virtual machines you would like to back up.  
  • Click “Recalculate” to ensure the total size is updated to reflect the size of the disks to back up. You can also exclude objects from being backed up. 
  • Select your proxy; if you have a proxy server set up, choose that one. Otherwise, if you did not set up a proxy earlier, you can use the default VMware backup proxy, though this will be a slower process. 

  • Select your repository for the backup job. On this screen, you can also set the number of restore points you want to keep, and any advanced settings such as additional scripts, email notifications and backup modes (incremental, active full, etc.). 
  • The Guest Processing page is used to configure the backup job to leverage application-aware processing, as well as file indexing, exclusions and more.
  • The schedule is fairly self-explanatory. Here you can configure how often the job will run and how many retry attempts before failing the job. 
  • Once all settings have been configured, apply the configuration and, if appropriate, run the job once created. 

  • Watch the progress and if there are any errors, adjust your components where required. 
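The retry behaviour described in the schedule step boils down to simple logic. A minimal sketch in Python – the function and parameter names here are illustrative only, not part of Veeam’s actual tooling:

```python
import time

def run_with_retries(job, max_retries=3, retry_wait_minutes=10, sleep=time.sleep):
    """Run a backup job, retrying on failure up to max_retries attempts,
    mirroring the schedule's 'retry attempts before failing the job' option."""
    attempts = []
    for attempt in range(1, max_retries + 1):
        ok = job()                          # the job callable returns True on success
        attempts.append(ok)
        if ok:
            return True, attempts           # success: stop retrying
        if attempt < max_retries:
            sleep(retry_wait_minutes * 60)  # wait before the next attempt
    return False, attempts                  # every attempt failed: fail the job

# Example: a job that fails twice and succeeds on the third attempt.
outcomes = iter([False, False, True])
ok, attempts = run_with_retries(lambda: next(outcomes), sleep=lambda s: None)
# ok is True; attempts == [False, False, True]
```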

Rubrik Full Bare Metal Recovery

Recently I wrote a post on Rubrik’s latest major release, 5.0 Andes. That’s five major releases since the company’s beginnings in January 2014 and shipping its first Brik in November of the same year. Much like the appliances themselves, that is an incredible speed to get up and operational and out to customers! You can check out the full timeline here.


I do however want to dig a little bit deeper into some of the items that were released as part of the 5.0 announcement. There were a number of things I had already mentioned, but for this post I want to go further into the Windows Bare Metal Recovery.

So why do we want to deal with physical servers – isn’t everything virtualized? Well, no. Not everyone’s environment is virtualized; there are many reasons why an enterprise may need to continue to run physical machines, such as licensing or hardware requirements, and it is imperative that those machines are backed up accordingly and, even more so, easily recoverable in the event of a failure.

Rubrik has now met this need with its full Windows Bare Metal Recovery (BMR) feature as part of the Rubrik 5.0 Andes release, backing up at block level while ensuring even the MBR/GPT partitions are secured. BMR isn’t new to Rubrik: it started out protecting only file sets and files, and in 4.2 it could protect volume partitions. Now, in 5.0, Rubrik offers full BMR by introducing the Volume Filter Driver (VFD), which can optionally be installed to work with the Volume Shadow Copy Service (VSS) and is used for Changed Block Tracking (CBT) to decrease backup times.

On first run, the Rubrik Backup Service (RBS) captures a crash-consistent, volume-based snapshot, which is backed up onto the Brik and stored as a Virtual Hard Disk v2 (VHDX), making it easily available for P2V. From then on, incremental snapshots are taken, with the VFD creating a bitmap of changed blocks to ensure faster backup windows.
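A toy model of that changed-block bitmap idea in Python – purely illustrative, not Rubrik’s implementation:

```python
def full_backup(volume):
    """First run: a crash-consistent copy of every block."""
    return dict(enumerate(volume))

def incremental_backup(volume, dirty_bitmap):
    """Later runs: copy only the blocks the filter driver flagged as changed."""
    return {i: volume[i] for i, changed in enumerate(dirty_bitmap) if changed}

volume = ["boot", "data1", "data2", "data3"]
base = full_backup(volume)              # everything is copied once

# Between snapshots, the driver records writes as a bitmap of dirty blocks.
volume[2] = "data2-modified"
dirty = [False, False, True, False]

delta = incremental_backup(volume, dirty)
# delta == {2: "data2-modified"}: only one block crosses the wire
```

Because only the flagged blocks are read and transferred, the incremental window shrinks with the change rate rather than the volume size.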

Backup is incredibly simple, and recovering from a disaster is just as good. You will need to use the Rubrik WinPE Tool to help build your WinPE boot image, but once created, you can boot your target system into the PE. This environment allows you to log in, mount the Samba share and kick off the restore PowerShell script. BMR also supports Live Mount, which lets you mount the volume snapshot you want to restore from. As a first step, the script obtains and prepares your disk layouts before copying down your boot partitions and data. Once the restore is complete, reboot, then log in and confirm that all the volumes are intact and all data is there and accessible. If you are migrating to different physical tin, sensible target hardware needs to be considered, but there is no requirement to match the original source hardware.

One last feature of the 5.0 release is the ability to migrate from a physical machine. There are three options for migration:

  • P2V: Migrate from physical to virtual, either VMware vSphere or Microsoft Hyper-V
  • P2P: Migrate from physical to similar or dissimilar physical hardware
  • P2C: Migrate from physical to the cloud, whether it is Microsoft Azure or AWS

It really is just that easy, and those who are still bound by physical servers can breathe again knowing that Rubrik can take care of their full BMR needs, as well as meet enterprise requirements for off-site long-term storage by pushing to the cloud.

Rubrik announces Andes 5.0

Alright, backup and take a seat for this one! See what I did with the title? Ok, maybe I’m a little excited to write a post this time. Not only is this my first briefing with Rubrik, but it is also a jam-packed new release with some amazing new features. In this post, I am just going to briefly mention the new features and what to expect, as I hope over the next week to dive deeper into a few of them.

So, starting from the top: since 2015, when Rubrik released v1.0, they have been making significant headway with new features, shipping almost one major release each year. A bit over three years since that first release, here we are with Andes 5.0 – it is staggering to see a company raising the bar this quickly.

As I mentioned above, this is just a quick overview to get your mouth watering, so I’ll move right on to all the gossip!

Rubrik is really looking into the Digital Transformation buzz that we all see happening at the moment, and they are rapidly adapting to keep up and get in front of the market to meet the Digital Transformation demand.

When you work with databases, you know how quickly they update and change, and being able to recover your database in the event of a disaster is time critical. That’s where Rubrik now has Instant Recovery and Live Mount to achieve a near-zero Recovery Time Objective. This is also great for test/dev, as the databases can be cloned almost instantly.

NAS Direct Archive:
Sometimes you may already have a file store in place backing up/replicating from an onsite NAS. This is good, but it can be better with Rubrik’s NAS Direct Archive. This new feature allows you to continue to back up your NAS to a remote store (cloud, NFS store, or object store), but adds extra simplicity by crawling through the data for you and indexing it automatically, saving you time when you need to recover from that remote storage.
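Conceptually, that automatic indexing is a one-time crawl of the archived tree into a searchable catalogue, so finding a file never means trawling the remote store. A rough Python sketch – the function names are mine, not Rubrik’s:

```python
import os
import pathlib
import tempfile

def crawl_index(root):
    """Walk the archived tree once and record path -> size, so later
    searches never have to touch the remote store."""
    index = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = pathlib.Path(dirpath) / name
            index[p.relative_to(root).as_posix()] = p.stat().st_size
    return index

def search(index, term):
    """Find archived files by name using only the local index."""
    return sorted(p for p in index if term in p)

# Example: build a tiny "archive" and index it.
root = pathlib.Path(tempfile.mkdtemp())
(root / "projects").mkdir()
(root / "projects" / "film.mov").write_bytes(b"x" * 10)
(root / "notes.txt").write_text("hello")

index = crawl_index(root)
hits = search(index, "film")
# hits == ["projects/film.mov"]; index["notes.txt"] == 5
```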

Elastic App Service:
The EAS is a new storage repository that provides a highly available, storage-efficient, distributed file repository over NFS or secure SMB. This platform allows nearly any application or OS to write directly to the volume, opening and closing the connection with scripted API calls. The EAS works like any other plain old NFS/SMB share, but when the connection is closed, a point-in-time snapshot is taken and the data is secured. Rubrik has again made it easy by creating pre-tuned default tags for different databases; you can tag the volume with these predefined tags and let the Brik do the rest.
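The open-write-close-snapshot cycle can be modelled in a few lines of Python – a toy model of the behaviour described above, not Rubrik’s API:

```python
import copy

class ElasticVolume:
    """Toy model of a snapshot-on-close share: writes land on a live copy,
    and closing the (scripted/API) connection freezes a point-in-time snapshot."""
    def __init__(self):
        self.live = {}
        self.snapshots = []
    def write(self, name, data):
        self.live[name] = data
    def close(self):
        # Closing the connection is what triggers the snapshot.
        self.snapshots.append(copy.deepcopy(self.live))

vol = ElasticVolume()
vol.write("dump.bak", b"v1")
vol.close()                      # first close secures v1
vol.write("dump.bak", b"v2")
vol.close()                      # second close secures v2
# vol.snapshots[0]["dump.bak"] is still b"v1" even though the live copy changed
```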

SAP HANA Protection Certified Solution:
Rubrik has now been awarded SAP HANA Certified Solution status, leveraging the Backint API to assist with backup and restore using SAP HANA Studio or Cockpit.

Microsoft SQL Enhancements:
MS SQL has been supported since version 3.0, however there have been some major updates to the platform to make the process more efficient. Changed Block Tracking has now been enabled with a new filter driver; this ensures that you only back up the blocks that have changed, decreasing backup time. Rubrik can also now invoke VSS and take snapshots of the database to get point-in-time consistent backups.

Windows Server Bare-Metal Recovery:
Just when the world was moving away from physical – as virtual infrastructure offers more in terms of high availability, lower power consumption, better resource utilisation, etc. – Rubrik has come out with bare-metal backup and recovery, protecting those who still run in a physical world. The great thing is that your Brik does all the work for you, taking a backup of the MBR and GUID Partition Table while tracking the changed blocks at the kernel level. In the event of a failure, there are some manual steps, including booting into WinPE; however, this is a much more efficient and accurate way to recover than restoring from tapes!
There is also the added benefit that this can be used for P2V – so not all hope is lost for those still running physical workloads!

Polaris protection for Office365:
Earlier this year, Rubrik announced their Polaris SaaS platform, offering simple policy-based management for backup and recovery of your Office365 environment. This allows customers to manage their O365 backup and recovery policies through the Polaris interface, using the same SLA policies as their on-prem solution. This integration also allows for global file search and single-object recovery.


As mentioned, this is just a brief overview for now. There is a lot crammed into this release, too much to put into one post. Stay tuned and continue to check out other posts around this release.

To learn more check out: 


New Release VMware NSX Books – Free Download

Following on from last year’s free NSX books that were given away at VMworld 2017 and also made available for download, there are another two new releases now available for download.

VMware NSX® Multi-site Solutions and Cross-vCenter NSX Design, by Humair Ahmed with contributions from Yannick Meillier


With over 300 pages between the two books full of great content, they are two books well worth having in your collection.

Nasuni – Global Object File Storage on Steroids – #SFD16

This was one of my favourite presentations. These guys are not messing around, their product is important to them and their message was clear that they mean business. Not only did the panel provide good feedback during the session, but conversations continued afterwards, and this showed that they really cared about the community’s thoughts and ideas.

So, who are Nasuni and what do they do? Well, that is a very good question. Nasuni has built their product from the ground up: they provide a cloud and on-premises global object file storage system running on their patented UniFS file system. Nasuni’s main focus is on the ever-growing size of files – from Photoshop to AutoCAD, audio to UltraHD films and more – storing them in a central location in the cloud to provide quick and efficient access, as well as file redundancy. The architecture behind this global object storage is a hub-and-spoke approach: a central location maintains the “cold” storage, and each spoke is a branch/remote office accessing the files. Each office can have either a virtual or physical Nasuni appliance for caching, allowing “hot” files to be accessed much more quickly, while integrating with AD or LDAP for security.

Nasuni believe that storage requirements for individual files are increasing dramatically and that there should be no limits on whether those files can be stored, regardless of their size. UniFS has no limits on maximum file size, number of files per volume, total volume size or the number of snapshots on a volume. All these open limits aid in the success of Nasuni, along with their file collaboration technology.

The on-premises cache appliance allows users to pre-seed files from the global object storage so that they are available when users need them. For example, if a 4GB file is required Monday morning, the user can start the pre-seed on Sunday. EA Games proved that Nasuni can make a significant difference to how an organisation works with files, going from testing approximately 3 game builds per day to more than 100. The appliance also holds onto changes and files requiring upload when the link to the global system is down.

Each file is deduplicated and compressed, and encrypted with a client-controlled key, to ensure data is transferred optimally and securely. When a file is in use, it is locked, and the next user is advised of the lock; once released, the file remains locked for a short additional time to ensure it has replicated back to the global repository and been confirmed before becoming available to the next person.
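That lock-then-hold behaviour can be sketched as a small state machine in Python – my own toy model of the workflow described above, not Nasuni’s code:

```python
class GlobalFile:
    """A file is locked while in use, and stays unavailable for a short
    hold period after release until replication back to the hub is confirmed."""
    def __init__(self):
        self.locked_by = None
        self.awaiting_replication = False
    def acquire(self, user):
        if self.locked_by or self.awaiting_replication:
            return False                      # next user is told the file is locked
        self.locked_by = user
        return True
    def release(self):
        self.locked_by = None
        self.awaiting_replication = True      # hold until the hub confirms the copy
    def confirm_replication(self):
        self.awaiting_replication = False     # now the next person can take the lock

f = GlobalFile()
f.acquire("alice")
blocked = f.acquire("bob")        # False: alice holds the lock
f.release()
still_blocked = f.acquire("bob")  # False: replication not yet confirmed
f.confirm_replication()
now_ok = f.acquire("bob")         # True
```

The hold period is what prevents a second office from editing a copy that hasn’t yet landed back in the global repository.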

If your organisation has multiple offices and you’re looking at centralising your files, whether small or large, I highly recommend checking out more of what Nasuni has to offer.

See below for the presentations from Storage Field Day 16, both the overview and technical deep dive.




Technical Deep Dive

An Introduction to SNIA – SFD16

The first session to kick off SFD16 was presented by SNIA (Storage Networking Industry Association.) “The SNIA is a non-profit global organization dedicated to developing standards and education programs to advance storage and information technology.” –

This session was an introduction to SNIA and the role they play in creating technical standards and educational materials. SNIA as a whole works towards bringing vendors into a neutral zone of standards, simplifying technology and creating boundaries to work within. SNIA has run approximately 50 forums in the last three years, where webinars and presentations allow anyone to learn about a particular storage technology. They also provide a plethora of educational items – white papers, articles, blogs, IT training, conferences and certification courses – all free, run by SMEs from their own companies.

SNIA focuses on many areas, from physical storage, persistent memory, data security, network storage and backup, to much, much more. In the words of Dr J Metz, “Generally speaking, if you can think of it, SNIA has a project that’s working on it, or looking to promote it or educate about it.”

Having learnt more about SNIA and the great work they are doing to help promote and educate about storage, I have gone and looked into a number of their education items, particularly the white papers. I encourage you to also head over and check out their material.

For more details, head to and check out the video from Storage Field Day 16.


Zerto – Not Just Short Term DR Retention Anymore

Last week I had the opportunity to participate in a session with Zerto at their global headquarters in Boston, MA, as part of Storage Field Day 16. This was a session I was really looking forward to, having been a partner for ~3 years and someone who really likes the technology.

The session started with the company’s Chief Marketing Officer, Gil Levonai, going over the core details of how the company has grown and how their block-based Continuous Data Protection technology has evolved over the years.
Zerto Virtual Replication (ZVR) is a disaster recovery product that uses block-based replication, allowing it to be hardware agnostic. This means you can use any underlying storage vendor between sites. Zerto is building out their cloud portfolio to allow replication across multiple hypervisors and public clouds, from vSphere and Hyper-V through to AWS and Azure, and beyond. There are two main components required at both sites for replication to work: the Zerto Virtual Manager (ZVM) and the Zerto Virtual Replication Appliance (ZVRA). The ZVM is a Windows VM that connects to vCenter/Hyper-V Manager to run the management WebGUI and present and coordinate the VMs and Virtual Protection Groups (VPGs) between sites. The ZVRAs are loaded onto each hypervisor as appliances and are used to replicate the blocks across sites while compressing the data. One storage platform they do not currently support is VVols; however, they are a company that will develop for the technology as demand arises.
You can set your target RPO to a mere 10 seconds and retain your recoverable data in the short-term journal from 1 hour up to 30 days – meaning you can restore data from a specific point in time rather than from when the backup was last run.
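A journal like that is essentially a time-ordered list of checkpoints pruned to the retention window, with a restore picking the newest checkpoint at or before the requested moment. A rough Python sketch, illustrative only and nothing to do with Zerto’s internals:

```python
import bisect

class Journal:
    """Short-term journal: timestamped checkpoints inside a retention window,
    restorable to any point in time rather than just the last run."""
    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self.points = []                      # (timestamp, state), in time order
    def checkpoint(self, ts, state):
        self.points.append((ts, state))
        # age out anything older than the retention window
        self.points = [(t, s) for t, s in self.points if ts - t <= self.retention]
    def restore(self, ts):
        # newest checkpoint at or before the requested time
        i = bisect.bisect_right([t for t, _ in self.points], ts) - 1
        return self.points[i][1] if i >= 0 else None

j = Journal(retention_seconds=3600)
for t in range(0, 50, 10):                    # one checkpoint every 10s (a 10s RPO)
    j.checkpoint(t, {"version": t})
state = j.restore(25)
# state == {"version": 20}: the checkpoint just before the requested time
```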
The VPGs are groups of VMs you want to be part of a failover group. This is where you can create a group for, say, a 3-tier app where you need each VM to restart in a certain order at certain intervals.
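The boot-order idea behind a VPG can be sketched like this – my own illustration of the concept, not Zerto’s configuration format:

```python
def failover_boot_plan(tiers):
    """Flatten ordered tiers into (start_time_seconds, vm) pairs:
    each tier waits its configured delay after the previous one starts."""
    plan, t = [], 0
    for tier in tiers:
        t += tier.get("delay", 0)     # wait before booting this tier
        for vm in tier["vms"]:
            plan.append((t, vm))
    return plan

# A classic 3-tier app: database first, then app servers, then web.
vpg = [
    {"vms": ["db01"], "delay": 0},
    {"vms": ["app01", "app02"], "delay": 120},
    {"vms": ["web01"], "delay": 60},
]
plan = failover_boot_plan(vpg)
# [(0, "db01"), (120, "app01"), (120, "app02"), (180, "web01")]
```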

You can see Gil’s talk here:

So, what was the technical discussion during this session? Mike Khusid (Product Management Leader) took us through the new Long Term Retention (LTR) piece currently under development to extend the capabilities of ZVR, due to be included in the next major release, Zerto 7. For many enterprises this requirement is driven by the need to meet compliance standards and retain data for 7 to 99 years. The benefit of this being included in Zerto’s Continuous Data Protection is that you will have an available copy of data created ~3 minutes prior to deletion, ensuring it is recoverable within the set retention period.

This is certainly a great way for Zerto to extend their product set to be able to meet the compliance demands that many companies face. As a partner using Zerto, I know this will be a great piece to be able to pass on to our customers.

You can also catch Mike’s segment here:

Thank you Zerto for taking the time to present at Storage Field Day #16.

Storage Field Day 16 – I’m going on an Adventure!

***Update – Added NetApp session to Timetable.

This is a bit of a late post, however it is done. In less than a week, I will be boarding my first ever international flight, heading over to Boston, MA, USA for four days. Why? I have been invited by the good folks at GestaltIT and the Tech Field Day (TFD) team to attend as a delegate at Storage Field Day 16 (SFD16).

This is a great honour to be a part of: an opportunity to meet like-minded folk, discuss storage and technology in general while diving deep into the guts of the products, meet vendors and staff, and most importantly of all, to learn and grow from the knowledge and experience that will come from attending.

What is Storage Field Day?
Well… as this is going to be my first Tech Field Day event, there is only so much I know at this point, but I will try to explain it as best I can.
Storage Field Day, along with the Cloud, Networking, Mobility and Data Field Days, is a 2-3 day event where a group of delegates selected by the TFD team are taken to multiple sessions presented by vendors on their technology. Each vendor presenting purchases a time slot in which they discuss their technology – current, or the latest and greatest coming to market – and possibly their roadmap. During the sessions, the delegates have an opportunity to ask the hard questions, discuss their views and experiences, and write up their thoughts on the information presented, while being completely open and honest.

Storage Field Day #16
Storage Field Day 16 will be a 2-day event, travelling around Boston, MA and its outskirts, held between the 27th and 28th of June, 2018. There are currently 7 sponsors announced for the event, each purchasing a session or two to present on their choice of product. The sponsors and session times for #SFD16 are: (taken from the SFD16 page)

Wednesday, Jun 27 8:30 – 9:30 SNIA Presents NVMe Over Fabrics at Storage Field Day 16
Wednesday, Jun 27 10:00 – 12:00 StorONE Presents at Storage Field Day 16
Wednesday, Jun 27 13:15 – 17:15 Dell EMC Storage Presents at Storage Field Day 16
Thursday, Jun 28 8:00 – 9:00 Zerto Presents at Storage Field Day 16
Thursday, Jun 28 10:00 – 12:00 NetApp Presents at Storage Field Day 16
Thursday, Jun 28 13:00 – 15:00 INFINIDAT Presents at Storage Field Day 16
Thursday, Jun 28 16:00 – 18:00 Nasuni Presents at Storage Field Day 16

Each session will be streamed on the #SFD16 page for the viewers at home/office. 

What am I hoping to get out of attending?
I would be lying if I said that I wasn’t nervous, for a couple of reasons. The first is that I have never travelled internationally before and have to wrangle customs at LAX (of all the airports) in a 2.5-hour stopover. The second is the unknown of what happens at a Tech Field Day event. I have watched a number of streams and recordings from previous events, but that only shows so much; it has certainly given me an idea of how the delegates contribute to the session.
I guess my nerves stem a little from seeing the list of bright minds that will be there as delegates – the list is absolutely packed, and then there is me – but that I see as a good thing. I have a completely open mind about what to expect walking in; the tips I have received from previous delegates all led to “You will walk away with a completely new outlook on everything in the vendor/technology space.” So I am excited to make the very most of this and hopefully do a good enough job to be invited back again.

Keep an eye on this blog; there will be lots of content produced over the next couple of weeks. Also check out #SFD16 on Twitter and make sure you catch the live streams and recordings.

**Disclaimer: All delegates have their airfares, accommodation and travel (and sometimes extra activities) paid for by the vendors presenting.