Two weeks ago VMware released an updated blueprint (v3.1) aimed at the new VCP5-DCV exam for vSphere 5.5. In the past, VMware has released exams that just add extra questions for minor releases; however, 5.5 was closer to a major release than a minor one, with some significant differences that support the need for a new exam.
The two exams are labeled clearly to make them easy to differentiate: VCP510 = 5.x exam / VCP550 = 5.5 exam.
The 5.5 exam brings in many new topics, as well as some that were not covered as heavily in previous exams. The exam now runs to 135 questions / 120 minutes (5.x = 85 questions / 90 minutes). The additional questions and time bring it in line with the Desktop and Cloud exams.
Some of the new focus areas in the blueprint, compared with the previous 5.x (v2.9) blueprint, are as follows:
- Web Client
- Single Sign-On
- Data Protection
- vSphere Replication
Pearson VUE will give you both exam options to choose from when booking your test day on their website. If you have sat the vSphere ICM 5.x (Install, Configure and Manage) course with an accredited VMware training provider, you are still able to sit the 5.5 exam; however, you would be best to grab a book and other training material to learn the new technology delivered in vSphere 5.5.
Thank you for reading, if you have any questions, comments or suggestions, please feel free to contact me.
Welcome to the second part of the storage segment. This time I want to cover setting up a nested iSCSI target in VMware Workstation. The idea is to create a shared datastore for the training lab so that you can play around with the vSphere features that rely on shared storage. I am using OpenFiler, but you can use FreeNAS, NAS4Free, StarWind iSCSI Target for Windows, etc.
- Create a new VM with 1 CPU / 1 GB RAM / 13 GB HDD
- Add a second HDD to the VM, large enough to hold a couple of VMs from ESXi (40–80 GB recommended)
- Connect the OpenFiler ISO to the CD-ROM drive in VMware Workstation
- Start the VM and select Boot from CD
- Click Next > select your language
- Select the small 13 GB drive and make sure it is set as the boot drive > click Next and accept Erase all data
- Click Edit > enter the IP address settings for your network > click OK > type the hostname > type the gateway and DNS
- Click Next > type in a root password > click Next and wait for the install process
- Click Reboot
- Once restarted, open your web browser and go to https://&lt;your OpenFiler IP&gt;:446
- Log on with Username: openfiler / Password: password
- Click on Services and click Enable and Start for both the iSCSI Initiator and iSCSI Target
- Select Volumes > click Create New Physical Volume
- Select /dev/sdb (or whichever drive is not your OS drive)
- Change the partition size to suit and click Create
- Select Add Volume on the right > give the volume a name and select the disk
- Select iSCSI Target > click Add for the Target IQN
- Select Add Volume > type a name for the LUN > select the size for the LUN > select the block filesystem type
- Select iSCSI Targets > LUN Mapping > click Map
Once you have completed the steps to set up your LUN in OpenFiler, log on to your vCenter Server and connect to the target.
You can attach multiple vNICs to the OpenFiler VM in VMware Workstation if you want to play around with load balancing, etc.
Please see the video below, which will walk you through the setup right through to attaching the target in vCenter.
Thank you for viewing. If you have any comments, suggestions or would like me to cover a topic, please leave a comment.
To start off the New Year, we all have resolutions that we set (eat less, exercise more, work harder, etc.). My wife and I have decided that we should go walking every morning before work, and any afternoon that isn’t interrupted by finishing work late. This I believe is achievable as we are working together and motivating each other at the same time.
What does this have to do with this blog post? I've decided that I need to post more often, and about more of the different components of VMware technology. So today I'm starting with a short series on storage and SANs.
Today I want to cover RAID and recommendations for when you design your disk layout for your SAN.
RAID (Redundant Array of Inexpensive Disks) can be configured at the hardware or the software level; each has its advantages and disadvantages. RAID done at the hardware level gains the advantage of a dedicated controller to work through the data that needs to be written to the drives. RAID done at the software level relies on the host CPU to read and write data.
There are many types of RAID to choose from, each providing its own balance of redundancy and performance.
- RAID 0 “Stripe” = Two or more disks with data striped across them to increase performance and capacity. The multiple disks are presented as one large drive; performance also increases because data is split between the drives and read at the same time, cutting down read times.
RAID 0 is not designed for production use and offers no redundancy if a hard drive fails. In most cases you will not be able to retrieve the data, as part of it may be sitting on the dead drive.
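To illustrate the striping idea, here is a minimal Python sketch (not how a real controller works; the function name and 4-byte chunk size are my own choices) that splits data round-robin across a number of "disks". Losing any one disk leaves unrecoverable gaps in the data:

```python
def stripe(data: bytes, num_disks: int, chunk: int = 4) -> list[bytes]:
    """Split data into fixed-size chunks distributed round-robin
    across disks, RAID 0 style. No copy of any chunk exists anywhere
    else, so a dead disk means permanently lost chunks."""
    disks = [b""] * num_disks
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % num_disks] += data[i:i + chunk]
    return disks

disks = stripe(b"ABCDEFGHIJKLMNOP", num_disks=2)
print(disks)  # [b'ABCDIJKL', b'EFGHMNOP']
```

Reads can hit both disks in parallel, which is where the performance gain comes from; but notice that neither disk on its own holds a usable copy of the original data.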
- RAID 1 “Mirror” = The first level of redundant RAID. It requires double the number of drives you need for data: for every data drive, an identical drive is required to copy the data to. RAID 1 works by writing data to both drives at the same time, creating a failover drive. Data can be read from both drives, allowing higher read performance. RAID 1 can also be scaled out to multiple drives by nesting it with RAID 0: in RAID 0+1, two RAID 0 stripe sets are mirrored against each other; in RAID 1+0 (RAID 10), multiple mirrored pairs are placed into a RAID 0 stripe set. Both arrangements are referred to as nested RAID.
- RAID 5 & RAID 6 “Parity” = In these two RAID levels a mathematical parity calculation is used across three or more drives, providing the redundancy of RAID 1 with capacity much closer to that of RAID 0.
RAID 5 allows one disk to fail without disruption to data; however, it comes with the disadvantage of losing one drive's worth of total capacity to parity, which is what keeps the production environment available. Once a replacement drive has been sourced for the dead disk, data is rebuilt onto the new disk to restore the RAID array. Keeping a hot spare in the chassis is recommended so that the rebuild process can start immediately if a drive fails while the data center is unattended.
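The parity calculation is, in essence, a bitwise XOR across the data drives: because XOR is its own inverse, the contents of any one lost drive can be recomputed by XORing the parity block with the surviving drives. A minimal Python sketch of the principle (a real RAID 5 array also rotates which drive holds parity; that detail is left out here):

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """Bitwise XOR of equal-length blocks, the heart of RAID 5 parity."""
    return bytes(reduce(lambda a, b: a ^ b, byte_vals) for byte_vals in zip(*blocks))

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three drives
parity = xor_blocks(d0, d1, d2)          # parity block on a fourth drive

# Drive holding d1 dies: rebuild its contents from parity + survivors.
rebuilt = xor_blocks(parity, d0, d2)
assert rebuilt == d1
```

This is also why the rebuild is expensive: recovering every block on the new disk requires reading the corresponding block from every surviving disk.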
RAID 6 works on the same principle as RAID 5, but it allows for two disks to fail, at the cost of two drives' worth of capacity to parity.
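As a rough rule of thumb, the usable capacity of each level discussed above can be sketched in Python (the function name is my own, and the figures ignore filesystem overhead, controller metadata and any hot spares):

```python
def usable_capacity(level: str, disks: int, disk_gb: float) -> float:
    """Approximate usable capacity for common RAID levels."""
    if level == "0":
        return disks * disk_gb            # all raw space, no redundancy
    if level == "1":
        return disks * disk_gb / 2        # mirrored pairs: half the raw space
    if level == "5":
        return (disks - 1) * disk_gb      # one drive's worth lost to parity
    if level == "6":
        return (disks - 2) * disk_gb      # two drives' worth lost to parity
    raise ValueError(f"unsupported RAID level: {level}")

# Example: six 1000 GB drives
for lvl in ("0", "1", "5", "6"):
    print(f"RAID {lvl}: {usable_capacity(lvl, 6, 1000):.0f} GB usable")
```

Running this for six 1000 GB drives gives 6000, 3000, 5000 and 4000 GB respectively, which makes the capacity/redundancy trade-off easy to see at a glance.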
Be aware that the rebuild process is very hard on the disks, as they are still serving the production environment while also rebuilding the new disk. The best time to rebuild is after hours, when the disks are less likely to be in use.
Where the drives will be installed and what they are used for determines which RAID level you require.
For example, if you just want to install your OS on a redundant set of drives and create a separate RAID array for data, then a mirror would save you money and still tolerate a single disk failure.
Most, if not all, storage vendors have a recommendation list for what they believe to be best practice. In a SAN storage design, RAID 5 or 6 is highly recommended as it provides a high level of redundancy. Some vendors recommend a hot spare per shelf, or per 30 drives; however, this varies between vendors, so it is best to check with yours.
Choosing your RAID design is simple if you weigh capacity, performance and redundancy. If you choose the wrong RAID level, though, you may be in trouble, as you will need to go through the lengthy process of copying the data off and recreating the array.
Part 2 will be on Storage network and design. Stay Tuned!
Thank you for reading, if you have any comments, questions or recommendations for future posts, please don’t hesitate to contact me.