

SimpliVity in the Gartner MQ for Integrated Systems?!

When I first heard this was happening, I had to ask myself, “WTF is an integrated system?” Well, it turns out that there is a very broad definition of an “Integrated System” in Gartner’s eyes. The market definitions/descriptions were:

  • Integrated stack systems (ISS) – Think Oracle Exadata, IBM PureApplication, and Teradata.
  • Integrated infrastructure systems (IIS) – Think VCE Vblock, HP ConvergedSystem, and IBM PureFlex System.
  • Integrated reference architectures – Think EMC VSPEX
  • Fabric-based computing (FBC) – Think SimpliVity and HP Moonshot System.

As you can see this could include a number of manufacturers, but I have to say that I feel honored that SimpliVity was included in this first round. We are very solidly to the right in the Visionaries quadrant, and as we mature I look forward to moving further north into the Leaders quadrant and even further to the right.

Congrats to VCE for taking the top spot! VCE did a really great thing back in 2009 in starting a market, and their sales numbers reflect that their message continues to resonate with customers. NetApp/Cisco is a close second, so props to them as well.

The MQ data gathering started back in September 2013. At the time we had been officially GA for 5 months, so we were extremely young in Gartner’s eyes. We were coming off a great VMworld US where we won the Gold for Best Storage and Backup for Virtualized Environments. In late 2013 we won the coveted Modern Infrastructure award for best Converged Infrastructure Product, beating Nutanix, IBM, Dell, and HP (ironically, in that contest we beat all the other “MQ visionaries”). The paper was finalized in December, when we had just finished our first 3 quarters selling OmniCubes and were right around 100 employees. We are now in the 350-employee range (growing weekly), the OmniCube is selling like hotcakes, and customers love it! As you can see, we’ve had tremendous growth since the report was finalized, so I’m definitely looking forward to the next MQ report.

You can download the entire report from SimpliVity here: Report

Doron Kempel, SimpliVity CEO, wrote a blog post you can find here: http://bit.ly/1pHW423

Some of my friends have written blogs about the report as well.  You can find links to their posts below:

Gabriel Chapman - http://wp.me/p23pEx-n1

Matt Vogt - http://wp.me/p3ciJ3-6j


Posted in SimpliVity.



SimpliVity: A New Paradigm for VDI Implementations

I need to start this post off with a disclaimer: the OmniCube was developed to run all workloads in the data center. VDI is just another workload that will run on the OmniCube.

Brief History

In the past, VMware and Citrix have released a number of features to help improve the performance and ROI of VDI deployments. One of those features is linked clones, which were designed primarily to let you save disk capacity, one of the most expensive resources in your deployment. For VMware View we would tell customers they could save roughly 40% of their overall disk capacity requirements by implementing linked clones. The problems with linked clones were the additional design work that had to be maintained (number of clones per source) and the poor performance that resulted from all the clones reading from the same spindles. That pushed us to implement different tiers of disk, maybe some cache devices, and ultimately cost us more money.

Another issue with virtualizing desktops and keeping persistent data was the difficulty of designing a disaster recovery solution and then actually getting it to work during a recovery. I implemented several View designs using DFS as my persistent data storage and then replicating that data to the alternate data center. It worked most of the time, but if the customer didn’t maintain their AD and DNS infrastructures then calamity was bound to ensue. We have to be honest (because there has been some debate): if we could all run persistent desktops in any VDI implementation, we would choose to do so. We could use our traditional desktop management tools, making management easier, and we all like to have our own data where we know someone else can’t lose it. Naturally, running persistent desktops also uses more disk space because we have to store all of that data in multiple places.

Deduplication for Performance

Yep, that’s how we get it done at SimpliVity. We started from the beginning with data mobility and performance as the core design elements of our solution, and inline deduplication is how we solved both of those problems. From that core came the OmniCube Accelerator and the OmniStack software. We dedupe, compress, and optimize data inline (in memory) and then sequentialize the I/O before it hits any storage media. The whole purpose is to never write duplicate data to disk, optimizing all of the I/O throughout the system. We all know that while hard drives have gotten larger over the years, they haven’t gotten any faster; traditional architectures are capacity rich but I/O poor on those spinning disks. This data efficiency spans all tiers (DRAM, SSD, HDDs, and cloud storage) and is global, so once data has been deduplicated within the OmniCube Federation it is NEVER duplicated again. We are not only saving you storage capacity, but more importantly we are saving you I/O, which is a much more expensive resource.

Post-process dedupe is only going to save you capacity. It doesn’t do anything for performance and doesn’t reduce I/O; you still have to write all of the data before it is deduplicated. And I won’t even get into the fact that it doesn’t help you with your disaster recovery SLAs.
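To make that write path concrete, here is a tiny Python sketch of the general inline-dedupe idea. This is my own toy model (made-up block size and data structures), not OmniStack code: blocks are fingerprinted in memory, and only blocks that have never been seen before ever touch the disk.

    import hashlib

    BLOCK_SIZE = 8 * 1024  # toy 8 KB granularity for illustration

    def inline_dedupe_write(data, seen_hashes, disk):
        """Fingerprint each block in memory; only never-before-seen blocks hit the disk."""
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in seen_hashes:    # unique block: one real write
                seen_hashes.add(digest)
                disk.append(block)
            # duplicate block: metadata reference only, zero disk I/O

    # Two "VMs" written from the same template: the second write causes no new disk I/O.
    seen, disk = set(), []
    template = b"".join(bytes([i]) * BLOCK_SIZE for i in range(4))
    inline_dedupe_write(template, seen, disk)   # first VM: 4 unique blocks land on disk
    inline_dedupe_write(template, seen, disk)   # second VM: 0 new blocks land on disk
    print(f"blocks written to disk: {len(disk)} of 8 logical blocks")

A post-process approach would land all 8 logical blocks on disk first and reclaim the duplicates later, which is exactly the extra write I/O described above.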

We don’t need no linked clones!

We all know that VDI environments contain a LOT of duplicate data. We have copies upon copies of our OS images. In the OmniCube, since the data is deduplicated at inception, why wouldn’t you just use full clones? All of the blocks from the original desktop template have already been deduplicated, so just create a bunch of copies from it and don’t worry about using up all of your disk space. For boot, imagine that a desktop needs to read 100 MB of data to boot. If I have 100 desktops, that’s 10 GB of data I need to read to boot those 100 desktops. On traditional architectures all 10 GB of those reads come from either spinning HDDs or a cache tier. If the data is already deduplicated, you only have to read that first 100 MB, so the boot time for all 100 of those desktops is tremendously fast, and even more so since all the OmniCube reads come from SSD. It’s simple math: reading 100 MB of data is always faster than reading 10 GB of data.
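That boot math is easy to sanity-check with a couple of lines of Python; these are just the paragraph’s own round numbers, not a benchmark:

    desktops = 100
    boot_set_mb = 100                          # unique OS data each desktop reads at boot

    traditional_mb = desktops * boot_set_mb    # every full clone reads its own copy
    deduped_mb = boot_set_mb                   # shared blocks are read once, served from SSD

    print(f"traditional reads:  ~{traditional_mb} MB (roughly 10 GB)")
    print(f"deduplicated reads:  {deduped_mb} MB")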

So for persistent desktops this inline deduplication becomes a dream scenario.  All of the data is deduplicated inline so I will have plenty of both performance and capacity. Why wouldn’t you just give everyone their own desktop and let them keep it?

What about Disaster Recovery?

As I said earlier, DR can be difficult, to put it mildly, in traditional VDI environments. With the OmniCube we do full backups at the VM level without using any more physical storage. Yep, read that again. We aren’t doing the traditional array-based snapshots that affect performance and go away if I roll back to an earlier snap. We are doing real full backups without producing more I/O for those backups, and we guarantee the backup will be there when you need to do a restore, even if you delete the VM. Since we are doing full backups without producing more I/O or using more capacity, why wouldn’t you back up a desktop VM just like you do your servers? The next time a user calls and says their desktop has blown up, just restore it from an earlier point in time.

The other piece of DR is offsite replication. Remember I said that once data has been deduplicated and resides in the OmniCube Federation it is never duplicated again? That goes for replication across the WAN as well. We only send the blocks (4 KB – 8 KB each) that don’t already exist at the alternate site. If your remote office or corporate office becomes a smoking hole, just restore the desktops on OmniCubes in a surviving location and get your employees (IT customers) back up and running.
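Conceptually, that WAN step looks something like the Python sketch below. This is my own illustration of hash-based delta replication in general, not SimpliVity’s actual wire protocol: the source only ships blocks whose fingerprints the remote site doesn’t already hold.

    import hashlib

    def replicate_over_wan(local_blocks, remote_hashes):
        """Ship only the blocks whose fingerprints the remote site doesn't already have."""
        shipped = []
        for block in local_blocks:
            digest = hashlib.sha256(block).hexdigest()
            if digest not in remote_hashes:
                shipped.append(block)      # this block actually crosses the WAN
                remote_hashes.add(digest)  # the remote site now owns it
        return shipped

    # The remote site already holds the template blocks, so only the user delta is sent.
    template = [bytes([i]) * 4096 for i in range(200)]                      # shared OS data
    user_delta = [b"user-profile-change-" + bytes([i]) for i in range(10)]  # unique user data
    remote = {hashlib.sha256(b).hexdigest() for b in template}

    sent = replicate_over_wan(template + user_delta, remote)
    print(f"blocks shipped across the WAN: {len(sent)} of {len(template) + len(user_delta)}")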

Summary

So as you can see, SimpliVity really changes the way you’ve had to think about and design VDI over the past few years. We’ve removed (or fixed) a lot of the issues that have plagued IT departments designing and running VDI environments. We make it simpler, we let you manage your desktops the same way you always have, and we’ve improved the survivability of your VDI infrastructure, enabling your IT customers to continue to make your company money. And that’s the reason companies have IT departments in the first place, isn’t it?

You can check out a great article and video here: Link


Posted in SimpliVity, Snig.



SimpliVity: Do I Have To Choose To Scale My Infrastructure or Manage IT Infrastructure Lifecycle? No!

I originally wrote this post for SimpliVity.com but I also wanted to post it here for my normal readers.


One of the problems we have always had when designing, implementing, and managing our IT infrastructure is how to design for scale while also designing for ease of migration and infrastructure lifecycle. In the past we have had to design for one or the other, because the infrastructure didn’t provide the means to do both.

With traditional converged infrastructure you have to scale multiple systems at a time. For example, if you work with current reference-architecture-based building blocks (pods that flex and blocks that are virtual) and you run out of blade slots in a chassis, you have to add more chassis and blades. Depending on the business need you might even have to add storage shelves and drives, and possibly some network equipment. Because each of the systems within the so-called “converged” architecture is still logically and physically separate from the others, you have to add them a piece at a time and configure them separately as well. As you can imagine, this can take a lot of time and resources to implement and manage, not to mention consume a lot of your annual IT budget.

Drilling a little deeper, let’s look at one of the most painful pieces of our infrastructure to add or remove: the storage. While VMware has made this process much simpler than in previous years, moving tens or hundreds of terabytes of data is time consuming and must be planned extremely carefully if you don’t want to miss any data during the migration. We have to carve out the storage we want to use for our existing virtual infrastructure. We also need to ensure that backup jobs are now backing up the new datastores on the new array. Between scheduling data migrations during periods of low I/O, pausing the nightly backup schedule, and trying very hard not to impact the business, it can take weeks or months to complete a data migration project.

With the SimpliVity OmniCube and its OmniStack technology we designed the solution, from the beginning, to solve these major IT problems. First, we let you grow as your IT infrastructure grows. If you need more compute or storage, simply add another OmniCube. The process is almost like adding another host to vCenter. First we add the OmniCube to vCenter the same way you would add any other host. Next you click on the SimpliVity plugin tab and deploy the OmniCube. The OmniCube joins the federation automatically and is assimilated into all other aspects of the federation. The new physical storage of the OmniCube becomes part of the physical space backing the existing datastores, and you have more CPU, memory, and storage for your virtual infrastructure. It’s as simple as that. We never left vCenter to deploy the system or add it to the existing virtual infrastructure, and this can be done globally, from anywhere in the world, without constantly logging into different systems. If you need to grow just compute, or you would like to use a specific add-on card (say for VDI), you can add those servers into the virtual infrastructure and we can present the OmniCube datastores to those new hosts. All the great OmniCube features are available to every VM hosted on an OmniCube datastore.
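The OmniCube-specific deployment happens through the SimpliVity plugin in the vSphere Client, but since the first step really is just “add the host to vCenter,” here is a rough pyVmomi sketch of that generic step. The vCenter, cluster, host names, and credentials are made-up lab values, and the federation join itself is a plugin/GUI step that this code does not perform:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Hypothetical lab values; substitute your own vCenter, cluster, and node details.
    VCENTER, CLUSTER, NEW_NODE = "vcenter.lab.local", "Prod-Cluster", "omnicube-04.lab.local"

    ctx = ssl._create_unverified_context()      # lab shortcut; validate certificates in production
    si = SmartConnect(host=VCENTER, user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the target cluster by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == CLUSTER)
    view.DestroyView()

    # Connect the new node like any other ESXi host; the SimpliVity plugin takes it from there.
    # (Depending on your SSL settings you may also need to supply spec.sslThumbprint.)
    spec = vim.host.ConnectSpec(hostName=NEW_NODE, userName="root",
                                password="host-root-password", force=True)
    task = cluster.AddHost_Task(spec=spec, asConnected=True)
    print("AddHost task submitted:", task.info.key)
    Disconnect(si)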

Next, let’s look at the example of removing some of those compute and/or storage resources from your infrastructure, specifically storage. As mentioned earlier, these projects are very time consuming and costly. With the OmniCube we’ve made it simpler by moving your data with a single click. Once you’ve migrated your virtual machines to the surviving OmniCubes using a standard VMware vMotion, you right-click on the OmniCube Virtual Controller you would like to remove and select “SimpliVity – Remove from Federation”. After that, you go home and get some sleep. The OmniCube will migrate all of your data to the surviving OmniCubes within the federation automatically, without any impact to your production workloads. All the backups for those virtual machines move as well, and since the backup policies are global we don’t have to worry about our virtual machines being unprotected after they have moved.

As you can see, we’ve put the simple back into scaling your IT infrastructure as well as managing your infrastructure lifecycle. We let you design for scale while also planning your infrastructure lifecycle without impacting your business operations. This will save you money in both your capex and opex budgets and allow you to use those assets where you choose.


Posted in SimpliVity.



SimpliVity Announcement Roundup

I’ve been working at SimpliVity for 4 months now, and the technology and the engineers behind it impress me on a daily basis. Today we’re making a LOT of announcements around the OmniCube (new versions) and the OmniStack (the tech behind the OmniCube), and we’ve released some really great analyst reports surrounding all the announcements. I’ve tried to capture everything below, but if I’ve missed anything let me know.

The official press release can be found here: Press Release

Wikibon Video Announcement by Doron Kempel:

Check out a short (4 minute) video of Doron announcing our releases with the Wikibon folks. https://vimeo.com/72450443

OmniCube Form-Factor Updates:

We are releasing the OmniCube in both a smaller (CN-2000) and a larger (CN-5000) form factor and updating the current-generation CN-3000.

Data sheets can be found here:

Analyst Reports:

ESG Labs created a report that has some really impressive performance numbers and discusses the overall features of the OmniCube platform. Can you say “1,000,000 IOPS” in Dr. Evil’s voice with your pinky finger at the corner of your mouth? ;)  Also pay attention to the single-OmniCube load test and the linear scalability when they add in 4 OmniCubes. You can find the report here: Report (Registration required)

Howard Marks over at Deep Storage discusses our VM-centric backup and replication technology. Because the OmniStack manages each VM’s data individually, it can manage all of the copies of each VM’s data individually as well. This was the plan from the start. The report can be found here:  Report

Stephen Foskett and Hans De Leenheer created an awesome deep technical dive into the OmniStack technology. I know a lot of folks on the interwebs have been waiting for a document like this, so here it is. You can find the report here:  Report

As you can see, there is a ton of great stuff happening at SimpliVity right now. It’s extremely exciting to work for such a forward-looking company. Now that a lot of this information is out, I plan on doing some how-to posts on the OmniCube and the technology that surrounds it. Stay tuned.


Posted in SimpliVity.



Moving on. Starting a new chapter in my career.

One week down at SimpliVity so far and it’s been great! As some of you know, I recently left Datalink, where I was a Solutions Architect working with customers to design solutions for their data centers. My leaving is bittersweet in that it was a great company to work for, and the people there are folks I can truly call my friends. Datalink has a truly wonderful group of engineers that really put the Value in the term VAR.

After working on both the customer side and the VAR side of the IT industry, the opportunity came to work for a startup on the manufacturing side of the industry, and it was one of those things that was honestly too good to pass up. I look at this transition as the next logical phase in my IT career; it will give me the opportunity to learn both the manufacturing/vendor side as well as the startup side.


SimpliVity has released what I think is going to be the next big thing in infrastructure. They have combined both compute and storage in a single box they call the OmniCube. While that is cool on its own, as you dig deeper into the technology it only gets cooler. They create a global federation of systems that dedupes a VM’s data, inline, as the data is being written to disk. Every OmniCube within a federation knows where the data is stored, no matter where the OmniCube resides around the world. This allows for a ton of other really cool features: extremely efficient backups and restores, super fast clones of VMs, and replication to geographically dispersed data centers. And on top of all that, everything is managed on a per-VM basis, so you can be as granular as you’d like when creating and selecting a protection policy.


I really can’t say enough about how cool the technology is right now, because I’m getting way off topic, so I’ll leave that for future blog posts.  For now you can check out some videos on the SimpliVity website <here> and <here>.


I look forward to working with all of the future SimpliVity partners and customers as we roll out this game changing tech.  I’d like to thank the folks at Datalink for all the opportunities they afforded me during my six years there and wish them tons of luck as they continue to grow.

Posted in Snig.



WTF!? – Powering on a virtual machine fails with the error: A general system error occurred

This is by far one of the most frustrating error messages you will run across in a vSphere environment. I was at a customer site yesterday and the customer ran across it while I was working through some other things. He asked if I could fix it for him. :)

I immediately dove into the issue and started looking at the problem he was having, and the error message was very generic in nature: “Powering on a virtual machine fails with the error: A general system error occurred.” I wasn’t able to change any settings on the VM or power it on, but I was able to remove it from inventory and re-add it without a problem. So I Googled the error message and ran across this KB article: Link.

The issue sort of pointed to a corrupt VMX file, so I attempted to create a new VM and point it at the disks in the old VM’s folder. The problem was that while I could browse the datastore and see the VMDK files, I wasn’t able to “see” them when creating a VM. I was able to download files from the datastore but, wait for it, I was not able to upload anything. I know, it sounds like a read-only datastore, right? Well, it was, but vSphere didn’t think it was. I wasn’t able to unmount it from the GUI, but I was able to via the command line. I could even remount it after I had removed it.

It turns out the customer had changed the VMkernel IP address he was using to access the datastores on his VNX 5300. If you’ve never used a VNX before: if a datastore has been mapped previously, vSphere will still be able to see it even though it no longer has explicit permissions to it. It is essentially a read-only file system even though vSphere thinks it’s not. As soon as you try to power on your VMs (which writes files to the datastore), you see the general error message from the KB article. Fun, huh?

Hopefully this helps some of you get to a quicker resolution should you run into this.


Posted in Snig.



The Cloud Orchestration Layer is Changing

With the recent purchases of a couple of key companies within the cloud orchestration space, I felt it was time to write about it. The companies I am referring to are Cloupia, purchased by Cisco in December, and DynamicOps, purchased by VMware in July. In my opinion these companies had such a huge head start on where both of their parents were at the orchestration/cloud layer that there really wasn’t a way for the parents to catch up without the purchases. Thankfully they picked some bright stars to buy.

We know that both Cloupia and DynamicOps had great products, but the question now is: what are Cisco and VMware going to do with them? The genius of both products is that they started out as workflow engines, and then the companies built the workflows we see today on top of those engines. They made it extremely easy for the rest of us to install their products into our environments and begin provisioning VMs within hours, not days or weeks. Translating business workflows and processes into these products is a no-brainer.

Cloupia

Cloupia’s primary product was the Cloupia Unified Infrastructure Controller, or CUIC. CUIC can orchestrate not only at the virtualization layer across multiple hypervisors, but also across the physical layer. This includes servers from HP, Cisco, and Dell; storage from NetApp, HP, EMC, and Hitachi; and network gear from Cisco, Dell, and HP. They have chargeback, self-service, analytics, and both northbound and southbound APIs all built into one simple appliance. For Cisco the primary challenge will be how they integrate with CIAC. While CIAC has been a great product for Cisco, it is extremely difficult to get set up and doing what you need it to do. The secondary challenge will be maintaining the relationships with the 3rd-party vendors to keep the hardware orchestration rolling. It is, in the famous words of the “big” IT tool vendors, extensible. My hope is that the Cloupia team is inserted at a high enough level within the organization that the product doesn’t get killed or maimed to the point of uselessness.

DynamicOps

VMware released the DynamicOps product as vCloud Automation Center (vCAC) on December 13th. vCAC is another great workflow engine that can orchestrate at the virtualization layer across multiple hypervisors as well as at the physical server layer. Compared to CUIC, vCAC is missing the physical storage and network pieces today. It has a basic chargeback mechanism, awesome self-service, some performance monitoring, and uses southbound APIs. While at GA it lacks some of the features others have, I think that in the long term VMware has a great chance to integrate all of their current management products into vCAC for seamless management across the full IT spectrum. For VMware the challenge will be to let vCloud Director get back to what it is really good at, development workflows, and not have vCD be the end-all-be-all cloud management platform. VMware needs to give vCAC the ability to plug into and use vShield, et al., via APIs and not hold vCAC back from what it could be. Just think about it: most of the management products VMware has been most successful with have been parts of the IT workflow, e.g. SRM, vCOps, etc. vCAC is simply another workflow engine waiting for these things to be plugged into it. Rename vCD back to Lab Manager and just turn everything over to vCAC and call it a day. Simple, right?

Summary

Now I’m sure that somebody’s feelings are going to get hurt by this post. That was not my intention; I simply want to see some great products/solutions continue and thrive. Sometimes it makes the most sense for vendors to fall on their swords and self-cannibalize the things that haven’t worked out and go with what does. If these vendors roll these products out the right way, they are both going to be successful. Now we just have to wait and see what happens.


P.S.  Have a happy new year and a great 2013!


Posted in Snig.



vSphere Replication and SRM Implementation Notes

I went through a vSphere Replication and Site Recovery Manager implementation this week and overall it went very well.  There are a few things that aren’t really noted in the documentation that I wanted to get out there just in case anyone else runs into the speed bumps I did this week.  While I’ve implemented SRM with array based replication many times, this was my first go with vSphere Replication.


First, and most important, use the FQDN throughout the entire implementation of SRM, VRMS, and VRS. While we used the FQDN for the vCenter server when we installed SRM, when you deploy the vSphere Replication Management Server (VRMS) it defaults to the IP address of the vCenter server on the configuration page. If you don’t change this to the FQDN of the vCenter server, SRM won’t be able to “find” the VRMS servers when you attempt to create the connection between the two.


SQL Express port issue. This customer decided they wanted to use the vCenter installation of SQL Express for the VRMS database. While this works fine, you need to understand that when vCenter installs SQL Express it chooses a dynamic port by default. Because of that, you’ll need to manually change the port that DB connections are received on. I recommend port 1433, which is the default SQL Server port. You can follow a good blog post on MSDN here:  Link
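Once you’ve made the change, a quick way to confirm SQL Express is actually listening on 1433 is a simple socket test from the SRM/VRMS side; the hostname below is a placeholder for your vCenter/SQL Express server:

    import socket

    SQL_HOST, SQL_PORT = "vcenter.lab.local", 1433   # placeholder host; default SQL Server port

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(5)
        result = sock.connect_ex((SQL_HOST, SQL_PORT))

    print("port 1433 is reachable" if result == 0 else f"connection failed (errno {result})")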


VRMS and VRS SSH root access. By default, root access is not enabled for remote SSH management. While that is a good security practice, it makes troubleshooting a PITA. There is a great KB article you can read here, but I’ve listed the steps below if you would rather stay here.

  1. Open the console for the appliance in vCenter.
  2. Log in as root (you set this password during deployment of the appliance).
  3. cd to /etc/ssh/.
  4. Run vi sshd_config.
  5. Find the line with “PermitRootLogin” and set it to yes (type i to insert, press ESC when done editing, and :wq will save the changes and quit).
  6. Restart the SSH service with service sshd restart.
Logs for VRMS and VRS. Locations for the logs for each appliance are listed below. Once SSH is configured you can use an SFTP client to connect and download the logs (a quick scripted alternative is sketched below). If you need a support bundle, refer to the KB article listed above.

  • VRMS logs: /opt/vmware/hms/logs (you’re looking for the latest hms*.log)
  • VRS logs: /var/log/vmware (you’re looking for the latest hbrsrv*.log)
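If you would rather script the log collection than click around in an SFTP client, a minimal paramiko sketch along these lines works once root SSH is enabled. The appliance address and password are placeholders, and paramiko needs to be installed separately (pip install paramiko):

    import paramiko

    APPLIANCE = "vrms01.lab.local"    # placeholder; your VRMS or VRS appliance FQDN/IP

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab convenience only
    client.connect(APPLIANCE, username="root", password="appliance-root-password")

    sftp = client.open_sftp()
    # VRMS log directory from above; for a VRS appliance use /var/log/vmware and hbrsrv*.log instead.
    for name in sftp.listdir("/opt/vmware/hms/logs"):
        if name.startswith("hms") and name.endswith(".log"):
            sftp.get(f"/opt/vmware/hms/logs/{name}", name)   # download next to the script
    sftp.close()
    client.close()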
Multiple datastores per source VM. Sometimes you might have multiple datastores presented to a VM for any number of reasons. If you’re in this boat, here is how you need to set up the target datastores you’re replicating to. You have two options:
  1. Create a one-to-one datastore “mirror” so that you can automatically map the datastores to each other in the “Datastore Mappings” tab within vSphere Replication.
  2. Create a single datastore and then create folders on that datastore for the individual hard disks the VM is using. For example, for server FS1, which has 3 VMDKs, each on a different datastore, you would create folders fs1_hd1, fs1_hd2, and fs1_hd3 on a single datastore. Then you simply select those folders when setting up vSphere Replication for that VM.


Well, I hope this helps someone as they’re going through this implementation. It definitely gives me a quick go-to if I run into any of these again. ;)

Posted in Snig, vmware.



Kemp Loadmaster: The Best Load Balancer You’ve Never Heard Of


I was contacted recently and asked to check out the latest virtual load balancer from Kemp Technologies, and I have to say, for the size of the package it sure packs some punch!

The load balancer has all the features and functionality of the large-scale “enterprise class” load balancers you’ve heard of, but comes in an extremely small (30 MB) package. It only requires 1 GB of RAM, so it is extremely efficient with system resources as well. The architecture is what you would normally see in this type of solution, but for any noobs out there, here is a visual.

So let’s get to the install and setup. The licensing and initial setup were extremely easy and wizard-driven from the console. Just assign IPs to the eth0 and eth1 interfaces, a default gateway, and DNS as you normally would on any appliance.

Once it was on the network I was able to log in via web browser (Chrome in my case) and start setting up virtual services and load-balancing rules. All the standard rule types you would expect from the big boys are there.

From there I was balancing traffic, albeit in my little lab. =) Based on some of the testing that Kemp touts on their website, these virtual appliances can handle quite a bit of workload with only 2 GB of RAM allocated.
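For anyone brand new to load balancing, the heart of a basic round-robin rule is tiny. The sketch below is purely conceptual (my own illustration, not Kemp’s scheduler); it just shows what “hand each new connection to the next real server in the pool” means:

    from itertools import cycle

    class RoundRobinService:
        """Toy virtual service: hand each new connection to the next real server in the pool."""

        def __init__(self, virtual_ip, real_servers):
            self.virtual_ip = virtual_ip
            self._pool = cycle(real_servers)   # endless round-robin iterator

        def pick_server(self):
            return next(self._pool)

    web = RoundRobinService("10.0.0.100:80",
                            ["10.0.1.11:80", "10.0.1.12:80", "10.0.1.13:80"])
    for i in range(6):
        print(f"connection {i} -> {web.pick_server()}")

Real appliances layer health checks, session persistence, and Layer 7 content rules on top of that basic scheduling, which is where products like the LoadMaster earn their keep.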

The appliance has built-in stats monitoring as well as a pretty simple but informative performance monitoring tool. Several logging features as well as SMTP and SNMP notifications are there. Advanced Layer 7 configuration and a bunch of other features come with the appliance too.

The load balancers come as either physical or virtual appliances and can run redundantly on both platforms. The entire list of features can be found on their website here.

All in all, for the price/performance of these boxes/appliances I think it’s a no-brainer to give them a look. Kemp has a free trial you can run with the virtual appliances, so again, it really is a no-brainer to give them a try.

Hopefully this post helps some of you out there save a little money on your overall solutions without giving up any of the features or functionality you need.  Post in the comments below and let us know what you think about the load balancer if you give it a try.


Posted in Snig.



VCP 5 Study Guide Review

Fellow vExpert Brian Atkinson asked me to review his latest book, VCP5: VMware Certified Professional on vSphere 5 Study Guide, and I’m glad I agreed to do so. What a great resource for future VCP 5 test takers to have available to them! I wish I had had it when I was studying for my test a few months ago. I highly recommend that anyone planning to take exam VCP-510 buy this book.

Link to the book on Amazon is here: link

It already has a couple of 5-star reviews, so I am not the only person impressed. And don’t forget that the book comes with a custom online practice test engine with over 300 sample questions and flashcards.

Since I’ve already passed the test, I’m going to be giving my copy away at the Datalink booth at EMC World tomorrow (5/21). Just keep an eye on Twitter and I’ll announce what you need to do to win.

Here are some of the features of the book:

  • Full coverage of all exam objectives in a systematic approach, so you can be confident you’re getting the instruction you need for the exam
  • Real-world scenarios that put what you’ve learned in the context of actual job roles
  • Challenging review questions in each chapter to prepare you for exam day
  • Exam Essentials, a key feature in each chapter that identifies critical areas you must become proficient in before taking the exam
  • A handy tear card that maps every official exam objective to the corresponding chapter in the book, so you can track your exam prep objective by objective
  • Sybex Exam Prep Tools

Posted in vmware.
