Think IPM

Thursday, January 29, 2009

Do you need a D drive in a virtualized environment?

I think it is an interesting question…  If I were to build a generic Windows 2003 or Windows 2008 server today (forget SQL or Exchange – think infrastructure [DCs, DHCP, DNS]), would I need a D drive? Let’s assume for the sake of this post that the environment is a pretty standard virtualization environment.  The virtual machines reside on some sort of shared storage that is configured in a uniform RAID configuration and formatted as a VMFS Datastore. If we were to create separate VMDKs for the C drive and D drive and place them on the same VMFS, what benefits would we get from the additional D drive?  In these types of situations, I have been leaning towards recommending a single logical drive implementation, but here is a list of the PROs and CONs as I see them.

PROs to a single C drive server:

  1. Easier for the Administrators to manage.  When installing applications and services, everything is installed on the C drive.  All components and data reside on the single drive.  We are no longer annoyed by poorly written application installers that insist on writing parts of their program to the system drive.
  2. More efficient space utilization.  With a single drive implementation, you only need to plan for growth on one drive.  You do not need to allocate free space for both the boot drive and the data drive.

PROs to having a D drive:

  1. VMDK Backups.  You can easily backup or replicate just the data VMDK in this scenario.
  2. Performance scalability.  In the event of IO bottlenecks, you still have the option of moving data VMDKs to faster LUNs or changing spindle configurations.
  3. Legacy management.  If a large portion of your existing servers already have C & D drives (coming from a physical world), you might have macros, scripts and other automation pieces that you don’t want to rework.
  4. UPDATE: For a vSphere environment, check out the PVSCSI Driver!

Non Factors:

  1. Performance.  In this simplified environment, there is no noticeable difference in spindle counts or I/O since all VMDKs would reside in the same VMFS on the same LUN.
  2. Recoverability.  In the event that the system becomes ‘unbootable’, it is a trivial task to attach the drive to a healthy VM and repair or access the needed data.
  3. Host based Agent Backups.  You can easily exclude Windows or System files from the backup routine and just backup the data.
  4. Growth.  A system drive is just as easily expanded as a data drive in a virtual environment.
  5. Security.  With NTFS on both system drives and data drives, security can be configured appropriately for either environment scenario.
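The space-utilization point above (PRO #2 for a single drive) can be shown with a toy calculation – the numbers here are hypothetical, just to illustrate how per-drive free space fragments on a split layout:

```python
# Hypothetical numbers: the same total free space, carved up two ways.
c_free, d_free = 10, 2            # GB free on a split C/D layout
single_free = c_free + d_free     # GB free on a single combined C drive

needed = 5  # GB a new application's data requires

# Split layout: the data has to land on D, which only has 2 GB free.
fits_split = d_free >= needed
# Single-drive layout: all 12 GB of free space is one pool.
fits_single = single_free >= needed

print(fits_split, fits_single)  # False True
```

Same capacity, same total free space – but on the split layout the install fails unless you expand D, while on the single drive it just works.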

Tuesday, January 20, 2009

OT : Pimp your Search!

It’s almost midnight and I’m feeling bored, so here’s a little off-topic fun: don’t like the standard Google webpage?  Try on a skin that is more to your liking!


Check out all the skins at ShinySearch.com.


Friday, January 16, 2009

Lost Creations relaunched with new app - Virtualization Manager Mobile


Andrew Kutz has relaunched the Lost Creations website with a great new mobile application: Virtualization Manager Mobile.  Right now it does not seem to work on the BlackBerry Curve I am holding, but it IS working great on the iPhone.  The program allows you to securely connect to your Virtual Infrastructure and power on/off/pause and restart your Virtual Machines.  In addition to the controls, you also get some snazzy stats on resource usage.

You can check out the demo online at Lostcreations.com. (vmmdemo/vmmdemo)

You can check out some of Andrew’s past work here.


Thursday, January 15, 2009

Nominate your favorite VMware vExpert(s)!


VMware has announced a great new program to reward some of the people in the virtualization communities who are out there helping others.  Whether it’s on the internet via forums, blogs & Twitter, or offline in the form of VMUG organizers or general virtualization evangelists, VMware would like to say thanks by calling them vExperts!  I think it is great to see companies recognizing the extra efforts that people put forward to build a community.

You can read John Troyer’s official announcement here and you can nominate your favorites here**.
** Nominations will be accepted through midnight PST, February 6, 2009.


Wednesday, January 14, 2009

Intel + Citrix = PC Freedom

Here is an email that floated into my inbox from Aaron Silber.  It’s an announcement from CES this year.  It very much reminds me of VMware’s announcement last year at VMworld 2008 about their vClient strategy (client-based hypervisor).

Intel + Citrix = PC Freedom
During CES, Intel and Citrix announced a strategic desktop collaboration.  The announcement entailed Intel and Citrix teaming up to create a platform where you could have two images on a PC, each running in a virtual machine on top of a hypervisor, while still remaining fully compliant with corporate requirements.  This was positioned as a way corporations could provide PCs to employees with one image that was locked down for company work and another for personal use, substantially lowering management costs while still allowing employees to do what they want.

But this could also go the other way: employees could get the hardware they want while their companies still own the corporate image, so that if an employee lost their job, they didn’t have to turn in their PC and lose all of their personal stuff as well.  Plus, they could get the PC they wanted and not the generic piece of, well, you know, that their company wanted to give them.

*Source article


Tuesday, January 13, 2009

Replicating direct attached storage Virtual Machines

So I have seen estimates here and there stating that upwards of 80% of VI installations are SAN attached. For me personally, the number is closer to 95%, but I still have some clients that choose, due to fiscal constraints, to forgo the SAN. From my perspective, ESX without shared storage is like TiVo without the remote. It’s just not quite the same experience.

But for those 5% – 20% storage-challenged customers out there, operations must continue forward while they save for a SAN. One aspect that has come up recently with my clients is replication of Virtual Machines. With a traditional SAN there are plenty of options; with DAS, the options are limited. Vizioncore and Veeam both make the job of replication a little easier to tackle.

Vizioncore distributes vReplicator and Veeam distributes Veeam Backup.

While vReplicator's sole purpose is replication, Veeam's offering is primarily backup and restoration.  Veeam adds in replication to Backup 2.0 as a feature.  In my opinion, both products do a great job replicating VMs from one site to the next.  They both will replicate from DAS to SAN, SAN to SAN, SAN to DAS and DAS to DAS.  They are both VirtualCenter aware which means once you set the jobs up, they will track all migrations of the VMs through VirtualCenter and continue to replicate from the new ESX host.

Both have differential engines to make sure you are not sending gobs of traffic across the wire during replication events.  Just the deltas.
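Neither vendor documents its engine publicly, but the basic idea behind a differential ("just the deltas") engine can be sketched: hash the last replicated copy block by block, compare the source against those hashes, and ship only the blocks that changed.  A rough sketch below – the block size and hashing scheme are my own assumptions, not either product's actual implementation:

```python
import hashlib

BLOCK_SIZE = 1 << 20  # 1 MiB blocks; an arbitrary choice for this sketch

def block_hashes(image: bytes):
    """Hash every fixed-size block of a disk image."""
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(image), BLOCK_SIZE)]

def changed_blocks(source: bytes, replica_hashes):
    """Yield (block_index, block_bytes) for blocks that differ from the replica."""
    for idx, digest in enumerate(block_hashes(source)):
        if idx >= len(replica_hashes) or replica_hashes[idx] != digest:
            yield idx, source[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]
```

Only the blocks yielded here would cross the wire; the target side writes them into its copy at the same offsets.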

Vizioncore licenses vReplicator based on protected VM while Veeam licenses Backup 2.0 based on ESX host CPUs.


Friday, January 9, 2009

Vizioncore vRangerPro – release

For those of you running vRangerPro [an excellent VMware backup utility by Vizioncore], they just released a new version, along with an update to the VCB plug-in.  Be sure to go to their website and download the new version.  According to the release notes, there are a lot of fixes, an upgrade to the P2V engine, and full support for ESX 3i.

One pet peeve of mine with the software is that it requires you to uninstall everything to perform the upgrade.  It is annoying that the installer cannot upgrade older versions in place and forces us to uninstall everything and then re-install.  Hopefully this will be corrected in future releases.

But till then, be sure to do everything in the correct order:

  1. Uninstall the prior version of vRangerPro
  2. Uninstall any vRangerPro plug-ins (VCB, file level, etc.)
  3. Uninstall the VMware VCB framework
  4. **Backup the vRangerPro directory.
  5. [Reboot]
  6. Install vRangerPro
  7. Install any vRangerPro plug-ins (VCB, file level, etc.)
  8. Install the VMware VCB framework
  9. [Reboot]

vRangerPro is pretty finicky about its installation order, so I always stick to this upgrade path.

**After uninstalling the prior version of vRangerPro, most of what is left in the Vizioncore directory is your licenses, configuration, and backup history information.  If you back this directory up, you can always install a fresh copy of vRangerPro and drop these files in for a quick restoration.


iSCSI – Hardware or Software – How many TOEs do you have?

More and more of my new implementations of VMware Infrastructure are being connected to iSCSI SANs (EMC, LeftHand, and Equallogic), and the question has come up about whether or not to spend extra dollars on TOE (TCP/IP Offload Engine) network cards.  The TOE cards take the burden of processing iSCSI packets off the host’s CPU and place it on the network interface card itself.  In theory this should improve the performance of the host machine.  For ESX implementations, VMware has provided a very solid software iSCSI initiator that I have been using with great regularity.  I’ve become curious whether others are using TOE cards in their environments or just the straight software solution.

In the past (ESX 3.0.2), VMware only officially supported 2 TOE cards in their hardware I/O compatibility guide, which led me to believe that TOE cards were not widely adopted yet.  A quick scan of VMware’s new searchable Hardware Compatibility Guide reveals that about 16 TOE cards are now supported, so adoption rates might have changed.


Most of the compatible cards seem to be in the $1,000 range while most 1Gb network cards seem to be in the $200 range.  Given the high cost of these TOE cards, I am curious to know whether anyone is seeing performance increases large enough to justify the price and the effort of putting the TOE cards in.

If you have any experiences with TOE cards in ESX environments, please let me know your thoughts in the comments.  Thanks!


Tweets, Twits, Twitter and Twittering!

If you are looking for more ways to fill your days with endless chatter and virtualization speak – Twitter, as mentioned before, is creating a lot of buzz on the internet.  Since one of the hardest things about starting in a new social medium is connecting with others, @Alanrenouf (an avid tweeter and PowerShell guru) has put together a great list of virtualization people to connect with (or follow, in tweet speak).  To make things SUPER easy, he has also included a great little PowerShell script that will automatically connect you with everyone on the list!

Your TODO list:

  1. Create a Twitter account
  2. Jump to his page here
  3. Run Alan’s PowerShell script
  4. Dive right into the conversations!*


*Please do not hold this blog or the others linked here responsible for the ensuing loss of productivity! :)
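Alan’s script is PowerShell, but a rough Python equivalent of what it does would look like the sketch below: loop over the usernames and call Twitter’s friendships/create method for each.  The endpoint format and basic-auth details reflect the 2009-era REST API as I understand it – treat them as assumptions and use Alan’s actual script for the real thing.

```python
import base64
import urllib.request

API_ROOT = "http://twitter.com/friendships/create"

def follow_url(username: str) -> str:
    """Build the friendships/create URL for a single username (assumed format)."""
    return f"{API_ROOT}/{username}.json"

def follow_all(usernames, user, password):
    """POST a follow request for every name on the list (2009-era basic auth)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    for name in usernames:
        req = urllib.request.Request(follow_url(name), data=b"",
                                     headers={"Authorization": f"Basic {token}"})
        urllib.request.urlopen(req)  # supplying data makes this a POST
```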


Tuesday, January 6, 2009

VMware VirtualCenter as a Virtual Machine

I seem to run into this over and over again: clients that are confused about whether to put VirtualCenter (now called vCenter) on a physical server or a virtual machine.   It has always been my practice to create vCenter as a Virtual Machine within the VI environment.   I have always felt that one of the goals of virtualization is server consolidation and management, and allocating a physical machine to manage it all seems almost hypocritical.

When speaking with customers, I often cite the advantages of virtual machines over their clunky metal counterparts with the following:

  1. High Availability – Most clients do not realize that HA will function after initial setup without VirtualCenter and, as a result, will protect VirtualCenter from hardware failure.  If the host that is servicing the virtual VirtualCenter crashes and burns, HA (running on all ESX hosts in the cluster) will direct one of its members to power the VirtualCenter VM back up.  Crisis averted.
  2. Portability – Once a VM, VirtualCenter can be replicated and become part of your backup and DR strategy.  With Storage vMotion and “vanilla” vMotion, you can now even use VirtualCenter to move itself from LUN to LUN or Host to Host without having the universe implode on itself.
  3. Manageability – Upgrading vCenter?  No problem, just take a quick snapshot and go.  Need more memory or CPU cycles?  Just move the sliders up!  Hard drive filled?  Glad it’s not a physical box. :)
  4. Policy – VirtualCenter is an IT server, and if it does go down, the rest of the environment continues to function.  Administrators can always fall back to attaching to individual hosts to perform basic management tasks until someone powers vCenter back up.
  5. Support – VMware fully supports either configuration in any environment.
  6. The VMware roadmap is to release a Linux based VirtualCenter Appliance.  Might as well stay ahead of the curve! :)

In LARGE environments with dozens of hosts and thousands of virtual machines, physical VirtualCenter servers might have their place, but for the majority of my clients, a vCenter VM makes a lot of sense.


Citrix Farm Cleaner by Gourami

Found this great little Citrix farm administrator tool on DABBC.com.  Gourami released a free tool called Citrix Farm Cleaner that, when run on a Citrix server, will connect to the farm, discover all of the Citrix servers, and then calculate all of the space being used in the Profile, Print Spooler, and Temp directories of those servers.  Once calculated, it will give the administrator the option to clean and delete these space-wasting files.  The programmers were crafty enough to exclude profiles such as SMAUser, Default Users, and other XenApp defaults, in addition to any users that are currently logged into the servers.  You can also manually specify profile folders for exclusion, as I did with my Default User.org folder in the screenshot below.

It also sports an automated command-line mode for scripting and scheduled housekeeping, and NO INSTALL! :)


Left unchecked, Profile directories can get quite large and begin to fill the system drives.
I noticed that this beta version does not calculate the spool or temp directory sizes – hopefully that will be fixed.
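The space calculation itself is simple to reproduce if you want a quick read on your own servers.  A sketch of the idea – the exclusion list here is an example mirroring the defaults described above, not the tool’s actual configuration:

```python
from pathlib import Path

# Example exclusions, mirroring the defaults described above.
EXCLUDED = {"SMAUser", "Default User", "All Users"}

def dir_size(path: Path) -> int:
    """Total bytes of all files under a directory tree."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

def profile_usage(profiles_root: Path, excluded=EXCLUDED):
    """Per-profile disk usage in bytes, skipping excluded profile folders."""
    return {p.name: dir_size(p)
            for p in sorted(profiles_root.iterdir())
            if p.is_dir() and p.name not in excluded}
```

Point profiles_root at the profiles directory on a Citrix server to see which users are eating the system drive.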

Try it free at Gourami.eu.


Monday, January 5, 2009

Citrix XenAppPrep Cloning tool

Has anyone checked out the XenAppPrep tool?

I stumbled across this post here at Shannon Ma’s Blog.
Supposedly it will handle all the Citrix-related things… so basically:


  1. Create a Citrix golden server.
  2. Install this tool – if your XenApp server is virtualized, you could take a quick snapshot before installing.
  3. Run the tool.
  4. Run NewSID and any generalization tools you might like.
  5. Image and deploy.
  6. Revert the snapshot if necessary.


Thursday, January 1, 2009

Happy 2009 - Free Copy of Parallels Workstation Virtualization software.

Happy 2009 to everyone.  Here is a quick deal I found online.  Get it while it lasts! 

Found on SlickDeals.net
Parallels is offering a free copy of Parallels Workstation Virtualization Software. The download link will be provided on the following page after you enter a valid name, email, company name, and phone number. The serial number will be sent to your email. Thanks WackyP.
Note: Compatible with 32-bit Operating Systems only.
Can’t beat free! :)
