Think IPM

Thursday, April 24, 2014

Just because you can, doesn’t mean you should – Custom VMware Update Manager Depots

That’s the gist of the lesson I learned today.  Again. :)
A while back, I read about custom depots for VMware Update Manager.  I love VMware Update Manager.  It really does deliver when it comes to keeping the vSphere ESX hosts sparkly and new with the latest patches and fixes officially released by the mothership.  But what about the other vendors’ parts in the stack (specifically hardware)?
Sometimes when working with Dell servers, I end up going the Dell OEM route for vSphere media.  How great would it be if Dell had an online depot that delivered patches via VMware’s Update Manager?  As it turns out, you can!  Read all about adding Dell, Cisco, Brocade and others at PerfectCloud.
Fast forward to today – I am scanning my hosts for updates and am getting a strange error back in vCenter from VUM: Error 99: Check the logs.
Logs on the vCenter host are complaining about “Cannot merge VIBs”.  Ugh…  Off to the internet again.
vCrumbs to the rescue. An excellent post with my error exactly.  (and more importantly – a detailed resolution!)
Long story short – the imported patches from one of the custom depots I added brought down a patch that created the issue.  As the article points out, once a patch is imported into VUM, there is no way to remove it.  All I could do was wipe the database and redownload the patches.  Seems extreme, but way better than uninstalling and reinstalling vCenter Update Manager.
No more Custom Depots for me.

Tuesday, April 22, 2014

PSA: vSphere 5.5 Update 1 and NFS

Just a quick post for those that may not have heard about this VMware Alert.

If you are running vSphere 5.5 and have NFS datastores, it is advised NOT to upgrade or patch to Update 1.  NFS disconnections have been reported after upgrading and can lead to freezes and crashes on Virtual Machines located on the NFS datastores.

You can read a good write up on the issue by Michael Webster on LongWhiteClouds.com.

The official KB from VMware is here: 
KB 2076392 - Frequent NFS APDs after upgrading ESXi to 5.5 U1


Monday, April 21, 2014

Office and IE custom dictionaries -- New IPM utility: SyncMyDICs

Great new utility by Jacques Bensimon:

Juvenile name aside, this one is actually quite useful: (Grab it here)

As you know, as of Office 2007, your custom dictionary entries are stored as a plain Unicode text file, by default %AppData%\Microsoft\UProof\CUSTOM.DIC.  Any time you use “Add to Dictionary” on a word, the dictionary file is updated and re-sorted using the strange AaBbCcDd… collating sequence, which means all the capital “A” words come before the lower-case “a” words, then the “B” words, then “b”, etc. – but the Office apps don’t really care and are just as happy with a normally sorted file (as long as it’s a proper Unicode text file starting with the 0xFF 0xFE signature, i.e. the UTF-16 LE byte-order mark, and containing one word per line).

What you may or may not know is that, as of Windows 8 / 2012 (and in Windows 7 / 2008 R2 with IE 10 or 11 as well), a new *Windows* API for spellchecking has been introduced for use by any app that wants to take advantage of it (as IE 10 and 11 do – you didn’t think that was your WebApp correcting your spelling, did you? :)).  And of course, since there’s still no love lost between the Office and Windows teams, the two spell-check engines are completely distinct and separate (though I’m sure that, if asked, Microsoft would explain that not everybody has Office installed ;)).  One consequence of this is that your custom dictionary for Windows/IE is separate from your Microsoft Office custom dictionary, although happily its format is essentially the same (Unicode text file, one word per line, no sorting imposed at all).  Your default Windows/IE custom dictionary, since you’re all good Yankees (right?), is %AppData%\Microsoft\Spelling\en-us\default.dic.

Which of course is where the new utility comes in:  after backing up the originals, it will merge any two such custom dictionary files (sorting and removing duplicates in the process) and will replace the originals with the merged copy.  As you can read in the screenshot, it will accept full paths to the two files you want to merge & replace but, for simplicity, will assume you mean the two previously mentioned (Office and IE) custom dictionaries if you don’t specify any files.  Run it at logon, or logoff, or whenever and however you like, and you’ll only need to add words once to have them available on both platforms.  (Of course, given how it works, the utility can also be used to combine Office custom dictionaries from your profiles on two different machines, or from two different user accounts, etc.)
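For the curious, the core merge logic is simple enough to sketch in a few lines of Python.  To be clear, this is not SyncMyDICs’ actual code – just an illustration of the file format and merge behavior described above, and the file names in it are made up:

```python
from pathlib import Path

def read_words(path: Path) -> set[str]:
    """Read a custom dictionary: Unicode (UTF-16 LE, 0xFF 0xFE BOM) text, one word per line."""
    # The 'utf-16' codec consumes the BOM for us; drop any blank lines.
    return {w for w in path.read_text(encoding="utf-16").splitlines() if w.strip()}

def merge_dictionaries(a: Path, b: Path) -> list[str]:
    """Back up both files, then replace each with the sorted, de-duplicated union."""
    merged = sorted(read_words(a) | read_words(b))
    for path in (a, b):
        Path(str(path) + ".bak").write_bytes(path.read_bytes())   # back up the original
        # 'utf-16' writes the 0xFF 0xFE signature automatically on little-endian platforms.
        path.write_text("\n".join(merged) + "\n", encoding="utf-16")
    return merged
```

Point something like this at %AppData%\Microsoft\UProof\CUSTOM.DIC and %AppData%\Microsoft\Spelling\en-us\default.dic and you get the behavior described above; the result is plainly sorted, which, as noted, neither consumer minds.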


And of course you know what word just made it to both my dictionaries, right?  You guessed it, SyncMyDICs! =]



Wednesday, April 9, 2014

In NYC? Join us for a PernixData/IPM Breakfast!

Here’s a quick plug for my company, IPM, which is organizing a great breakfast event on April 17th for virtualization enthusiasts.  The event is sponsored by PernixData and will feature guest speaker Frank Denneman.  I’ll be there and am looking forward to hearing Frank’s presentation.

Frank is among the world’s foremost authorities on running optimized VMware environments.  He co-wrote, with Duncan Epping, the authoritative book on VMware HA and DRS, and he edits one of the top VMware blogs in the world at http://frankdenneman.nl/.

He will be in NYC for one day to talk about:

  • Pros and cons of various flash deployment methodologies
  • Best practices for using flash to accelerate storage performance
  • How to measure results and ROI

Frank was a principal architect at VMware and now works for PernixData.  They offer a great product that dramatically improves storage performance under the VMware hypervisor by leveraging low-cost, server-side SSDs.  Clients can use this VMware-approved reference architecture to optimize their virtualized environments and greatly increase IOPS delivery.

The event will be at the Innovation Loft  @ 151 West 30th Street.

You can register for the event here: Register Here


Citrix NetScalers and the Heartbleed Bug

If you haven’t heard about the two-year-old OpenSSL security flaw named Heartbleed, check out the official site for information: Heartbleed.com.  Sadly, it was only just ‘discovered’ by the good guys a couple of days ago.

In a nutshell, it is a vulnerability in some versions of OpenSSL that allows hackers and script kiddies to steal protected information through normal interactions, without detection.  It has to do with the TLS heartbeat (keepalive) exchange that happens between the server and the client.  The easiest high-level explanation I have read is that during a heartbeat, the client sends the server a payload along with a declared payload length (up to 64KB), and the server echoes the payload back.  To exploit the vulnerability, a malicious client sends a small payload – say 1KB – but declares it as 64KB; the vulnerable server echoes the 1KB back and fills out the rest of the reply with 63KB of whatever happens to be adjacent in server memory.  That memory can contain other users’ session data, including usernames, passwords, encryption keys and other privileged information.  Fortunately, it is a simple coding mistake that can easily be rectified through a patch.  Unfortunately, it has been out there for around two years and is/was affecting a large part of the internet.
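To make the over-read concrete, here is a toy simulation of the flawed echo logic – illustrative Python only, not real TLS, with scaled-down byte counts and an obviously made-up “secret”:

```python
# Toy model of the Heartbleed over-read.  In the real bug, the server's reply
# was built from a buffer where the request payload sat next to other process
# memory; here a fake secret plays the part of that adjacent memory.

SERVER_MEMORY_AFTER_PAYLOAD = b"user=admin;password=hunter2;sessionkey=deadbeef"

def buggy_heartbeat_reply(payload: bytes, claimed_len: int) -> bytes:
    # Vulnerable behavior: echo back 'claimed_len' bytes, trusting the client's
    # declared length instead of the actual payload size.
    buffer = payload + SERVER_MEMORY_AFTER_PAYLOAD
    return buffer[:claimed_len]          # over-reads into adjacent memory

def fixed_heartbeat_reply(payload: bytes, claimed_len: int) -> bytes:
    # Patched behavior: drop any request whose claimed length doesn't match.
    if claimed_len != len(payload):
        return b""                       # silently discard the bogus request
    return payload
```

A malicious client sends a 4-byte payload but claims 40 bytes; the buggy server leaks 36 bytes of adjacent memory, while the patched one drops the request entirely.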

Sam Jacobs opened a case with Citrix to find out whether the Citrix NetScalers that handle SSL VPNs are affected by this bug, and was pleased to find out that they are not.  The NetScalers use an older version of OpenSSL (0.9.7) that is not vulnerable to this type of attack; the affected versions are in the 1.0.1 branch (up through 1.0.1f), plus the 1.0.2 betas.

You can check the OpenSSL version on a NetScaler by following these steps:

Log in to the NetScaler using PuTTY (or another SSH client).
Drop to the shell prompt.
Type the command: openssl, then press Enter.
Type the command: version -a, then press Enter.

This will give detailed info about the OpenSSL version on the NetScaler.

The NetScalers’ SSL engine does not support the ‘TLS heartbeat’ extension that the Heartbleed bug exploits.

You can also use the following site to check other web sites for the vulnerability:

I’ve tested some View Security Servers and some older CSGs using the tester above and they also come back clean.

Update: Citrix has an official link here: http://support.citrix.com/article/CTX140605


Monday, April 7, 2014

Rethinking Network Printing (with new PConn2 IPM Utility)

Here is an especially detailed look at printing, printer drivers and remote sessions by Jacques Bensimon.  Added bonus: new IPM utility PCONN2!

If you’re like me and insist on controlling the printer drivers that are installed and used on your TS/RDS/XenApp servers (you should!), Windows network printing has historically been a nightmare:  print servers only let you connect a printer if you have the exact same driver as the one with which the printer is defined (unlike RDP and ICA client printing which offer driver name substitution mechanisms), attempt to download and install said driver to your server otherwise (sometimes even when you in fact do have an identically named driver, and inexplicably sometimes even if you have the identical driver version), and can reject connections entirely if you attempt to use “Point and Print” or “Packaged Point and Print” Restrictions Policies to prevent driver installations.  This state of affairs is often exacerbated by the fact that your servers’ Windows version does not match that of the (generally older) print server(s), which often means that “in-box” driver names for the same printer models don’t match across platforms (there ought to be a law!) and is further complicated when you have no control over the cowboys who manage the print servers and who sadistically ignore the (much safer) in-box drivers, always installing drivers from vendor downloads.

And yet, preferable though client printing may be (driver substitution feature, universal driver availability, better compression, automatic reconfiguration based on detected client  printers, etc.), providing some network printing capability is often a customer requirement.  For example, if there are thin client devices in use in the environment, creating client-side printer connections can range from absurdly difficult to outright impossible – users of such devices may have no choice but to rely on network printer connections established within their remote sessions (what Citrix calls “session printers” when connected via the XenApp feature by that name).  Another common example is that of home or traveling users who need to print to an office printer, either for later pickup or for immediate use by a colleague or assistant.  (“Are assistants colleagues?”  Discuss. :))

So, with these preliminaries in mind, here are a few items related to resolving the above issues, ranging from mundane to mind-blowing :):

A. Unexpected re-installations of existing printer drivers when establishing network printer connections from a Windows 2008 R2 SP1 RDS/XA server can be at least partially eliminated by installing hotfix “KB2896881 - Long logon time when you use the AddPrinterConnection VBScript command to map printers for users during logon process in Windows Server 2008 R2 SP1”.  Despite the article’s title, the hotfix reduces driver re-installations regardless of the method used to create printer connections, including KiXtart, XenApp “Session Printers” and “manual” network printer additions.  This is a worthwhile hotfix to apply even if you wind up using some of the “fancier” strategies described below.

B. As of Windows Vista and continuing through all current workstation and server versions of Windows, a new network printing method called Client-Side Rendering has become the default, unless disabled via the “Always render print jobs on the server” policy setting.  With client-side rendering, the printer’s native command sequence required to get the job onto paper (i.e. the actual PCL or PostScript or whatever commands) is generated entirely on the client side of the printing connection (i.e. within the session in an RDS/XA scenario) via the locally installed printer driver, and is then sent as a RAW stream to the print server which in turn dumps it off to the printer without any further processing by its own printer driver.  Wait a minute!  The print server takes whatever printer commands we send it and passes them on to the printer, no questions asked??  Then why the @#$%&! does it constantly bust our b@##$ about the matching printer driver requirement??  Or is that really a requirement after all?  The following intriguing sentence is found at the bottom of the above-mentioned policy’s description:

Note:  In cases where the client print driver does not match the server print driver (mismatched connection), the client will always process the print job, regardless of the setting of this policy.

Huh?!  Mismatched connection??  Sounds great!  How do I get me one of those?  The answers (there are two) turn out to be buried in a single sparsely detailed MSDN article, and neither one is available via any built-in (or, as far as I can tell, third-party) GUI or command line Windows tool or printer connection method, … until today that is! :)  This seems to be a case of the Windows API having outpaced the user-accessible capabilities of Windows, so the feature lies there, dormant.

Before I describe the two available methods for creating “mismatched” printer connections, let me address something that you may run across regarding Client-Side Rendering and the above “server-side rendering” policy, for example in this Microsoft blog:  “There is one scenario where it may be desirable to offload the rendering policy to the print server - and that would be on a Terminal Server”.  The idea here is that print job rendering can be somewhat processor-intensive, so the suggestion is that it might be best kept on the print server if you fear massive amounts of printing occurring simultaneously on multi-user machines.  That may be true, especially on underpowered machines (<insert VM joke here>), and you can certainly use the policy if you find it beneficial (as you saw above, it won’t affect mismatched printer connections anyway), but I have news for you: you’ve been rendering print jobs on Terminal Servers since long before Windows 2008!  How?  With ICA/RDP client printing!  All client printing (except to printers created with the Citrix Universal Printer Driver) is rendered within the session using a local (possibly substituted) driver and the raw PCL/PostScript/etc. stream is sent to the client for immediate pass-through to the printer – sound familiar?

Okay, back to printer connections with mismatched drivers.  Here are the two ways you can create them:

1. If, for any given shared printer, you create on the print server a REG_SZ value named “DriverPolicy” under the key HKLM\SYSTEM\CurrentControlSet\Control\Print\PrinterName\PrinterDriverData and set it to the name of the driver you would like to use when connecting to this printer (regardless of the driver with which it’s actually defined), then any Windows Vista or above client (including Windows 2008 R2) will only use that particular driver when connecting to that printer, assuming it’s available locally.  There is even a benefit to creating this Registry entry (set to the printer’s “real” driver name) when you in fact do have the matching driver on the client side and are okay with using it:  it dramatically speeds up the establishment of connections to that printer because it completely short circuits all driver name and version comparisons, and eliminates even the possibility of a driver being downloaded from the print server.  But that is both the strength and the (slight) weakness of this technique:  while Windows XP/2003 print clients are oblivious to and unaffected by the DriverPolicy Registry entry (since they don’t support client-side rendering in the first place), it “breaks” Point-and-Print for the more current Windows versions – if they don’t already have the requested driver locally, no attempt will be made to provide them with one and connection attempts to that printer will simply fail.  That’s of course not a problem for you and your carefully managed RDS/XA servers (which will already have the requested drivers and will benefit from the elimination of all the “driver drama”), but print servers usually also support workstations, and their set of installed printer drivers is rarely managed with any sort of care (because Point-and-Print makes that unnecessary).  
Workarounds that come to mind include either using separate print servers for RDS/XA sessions (if we could easily have *that*, we probably wouldn’t be having this discussion) *or* creating duplicate printer shares with recognizably different names, one without a DriverPolicy entry (for use on workstations) and one with (for use on RDS/XA).
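If you’d rather script the DriverPolicy entry than create it by hand, a small helper along these lines would do it when run on the print server.  This is a sketch, not a supported tool: the printer and driver names are made up, and the key path spells out the Printers container under which each shared printer’s key lives:

```python
import sys

def driver_policy_entry(printer_name: str, driver_name: str) -> tuple[str, str, str]:
    """Build the (subkey, value name, REG_SZ data) triple for the per-printer
    DriverPolicy override described above.  Names here are illustrative."""
    subkey = (r"SYSTEM\CurrentControlSet\Control\Print\Printers" "\\"
              + printer_name + r"\PrinterDriverData")
    return subkey, "DriverPolicy", driver_name

def apply_driver_policy(printer_name: str, driver_name: str) -> None:
    """Write the value -- only meaningful when run on the Windows print server itself."""
    if sys.platform != "win32":
        raise OSError("DriverPolicy lives in the print server's registry (Windows only)")
    import winreg  # Windows-only module, imported lazily
    subkey, name, data = driver_policy_entry(printer_name, driver_name)
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_SZ, data)
```

Setting the value to the printer’s “real” driver name gets you the connection speed-up mentioned above; setting it to a different locally installed driver gets you the mismatch.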

2. There is a Windows API function called AddPrinterConnection2 that, when correctly used (I had an embarrassingly hard time figuring it out :)), will let you create a printer connection using the specified locally installed driver of your choice.  As in the case of the DriverPolicy Registry entry described above, there is a significant performance benefit to using this function even if you specify the same driver as the print server’s, again because it bypasses all driver version comparisons and never involves a driver download.  But unless you’re building this capability into a custom program, you need a utility that “wraps” the API function in question and makes it available to batch files and other scripts, … which is where the new PConn2 utility comes in.  See the screenshot below for its syntax and usage notes.


Two empirical observations about this API function (and therefore about PConn2):  (a) It creates “mismatched” connections exactly as requested within the current logon session (don’t go by the printer connection’s Properties when confirming the driver it’s using – it’s in the Registry somewhere – ask me if you want to know exactly where), but if the user account has a persistent profile (i.e. local or roaming, not mandatory) and logs off then logs back on, Windows will dutifully reestablish the printer connection *without* employing the correct API function and the requested driver mismatch, so you may still wind up with unexpected driver downloads.  Not a problem with mandatory profiles, but in other scenarios you’ll need to either delete HKCU\Printers\Connections at logoff or exclude it from capture by whatever profile “solution” you’re using.  (b) The function bypasses all policy restrictions on print servers to which you may print – this is actually a great feature:  if you use this function (or PConn2) to create all user printer connections under tight script control, then you can exclude all print servers via policy (by only allowing printing to a single nonexistent “bogus” server) and rest assured than no other mechanism can be used to create printer connections that might result in unwanted driver downloads.
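For the curious, here is roughly what a wrapper like PConn2 has to do under the hood – a ctypes sketch of the AddPrinterConnection2 call.  This is not PConn2’s actual source; the flags and structure layout come from the winspool.h header, while the helper’s name and any printer paths you’d pass it are made up:

```python
import ctypes
import sys
from ctypes import wintypes

# Flags for PRINTER_CONNECTION_INFO_1 (winspool.h)
PRINTER_CONNECTION_MISMATCH = 0x20   # use pszDriverName even if it differs from the server's
PRINTER_CONNECTION_NO_UI    = 0x40   # never prompt the user or download a driver

class PRINTER_CONNECTION_INFO_1(ctypes.Structure):
    _fields_ = [("dwFlags", wintypes.DWORD),
                ("pszDriverName", wintypes.LPWSTR)]

def add_mismatched_connection(printer_unc: str, local_driver: str) -> None:
    """Connect to \\\\server\\printer using a locally installed driver of our choice."""
    if sys.platform != "win32":
        raise OSError("AddPrinterConnection2 exists only on Windows (winspool.drv)")
    info = PRINTER_CONNECTION_INFO_1(
        PRINTER_CONNECTION_MISMATCH | PRINTER_CONNECTION_NO_UI,
        local_driver)
    ok = ctypes.WinDLL("winspool.drv").AddPrinterConnection2W(
        None, printer_unc, 1, ctypes.byref(info))   # dwLevel must be 1
    if not ok:
        raise ctypes.WinError()
```

The batch-friendly wrapping (parsing command-line arguments, reporting errors, etc.) is exactly the convenience PConn2 adds on top of a call like this.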

C. If you’re forward-looking, you’ll be interested in a new feature introduced with the Windows 8/2012 “v4” printer driver architecture, the so-called “Class Drivers”:  printer vendors (and Microsoft) can now provide these “super-drivers” that support from a single model to an entire range of printers from any given manufacturer (e.g. “Xerox PCL6 Class Driver” or “Brother PS Class Driver”) and print clients, including Windows 7 and 2008 R2, can connect and print to any shared printer defined with such a class driver using a single client-side printer driver, the “Microsoft Enhanced Point and Print Compatibility Driver”.  Could be a game changer!  At the very least something to consider if you have the option of building (or suggesting) a Windows 2012-based printing environment.

Well, if you aren’t by now as sick of reading about printing as I am of writing about it, you’re in need of serious help.



Wednesday, April 2, 2014

TS/RDS User Home Directory Policy Explained

Here is a nice write-up on setting home directories via group policy by Jacques Bensimon.

Since I’ve had to deal with this before, I thought I’d review two items related to setting a Home Directory Root path via group policy for TS/RDS/XA servers.  This policy is especially useful in the common situation of a farm that has servers in multiple offices but in which users occasionally/frequently need to log on to servers *not* in their home office (where their normal home directory resides) – if the home directories are not on a DFS share replicated to all the possible logon locations (probably the best case scenario), this policy can be used (obviously with a different local root path applied to each server location) to at least ensure that users will not suffer the performance issues associated with having a non-local home directory (especially when some profile folders are being redirected to the home directory).  Of course, this implies that users will have different home directories (with different, unsynchronized contents) at each location to which they log on, but that is often acceptable since the apps they run off remote servers are typically different from those they run off their local servers.  For example, many law firms have XenApp servers in each of their offices for use by local users when running most apps (MS Office, document management, etc.) but require users to log on to headquarter servers to make their time entries or to access other in-house database apps that would perform poorly over the WAN.

A. Assuming you want users in any given office to use their existing local home directory (the one defined in their AD account) when logging on to the TS/RDS/XA servers in their local office (and why wouldn’t you?), you’ll want to be aware of the sequence that this policy appears to follow to locate (or create) a user’s home directory under the specified root folder [Side note:  you’re pretty much screwed if the existing user home directories in an office aren’t all at a single network location or if their names don’t match either %UserName% or %UserName%.%DomainName%.  In either scenario, unless you can reconfigure the home directory environment, at least some users are going to wind up with new separate TS home directories if you use the group policy in question]

(1) If a folder named %UserName%.%DomainName% already exists under the specified root *and* the user has Full Control permissions to it, then that folder is used as the home directory.

(2) Otherwise, if a folder named %UserName% exists under the specified root *and* the user has Full Control permissions to it, then that folder is used as the home directory.

(3) If neither (1) nor (2) apply (and %UserName%.%DomainName% does not already exist), then %UserName%.%DomainName% is created (with Full Control given to the user) and used as the home directory.

(4) If %UserName%.%DomainName% exists *but* the user doesn’t have Full Control permissions to it (the last remaining possibility), then the policy doesn’t work at all and the user presumably doesn’t get a home directory (or does (s)he?  Might the system fall back to whatever is specified in the user’s AD account?  I didn’t test this last possibility.  Anybody?)

So, bottom line, if users already have a home directory named %UserName% at the location in question *and* they have Full  Control permissions to it (pretty much the case in most places), then you need not fear that a separate new %UserName%.%DomainName% will be created and used (though as you can see the system looks for that first).  The other takeaway is that if a home directory is created as a result of this policy, then it will always be named %UserName%.%DomainName% -- I can find no way to force the auto-creation of a plain %UserName% home directory via this policy (this should definitely be a new policy option – not many environments have to deal with multiple domains and the possibility of duplicate usernames).  The only way around the “.%DomainName%” appendage is to pre-create all the home directories, even for the remote users.  Or, come to think of it, this may not be a bad way to distinguish the home directories of local users vs. remote users:  those with the added .DOMAIN were auto-created as a result of the policy and therefore belong to remote users.
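To make the search order concrete, here’s a toy model of the observed behavior – a dict stands in for the root folder, mapping folder name to whether the user has Full Control, and the function name is mine, not Microsoft’s:

```python
def resolve_home_dir(root: dict, user: str, domain: str):
    """Model of the TS home-directory policy's observed search order.
    Returns (folder used, whether it was auto-created); (None, False) on failure."""
    dotted = f"{user}.{domain}"
    if root.get(dotted):                 # (1) UserName.DomainName exists with Full Control
        return dotted, False
    if root.get(user):                   # (2) UserName exists with Full Control
        return user, False
    if dotted not in root:               # (3) auto-create UserName.DomainName
        root[dotted] = True              #     (created with Full Control for the user)
        return dotted, True
    # (4) UserName.DomainName exists but without Full Control: the policy fails
    return None, False
```

Note how a plain %UserName% folder is only ever *found*, never *created* – which is exactly the limitation lamented above.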

B. Now, what if scenario (3) above occurs, i.e. a home directory has to be auto-created?  What sort of permissions are required on the specified root location in order for the folder auto-creation to succeed?  Well, as far as I can tell (and quite inexplicably), the folder creation attempt occurs in the security context of the *user*, so (s)he’d better have the right permissions to the root folder.  Basically, every user has to be able to enumerate the root location and create folders under it, and must end up with Full Control over the folder (s)he creates.  After a little experimentation, the following appears to be a sufficient permission set to grant on the root folder (and only the root folder) to get the desired result without allowing everybody access to everybody else’s home directories (you’ll obviously need to go to the Advanced permissions dialog to get this done):

· Authenticated Users (with option “This Folder Only”)

- Traverse Folder
- List Folder
- Read Attributes / Read Extended Attributes
- Create Folder
- Read Permissions

· CREATOR OWNER (with option “Subfolders and Files Only”)
- Full Control

Of course, you’ll also already have in there the usual complement of Full Control entries for Administrators,  SYSTEM, and whatever else is typical in the particular environment.  [Be careful when adding permission entries to the root folder:  you don’t want to see users lose their existing home directory permissions, so stay away from that “Replace all child object permissions …” option!]

