Think IPM

Tuesday, September 20, 2016

A quick look at the Horizon 7 Access Point Appliance

In case you missed it, VMware has been moving toward more and more appliance-driven applications: vCenter, vRealize, and now Horizon Access Points. The Access Point is a replacement for the Windows-based Security Server we would normally build out in a Horizon View environment. This server sits on the edge of the connection, before the user’s endpoint (usually in a DMZ or behind a firewall NAT).

Having this server as a VMware hardened appliance seems great!  I’m in.  Let’s check it out…

First off, it’s an OVF appliance.  I like that.  I am deploying 2.7 here.


There are a couple of different scenarios for NICs, with three selections: 1, 2 or 3 NICs. A single NIC is for NAT’d environments; 2 and 3 NICs are for DMZ installations (3 NICs if you want to additionally isolate the management traffic).


Here is where it gets kind of annoying: you need to create an IP Pool for the network settings, even though the next screen will allow you to add in a variety of IP settings.



On this screen, BE SURE to give the admin user a password, since all configuration will have to be done via the REST API using JSON. What?!? Yeah… that’s it for me. I’m not dealing with that. Here are the rest of the screenshots for completeness, but at this point I’ve made up my mind to use the Windows-based Security Server.
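For the curious, a settings update against that REST API might look roughly like the following Python sketch. The appliance host name is a placeholder, and the endpoint path and payload fields here are my assumptions for illustration, not VMware’s documented schema:

```python
import json

# Placeholder appliance address; treat the endpoint path and the payload
# shape below as assumptions for illustration, not VMware's documented API.
APPLIANCE = "accesspoint.example.com"

def build_settings_request(view_url):
    """Return the (url, body) pair for a hypothetical settings PUT."""
    url = f"https://{APPLIANCE}:9443/rest/v1/config/settings"
    body = json.dumps({
        "edgeServiceSettingsList": [
            {"identifier": "VIEW", "proxyDestinationUrl": view_url}
        ]
    })
    return url, body

url, body = build_settings_request("https://view-cs.example.com:443")
print(url)
```

You would then send that body with an HTTPS PUT as the admin user — which is exactly the kind of thing I’d rather do with a checkbox.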



After this screen there is a nice summary, and then the appliance gets deployed. Like I said earlier, this is not for me or my clients. The manageability needs to improve substantially before I would consider deploying it at a client site. Maybe in 3.0 or 4.0. I’ll keep a close eye on the developments of this product.


Wednesday, September 14, 2016

Upgrading VMware Site Recovery Manager from 5.8 to 6.1.1


So the summer is over, school is back, fall is here, iOS 10 has been released, and everyone is itching to upgrade stuff. It’s what you do before the end of the year: finish crossing things off your lists before starting new ones. #NewYearResolutions

Today, I am upgrading a client’s VMware Site Recovery Manager from 5.8 to 6.1.1 (the latest at the time of this writing). Not so fast though… it looks like there is no direct upgrade from SRM 5.8 to 6.1.1.

Looks like you have to take a slight detour to 6.0 first and then upgrade to 6.1.1.  Whew!



Monday, July 18, 2016

Stopping that unstoppable service!

A client (and friend) contacted me recently with a challenge.  Following what we assume was a bad pattern update, his Microsoft System Center Endpoint Protection service (aka Forefront Security, aka Microsoft Antimalware Service, aka MsMpSvc), running on all the servers in his XenApp farms, was suddenly timing out during the scanning of all web downloads and effectively preventing them from completing. 

His question was simple:  “How do I stop this service?”

Needless to say, he only asked because this particular service (like many services one encounters in one’s travels) has permissions set not only to protect itself from being stopped via normal means, like the Services snap-in or “Net Stop …”, but also to prevent the forcible termination of its service executable (in this case MsMpEng.exe) via Task Manager or “TaskKill /F …”, regardless of the privileges held by the account attempting it (including SYSTEM).

The two methods I eventually came up with, after some testing on a home VM that happens to be running the same Endpoint Protection, certainly worked in the case of this particular service, but should be applicable to a variety of similarly protected services (when they’re suspected of causing issues and provide no alternate way of gracefully shutting them down). Both methods can be summarized simply as “use the SYSTEM account to give Administrators full control over the service in question, which includes the right to stop it, then stop the service normally”. They differ only in that the first method uses a GUI and the second uses the command line, making the latter appropriate for scripting and mass deployment to multiple machines. Both use SysInternals’ PSExec to launch a process as SYSTEM. [When attempting this with services other than Endpoint Protection, you’ll of course substitute the appropriate executable and service name into what follows.]

Method 1 – Using SysInternals’ Process Explorer:

(1) Run Process Explorer as SYSTEM (PSExec -s -i ProcExp.exe).  The -i is very important to see the program run interactively, and of course -s to run as SYSTEM.

(2) Find MsMpEng.exe in the Process Explorer process tree, right-click and select Properties

(3) Go to the Services tab, make sure MsMpSvc is selected, then click the Permissions button.

(4) In the MsMpSvc Permissions dialog, select Administrators, then give Full Control (see red arrow in screenshot below) and click OK.

(5) You should now be able to stop the service (via Server Manager, command line, whatever).

[In the case of this particular service, Administrators once again lost Full Control when I restarted the service to take the screenshot, so the elevated permissions were temporary.]


Method 2 – Using Helge Klein’s SetACL:

As always with the incredibly powerful SetACL, figuring out the syntax is a bit daunting, but this eventually worked:

PSExec -s SetACL-x64.exe -on MsMpSvc -ot srv -actn ace -ace "n:Administrators;p:full"

The parameters passed above to SetACL-x64.exe can be roughly interpreted as “object name MsMpSvc; object type service; action: add/modify an ACE (Access Control Entry); ACE details: give the account named Administrators the permission Full Control”.

Once this is done, the service can again be stopped using any Administrator account (via Net Stop, sc.exe, PSService.exe, whatever).
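And since the command line lends itself to scripting, here is a small Python sketch that builds the PSExec + SetACL “grant” and “stop” command lines for a list of servers. The server names are placeholders, and this only constructs the commands; actually running them (via subprocess, or your deployment mechanism of choice) is up to you:

```python
SERVICE = "MsMpSvc"
SERVERS = ["XENAPP01", "XENAPP02", "XENAPP03"]  # placeholder names

def build_commands(server, service=SERVICE):
    """Build the two command lines for one server: grant Administrators
    full control over the service via SetACL (run remotely as SYSTEM),
    then stop the service."""
    grant = (f'PSExec \\\\{server} -s SetACL-x64.exe -on {service} -ot srv '
             f'-actn ace -ace "n:Administrators;p:full"')
    stop = f'PSExec \\\\{server} sc stop {service}'
    return [grant, stop]

for server in SERVERS:
    for cmd in build_commands(server):
        print(cmd)
```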

I can’t guarantee they’ll all yield to the same techniques, so good luck with your own unstoppable services.

Follow Jacques Bensimon on Twitter @JacqBens


Monday, June 27, 2016

Centralized policy definitions - Bad idea! ... with a remedy.

On several occasions in the past few years (and twice in the past month, probably the result of interest in piloting Windows 10 and/or Office 2016), I noticed a very bad Microsoft idea being implemented in a client’s domain environment: a central policy definitions store is created at


and is populated with some collection of ADMX/ADML policy definition templates (in both recent cases copied from a Windows 10 machine of unknown vintage).

Doing this prevents *all* machines in the environment from which any policy editing is ever done (my XenApp  6.5 servers being just one example of such) from being able to use their own platform-specific, version-specific and custom policy definitions.  For my situation at these clients, any ADMX/ADML collection older or newer than the most current Windows 2008 R2 definitions is bad and/or confusing – I want to see all the policy settings that are applicable to the 2008 R2 XenApp servers, and none that aren’t – and anything other than the Microsoft Office policy definitions for the version installed on XenApp means that I (for example) can’t properly manage Office policies because they are seen by the policy editor as just “Extra Registry settings” that are not explained and cannot be modified.  Adding more policy definitions (e.g. for additional Office versions) into the central store is *not* a good solution to this, because it further confuses the situation and doesn’t solve the problem of a different Windows version’s policies being displayed (e.g.  there are like millions – okay, hundreds – of new policies defined for Windows 10 and Server 2016 and, while editing policies for Windows 7/8.x or 2008/2012 R2, it would be torture to wade through them and ignore the inapplicable ones if the Windows 10 definitions were the ones placed in the central definitions store).

Bottom line: the central store idea was, at the time it first appeared, one of the worst ones Microsoft ever had (because they initially, and for a long time thereafter, provided no way for a domain machine to say “no thanks, I’ll use my own definitions”), and should even now probably only ever be used in incredibly uniform environments where every machine in the joint is running the same version of Windows, Office, etc., no machine uses custom or modified policy templates, and all machines are always updated at the same time (in lockstep with the central policy definitions). Yeah, I know of no such environment either! My message to those who might unthinkingly implement a central policy store is “If you’re so in love with your set of policy definitions, put them where you use them without imposing them on everybody else!”.

However, should you run into this situation and can’t talk sense into your client (a shameful consulting fail, by the way), Microsoft did at some point release a hotfix (“An update is available to enable the use of Local ADMX files for Group Policy Editor”) which, when installed and the following Registry entry is added (set to 1), allows a Windows 7/8.x or 2008/2012 R2 machine to force the continued use of its own local policy definitions during editing, i.e. a “thanks but no thanks” setting:

Key:    HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Group Policy

Value (DWORD):  EnableLocalStoreOverride

Data:   0 = use the central PolicyDefinitions store on SYSVOL if present (default)
        1 = always use the local PolicyDefinitions folder
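For what it’s worth, the same setting can be pushed out as a .reg file (assuming, as described above, that EnableLocalStoreOverride is a DWORD value under the Group Policy key):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Group Policy]
"EnableLocalStoreOverride"=dword:00000001
```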

You’ll need to reboot after applying this hotfix (which is not delivered through Windows Update).

Thought you’d want to know.

Follow Jacques on twitter (@JacqBens)


Thursday, June 16, 2016

New IPM utility Net2SwapFlag -- solves infamous KB2536487 issue

Here is a great new utility written by my colleague Jacques Bensimon.


Microsoft Knowledge Base article KB2536487, entitled “Applications may crash or become unresponsive if another user logs off a Remote Desktop session in Windows Server 2008 or Windows Server 2008 R2”, describes a situation whereby applications run directly from a network share can suddenly stop working for all existing and new users of a given Remote Desktop Services / Terminal Server / XenApp server as a result of one user logging off.  Having butted heads with this issue, I can further clarify that once all running instances of the app on that server are fully closed, the app will once again start working… until it occurs again (the main symptom is an application crash, either immediately upon launch or later when a particular app feature is invoked).  If online forum threads are any indication, this issue has been plaguing many environments for years (don’t let the recent date of that article fool you; it’s revision 5.0 and still as borderline incomprehensible as ever), and Microsoft is reported to have privately told some who’ve opened support cases that it doesn’t plan to provide a fix for this issue (because it entails a network File Control Block management strategy embedded deep within the redirector – I thought FCBs had gone out with 16-bit DOS applications, so I’m guessing Microsoft has recycled the term to denote something of a similar nature – think handle or the like).


Before I get to Microsoft’s suggested workarounds, let me first quickly dispense with a couple of urban myths that have grown up around solving the issue (I’ve tried them all, none of them work):

·        Run the app directly from a UNC path rather than a mapped drive (even if the app allows it, doesn’t work).

·        Make sure no mapping even exists to that share before using the UNC path (nope).

·        Use a DFS share, either mapped or via UNC (nyet and nyet).


Now here are Microsoft’s suggestions (verbatim from the KB article):


1.      Do not run shared applications from a mapped folder; instead install the shared application locally on the Terminal Server.


That right there is some solid technical support work!  … but let’s move on.


2.      Use WebDAV shares as opposed to mapped folders if remote binary sharing is required.


Not at all disruptive I’m sure (!) but, as you’d suspect, those who tried it reported horrible performance, so let’s move on again.


3.      Compile the application using the "Swap run from network" linker setting. This setting is described here.


I was ready to laugh this one off as well (the main affected app being a complex network-installed third-party application for which our client obviously didn’t have the source code and the means to recompile it), but I read the description of that linker setting anyway and found out that an executable (EXE/DLL/OCX) linked with that setting is treated differently by Windows when it is loaded, i.e. it is first copied to the local swap file and loaded from that location.   Well, that doesn’t sound like any kind of code change, merely some instruction to the OS about how the executable should be handled, the sort of thing one might find specified by something in the executable’s header maybe?  Armed with the Microsoft PE Header Specification and the experience of previously having investigated the Terminal Server Aware Flag, also found in the PE header and the subject of the IPM TSFlag utility, I found out that this setting is indeed also a single-bit flag (officially called IMAGE_FILE_NET_RUN_FROM_SWAP, value 0x0800 in the header’s so-called Characteristics word) and that it could therefore in theory be set into an already compiled executable … and thus was born the IPM Net2SwapFlag utility:
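The bit-flip itself is tiny. Purely as an illustration (my own sketch here, not the code of the actual utility), this is how that flag can be set in an already-compiled executable using the offsets from the PE specification: e_lfanew at DOS-header offset 0x3C points at the “PE\0\0” signature, and Characteristics is the last WORD of the 20-byte COFF file header that follows it.

```python
import struct

NET_RUN_FROM_SWAP = 0x0800  # IMAGE_FILE_NET_RUN_FROM_SWAP in Characteristics

def set_net_run_from_swap(data: bytearray) -> bytearray:
    """Set the 'copy to the swap file and run from there when loaded over
    the network' flag in a PE image held in memory as a bytearray."""
    if data[:2] != b"MZ":
        raise ValueError("missing MZ signature")
    # e_lfanew (DOS header offset 0x3C) points at the 'PE\0\0' signature
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_off:pe_off + 4] != b"PE\0\0":
        raise ValueError("PE signature not found")
    # Characteristics is the last WORD of the 20-byte COFF file header
    char_off = pe_off + 4 + 18
    chars = struct.unpack_from("<H", data, char_off)[0]
    struct.pack_into("<H", data, char_off, chars | NET_RUN_FROM_SWAP)
    return data

# Demo on a synthetic minimal header (a real tool would read/write the file)
buf = bytearray(0x60)
buf[0:2] = b"MZ"
struct.pack_into("<I", buf, 0x3C, 0x40)  # e_lfanew -> 0x40
buf[0x40:0x44] = b"PE\0\0"
set_net_run_from_swap(buf)
print(hex(struct.unpack_from("<H", buf, 0x40 + 22)[0]))  # prints 0x800
```

(Note that flipping any header bit invalidates an Authenticode signature, so signed executables would need re-signing afterward.)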




I applied it to every executable in the application’s network folder [For %F in (*.exe *.dll *.ocx) do Net2SwapFlag-x64 %F /QY] and the app crashes have completely disappeared since.  If the utility is launched interactively on one of the executables, it now displays something like




I have yet to run into an executable “in the wild” that already had that flag set, so I’m guessing not a lot of developers are even aware of the feature (which by the way as a side-effect should somewhat speed up the overall performance of network-based applications).


But there was one more piece of information provided as part of that last workaround given in the KB article:


If the application is a managed app, instead use the “Shadow Copy” feature described here.


I and a couple of .NET developers I spoke to couldn’t make any sense of that “Shadow Copy” feature for standalone apps (the linked article appears oriented to web development with ASP.Net), and it’s unclear from the KB article whether setting the magic flag in the header of a .NET executable would have the same desirable “run from swap” effect.  So for now I can only present  Net2SwapFlag as a working solution to the KB2536487 issue for unmanaged (i.e. non-.NET) applications.  I of course would be happy to know whether it works with managed apps, one way or the other – I’m leaning 51% toward it working there as well.   [If ever a utility required a disclaimer, this is it, so let’s keep it simple: You assume all the risks of using this utility, even if the resulting executables cause untold data and/or job loss!  Don’t even think of using the utility otherwise.  Capiche?]




You can find the utilities here.