A common Microsoft conundrum: Updates Stuck at 67%

One thing that's certain to induce anxiety in an IT nerd is the above screen: you installed some updates, and now you're waiting…waiting…waiting…and praying the update actually completes. Depending on what version of Windows you have (like Windows Server 2016), this can mean HOURS of waiting. I feel for you.

You can tell the percentage shown isn't really dynamic; more than a few people report things going sideways at exactly this "67%" mark, and the display seems to jump from 35% to 67% to completed, so these numbers are likely hard-coded milestones rather than a live progress count.

That said, what often happens here is that the update "hangs" at 67%, and after what seems like an eternity, the computer/server reboots on its own and jumps right back to the 67% screen AGAIN. WTF.

To address the rebooting part first: these updates are installed via the Windows Modules Installer (TrustedInstaller) service, and out of the box it has a 15-minute (i.e. 900-second) time limit – if the install doesn't complete within 15 minutes, it reboots the machine and tries again. After the second reboot, if it still fails, it "fails" the patch and rolls back all changes. You can actually adjust this timeout in the registry; see the following link for more info:

https://docs.microsoft.com/en-us/troubleshoot/windows-client/deployment/windows-update-hangs-updates-uninstalled

They recommend setting it to a 3-hour limit, which is NUTS. I'm sure that may make some sense for very large servers, but seriously, I wouldn't set it any longer than 30–60 minutes. 3 hours wouldn't do my nerves (or my liver) any favors!
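If you do decide to tweak that timeout, it boils down to setting a registry DWORD measured in seconds. Here's a minimal PowerShell sketch – note that the key path and value name below are placeholders, NOT the real ones; pull the actual names from the Microsoft article linked above:

```powershell
# Placeholder sketch only: $key and $name are hypothetical, not the real
# registry locations -- look them up in the linked Microsoft article.
$key  = 'HKLM:\SOFTWARE\Example\PlaceholderKey'  # hypothetical key path
$name = 'PlaceholderTimeoutValue'                # hypothetical value name
$secs = 1800                                     # 30 minutes (30 * 60), not Microsoft's suggested 3 hours

# Create (or overwrite) the DWORD value:
New-ItemProperty -Path $key -Name $name -Value $secs -PropertyType DWord -Force
```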

Now, I recently hit this issue while doing some patching (I guess we're all doing some patching thanks to PRINTNIGHTMARE), and I couldn't get any patch to install (at least any larger patch). One thing I found was that despite the blue screen that makes you think the server isn't really running while the 67% screen is up, the server is in fact running with full network access, and I was able to open a PowerShell session to the server.
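If you want to poke at a server that looks frozen at 67%, you can usually remote in from another machine. A quick sketch, assuming PowerShell remoting/WinRM is enabled (it is by default on Windows Server 2012 and later), with SERVER01 standing in for your server's hostname:

```powershell
# SERVER01 is a placeholder hostname; Get-Credential prompts interactively.
Enter-PSSession -ComputerName SERVER01 -Credential (Get-Credential)
```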

I looked at the event log using something like `Get-EventLog -LogName Application -Newest 10` to see if anything was popping up, and sure enough the File Server Resource Manager service was, for lack of a better term, flapping. This was because a drive with BitLocker encryption was still locked after the server rebooted. It looked to me like the service was constantly trying to start and failing, which I could imagine would stop a patch from continuing, or leave it unable to stop the service while new files are replaced.
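For reference, the check was nothing fancier than pulling the latest Application log entries. A sketch – the `*SRM*` source filter is my assumption about how FSRM labels its events, so adjust it to whatever sources you actually see in your log:

```powershell
# Latest 10 Application log entries, newest first:
Get-EventLog -LogName Application -Newest 10 |
    Format-Table TimeGenerated, EntryType, Source, Message -AutoSize

# Narrow to entries whose source looks FSRM-related (assumed wildcard):
Get-EventLog -LogName Application -Newest 50 |
    Where-Object { $_.Source -like '*SRM*' }
```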

Since I had a PowerShell session open, I was able to simply stop the FSRM service (its actual service name is srmsvc), and while I was at it, I stopped a few other third-party services that were running on the server as well. To my relief, the update finally got past the dreaded 67% screen and fully completed.
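For anyone following along, stopping the service is a one-liner, and you can eyeball the running non-Microsoft services with a rough path-based filter (my heuristic, not an official "third-party" flag):

```powershell
# Stop File Server Resource Manager (service name: srmsvc):
Stop-Service -Name srmsvc -Force

# Rough list of running services whose binaries live outside \Windows\ --
# a heuristic for spotting third-party services worth stopping temporarily:
Get-CimInstance Win32_Service |
    Where-Object { $_.State -eq 'Running' -and $_.PathName -notlike '*\Windows\*' } |
    Select-Object Name, DisplayName, PathName
```

Just remember to start anything you stopped afterwards, or let the post-patch reboot bring them back if they're set to start automatically.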

I figured this out thanks to a tip from another search, where someone suggested shutting down all third-party services before installing a patch/rollup/cumulative update – though in my case it was actually a native Windows service doing the locking.

Lesson learned; life goes on, except Microsoft just announced Print Nightmare Part II, a NEW vuln found AFTER the July 13th, 2021 updates, so be ready for more patches!