The Case of Invoke-RestMethod vs. The Bad JSON Feed

I work on a daily basis with some incredibly smart and talented people who build some pretty cool systems that integrate with the services my team runs for our campus. One of those services is an API run by our Identity Management team that gives us an interface to all of the identity data for campus. This REST API allows us to not only query information about accounts in our environment, but also feed data back in for things like provisioning email aliases or just notifying them that we’ve given someone a mailbox. Like I said, pretty cool stuff!

We use the API both for interactively querying account information and for working with accounts in automation. Several members of my team noticed that some of our PowerShell functions periodically weren’t returning data. The bells and whistles really started going off when several pieces of PowerShell automation began failing due to null data coming back when pulling data on an individual account. So, let’s look at one piece of code we’re using:

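(What follows is a simplified stand-in for our internal function; the function name, parameter names, and URL are all hypothetical, but the shape is the same.)

    function Get-CampusIdentity {
        [CmdletBinding()]
        param (
            # The account to look up
            [Parameter(Mandatory = $true)]
            [string]$Username,

            # Which attributes the API should return
            [string]$RequestedAttributes = 'all'
        )

        # Hypothetical stand-in for our internal IdM endpoint
        $uri = "https://idm.example.edu/api/identity/$Username" +
               "?requested_attributes=$RequestedAttributes"

        $response = Invoke-RestMethod -Uri $uri -Method Get

        # We typically only care about the result data, not the other
        # messages and logs the API sends back
        $response.results
    }
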
This PowerShell function is a fairly basic example of how to work with a REST API using the Invoke-RestMethod cmdlet. This particular function lets me run something like

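    # (using the hypothetical function from the sketch above)
    Get-CampusIdentity -Username 'kyle'
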
on myself and get back everything that is stored about my account when Invoke-RestMethod hits the API.  In this case, I’m getting back null data.  That’s not right…

Invoke-RestMethod is a pretty cool cmdlet for working with a REST API. It’s smart enough to recognize that the data coming back from my API is JSON and to turn it into objects that I can work with in PowerShell. The small gotcha is that Invoke-RestMethod really depends on that being valid JSON. See where I’m going here???

So, it looks like we’re getting back null data, but that shouldn’t be happening. What do we do next? The first thing I checked was the data that we’re actually pulling. I have the ability to pull back “all” in the requested_attributes. Starting there, if I decrease the scope of the data I’m requesting and limit it to something like “primaryemailaddress”, all of a sudden everything is happy. Strange…  Pulling “all” again… nope… null…

The API is returning a ton of data, but we typically only care about the result data and not all the other messages, logs, and other general fluff that comes back from the API… until now. In the sketch of my function above, you’ll see that we have a line with

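    $response.results
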
where we’re getting our results back from the API. In this case, I want to see all the messages coming back from the API to see if we’re receiving any errors or other useful information. To do that, we’re going to change that line to

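    $response
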
so we get back everything. That will give us the full response from the REST API so we can see what we’re getting back and play around with all the data in the response.

After reloading the function, I’m going to re-run my command and see what I get back. (Sparing you the ugly output: there was nothing useful in the messages that were returned in the huge glob of JSON I got back.) So, let’s dump everything into a variable so we can play with it a bit:

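    # (again, using the hypothetical function from the sketch above)
    $data = Get-CampusIdentity -Username 'kyle'
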
Next, let’s try piping that through ConvertFrom-Json and then we can parse through the data to see what’s going on:

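    $data | ConvertFrom-Json
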
Yeah buddy!  An error!  Now this is something we can actually work with!  “Cannot convert the JSON string because a dictionary that was converted from the string contains the duplicated keys ‘persondirectoryid’ and ‘PersonDirectoryId’.”  BINGO!  It seems our REST API JSON feed is giving us duplicate data in the form of one attribute in lower case and another in camel case.

As a short-term fix, we can use “.Replace” to swap out the bad data that we’re getting so things work properly:

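    # String .Replace() is case-sensitive, so this only renames the
    # lower-case duplicate key and leaves 'PersonDirectoryId' alone.
    # (The replacement key name is just an arbitrary stand-in.)
    $data.Replace('"persondirectoryid"', '"persondirectoryid_dupe"') | ConvertFrom-Json

As a bonus, the .NET String .Replace() method is case-sensitive, which is exactly what we need here; PowerShell’s -replace operator is case-insensitive by default and would have mangled both keys.
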
In this case, we notified the team that owns the API application and they were able to correct the issue with the duplicate attribute. Though, this does bring up an interesting shortcoming of Invoke-RestMethod: it had no tolerance whatsoever for the invalid data. Both entries were the same except that one was all lower case and one was camel case. I guess in a perfect world, it would be nice if there were a -CaseSensitive parameter to allow entries that differ only in case, or some other way I could -IgnoreErrors or -DropErrors. But bad data is bad data, and fixing the data fixed my problem in this case.

Stop Mouse and Keyboard Theft with a Cable Lock and Washer

I recently had to deal with the disappearance of several keyboards and mice from computers that are set up in a semi-public hoteling area. I received a support request from someone who noticed that some of the computers were missing either a keyboard, a mouse, or both. We had no reason to believe they were stolen; they were most likely taken by a well-meaning employee assisting a co-worker or fixing their own issue. We keep a stockpile of extra keyboards and mice, so replacing the missing ones was trivial. However, we still have to account for the inventory, and we really need people to contact us when their equipment breaks.

The solution?  A cable lock and a washer that cost less than $0.25.

[Photo: inexpensive washer]

The cable for the mouse or keyboard is looped through the washer.

[Photo: mouse cord looped through the washer]

If you find a washer with a large enough hole, you can loop both the keyboard and mouse cords through. If the hole isn’t large enough, you may need to increase your budget by ~$0.25 for each PC.  🙂

[Photo: keyboard and mouse cords looped through the washer]

As you can see in this up-close shot, the ends of the USB cables can’t be pulled through the washer.

[Photo: keyboard and mouse cords looped through the washer, up close]

Many of our computers are already attached to desks as a theft deterrent using a cable lock. All we had to do was disconnect the lock from the back of the computer and pull it through the loop created on the cables.

[Photo: security lock pulled through the cable loop in the keyboard and mouse cords]

Obviously this isn’t completely foolproof, but it should be enough of a deterrent to keep the casual keyboard/mouse thief from walking away with your equipment.

Encourage Users to Submit a Ticket Instead of Emailing You Directly With a MailTip

How many times has this happened to you? You go on vacation, to a conference, you’re inundated with email, or for any of a hundred other reasons you don’t see a support request from an end user come in. Fast forward a few days or weeks, and the end user is concerned that their issue hasn’t been resolved. [And we all know that “concerned” could be anything from genuine concern for your well-being (“You always respond so quickly!”) to concern that your job performance should be discussed at the highest levels of your organization for not responding to them within 5 minutes.] So what’s the problem? The end user emailed you directly instead of submitting a support request through a ticketing system… a ticketing system that, most times, alerts a team of people about the problem so that their issue can be handled when you’re out of pocket.

We all know what happens… end users find a favorite “computer guy,” or you’re a one-man shop, and support requests that should go through the ticket system start coming directly to you. Short of outright refusing direct support requests, it can be difficult to get some people to submit tickets.

Use an Exchange MailTip!

One creative way I’ve seen companies handle this is by setting an Exchange MailTip for certain IT Pros.  Here’s how to do it in Office 365:

Go to the Exchange Admin Center at https://outlook.office365.com/ecp and click on Mailboxes.

Highlight your account (or any other IT Pro) and click the Edit button.

Click on MailTips and enter the message you want to be displayed. When you’re done, click the “save” button.

There’s a slight lag between when you set a MailTip and when it shows up for end users. Once the MailTip starts showing up, end users will see your warning that they should submit a ticket instead of contacting someone directly.

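If you’d rather skip the GUI (or need to set MailTips for a whole team), the same setting can be made with Set-Mailbox from an Exchange Online PowerShell session. The mailbox address, URL, and wording below are just examples:

    # Example only: swap in the real mailbox and your own wording
    Set-Mailbox -Identity 'kyle@contoso.com' `
        -MailTip 'Need IT help? Please submit a ticket at https://helpdesk.contoso.com instead of emailing support staff directly.'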

IE 11 Enterprise Mode Not Working?

A few weeks back, I wrote about the Group Policy changes in the Windows 8.1 Update. One of the big changes in the Update was the addition of Enterprise Mode for Internet Explorer 11. Enterprise Mode allows web sites (either specified by the end user or via Group Policy) to be rendered so that the browser appears to the site to be Internet Explorer 8. There are also some additional ActiveX security tweaks that happen in Enterprise Mode so that [hopefully] organizations can get away from being tied to older versions of IE.

In my testing of IE 11, I came across an application that many of my customers use on a daily basis that had some compatibility issues.  Specifically, a JavaScript pop-up that was supposed to appear when clicking on certain links wouldn’t show up.  All I was getting was a spinning “Please Wait” icon.

I had that “Aha!” moment and put the site into Enterprise Mode and…. buzzer.  Nope, same problem.  What gives?  This was supposed to fix this problem, right?

The Fix!

After banging my head against the desk a few times, it occurred to me that this particular web application has about 10 different URLs behind it. You go to the published URL for the application that looks something like http://application.trekker.net, get kicked to https://app.auth.trekker.net, then get kicked to a central login service page (Shibboleth, ADFS, etc.). After logging in, you’re kicked to https://prod.app.authd.trekker.net:1234. [URLs have been sanitized and replaced with trekker.net to protect the innocent!]

After looking at the source of the page (right click > View source), there were another two (!) URLs in the page I’d never seen before: https://files.app.trekker.net and https://scripts.app.trekker.net. Another “Aha!” moment!

I added both of these sites to my XML file (here are instructions on how to set that up) and, voila! The app works! It appears that Enterprise Mode was taking my list literally and wasn’t including either of these URLs since they were different from the main web application. Lesson learned: if you’re using Enterprise Mode, make sure any other URLs that are being called by the app get added to the Enterprise Mode IE website list so that everything runs in Enterprise Mode.
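
For reference, the relevant part of my site list ended up looking roughly like this (v1 schema; the hostnames are the sanitized trekker.net examples from above, and the version number just needs to be bumped each time you change the list):

    <rules version="2">
      <emie>
        <domain>application.trekker.net</domain>
        <domain>app.auth.trekker.net</domain>
        <domain>prod.app.authd.trekker.net</domain>
        <domain>files.app.trekker.net</domain>
        <domain>scripts.app.trekker.net</domain>
      </emie>
    </rules>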

Upgrade the Windows Server 2012 R2 Edition from Standard to Datacenter

Technically, there are no differences between Windows Server 2012 R2 Standard and Datacenter other than licensing. I ran into an issue the other day where a 3rd-party package performed an edition check and refused to install on Standard. I contacted their support and they basically told me to reload the box. (Thanks, guys!) After a little research, I was able to figure out that changing the edition from Standard to Datacenter is actually pretty simple and only requires a reboot.

In addition to looking in System, we can also run the DISM tool to show the current edition of Server 2012 R2 that we’re running:

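    Dism /online /Get-CurrentEdition
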
We’ll need to find out if the install is capable of being upgraded to a higher edition. To do that, run:

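    Dism /online /Get-TargetEditions
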
It looks like we’re eligible to upgrade!  Next, we’ll need to change the edition, accept the EULA, and provide a product key.  If you’re using Volume License (VL) media, you’ll need to use the Datacenter setup key that is provided by Microsoft.  If you’re using non-VL media, your mileage may vary.

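The command looks something like this (swap the X’s for the Datacenter setup key):

    Dism /online /Set-Edition:ServerDatacenter /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula
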
Now we reboot and run the edition check again:

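    Dism /online /Get-CurrentEdition
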
We’re done!  After changing the edition, you’ll need to reactivate Windows Server with your KMS if you’re using a VL copy.

Can I go from Datacenter to Standard?

Unfortunately, no.  Using DISM to change the edition from Datacenter to Standard isn’t supported.  Here’s what happens if you try:

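    Dism /online /Get-TargetEditions
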
Checking the eligible upgrade editions will tell you that “The current edition cannot be upgraded to any target editions.”

Honestly, this is a big shortcoming from a licensing perspective. Sure, if your entire environment is virtualized, this isn’t an issue for you since all the VMs on top of your hypervisors are fully licensed by having the Datacenter edition on the host(s). But if you are still (for whatever reason) installing physical servers running non-virtualized workloads, paying for Datacenter licenses when Standard would do can be pricey.

I’ve seen several posts on forums and blogs that say you can change a Registry setting to go back to Standard.  I’m going to go out on a limb and say that probably isn’t going to be supported.

One other word of warning: I performed the edition change with DISM on a recently deployed OS. I haven’t tried (and probably won’t try) doing this with a server/VM that’s been in use for any amount of time. If you’re in that boat, definitely make sure you have a full backup of the system before you start making changes.

What kind of reference image should I use and what should be in it?

I had a great question come in last week and the writer agreed to let me respond as an article:

Kyle,

Last July, I started my first real systems administrator job at a school system here in the Midwest. One of the things I inherited was Ghost for imaging computers in classrooms, computer labs and so on. Now that Symantec is killing off Ghost, I’ve been tasked with figuring out how we’re going to re-image computers this summer. We’ve settled on using SCCM for our OS deployments, but I had a question about reference images after reading your series on creating base images in MDT. What do you typically include in your reference images? Our Ghost images include literally everything from Office to Java to other random education apps… just about all of them.  We even found an image with some old gradebook software in it. The gradebook software went fully web-based years ago (before I even got here) and the software just never got taken out! The problem is that it feels like we’re constantly updating the reference images (all 40-something of them!!!!), people have apps they don’t need, many of the apps like Java and Flash have to be updated immediately after a re-image, there are remnants of old software, and so on.

Any help or advice you can provide would be really helpful!

Jeremy S.

Jeremy,

First off, thanks for letting me answer your question in the form of a blog post!  And, thank you for responding to my followup questions so crazy fast.  Here we go:

I too came from the school of Ghost imaging; so, I totally understand where you’re coming from. A lot of people that use sector-based imaging solutions build these massive monolithic catch-all images and tend to update them for years on end before re-creating them from scratch (or they just keep using the same base forever!).  And for good reason… you tended to have to have a whole lot of them to cover all of your hardware types and use cases.  The good news is that when Vista came out, the whole OS deployment process got an overhaul and it made OS deployment far more customizable and predictable without the need to create these massive reference images (unless your particular environment requires it).

MDT and SCCM have really changed the game for OS deployments.  You don’t need to create a monolithic reference image that includes every single piece of software someone needs if you don’t want to.  You can install as much or as little as you want and then use MDT or SCCM to customize that deployment at install time.  So before we can really get into a discussion about the what of your reference image, you’ll need to decide what kind of reference image you’re going to create first.

There are three schools of thought when it comes to creating reference images:  Thin Images, Thick Images, and Hybrid Images that fall somewhere between the two.

[Short Answer] Which do I recommend?  Honestly, it depends on your environment and what you’re trying to accomplish.  If you just need to test something like a script, where you don’t need any applications installed or the system fully patched, a Thin Image is probably all you need.  If you’re imaging a computer lab full of computers that all need to be identical, then you probably need a Thick Image. Most people I know (including me) are using a Hybrid Image.  I use a Hybrid Image because the applications used by my end users vary and I like to be able to customize the deployment to their specific needs.

[Long Answer] —

Thin Images

For me, a Thin Image is OS only.  I’ve seen some people use just the RTM media to deploy Windows 7/8 and then lay down all their software, but there’s one huge problem with doing it that way…  If you use the RTM bits, you now have to install all of the Windows Updates too.  Ouch.  That can be really time consuming.  Personally, I like to keep a Thin Image available that is just Windows with our currently supported version of Internet Explorer and the latest Windows Updates, and no other 3rd-party software.  Even if I don’t update it every single month, I’m not waiting while over a year’s worth of updates install on the computer.  There’s also the added benefit of speeding up the process of building a Thick/Hybrid Image if I base it off my fully patched Windows 7/8 Thin Image.

PROS:

  • Smaller image since you’re just dealing with the base OS (and possibly Windows Updates).
  • Very customizable since there isn’t any software installed.
  • Speedy install of a base OS (assuming you’re including Windows Updates).

CONS:

  • Requires installing months’ (if not years’) worth of Windows Updates if you don’t make a reference image that has the latest updates.
  • The full deployment process of laying down the OS and installing all your software on a computer may be slower since you’ll have to potentially install Office, Adobe products, plugins, etc.
  • Potentially eats up additional CPU cycles and disk IOPS in a virtualized environment while software installs.

WHEN TO USE

  • Any time you just need Windows on a system… whether that be testing or systems that don’t require additional software.
  • When you need to customize the install of each and every computer that will be deployed.

WHAT TO INCLUDE

  • Windows Base OS
  • Latest version of IE your applications support
  • Latest Windows Updates
  • [Consider] Visual C++ Runtimes

Thick Images

A Thick Image is everything and the kitchen sink (ok, well maybe not the kitchen sink…):  Windows, Office, all the latest Windows/Office Updates, plugins, custom apps, and everything else you can think to install.

PROS:

  • PC is ready faster since all necessary software is installed as part of the image.
  • Works well as a “cookie cutter” deployment to large numbers of identical systems like in computer labs or corporate environments where every PC should be identical.
  • Easier to hand to junior level staff or temps since everything is already installed.
  • Less chance for a piece of software to be missed at deploy time since everything intended for the system is already in the image.

CONS:

  • May require more frequent updates since you’ll need to update it monthly for Patch Tuesday updates from Microsoft and third-party products.
  • May require patching after image is deployed since third-party products like Adobe Reader, Adobe Flash, Oracle Java, etc. may have been updated since the image was built.
  • May require building multiple reference images since software needs may differ between different departments, computer labs, etc.
  • An error like a misconfiguration or a piece of software that wasn’t installed in a Thick Image means the error goes out to more computers.
  • Users end up with software that they potentially don’t need.  Unneeded software will still need to be patched/updated even if the user doesn’t use it.

WHEN TO USE

  • Computer labs where a room full of systems will all be identical.
  • Server deployments where all the systems will be identical.
  • Large scale deployments where all the systems will be identical (see a trend here?).
  • Time sensitive deployment when you need to deploy the OS and all software as quickly as possible to a system.

WHAT TO INCLUDE

  • Windows Base OS & EVERYTHING else
  • Latest version of IE your applications support
  • Latest Windows Updates
  • Visual C++ Runtimes
  • Office (and latest updates)
  • Browser Plugins (Flash, Java, etc.)
  • Adobe Reader/Acrobat
  • Antivirus software
  • Management agents
  • VPN Client

Hybrid Images

A Hybrid Image is somewhere between a Thin and a Thick Image.  It would typically include the applications everyone gets that [hopefully] aren’t updated constantly (Office, Visual C++ runtimes, various agents), plus OS customizations like adding wallpapers.

PROS:

  • Smaller images than Thick Images since unnecessary software isn’t installed.
  • More customizable as unnecessary applications aren’t installed and the image can be customized to the needs of the user of the system at deploy time.
  • Faster deployment since larger common packages like Office and Windows/Office Updates are already installed.

CONS:

  • Still may require updates after deployment if the image isn’t updated regularly.
  • Slightly slower deployment if large packages are left out of image and need to be installed as part of the OS deployment process.

WHEN TO USE

  • You have applications that all users get (like Office for example), but you still want the ability to customize the experience for each department or user.
  • You don’t want to constantly update images to update things like Java and Flash.

WHAT TO INCLUDE

  • Windows Base OS
  • Latest version of IE your applications support
  • Latest Windows Updates
  • Office (and latest updates)
  • Visual C++ Runtimes
  • Management agents
  • Antivirus Software
  • Install everything else at OS deployment time (see the sketch below)
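
To give you a taste of that last bullet, here’s a minimal sketch of an MDT CustomSettings.ini rule that layers an application on at deploy time; the GUID is a placeholder for an application you’ve defined in your own deployment share:

    [Settings]
    Priority=Default

    [Default]
    OSInstall=Y
    ; Install an application from the deployment share at deploy time.
    ; Replace the GUID with the application's actual GUID from the
    ; Deployment Workbench (this one is a placeholder).
    Applications001={12345678-1234-1234-1234-123456789012}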