The Case of Invoke-RestMethod vs. The Bad JSON Feed

I work on a daily basis with some incredibly smart and talented people who build some pretty cool systems that integrate with the services my team runs for our campus.  One of those services is an API run by our Identity Management team that gives us an interface to work with all of the identity data for campus.  This REST API allows us not only to query information about accounts in our environment, but also to feed data back in for things like provisioning email aliases or just notifying them that we’ve given someone a mailbox.  Like I said, pretty cool stuff!

We use the API both for interactively querying account information and for working with accounts in automation.  Several members of my team started noticing that some of our PowerShell functions periodically weren’t returning data.  The bells and whistles really started going off when several pieces of PowerShell automation began failing because null data was coming back when we pulled data on an individual account.  So, let’s look at one piece of code we’re using:
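A rough sketch of the shape of the function looks something like this (the function name, parameter names, and API URL are made-up placeholders rather than our real campus API):

    # A rough sketch of the kind of function we use (names and URL are placeholders).
    function Get-CampusPerson {
        param (
            [Parameter(Mandatory = $true)]
            [string]$NetID,

            [string]$RequestedAttributes = 'all'
        )

        # Build the request against the identity API.
        $uri = "https://api.trekker.net/identity/person/$($NetID)?requested_attributes=$RequestedAttributes"

        # Invoke-RestMethod converts the JSON coming back into PowerShell objects for us.
        $response = Invoke-RestMethod -Uri $uri -Method Get

        # We normally only want the result data, not the rest of the response.
        return $response.results
    }

Calling it on my own account looks something like:

    Get-CampusPerson -NetID mynetid -RequestedAttributes all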

This PowerShell function is a fairly basic example of how to work with a REST API using the Invoke-RestMethod cmdlet.  This particular function lets me run a query on myself and get back everything that is stored about my account when Invoke-RestMethod hits the API.  In this case, I’m getting back null data.  That’s not right…

Invoke-RestMethod is a pretty cool cmdlet for working with a REST API.  It’s smart enough to recognize that the data coming back from my API is in JSON format and to turn it into objects that I can work with in PowerShell.  The small gotcha is that Invoke-RestMethod really depends on this being valid JSON.  See where I’m going here???

So, it looks like we’re getting back null data, but that shouldn’t be happening.  What do we do next?  The first thing I checked was the data that we’re actually pulling.  I have the ability to pull back “all” in the requested_attributes.  Starting there, if I decrease the scope of data I’m requesting and limit it to something like “primaryemailaddress”, all of a sudden everything is happy.  Strange…  Pulling “all” again… nope… null…

The API is returning a ton of data, but we typically only care about the result data and not all the other messages, logs, and other general fluff that comes back from the API… until now.  In my function, you’ll see that we have a line where we’re pulling just our results back out of the API response.  In this case, I want to see all the messages coming back from the API so I can tell whether we’re receiving any errors or other useful information.  To do that, we’re going to change that line so we get back everything.  That will give us the full response from the REST API so we can see what we’re getting back and play around with all the data in the response.
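Sticking with the sketch above, that amounts to swapping out the last line of the function:

    # Before: only hand back the result data.
    return $response.results

    # After: hand back everything the API sends, messages and all.
    return $response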

After re-loading the function, I’m going to re-run my command and see what I get back. (Sparing you the ugly output, there was nothing useful in the messages that were returned in the huge glob of JSON I got back.) So, let’s dump everything into a variable so we can play with it a bit:
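Using the placeholder function from the sketch above, that’s just:

    # Capture the full response; at this point it comes back as one big glob of JSON text.
    $data = Get-CampusPerson -NetID mynetid -RequestedAttributes all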

Next, let’s try piping that through ConvertFrom-Json and then we can parse through the data to see what’s going on:
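With the variable from above, that looks like:

    # Run the same conversion Invoke-RestMethod normally does for us.
    $data | ConvertFrom-Json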

Yeah buddy!  An error!  Now this is something we can actually work with!  “Cannot convert the JSON string because a dictionary that was converted from the string contains the duplicated keys ‘persondirectoryid’ and ‘PersonDirectoryId’.”  BINGO!  It seems our REST API JSON feed is giving us duplicate data in the form of one attribute in lower case and another in camel case.

As a short term fix, we can use “.Replace” to replace the bad data that we’re getting so things work properly:
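Building on the variable from above, a quick-and-dirty version looks like this (the replacement key name is arbitrary).  Because the .NET String.Replace() method is case sensitive, only the all-lower-case key gets renamed and the camel-case one is left alone:

    # Rename the lower-case duplicate key before converting; 'PersonDirectoryId' is untouched.
    $data.Replace('"persondirectoryid"', '"persondirectoryid_dupe"') | ConvertFrom-Json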

In this case, we notified the team that owns the API application and they were able to correct the issue with the duplicate attribute.  Though, this does bring up an interesting shortcoming of Invoke-RestMethod:  in this case, it had no tolerance whatsoever for the invalid data.  Both entries were the same except that one was all lower case and one was camel case.  I guess in a perfect world, it would be nice if there were a -CaseSensitive parameter to allow different cases of entries or some other way I could -IgnoreErrors or -DropErrors.  But, bad data is bad data and fixing the data fixed my problem in this case.

Stop Mouse and Keyboard Theft with a Cable Lock and Washer

I recently had to deal with the disappearance of several keyboards and mice from computers that are set up in a semi-public hotelling area.  I received a support request from someone who noticed that some of the computers were missing either a keyboard, a mouse, or both.  We had no reason to believe they were stolen; they were most likely taken by a well-meaning employee assisting a co-worker or fixing their own issue.  We keep a stockpile of extra keyboards and mice, so replacing the missing ones was trivial.  However, we still have to account for the inventory and really need people to contact us when their equipment breaks.

The solution?  A cable lock and a washer that cost less than $0.25.

inexpensive washer

The cable for the mouse or keyboard is looped through the washer.

mouse cord looped through the washer

If you find a washer with a large enough hole, you can loop both the keyboard and mouse cables through.  If the hole isn’t large enough, you may need to increase your budget by ~$0.25 for each PC.  🙂

keyboard and mouse cord looped through the washer

As you can see in this up-close shot, the ends of the USB cables can’t be pulled through the washer.

keyboard and mouse cord looped through washer up close

Many of our computers are already attached to desks as a theft deterrent using a cable lock. All we had to do was disconnect the lock from the back of the computer and pull it through the loop created on the cables.

security lock pulled through cable loop in keyboard and mouse

Obviously this isn’t completely foolproof, but it should be enough of a deterrent to keep the casual keyboard/mouse thief from walking away with your equipment.

Encourage Users to Submit a Ticket Instead of Emailing You Directly With a MailTip

How many times has this happened to you? You go on vacation or to a conference, you’re inundated with email, or for any of a hundred other reasons you don’t see a support request from an end user come in. Fast forward a few days or weeks and the end user is concerned that their issue hasn’t been resolved. [And we all know that “concerned” could be anything from genuine concern for your well-being (“You always respond so quickly!”) to concern that your job performance should be discussed at the highest levels of your organization for not responding to them within 5 minutes.] So what’s the problem? The end user emailed you directly instead of submitting a support request through a ticketing system… a ticketing system that, most times, alerts a team of people about the problem so that their issue can be handled when you’re out of pocket.

We all know what happens… end users find a favorite “computer guy,” or you’re a one-man shop, and support requests that should go through the ticket system start coming directly to you.  Short of outright refusing direct support requests, it can be difficult to get some people to submit tickets.

Use an Exchange MailTip!

One creative way I’ve seen companies handle this is by setting an Exchange MailTip for certain IT Pros.  Here’s how to do it in Office 365:

Go to the Exchange Admin Center at https://outlook.office365.com/ecp and click on Mailboxes.

Highlight your account (or any other IT Pro) and click the Edit button.

Click on MailTips and enter the message you want to be displayed.  When you’re done, click the “save” button.

There’s a slight lag between when you set a MailTip and when it shows up for end users. When the MailTip starts showing up, end users should get your warning that they should submit a ticket instead of contacting someone directly.

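If you’d rather script it (or stamp the same MailTip on a whole team of IT Pros), the Set-Mailbox cmdlet supports a -MailTip parameter.  A quick sketch, assuming you’re already connected to Exchange Online PowerShell and substituting your own mailbox and wording:

    # Set a MailTip on a single mailbox (replace the identity and the text with your own).
    Set-Mailbox -Identity "itpro@contoso.com" -MailTip "Please submit a ticket through the help desk instead of emailing me directly so the whole team can see your request."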

IE 11 Enterprise Mode Not Working?

A few weeks back, I wrote about the Group Policy changes in the Windows 8.1 Update.  One of the big changes in the Update was the addition of Enterprise Mode for Internet Explorer 11.  Enterprise Mode allows web sites (either specified by the end user or via Group Policy) to be processed in such a way that the browser appears to the site to be Internet Explorer 8.  There are also some additional ActiveX security tweaks that happen in Enterprise Mode so that [hopefully] organizations can get away from being tied to older versions of IE.

In my testing of IE 11, I came across an application that many of my customers use on a daily basis that had some compatibility issues.  Specifically, a JavaScript pop-up that was supposed to appear when clicking on certain links wouldn’t show up.  All I was getting was a spinning “Please Wait” icon.

I had that “Aha!” moment and put the site into Enterprise Mode and…. buzzer.  Nope, same problem.  What gives?  This was supposed to fix this problem, right?

The Fix!

After banging my head against the desk a few times, it occurred to me that this particular web application has about 10 different URLs behind it.  You go to the published URL for the application that looks something like http://application.trekker.net, get kicked to https://app.auth.trekker.net, then get kicked to a central login service page (Shibboleth, ADFS, etc.).  After logging in, you’re kicked to https://prod.app.authd.trekker.net:1234.  [URLs have been sanitized and replaced with trekker.net to protect the innocent!]

After looking at the source of the page (right click > View source), there were another two (!) URLs in the page I’d never seen before:  https://files.app.trekker.net and https://scripts.app.trekker.net.  Another “Aha!” moment!

I added both of these sites to my XML file (here are instructions on how to set that up) and, voila!  The app works!  It appears that Enterprise Mode was taking my list literally and wasn’t including either of these URLs since they were different from the main web application.  Lesson learned: if you’re using Enterprise Mode, make sure any other URLs that are being called by the app get added to the Enterprise Mode IE website list to ensure that everything is running in Enterprise Mode.
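For reference, here’s roughly what the relevant piece of an Enterprise Mode site list with those entries might look like (this assumes the schema v1 XML format for the IE 11 site list, and the host names are the same sanitized trekker.net examples from above):

    <rules version="1">
      <emie>
        <domain exclude="false">application.trekker.net</domain>
        <domain exclude="false">app.auth.trekker.net</domain>
        <domain exclude="false">prod.app.authd.trekker.net</domain>
        <domain exclude="false">files.app.trekker.net</domain>
        <domain exclude="false">scripts.app.trekker.net</domain>
      </emie>
    </rules>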

Customize Disk Partitions in MDT

For most systems, I typically recommend using the primary disk’s full capacity as one partition, C:\, instead of creating multiple partitions/drive letters for end users. As an IT Pro, a single partition makes it easier for me to find someone’s “stuff” if they store their data in a standard location like their default profile location, C:\Users\%username%\.  If all of your documents, pictures, shortcuts, Favorites, settings, etc. live in the same place, I don’t have to go hunting for files when it’s time to migrate someone to a new machine.  (Or, better yet, I can automate it!)  For the end user, it’s just easier:  Most people are used to just saving files to the default locations on their home computers.  Any time you can keep the corporate computing experience similar to what people experience at home, it saves you time and money.

However, there are some times when it can be advantageous to create more than one partition when deploying an operating system (OS) to a computer.  I know quite a few people who actually prefer that their end users store their data on D:\ so that it can be fully separated from OS and applications on C:\.  In the event of an OS crash or malware infection that isn’t recoverable, C:\ can be wiped out and all of the user’s data on D:\ is still there.  Personally, I’m not a huge fan of that because it tends to miss application settings, the Registry hive, and other important things a user may miss later.  But, to each his own I guess.

I am, however, a fan of separating data from the OS and software on servers.  I’m also a fan of keeping my virtual machines completely off of C:\. (Those things have a bad habit of filling up disks, don’t they!?!)

How MDT Partitions Disks

The disk partitioning process is a task that is part of each OS deployment Task Sequence.  By default, MDT creates a C:\ partition using the full first disk and names it OSDisk.  If this default doesn’t work for your environment, it is pretty easy to change.

Change the Default Partition

In the MDT Deployment Workbench, go to Deployment Shares > $YourDeploymentShare > Task Sequences.  Find the Task Sequence you want to edit and right-click on it.  Click on Properties.


In the Task Sequence Properties, go to Preinstall > New Computer only > Format and Partition Disk.

In the Volume section, you should see “OSDisk (Primary).”  Click on OSDisk (Primary) and then click the Edit button.  (The Edit button is the middle button that looks like a hand pointed at a document with a bulleted list.)

In the Partition Properties, you can change the Partition name, the size, file system, etc.

For our example, we’ll change the partition size to “Use specific size” and set it to 80 GB.  Once we’re done, click OK.

I don’t want to waste the remaining disk space, so we’ll add a second partition that uses the remaining space.  Back in the “Format and Partition Disk” task, click on the New button.  (The New button is the left-most button that looks like a yellow star.)

In the Partition Properties, fill in the Partition name with “Data Disk” and select “Use a percentage of remaining free space.”  Set the Size (%) to 100.  Ensure the File system is set to NTFS and click OK.

When you’re done, you should have something that looks like this:

If we perform a test deployment, we should end up with an 80 GB C: drive and a second partition with the remaining space.

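If you want to sanity-check the layout on the deployed machine without opening Disk Management, a quick PowerShell check works too (Get-Volume is available on Windows 8/Server 2012 and later):

    # Quick check of the partition layout on the deployed machine.
    Get-Volume | Sort-Object DriveLetter | Format-Table DriveLetter, FileSystemLabel, Size, SizeRemaining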

Targeting OS Platform/Bitness with Group Policy Preferences

I’ve had several people ask about targeting the bit level/bitness/platform of Windows with Group Policy Preferences using Item Level Targeting after having problems getting it to work properly. Before we jump in, I should probably define bitness since I only first heard the term a few months back (Sorry… no… I can’t claim credit for making it up…). There’s an MSDN glossary entry that has a very geeky-sounding definition: “The distinction between 32-bit and 64-bit address spaces, and the potential differences in instantiation of components that this entails.” The less geeky definition, and the one that’s easier to explain to your co-workers and/or boss, is that we want to determine whether the operating system is 32-bit (x86) or 64-bit (x64) so we can selectively apply a Group Policy Preference setting.

Upgrade the Windows Server 2012 R2 Edition from Standard to Datacenter

Technically, there are no differences between Windows Server 2012 R2 Standard and Datacenter other than licensing. I ran into an issue the other day where a 3rd-party package performed an edition check and refused to install on Standard. I contacted their support and they basically told me to reload the box. (Thanks, guys!) After a little research, I was able to figure out that changing the edition from Standard to Datacenter is actually pretty simple and only requires a reboot.

In addition to looking in System, we can also run the DISM tool to show the current edition of Server 2012 R2 that we’re running:
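From an elevated PowerShell (or command) prompt, that check is a one-liner:

    # Show the edition that's currently installed.
    DISM /Online /Get-CurrentEdition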

We’ll need to find out if the install is capable of being upgraded to a higher edition.  To do that, run:
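That’s another DISM one-liner:

    # List the editions this install can be upgraded to.
    DISM /Online /Get-TargetEditions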

It looks like we’re eligible to upgrade!  Next, we’ll need to change the edition, accept the EULA, and provide a product key.  If you’re using Volume License (VL) media, you’ll need to use the Datacenter setup key that is provided by Microsoft.  If you’re using non-VL media, your mileage may vary.
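The command looks like this; the product key below is just a placeholder, so substitute the actual Datacenter setup key from Microsoft:

    # Change the edition in place (replace the placeholder with the real Datacenter setup key).
    DISM /Online /Set-Edition:ServerDatacenter /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula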

Now we reboot and run the edition check again:
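Once the server is back up, the same check from earlier should now report the Datacenter edition:

    # Confirm the new edition after the reboot.
    DISM /Online /Get-CurrentEdition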

We’re done!  After changing the edition, you’ll need to reactivate Windows Server with your KMS if you’re using a VL copy.
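If your KMS host isn’t discovered automatically, slmgr can point the server at it and kick off activation (the host name below is a placeholder for your own KMS server):

    # Point at your KMS host if needed, then activate.
    slmgr /skms kms.yourdomain.com
    slmgr /ato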

Can I go from Datacenter to Standard?

Unfortunately, no.  Using DISM to change the edition from Datacenter to Standard isn’t supported.  Here’s what happens if you try:
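For reference, this is the sort of attempt that gets rejected (placeholder key again):

    # Not supported; DISM won't accept ServerStandard as a target edition from Datacenter.
    DISM /Online /Set-Edition:ServerStandard /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula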

Checking the eligible upgrade editions will tell you that “The current edition cannot be upgraded to any target editions.”

Honestly, this is a big shortcoming from a licensing perspective.  Sure, if your entire environment is virtualized, this isn’t an issue for you since all the VMs on top of your hypervisors are fully licensed by having the Datacenter edition on the host(s).  But if you still (for whatever reason) are installing physical servers that are running non-virtualized workloads, paying for Datacenter licenses over Standard licenses when you don’t need Datacenter can be pricey.

I’ve seen several posts on forums and blogs that say you can change a Registry setting to go back to Standard.  I’m going to go out on a limb and say that probably isn’t going to be supported.

One other word of warning:  I performed the edition change with DISM on a recently deployed OS.  I haven’t (and probably won’t) try doing this with a server/VM that’s been in use for any amount of time.  If you’re in that boat, definitely make sure you have a full backup of the system before you start making changes.