The UltraDNS API and Powershell Pt. 2

In a previous post, I outlined using Powershell to work with the UltraDNS API. Here is an update with a screenshot of the GUI. As part of this project, I implemented a web front end that allows anyone on my team to trigger a DNS failover by domain, client, or datacenter. The front end is used for many different functions and is built on PHP, which calls Powershell scripts that do most of the heavy lifting.

It also includes several automated reports that gather a variety of data: SAN statistics, patching reports, and more. This makes it easy to abstract running Powershell scripts away from team members and other staff. They can access the web interface, kick off actions, and read reports. Future enhancements are pretty much unlimited.

Why roll our own solution? There is a lack of products that do anything like this, and this approach allows us to customize things to our own needs.


The UltraDNS API and Powershell

At work we're currently using Neustar's UltraDNS service to host 200+ DNS records, and I started a project to automate changing IP addresses to fail over to DR sites. There is a well-documented API for this, with great examples and solutions built mostly in Python, Perl, and Java. UltraDNS has published examples for all three on their Github page, and there is even a Perl module available on CPAN.

Since Powershell is my current lingua franca, I put together this rough writeup with some test code. I'll eventually shape things into a Powershell module that handles a few different functions, then tie it into a web interface to fully automate switching to DR IP addresses. The rest of my team is, of course, in a nerdgasmic state over being able to press a few buttons to accomplish this.

First, have a look at the API documentation, which gives you a quick overview. UltraDNS customers also have access to more in-depth documentation, including a full user guide. Some familiarity with REST web services, Powershell's built-in commands for them, and reading API documentation is helpful.

The API uses tokens to authenticate. To use the service, you'll need to first build a call to get an access token, which then has to be passed to subsequent calls. The code below will return the value of the accessToken property. It goes without saying the credentials should be hidden in production.
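(A quick sketch of that call: the test endpoint URI and the /v1/authorization/token resource below are written from my reading of the REST API guide, so double check them against the docs for your account.)

# Credentials - hard coded here only for testing
$user = 'myUltraUser'
$pass = 'SuperSecretPassword'

# Assumed UltraDNS test environment base URI; production uses a different host
$baseUri = 'https://test-restapi.ultradns.com'

# Request a token and keep only the accessToken property
$body = "grant_type=password&username=$user&password=$pass"
$token = Invoke-RestMethod -Method Post -Uri "$baseUri/v1/authorization/token" -ContentType 'application/x-www-form-urlencoded' -Body $body | Select-Object -ExpandProperty accessToken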

In this example I'm using the test URI. This call to /token also returns some other output; remove the pipe to Select-Object to see it.

Now I can get my access token any time with:
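(Sketched as a quick helper function; the Get-UltraToken name is just my placeholder, and it reuses the $user, $pass, and $baseUri values from above.)

function Get-UltraToken {
    # Ask the token endpoint for a fresh accessToken
    $body = "grant_type=password&username=$user&password=$pass"
    $resp = Invoke-RestMethod -Method Post -Uri "$baseUri/v1/authorization/token" -ContentType 'application/x-www-form-urlencoded' -Body $body
    return $resp.accessToken
}

$token = Get-UltraToken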

This code gets the A record(s) for a given domain:
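(Again a sketch: the rrsets resource path is how I read the REST API guide, and the zone and owner names are placeholders.)

$zone  = 'example.com.'
$owner = 'www.example.com.'

# Subsequent calls pass the token in an Authorization header
$headers = @{ Authorization = "Bearer $token" }

# Ask for just the A records belonging to this owner name
$aRecords = Invoke-RestMethod -Method Get -Uri "$baseUri/v1/zones/$zone/rrsets/A/$owner" -Headers $headers

# Inspect the result to see the exact property names; rrSets should hold ownerName, ttl, and rdata
$aRecords.rrSets | Select-Object ownerName, ttl, rdata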

Inward Turn

The Emerging Threat of Cyberwar

In the wake of the attack on Sony Pictures in the U.S., many are trying to validate the government's claim that it was a hack perpetrated by North Korea. If this is true, it is the first highly public incident involving state-sponsored cyber espionage, and there will likely be more in the future. The incident has already led to sanctions imposed on North Korea, the first of their kind based on such an attack. Clearly the consequences of open cyberwar are ominous and far-reaching for the entire world.

It's been rumored that the United States has been in a prolonged cyber cold war with China for the better part of a decade, and attack data shows a clear pattern involving the two superpowers. Over the past few years, there have been numerous thefts of classified information and acts of espionage attributed to hackers working for or in collusion with the U.S., China, and their allies. This includes the break-in at Lawrence Livermore National Laboratory, the hacking of RSA SecurID in 2011, the Stuxnet virus that destroyed Iranian centrifuges, and the outing of the NSA's wiretapping and data collection activities by Edward Snowden. All of these are clear indications that nation-states and government agencies are actively using cyberspace for these operations.

The U.S. Congress has also recently expressed concerns over the import and use of telco, networking, and computing gear from Chinese companies such as Huawei. These companies are beholden to the Communist government, and the possibility of backdoors, doomsday bombs, and other malicious functions embedded in thousands of devices and machines around the world isn't that far-fetched. This could already be happening, leading to a day when the planet's computers and the internet are held for ransom. That's right, that Lenovo you got at a bargain price could already be a ticking logic bomb.

So what would happen if this type of activity escalated into full-scale cyberwar? We've already seen a glimpse of this when North Korea was promptly knocked off the internet for over nine hours after being publicly outed as the perpetrator. Technology and the web are now so ubiquitous that the loss of major parts of the internet or critical infrastructure could cause a catastrophic collapse we may never recover from. Financial meltdown or worse could be only a mouse click away, and the threat is only growing.

The consequences of the same thing happening to the United States could be serious problems for the entire world. This is likely why large-scale, open cyberwarfare hasn't really happened yet. It's similar to the Cold War and the concept of mutually assured destruction: as long as each side had enough to wipe the other out several times over and destroy the world, no one dared to press the button. But as nations begin moving more toward offensive operations in cyberspace, the potential for serious collateral damage to business, critical operations, and civil services is huge.

In the event nations begin large scale cyber attacks against one another, the internet itself will be the first victim, followed by everyone who uses it. Businesses that operate online would be crippled by denial of service and war-based traffic. Even more ominous is the threat to critical infrastructure such as power and financial systems. The blow to the economy could be disastrous if key organizations or internet infrastructure were taken offline, even if they weren’t the actual targets.

In the future, this threat could lead to huge changes in the network and internet landscape. The United States could begin a technological Inward Turn, a doctrine of technology isolationism. Companies, and especially government, would rely only on hardware and software from sources they trust and build highly secure private networks. Greater security defenses would be created, eventually leading to more heavily guarded perimeters in cyberspace. Every country might build its own Great Firewall, capable of protecting major critical infrastructure and economic resources. Not all traffic would be treated equally, and none of it would be implicitly trusted.

In the face of all-out cyberwar, businesses could begin their own Inward Turn, relying only on trusted sources and hardening their critical systems and access to the outside world. The days of cloud-anything would be numbered, and the highly secured, walled-off private cloud would rule the landscape. The silver lining to this dystopian vision would be the rise of robust solutions and technologies that lead to a more secure network and internet. Ironically, war and the military have historically driven these kinds of revolutionary changes.

It may be worthwhile to think about what would be needed to reach this highly secure future. As unreal as it sounds, the possibility is there and the impact would be astronomical. Organizations would need ways to protect themselves from the side effects of open warfare on the net while continuing to operate safely and efficiently. And while almost no one wants such a thing to come true, laying the groundwork now might pay off in a difficult future.

To 365 Or Not To 365?

Today the word “cloud” is everywhere, used to describe sites, services, and software that are typically provided over the internet. Although the term is new, the concept definitely isn't. It's a marketing gimmick; we've heard the same thing before, just in a new package.

Email, web sites, and services people use on the web every day are accessed in the cloud, and we've been doing it for more than a decade. Modern business infrastructure is really a private cloud and always has been, and there have been companies operating on a public cloud (i.e., the internet) since the 1990s. In fact, most of the concepts that drive the cloud were originally used in the early days of computing, before the client/server model caused a technological shift.

So, the cloud isn't anything new. It's an abstract buzzword meant to be associated with all things internet. There are some advantages to using it, and some services have compelling functionality or a niche focus. The problem is the cloud isn't always what it's been hyped to be, and IT shops run into that reality more often than most people think.

Using any cloud service or app is a mixed bag of pros and cons for IT. There are some services that make more sense in the cloud than others. For example, a ticketing system is a better candidate than a log aggregator or a password manager. It pays to think carefully about what should and shouldn’t be cloud based.

Microsoft touts the benefits of cloud-based services with Office 365, bundling staples like Lync, Exchange, SharePoint, and the Office suite itself into a subscription-based, per-user model. It's clear Microsoft's strategy is a continual push toward the cloud and mobile, changing the old ways most IT pros have long been familiar with. From that perspective it's hard to see Office 365 as a good thing, and here are the biggest reasons why.

It lacks access to common administrative features that have been tools of the trade for decades. Being able to see inbound/outbound SMTP connections via Exchange protocol logs, for example, is invaluable, but you can't do it in Office 365.

It's not worth it if you have the in-house skills to run on-premise systems. There can be painful, sometimes crippling response times from support teams, even the Premier Support you pay extra for. Something a skilled admin could fix in minutes on-premise can stretch out for days or weeks before resolution, and sometimes these are minor problems that should never take long to fix.

You lose visibility into health and performance. The service dashboard is an excellent tool, but if you don't look at it, you don't know there's a problem. And for some mind-boggling reason it can't email you what Microsoft posts there.

The systems aren't dedicated. The platform is a shared environment, with all of the pitfalls that come with that.

Authentication in hybrid setups. With ADFS and/or DirSync, users can authenticate to Office 365 resources with their domain accounts, but they have to log in to Microsoft's portal first, then ADFS. This adds an additional, often confusing layer that users don't experience when accessing in-house applications.

Compatibility. Naturally, parts of the 365 stack depend on Microsoft's own browser. Although IE 11 is a huge improvement over previous versions, the truth is not every user is on Internet Explorer these days.

Licensing is now a recurring cost per user. Depending on your organization's size, this could easily become a hefty fee. Conversely, many on-premise solutions are a one-time cost.

Your data resides in remote facilities, and you have no control over the underlying systems. O365 is likely as secure as it can be, but your data is still somewhere out in the cloud. And anyone using Exchange, Lync, and SharePoint Online has lots of different data points residing on the platform.

Hybrid configs can be painful. For example, Lync hybrid support was only recently introduced, and there were several things MS engineers hadn't really tackled before when I implemented an on-premise Lync system earlier this year. There are also some troublesome bugs specific to hybrid configurations, a sign of something that was probably rushed.

Unfortunately, I can't think of one thing that really stands out in Office 365's favor either. When it works it works, but there are no truly compelling bells and whistles. The short answer to the burning question of this literary rant: enterprise IT shops with the right skill sets should stick with the tried and true that will hum along, and shouldn't buy into the hype.

Hello world!

After about a decade of the same static content here, I've decided to start over fresh. All of the old subdomains are now gone, and my blog has been moved here to serve as the main site. As time permits I'll be restoring the old posts and rebuilding the rest of the site.

The New Relic API and Powershell Pt. 2

In a previous post I wrote about using Powershell to work with the New Relic monitoring service API, which lets you query the performance metrics it gathers about your application. Once you have this data, it's easy to do whatever you want with it: store it in a database, send a formatted report in email, etc. In the first post I included snippets of script code and instructions on how to connect to the API, pass the API key, and get a summary of metric data using the built-in XML parsing capabilities in Powershell.

What if you want more than just a summary? What if you also use the server monitoring features and want data about those metrics too? You can pull just about any New Relic metric data you want, and that's the subject of this second post.

First, I retrieved all of the metric names for a server. You can also specify a specific agent for the application monitoring functionality:
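(A sketch for Powershell 2.0 using WebClient; the API key, agent ID, output path, and the v1 resource path are placeholders and assumptions on my part, so check the docs for the exact form your account expects.)

# Placeholders - substitute your own API key and the agent ID of the server
$apiKey  = 'YOUR_API_KEY'
$agentId = '67890'

# The API key rides in the x-api-key header on every request
$wc = New-Object System.Net.WebClient
$wc.Headers.Add('x-api-key', $apiKey)

# Pull every metric name the agent reports and dump the list to a text file
$url = "https://api.newrelic.com/api/v1/agents/$agentId/metrics.xml"
[xml]$metrics = $wc.DownloadString($url)
$metrics.metrics.metric | ForEach-Object { $_.name } | Out-File C:\temp\nr_metric_names.txt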

The above will return all the metric names and write them to a text file. Once you know the names, it's time to start building the code to get at the data. In this example we'll use the server monitoring metric System/CPU/System/percent and ask for an XML-formatted response:
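(Reusing the WebClient object from the previous snippet; the metrics[] filter parameter follows the v1 examples and is an assumption on my part.)

# Query the metric-names resource for just this metric to see which fields it exposes
$url = "https://api.newrelic.com/api/v1/agents/$agentId/metrics.xml?metrics[]=System/CPU/System/percent"
$wc.DownloadString($url)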


The above will return the following XML data:

<metric name="System/CPU/System/percent">
  <fields type="array">
    <field name="average_exclusive_time"/>
    <field name="average_response_time"/>
    <field name="average_value"/>
    <field name="call_count"/>
    <field name="calls_per_minute"/>
    <field name="max_response_time"/>
    <field name="min_response_time"/>
  </fields>
</metric>

Armed with this info, I was then able to build a call that gets the average CPU value for the server over the last 24 hours. I found that the API expects a very specific timestamp within the metric URL, so I used Get-Date to format it the way I wanted and set the start time to 24 hours ago. There may be a more elegant way to insert the date, but this is what I came up with.

Then I built a variable containing the string in the proper format; the API will return errors if it isn't.
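(I'm assuming the UTC ISO-8601 form shown in the v1 examples.)

# Start time = 24 hours ago, as a sortable ISO-8601 string with a trailing Z for UTC
$begin = (Get-Date).ToUniversalTime().AddHours(-24).ToString('s') + 'Z'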

Finally, I built the API call and retrieved the data. Note how I’ve specified my $begin variable, the metric name, the average_value field, and the agent_id in the $url variable.
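(The accounts/.../metrics/data.xml path and the summary parameter are how I read the v1 docs; the account ID is another placeholder.)

# Placeholder - your New Relic account ID
$accountId = '12345'

# Data endpoint: metric name, field, agent ID, and start time all ride in the query string
$base  = "https://api.newrelic.com/api/v1/accounts/$accountId/metrics/data.xml"
$query = "?metrics[]=System/CPU/System/percent&field=average_value&agent_id=$agentId&begin=$begin&summary=1"
$url   = $base + $query

# Same WebClient (with the x-api-key header) from the earlier snippets
[xml]$data = $wc.DownloadString($url)

# With summary=1 there should be a single metric element; its field text is the average value
$servCPU = $data.metrics.metric.field.'#text'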

$servCPU will now hold the raw value; you can round it or leave it as is. You can use this method for all of the available metrics as well as the other fields in the CPU data. New Relic has more documentation for their API on their Github page.

The New Relic API and Powershell

I've used the awesome performance monitoring tool New Relic to gather diagnostics and other stats for applications. I thought it would be a really cool idea to pull some of those metrics using the New Relic API, but there wasn't much information on how to do it with Powershell. It's relatively simple though, and can be done in different ways depending on the Powershell version you use. The code I'm using is for 2.0, but I'll include some 3.0 equivalents.

The New Relic API is REST-based but requires authentication. Similar to other services, you will need to enable API access for your account, which generates an API key you use to authenticate. There is a good document describing the API on Github.

With this simple bit of code, you can get and parse the data returned by New Relic:
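(A rough Powershell 2.0 sketch using WebClient, since Invoke-RestMethod doesn't exist there. The application summary resource path and the hyphenated element names are my best recollection of the v1 docs, so treat them as placeholders.)

# Placeholders - your API key, account ID, and application ID
$apiKey    = 'YOUR_API_KEY'
$accountId = '12345'
$appId     = '67890'

# The API key is passed in the x-api-key header
$wc = New-Object System.Net.WebClient
$wc.Headers.Add('x-api-key', $apiKey)

# Pull the application summary metrics as XML
$url = "https://api.newrelic.com/api/v1/accounts/$accountId/applications/$appId/threshold_values.xml"
$raw = $wc.DownloadString($url)

# Strip the hyphen from the element names before casting, since it fights with dotted notation
[xml]$summary = $raw -replace 'threshold-value', 'thresholdvalue'

# Plain dotted notation now works against the cleaned-up names
$summary.thresholdvalues.thresholdvalue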

Using the XML abilities baked into Powershell, I can now use the familiar dotted notation to get what I want. I'm also doing a replace on the returned data to get rid of the hyphen in it, which tripped Powershell up no matter how I tried to escape it. I decided to circle back to that little problem later and just did the replace. Keep in mind this only applied to the summary metrics data that I wanted and may not be needed for some of the other API calls.

Here is an example of returning all the New Relic metric names. I'm using the 3.0 auto foreach, which automatically outputs all of the values; you'll need a real loop in 2.0:
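(Reusing the WebClient and application ID from above; the metric-names resource path is again an assumption from the v1 docs.)

# Grab the full list of metric names for the application
$url = "https://api.newrelic.com/api/v1/applications/$appId/metrics.xml"
[xml]$metrics = $wc.DownloadString($url)

# Powershell 3.0 "auto foreach" (member enumeration) spits out every name in one shot
$metrics.metrics.metric.name

# The 2.0 way needs an explicit loop
$metrics.metrics.metric | ForEach-Object { $_.name }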

With Powershell 3, you can also use the built-in REST commands and its auto foreach to achieve the same results and more. I didn't dive into that much because most of my production environments are on 2.0, but here's a sample of how that would look:
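(A sketch of the 3.0 version, with Invoke-RestMethod handing back already-parsed XML.)

# Powershell 3.0: Invoke-RestMethod sends the header and parses the XML response for you
$headers = @{ 'x-api-key' = $apiKey }
$metrics = Invoke-RestMethod -Uri "https://api.newrelic.com/api/v1/applications/$appId/metrics.xml" -Headers $headers

# Auto foreach again to list every metric name
$metrics.metrics.metric.name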

Using ESEUTIL To Copy Large Files

Many Exchange admins are familiar with the venerable Exchange database utility ESEUTIL. I've used it many times when working with Exchange databases, and it still exists in Exchange 2010. Recently a DBA coworker and I had a scenario where log shipping for a customer's site was taking way too long to complete due to a slow network. The DR site is in California, on the other side of the country, and this was affecting our ability to keep things updated.

We experimented with different file transfer tools like Robocopy and others with little success, until we discovered you can use ESEUTIL to move large files with a respectable performance gain. This is because the utility is designed to move and work with large Exchange databases, and it does very little I/O buffering during copy operations. This MSDN blog post outlines the details.

I whipped up a nifty Powershell script that gathers the SQL transaction logs (in our case generated by RedGate SQL Backup) that have the archive bit set, calls ESEUTIL to copy them to the DR site, then clears the bit. All you need is eseutil.exe and ese.dll from an Exchange server placed somewhere on the SQL server so the script can call them. We saw about a 20% increase in copy speeds using this method.
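Here's a stripped-down sketch of the idea; the paths, the .sqb filter for our RedGate logs, and the eseutil.exe location are all placeholders for our environment.

# Where eseutil.exe (and ese.dll) were dropped on the SQL server, plus source and DR destination paths
$eseutil = 'C:\Tools\eseutil.exe'
$source  = 'D:\SQLBackups\Logs'
$dest    = '\\drsite\SQLBackups\Logs'

# Grab only the log files that still have the archive bit set
Get-ChildItem $source -Filter *.sqb | Where-Object {
    ($_.Attributes -band [IO.FileAttributes]::Archive) -eq [IO.FileAttributes]::Archive
} | ForEach-Object {
    # Let eseutil do the copy (eseutil /y <source> /d <destination>)
    & $eseutil /y $_.FullName /d (Join-Path $dest $_.Name)

    # Clear the archive bit only if the copy came back clean
    if ($LASTEXITCODE -eq 0) {
        $_.Attributes = $_.Attributes -band (-bnot [IO.FileAttributes]::Archive)
    }
}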


Fixing Weak SSL the Easy Way

IIS Crypto is a great free tool from Nartac Software that allows Windows Server/IIS admins to easily enable or disable weak SSL protocols and ciphers. This is a PCI requirement, and I've seen it show up on many scans run by tools designed to probe for compliance. It's usually a tedious process of adding and changing registry keys, right up through today's current Windows OSes.
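For a sense of what the tool saves you from, here's roughly what switching off a single protocol (SSL 3.0 in this sketch) looks like by hand via the SCHANNEL registry keys; multiply that by every weak protocol and cipher and the appeal of a one-click tool is obvious.

# SCHANNEL key for the server side of SSL 3.0 (SSL 2.0, weak ciphers, etc. follow the same pattern)
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server'

# Create the key if it isn't there yet, then disable the protocol
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name 'Enabled' -Value 0 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 1 -PropertyType DWord -Force | Out-Null

# SCHANNEL changes only take effect after a reboot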

I recently had two fully patched Windows Server 2008 R2 servers that were failing PCI scans run by the McAfee Secure online service. IIS Crypto made short work of what would've been a longer after-hours change. It even has a PCI button you can click to configure the server for compliance, which saved me a ton of work. Microsoft needs to start turning this stuff off by default, or at least ask if you want it turned on; just a thought for the folks in Redmond.

In a unique twist, even after verifying the registry keys were correct after running the tool, McAfee still complained about the problem on a post-change scan. Qualys' free SSL server analyzer, a nifty online tool, passed it with flying colors. Another interesting venture of theirs is the HTTP Client Fingerprinting Using SSL Handshake Analysis project, which produced a mod for Apache and some other interesting reads at the bottom of the page. Enjoy.