Tome's Land of IT

IT Notes from the Powertoe – Tome Tanasovski

What’s Big in 2012?

A friend of mine asked me an innocent enough pair of questions: “What’s going to be big in 2012?” … “What’s worth learning?”  I approach these questions cautiously.  In most organizations, 2012 is a year of finishing up projects: mainly Windows 7, virtual desktops, or both.  You should have a good handle on application delivery, or at least a strategy for how to handle it in the years to come.  You’ve probably spent some time looking at, and perhaps implementing, some of the layering technologies that can be used with VDI.  For many, that is still the focus area.  Some of you are putting focus back into Citrix for published desktops or for app delivery as people bring their own devices to work.  A few of you are evaluating the multitude of cloud services now available that may act as a better alternative to what you can do in your own house.  Others are still scratching their heads as they try to figure out what a private cloud means to them.

Rather than be the pundit on a pulpit making predictions, I’ll just tell you what I plan on doing this year.

What’s in your lineup for 2012, Tome?

Glad you asked.  First and foremost this is obviously the year that Microsoft wants me to get up to speed with Windows 8 and System Center 2012.

Windows 8

I’m not talking desktop here.  I’m talking server!  Metro apps are aesthetically in this millennium, which is nice.  However, the pieces of meat that interest me have nothing to do with HTML5, fake suspended apps that instantly recreate state, or the way search works.  Nope, the interesting stuff is definitely in server.  There is a new file system (ReFS), extensions to existing ones (NTFS with dedupe), tons of new cmdlets, a direction from the Server team to go all Server Core (the command-line-only version of Windows Server), VHD improvements (VHDX), virtualization-aware domain controllers, Active Directory Administrative Center (ADAC), and who knows what else I’m about to learn this year when I hit this topic hard.


Hyper-V

I’m finally ready to learn everything I can about this hypervisor.  I know ESX very well and I love it.  I’ve had plenty of opportunities to make tongue-in-cheek comments like, “I think Microsoft just announced that they invented vMotion” or “Hey, look at this: Hyper-V can overcommit memory now.”  Now the feature set is full, and Microsoft is starting to use the technology in ways that are integrated into other products – like … wait for it … private cloud.

System Center 2012

Hand in hand with Hyper-V are some of the components within System Center.  I think the best way to put it is that if it runs the private cloud infrastructure, I plan on trying it out.  While “cloud” has been an overused word describing development best practices for the past few years, the mainstream adoption and flexibility of virtualization have made it a sexy term.  What I believe Microsoft is offering (I will know better once I start playing with it) is a new application framework.  Their hope, I believe, is that if IT Pros can easily create and set up Azure-like services, it will be easier to convince developers to develop on those services.  This, of course, makes it much easier to push those services to the public cloud at a later date – perhaps due to bursts in demand, perhaps due to outsourcing, perhaps just for the sexiness of it.

Big Data

I spent a lot of time near the end of last year getting up to speed on some of the big data options.  If you’re not familiar with the term, let’s just say that it’s about handling quantities of data that would make a relational database feel uncomfortable, while maintaining the query speed of a relational database – in most cases it is faster, because you are leveraging multiple servers to pull back data in parallel streams.  I have played with some simple NoSQL data stores like MongoDB and Cassandra, and I have tackled some MapReduce with Hadoop, CouchDB, and Splunk.
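The parallel-streams idea above can be sketched in miniature: each “shard” of data runs a map step over its own slice, and a reduce step merges the partial results.  This is a hedged illustration in Python – the shard layout and the word-count job are invented for the example, not taken from Hadoop or any of the other products mentioned.

```python
from collections import Counter
from multiprocessing.dummy import Pool  # a thread pool stands in for separate servers

# Three "shards" of log data, as they might live on three servers.
shards = [
    ["error timeout", "ok", "error disk"],
    ["ok", "ok", "error timeout"],
    ["error disk", "ok"],
]

def map_shard(lines):
    """Map step: each shard counts its own events locally."""
    return Counter(word for line in lines for word in line.split())

def reduce_counts(partials):
    """Reduce step: merge the per-shard counts into one result."""
    total = Counter()
    for partial in partials:
        total += partial
    return total

with Pool(len(shards)) as pool:
    partials = pool.map(map_shard, shards)  # the shards work in parallel

totals = reduce_counts(partials)
print(totals["error"])  # 4
print(totals["ok"])     # 4
```

The point is the shape, not the scale: because no map step needs another shard’s data, adding servers adds throughput, which is why these systems can beat a single relational database on sheer volume.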


Splunk

Splunk is where I am committing my time at the moment.  Splunk allows you to ingest time-series data from disparate sources, but that’s only the beginning.  The power is twofold: first, in a slick query language that lets you spin up real-time slices of that data in ways that were traditionally only possible with a data architect and a pre-defined cube of your data sets; second, in its ability to combine and reconcile events from the multiple sources that you feed it.  The fact that it requires very little setup (compared to the other options) makes it the easiest way I know of to implement a big data solution without a lot of heavy developer work.  This relatively short time-to-implement has also enabled Splunk to adopt a ridiculous pricing model that is horrible for us, but is sure to keep them in business long enough to keep their competitive edge.


1010data

I’m also keeping an eye on 1010data.  They are a business-driven solution that gives people the same slicing-and-dicing power you can get from Splunk without requiring a data architect, but the solution is 100% cloud based and uses an online spreadsheet to empower its users.  It currently acts as the data warehouse for NYSE – they store every historical trade in their trillion-row spreadsheet.  This is a much different beast than Splunk.  Splunk is, right now, the IT Pro’s tool for managing the sprawl of machine data; 1010data is all about number crunching and speed.  Splunk enables IT Pros and devs, while 1010data enables the business.  Both of these are extremely valuable to understand if you plan on designing the future vision of IT in your environment.

Teradata Aster

I’m really curious to see what Teradata‘s acquisition of Aster yields.  I’m not overly up to speed with Teradata to begin with, but I know it’s a “hot” product that I should know.  The fact that Aster brings MapReduce capability to the Teradata data store makes my eyebrow rise a bit higher.  I’m committed to some more research – I expect that will be conversations with people in that area and a lot of reading, more than anything practical.


PowerShell

How could I talk about 2012 without PowerShell?  V3 was so Q4 of 2011, but I am waiting for the next CTP or beta release to see what is making the cut and what is not.  Soon it will be time to actually use V3 in production – now that’s when it gets fun.

Additionally, on the PowerShell front, I’ve been hitting a series of AI/collective intelligence algorithms hard for the past few months to see what I can use them for in PowerShell.  This is more of an academic pursuit than anything else, although it was directly spawned from a task I was handed at work.  The plus for the rest of the world is that I’ve been developing a fairly nice suite of cmdlets that implement the various algorithms.  I hope to release this as a module in 2012.
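The post doesn’t name which algorithms made the cut, and the cmdlet suite is unreleased, so here is only a flavor of the genre: a minimal Python sketch of one classic collective-intelligence building block – similarity scoring between users’ ratings, the seed of a recommender.  The user names, items, and scores are all invented for the illustration.

```python
from math import sqrt

# Invented ratings data: user -> {item: score}
ratings = {
    "ann": {"hadoop": 5.0, "splunk": 4.0, "mongodb": 2.0},
    "bob": {"hadoop": 4.0, "splunk": 5.0, "mongodb": 1.0},
    "cat": {"hadoop": 1.0, "splunk": 2.0, "mongodb": 5.0},
}

def euclidean_similarity(a, b):
    """Similarity in (0, 1]: 1 / (1 + distance), over the items both users rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    distance = sqrt(sum((a[item] - b[item]) ** 2 for item in shared))
    return 1.0 / (1.0 + distance)

def most_similar(user):
    """Rank every other user by similarity to `user`, closest first."""
    others = (u for u in ratings if u != user)
    return sorted(others,
                  key=lambda u: euclidean_similarity(ratings[user], ratings[u]),
                  reverse=True)

print(most_similar("ann"))  # ['bob', 'cat'] - bob's tastes are closer to ann's
```

Wrapped as a cmdlet, the same logic would take objects off the pipeline instead of a hard-coded dictionary; the math is the portable part.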


What do you think is hot in 2012?  What am I missing?  Drop a line in the comments here or on Google+.


One response to “What’s Big in 2012?”

  1. mjolinor February 1, 2012 at 8:34 am

    While you’re digging around in SCCM, have a look at SCORCH. I’m looking forward to leveraging that for consolidating my management scripts and scheduled tasks into one central repository for scheduling, change management, and monitoring.
