Well, it’s been a while since the last PowerShell blog, and we’re already over a month into the new year, so it’s about time I picked this series of blogs back up.
For the first blog of the new year, we will be looking at managing storage in Server 2012.
Here’s a quick overview of the new features; you are most likely already familiar with a few of these.
This blog is going to jump back and forth between different topics, mainly because there’s a fair bit to cover. It will be split over two blogs, so try to keep up!
I’ll start with data deduplication, as I’m pretty certain you’ve already heard of it or know of it. But if not, what do we mean by data deduplication?
Well, data deduplication identifies and removes duplicates within data without compromising its integrity or fidelity, with the ultimate goal of storing more data in less space (and what does less usage equal? That’s right: it’s CHEAPER!).
Obviously there is a slight overhead to enabling deduplication on a volume; in this case a background task runs with low priority, which performs the following:
You are most commonly going to enable this on File Shares, software deployment shares and maybe even VHD libraries.
Let’s take a look at configuring data deduplication. Firstly add it as a server role
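If you prefer PowerShell to the Add Roles wizard, the same role service can be installed from the console. A minimal sketch (the feature name below is as it ships in Server 2012):

```powershell
# The ServerManager module is loaded by default on Server 2012
Import-Module ServerManager

# Install the Data Deduplication role service of the File Server role
Install-WindowsFeature -Name FS-Data-Deduplication
```

No reboot is required for this role service, so you can carry straight on to configuring a volume.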
We will now apply this to a share, so let’s browse to Server Manager > File and Storage Services, right-click the volume, and choose Configure Data Deduplication.
You can set how many days old the files should be before they are deduplicated, as well as choosing a schedule and file types to exclude.
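The same settings can be applied from PowerShell using the Deduplication cmdlets. A sketch (the drive letter and values here are just examples):

```powershell
# Enable deduplication on the D: volume
Enable-DedupVolume -Volume "D:"

# Only process files older than 5 days, and skip common media types
Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 5 -ExcludeFileType "mp3","jpg"
```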
Now, before I enabled data deduplication, I made a single 1 GB text file and copied it a few times, into both the same folder and a different folder, as shown below.
As you can see, three 1 GB files take up 3 GB of space (not surprisingly).
I’m now going to kick off data deduplication manually on the data drive
And to view the status of the job (as above and below):
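In PowerShell, kicking off the job manually and checking on it looks something like this (again, D: is just my example data drive):

```powershell
# Start an optimisation (deduplication) job on D: immediately
Start-DedupJob -Volume "D:" -Type Optimization

# View the state and progress of running deduplication jobs
Get-DedupJob
```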
Now that the state is showing as completed, I’ll go back into Server Manager, where you can see the status of the deduplicated data.
Checking the overview, you can see we seemingly now have an almost entirely free drive.
Yet if we browse the folder we still see the same size files, and all three files open correctly.
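You can pull the same savings figures from PowerShell rather than Server Manager; a quick sketch:

```powershell
# Report saved space and optimised file counts for deduplicated volumes
Get-DedupStatus | Format-List Volume, FreeSpace, SavedSpace, OptimizedFilesCount
```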
Moving on from data deduplication…
For those of you familiar with home labs and virtual environments, you are already more than familiar with thin provisioning and, to some degree, trim storage. For those of you who aren’t, this is a new feature in Server 2012 which is ON by default, so there are no features or roles you need to install.
Thin provisioning is the ability to allocate storage space without the physical disk space needing to be there. For example, in my home lab I have a 250 GB physical disk. I could provision a virtual disk on it with a size of 2 TB. Now obviously the disk won’t store 2 TB, but it means I can allocate sufficient space to a drive without the physical requirement being there. Useful if you have a JBOD setup or a back-end SAN, allowing you to simply “throw” more disks in when needed (that’s a technical term…).
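Creating a thin-provisioned disk from PowerShell looks like this; a sketch assuming you already have a storage pool, which I’ve called “Pool1” here for illustration:

```powershell
# Create a 2 TB thin-provisioned virtual disk in an existing storage pool
# ("Pool1" is a hypothetical pool name - substitute your own)
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ThinDisk1" `
    -Size 2TB -ProvisioningType Thin -ResiliencySettingName Simple
```

The key parameter is -ProvisioningType Thin: the disk reports 2 TB to the OS but only consumes physical space as data is actually written.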
Trim storage, however, is the ability to reclaim storage that is no longer needed. If you are familiar with (older versions of) Exchange, it’s no different to reclaiming “lost white space”.
The best way to explain this is that the file system can inform the underlying physical storage device that the contents of specified sectors are no longer important, which means those sectors can be used by another volume. It’s effectively saying: there used to be data here, it has since been moved or deleted, and this space can be reclaimed, so feel free to use it.
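Although trim happens automatically, you can tell a volume to resend those “no longer important” notifications on demand; a sketch using the Optimize-Volume cmdlet new in Server 2012:

```powershell
# Resend trim/unmap requests for all free space on D: so the
# underlying storage can reclaim it; -Verbose shows the progress
Optimize-Volume -DriveLetter D -ReTrim -Verbose
```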
That’s all there is to say about those two topics; as mentioned, they are on by default. Now I’m going to cover the new features within File Server Resource Manager.
File Server Resource Manager allows you to manage and classify data that is stored on file servers. Many people don’t know this is already available in Server 2008 R2, where you can already do the following:
The new features Server 2012 brings to the table are:
I like the new features Server 2012 brings to the table, and having played about with them in my home lab, I consider them much better than what was available in 2008 R2.
Before I continue, and before I start referencing them, I just want to make sure everyone knows the difference between basic and dynamic disks…
If not, it’s pretty simple:
Server 2012 also brings to the table something known as ReFS (Resilient File System). What does this do? Well, it has the following advantages:
Basically, ReFS inherits features from the NTFS file system, including BitLocker encryption, ACLs, change notifications, mount points, volume snapshots, etc.
Because ReFS uses a subset of features from NTFS, it is backwards compatible: if enabled, older clients will still be able to read and write to ReFS partitions.
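Formatting a volume with ReFS is no different from NTFS from a cmdlet point of view; a sketch assuming a second disk (disk 1) that is already online and initialised:

```powershell
# Create a partition on disk 1 and format it with ReFS
# (disk number and label are example values)
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "ReFSData"
```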
The whole driver for ReFS and the features it brings is to allow for even greater resilience, which I’ll show at a later date.
Of course, as with all these new features, there are those which sadly leave and are deprecated… below are some of those which just didn’t make the cut.
That’s it for Part 1. Not too much to show in terms of guides, as it’s mainly describing the new features Microsoft want you to know about…