Be Sure With Azure (.NET): Blob Storage

Are you looking at utilizing Microsoft Azure for your next application, or just overwhelmed by all the options and services Microsoft Azure provides? "Be Sure With Azure" is a series of technical articles to get you organized and up to speed with Azure offerings. Beyond that, we're going to look at areas less covered, pitfalls to avoid and tips to take advantage of.

I will start off the series with a set of articles on Azure Storage options, which include BLOB, Queue and Table storage, with this first article focusing on BLOB storage. In this article we will cover a number of areas that will be important to your use of Azure storage options, such as:

  1. Why Azure Storage
  2. Storage Accounts
  3. Billing and Pricing
  4. Development Tools
  5. BLOB Storage
  6. Concurrency
  7. Security
  8. Management Tools

Requirements

So let's get started by looking at what you will need first and foremost to work with Azure BLOB storage: an Azure storage account.

Storage Account

At the time of this writing, Microsoft provides a complete one-month free trial of all their services, so you can get started without spending a penny. Simply go to the free trial page to easily set up your account.

[Image: Microsoft Azure Blob Storage free trial view]

Your Azure storage account serves a number of purposes. You can read all the details about an Azure Storage Account here, but it boils down to:

  1. Access to three services: BLOB storage, Table storage and Queue storage.
  2. A unique namespace for working with your own BLOBs, queues and tables.

Namespaces

Obviously, we will be diving deeply into the specifics behind these services, starting with BLOB storage, but let's first look at the second purpose of the storage account: namespaces. A storage namespace denotes a specific address space through which you can access resources in one of the three services. There are various levels of namespace, but let's look at a BLOB storage namespace for a concrete example. At the highest level of the namespace, we have an endpoint URI that denotes the address space for a specific storage service under your account:

http://mystorageaccount.blob.core.windows.net

Here, mystorageaccount represents your account, while .blob represents the storage service type. This namespace can be further broken down into more specific URIs, such as:

http://mystorageaccount.blob.core.windows.net/containerName/blobName

Here, containerName and blobName define where specific resources within the storage service can be located. We will get into those details shortly.

Billing and Pricing

Customer storage charges are based on the following three factors:

  1. Storage Capacity
  2. Transactions
  3. Outbound data (data egress)

Storage capacity pricing breaks down three different ways: your subscription plan (in other words, pay-as-you-go, 6-month and so on), the redundancy level and the total volume of data. Like most things, the longer the subscription and the higher the storage volume, the lower the cost.

Transactions are fairly inexpensive, but they can easily and quickly add up, since some operations in the API issue multiple transactions and transactions include both read and write operations. So the glaring motivation is to use batch operations any time you can, which we will have a look at. Inbound data is free, but outbound data is charged; you can see the data transfer fees here. You can see the full breakdown of storage capacity and transaction pricing here.

Now that you have had an introduction to Azure accounts and their billing and pricing, let's get into utilizing Azure Storage services.

Development Tools

The storage services (BLOB, table and queue) are exposed through a RESTful API. A simple example of this can be seen when you issue an HTTP GET request for a BLOB resource, such as:

http://mystorageaccount.blob.core.windows.net/Images/01-08-2014.jpg

However, complexity arises with more complicated requests to the storage APIs that post data, update resources or run advanced queries. Doing so requires you to supply all the necessary HTTP request components, such as additional headers, parameters and HTTP request body information.

Luckily for us, Microsoft provides the Azure .NET client SDKs and tools that perform all the heavy lifting for you. You can acquire the client SDKs and various tools from a number of sources, such as NuGet packages, GitHub and Microsoft's Azure site. Let's take a quick stroll through the various resources and examine the differences and what to expect.

Azure Visual Studio Tools

The Azure Visual Studio tools can be downloaded for Visual Studio 2012 and 2013 from the Microsoft Azure download page. This installation includes the various .NET client SDKs and the storage emulator. It also includes a number of templates for getting you off the ground running. I would definitely recommend going this route to acquire the storage emulator, but you will want to ensure that the installation (at the time of your download) includes the latest version of the storage client library.

NuGet

The NuGet Package Manager provides a quick route to installing all the necessary dependencies for the various client libraries and generally includes the latest version of the storage client. You will see the various Azure client SDKs that are available. The NuGet Package Manager Console (which is different) offers more granular control over exactly which version of an Azure SDK client you install.

[Image: NuGet Package Manager window]

GitHub

If you would like to dive into the depths of the source code, dissect the underpinnings and run the suite of tests that back the Azure Storage client, you can download the storage client source code from its repository on GitHub. It never hurts to see how the tools we rely on work, and hey, you might just learn something.

Now that we have the necessary tools to do something with Azure's Storage services, let's put it to work and see what we can do with it.

BLOB Types

There are a few important facts that we need to lay out about BLOB storage before diving into using it. First off, there are two main types: block BLOBs and page BLOBs. Their differences mainly involve read/write access, performance optimization and size limitations. There are other smaller differences, but these make up the majority of the reasons for choosing which BLOB type to work with.

Page BLOB

Page BLOBs are broken down into 512-byte pages with a maximum size of 1 TB. They are optimized for random read/write access, allowing individual or multiple pages to be updated or consumed within a single page BLOB, and they are best suited for cases where a range of bytes in the BLOB is modified frequently. The kicker about page BLOBs is that they are almost always associated with being the storage for the virtual hard disks that back Azure Virtual Machines. You will find an overwhelmingly lopsided amount of documentation available for block BLOBs compared to page BLOBs.

Block BLOB

Block BLOBs get their name from the fact that an individual block BLOB is comprised of 4 MB blocks. A block BLOB can have up to 50,000 blocks, for a total maximum size of 200 GB. Because they are comprised of individual smaller blocks, they provide a level of write granularity that we will see how to take advantage of when uploading large files. Block BLOBs have been optimized for large binary data writes and streaming workloads. Because of this, you will find that block BLOBs are the favored storage choice for website and application resources such as files, images, videos and other similar resources that don't change often. This is the type of BLOB we will be leveraging in this article.

Azure Storage accounts have a 500 TB capacity, but this can be extended by creating multiple storage accounts under a subscription (a maximum of 50 accounts). Without bogging this article down with metrics, Microsoft provides documentation on its storage scalability and performance targets.

The Big Picture

A conceptual view of how BLOB storage is structured can be seen below.

[Image: Azure Storage hierarchical structure]

As you can see, at the top we are working within the capacity of a storage account, followed by the concept of a container and finally the individual BLOBs within a container. This hierarchical structure also provides a good technical road map for looking at how to work with BLOB storage.

So you have set up your free trial account, acquired the storage client SDK and are ready to put it to good use. Our goal is to read and write binary data to BLOB storage under our own storage account. The first task required to do so is to obtain a specific place in our storage account to save or access a specific BLOB. BLOBs have the following two-level hierarchy:

  • The Container that is the parent of the BLOB
  • The BLOB itself

We'll see shortly how to fake a more granular folder structure.

Minimal Requirements

The structure from containers to BLOBs is finite. It's very simple and only contains two distinct levels: the container and the immediate BLOB. This can be thought of as a single-level directory structure, such as /images/photo1.jpg.

The container itself is associated with a specific Azure Storage Account. Therefore, if we want to read and write a specific BLOB under our storage account, roughly speaking, we will be required to instantiate objects that represent our storage account, a specific container within our storage account and, finally, the BLOB object itself.

Keeping that in mind, the minimal requirements to work with a BLOB would go something like the following:

  • First, create a storage account object that represents your Azure Storage Account
  • Second, through the storage account object, create a BLOB client object
  • Third, through the BLOB client object, obtain a reference to a BLOB container within your storage account
  • Finally, through the specific BLOB container reference, get a reference to a specific BLOB

CloudStorageAccount account = new CloudStorageAccount(
    new StorageCredentials("your-storage-account-name", "your-storage-account-access-key"), true);
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer blobContainer = blobClient.GetContainerReference("images");
CloudBlockBlob blob = blobContainer.GetBlockBlobReference("photo1.jpg");
You can see that, for the CloudStorageAccount object, we create a StorageCredentials object and pass it two strings: the storage account name and a storage account access key (more on that in a moment). In essence, we create each required parent object until we eventually get a reference to a BLOB.

Storage Keys

As said earlier, when creating the storage credentials object, you need to provide your storage account name and either the primary or secondary base64 key. You can obtain this information from your Azure Portal by selecting “Manage Access Keys” under “Storage”, where it will list the storage accounts you have created. Yes, I have regenerated the keys shown below.

[Image: Azure portal access keys]

Yes, as you might have guessed, the access keys are the keys to the kingdom, so you would not want to pass this information out. However, for the sake of simplicity, I am demonstrating the basics of what we need in order to access a specific BLOB storage account. We will take a closer look at security later in this article.

A better approach to acquiring a CloudStorageAccount is to look at the “Windows Azure Configuration Manager” NuGet package, which can help abstract away some of the overhead of creating an account.
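For example, here is a minimal sketch, assuming that package is installed and that a connection string setting named StorageConnectionString (a hypothetical name) has been added to your app.config/web.config or cloud service configuration:

// Reads the connection string from configuration instead of hard-coding credentials
string connectionString = CloudConfigurationManager.GetSetting("StorageConnectionString");
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();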

However, the preceding container instantiation assumes that the container exists. This would be a good time to look at containers and the options we have with them.

Containers

Containers provide the logical grouping of our BLOBs. In our two-level hierarchy, containers are the first level. In addition to the standard set of CRUD operations that the client tools afford us, we also have the ability to set access control at the container level, set container-specific metadata and obtain leases for delete operations, just to name a few. We'll have a look at security later in this article.

Creating Containers

CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer blobContainer = blobClient.GetContainerReference("images");
blobContainer.CreateIfNotExists();
Tip: Something that will eventually bite you if you're not careful is the container naming limitations. Be sure to follow the naming conventions that Microsoft supplies when naming your containers or you will see HTTP status code 400 (Bad Request) errors.

Delete

CloudBlobContainer containerToDelete = _client.GetContainerReference(container);
containerToDelete.Delete();

Or, if uncertain whether the container exists:

containerToDelete.DeleteIfExists();
Tip: You might find code that repeatedly makes CreateIfNotExists or DeleteIfExists calls any time it interacts with containers. Recall that one aspect of billing is that it is based on transactions. The preceding create and delete examples conduct multiple transactions to validate that the container exists, so you can easily incur unneeded transactional overhead, especially if you do this on every attempt to work with a container (yes, there is code out there that does that). For the creation of containers, I advise you to create the containers ahead of time when you can.

Metadata

Metadata allows us to provide descriptive information about specific containers. This is simply setting name/value pairs of the data we want on the container.
CloudBlobContainer container = _client.GetContainerReference(containerName);
container.Metadata.Add("Creator", "Max McCarty");
container.Metadata.Add("Title", "Storage Demo");
container.SetMetadata();
Acquiring metadata for a container is as easy as:

container.FetchAttributes();

This, however, also acquires the properties of the container. Properties, in the scope of containers, are system-related properties or properties that correspond to standard HTTP headers.

Tip: Metadata on containers and BLOBs is not easy to query. If you need to put important emphasis on the data (metadata) describing the container or BLOB, consider storing it in a database.

Working with Container Contents

Depending on the container's access policy, we can retrieve a list of the container's contents. The contents are not the BLOBs themselves, but metadata about the BLOBs (not to be confused with BLOB metadata).
CloudBlobContainer blobContainer = _client.GetContainerReference(containerName);
IEnumerable<IListBlobItem> items = blobContainer.ListBlobs();
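As a quick illustration of what comes back, here is a sketch (assuming the container holds block BLOBs, and possibly the virtual directories we cover later) that inspects each returned item's concrete type:

foreach (IListBlobItem item in items)
{
    CloudBlockBlob blockBlob = item as CloudBlockBlob;
    if (blockBlob != null)
    {
        // This is listing metadata only; the BLOB's content has not been downloaded
        Console.WriteLine("BLOB: {0} ({1} bytes)", blockBlob.Name, blockBlob.Properties.Length);
    }
    else if (item is CloudBlobDirectory)
    {
        Console.WriteLine("Directory: {0}", item.Uri);
    }
}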
The final operations we will look at deal with the contents of BLOB containers: the BLOBs themselves. To do anything with BLOBs, we need to obtain a reference to one. We can obtain a reference to an existing or new block BLOB through the container:
CloudBlobContainer blobContainer = _client.GetContainerReference(containerName);
CloudBlockBlob blob = blobContainer.GetBlockBlobReference(blobName);
Alternatively, we can use GetPageBlobReference to reference a page BLOB, or GetBlobReferenceFromServer to reference an ICloudBlob without designating a page or block BLOB type.

With a reference in hand, this is a good point to start working directly with block BLOBs.

BLOBs

Minimal Requirements

As a refresher, see the Minimal Requirements specified earlier in the article for working with BLOBs; they outline the required procedure. As with BLOB containers, all the examples will assume we have acquired a CloudBlobClient.

Reading and Writing Data

We are now at the point where we can actually do something with our BLOB reference. The Storage Client provides a number of sources we can use to upload data to our BLOB, such as:
  • Byte Arrays
  • Streams
  • Files
  • Text

See the Asynchronous Operations section later in this article for more information on the asynchronous methods supported by the Storage Client SDK.

Writing

A simple procedure to upload a file is as easy as:

CloudBlobContainer blobContainer = blobClient.GetContainerReference("demo");
CloudBlockBlob blob = blobContainer.GetBlockBlobReference("photo1.jpg");
blob.UploadFromFile(@"C:\Azure\Demo\photo1.jpg", FileMode.Open);
But this was a simple upload of a small file. What if it was a substantially large file? What if, halfway through, the upload failed due to network connectivity or some other unknown?

Looking at the preceding upload in an application like Fiddler (or your browser's development tools), we can see the single request/response for a small file:

[Image: Fiddler showing a single-request upload]

By default, the storage client will break an upload that exceeds 32 MB into individual blocks. This threshold can be increased to a maximum of 64 MB by setting the SingleBlobUploadThresholdInBytes property. When the size of an upload exceeds the SingleBlobUploadThresholdInBytes setting, parallel upload operations will occur to upload the individual blocks and reassemble them into a single BLOB. The maximum number of parallel operations can be set with the ParallelOperationThreadCount property. Both of these properties are part of the BlobRequestOptions.

We can update the CloudBlobClient's default BlobRequestOptions as follows:
CloudBlobContainer blobContainer = blobClient.GetContainerReference("demo");
BlobRequestOptions opt = new BlobRequestOptions
{
    SingleBlobUploadThresholdInBytes = 1048576,
    ParallelOperationThreadCount = 2
};
blobClient.DefaultRequestOptions = opt;
After performing another upload with a file larger than the specified SingleBlobUploadThresholdInBytes, we can see in Fiddler exactly what this looks like.

[Image: Fiddler showing multiple parallel block upload requests]

You can even see where it has appended the blockid query parameter. In a robust application like Fiddler, you can also see the entire breakdown of the request/response, including the bytes associated with the upload.

This would be a good time to point out some of the benefits of utilizing the Azure Storage Client SDK. Without it, we would need to track all the block IDs generated, manage the completion of all block uploads and finalize the entire upload by issuing a commit request for the blocks. The Storage Client SDK takes care of all of this for us.
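To make that concrete, here is a rough sketch of what the SDK is doing on our behalf using the lower-level PutBlock/PutBlockList operations; blob and path are assumed to be an existing CloudBlockBlob reference and a local file path:

var blockIds = new List<string>();
int blockNumber = 0;

using (FileStream fileStream = File.OpenRead(path))
{
    byte[] buffer = new byte[4 * 1024 * 1024]; // 4 MB blocks
    int bytesRead;
    while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Block IDs must be base64-encoded and of equal length within a BLOB
        string blockId = Convert.ToBase64String(BitConverter.GetBytes(blockNumber++));
        blob.PutBlock(blockId, new MemoryStream(buffer, 0, bytesRead), null);
        blockIds.Add(blockId);
    }
}

// Commit the uncommitted blocks, in order, into the final BLOB
blob.PutBlockList(blockIds);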

Before moving on: we have only seen a simple upload of a file. The Storage Client also provides the following methods to upload from other sources (each with an equivalent Async version not shown here):
  • UploadFromByteArray
  • UploadFromFile
  • UploadFromStream
  • UploadText

Delete

Deleting a BLOB is pretty straightforward. Up until now, we have kept the BLOB creation and interaction simple, so we can simply delete a BLOB that already exists:

CloudBlobContainer blobContainer = _client.GetContainerReference(containerName);
CloudBlockBlob blob = blobContainer.GetBlockBlobReference(blobName);
blob.Delete();
There are also additional helper methods for when we are uncertain of a BLOB's existence, such as:

bool wasDeleted = blob.DeleteIfExists();
The simplicity of deleting BLOBs can degrade, however, when BLOB leases, snapshots or Shared Access Signatures are involved, all of which we'll cover shortly. Each of these factors can make deleting BLOBs a little less straightforward, and I will cover these scenarios under their appropriate sections below.

Tip: As I said with BLOB containers, some BLOB methods conduct multiple transactions within their operation. DeleteIfExists is precisely one of those methods that will cause you to incur additional transaction fees. These multiple-transaction operations might not always be avoidable, but writing financially efficient applications is one way we can put our clients' interests first.

BLOB Properties and Metadata

BLOB metadata is data that describes the data that makes up the BLOB. We can set BLOB metadata as we did with container metadata:
CloudBlobContainer blobContainer = _client.GetContainerReference(containerName);
CloudBlockBlob blob = blobContainer.GetBlockBlobReference(blobName);

// metaData is assumed to be a collection of string name/value pairs,
// such as a Dictionary<string, string>
foreach (var keyValuePair in metaData)
{
    blob.Metadata.Add(keyValuePair);
}
blob.SetMetadata();
We can retrieve metadata about a BLOB:

blob.FetchAttributes();
string author = blob.Metadata["Author"];
FetchAttributes also acquires the BLOB's properties, which are generally the HTTP and system properties of the BLOB. Some of the more heavily used BLOB properties are ContentType, ContentMD5, ContentEncoding and ETag, just to name a few. For example, after uploading an MP4 video that might be consumed in a web request, we might want to set its content type:
CloudBlobContainer blobContainer = _client.GetContainerReference(containerName);
CloudBlockBlob blob = blobContainer.GetBlockBlobReference(blobName);
blob.UploadFromFile(path, FileMode.Open);
blob.Properties.ContentType = "video/mp4";
blob.SetProperties();
Structuring BLOBs Hierarchically

I often hear the question asked, “How do I create a hierarchical folder structure with my BLOBs/containers?” I'll make this easy: you cannot create hierarchical folder structures for your BLOBs within a container. But the Storage Client SDK does provide a mechanism to mimic a hierarchy and give you the functionality to traverse BLOBs in a hierarchical fashion.

The mechanism is made up of the following three requirements:
  • A delimiter (in this case the forward slash “/”)
  • The BLOB's name
  • The Storage Client SDK methods to traverse directories and subdirectories

So, if we wanted to create a folder structure that mimicked something like “vacation/photos/family/kids”, with “kids” being the lowest subdirectory containing pictures such as “kids-02.jpg”, along with a second folder at the same level as “kids” for “relatives” with its own photos, we would upload the BLOBs like so:

CloudBlobContainer blobContainer = _client.GetContainerReference("vacation");
CloudBlockBlob kidBlob = blobContainer.GetBlockBlobReference("photos/family/kids/kids-02.jpg");
CloudBlockBlob auntBlob = blobContainer.GetBlockBlobReference("photos/family/relatives/aunt-velma-01.jpg");
kidBlob.UploadFromFile(kidPhotoPath, FileMode.Open);
auntBlob.UploadFromFile(auntPhotoPath, FileMode.Open);
Then we can traverse the folder structure as in the following:

CloudBlobDirectory photoDirectory = blobContainer.GetDirectoryReference("photos");
CloudBlobDirectory familyKidsSubDirectory = photoDirectory.GetDirectoryReference("family/kids"); // combined levels
CloudBlockBlob blob = familyKidsSubDirectory.GetBlockBlobReference("kids-02.jpg");
I should point out that you can either step through the superficial structure one level at a time or combine levels, as in “family/kids”. Calling ListBlobs on a CloudBlobDirectory for just “family” would show us both “kids” and “relatives”.

This is a good time to point out that a method such as ListBlobs on a CloudBlobDirectory returns an IEnumerable<IListBlobItem>, but in this case it is not providing metadata about actual BLOBs; it is describing the folder structure.
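A minimal sketch of such a listing, assuming the “vacation” container and the BLOBs uploaded above:

CloudBlobDirectory familyDirectory = blobContainer
    .GetDirectoryReference("photos")
    .GetDirectoryReference("family");

foreach (IListBlobItem item in familyDirectory.ListBlobs())
{
    // With the BLOBs above, this prints the URIs of the "kids" and
    // "relatives" virtual directories, not the BLOBs nested within them
    Console.WriteLine(item.Uri);
}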

Tip: It is important to note that Azure BLOB Storage was not created with a hierarchical structure in mind. That being said, we generally find ourselves in trouble when we attempt to make something do something that was not its original purpose. I would advise making use of containers as the hard boundary for grouping related BLOBs, because at the end of the day, the superficial hierarchical folder structure exists only in the naming of the BLOBs.

BLOB Snapshots (Backups)

Azure BLOB Storage provides a mechanism for creating backups of BLOBs as snapshots. Snapshots are read-only copies of a BLOB at a specific point in time.
CloudBlobContainer blobContainer = _client.GetContainerReference(containerName);
CloudBlockBlob blob = blobContainer.GetBlockBlobReference(blobName);
blob.CreateSnapshot();
It's a good time to point out that many of the Storage Client SDK methods we have looked at in this article allow for optional parameters. I have opted to keep things simple and look at the default use cases for most of them. Take, for example, CreateSnapshot, which also takes a collection of string key/value pairs, such as a dictionary, for specifying metadata on the BLOB snapshot at the time of creation.
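For example, a quick sketch of attaching metadata at snapshot creation time (the metadata values here are illustrative):

blob.CreateSnapshot(new Dictionary<string, string>
{
    { "Reason", "Backup before reprocessing" }
});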

As pointed out before, creating snapshots adds a small overhead to the act of deleting a BLOB when snapshots exist. Attempting to delete the preceding BLOB as in the following:

blob.Delete();

would throw an HTTP 409 Conflict status code. Therefore, we need to use the DeleteSnapshotsOption enumeration, IncludeSnapshots or None, in order to delete the BLOB (None will throw an error if a snapshot exists):
// will delete the BLOB and any associated snapshots
blob.Delete(DeleteSnapshotsOption.IncludeSnapshots);
We can delete only the snapshots by using the DeleteSnapshotsOnly option. CreateSnapshot returns a BLOB reference to the new snapshot, which has a SnapshotTime property indicating when the snapshot was taken. Using this time, as well as the reference to the snapshot, you can restore the actual BLOB from the snapshot.
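A hedged sketch of such a restore, assuming the SDK version in use exposes StartCopyFromBlob (later releases renamed it StartCopy):

// CreateSnapshot returns a reference to the new snapshot
CloudBlockBlob snapshot = blob.CreateSnapshot();
Console.WriteLine("Snapshot taken at: {0}", snapshot.SnapshotTime);

// Promote the snapshot over the base BLOB to roll its contents back
blob.StartCopyFromBlob(snapshot);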

Where Am I?

Working with the Azure Storage Client SDK, we have been acquiring references to containers and BLOBs and interfacing with container and BLOB objects. But you need to remember that the underlying Azure API is a REST API, and under the covers the Storage Client SDK is handling the HTTP/HTTPS requests to Azure. With that being said, the one aspect of every storage resource that we have been shielded from by the Storage Client SDK is its resource location.

One of the most important aspects of a container or BLOB is its associated storage URI. Containers and BLOBs have both a primary and a secondary storage URI that defines where the resource can be found and retrieved. In addition, both containers and BLOBs have a Uri property that returns only the primary location URI.

At the beginning of this article, we talked about the concept of namespaces and primarily focused on the namespace involving the storage account. A container's or BLOB's primary and secondary URIs are made up of the storage account namespace and the resource's own relative address.

While we have mainly been setting up containers and uploading, downloading and manipulating BLOBs, their URIs really come into play when a consumer uses the storage resource outside the scope of the Storage Client SDK, such as displaying an image on a webpage or interoperating with a third-party application, just to name a couple of the infinite scenarios.

Asynchronous Operations

When looking through the available CloudBlobContainer and CloudBlockBlob methods, you will be inundated with a slew of options for the same operation. A vast majority exist to support the older Asynchronous Programming Model (APM), with Begin... and End... methods for every operation that supports asynchronous execution.

In regards to utilizing the asynchronous operations the Storage Client SDK provides, the newer Task-based Asynchronous Pattern (TAP) methods are the ones you will want to leverage. Following the TAP convention, these are the methods ending in Async, such as UploadFromFileAsync. The swath of methods supporting other asynchronous patterns exists, for the most part, for backward compatibility.

Since this is not an article on asynchronous programming, I will leave you with a couple of resources for working with Tasks, such as the MSDN page on TAP and working with the async and await keywords.
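As a quick taste, here is a minimal sketch of an awaitable upload, assuming your SDK version offers UploadFromFileAsync with the same parameters as its synchronous counterpart:

public async Task UploadPhotoAsync(CloudBlobContainer blobContainer)
{
    CloudBlockBlob blob = blobContainer.GetBlockBlobReference("photo1.jpg");

    // The calling thread is not blocked while the upload is in flight
    await blob.UploadFromFileAsync(@"C:\Azure\Demo\photo1.jpg", FileMode.Open);
}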

Concurrency

When we are dealing with multi-tenant applications, there is a high probability that the same data could be updated simultaneously by multiple users, so we need a way to manage those concurrent updates. Azure BLOB Storage provides the following two concurrency control mechanisms for doing just that:
  • Optimistic Concurrency (via the ETag property)
  • Pessimistic Concurrency (via leases)

Optimistic Concurrency

In an optimistic concurrency model, resources are not locked and made inaccessible to clients. The system assumes that, though conflicts are possible, they will be rare. Instead of locking a resource every time it is in use, the system looks for indicators that multiple tenants actually did attempt to update the same resource at the same time. Azure BLOB Storage provides optimistic concurrency control through the ETag property, which is available on both the BLOB and the container.

Every time an update is made to a BLOB or container, you can see this value change. Essentially, if an update is attempted against a container or BLOB using a stale ETag, the update will be rejected and an exception will be thrown.

We can utilize this concurrency control by providing an AccessCondition object with our update attempt. You will find that a vast majority of update operations in the Storage Client SDK have an optional AccessCondition parameter.

CloudBlockBlob blob = blobContainer.GetBlockBlobReference("etag-example.jpg");
AccessCondition accessCondition = new AccessCondition
{
    IfMatchETag = blob.Properties.ETag
};

blob.UploadFromFile(@"C:\Azure\Demo\new-etag-example.jpg",
    FileMode.Open, accessCondition);
We are setting the condition under which a successful update can occur. There are other AccessCondition properties that allow you to further control a successful update as well. If the condition fails, a 412 Precondition Failed HTTP status code will be returned.
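A short sketch of detecting that failure, assuming we simply want to refresh our view of the BLOB before retrying:

try
{
    blob.UploadFromFile(@"C:\Azure\Demo\new-etag-example.jpg",
        FileMode.Open, accessCondition);
}
catch (StorageException ex)
{
    if (ex.RequestInformation.HttpStatusCode == 412) // Precondition Failed
    {
        // Another client updated the BLOB since we read its ETag;
        // fetch the latest properties (and new ETag) before retrying
        blob.FetchAttributes();
    }
}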

Pessimistic Concurrency

Pessimistic concurrency is called pessimistic because it assumes the worst case: that two tenants will attempt to update the same resource at the same time. Therefore, it allows a lock to be placed on the resource by the first tenant, barring all further updates by other tenants.

Azure Storage's form of pessimistic concurrency comes through leases. Both containers and BLOBs offer lease options. The difference between the two is that container leases are restricted to delete operations, while BLOB leases cover both write and delete operations.

Container Leases

Container leases are strictly delete leases. They ensure that the container is not deleted until either the length of time designated when the lease was obtained has expired, or the lease has been released (not just broken). The time span that can be set for a lease is either infinite or 15 to 60 seconds. The only exception is when the lease ID is provided with the delete request.
var leaseId = Guid.NewGuid();
blobContainer.AcquireLease(new TimeSpan(0, 0, 0, 60), leaseId.ToString());

// will throw a StorageException with a 412 error
blobContainer.Delete();
But you can either pass in an AccessCondition object with the specified lease ID, or release the lease before calling Delete, as in the following:

blobContainer.ReleaseLease(new AccessCondition { LeaseId = leaseId.ToString() });
or:
AccessCondition accessCondition = new AccessCondition { LeaseId = leaseId.ToString() };
blobContainer.Delete(accessCondition);
We have seen how to acquire and release leases, but you can also change, renew and break container leases. As a quick explanation:
  • ChangeLease: allows you to change the ID of the lease in question. This is useful when you want to transfer a lease to another party but maintain control over the lease.
  • RenewLease: does exactly what you would expect and allows you to renew a current lease. In previous versions, this could even be done on expired leases.
  • ReleaseLease: releases the container's current lease, freeing the container for another client to acquire an exclusive lease. It is important to understand the difference between ReleaseLease and BreakLease: BreakLease also ends the lease, but ensures that another client cannot acquire a new lease on the container until the current lease's time period has expired.

BLOB Leases

BLOB leases establish a lock on BLOBs for both write and delete operations, whereas container leases lock strictly delete operations; this is the primary difference between the two. The time span that can be set for a lease is either infinite or 15 to 60 seconds, just as with container leases. Because of the additional write lock that BLOB leases manage, a number of write-related operations take on the overhead of requiring the lease ID. These operations include:

  • BLOB upload operations
  • Setting BLOB metadata
  • Setting BLOB properties
  • Deleting BLOBs
  • Copying BLOBs

Therefore, acquiring a lease on an existing BLOB and attempting an update without the lease ID will result in a returned 412 HTTP status code:

CloudBlockBlob blob = blobContainer.GetBlockBlobReference(blobName);
Guid leaseId = Guid.NewGuid();

// Specifying a 45-second lease time before the lease is released for another consumer
blob.AcquireLease(new TimeSpan(0, 0, 45), leaseId.ToString());

// Returns a 412 HTTP status code: no lease ID was specified in the request, nor has the time period expired
blob.UploadFromFile(pathToFile, FileMode.Open);
You will need to provide an AccessCondition object specifying the current lease ID:

blob.UploadFromFile(pathToFile, FileMode.Open, new AccessCondition { LeaseId = leaseId.ToString() });
This same procedure applies to all of the write operations listed previously when a lease exists on a BLOB. As far as delete locks imposed by a BLOB lease, you can reference the “Container Leases” section under Concurrency for deleting BLOBs that have an active lease; the same mechanisms apply, only within the context of a BLOB.

Tip: A gotcha you need to be aware of regarding leases, for both BLOBs and containers: they have a limited range of time that you can set for the lease, as specified earlier. If you attempt to set a time span outside of 15 to 60 seconds (other than infinite), the service will return a 400 Bad Request HTTP status code. Although the status code makes sense, it can still be a bit vague as to what part of the request was bad (incorrect).

Security

Azure Storage security comes in a few different forms, but at the end of the day most of it boils down to access control. The first and least complex form of security is setting the BLOB container's public access permissions.

Container Permissions

The CreateIfNotExists operation for containers can accept a BlobContainerPublicAccessType enumeration. This is the public access setting for the container itself.
var blobContainer = _client.GetContainerReference(containerName);
blobContainer.CreateIfNotExists(BlobContainerPublicAccessType.Container);
The various types break down as follows:
  • Off: No public access; only the owner (someone with the access keys) can read resources from the container.
  • Blob: Public access to a specific BLOB, if an anonymous client knows the direct BLOB URI.
  • Container: Anonymous clients can retrieve information about the resources of the container, utilizing container methods such as ListBlobs to retrieve the container's resources.

That's all there is to it. The only other advice is to follow the Principle of Least Privilege and grant consumers the minimal access to your container that still allows them to operate.

Shared Access Signatures

We have already discussed that, in order to access BLOB storage resources, we need access to the storage access keys. So if we were to give additional clients direct access to BLOB storage resources, they would also need access to our storage access keys. Yes, the same storage access keys we defined earlier as the “keys to the kingdom”. So why on earth would we want to do that? We don't hand over the keys to our house when someone asks for our home address, do we? This is where Shared Access Signatures make it possible to provide clients access to BLOB storage resources without giving out our storage access keys.

A Shared Access Signature (SAS) provides a granular level of access control to specific resources. Shared Access Signatures allow you to specify what access rights a consumer has for a specific resource for a given amount of time. They can be created for containers, BLOBs, tables and queues, but in this article we will look strictly at SASs for BLOB storage. In the upcoming “Be Sure with Azure” on Table Storage, we'll look at how to utilize Shared Access Signatures for tables.

So what is a Shared Access Signature? Simply put, a SAS is a query string with a set of parameters that specifies the access rights and length of access. This query string accompanies a BLOB storage resource's URI. As we have already discussed, all BLOB storage resources (containers and BLOBs) have associated primary and secondary URIs for directly accessing the resource. When the generated SAS is appended to the URI of a specific resource and provided to a requesting consumer, the consumer will have only those access rights, for only the time, that the SAS specifies.

A BLOB storage Shared Access Signature has the following four constraints:

  • The storage resource: the type of resource (container, BLOB, table, queue)
  • Start time: when the SAS takes effect
  • Expiry time: how long the SAS is good for
  • Permissions: the allowed operations (read, write, delete, list)

How It Works

Before getting into how to create one, let's start by breaking down a generated SAS into its components. Then we'll look at what is required for generating our own SAS, BLOB storage SAS limitations and the problems we need to be aware of.

Example of a URI with a Shared Access Signature:

https://MyAccount.blob.core.windows.net/demo/photo1.jpg?sv=2014-02-14&sr=b&st=2014-08-26T13%3A58%3A15Z&se=2014-08-26T13%3A58%3A25Z&sp=rw&sig=E0YB9qQBwtDGzX0KGgrrJf%2F2QM7cHgx7heoTtQ55Cwo%3D
 

The components break down as follows:

  • blob URI (https://MyAccount.blob.core.windows.net/demo/photo1.jpg): the address of the BLOB resource.
  • sv=2014-02-14: Storage Version (required). Specifies which version of the storage service to use.
  • sr=b: Signed Resource (required). Specifies the type of resource, in this case b for BLOB.
  • st=2014-08-26T13%3A58%3A15Z: Start Time (optional, ISO 8601 format). The UTC date and time the SAS begins. Recommended to be avoided when possible, due to problems with time synchronization on remote clients.
  • se=2014-08-26T13%3A58%3A25Z: Signed Expiry (required, ISO 8601 format). The UTC date and time the SAS expires; combined with the start time (explicit or implied), it defines the valid time period.
  • sp=rw: Signed Permissions (required). The access permissions granted for this resource, in this case read and write.
  • sig=E0YB9qQBwtDGzX0KGgrrJf%2F2QM7cHgx7heoTtQ55Cwo%3D: Signature. An HMAC computed over the string-to-sign with the storage account key using the SHA256 algorithm, then Base64 encoded. This signature is what Azure uses to authenticate the provided SAS.

When a request for a BLOB storage resource comes in with an accompanying Shared Access Signature, the first thing Azure does is validate the authenticity of the supplied query parameters to ensure they have not been tampered with. It does this by computing the HMAC-SHA256 hash, generated using the Azure storage access key with the supplied parameters as the message, and verifying that it matches the provided sig parameter. Upon success, it then validates the start time (if provided), verifies that the SAS has not expired and checks the permissions. If all validations check out, the requestor is granted access to the resource specified in the URI, assuming the requested operation matches the supplied permissions.

Note: In order to take advantage of the security imposed by the Shared Access Signature, we need to limit/restrict the access given to the storage resource, whether that is a container or a BLOB. In other words, don't set the container's BlobContainerPublicAccessType to Container or Blob, or any consumer will be able to access the BLOB resource. See Shared Access Signatures Best Practices below for more information.

Generating a SAS

There is much more to be said about working with Shared Access Signatures, and we'll get to that, but for now let's look at creating our own for use through the Storage Client SDK. We'll do two things here: generate a Shared Access Policy that specifies the access rights and expiry time, and generate the SAS by providing that policy:

SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write,
    SharedAccessStartTime = DateTime.UtcNow.AddDays(1),
    SharedAccessExpiryTime = DateTime.UtcNow.AddDays(1).AddMinutes(10),
};

string sas = blob.GetSharedAccessSignature(policy);
In this specific example, we are generating a SAS for a specific BLOB resource by creating a policy that specifies read and write permissions only, with a start time one day in the future, lasting for 10 minutes. The returned SAS string can then be appended to the BLOB's URI and, together, they govern a consumer's access to the resource.
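As a rough sketch of the consumer's side, a client holding only the SAS string (and no account keys) can build its access from the resource URI plus the SAS; the account, container and download path below are illustrative assumptions:

StorageCredentials sasCredentials = new StorageCredentials(sas);
CloudBlockBlob sasBlob = new CloudBlockBlob(
    new Uri("https://mystorageaccount.blob.core.windows.net/demo/photo1.jpg"), sasCredentials);

// Succeeds only within the SAS's valid window and granted permissions
sasBlob.DownloadToFile(@"C:\Downloads\photo1.jpg", FileMode.Create);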

Tip: Don't specify a start time unless it is absolutely required, due to time clock synchronization issues on clients. The preceding policy example only aimed to show the available policy settings. See Shared Access Signatures Best Practices for more information.

For Containers and BLOBs, the following are the available access rights that can be defined.

Containers: 
  • Read
  • Write
  • List
  • Delete: note that this allows any BLOB in the container to be deleted. You can't grant access rights to delete the container itself.

BLOBs:

  • Read
  • Write
  • Delete

This is a simple example of how to use the Storage Client SDK operations to generate a SAS. But there is much more to think about when using a SAS, such as how to revoke a generated SAS and how to manage its lifetime.

Managing Shared Access Signatures Lifetime and Revocation

Until now, we have been using what are called ad hoc Shared Access Signatures, where we define the policy at the time we generate the SAS. With ad hoc Shared Access Signatures, we have managed to provide access to BLOB storage resources without having to hand out the storage access keys.

You might have observed that the HMAC-SHA256 signature (query parameter sig), generated to validate the authenticity of the provided query parameters, uses the Azure storage access key. So, you might be asking, what if we need to invalidate a SAS that has been handed out? You guessed it: we would need to regenerate the storage access key that was used. That also means anything else relying on that access key would be invalidated immediately and need updating. Other than regenerating the storage access key, our only means of managing the SAS is specifying very short expiration times.

Tip: It is always recommended to use short expiration periods for any generated Shared Access Signatures. This is one of the handful of Best Practice tips.

To avoid having to do this, Azure has the concept of Stored Access Policies, from which Shared Access Signatures can be derived.

Stored Access Policies

With the previous ad hoc SAS, we generated the policy along with the SAS. Stored Access Policies are policies generated separately from any SAS and stored on the server. For BLOB storage, Stored Access Policies are associated with containers, and a maximum of 5 policies can be stored at any given time. These policies can be viewed as templates that contain specific access rights and expiry times. Shared Access Signatures generated from an existing policy provide the following two noticeable benefits:

  • Ease of revoking a generated Shared Access Signature without the need to regenerate Storage Access Keys
  • Easily setup predefined access rights and expiry time periods 

Once Stored Access Policies are in place, we can associate one with a Shared Access Signature by specifying the additional signedidentifier (si) query parameter:

  • si=policy1: Signed Identifier (optional). A unique value, up to 64 characters in length, that correlates to an access policy specified on the container.

We can add Stored Access Policies through the Storage Client SDK by setting the permissions on a container, which include a collection of SharedAccessBlobPolicies, as in the following:

CloudBlobContainer container = _client.GetContainerReference(containerName);
BlobContainerPermissions permissions = container.GetPermissions();
permissions.SharedAccessPolicies.Add("policy1", new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddDays(1)
});

container.SetPermissions(permissions);
We can then incorporate this policy into a SAS generation as in the following:

// Notice we are omitting the details of the SharedAccessBlobPolicy
string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy(), "policy1");
Notice that we pass an empty SharedAccessBlobPolicy along with the policy's identifier; the access rights and expiry come from the Stored Access Policy itself. Then, should we need to revoke the generated Shared Access Signature, we only need to do one of the following (a sketch of the second option follows the list):
  • Change the identifier (name) of the policy
  • Remove the Stored Access Policy (in this case, policy1) from the container
  • Provide a new list of Stored Access Policies that retains a policy with the same name but different parameters
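A minimal sketch of the second option, removing the policy added above:

CloudBlobContainer container = _client.GetContainerReference(containerName);
BlobContainerPermissions permissions = container.GetPermissions();

// Any SAS derived from "policy1" is invalidated once the policy is removed
permissions.SharedAccessPolicies.Remove("policy1");
container.SetPermissions(permissions);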

So, based on everything we have seen, when a consumer attempts to access a BLOB resource with a valid SAS, they can proceed with the operations that align with whatever rights the SAS grants. But when a SAS is no longer valid, due to expiration for example, we can see an example of what attempting to acquire the resource might look like:

[Image: failed request using an expired SAS]
Shared Access Signatures Best Practices

The following is a list of what I deem the most important and security-related Shared Access Signature best practices documented by Microsoft Azure.

  1. HTTPS Only! You might have noticed that nowhere in the creation of the Stored Access Policies or the Shared Access Signatures did we mention an association with any client, user or application. This is because a SAS allows any consumer that possesses it to access the resource. Therefore, the #1 (or at least top-3) most important best practice is to use HTTPS for any creation and communication of a SAS.
     
  2. Adhere to the Principle of Least Privilege! This is only hinted at in the documentation, so I thought I would put it at the forefront. What I consider probably the second most important security best practice for using a SAS is: only issue the minimal credentials, for the fewest resources, that a client requires to perform the necessary operations.
     
  3. Use Policies! If at all possible use Stored Access Policies. Policies allow for the creation of revocable Shared Access Signatures and avoid the undesirable need to regenerate Storage Access Keys as a way to revoke ad hoc Shared Access Signatures.
     
  4. Issue short-term (near-term) policies! Whether you are issuing an ad hoc SAS or a revocable SAS has been compromised, policies with short-term expirations minimize the window in which exposed data is vulnerable.
     
  5. Validate data written using a SAS. When a client application writes data to your storage account, keep in mind that there can be problems with that data. If your application requires that data be validated or authorized before it is ready to use, perform this validation after the data is written and before it is used. This practice also protects against corrupt or malicious data being written to your account, whether by a user who properly acquired the SAS or by a user exploiting a leaked SAS.
     
  6. Don't always use a SAS. Not all operations against your BLOB storage account carry the same weight of importance, so don't assume that a SAS is the perfect fit for every situation. If the risks of an operation outweigh the SAS benefits, it's time to rethink the situation. Use a middle tier of your application to do the necessary business rule validation, authentication, auditing and any further management processes. Finally, sometimes we choose the harder path for no good reason; maybe a SAS is just adding unneeded overhead. Only you can judge the situation.

Management/Access Tools

Finally, we get to the last part of what I feel is important to be aware of when working with Azure BLOB Storage: the handful of BLOB storage management tools. Some are free, while others are not. These are not product reviews, but simply a way for you to become familiar with a few of the BLOB storage management tools that are available. The MSDN Windows Azure Storage blog also posted a Windows Azure Storage Explorers (2014) page listing a number of available management tools, which you can use to see what is available out in the wild!

Azure Storage Explorer

This is a free, open-source application for viewing and performing various operations on your BLOB storage resources. It provides the basic operations, such as reading, deleting, uploading and downloading storage resources, as well as creating ad hoc Stored Access Policies.

[Image: Azure Storage Explorer]

Azure Explorer

This is a very limited, free application by Cerebrata for performing some rudimentary operations against your Azure Storage account. The operations include uploading, deleting, renaming and viewing containers and BLOBs. It is basically a teaser for their paid product, Azure Management Studio.

In the following, you can see how Azure Explorer displays the hierarchical structure of the BLOBs we uploaded earlier using the “/” delimiter.

[Image: Azure Explorer]

Azure Management Studio

Azure Management Studio and Azure Explorer are both by Cerebrata. Azure Management Studio is what Cerebrata rolled the paid version of Azure Explorer (Cloud Storage Studio) into. It comes with a full list of features for operating on your Azure Storage account, including reading, writing and deleting BLOB storage resources, creating and managing Stored Access Policies, and managing properties and metadata on resources, just to name a few.

[Image: Azure Management Studio]

Conclusion

We have covered a lot, from working with BLOB containers and BLOB resources themselves, to concurrency and asynchronous operations, along with a healthy dose of the various security options that Azure provides out of the box. The intent was to provide more than just a quick how-to of Azure BLOB Storage: to cover some of the areas less talked about, point out some of the gotchas and provide tips that might help when working with Azure BLOB Storage. That being said, it would take more than a lengthy article to cover all the features and their variations. So, as you get underway with Azure BLOB storage, take advantage of the documentation for all the available features. Stay tuned for the next installment of “Be Sure with Azure”, where we will look closely at a similar topic: Azure Table Storage.

References:

BLOB Storage: http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-blobs/

Azure Subscriptions and Service Limits/Quotas: http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/

Shared Access Signatures, Part 1: http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/

Shared Access Signatures: http://msdn.microsoft.com/en-us/library/azure/jj721951.aspx

Storage Client Repository: https://github.com/Azure/azure-storage-net

Windows Azure Storage Explorers (2014): http://blogs.msdn.com/b/windowsazurestorage/archive/2014/03/11/windows-azure-storage-explorers-2014.aspx
