
Service Fabric cluster security

What’s Service Fabric?

If you are familiar with the concept of microservices and keep yourself up to date with the latest technologies, it is highly probable that you have heard about Service Fabric. When you break your monolithic applications into microservices, or design brand new microservices, you would like to have an orchestrator that can manage service reliability, lifetime, scaling, upgrade mechanisms, versioning, service discovery etc. Service Fabric is an orchestrator for microservices (and containers) developed by Microsoft. As per Microsoft, Service Fabric is not new; it already powers many existing highly scalable Azure services. This blog post is not about what Service Fabric is, but about what it takes to secure your Service Fabric cluster (environment). One last thing: a Service Fabric cluster is a collection of nodes that are used to host your services. Let’s see how to secure your cluster.

Cluster Security

When you create your cluster, you should be aware of the available security options. Most of the advanced things that are essential for security are not doable from the Azure Portal (at least they were not when I last checked) and can only be done via ARM scripts. The best time to do these things is during the initial provisioning of the cluster.

Cluster Authentication

When you create your cluster, there is always a worry that random nodes could join it. To prevent this, we can use X509 certificates. A certificate is added to all valid nodes during provisioning, and Service Fabric is made aware of this certificate. Only nodes with the certificate are allowed to communicate with each other and are accepted as part of the cluster. At the end I’ll provide a simple ARM script to provision a secure cluster.

Diagram of node-to-node communication
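As a rough sketch, node-to-node security boils down to a certificate section in the Microsoft.ServiceFabric/clusters resource of your ARM template. The values below are placeholders, not real ones:

```json
"certificate": {
  "thumbprint": "<cluster-certificate-thumbprint>",
  "x509StoreName": "My"
}
```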

Server Authentication

You can connect to the cluster from multiple clients like Visual Studio, PowerShell or Cloud Explorer. When you connect, you have to provide your cluster endpoint, for example: https://mycluster.com:19000/. You can push binaries, secrets etc. to your cluster using these clients. But what guarantee do you have that you are actually communicating with the real cluster you intended to connect to? During provisioning of the cluster you can define a server authentication certificate, and while connecting you can provide the thumbprint of that certificate. Service Fabric will make sure the cluster you are trying to connect to has a certificate (pfx) with the same thumbprint. If not, the connection will be declined.

Diagram of client-to-node communication
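For example, connecting to a secure cluster from PowerShell looks roughly like this; the endpoint and thumbprints below are placeholders, and this assumes the client certificate is installed in the CurrentUser\My store:

```powershell
Connect-ServiceFabricCluster -ConnectionEndpoint mycluster.com:19000 `
    -X509Credential `
    -ServerCertThumbprint "<server-certificate-thumbprint>" `
    -FindType FindByThumbprint `
    -FindValue "<client-certificate-thumbprint>" `
    -StoreLocation CurrentUser `
    -StoreName My
```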

Role Based Access

Azure Active Directory is used to secure access to the Service Fabric cluster management endpoints. A Service Fabric cluster offers several entry points to its management functionality, including the web-based Service Fabric Explorer and Visual Studio. We therefore create two AAD applications to control access to the cluster: one web application and one native application. For fine-grained authorization, the web app registered in AAD will have 2 app roles: ADMIN and READONLY.

Two security groups (SGs) will be created in the AAD tenant, one for cluster admins and the other for read-only access to the cluster. These SGs will be assigned to the AAD application roles as follows:

  • The read-only security group is added to the READONLY app role
  • The admin security group is added to the ADMIN app role

For more details on role-based access for Service Fabric clusters, please check the Service Fabric documentation. You can use the script available at http://servicefabricsdkstorage.blob.core.windows.net/publicrelease/MicrosoftAzureServiceFabric-AADHelpers.zip to create these AAD applications. If it is executed correctly with the proper parameters, it will produce output (the tenant and application IDs) that can be used in the cluster creation ARM script.
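As a sketch, the script’s output values end up in the azureActiveDirectory section of the cluster’s ARM template, roughly like this (all values below are placeholders):

```json
"azureActiveDirectory": {
  "tenantId": "<aad-tenant-id>",
  "clusterApplication": "<web-app-application-id>",
  "clientApplication": "<native-app-application-id>"
}
```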

Reverse Proxy, SSL and Service Fabric Explorer

  • While provisioning your cluster you should be aware of the reverse proxy concept. If you enable reverse proxy on your cluster (say on port 19088), you should know that all services hosted on your cluster can be accessed from outside by browsing https://mycluster.com:19088/service/api/operation. This means a service might be running on some random port (say 2000), and you would think that because port 2000 is blocked on the firewall, your service is hidden. But if you have enabled reverse proxy and the reverse proxy port is open on the firewall, then all your services are free to be browsed from outside. Scary, is it not? 🙂
  • Always use SSL to host your services
  • The management endpoint, exposed by default on port 19080, should not be exposed to the public internet. You can use a jump box and only allow access to the management endpoint (Service Fabric Explorer) via that jump box. This means that if anyone wants to browse Service Fabric Explorer to manage your cluster, they will first have to remote desktop into the jump box and access the explorer from there.

Microsoft has some good documentation on cluster security. Take some time and check that as well. In the next post I’ll share a cluster ARM script with which you can implement the majority of the steps mentioned above.

~cHeErS~
Currently listening to – ALL THE STARS [KENDRICK]


Cancelling async tasks – C#

The async/await programming pattern has become very common these days. For developers working on Windows Store apps, there is no way around learning this pattern.

A few days back I was working on an app with a simple control flow:

  • The user starts the application
  • Using async/await, we start a long-running process
  • Once the process is completed, the results are shown to the user

One piece of feedback we got was to allow the user to cancel this long-running process if they wish to do so. All we have to do is give the user a progress indicator, progress text and a button to cancel the current long-running async task.

Let’s see our existing code. In our view model we have a method that is used to load media files into a playlist:

public async Task<bool> AutoLoadPlaylist()
{
    try
    {
        this.InProgress = true;
        this.ProgressText = "We are trying to add your last saved playlist";
        if (string.IsNullOrEmpty(AutoSavedPlayListPath))
            return false;

        var files = await _command.AddPlayList(AutoSavedPlayListPath);
        await AddMusicFiles(files);
    }
    finally
    {
        this.InProgress = false;
    }
    return true;
}
 
Here we call another command which is responsible for actually creating this playlist:
 
public async Task<IList<StorageFile>> AddPlayList(string path)
{
    try
    {
        var file = await StorageFile.GetFileFromPathAsync(path);
        Playlist playList = await Playlist.LoadAsync(file);
        return playList.Files;
    }
    catch (FileNotFoundException)
    {
        AutoSavedPlayListPath = string.Empty;
        return null;
    }
}

And finally, we call the AutoLoadPlaylist(…) method from our view:

await this.viewModel.AutoLoadPlaylist();
 
Now, in order to provide a way to cancel this operation, we need to change our code a little bit to implement the task cancellation pattern. This is done via the CancellationTokenSource class defined in the System.Threading namespace.
 
First, let’s modify our command class method to accept a cancellation token as a method argument and then use that token while starting the task:
 

public async Task<IList<StorageFile>> AddPlayList(string path,
    CancellationToken cancellationToken)
{
    try
    {
        var file = await StorageFile.GetFileFromPathAsync(path);
        Playlist playList = await Playlist
            .LoadAsync(file)
            .AsTask(cancellationToken);
        return playList.Files;
    }
    catch (FileNotFoundException)
    {
        AutoSavedPlayListPath = string.Empty;
        return null;
    }
}

Now, modify the method in the view model to accept the cancellation token and pass it on to the command method. We also need to catch the OperationCanceledException which is thrown when a task is cancelled.

public async Task<bool> AutoLoadPlaylist(CancellationToken cancellationToken)
{
    try
    {
        this.InProgress = true;
        this.ProgressText = "We are trying to add your last saved playlist";
        if (string.IsNullOrEmpty(AutoSavedPlayListPath))
            return false;

        var files = await _command.AddPlayList(AutoSavedPlayListPath,
            cancellationToken);
        await AddMusicFiles(files);
    }
    catch (OperationCanceledException)
    {
        // the user cancelled the operation; nothing else to do
    }
    finally
    {
        this.InProgress = false;
    }
    return true;
}

Now modify the code that calls the view model’s AutoLoadPlaylist(…) method and pass a cancellation token:

CancellationTokenSource cts;
CancellationToken _cancellationToken;

private async void InitializeControls()
{
    try
    {
        cts = new CancellationTokenSource();
        _cancellationToken = cts.Token;

        await this.viewModel.AutoLoadPlaylist(_cancellationToken);
    }
    catch (Exception ex)
    {
        var dlg = new MessageDialog(ex.Message);
        await dlg.ShowAsync();
    }
}
 
Infrastructure-wise we are done. The only thing left now is to provide a button that the user will click to cancel this long-running operation. In your cancel button’s click (tapped) event handler, add this code:
 
if (cts != null)
{
    this.viewModel.ProgressText = "Trying to cancel the operation";
    cts.Cancel();
}

Make sure you cancel the correct CancellationTokenSource object 🙂
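One more detail worth knowing: a CancellationTokenSource cannot be reset once it has been cancelled, so if the user can restart the operation you need a fresh instance every time. A minimal sketch, reusing the cts field from above:

```csharp
// A cancelled CancellationTokenSource cannot be reused;
// dispose the old instance and create a fresh one per operation.
if (cts != null)
{
    cts.Dispose();
}
cts = new CancellationTokenSource();
_cancellationToken = cts.Token;
```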

Exception dispatch info

Imagine you have a simple code as shown below:

class Program
{
    static void Main()
    {
        try
        {
            SimpleErrorCheck();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }

    public static void SimpleErrorCheck()
    {
        try
        {
            int b = 1;
            int result = (b / (b - 1));
            Console.WriteLine("Result is {0}", result);
        }
        catch (Exception ex)
        {
            // log the exception here
            throw;
        }
    }
}

A call is made to the SimpleErrorCheck() method, where we do a calculation that results in a divide-by-zero exception.

Using throw ex overwrites the stack trace, while using a bare throw keeps it intact so that you can go back and see where exactly the error happened. You can also examine the generated IL: a bare throw compiles to the rethrow instruction, while throw ex compiles to a regular throw.

A bare throw is only available inside a catch block. If there is something, such as rollback code, that you would like to do outside of the catch block before throwing the error, you have to rethrow the exception object, which overwrites the stack trace.
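To make the problem concrete, here is a minimal sketch of that rollback scenario; Rollback() is a hypothetical cleanup method, not part of the sample above:

```csharp
Exception error = null;
try
{
    SimpleErrorCheck();
}
catch (Exception ex)
{
    error = ex; // capture so we can continue outside the catch block
}

Rollback(); // hypothetical cleanup that must run outside the catch block

if (error != null)
{
    throw error; // same as throw ex: the original stack trace is lost
}
```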

In .NET 4.5, the ExceptionDispatchInfo class (in the System.Runtime.ExceptionServices namespace) was introduced to solve this problem.

Let’s change our code to use the ExceptionDispatchInfo class:
 
class Program
{
    static void Main()
    {
        try
        {
            SimpleErrorCheck();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
    }

    public static void SimpleErrorCheck()
    {
        ExceptionDispatchInfo exInfo = null;

        try
        {
            int b = 1;
            int result = (b / (b - 1));
            Console.WriteLine("Result is {0}", result);
        }
        catch (Exception ex)
        {
            exInfo = ExceptionDispatchInfo.Capture(ex);
        }

        // do some operation outside the catch block
        if (exInfo != null)
        {
            exInfo.Throw();
        }
    }
}

If you run this and examine the stack trace printed to the console, you will see that the stack trace of the original exception is preserved and points to the exact line of code where it was thrown.

Fluent API – Improvements

Please read my earlier article on Fluent API before reading this post. I wrote that article to get you started with Fluent API design. There are a few essential improvements that we should make to that implementation.

In our Fluent interface, all methods were returning the same type of object, IFluentPolicyEngine:

public interface IFluentPolicyEngine
{
    IFluentPolicyEngine CreatePolicy(string name, string description);
    IFluentPolicyEngine PerformsOperationOnData(string operation);
    IFluentPolicyEngine HavingDataFilter(string type);
    IFluentPolicyEngine Where(string filter);
    IFluentPolicyEngine Validate();
    void Save();
}

This makes it very easy to chain methods, but in real-world applications it will not work well. There should be a predefined way of chaining methods: the above interface lets you chain methods in any order, while in practice there will be business rules that constrain how these chains can be built. Designing your fluent API based on business rules helps in:

  • Business rule driven data validation
    Suppose our policy has a business rule: a policy can have filter statements added via Where(…) only if a data filter has been added to the policy via HavingDataFilter(…). It is not possible to restrict chaining like this using the above interface. The only option is to do a check inside the API method and throw an exception. Nahhh :(, we should strive for compile-time safety rather than runtime checks.
  • Improving code readability
    If the API is designed carefully based on business rules, then rather than sprinkling validation code across all methods we can have far better data validation routines

Let’s modify our fluent API to support the following business rules:

  • A client can only add a data filter after adding a business operation
  • A client can only add filter statements via Where(…) after a data filter is added via HavingDataFilter(…)
  • Validation can only be performed after an operation and at least one filter statement have been added to the policy
  • Save can only be called on a validated policy

New interface design looks like this:

public interface IFluentPolicyEngine
{
    IFluentPolicyEngine CreatePolicy(string name, string description);
    IFluentPolicyWithOperation PerformsOperationOnData(string operation);
}

public interface IFluentPolicyWithOperation
{
    IFluentPolicyWithFilter HavingDataFilter(string type);
}

public interface IFluentPolicyWithFilter
{
    IFluentPolicyWithFilter Where(string filter);
    IValidatedPolicy Validate();
}

public interface IValidatedPolicy
{
    void Save();
}

Modify the class that implemented the old single interface so that it implements all the new interfaces, changing the method return types accordingly. The client code remains the same.

Now you can only write client code with proper business-rule-driven method chaining. Ad hoc chaining will result in compile-time errors. Download the code and try changing things.

Download Code: http://1drv.ms/1o3lnyC

Starting with Fluent API

Last week, while trying out a new logging library, Serilog, I stumbled upon a new style of API design. It is known as a Fluent API and looked like an interesting topic to understand.

What is Fluent API?

This term was coined by object-oriented programming guru Martin Fowler in 2005. Not so new 🙂 His idea was to develop client-friendly code that is easy to understand (read) and easy to maintain.

Business processes are defined by developing a mix of objects and then chaining them together via some internal domain-specific language. LINQ is a perfect place to start. The way multiple methods are chained together to create a SQL-like business operation is an example of a Fluent API. We keep adding Where(), Select(), Take() etc. to our code statement, and each operation returns an IEnumerable object. This is commonly known as method chaining. See the following example to understand it better:

var underAgeCustomerCount = Customers
    .Where(c => c.Age <= 18)
    .OrderBy(c => c.FirstName)
    .Count();

How is it done?

I’ll take one of my real-life projects as a sample and show how we can add a very basic Fluent interface to it.

Business Goal of the project

This project contains a policy engine where users can define policies based on which operations are performed on data. Users can create multiple policies and save them. When data passes through this policy engine, the appropriate policy kicks in and a specific operation is performed.

Each policy consists of an operation and a set of filters. Consider a filter as a SQL WHERE clause. It can have multiple filter statements, and these statements are joined together by either AND or OR. So a filter can look like this:

WHERE Name = 'Gaurav Sharma' AND Age > 18 AND City = 'HYD'

The above filter has 3 statements and uses AND as the separator.

An operation is a method that is executed if the data matches the defined filter.

Current Implementation

There are POCO classes defined for Policy, Operation, Filter and FilterStatement. Then there is a PolicyManager class that provides functionality to validate and save a policy. The code looks like this:

public class Policy
{
    public Guid ID { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public Filter Filter { get; set; }
    public Operation Operation { get; set; }
}

public class Filter
{
    public string Type { get; set; }
    public List<FilterStatement> FilterStatements { get; set; }
}

public class FilterStatement
{
    public string Statement { get; set; }
}

public class Operation
{
    public string Name { get; set; }
}

public class PolicyManager
{
    public void Validate(Policy policy)
    {
        // check for invalid properties in the policy object
        // throw an exception in case of validation failures
    }

    public Policy Save(Policy policy)
    {
        // create the policy and save it to the store

        // create a new ID and update the object
        policy.ID = Guid.NewGuid();

        return policy;
    }
}
 
This is a very basic representation of what we are actually doing in our project, but for this blog I think it is sufficient. Now if you had to write client code for this, it would look similar to:
 
var policy = new Policy();
policy.Name = "Sample policy";
policy.Description = "This is a sample policy";

var operation = new Operation();
operation.Name = "Create Order";

var statement1 = new FilterStatement();
statement1.Statement = "City = New Delhi";

var statement2 = new FilterStatement();
statement2.Statement = "CustomerName = Gaurav Sharma";

var statement3 = new FilterStatement();
statement3.Statement = "TotalAmount > 100000";

var filterStatements = new List<FilterStatement>();
filterStatements.Add(statement1);
filterStatements.Add(statement2);
filterStatements.Add(statement3);

var filter = new Filter();
filter.FilterStatements = filterStatements;

policy.Operation = operation;
policy.Filter = filter;

var manager = new PolicyManager();
manager.Validate(policy);
policy = manager.Save(policy);

Console.WriteLine("Successfully saved policy with ID {0}", policy.ID);

Remember the good old ADO.NET days when we used to write this type of code for data operations? Remember DataConnection, DataReader, DataSet etc.? It was painful.

We can clean this up a little more by using different constructors and object initializers. Let’s skip that for now. Instead, we will add a very simple Fluent interface implementation to our project and see how it improves the client code.

public interface IFluentPolicyEngine
{
    IFluentPolicyEngine CreatePolicy(string name, string description);
    IFluentPolicyEngine PerformsOperationOnData(string operation);
    IFluentPolicyEngine HavingDataFilter(string type);
    IFluentPolicyEngine Where(string filter);
    IFluentPolicyEngine Validate();
    void Save();
}
 
Note the return type defined for our new fluent interface methods. This is what lets us chain these methods together. Let’s implement this interface:
 
public class FluentPolicyEngine : IFluentPolicyEngine
{
    public Policy Policy { get; set; }

    public FluentPolicyEngine()
    {
        Policy = new Policy();
    }

    public IFluentPolicyEngine CreatePolicy(string name, string description)
    {
        Policy.Name = name;
        Policy.Description = description;
        return this;
    }

    public IFluentPolicyEngine PerformsOperationOnData(string operation)
    {
        var op = new Operation
        {
            Name = operation
        };
        Policy.Operation = op;
        return this;
    }

    public IFluentPolicyEngine HavingDataFilter(string type)
    {
        var filter = new Filter
        {
            FilterStatements = new List<FilterStatement>(),
            Type = type
        };
        Policy.Filter = filter;
        return this;
    }

    public IFluentPolicyEngine Where(string filter)
    {
        var filterStatement = new FilterStatement
        {
            Statement = filter
        };
        Policy.Filter.FilterStatements.Add(filterStatement);
        return this;
    }

    public IFluentPolicyEngine Validate()
    {
        if (Policy.Operation == null)
            throw new InvalidDataException("Operation is not provided.");

        if (!(Policy.Filter != null
            && Policy.Filter.FilterStatements != null
            && Policy.Filter.FilterStatements.Count > 0))
            throw new InvalidDataException("Filter is missing.");

        return this;
    }

    public void Save()
    {
        Policy.ID = Guid.NewGuid();
    }
}
 
This is again a very simple implementation of our interface, but sufficient for this write-up. The methods are simple and self-explanatory. Now our new client code looks like:
 
var policyEngine = new FluentPolicyEngine();
policyEngine
    .CreatePolicy("Fluent Policy", "Fluent policy sample")
    .PerformsOperationOnData("Create Order")
    .HavingDataFilter("All")
    .Where("City = New Delhi")
    .Where("CustomerName = Gaurav Sharma")
    .Where("TotalAmount > 100000")
    .Validate()
    .Save();

Console.WriteLine("Successfully saved policy with ID {0}",
    policyEngine.Policy.ID);
 
See how readable this code is now.
Some important points
  • Unless the client developer understands the business domain behind the Fluent API, it is very tough to write this type of code. Think about a developer who knows nothing about SQL trying his hands at LINQ.
  • Debugging is a little difficult in this type of client code. You cannot put a breakpoint on a specific method call in the chain.
  • There are multiple ways of doing what I just showed you. For example, in C# we can use extension methods to achieve similar results.
  • Many configuration APIs provide fluent interfaces. Entity Framework is one of them.
  • Although it simplifies client code, your API code can become complex. Keep a watch on that.
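For example, the extension-method approach mentioned above could look roughly like this. This is a hypothetical sketch, not part of the sample project; it reuses the Policy, Operation, Filter and FilterStatement classes from earlier:

```csharp
public static class PolicyExtensions
{
    // Each extension mutates the policy and returns it, so calls can be chained.
    public static Policy WithOperation(this Policy policy, string name)
    {
        policy.Operation = new Operation { Name = name };
        return policy;
    }

    public static Policy WithFilter(this Policy policy, string statement)
    {
        if (policy.Filter == null)
            policy.Filter = new Filter { FilterStatements = new List<FilterStatement>() };

        policy.Filter.FilterStatements.Add(new FilterStatement { Statement = statement });
        return policy;
    }
}

// Usage:
// var policy = new Policy()
//     .WithOperation("Create Order")
//     .WithFilter("City = New Delhi");
```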

It is not easy to create an efficient Fluent API. You have to think a lot. So think.then.code.

Download Code: http://1drv.ms/1khBvvD