Tuesday, December 17, 2013

CSLA, Inversion of Control and Unit Testing: Part 1 Sending an interface to the CSLA Data Portal

This post is part one in a multi-part series on Inversion of Control (IoC) and unit testing in the CSLA framework.

As most of you know, IoC can reduce coupling within our applications.  CSLA has many new features that support this.  For the purposes of this blog post I'm using CSLA version 4.5.4.

How do we get started?  I'll start by creating the interfaces that we will use.  The five most common stereotypes in CSLA are BusinessBase, BusinessListBase, ReadOnlyBase, ReadOnlyListBase and CommandBase.  For each of these we can create an interface for our concrete class that in turn inherits from the appropriate CSLA interface for that type:

A sample interface for BusinessBase:

    public interface ITestBusinessBase : Csla.IBusinessBase
    {
        int Id { get; }
        string Name { get; set; }
        void SomeMethod(string someParameter);
    }


A sample interface for BusinessListBase and a leaf:

    public interface ITestBusinessBaseLeaf : Csla.IBusinessBase
    {
        int Id { get; }
        string Name { get; set; }
    }

    public interface ITestBusinessCollectionBase : Csla.IBusinessListBase<ITestBusinessBaseLeaf>
    {
    }

A sample interface for ReadOnlyBase:

    public interface ITestReadOnlyBase : Csla.IReadOnlyBase
    {
        int Id { get; }
        string Name { get; }
    }

A sample interface for ReadOnlyListBase:

    public interface ITestReadOnlyBaseLeaf : Csla.IReadOnlyBase
    {
        int Id { get; }
        string Name { get; }
    }

    public interface ITestReadOnlyListBase : Csla.IReadOnlyListBase<ITestReadOnlyBaseLeaf>
    {
    }

A sample interface for CommandBase:

    public interface ITestCommandBase : Csla.ICommandBase
    {
        bool SomeResult();
    }

One thing to notice here is that there are no static methods, even though the CSLA samples generally use static factory methods.  Why is this?  Unfortunately, static methods cannot be declared on an interface in C#, so if we still use factory methods we need to be aware that any code that calls them may be tied to the concrete class that defines them.  This may, or may not, be OK depending on what we are trying to do.  It is important to note that the static factory method could still use an IoC container to resolve the object factory and the instance it is creating.

Ok, so what's next?  I use an IoC container; for the purposes of this post I'll use Autofac.  Generally in our production code we are going to use the same IoC container for all operations.  I use a Service Locator pattern for this.  There are some schools of thought that declare this pattern to be somehow wrong, but I don't subscribe to that philosophy.  If you use ASP.NET MVC you are already using service locators to resolve your routes and to bind your postbacks to your models.  I'm not going to get too far into why the Service Locator pattern may in some cases be appropriate, but suffice it to say that if you are using CSLA, you will need to buy into the idea that Service Locators are at least not harmful.

The following static class gives me program-wide access to my IoC container.  At some point during application startup I need to set the Container property:

    public static class IoC
    {
        public static Autofac.IContainer Container { get; set; }
    }
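How the Container property gets set is up to the host application.  As a sketch only (the registrations reference types defined later in this post, and your exact registrations will vary), an Autofac bootstrap might look like this:

```csharp
using Autofac;

public static class Bootstrapper
{
    // Called once at application startup, before any factory methods run.
    // These registrations are illustrative, not a definitive configuration.
    public static void Initialize()
    {
        var builder = new ContainerBuilder();

        // Open generic mapping: IObjectFactory<T> resolves to ObjectFactory<T>
        builder.RegisterGeneric(typeof(ObjectFactory<>))
               .As(typeof(IObjectFactory<>));

        // Map each business interface to its concrete CSLA class
        builder.RegisterType<TestBusinessBase>().As<ITestBusinessBase>();

        IoC.Container = builder.Build();
    }
}
```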

Another CSLA component that we may take a dependency on is the data portal itself.  I generally abstract this away behind my own implementation of CSLA's IDataPortal<T> interface.  For the purpose of this post, I created an IObjectFactory interface that also exposes the "child" data portal methods that are not part of the IDataPortal<T> interface:

    public interface IObjectFactory<T> : IDataPortal<T>
    {
        TC CreateChild<TC>();
        TC CreateChild<TC>(params object[] parameters);
        TC FetchChild<TC>();
        TC FetchChild<TC>(params object[] parameters);
        void UpdateChild(object child);
        void UpdateChild(object child, params object[] parameters);
    }

The concrete implementation of the ObjectFactory class would look like this:

    public sealed class ObjectFactory<T> : Common.Interfaces.IObjectFactory<T> where T : class, IMobileObject
    {
        public void BeginCreate(object criteria, object userState)
        {
            DataPortal.BeginCreate(criteria, CreateCompleted, userState);
        }

        public void BeginCreate(object criteria)
        {
            DataPortal.BeginCreate(criteria, CreateCompleted);
        }

        public void BeginCreate()
        {
            DataPortal.BeginCreate(CreateCompleted);
        }

        public void BeginDelete(object criteria, object userState)
        {
            DataPortal.BeginDelete(criteria, DeleteCompleted, userState);
        }

        public void BeginDelete(object criteria)
        {
            DataPortal.BeginDelete(criteria, DeleteCompleted);
        }

        public void BeginExecute(T command, object userState)
        {
            DataPortal.BeginExecute(command, ExecuteCompleted, userState);
        }

        public void BeginExecute(T command)
        {
            DataPortal.BeginExecute(command, ExecuteCompleted);
        }

        public void BeginFetch(object criteria, object userState)
        {
            DataPortal.BeginFetch(criteria, FetchCompleted, userState);
        }

        public void BeginFetch(object criteria)
        {
            DataPortal.BeginFetch(criteria, FetchCompleted);
        }

        public void BeginFetch()
        {
            DataPortal.BeginFetch(FetchCompleted);
        }

        public void BeginUpdate(T obj, object userState)
        {
            DataPortal.BeginUpdate(obj, UpdateCompleted, userState);
        }

        public void BeginUpdate(T obj)
        {
            DataPortal.BeginUpdate(obj, UpdateCompleted);
        }

        public T Create()
        {
            return DataPortal.Create<T>();
        }

        public TC CreateChild<TC>()
        {
            return DataPortal.CreateChild<TC>();
        }

        public TC CreateChild<TC>(params object[] parameters)
        {
            return DataPortal.CreateChild<TC>(parameters);
        }

        public T Create(object criteria)
        {
            return DataPortal.Create<T>(criteria);
        }

        public async Task<T> CreateAsync(object criteria)
        {
            return await DataPortal.CreateAsync<T>(criteria);
        }

        public async Task<T> CreateAsync()
        {
            return await DataPortal.CreateAsync<T>();
        }

        public event EventHandler<DataPortalResult<T>> CreateCompleted;

        public void Delete(object criteria)
        {
            DataPortal.Delete<T>(criteria);
        }

        public Task DeleteAsync(object criteria)
        {
            return DataPortal.DeleteAsync<T>(criteria);
        }

        public event EventHandler<DataPortalResult<T>> DeleteCompleted;

        public T Execute(T obj)
        {
            return DataPortal.Execute<T>(obj);
        }

        public async Task<T> ExecuteAsync(T command)
        {
            return await DataPortal.ExecuteAsync<T>(command);
        }

        public event EventHandler<DataPortalResult<T>> ExecuteCompleted;

        public T Fetch()
        {
            return DataPortal.Fetch<T>();
        }

        public T Fetch(object criteria)
        {
            return DataPortal.Fetch<T>(criteria);
        }

        public async Task<T> FetchAsync(object criteria)
        {
            return await DataPortal.FetchAsync<T>(criteria);
        }

        public async Task<T> FetchAsync()
        {
            return await DataPortal.FetchAsync<T>();
        }

        public TC FetchChild<TC>()
        {
            return DataPortal.FetchChild<TC>();
        }

        public TC FetchChild<TC>(params object[] parameters)
        {
            return DataPortal.FetchChild<TC>(parameters);
        }

        public event EventHandler<DataPortalResult<T>> FetchCompleted;

        public ContextDictionary GlobalContext
        {
            get { return ApplicationContext.GlobalContext; }
        }

        public T Update(T obj)
        {
            return DataPortal.Update<T>(obj);
        }

        public async Task<T> UpdateAsync(T obj)
        {
            return await DataPortal.UpdateAsync<T>(obj);
        }

        public event EventHandler<DataPortalResult<T>> UpdateCompleted;

        public void UpdateChild(object child)
        {
            DataPortal.UpdateChild(child);
        }

        public void UpdateChild(object child, params object[] parameters)
        {
            DataPortal.UpdateChild(child, parameters);
        }
    }

This object factory implementation requires that our IoC container support open generics, and the type of object that we pass in must implement CSLA's IMobileObject interface.  IBusinessBase, IBusinessListBase, IReadOnlyBase, IReadOnlyListBase and ICommandBase all implement this interface, so they will all be usable with our custom object factory.  If I always make sure to use the IObjectFactory interface to resolve my data portal calls, I will be able to mock the entire thing out when writing unit tests.

To create our concrete classes we can follow this pattern:

    public class TestBusinessBase : BusinessBase<TestBusinessBase>, ITestBusinessBase
    {
        public static readonly PropertyInfo<int> IdProperty = RegisterProperty<int>(c => c.Id);
        public int Id
        {
            get { return GetProperty(IdProperty); }
            private set { LoadProperty(IdProperty, value); }
        }

        public static readonly PropertyInfo<string> NameProperty = RegisterProperty<string>(c => c.Name);
        public string Name
        {
            get { return GetProperty(NameProperty); }
            set { SetProperty(NameProperty, value); }
        }

        public void SomeMethod(string someParameter)
        {
            throw new NotImplementedException();
        }

        public static ITestBusinessBase CreateTestBusinessBase()
        {
            return IoC.Container.Resolve<IObjectFactory<ITestBusinessBase>>().Create();
        }

        public async static Task<ITestBusinessBase> GetTestBusinessBaseByIdAsync(int id)
        {
            return await IoC.Container.Resolve<IObjectFactory<ITestBusinessBase>>().FetchAsync(id);
        }
    }

This is a sample implementation of our BusinessBase interface; all the stereotypes will generally look the same.  There are a few things to notice here:

- Our concrete class still inherits from CSLA's BusinessBase class as well as our ITestBusinessBase interface
- Using the static factory methods ties any calling code to the concrete class TestBusinessBase, since that is where the factory methods are defined
- We always return our interface instead of the concrete class type
- We use the global IoC class to resolve by interface instead of calling the concrete object factory or the concrete TestBusinessBase.  This allows us to return any implementation of the IObjectFactory and/or ITestBusinessBase we desire from the IoC container.  This will be very useful when it comes to creating unit tests.

If you don't like the idea of using static factory methods, any code that would call a factory method can call the object factory directly instead.  The downside is that we lose the way factory methods abstract away knowledge of the allowed criteria and parameters to make life easy for the consumers of our classes.  The person using our classes will instead need to know the particulars of the parameters the factory accepts.  For the above class we could call the following instead of calling the GetTestBusinessBaseByIdAsync factory method:

    await IoC.Container.Resolve<IObjectFactory<ITestBusinessBase>>().FetchAsync(id);

To understand what we need to do next we have to understand a little about how CSLA's data portal works.  When we ask it to create, fetch or save an object, it creates an instance of that type (unless we're using the object factory pattern, which is not covered here).  The type to create is determined by the generic type parameter on the data portal call.  For example, suppose I call the following:

    DataPortal.Create<TestBusinessBase>()

The CSLA data portal will attempt to create an instance of the TestBusinessBase object and then call its DataPortal_Create method.  But that's not what our code ends up doing.  What our code ultimately calls on the CSLA data portal is this:

    DataPortal.Create<ITestBusinessBase>()

CSLA's data portal won't know what concrete class to create in this case.  ITestBusinessBase is just an interface.  So what do we do?  This is where CSLA's IDataPortalActivator interface comes into play.  This interface allows us to do two things:

- Tell CSLA what instance to create (CreateInstance method)
- Set any properties on that instance after it is created (InitializeInstance method)

There are a few items to note here.  The CreateInstance method receives only one parameter: the type being requested.  In normal CSLA operation this would be a concrete type, but in our case we are going to be passed the ITestBusinessBase interface.  There is an implication to this: we do not have access to any other parameters sent along with the data portal call.  This means we only have a few ways to create the required instance from the type information alone:

- A convention that, given the type of the interface, lets us determine the concrete type we want to create

    public object CreateInstance(Type requestedType)
    {
        if (requestedType == null)
        {
            throw new ArgumentNullException("requestedType");
        }

        return requestedType.IsInterface ? CreateConcreteTypeByConvention(requestedType) : Activator.CreateInstance(requestedType);
    }

- Pulling something out of global scope, such as an IoC container, to resolve the concrete type we want to create

    public object CreateInstance(Type requestedType)
    {
        if (requestedType == null)
        {
            throw new ArgumentNullException("requestedType");
        }

        return requestedType.IsInterface ? IoC.Container.Resolve(requestedType) : Activator.CreateInstance(requestedType);
    }
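The CreateConcreteTypeByConvention helper used in the first option isn't part of CSLA and isn't shown above; here is one possible sketch, assuming the convention "drop the leading 'I' from the interface name and find a matching type in the same assembly":

```csharp
using System;
using System.Linq;

public static class ConventionHelper
{
    // Given an interface type such as ITestBusinessBase, create an instance
    // of a concrete type named TestBusinessBase found in the same assembly.
    // This is one possible convention, not the only one.
    public static object CreateConcreteTypeByConvention(Type requestedType)
    {
        var concreteName = requestedType.Name.Substring(1); // drop leading 'I'
        var concreteType = requestedType.Assembly.GetTypes()
            .FirstOrDefault(t => t.IsClass && !t.IsAbstract &&
                                 t.Name == concreteName &&
                                 requestedType.IsAssignableFrom(t));

        if (concreteType == null)
        {
            throw new InvalidOperationException(
                "No concrete type found by convention for " + requestedType.Name);
        }

        return Activator.CreateInstance(concreteType);
    }
}
```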

The InitializeInstance method allows us to set and initialize properties on the newly created instance, but it only receives a reference to the instance created in the CreateInstance method.  Like the CreateInstance method, any information we sent along with the initial data portal request is not directly available to us.  For example, we cannot pass the IoC container as a parameter to a factory method that uses the CSLA data portal and then use that parameter in the CreateInstance method to create an instance of the class we want, without somehow storing it in a global context.

CSLA does provide a way to set up parameters in a global context that can be used in the CreateInstance and InitializeInstance methods.  Implementing the IInterceptDataPortal interface provides us with methods we can use for this purpose.  This interface has Initialize and Complete methods that run at the start and end of our data portal call and allow us to set up and then clean up any global context needed.  Be aware that there are some complications around the cleanup, particularly if an error occurs in the request.  I won't be covering the IInterceptDataPortal interface in this blog post.  The best person to ask about using it is Jason Bock.

A sample IDataPortalActivator class may look like this:

    public sealed class ObjectActivator : IDataPortalActivator
    {
        public object CreateInstance(Type requestedType)
        {
            if (requestedType == null)
            {
                throw new ArgumentNullException("requestedType");
            }

            return requestedType.IsInterface ? IoC.Container.Resolve(requestedType) : Activator.CreateInstance(requestedType);
        }

        public void InitializeInstance(object obj)
        {
        }
    }


To tell CSLA to use our new activator instead of the default implementation, execute the following line of code:

    Csla.ApplicationContext.DataPortalActivator = new ObjectActivator();

Notice this is also a service locator pattern.  I want to reiterate this idea.  Like ASP.NET MVC, CSLA is built on top of service locators.  This call is no different.  In order to use frameworks like ASP.NET MVC or CSLA you have to buy into the idea that the service locator pattern is sometimes desirable.

In this post we learned how to create and pass around CSLA objects as interfaces instead of concrete classes using the CSLA data portal.  This will allow us much greater flexibility when it comes to unit testing and mocking.  In my next post we will talk about how to similarly abstract away data access calls to assist in unit testing and mocking.

Saturday, November 23, 2013

Windows Azure SQL Databases, Cost Effective or Not?

Last week I gave a Windows Azure presentation at Modern Apps Live in Orlando.  The session itself wasn't well attended, which matches many of my past experiences with Azure presentations.  I spoke to several people afterwards and it seems that much of the disinterest around Azure has to do with its perceived high cost compared to alternatives.

To some extent this is Microsoft's fault: when the service initially came out it wasn't all that well priced.  But for the most part that has changed, and Azure is a much better deal than people realize, particularly when looking at factors around redundancy.  For example, there are few quicker or cheaper ways to get a redundant T-SQL database up than by using a Windows Azure SQL Database (WASD).

Companies usually have about three options here.  They can use a private cloud (either an existing one or a new build), they can use some sort of IaaS provider like Rackspace (or Azure IaaS), or they can use a PaaS offering like WASD.

The private cloud cost comparison is harder to quantify as there are so many situation-specific factors that need to be understood in the calculation.  It is usually true that for companies with existing private clouds it is cheaper to leverage a sunk investment than to move to an IaaS or PaaS SQL solution in the short term.  I want to reiterate: in the short term.  That is because in the short term much of the cost involved in creating that private cloud is sunk.  In the long term (a five-year horizon) that calculation may change as hardware needs to be refreshed, and the number of services you plan on deploying to your private cloud will influence the amount of hardware you will need to keep up to date to continue your private cloud investment.  This is an area where momentum tends to come into play; that is to say, the path of least resistance for IT managers is to continue to refresh their private cloud infrastructure rather than to come up with a plan for moving a portion of their services into a hybrid cloud.

In general, if you have a private cloud and want to do a long-term analysis of the benefits of moving SQL services to a hybrid or public cloud, these are some of the factors you want to keep in mind.
  • If I move some of my services to a hybrid/public cloud, what will my data transfer cost me in the cloud vs. how much does it cost me now?  A corollary to this is the question of whether I should move the applications that use the data into the cloud as well, to limit that data transfer cost.  Remember, to do an apples-to-apples comparison here, moving data around on your current infrastructure has an associated cost as well (hardware to deal with the amount of data flowing through the network, dark fiber between facilities, external lines that have a data charge for lighting up buildings or employees in the field, etc.).  The cost of data transfer is most likely the greatest single monetary factor in this equation.
  • If you move some portion of your services to the cloud, can you invest less in hardware for your private cloud due to the lower need for services in your own facilities?
  • What kind of data do you have and what are the risks of maintaining it in different locations?  The common example is HIPAA-regulated data, which generally is not kept in public clouds, though Azure has been working on solutions for that too: http://www.eweek.com/c/a/Health-Care-IT/Microsoft-Adds-HIPAA-Compliance-in-Windows-Azure-for-Cloud-Health-Data-446671/
  • Do I need a solution with geographically diverse databases and, if so, what are my requirements for how often the data needs to be kept in sync?  WASD can be set up in many different geographic regions and kept in sync over time.  The sync interval can be set as low as every five minutes, but due to the hub-and-spoke model even that may take as long as 10 minutes for data changes to be distributed from one spoke to another.
  • How much does your OS licensing cost you?
  • Can the applications that use the database use SQL security instead of Windows integrated security?  WASD only supports the SQL security model.
  • How much does your infrastructure cost you and what are the opportunity costs associated with having infrastructure personnel working on machines with SQL databases instead of the many other mission-critical items they could be focusing on?  Do you need the same amount of infrastructure resources, particularly as the company grows?
  • Do you have SLA requirements?  Currently WASD offers 99.9% monthly uptime.  Many companies require more.  Having said that, in my experience very few companies actually maintain greater than 99.9% when doing an analysis of their internal data center operations.  This is an area where companies may have to analyze what their internal capabilities really are.
  • What are your redundancy requirements?  This is one area where WASD is hard to beat and gives a benefit few people recognize.  If you have an instance of a WASD in an Azure data center you are getting that plus three redundant instances: one primary redundant instance and two secondary.  If your main instance, or one (or more) of the redundant instances, goes down, Azure will use the others to continue uninterrupted service and bring up more redundant instances in the background.  The net effect is that it is very difficult to take down a WASD instance without an entire data center or the Azure service in general going down (which has happened twice that I can recall).
One thing you may notice is missing from my list: DBAs.  You will likely need the same number of DBAs whether you host your databases in your private cloud, in a public cloud through IaaS, or using a PaaS offering like WASD.  We know comparing the cost of a private cloud is highly situational, but what about IaaS vs. a PaaS like WASD?  That is much clearer, and in most cases WASD will come out ahead of doing IaaS at a company like Rackspace.  If you've determined that moving a SQL Server instance from your internal private cloud to an IaaS service makes sense, in many cases it may be less expensive to take it one step further and go to WASD.

Let's look at this comparison.  As for assumptions: we have about 60 GB of data, and our average outbound data per month is 2,000 GB, increasing 10% yearly.  Also, for the Rackspace offering we set up two servers in an active/passive SQL cluster to get some level of redundancy, though still less than what we get with WASD.  If that's the case, we end up with something like this:

Windows Azure Year 1 Year 2 Year 3 Year 4 Year 5 Total
Cost WASD/Year $1,510.56 $1,570.44 $1,630.44 $1,690.32 $1,750.32 $8,152.08
Cost of outbound data $2,880.00 $3,168.00 $3,484.80 $3,833.28 $4,216.61 $17,582.69
Total $4,390.56 $4,738.44 $5,115.24 $5,523.60 $5,966.93 $25,734.77

IAAS (Rackspace) Year 1 Year 2 Year 3 Year 4 Year 5 Total
Cost of server licensing $155.40 $0.00 $0.00 $180.00 $0.00 $335.40
Cost of SQL licensing $678.00 $0.00 $0.00 $711.90 $0.00 $1,389.90
Cost of data center $1,401.60 $1,401.60 $1,401.60 $1,401.60 $1,401.60 $7,008.00
Cost of IT labor $480.00 $494.40 $509.23 $524.51 $540.24 $2,548.39
Cost of outbound data $2,880.00 $3,168.00 $3,484.80 $3,833.28 $4,216.61 $17,582.69
Total $5,595.00 $5,064.00 $5,395.63 $6,651.29 $6,158.45 $28,864.37
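As a sanity check, the outbound-data rows in both tables can be reproduced from the stated assumptions (2,000 GB per month growing 10% yearly; the per-GB rate of $0.12 is inferred from the Year 1 figure, so treat it as an assumption):

```csharp
using System;

class OutboundDataCost
{
    static void Main()
    {
        double gbPerMonth = 2000.0; // starting monthly outbound volume
        double ratePerGb = 0.12;    // inferred: $2,880 / (2,000 GB * 12 months)
        double total = 0.0;

        for (int year = 1; year <= 5; year++)
        {
            double yearCost = gbPerMonth * 12 * ratePerGb;
            total += yearCost;
            Console.WriteLine("Year {0}: ${1:N2}", year, yearCost);
            gbPerMonth *= 1.10;     // volume grows 10% per year
        }

        Console.WriteLine("Total: ${0:N2}", total); // $17,582.69, as in the tables
    }
}
```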

Of course there are many factors that can cause these numbers to change.  For example, we could decide that the SQL instance doesn't need to be redundant like it is with WASD.  Also, Rackspace's outgoing data charges decrease the more data you use, while Windows Azure's do not (perhaps something Microsoft should change to stay competitive).  The point of the exercise is to realize that, prima facie, Windows Azure services are not more expensive than competitive offerings.  The services offer other benefits that I have not mentioned here, but they are worth doing a serious analysis on, both from a cost and a feature perspective.

Whether you think Windows Azure services are cost effective or not, or even if you don't know, I'd be interested in speaking with you.

Friday, November 1, 2013

Angular Promises in TypeScript

If you're like me and love typed languages, TypeScript seems like a great thing, and it is.  There are some complications, though, to using it along with some of the existing frameworks out there.  A lot of this comes from using modules and classes in TypeScript, which you probably are if you are using TypeScript in the first place.  When using TypeScript and Angular it can sometimes be hard to get access to things like Angular's $scope deep within the bowels of your classes.

Recently I was working with Azure and handling the asynchronous request callback using an Angular promise.  This presented me with a couple of difficulties:

  • When calling .resolve on the deferral, the .then callback on the code holding the promise never fired, so my data was never loaded correctly from my factory
  • Inside the deferral callback code in a class, any calls to the class's properties using the 'this' keyword failed with an undefined or null reference error
What to do?  Here was my original non-working code:

Calling code:
this.dataAccess.BeginGetRegistrations().then(function (result: Classes.RegistrationDTO[]) {
    for (var i in result) {
        var registrationDTO: Classes.RegistrationDTO = result[i];
        var registration: Registration = new Registration(registrationDTO.Id, this.dataAccess);
        registration.LoadRegistration(registrationDTO.ScreenName, registrationDTO.Email, registrationDTO.Zip, registrationDTO.Gender, registrationDTO.BirthDate);
        this.Add(registration);
    }
});

Asynchronous function in my data access class:
public BeginGetRegistrations(): ng.IPromise<Classes.RegistrationDTO[]>{
    var deferral = this.service.defer<Classes.RegistrationDTO[]>();
    var registrations = DataAccess.client.getTable('Registration');

    registrations.read().done(function (results) {
        var returnValue: Classes.RegistrationDTO[] = [];
        var registration: Classes.RegistrationDTO;
        for (var i in results) {
            var result = results[i];
            registration = this.LoadResult(result);
            returnValue.push(registration);
        }
        deferral.resolve(returnValue);
    }, function(err) {
        throw err.toString();
    });
    return deferral.promise;
}

I did a little digging and it seemed obvious.  The call to resolve on the deferral needs to happen in a context that Angular is aware of.  In this case, calling resolve inside the $apply method on the $rootScope should solve it, so the .then callback on the calling code fires.  Since I was way down in the bowels of a class inside a module, I had Angular inject the $rootScope into my factory, which in turn set it inside my data access class on creation (in the constructor).  Now my data access class has a reference to the $rootScope in a class-level variable called scope.

I rewrote my code hoping to get the .then callback to fire, like this:
public BeginGetRegistrations(): ng.IPromise<Classes.RegistrationDTO[]>{
    var deferral = this.service.defer<Classes.RegistrationDTO[]>();
    var registrations = DataAccess.client.getTable('Registration');

    registrations.read().done(function (results) {
        var returnValue: Classes.RegistrationDTO[] = [];
        var registration: Classes.RegistrationDTO;
        for (var i in results) {
            var result = results[i];
            registration = this.LoadResult(result);
            returnValue.push(registration);
        }
        this.scope.$apply(deferral.resolve(returnValue));
    }, function(err) {
        throw err.toString();
    });
    return deferral.promise;
}

As soon as I did this, the second problem reared its head.  That is, I couldn't get access to the 'this' keyword and was failing with an undefined or null reference.  This problem manifested in both the BeginGetRegistrations method and the calling code.  The solution was simple.  In each of these methods, set a local variable equal to 'this' at the start of the method so the callback function has access to it, and all was well with the world.  (TypeScript's arrow functions, which capture the enclosing 'this', are another way to avoid this problem.)  My new, corrected and working functions:

Calling code:
var current: any = this;
this.dataAccess.BeginGetRegistrations().then(function (result: Classes.RegistrationDTO[]) {
    for (var i in result) {
        var registrationDTO = result[i];
        var registration: Registration = new Registration(registrationDTO.Id, current.dataAccess);
        registration.LoadRegistration(registrationDTO.ScreenName, registrationDTO.Email, registrationDTO.Zip, registrationDTO.Gender, registrationDTO.BirthDate);
        current.Add(registration);
    }
});

Asynchronous function in my data access class:
public BeginGetRegistrations(): ng.IPromise<Classes.RegistrationDTO[]>{
    var deferral = this.service.defer<Classes.RegistrationDTO[]>();
    var registrations = DataAccess.client.getTable('Registration');
    var current: any = this;

    registrations.read().done(function (results) {
        var returnValue: Classes.RegistrationDTO[] = [];
        var registration: Classes.RegistrationDTO;
        for (var i in results) {
            var result = results[i];
            registration = current.LoadResult(result);
            returnValue.push(registration);
        }
        current.scope.$apply(deferral.resolve(returnValue));
    }, function(err) {
        throw err.toString();
    });
    return deferral.promise;
}

With these fixes in place my Azure Mobile code was loading beautifully and asynchronously, and Angular binding was handling the data coming back and displaying it on the UI exactly as it should.

Tuesday, October 22, 2013

Connecting to Web Services with Xamarin Android in Visual Studio

If you do any type of mobile development you will likely want to connect to an external service somewhere, and this is no less true of Xamarin Android.  If you write line-of-business applications you will almost always need to access a service for data persistence.  For public cloud offerings there is always Microsoft's Azure Mobile Services (perhaps more on that in a later post).  But if you want to use a web service you wrote for your own application, you will need to know how to connect to it.

There are three general ways to connect to an external web service in Xamarin Android when using Visual Studio: 
  • Use the built-in "Add Web Reference" tool
  • Use SlSvcUtil.exe
  • Access directly through code
The first method is the easiest but in some ways the most limiting.  To use it, simply right-click on the project and select Add Web Reference.  When you do, the following screen will appear:


The most common choices here are either to enter the path to an existing service or to click on Web Services in the solution.  Clicking on the latter choice will show any services that exist in your solution:


If you select a service you can name it and add a reference, just as you can with the .Net version of this utility.  However, that's as far as your control goes.  Unlike the .Net version, you have no control over what list types are returned, as they are always returned as arrays.  Additionally, with the .Net version, if your service returns a serialized object and your client project has a reference to its assembly, you can generate a client proxy that deserializes into that type.  That is also not available with the "Add Web Reference" tool in Xamarin Android projects, even if you have a reference to a Xamarin version of the DLL with the exact same types that are returned by the service.

This is important because the types being returned from the service may be more than simple DTOs.  They may be business objects with associated business logic.  For any such custom types the "Add Web Reference" utility will simply generate dumb stubs with the public properties of the original types.  You will then have to manually move data in and out of instances of your business objects through code as you fetch and save information from your service.

What if I do have serialized types being returned from my WCF service and I want them to appear as their respective types inside my Xamarin Android client, because I want all the business logic and I don't want to engage in a manual mapping process?  What if, any time I return more than one instance of a type, I want the instances contained in an ObservableCollection instead of an array, like I can with the .Net version of the tool?  That's where SlSvcUtil.exe comes in.  This tool ships with Silverlight and can do much of what the full .Net SvcUtil.exe can.

For a little background, you may ask: how do we get our types to be both returned from our .Net WCF service and also recognized in our Xamarin Android project?  One way is to have a .Net version of the DLL and a Xamarin version.  If they both have the same namespace and version and share the same class files, then the serialization and deserialization routines won't know the difference.  This is a common trick in .Net cross-platform development.  Another option is portable class libraries.  For more on this, look at the Xamarin cross-platform documentation:


If you have both of those DLLs, is that enough?  No, you must create a third: a Silverlight version.  This will also have the same namespace and version and share the same class files.  The reason for this is simple: SlSvcUtil.exe is a Silverlight utility and expects to reference Silverlight assemblies.

Here is an example of running SlSvcUtil.exe at the command prompt, using types in a specified assembly and specifying that multiple returned objects be contained in an ObservableCollection:
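Pieced together from the parameter descriptions below, the command would look something like this sketch (I have substituted hypothetical placeholder paths for the /r: DLL references; adjust them for your machine):

```
SlSvcUtil.exe http://localhost:63367/GravesiteService.svc ^
    /o:TypedGravesiteService.cs ^
    /l:cs ^
    /n:*,SampleAndroidUX.TypedGravesiteService ^
    /r:"C:\MyLibraries\GravesiteTypes.Silverlight.dll" ^
    /r:"C:\Program Files (x86)\Microsoft Silverlight\5.1.20913.0\System.Windows.dll" ^
    /ct:ObservableCollection`1
```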


Let's take a look at some of the parameters on that command:

  • The first parameter is the address of the service we are generating a proxy for, in this case http://localhost:63367/GravesiteService.svc.
  • The /o: parameter tells the command where to write the generated proxy.
  • The /l: parameter specifies the language to generate the client proxy in; cs == C#.
  • The /n: parameter maps all service namespaces (*) to the namespace to be used for the proxy, SampleAndroidUX.TypedGravesiteService.
  • The /r: parameter gives the full reference path to any Silverlight DLLs you need for your proxy.  In this case we use the parameter twice: first to reference the Silverlight version of the class library containing the types returned from the service, and second to reference the Silverlight version of System.Windows.dll, which contains ObservableCollection.
  • The /ct: parameter specifies the collection type to use, in this case ObservableCollection`1.  The `1 at the end tells it to use the proper type for the generic ObservableCollection.

Once I have run this command successfully, a new file, TypedGravesiteService.cs, is added to the output directory.  I can now simply add this file to my Xamarin Android project and use it as a generated client proxy.

Using it is similar to using the "Add Web Reference" proxy.  The Silverlight utility would normally expect to load connection information from config files, and you will see such a file has been generated in the directory you ran SlSvcUtil.exe from.  Alternatively, you can configure and run the client proxy directly.  Consider the following method:

public static GravesiteServiceClient CreateGravesiteService()
{
    return new GravesiteServiceClient(new BasicHttpBinding(), new EndpointAddress("http://addresstomyservice"));
}

The important thing to note here is that we are able to specify parameters such as where our service resides.  Of course, if this were a "real" application we would not hard-code the server endpoint URL, but would instead get it from a configuration value or a string resource.

How could we use the newly created proxy?

public Task<ObservableCollection<Town>> GetAllTownsAsync()
{
    var tcs = new TaskCompletionSource<ObservableCollection<Town>>();
    var service = TypedWCFService.CreateGravesiteService();
    service.GetAllTownsCompleted += (s, e) =>
    {
        // Propagate service errors through the task instead of losing them.
        if (e.Error != null)
            tcs.TrySetException(e.Error);
        else
            tcs.TrySetResult(e.Result);
    };
    service.GetAllTownsAsync();
    return tcs.Task;
}

There are a few things to note in this sample function.  First, the Silverlight proxies still do not use the new task-based asynchronous pattern (TAP); we can wrap the client proxy call in a method using a TaskCompletionSource, as above, to convert to the TAP model.  Second, the result we get out of e.Result is an ObservableCollection<Town>, exactly the .Net type being returned by our .Net WCF service, now in a Xamarin Android version of the class.  Exactly what we wanted.

That should get you started with the first two methods of consuming web services inside a Xamarin Android project.  I will not cover the third technique, accessing services directly through code, as that could easily be another blog post and isn't particularly unique to Xamarin Android development.

I hope this information has helped you consume web services in Xamarin Android.

Wednesday, October 2, 2013

Xamarin Android Activities and Intents

One thing that is really important to understand with Xamarin Android development (or just Android development in general) is how activities work and how to navigate between them.  The Xamarin site talks a little about this but it does not really dig deep into the different options for navigating between activities.  That explanation can be found here:

http://docs.xamarin.com/guides/android/application_fundamentals/activity_lifecycle

The most common way to navigate to a new activity is via the StartActivity command.  By default this will create a new activity on the stack.  The Main Activity in an application may create a Second Activity with the following command:

var secondActivity = new Intent(this, typeof(SecondActivity));
StartActivity(secondActivity);

In this case the activity stack looks like this (please excuse the lack of cool graphics):

Second Activity (current)
Main Activity

If the Second Activity used the StartActivity command in the same way to try to reopen the Main Activity, the activity stack will look like this:

Main Activity (current)
Second Activity
Main Activity

It will not re-use the existing Main Activity; instead it creates a new instance of the Main Activity.  Pressing the back button will pop that new Main Activity off the stack and return you to the Second Activity.  But what if you wanted to bring the original Main Activity back to the forefront without creating a new one?  In that case you could have reordered the activities in the activity stack with the following command:
var mainIntent = new Intent(this, typeof(MainActivity));
mainIntent.AddFlags(ActivityFlags.ReorderToFront);
StartActivity(mainIntent);

If this were called from the Second Activity instead the activity stack would appear as so:

Main Activity (current)
Second Activity

It did not create a new instance of the Main Activity but the Second Activity didn't go away either.  Pressing the back button will now destroy the Main Activity and return to the Second Activity.

The Second Activity could also have just called Finish() instead.  That would have destroyed the Second Activity and returned to the Main Activity similar to what would have happened if the back button were pressed.

There are other options as well.  What if the Main Activity wanted to start the Second Activity, have it return some information, and then have it removed from the activity stack when it finishes?  For this you can use StartActivityForResult:
var secondActivity = new Intent(this, typeof(SecondActivity));
StartActivityForResult(secondActivity, 1);

The second parameter of StartActivityForResult is a unique identifier that you can use later to identify which activity is returning a result to the Main Activity.  As you would expect, once the Second Activity is created the activity stack will look like this:

Second Activity (current)
Main Activity

It is important to note that the Second Activity is not a modal window.  It can in turn re-create a new instance of the Main Activity, or create a Third Activity, or anything else.  When the Second Activity is done you can send information back to the Main Activity, along with whether the Second Activity completed successfully.  To do this you can use code similar to this:
var myIntent = new Intent(this, typeof(MainActivity));
myIntent.PutExtra("My Key", "The data I want to send back");
this.SetResult(Result.Ok, myIntent);
this.Finish();

This will not create a new instance of the Main Activity but return to the existing one.  The intent allows you to send information back to the Main Activity from the Second Activity.  In this case the PutExtra allows you to add key value pairs of information, strings, integers, binary data, etc.  It also uses the SetResult command to indicate that the activity completed successfully.

The question now is, how does the Main Activity use the information passed back from the Second Activity?  That's what OnActivityResult is for.  You can override this method on the Main Activity and it will fire when the Second Activity finishes.

protected override void OnActivityResult(int requestCode, Result resultCode, Intent data)
{
    base.OnActivityResult(requestCode, resultCode, data);
    // requestCode identifies which activity is returning (1 in our example),
    // and resultCode holds the value passed to SetResult.
    var returnString = data.GetStringExtra("My Key");
    var result = resultCode;
}

It is important to note that OnActivityResult will fire on the Main Activity before OnRestart, which is followed by OnStart and OnResume.

If the intent had not been passed to the SetResult command in the Second Activity, the data parameter would be null.  Since it was, we can use methods such as GetStringExtra to retrieve the data that was set in the Second Activity.  The requestCode should have a value of 1 in this case, the value we passed when we called StartActivityForResult to create the Second Activity.

The resultCode parameter can also be used to tell whether the activity ended successfully.  If the back button was pressed, the value of resultCode would be Result.Canceled.  Also, once the Finish method is called in the Second Activity, it is unloaded and removed from the activity stack, as you would expect.

I hope this has helped some of you understand how to navigate between activities in Xamarin Android.