#Crm2Crm – part 1: To replicate or not to replicate, that’s the question

Recently I switched jobs and joined a small innovative company. One of my first assignments inspired me to write this blog article.

I was asked to write a report for a client on measures that could be taken to guarantee continuity in the use of CRM during a long-term internet outage. The client wanted to be able to continue working with the CRM data no matter what.

One of the fallback scenarios that popped up in my head was that a read-only on-premise CRM environment could act as a fallback environment.
Using Scribe technology we can provision the on-premise environment daily with fresh data. In that case the client is able to access the data he needs, and is able to alter the data at a later point in time (once the connection has been restored).
Simple but elegant, and it can be done within a reasonable budget.

However, this case keeps running through my head, bringing out the little hacker in me… is it somehow possible to set up a hybrid environment in which two CRM organizations replicate data to each other?

The online environment would act as the master and the on-premise environment as the slave. Changes made in the on-premise environment should be replicated back to the master, without causing a replication storm…

[Image: Replicate]

Interesting enough for me to start a new series of blog articles in order to find a solution to this problem. In this series I'll describe the steps I need to take to create bidirectional replication between entities. I need to think about:

  • how to keep track of changes (insert, update, delete),
  • how to avoid a replication storm (a first sketch follows below),
  • how to deal with unreliable connections,
  • how to deal with differences in entity schemas,
  • maintainability, a.k.a. how can I implement it with as little maintenance as possible.
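
To make the replication storm problem concrete, here is a minimal sketch of one common countermeasure: tagging writes made by the synchronization process itself so they are not replicated back again. The attribute and entity names (new_issyncwrite, new_changelog) are hypothetical placeholders, not an existing design.

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Sketch of a plugin (registered on Create and Update of a replicated
// entity) that records changes for replication, but skips changes that
// were written by the sync service itself to avoid an echo between the
// two organizations.
public class ReplicationGuardPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));
        var target = (Entity)context.InputParameters["Target"];

        // The sync service sets this hypothetical marker attribute on every
        // write it performs; seeing it means the change came from the other
        // side and must not be queued for replication again.
        if (target.GetAttributeValue<bool>("new_issyncwrite"))
            return;

        var factory = (IOrganizationServiceFactory)serviceProvider
            .GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // Register the change in a hypothetical change-log entity, to be
        // picked up later by the replication process.
        var logEntry = new Entity("new_changelog");
        logEntry["new_entityname"] = target.LogicalName;
        logEntry["new_recordid"] = target.Id.ToString();
        logEntry["new_operation"] = context.MessageName; // "Create" or "Update"
        service.Create(logEntry);
    }
}
```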

In this series of articles it is not my intention to create an enterprise-grade solution. I consider this series a small research project in which I want to master the mechanics behind replication, in order to be able to advise my clients as well as possible.

In the next article I'll dive into some more theory describing the mechanics of replication. But first things first: I need to get my new gear up and running and set up a new virtual machine from which I'll be working.

Stay tuned!

The case of the missing Documents icon

Wow, it has been quite a while since I wrote my last post. The last few months have not been easy for me. It was as if I was lost at sea, and therefore I had to make some difficult choices.

It took me several months, but it seems that I'm on the right track again. It feels like I can breathe again.

Time for a reboot, time to start blogging again.

[Image: Lost at sea]

The picture above also symbolizes the situation I ran into today, when I was working on site with one of our clients.

I was asked to set up the integration between Dynamics CRM Online 2016 and SharePoint Online 2013 (soon to be upgraded to SharePoint Online 2016).

With an eye on the expected SharePoint upgrade, I decided to set up server-based document integration in CRM. After fiddling with the user rights (hint: the SharePoint global administrator account has to be the same as the CRM administrator account), setting the SharePoint site in the Document Management section in Dynamics is a breeze.


Dynamics CRM and Azure queues

Lately I've been busy discovering the services Azure has to offer. The more I learn about it, the more enthusiastic I become. The platform is well thought out, and when you combine the services it has to offer, you can build world-class solutions for your customers.

In my previous articles I've written about working with Azure WebJobs, the Azure Job Scheduler, and Azure Table storage (or Azure Document Storage). The only thing we are missing is working with Azure queues.

The following questions pop up in my head:

  • What is a queue?
  • What can I do with it?
  • What are the benefits?

In this series of articles I want to give a good answer to these questions. Furthermore, I want to write a small REST library in order to work with queues from Dynamics CRM. The reason I want to use REST instead of the Azure SDK is that I don't want to have to reference the Azure SDK assemblies within my plugin project.
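
As a first feel for what such a REST library has to do, here is a minimal sketch of the queue service's "Put Message" operation using plain HttpClient, authenticated with a SAS token so no Azure SDK assembly is needed. The account name, queue name and token are placeholders, and the SAS is assumed to grant the add permission on the queue.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Sketch: enqueue a message via the Azure Queue Service REST API.
public static class QueueRestClient
{
    public static async Task EnqueueAsync(
        string account, string queue, string sasToken, string message)
    {
        // The queue service expects the message text wrapped in XML;
        // base64-encoding keeps arbitrary payloads XML-safe.
        string encoded = Convert.ToBase64String(Encoding.UTF8.GetBytes(message));
        string body = "<QueueMessage><MessageText>" + encoded + "</MessageText></QueueMessage>";

        string url = $"https://{account}.queue.core.windows.net/{queue}/messages?{sasToken}";

        using (var client = new HttpClient())
        {
            var content = new StringContent(body, Encoding.UTF8, "application/xml");
            HttpResponseMessage response = await client.PostAsync(url, content);
            response.EnsureSuccessStatusCode(); // 201 Created on success
        }
    }
}
```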

For starters I’ve dug up a couple of good articles on Azure queues and how to get started. These articles will give you some good insights.

Using the principles from these articles, I can use the MSDN documentation regarding the Azure Queue Service REST API.

Furthermore, there are two types of queues within Azure: the Azure Storage queue and the Azure Service Bus queue. For now I still have to learn the differences between the two.

Anyway, enough information to keep me busy for a while. For now, I have some reading to do.

Stay tuned!

Dynamics CRM and Azure Scheduler – The feedback loop

In my last article I showed how to build a simple command line exe that could be deployed as an Azure WebJob. Furthermore, I showed how to pass parameters to the WebJob, parameters that can be used to perform specific actions. For me this is enough information to stop the proof of concept; however, that would not be fair to you.

In this article I'll describe what you can actually do when you combine all the techniques I demonstrated in this series of articles.

Using the Azure Job Scheduler and Azure WebJobs in combination with Dynamics CRM offers you the possibility to build long running batch jobs that you can schedule to run at any given time.

In this article I’ll describe how you can set up a generic batch mechanism with an integrated feedback loop.
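
The heart of the feedback loop is the WebJob writing its progress back into the job record in CRM. Below is a minimal sketch of that write-back, assuming a hypothetical batch job entity ("new_batchjob") with status and message fields; the entity model and connection string name are placeholders, not the final design.

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Tooling.Connector;

// Sketch of a WebJob that reports its progress back to CRM.
class Program
{
    static void Main(string[] args)
    {
        // The id of the CRM batch job record, passed in by the scheduler.
        Guid jobId = Guid.Parse(args[0]);

        var client = new CrmServiceClient(
            Environment.GetEnvironmentVariable("CrmConnectionString"));

        ReportProgress(client, jobId, "Running", "Batch started");

        // ... the actual long-running work goes here ...

        ReportProgress(client, jobId, "Completed", "Batch finished");
    }

    static void ReportProgress(
        IOrganizationService service, Guid jobId, string status, string message)
    {
        // Update only the feedback fields on the existing job record.
        var job = new Entity("new_batchjob") { Id = jobId };
        job["new_status"] = status;
        job["new_message"] = message;
        service.Update(job);
    }
}
```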

[Diagram: Batch job entity model]


Dynamics CRM and Azure Scheduler – The final pieces of the puzzle

Lately it has been quite hectic at the office: project deadlines were shifting, functionality was added, and project issues had to be resolved. Not the most optimal situation for writing blog articles.
Fortunately, things have calmed down. Time to pick up this quest again.

[Image: Puzzle]

In the previous article I focused on the Azure WebJob. It turns out that the basic techniques required to write web jobs are pretty straightforward. It took me a little while, but I have now solved the puzzle. Time for an update…


Dynamics CRM and Azure Scheduler – a closer look at Azure Web Jobs

In this series of articles I'm working on a scenario in which I want to hook up the Azure Scheduler to Dynamics CRM. In the previous article I had a breakthrough: using REST calls and JSON, I'm able to create new jobs in the Azure Scheduler.

This paves the way for the next challenge: Creating an Azure web job that interacts with Dynamics CRM.

[Image: Schedule1]

I’m new when it comes to Azure web jobs. Before firing up Visual Studio to hammer out a piece of code, I need to do some reading first.

In my research I stumbled upon a couple of very useful articles.

In Tom Dykstra's article I noticed something very interesting. He states that you can use .NET command line exes as web jobs. Now that would ease up development quite a lot.
Writing a command line exe means that you can develop and debug the exe as a normal Windows console application. One huge advantage: full debugging capabilities.

In the next article I want to write a long-running command line exe that I want to deploy as an Azure web job. In order to make the command line exe really useful, I want to test whether it is possible to pass command line arguments. These command line arguments will be passed by the Azure Job Scheduler.

A good reason for passing command line arguments is that I want to pass in a Guid which identifies the job I started from within CRM. This would enable me to write back information about the job into CRM, making it manageable by the administrator.
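
To make this concrete, here is a minimal sketch of such an exe; the executable name is just an example. The only contract it assumes is that the scheduler passes the CRM job id as the single command line argument.

```csharp
using System;

// Sketch of a WebJob written as a plain console application.
class Program
{
    static int Main(string[] args)
    {
        Guid jobId;
        if (args.Length != 1 || !Guid.TryParse(args[0], out jobId))
        {
            Console.Error.WriteLine("Usage: BatchRunner.exe <crm-job-guid>");
            return 1; // a non-zero exit code marks the WebJob run as failed
        }

        Console.WriteLine($"Starting batch run for CRM job {jobId}");
        // ... long-running work, writing progress back to CRM ...
        return 0;
    }
}
```

Because it is a normal console application, it can be developed and debugged locally in Visual Studio before being deployed as a WebJob.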

The scenario/framework I want to build in this series of articles is the following:

[Diagram: Schedule2]

In Dynamics CRM I create a new job entity. In that entity I register information about the job I want to schedule (name, interval, job name, job parameters, etc…).

When I “finalize” the job, a plugin on the job entity is triggered, placing a new job in the Azure Scheduler.

The Scheduler will fire the Azure web job. The long-running (five minutes or more) web job receives the job id and uses it to write back information regarding the job.

Once this works, the scenario/framework can be extended and refined to act as a scalable batch job engine, in which long-running processes can be executed without facing the dreaded two-minute sandbox execution limit.

Anyway, enough food for thought for this evening!

Dynamics CRM and Azure Scheduler – Breakthrough!

In this series of articles I’m setting up a scenario in which I hook up Dynamics CRM to the Azure Scheduler. My goal is to use the Azure Scheduler to start an Azure web job that will interact with Dynamics CRM.

In the previous article I managed to authenticate with Azure using REST. Furthermore, I took my first steps in reading the Azure Scheduler job collection and creating a new job. Creating a job failed and I ended up with error 403.

[Image: Breakthrough]

Tonight I decided to pick up the previous effort where I left off. I decided to rewrite the code and embrace the async / Task<> / await pattern in order to get the complete response from the actions I called.
I found a good async example at dotnetperls.com. Using the pattern from that example, I was able to call the async methods (required for getting and putting REST requests) from a synchronous function.
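
The shape of that pattern, reduced to its essentials, looks roughly like the sketch below: the HTTP call is async, and a synchronous wrapper blocks on the Task and returns the full status code and response body so failures such as the 403 can be inspected. The endpoint URL, bearer token and job JSON are placeholders.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Sketch: a synchronous wrapper around an async REST PUT.
public static class SchedulerClient
{
    public static string PutJob(string url, string token, string jobJson)
    {
        // Block on the async call; GetAwaiter().GetResult() unwraps
        // exceptions more cleanly than accessing .Result directly.
        return PutJobAsync(url, token, jobJson).GetAwaiter().GetResult();
    }

    static async Task<string> PutJobAsync(string url, string token, string jobJson)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token);

            var content = new StringContent(jobJson, Encoding.UTF8, "application/json");
            HttpResponseMessage response = await client.PutAsync(url, content);

            // Return status and body so the caller can see why a call failed.
            string body = await response.Content.ReadAsStringAsync();
            return (int)response.StatusCode + ": " + body;
        }
    }
}
```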


Dynamics CRM and Azure Scheduler – let the games begin

Last week I started this series of articles on using the Azure Scheduler with Dynamics CRM. I expected a pretty smooth ride; however, it turned out to be a furious dragon that needed to be tamed. In this article I'll describe what I have achieved so far.

[Image: Dragon]

I started with reading about Azure and the Azure Scheduler. I discovered that there are not many articles on using the Azure Scheduler. Basically there are two options for using the Azure Scheduler: the Azure SDK object model or REST. Of course I can use the Azure SDK to connect to the Azure Scheduler, but that means that I probably have to merge a number of Azure dlls with my functionality when I want to use it from within the CRM sandbox.


Dynamics CRM and Azure Scheduler – intro

This week I have a meeting with my manager to discuss the personal goals I want to achieve this year: goals that will benefit the company, and goals that will benefit me. Among the goals I have to define is skill and knowledge development. I decided I want to build up knowledge regarding the Azure platform. What I want to learn is:

  • What services does Azure offer?
  • How can I use these services in combination with Dynamics CRM?
  • How do I set up a cloud based architecture using Azure?
  • How can my customers benefit from Azure?
  • In what scenarios should I use Azure?

Last month I experimented with some of the storage services, like Azure Table Storage. The scenario I described was using Azure Table Storage for offloading data. But as I already mentioned, Azure has a lot more to offer. Think of the Azure Scheduler, a scheduling service we can use to schedule recurring jobs (kudos to my colleague Erik Aalbers for mentioning it).

From what I understand of the Azure Scheduler, we can use it to declaratively describe actions to run in the cloud. It then schedules and runs those actions automatically. The Azure Scheduler does not host any workloads or run any code; it only invokes code hosted elsewhere (in Azure, on-premises, or with another provider) via HTTP, HTTPS, or a storage queue. The Scheduler keeps a history of the executed jobs.

One of the coolest features of the Scheduler is that we can create recurring schedules. From an administrative point of view, the advantage is that administrators will be able to change schedules and add or remove jobs without having to call a developer.
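
To give an impression of what "declaratively describe" means, the body of a Scheduler job in the management API looks roughly like the JSON below: an action to invoke (here an HTTP POST to a placeholder URL) plus a recurrence. Treat the exact property names as an assumption to verify against the current documentation.

```json
{
  "properties": {
    "action": {
      "type": "Http",
      "request": {
        "method": "POST",
        "uri": "https://example.azurewebsites.net/api/runbatch"
      }
    },
    "recurrence": {
      "frequency": "Hour",
      "interval": 1
    },
    "state": "Enabled"
  }
}
```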

After reading this introduction, I see a number of scenarios in which we can use it in combination with CRM.

  • Plan recurring maintenance jobs (e.g. automatic disposal of inactive records).
  • Trigger a job from within CRM (e.g. mass mailing).
  • Run complex batch jobs (without hitting the two-minute execution limit).

As the documentation of the Scheduler states, the Scheduler does not run any code. We have to invoke code that is running elsewhere, such as an Azure web job.

[Diagram: Schedule1]

In the next articles I'm going to implement a scenario in which we use the Azure Scheduler to invoke a job hosted on Azure that will interact with CRM (e.g. a long-running batch job). From within CRM I will implement functionality to add jobs to the Azure Scheduler.

For now I have a lot of reading to do to master the general principles.

Stay tuned!

Solved – corrupted index in a managed solution

In my last article, I mentioned that we got stuck with a corrupted index in a managed solution. The reason the index became corrupted was that the width of the index we defined exceeded the maximum width of an index key within SQL Server. The hard lesson we learned is that we should create small keys in which the combined size of all fields does not exceed 900 bytes (link). For example, a key spanning two nvarchar(250) fields already takes up 1,000 bytes (nvarchar uses two bytes per character), well over the limit.

On the development server we could repair the situation by dropping the keys we had defined. On the target environments to which we had deployed the solution, the faulty key could not be deleted. In fact, we were not able to update the managed solution at all. We got stuck.

We asked Microsoft to alter the created indexes, making the keys smaller. After a couple of requests (one request per environment) the indexes were made smaller (in an on-premise environment we would have done the index repair ourselves). The environments were ready to be fixed…

[Image: Hosting solution]

From there on, we performed the following steps per environment to get rid of the indexes in the managed solution.

  1. Alter the solution on the development environment by removing the keys we defined.
  2. Make a managed export of the altered solution.
  3. Copy the exported managed solution and rename it.
  4. Open the renamed solution file and alter the unique name of the solution in solution.xml (see the snippet after this list).
  5. Import the altered and renamed solution into the target environment.
  6. Drop the original solution from the target environment.
  7. Import the managed solution (created in step 2).
  8. Drop the altered and renamed solution from the target environment.
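
For step 4, the unique name lives in the SolutionManifest element of solution.xml inside the solution zip. A minimal example of the change (the name "MySolution_Temp" is just an illustration):

```xml
<SolutionManifest>
  <!-- Changing the unique name makes CRM treat the renamed copy
       as a different solution than the original. -->
  <UniqueName>MySolution_Temp</UniqueName>
  <LocalizedNames>
    <LocalizedName description="My Solution (temp)" languagecode="1033" />
  </LocalizedNames>
</SolutionManifest>
```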

This looks like a large number of steps. In fact it is. But it is the only way we could drop the corrupted index on the target environment, preserving all data!

The trick is that by installing the renamed solution (step 5) we preserved the data (same publisher). By uninstalling the original solution, we dropped all modifications made by that solution (including the corrupted index). Because the renamed solution contains the same entities, the data is kept in the new entities 🙂

By repeating the trick and installing the altered solution (step 2) over the solution imported in step 5, the data is preserved in the entities defined in the altered solution. By removing the renamed solution, the system reverts to the state it should be in.

Case closed!