At the moment we are dealing with a Dynamics CRM Online performance issue at one of our clients. In a joint effort with Microsoft we are trying to find its cause. As a backup scenario our client asked us to set up a full-blown on-premises environment, which is – as we speak – up and running.
Our client is a worldwide organization with offices in Europe and Asia and its headquarters in the Netherlands. The Asian office happened to have its own Dynamics CRM subscription.
This gave us the unique opportunity to test the solution we developed on a different tenant in a different CRM region…
One of my clients is experiencing very poor performance on CRM Online. The client is about to go live with the new system, but this week, during an internal demo, the key users complained about the performance. According to management it was unacceptable. *ouch*
In the weeks prior we had already set up an on-premises CRM environment, intended as a fallback in case of an internet outage. Whatever happens, the business must go on!
Now, I can hear many of you thinking: the crappy performance must be caused by the implemented solution!
On the on-premises environment the performance is actually very snappy. Screens crammed with information load in two to three seconds.
The performance you would expect…
Aaaargh, what a day! Today I was wrapping up the setup of an on-premises Dynamics CRM 2016 (Update 1) test environment for a client. One of the last tasks was to install the unmanaged solution we had developed on Dynamics CRM 2016 Online.
Whatever I tried, the solution refused to install on the on-premises environment. The import log didn't show any errors, just a number of items that were left unprocessed. I was able to install some parts of the solution, but other parts refused to go in. I was lost!
At the moment I'm working for one of our clients, who is migrating from Lotus Notes to Dynamics CRM Online in combination with Office 365. Since the beginning of the project, the client has been plagued by unpredictable performance.
One moment an account opens in just 2–3 seconds, while the next moment the same screen needs 25–30 seconds to open. An unworkable situation for the client.
Performance issues can be caused by a large number of things: large datasets, customizations, client computers, the browser of choice, corporate networks, problems at the ISP, problems within the Dynamics network, or problems within Dynamics itself.
The biggest problem in fighting performance issues is the availability of time; be prepared that finding the cause can and will consume a large amount of time and resources.
Where do you start? What strategy can you follow to isolate the performance problem? In this article I'll describe the strategy we are following to find the cause of the unpredictable performance we are dealing with.
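Whatever strategy you follow, it starts with measuring the same operation repeatedly rather than once. The helper below is an illustrative Python sketch (not part of the actual investigation): repeated timings are what distinguish a consistently slow screen (pointing at data volume or customizations) from an erratic one (pointing at the network or the service side).

```python
import statistics
import time

def time_operation(operation, samples=5):
    """Time a callable several times; return (median, fastest, slowest) in ms.

    A large spread between fastest and slowest suggests an erratic cause
    (network, service side); a high but stable median suggests a constant
    one (dataset size, customizations).
    """
    durations = []
    for _ in range(samples):
        start = time.perf_counter()
        operation()
        durations.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(durations), min(durations), max(durations)

median_ms, fastest, slowest = time_operation(lambda: sum(range(100000)))
print(f"median={median_ms:.1f}ms spread={slowest - fastest:.1f}ms")
```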
Welcome back to the final episode in this series of articles on building a replication mechanism within Dynamics CRM. In the previous article I implemented the concept of a message pump.
The goal of the message pump is to process the messages created by the actions performed on the source entity, which are captured by a plugin registered on those messages:
Initially I registered the steps as asynchronous plugin steps; in the end I had to register them as synchronous steps. More on that later.
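As a rough illustration (in Python rather than the actual C# plugin code), the record such a step writes per captured action could look like this. The jcrm_ field names follow the series' naming, but apart from "jcrm_sourceentityid" and "jcrm_sourceentitydata" they are my assumptions:

```python
import json
import uuid
from datetime import datetime, timezone

def build_message(action, source_id, source_logical_name, serialized_data):
    """Build one message-entity record for a captured plugin step."""
    assert action in ("Create", "Update", "Delete", "SetState")
    return {
        "jcrm_name": action,                           # which action happened
        "jcrm_sourceentityid": str(source_id),         # id of the source record
        "jcrm_sourceentityname": source_logical_name,  # logical name (assumed column name)
        "jcrm_sourceentitydata": serialized_data,      # serialized source entity
        "createdon": datetime.now(timezone.utc).isoformat(),  # drives the FIFO order
    }

message = build_message("Create", uuid.uuid4(), "jcrm_dataentity",
                        json.dumps({"jcrm_name": "Contoso"}))
print(message["jcrm_name"])  # → Create
```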
In this series of articles, I'm implementing a proof of concept of a replication mechanism within Dynamics CRM. My intention is not to build an enterprise-class replication mechanism (e.g. Scribe, KingswaySoft); instead I want to learn more about the mechanisms involved in replication.
The previous article was a technical necessity, as I needed a way to serialize and deserialize data within Dynamics CRM. The serialization of the source entity is done inside a plugin running in a sandbox. The sandbox limits us a little, so we cannot use the standard serialization methods of the .NET Framework.
In this article I implemented a first version of the message pump, which can be considered the engine that makes the actual replication happen. For the sake of simplicity I implemented the message pump as a console application. In a production-like situation I would implement it as a Windows service or an Azure service.
The reason I implement the message pump as a separate service is that I don't want to depend on the limitations of the CRM services. Besides that, it makes the replication more robust, as both the source environment and the destination environment can be offline without any changes being lost.
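That division of labour can be sketched as follows. This is illustrative Python rather than the actual console application; the fetch/apply/delete callables stand in for calls against the CRM organization services:

```python
import time

def process_batch(fetch_pending, apply_to_destination, delete_message):
    """Apply one batch of messages in FIFO order; return how many were applied.

    A message is deleted only after it has been applied successfully, so a
    destination outage simply leaves unprocessed messages in the queue.
    """
    applied = 0
    for message in fetch_pending():       # assumed to return oldest-first
        try:
            apply_to_destination(message)
        except ConnectionError:
            break                         # destination offline: retry this message later
        delete_message(message)           # safe to remove once applied
        applied += 1
    return applied

def run_pump(fetch_pending, apply_to_destination, delete_message, poll_interval=5.0):
    """The console-application loop: poll, process, sleep, repeat."""
    while True:
        if process_batch(fetch_pending, apply_to_destination, delete_message) == 0:
            time.sleep(poll_interval)

# In-memory stand-in for the message entity:
pending = [{"jcrm_name": "Create"}, {"jcrm_name": "Update"}]
applied = process_batch(lambda: list(pending),
                        lambda m: None,
                        lambda m: pending.remove(m))
print(applied, len(pending))  # → 2 0
```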
I ended my previous article in this replication series with the remark that I needed to focus on the serialization and deserialization of the entity data I'm going to replicate. I explained that I have to do the serialization by hand, because the CRM sandbox prevents me from using binary serialization (Microsoft considers a number of functions in the .NET libraries unsafe/untrusted).
In the initial version of the serialization, I created an XML representation of the source entity, as you can see in the code snippet below.
Running this program gives the output below:
Serializing the entity data like this would give us headaches when we want to deserialize the data back to an entity: the decimal and money fields use the system's locale settings in the XML representation, and looking at the XML, no one can tell what the original field type was.
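Both problems can be sidestepped by tagging each field with its original type and formatting values culture-invariantly. The sketch below is illustrative Python, not the actual plugin code (in C# one would typically reach for XmlConvert-style invariant formatting for the same effect):

```python
from decimal import Decimal
from xml.etree import ElementTree as ET

def serialize_entity(logical_name, attributes):
    """Serialize an entity to XML, preserving each field's original type."""
    root = ET.Element("entity", logicalname=logical_name)
    for name, value in attributes.items():
        field = ET.SubElement(root, "field", name=name,
                              type=type(value).__name__)  # keep the original type
        # str() on Decimal always uses '.' as the separator, never the locale,
        # so the destination can parse the value regardless of its settings.
        field.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml = serialize_entity("jcrm_dataentity",
                       {"jcrm_name": "Contoso", "jcrm_amount": Decimal("1234.56")})
print(xml)
```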
For the first time in weeks I've been able to work on the replication concept. As this exercise is intended to be a proof of concept, I decided to start with a small setup.
In order to replicate the data you need to have an entity in which the actual data is stored. In my scenario I called this entity “DataEntity”.
In the data entity I defined a couple of fields in which data is stored; as a bonus I added an additional technical column called “jcrm_sourceentityid”. This technical column will be used later on.
The other entity I defined is a message entity, called “MessageEntity”.
This is a special entity in which all changes to the data entity are stored (field: “jcrm_sourceentitydata”); it is not intended for end users.
The columns I defined are the following:
The name of the action (“Create”, “Update”, “Delete”, “SetState”)
The id of the source entity (the entity the data came from)
The logical name of the source entity
Furthermore, I'm going to use the “CreatedOn” column to determine the order of the actions, as the message entity is designed to work in a FIFO (first in, first out) manner.
Data that comes in first will be processed first. Once a message record is processed, it will be deleted.
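The FIFO contract boils down to: always pick the oldest “CreatedOn” value, and delete the record once it has been applied. A minimal Python sketch (illustrative only; the real pump queries CRM for this):

```python
def next_message(messages):
    """Return the oldest unprocessed message, honouring the FIFO contract."""
    # ISO-formatted timestamps sort correctly as plain strings.
    return min(messages, key=lambda m: m["createdon"])

inbox = [
    {"createdon": "2016-07-01T10:00:05Z", "jcrm_name": "Update"},
    {"createdon": "2016-07-01T10:00:01Z", "jcrm_name": "Create"},
]
while inbox:
    message = next_message(inbox)
    print(message["jcrm_name"])  # processed oldest-first: Create, then Update
    inbox.remove(message)        # a processed message record is deleted
```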
It has been a while since I wrote the last article in the #Crm2Crm series. For some reason, my trusty SuperMicro server decided that its working life had come to an end…
The result was a server that was unresponsive and rebooting all the time. A couple of time-consuming repair attempts later, I decided it was time to pull the plug and move to another server.
Since then I've been building a new penguin-powered server: energy efficient and suited for the job. Last night I was able to promote it to the new production server – faster than ever.
Time to move ahead, time to continue the saga… In the meantime I'm working on the next article, which will be published later this week.
Last time I described the concept of a simple one-way replication pattern. In this pattern, each time an entity record is added, modified or deleted, a message is written to a message entity.
The message pump processes the messages in the order they arrived in the message entity (first in, first out), combined with the action assigned to each message record (as described in the previous article).
The content of the message record is applied to the destination entity. So far so good…
I promised to spice things up a bit in this article… The big question is: what happens if we duplicate this pattern to make the replication bidirectional? Instead of one message entity we now have two, and instead of one message pump we now have two message pumps.
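One way to reason about that question before reading on: if each side blindly writes a message for every change, including the changes applied by the other side's pump, the two pumps echo each change back and forth indefinitely. The sketch below is illustrative Python, and the suppression rule is my assumption rather than necessarily the article's answer; a technical column such as “jcrm_sourceentityid” from earlier in the series is one way to recognize replicated writes:

```python
def apply_change(record, changes, outgoing):
    """An ordinary write: update the record and enqueue a replication message."""
    record.update(changes)
    outgoing.append(dict(changes))

def apply_replicated_change(record, changes):
    """A write performed by the pump: update the record, but enqueue nothing,
    so the change is not echoed back to the side it originally came from."""
    record.update(changes)

queue_a_to_b, queue_b_to_a = [], []
record_a, record_b = {}, {}

apply_change(record_a, {"jcrm_name": "Contoso"}, queue_a_to_b)  # user edit on side A
for changes in queue_a_to_b:                                    # pump A → B runs
    apply_replicated_change(record_b, changes)

print(record_b)      # → {'jcrm_name': 'Contoso'}
print(queue_b_to_a)  # → []  (no echo travels back to A)
```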