Won't fix

Content Manager returning all cd-error

Daniel Walters 5 years ago in UNIFYBroker/Micro Focus Content Manager updated by Matthew Davis (Technical Product Manager) 5 years ago

When I run exports to the Content Manager MA, I'm getting cd-errors with no further information on all pending exports. Doing a FIFS and then another export changes the number of pending exports, so something is getting through. Looking at the logs, it looks like the LDAP connection is being closed before the response is sent back to MIM. Logs attached.

Under review

What steps have you performed to diagnose the issue? It would also help to point out when you performed the export as the file is quite large. Were there any entries in the Windows Event Log? Have you ensured that it's not a timeout issue and adjusted batch sizes? Etc.

It's at the end of the log at 7:29:25. 

I believe this is the main problem: "Attempt to write response for ExtendedResponse on connection for gateway MIM Gateway (c35c14ba-7f91-43e9-bf73-82cee7f484d3) failed because the connection was disposed." Matt said that what has happened here is that the export ran and MIM then closed the connection before it received a response with the status of the exports.

I've run the export a few times and noticed the cd-error. In Event Viewer there's only one summary event, which tells how many cd-errors there were and says to look in the sync engine for more information. On Matt's advice I increased the timeout in the sync engine MA, but the timing of the MIM Gateway failure didn't change. I haven't adjusted batch sizes; which way would I adjust them?

There are a lot of errors reported with the export in Broker: "Update entities 2995 to connector CM Persons reported 2995 entities saved, 2995 failed. Duration: 00:03:15.3432435". But since the connection is being closed, MIM isn't getting any more information about why the failures occurred.

Hi Daniel

What are the versions of Broker and the connectors you're running?

What did you increase the timeout value to?

Try reducing the batch size temporarily to 20. This has helped debug similar issues in the past.
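For context on why a smaller batch size helps debugging: each batch's result is reported back individually, so per-entity errors surface before any single long-running call can hit a timeout or a disposed connection. A minimal illustrative sketch of the idea (not Broker or MIM code; `send_batch` and all names here are hypothetical):

```python
from typing import Callable, Dict, List, Tuple

def export_in_batches(entities: List[Dict],
                      send_batch: Callable[[List[Dict]], List[Tuple[Dict, str]]],
                      batch_size: int = 20) -> List[Tuple[Dict, str]]:
    """Send entities in small batches so per-entity failures are
    reported back after each batch, instead of one huge call that
    may time out before any error detail is returned."""
    errors: List[Tuple[Dict, str]] = []
    for start in range(0, len(entities), batch_size):
        batch = entities[start:start + batch_size]
        # Each call returns (entity, error_message) pairs for failures.
        errors.extend(send_batch(batch))
    return errors
```

With one 3000-entity call, the connection can be torn down before the status comes back; with batches of 20, the failures (here, the duplicate IDs) are returned batch by batch.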

Here are the versions 

I increased it to 360 seconds with no effect on the timing in the log files; I think it was 60 before I changed it. Where is this batch size setting? Is it the max bulk operations setting on the MIM Gateway?

Batch size is set in the run profile configurations. Also, you have the v5.1 MA installed. Is that the one you're using currently? There is a v5.3 MA that you should be using with Broker v5.3.

Changing the batch size to 20 worked and the errors were surfaced to MIM. There were a lot of duplicate IDs.
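Since the underlying failures turned out to be duplicate IDs, a pre-export sanity check on the pending set can catch this class of error before the run. A hedged sketch (record layout and field name are assumptions, not the actual Broker export format):

```python
from collections import Counter
from typing import Dict, List

def find_duplicate_ids(records: List[Dict], id_field: str = "id") -> List[str]:
    """Return, sorted, the ID values that appear more than once
    in the record set."""
    counts = Counter(r[id_field] for r in records)
    return sorted(k for k, v in counts.items() if v > 1)
```

Running something like this over an extract of the source data would flag the offending IDs without needing the errors to round-trip through MIM at all.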

Hi Daniel, did you end up upgrading the MA to v5.3 or just change the batch size? Just wondering if the newer MA gives a better error message for this issue.

We haven't upgraded, just changed the batch size. The main concern about upgrading is whether I would need to rebuild each of the MAs that use Broker, or just upgrade the connector and refresh the interface. I'll recommend it either way, but would like to know first if it's a full rebuild of the MAs.

You don't have to recreate the MA when going from v5.1 to v5.3, as both use the same MA type (ECMA2); it's just the extension DLL it uses that changes.

OK Thanks, I'll recommend it now. I don't see it happening any time soon though with the rest of the project going on but we'll see.


As discussed yesterday, changing the batch size to 20 is only a workaround, and it should not be recommended to the customer that the solution go into production with this configuration. Updating the MA is only a minor change; this should be something we push for.

Can I really recommend an upgrade as a fix, though? Will it definitely fix it, or only possibly? It sounds like Beau wants to check whether it'll fix it.

That's exactly my point. We don't have a fix for the issue - the batch size change is a workaround, and we need to validate whether an upgrade fixes it or whether the MA needs a patch. If we don't ensure this is properly fixed now, we risk the issue coming up later (after production deployment), and that is not something we should be accepting as a risk.

Is 5.1 out of support?

I've now recommended that we upgrade and don't take the batch size fix to production, so if the new connector doesn't fix it, what will we do in production? Will you guys look at a patch for 5.3?

If the new version doesn't resolve the issue, we will investigate and provide a patch that does resolve it.

Hi Daniel, Is there an update on this one?

I recommended an upgrade and recommended we don't take the batch size fix to production in an email. They haven't mentioned it since my recommendation.