Identity Broker Forum

Welcome to the community forum for Identity Broker.

Browse the knowledge base, ask questions directly to the product group, or leverage the community to get answers. Leave ideas for new features and vote for the features or bug fixes you want most.

0
Not a bug

The latest version of the Aderant Expert connector no longer truncates long attribute values; a PowerShell transform has been written to prototype a solution in DEV, but it will need to be ported into the C# connector

Adrian Corston 5 years ago in UNIFYBroker/Aderant Expert updated 5 years ago 7

Here's the PowerShell transform that was required (so it can be ported to C#):



# Truncate long fields so they fit within the Aderant Expert column lengths

function Invoke-FieldTruncator {
    param(
        $Entity,
        [string] $Field,
        [int] $Length
    )

    # Only act when the field has a value longer than the allowed length
    $old = $Entity[$Field].Value
    if ($old) {
        $old = $old.ToString()
        if ($old.Length -gt $Length) {
            $new = $old.Substring(0, $Length)
            $Logger.LogInformation("Truncated long $Field '$old' to '$new'")
            $Entity[$Field] = $new
        }
    }
}

# Fields and maximum lengths identified so far in the DEV environment
foreach ($entity in $entities) {
    Invoke-FieldTruncator -Entity $entity -Field HPPhoneNumber -Length 17
    Invoke-FieldTruncator -Entity $entity -Field NameSort -Length 30
}

Note that these are only the fields that had long data needing truncation in the DEV environment.  When we move to UAT and PROD it is likely there will be other fields with long data that need to be truncated.  A lot of time and debugging effort was spent by the consultant (me) identifying and remediating these two fields in DEV, and the same effort will add to the time required for the UAT and PROD deployments.  The extra time will be a particular issue when we come to PROD, as it will significantly increase the deployment window during which the system is down for the customer.

As a consequence, I suggest that all fields be truncated to their maximum database lengths, not just those listed in the DEV workaround above.
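
To illustrate the suggestion, here is a rough sketch (not the actual connector change) of how the DEV transform above could be generalised by reading the maximum column lengths from the target database rather than hard-coding them. It assumes the SqlServer module's Invoke-Sqlcmd is available, and the server, database and table name ('HBM_PERSNL') are illustrative placeholders; the real fix would belong in the C# connector:

# Sketch only: read maximum character lengths from the target database and truncate every
# over-length string field, rather than hard-coding per-field limits as in the DEV transform.
# $sqlServer, $database and the table name 'HBM_PERSNL' are illustrative placeholders.
$columns = Invoke-Sqlcmd -ServerInstance $sqlServer -Database $database -Query @"
SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'HBM_PERSNL' AND CHARACTER_MAXIMUM_LENGTH > 0
"@

foreach ($entity in $entities) {
    foreach ($column in $columns) {
        # Reuses Invoke-FieldTruncator from the transform above. Depending on how the
        # entity indexer treats unknown fields, $columns may first need filtering down
        # to fields that actually exist in the connector schema.
        Invoke-FieldTruncator -Entity $entity -Field $column.COLUMN_NAME -Length $column.CHARACTER_MAXIMUM_LENGTH
    }
}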

0
Fixed

Aderant Expert connector fails with "The transaction associated with the current connection has completed but has not been disposed" after a previous SQL timeout failure

After a SQL timeout it appears the SQL connection to Aderant Expert is left with a SQL transaction that has not been disposed.  The error is:

20191203,23:22:24,UNIFYBroker,Connector,Warning,"Update entities to connector failed.
Update entities [Count:2071] to connector Global Aderant Expert Connector failed with reason The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.. Duration: 00:01:01.0504286

The previous timeout error responsible for the undisposed transaction is:

20191203,23:16:16,UNIFYBroker,Connector,Warning,"Update entities to connector failed.
System.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding. ---> System.ComponentModel.Win32Exception (0x80004005): The wait operation timed out
Update entities [Count:2071] to connector Global Aderant Expert Connector failed with reason Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.. Duration: 00:01:16.0479697

The workaround is to restart the UNIFYBroker service so that it stops reusing the bad SQL connection.

See also https://voice.unifysolutions.net/communities/6/topics/3995-aderant-expert-agent-ui-doesnt-save-changes-to-the-operation-timeout-parameter for more information about SQL operation timeouts on a Global Aderant Expert connector.
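
For reference, the underlying pattern the connector needs is to dispose the transaction even when a command times out, so the connection can be reused afterwards. A minimal illustration of that pattern (shown here in PowerShell against System.Data.SqlClient purely as a sketch; the actual fix belongs inside the C# connector):

# Minimal sketch: dispose the transaction in 'finally' even when ExecuteNonQuery times out,
# so the connection is safe to reuse afterwards. $connectionString and $updateSql are
# placeholders supplied by the surrounding context.
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$connection.Open()
$transaction = $connection.BeginTransaction()
try {
    $command = $connection.CreateCommand()
    $command.Transaction = $transaction
    $command.CommandTimeout = 60
    $command.CommandText = $updateSql
    $command.ExecuteNonQuery() | Out-Null
    $transaction.Commit()
}
catch {
    try { $transaction.Rollback() } catch { }    # rollback itself can fail after a timeout
    throw
}
finally {
    $transaction.Dispose()
    $connection.Dispose()
}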

0
Fixed

Aderant Expert agent UI doesn't save changes to the Operation Timeout parameter

After changing and saving the "Operation Timeout" parameter on the Aderant Expert agent UI, the value is not saved: the extensibility file always ends up containing the value "PT0S".

As a workaround I have edited the extensibility file instead.
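
For anyone else hitting this: "PT0S" is the ISO 8601 serialisation of a zero-length timespan. A quick way to check what a given duration string represents, and presumably what a desired timeout would look like in the extensibility file (a sketch using .NET's XmlConvert, not Broker's own parsing code):

# "PT0S" is an ISO 8601 duration; converting it shows the saved value is always zero.
[System.Xml.XmlConvert]::ToTimeSpan("PT0S")                      # 00:00:00
# A two-minute timeout, for example, would serialise as "PT2M".
[System.Xml.XmlConvert]::ToString([TimeSpan]::FromMinutes(2))    # PT2M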



0
Not a bug

LDAP search on multivalue attribute returns incorrect data

The following shows a search result which incorrectly returns all org unit records, even though none of the inspected results actually matched the search criteria:

***Searching...
ldap_search_s(ld, "OU=orgUnits,DC=IdentityBroker", 2, "(hierarchy=50002000)", attrList, 0, &msg)
Matched DNs: OU=orgUnits,DC=IdentityBroker
Getting 1000 entries:

Dn: CN=50022695,OU=orgUnits,DC=IdentityBroker
costCentre: 0000045240;
costCentreGroup: CCCA;
costCentreID: d567d5b9d344523618fc25cc2efe70e7;
costCentreIDRef: CN=0000045240,CN=CCCA,OU=costCentres,DC=IdentityBroker;
costCentreText: International Climate Law;
createTimestamp: 30/11/2019 1:36:03 PM AUS Eastern Daylight Time;
distinguishedName: CN=50022695,OU=orgUnits,DC=IdentityBroker;
entryUUID: 08e5d654-d8a2-4b41-a516-00bd457a93b9;
EXTOBJID: 50022695;
hashDN: 60451482495227d7d95601c180bcceac;
hierarchy (7): 50000100; 50000500; 50022601; 50022677; 50022687; 50022692; 50022695;
hierarchyRef (7): CN=50000100,OU=orgUnits,DC=IdentityBroker; CN=50000500,OU=orgUnits,DC=IdentityBroker; CN=50022601,OU=orgUnits,DC=IdentityBroker; CN=50022677,OU=orgUnits,DC=IdentityBroker; CN=50022687,OU=orgUnits,DC=IdentityBroker; CN=50022692,OU=orgUnits,DC=IdentityBroker; CN=50022695,OU=orgUnits,DC=IdentityBroker;
longText: International Climate Law Section;
modifyTimestamp: 2/12/2019 10:23:42 PM AUS Eastern Daylight Time;
objectClass: orgUnit;
objectEndDate: 99991231000000.000Z;
objectID: 50022695;
objectStartDate: 20071203000000.000Z;
OBJECTTYPE: O;
orgLevel: 7;
orgLevelName: Section;
OU: orgUnits;
parentOrgID: 50022692;
parentOrgIDRef: CN=50022692,OU=orgUnits,DC=IdentityBroker;
PLANSTAT: 1;
PLANVERS: 01;
shortText: DCC ID;
status: Active;
subschemaSubentry: CN=orgUnits,cn=schema;
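
For reference, here is a rough way to check from the client side which of the returned entries genuinely contain the searched value. It assumes the Identity Broker LDAP gateway is reachable on localhost port 389; adjust host, port, authentication and credentials to suit - this is a diagnostic sketch only, not part of the product:

# Diagnostic sketch: re-run the same search and report entries whose multivalued
# 'hierarchy' attribute does not actually contain the searched value.
# Depending on the gateway configuration you may also need to set $connection.AuthType
# and $connection.Credential before sending the request.
Add-Type -AssemblyName System.DirectoryServices.Protocols

$connection = [System.DirectoryServices.Protocols.LdapConnection]::new("localhost:389")
$request = [System.DirectoryServices.Protocols.SearchRequest]::new(
    "OU=orgUnits,DC=IdentityBroker",
    "(hierarchy=50002000)",
    [System.DirectoryServices.Protocols.SearchScope]::Subtree,
    [string[]]@("hierarchy"))

$response = $connection.SendRequest($request)
foreach ($entry in $response.Entries) {
    $attribute = $entry.Attributes["hierarchy"]
    $values = if ($attribute) { $attribute.GetValues([string]) } else { @() }
    if ($values -notcontains "50002000") {
        Write-Output "False positive: $($entry.DistinguishedName)"
    }
}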

0
Answered

Aderant Expert MA 'string or binary data would be truncated' error on export

Using the new version of the Aderant Expert connector I'm seeing this error on the first export I've attempted with it.  The configuration is a migration of the old connector configuration, talking to a database that is a copy of the one used by the old connector.



UNIFYBroker v5.3.2 Revision #0
Aderant Expert Connector 5.3.1.1
Chris21 Connector 5.3.0.0

Could you please assist in working out what's wrong?

Answer

That means there's a field in the Aderant database with a length limit, and you're trying to write a value to it that exceeds that limit.
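
If it helps to narrow down which field is responsible, the length-limited columns of the target table can be listed straight from the SQL metadata. A sketch only, assuming the SqlServer module's Invoke-Sqlcmd, with the server, database and table name as placeholders:

# List the character columns of the target table and their maximum lengths, so the
# over-length outgoing value can be identified. 'HBM_PERSNL' is a placeholder table name.
Invoke-Sqlcmd -ServerInstance $sqlServer -Database $database -Query @"
SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'HBM_PERSNL' AND CHARACTER_MAXIMUM_LENGTH > 0
ORDER BY CHARACTER_MAXIMUM_LENGTH
"@

Once the fields are known, the truncation transform in the first topic on this page shows one way to work around the over-length values.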

0
Duplicate

Full import returns only root node

Bob Bradley 5 years ago in UNIFYBroker/Microsoft Identity Manager updated 5 years ago 3

When running a FULL IMPORT on an IdB 5.3.2 implementation I am getting data returned from only 2 of the 7 configured partitions - yet data is clearly visible for each of them via LDP, making me suspect an issue with the Broker for MIM component.  I have tried deleting and recreating run profiles, refreshing the schema, reloading interfaces, and even creating a new instance of the MA - but still the same result.

There are no exceptions being logged for the full import (logging is currently in VERBOSE mode).  As an example:

  • The AREA adapter correctly returns 11 records, including the root container node (although the Total Entities count incorrectly shows as 0 on the Import job counter)
  • The COMPANIES adapter returns 1 record, being the container node only - despite all objects appearing correctly via LDAP.EXE.

Can I please have some urgent assistance to determine the root cause?

0
Answered

Concurrency in UNIFYBroker

Hayden Gray 5 years ago updated by Matthew Davis (Technical Product Manager) 5 years ago 3

Hi Guys,

Couldn't find an existing ticket or knowledge base article about this, so I thought I would start one.

In the past I have designed schedules and exclusion groups around the idea that you could not import from an adapter and run an import on the related connector at the same time, as it would cause sluggishness within Broker. Additionally, reading from an adapter while UNIFYBroker is committing changes will also cause some sort of locking (whether it locks up entirely or just takes longer while doing so).

So I was hoping you could tell me how UNIFYBroker handles concurrency, and more specifically which operations it can run at the same time. For example:

  • Can you import from two connectors at the same time (whether it's a one-to-one adapter relationship or a many-to-one)?
  • Can you import from an adapter while an import is being run on the respective connector?
  • Can you run an "import all" and a delta import at the same time without anything locking up (not that I do this, but it happens from time to time)?

If you can let me know of any operations that can't be run at the same time that would be great, as it would help to define a concrete way to schedule UNIFYBroker operations.

Thanks

Answer

Hi Hayden,

Broker can handle running connector imports, reflecting changes into adapters, and reading and writing adapter entities via a gateway concurrently. The only scenario Broker won't allow is running two imports (i.e. full and delta) at the same time on the same connector.

That said, yes, these tasks can take longer to complete when multiple operations are competing for CPU and disk resources. Scheduling the various operations might be a good strategy to improve performance, especially on machines which fall below the recommended system requirements and/or where system resources are shared between Broker, the database and other services, but it isn't something you always need to do.

0
Fixed

OU case renames not working

When I changed the case of an OU name in an adapter from OrgUnits to orgUnits, the data returned subsequently over the LDAP interface was not adjusted accordingly.  I believe I tried each of the following without success (but you may want to prove this for yourself):

  1. restarting the IdB service
  2. deleting the adapter and reloading it
  3. deleting the connector and adapter data and reloading them both

Eventually I had to go into the Broker SQL table (container names I think it was), locate the offending record, and update it there.
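
For the record, that manual step was just a direct UPDATE against the row in question. A rough sketch only, with the table and column names illustrative rather than confirmed - check the actual Broker database schema, and back up the database, before editing anything directly:

# Rough sketch of the manual fix - the table and column names below are illustrative only;
# verify them against the actual Broker database schema and back up the database first.
Invoke-Sqlcmd -ServerInstance $sqlServer -Database $brokerDatabase -Query @"
UPDATE dbo.ContainerName          -- hypothetical table name
SET Name = 'orgUnits'
WHERE Name = 'OrgUnits'
"@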

Answer

This has been implemented and will be available in the UNIFYConnect V6 release, which will be made available shortly.

0
Declined

A couple of installation nice to haves

Daniel Walters 6 years ago updated 6 years ago 2

After my recent experience upgrading Identity Broker there are a couple of nice-to-haves in the installer that would make things easier. In both cases either a warning or a log would be helpful. Admittedly, both problems could be solved by stringent documentation; however, the scattered nature of documentation on SharePoint, and having to hunt for one-line mentions across a host of project documents, is a challenge for professional services, so some checks and balances in the installer itself would alleviate the problem.

1. A notification that there are non-standard assembly redirects in the unify.service.connect.exe.config, and which ones are non-standard. (The only way to know now is through trial and error, or to rely on documentation.)

2. A notification of all .dll files found that are patches from previous installs. (The only way to find them now is to uninstall, which makes an in-place upgrade risky, or to rely on documentation.) This one is relevant to both UNIFYBroker and UNIFYNow.

Answer

Hi Dan,

The ideal scenario is that solution documentation is up-to-date and available, so any extra changes (binding redirects or patches) are outlined, and their usage documented.

In the scenario where this is not the case, there are a few ways that you can manually validate.

For idea 1 (assembly binding redirects), you can compare the installed .exe.config against a fresh one for the same version and see if there are any differences. This would give you some clues as to whether any redirects are required.
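
For example, a quick way to do that comparison from PowerShell (the paths are illustrative, and both files are assumed to contain an assemblyBinding section):

# Diff the assembly binding redirects in the installed config against a fresh copy of the
# same version. Paths are illustrative only.
$installed = [xml](Get-Content -Raw 'C:\Program Files\UNIFY Solutions\Identity Broker\Services\unify.service.connect.exe.config')
$baseline  = [xml](Get-Content -Raw 'C:\Temp\FreshInstall\unify.service.connect.exe.config')

Compare-Object `
    ($installed.configuration.runtime.assemblyBinding.dependentAssembly | ForEach-Object { $_.OuterXml }) `
    ($baseline.configuration.runtime.assemblyBinding.dependentAssembly | ForEach-Object { $_.OuterXml })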

For idea 2 (patches), you can compare the install directory against a base UNIFY install to determine what normally ships in the directory. This will give you an insight into any service patches that have been added. Web patches are reliant on documentation, as the service isn't aware of what shipped with the core service versus what is a patch.
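
Similarly for patches, the installed directory can be diffed against a base install of the same version (paths illustrative):

# List .dll files present in the installed Services directory but not in a base install of
# the same version - these are candidates for service patches. Paths are illustrative, and
# patched versions of shipped DLLs would also need a file-version comparison.
$installedDlls = Get-ChildItem 'C:\Program Files\UNIFY Solutions\Identity Broker\Services' -Filter *.dll | Select-Object -ExpandProperty Name
$baselineDlls  = Get-ChildItem 'C:\Temp\FreshInstall\Services' -Filter *.dll | Select-Object -ExpandProperty Name

$installedDlls | Where-Object { $_ -notin $baselineDlls }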

It would require a significant amount of work to make the installer contextually aware of any non-standard binding redirects or patches, especially between version updates. It is recommended that documentation be kept up to date and stored correctly to ensure upgrades go smoothly.

0
Not a bug

Dynamics CRM Metadata contains a reference that cannot be resolved SSL Problem

Using the Broker Microsoft Dynamics CRM connector v5.2.0.1, Infrastructure are seeing some intermittent errors. The error shows in MIM, and when checking the IdB logs the content of the error is the same. It's not that big a problem: it only occurs on export, and the pending export that fails remains a pending export, is processed ten minutes later, and the error isn't rethrown on the second export. It seems to be some type of network connection problem, but there aren't a lot of settings to configure in the CRM agent - just the address and account, and they're both correct. The full error is pasted below.

System.InvalidOperationException: Metadata contains a reference that cannot be resolved: 'https://dynamicscrm.internal.dotars.gov.au/DAMS//XRMServices/2011/Organization.svc?wsdl&sdkversion=8.2'. ---> System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.
at System.Net.HttpWebRequest.GetResponse()
at System.ServiceModel.Description.MetadataExchangeClient.MetadataLocationRetriever.DownloadMetadata(TimeoutHelper timeoutHelper)
at System.ServiceModel.Description.MetadataExchangeClient.MetadataRetriever.Retrieve(TimeoutHelper timeoutHelper)
--- End of inner exception stack trace ---
at System.ServiceModel.Description.MetadataExchangeClient.MetadataRetriever.Retrieve(TimeoutHelper timeoutHelper)
at System.ServiceModel.Description.MetadataExchangeClient.ResolveNext(ResolveCallState resolveCallState)
at System.ServiceModel.Description.MetadataExchangeClient.GetMetadata(MetadataRetriever retriever)
at Microsoft.Xrm.Sdk.Client.ServiceMetadataUtility.RetrieveServiceEndpointMetadata(Type contractType, Uri serviceUri, Boolean checkForSecondary)
at Microsoft.Xrm.Sdk.Client.ServiceConfiguration`1..ctor(Uri serviceUri, Boolean checkForSecondary)
at Microsoft.Xrm.Sdk.Client.OrganizationServiceConfiguration..ctor(Uri serviceUri, Boolean enableProxyTypes, Assembly assembly)
at Microsoft.Xrm.Sdk.Client.ServiceConfigurationFactory.CreateConfiguration[TService](Uri serviceUri, Boolean enableProxyTypes, Assembly assembly)
at Unify.Product.IdentityBroker.OrganizationServiceCommunicator.GetOrganizationService(IAddressCommunicatorInformation communicatorInformation)
at Unify.Product.IdentityBroker.OrganizationServiceCommunicator.<>c__DisplayClass1_0.<.ctor>b__0()
at Unify.Product.IdentityBroker.AddressCommunicatorBase`2.get_Service()
at Unify.Product.IdentityBroker.DynamicsCrmAgent.GetAttributeMetadata(String objectName, EntityFilters schemaRetrieveEntityFilters)
at Unify.Product.IdentityBroker.DynamicsCrmAgent.RetrieveSpecialFieldTypes(String objectName)
at Unify.Product.IdentityBroker.DynamicsCrmObjectConnector.GetSpecialFieldTypesInformation(IDynamicsCrmAgent`2 agent)
at System.Lazy`1.CreateValue()
at System.Lazy`1.LazyInitValue()
at Unify.Product.IdentityBroker.DynamicsCrmObjectConnector.UpdateEntities(IEnumerable`1 entities, IEnumerable`1 originalEntities, ISaveEntityResults`2 results)
at Unify.Product.IdentityBroker.AuditUpdatingConnectorDecorator.UpdateEntities(IEnumerable`1 entities, IEnumerable`1 originalEntities, ISaveEntityResults`2 results)
at Unify.Product.IdentityBroker.EventNotifierUpdatingConnectorDecorator.UpdateEntities(IEnumerable`1 entities, IEnumerable`1 originalEntities, ISaveEntityResults`2 results)

Any clues on the resolution of this intermittent issue? We haven't enabled diagnostic logging because it's in production and the error is intermittent, so generating huge logs is undesirable. I googled a bit and it looks like there are lines of code to set the TLS version to 1.2, which has resolved the same error in different contexts for other people. But you guys don't hardcode the authentication with web services, right? So maybe the bindings should be updated? Still, it doesn't make much sense that it fails sometimes and works at other times, which makes me think the service is the problem rather than Broker.
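
If it is a TLS negotiation problem, one way to test the theory without touching the service is to force TLS 1.2 in a separate PowerShell session and request the same metadata endpoint; the machine-wide strong-crypto registry settings are the usual way to change the default protocol of an existing .NET Framework service without code changes. Both are sketches of the general approach, not a confirmed Broker fix:

# Test whether the metadata endpoint responds when the client forces TLS 1.2.
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12
Invoke-WebRequest -UseBasicParsing -Uri 'https://dynamicscrm.internal.dotars.gov.au/DAMS//XRMServices/2011/Organization.svc?wsdl&sdkversion=8.2' |
    Select-Object StatusCode

# Machine-wide alternative: enable strong crypto so .NET Framework services (including the
# Broker service) default to TLS 1.2. Requires a restart of the affected services.
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord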

Answer

Closing due to no update.