Identity Broker Forum

Welcome to the community forum for Identity Broker.

Browse the knowledge base, ask questions directly to the product group, or leverage the community to get answers. Leave ideas for new features and vote for the features or bug fixes you want most.

+1
Planned

Baseline Sync to revert target system changes takes a long time, even when there are no actual changes

Adrian Corston 4 months ago in UNIFYBroker/Plus updated 3 months ago 5

My customer has 9,000 users in AD corresponding to 7,000 entities in a locker, of which approximately 4,000 are joined (the discrepancy being due to terminated HR users who don't have, and don't need to be provisioned with, AD accounts).

When I run a Baseline Sync on my AD link to revert any unauthorised changes that have been made to users in AD (i.e. disable any accounts that have been manually enabled when they shouldn't have), it takes upwards of 30 minutes, during which time all other system processing is blocked (since link operations are blocking). This doesn't meet customer expectations: unauthorised AD changes should be reverted within a 15-30 minute timeframe, and the system should always process low-latency operations (such as manual suspensions using the Suspended Employees feature) with a delay of no more than a few minutes.

Some possible ways to solve this problem would be to add a "true-up" operation or to allow parallel sync operations.

+1

Pending Export Report capability required to target directory

Bob Bradley 3 years ago in UNIFYBroker/Plus updated by Matthew Davis (Technical Product Manager) 1 year ago 1

There is already a Test Mode concept, but it appears to be limited when it comes to providing a pre-execution reporting mechanism for ANY pending change to a target system.

Existing MIM best practice has incorporated this capability for many years, and the equivalent is now required in the Broker+ and UNIFYConnect platforms.

+1
Under review

Test harness for Adapter and Link PowerShell Transformations

Bob Bradley 3 years ago in UNIFYBroker/Plus updated by Matthew Davis (Technical Product Manager) 1 year ago 1

In order to support the unit testing requirements for transitioning PowerShell solutions on Broker+ to the UNIFYConnect hosted platform, a test harness is required for all PowerShell transformations.
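To illustrate the idea, such a harness could be as simple as a Pester suite that stubs an entity and asserts on the transformed output. A minimal sketch only: the Invoke-Transformation entry point, the script file name, and the hashtable entity shape below are illustrative assumptions, not the actual Broker transformation contract.

    Describe "EmailAddress transformation" {
        It "derives EmailAddress from the HRIS source field" {
            # Stub entity shaped as a simple hashtable (an assumption for this sketch)
            $entity = @{ HRISEmailAddress = "jane.doe@example.com" }

            # Dot-source the transformation under test (hypothetical file name)
            . "$PSScriptRoot\EmailAddressTransform.ps1"
            $result = Invoke-Transformation -Entity $entity

            $result.EmailAddress | Should -Be "jane.doe@example.com"
        }
    }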

0
Answered

Renaming a locker field results in "An item with the same key has already been added" UI error

I renamed a locker field "HRISEmailAddress" to "EmailAddress" in the UNIFYBroker UI, and this stack dump error appeared:

System.Exception: Swagger Exception could not be parsed. SE response code: 500; SE response text: {"Message":"An error has occurred.","ExceptionMessage":"'The field EmailAddress could not be added to the schema","ExceptionType":"Unify.Framework.Schema.SchemaException","StackTrace":" at Unify.Framework.Schema.Schema`6.Add(TKey key, TFieldDef value)\r\n at Unify.Framework.Visitor.Visit[T](IEnumerable`1 visitCollection, Action`2 visitor)\r\n at Unify.Product.IdentityBroker.EntitySchemaFactory.CreateComponent(IEntitySchemaConfiguration factoryInformation)\r\n at Unify.Product.Plus.LockerEngine.GenerateLockerPair(ILockerInformation lockerConfiguration)\r\n at Unify.Product.Plus.LockerEngine.LockerConfigurationChanged(Guid lockerId, Action`1 lockerAction)\r\n at Unify.Product.Plus.LockerEngine.<>c__DisplayClass33_0.<UpdateLockerSchemaRow>b__0()\r\n at Unify.Product.Plus.LockerEngine.<>c__DisplayClass49_0.<ConfigurationChanged>b__0()\r\n at Unify.Framework.ExtensionMethods.WaitOnMutex(Mutex mutex, Action work)\r\n at Unify.Product.Plus.LockerEngineAuditingDecorator.UpdateLockerSchemaRow(Guid lockerId, IEntitySchemaFieldDefinitionConfiguration entitySchemaRowConfiguration)\r\n at Unify.Product.Plus.LockerEngineNotifierDecorator.<>c__DisplayClass25_0.<UpdateLockerSchemaRow>b__0()\r\n at Unify.Framework.Notification.NotifierDecoratorBase.Notify(ITaskNotificationFactory notificationFactory, Action action)\r\n at lambda_method(Closure , Object , Object[] )\r\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.<>c__DisplayClassc.<GetExecutor>b__6(Object instance, Object[] methodParameters)\r\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ExecuteAsync(HttpControllerContext controllerContext, IDictionary`2 arguments, CancellationToken cancellationToken)\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__0.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__2.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Filters.AuthorizationFilterAttribute.<ExecuteAuthorizationFilterAsyncCore>d__2.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Filters.AuthorizationFilterAttribute.<ExecuteAuthorizationFilterAsyncCore>d__2.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at 
System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__1.MoveNext()","InnerException":{"Message":"An error has occurred.","ExceptionMessage":"An item with the same key has already been added.","ExceptionType":"System.ArgumentException","StackTrace":" at System.ThrowHelper.ThrowArgumentException(ExceptionResource resource)\r\n at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)\r\n at Unify.Framework.Schema.Schema`6.Add(TKey key, TFieldDef value)"}}; ---> Unify.Framework.Client.SwaggerException: The HTTP status code of the response was not expected (500).

When I tried to get back to the main locker UI page to select the affected locker and fix my mistake, the following error now appears every time:

System.ArgumentException: An item with the same key has already been added.

Could you please review the config and fix it so the locker screen works again?


Answer

Hi Adrian

The config hasn't been changed; it's just the in-memory configuration that's in a bad state. If you have access to the Broker API for that environment, you can use the Locker/UpdateLockerSchemaRowName method to manually change the problematic field's name. This method needs the row id, which can be retrieved using the Locker/GetLockerConfiguration method. Alternatively, restarting the service will reload the still-correct configuration from file.
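For example, the two calls might look something like the following. This is a hedged sketch only: the base URL, authentication, and parameter names are assumptions and should be confirmed against the Broker API documentation for your environment.

    # Hypothetical base URL for the Broker API; adjust for your environment
    $base = "http://localhost:59990/api"

    # Retrieve the locker configuration to find the id of the problematic schema row
    $config = Invoke-RestMethod -Uri "$base/Locker/GetLockerConfiguration?lockerId=<locker-guid>" -UseDefaultCredentials

    # Rename the field using the row id found above (parameter names are assumed)
    $body = @{ lockerId = "<locker-guid>"; rowId = "<row-guid>"; name = "HRISEmailAddress" } | ConvertTo-Json
    Invoke-RestMethod -Method Post -Uri "$base/Locker/UpdateLockerSchemaRowName" -ContentType "application/json" -Body $body -UseDefaultCredentials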

0

How to achieve continuous compliance (target system value reversion) at the same time as responding quickly to urgent data updates (e.g. user suspension)

Adrian Corston 3 weeks ago in UNIFYBroker/Plus 0

I have a solution with ~7,000 managed AD user accounts. To ensure any unauthorised changes to those accounts are reverted (continuous compliance), I run regular Baseline Sync operations on the outgoing link.

These Baseline Syncs take approximately 25 minutes to run, and during that time no other link synchronisations run. This means urgent updates (such as SPOL user suspension functionality) are delayed.

What can I do to ensure a fast response for urgent operations, while also maintaining continuous compliance with a reasonable turn-around time (i.e. checked every hour or so, keeping in mind that an import all for my AD users takes only around 50 seconds to run)?

0
Duplicate

UNIFYConnect UI shows DataTables error for Remove Joins

Adrian Corston 3 weeks ago in UNIFYBroker/Plus updated by Matthew Davis (Technical Product Manager) 3 weeks ago 1

The Remove Joins screen shows an error when invoked, in all dev/test UNIFYConnect environments.


0
Not a bug

Locker change not synchronising to outgoing adapter entity

Adrian Corston 3 weeks ago in UNIFYBroker/Plus updated by Matthew Davis (Technical Product Manager) 3 weeks ago 3

An update to a locker field value is not resulting in a pending outgoing change to an adapter entity.

The adapter entity should be joined, but the Remove Joins screen shows a DataTables Error so I can't confirm that.

Locker Entity Id = c7e8a490-6cfb-4ec1-9067-42906411aed0
Adapter Entity Id = 0645e285-577e-4218-afb6-745f1ee08600

The issue is urgent since the customer's UAT is failing due to this error.

Answer

Closing as root cause has been found. 

The locker uses information from the incoming and outgoing mappings and their sources to determine the entities that need syncing during a Changes Sync. 

In this case, the Synchronisation PowerShell task was being used to read a value from the adapter and insert it into a locker schema field without it being mapped in the link schema mappings. Because the field isn't mapped, the locker doesn't know that the value has changed. The value was then also being mapped back out to another adapter in the same manner.

If there's an implementation need to map the items in PowerShell rather than using the normal mappings (though we would encourage considering why this is necessary), a possible workaround is to map the field through a normal mapping into the locker and back out the other side of the link. That allows the link processing to determine when the value has changed and correctly queue an outgoing change for the item.

We've added an item to our backlog to see if there's anything we can add to the product to improve this process, such as being able to better calculate changes that didn't come in through a link mapping, or allowing sync tasks access to pre- and post-join value sets so operations can be run on value changes without the script also needing to map the value.

0
Answered

Configuration guidance required

In a UNIFYConnect ABAC solution we use appointment information (i.e. a user's Employee ID, Position, Department, Team, Location and Start Date) along with customer-managed rules to determine which access packages the user should be automatically assigned to.

My customer has two sources of appointment information: one is directly on the employee, and the other is via a separate feed of secondary appointments.  Each employee has one primary appointment and zero or more secondary appointments.

In order to combine the appointments into one data source, I use the following paths into the Appointment locker:

Employee connector/adapter -> link -> Appointment locker
Secondary appointment connector/adapter -> link -> Appointment locker

The employee connector is keyed solely on Employee ID, but the Secondary appointment connector is keyed on Employee ID, Position, Department, Team, Location and Start Date, to guarantee uniqueness.

On the outgoing side the following path writes the combined Appointments to a CSV file for processing outside of UNIFYBroker:

Appointment locker -> link -> Appointments CSV connector/adapter

The Appointments CSV connector is keyed on Employee ID, Position, Department, Team, Location and Start Date, to guarantee uniqueness.

All links use connection-oriented join resolution.

When an existing Employee connector entity changes Department, Team, etc., the existing Appointment locker record is updated with new values for those fields. For the export to the Appointments CSV connector this causes a problem, because the update is processed as an anchor modification, which is not supported for CSV connector types.

This problem doesn't occur on the Secondary appointments connector, because the multi-part key ensures that a change to any key field results in a delete/add operation instead of an update.

How can I configure UNIFYBroker to make this scenario work correctly?

Answer

Hi Adrian

Creating a derived key generated from adapter transformations might help.

For the secondary appointment entities, use a PowerShell transformation to generate a unique value based on the Position, Department, Team, Location and Start Date fields that preserves their uniqueness but reduces them to a single field. A hash of some kind over their combined values should be sufficient. I'd also add a static prefix as a further uniqueness guarantee. The resulting value may look something like sec_c9uQNFGLgC.
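A minimal sketch of that hashing approach (the Get-DerivedKey helper name and its parameters are illustrative; adapt them to the actual transformation context):

    function Get-DerivedKey {
        param([string]$Position, [string]$Department, [string]$Team,
              [string]$Location, [string]$StartDate)

        # Combine the immutable key fields into one string with an unambiguous separator
        $combined = "$Position|$Department|$Team|$Location|$StartDate"
        $sha = [System.Security.Cryptography.SHA256]::Create()
        $bytes = $sha.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($combined))

        # Base64-encode, strip non-alphanumeric characters, and trim to keep the key short
        $hash = [Convert]::ToBase64String($bytes).TrimEnd('=') -replace '[+/]', ''
        return "sec_" + $hash.Substring(0, 10)
    }

Because the hash is derived only from the key fields, the resulting anchor stays stable across updates to any non-key properties.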

On the primary adapter, use a constant value transformation to add a derived key field to differentiate primary appointments from secondary ones. The value can be anything, but shouldn't be anything that could be generated by the transformation on the secondary appointment adapter, e.g. primary_appointment.

Use the derived key in conjunction with the EmployeeId field for link joins and as key fields on the Appointments CSV connector. This should provide a stable, two-field anchor based on the immutable secondary appointment properties, but not the mutable properties of the primary appointment.

0
Under review

Adapter data not mapping to locker during baseline sync

Adrian Corston 1 month ago in UNIFYBroker/Plus updated 1 month ago 10

Some adapter field data isn't being updated in the corresponding locker entities when a baseline sync is run.

Screen snaps will be in a follow-up comment.

0
Not a bug

Exclusion group not stopping connector import/export operations from running in parallel

Adrian Corston 1 month ago in UNIFYBroker/Plus updated by Matthew Davis (Technical Product Manager) 4 weeks ago 2

I have three connectors in an exclusion group, and yet imports on one connector in the group can run at the same time as exports on another connector in the group. Exclusion groups should prevent this, so that two operations (Aurion API calls, in this case) never take place at the same time.

Answer

Hi Adrian,

Exclusion groups are only capable of blocking for import operations (see Connector Overview / UNIFYBroker knowledge / UNIFY Solutions for more details). 

Connectors don't control the invocation of the export capability, as this is triggered by external processes (generally an identity management system through a Gateway, or more recently through UNIFYBroker/Plus Link operations). 

Export operations are designed to provide immediate communication with the external system, due to the way Gateways are required to communicate the operation status rather than queuing it for a later execution. 

If you'd like, we can have some discussions about the role an export exclusion capability would play in the UNIFYBroker/Plus ecosystem. It would require careful consideration to ensure we don't impact the correct function of gateways. In an ideal scenario you would line your import and link schedules up so they don't overlap, but I understand this can be difficult in complex configuration scenarios.