When testing login/logout functionality in applications, there should be low-level verification that the login behaved as expected, such as checking SAML messages, ID tokens, etc.
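One way this could look is a test helper that decodes an OIDC ID token and asserts its core claims. The sketch below is illustrative only: it skips signature verification (a real test should verify the signature, e.g. with a JWT library), and the issuer/audience values are made up. The demo builds an unsigned token locally just so the check has something to run against.

```python
import base64
import json
import time

def decode_jwt_payload(token: str) -> dict:
    """Decode the claims segment of a JWT. No signature check here --
    a real test must also verify the signature against the IdP's keys."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def assert_id_token(token: str, issuer: str, audience: str) -> dict:
    """Low-level login verification: confirm the token's key claims."""
    claims = decode_jwt_payload(token)
    assert claims["iss"] == issuer, "unexpected issuer"
    assert claims["aud"] == audience, "token minted for another client"
    assert claims["exp"] > time.time(), "token already expired"
    return claims

# Demo with a locally built, unsigned token; issuer/audience are illustrative.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=")
payload = base64.urlsafe_b64encode(json.dumps(
    {"iss": "https://idp.example.com", "aud": "my-app",
     "sub": "user1", "exp": time.time() + 3600}).encode()).rstrip(b"=")
token = b".".join([header, payload, b""]).decode()

claims = assert_id_token(token, "https://idp.example.com", "my-app")
print(claims["sub"])
```

The same pattern extends to SAML: capture the assertion from the response and assert on its issuer, audience restriction, and validity window.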
Thanks for the suggestion, Ravneel. I've added this to the backlog.
Currently there is a 1 MB ingestion limit active on the Dev ingestion service. For a couple of dashboards in the Dev instance of UNIFYMonitor, the data being pushed in each package is much larger than 1 MB, so all incoming packages fail to be ingested into Data Explorer. Development cannot be done on these dashboards as no current data is available.
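One possible interim workaround, assuming the push side can be changed, is to split each package into batches that stay under the limit before sending. This is a minimal sketch; the 1 MB figure, record shapes, and size accounting are illustrative, not the ingestion service's actual contract.

```python
import json

LIMIT_BYTES = 1_000_000  # Dev ingestion limit (1 MB); adjust to the real cap

def chunk_records(records, limit=LIMIT_BYTES):
    """Split records into batches whose serialised JSON payload
    stays under the ingestion limit."""
    batch, size = [], 2  # 2 bytes for the enclosing "[]"
    for rec in records:
        rec_size = len(json.dumps(rec).encode()) + 2  # +2 for ", " separator
        if batch and size + rec_size > limit:
            yield batch
            batch, size = [], 2
        batch.append(rec)
        size += rec_size
    if batch:
        yield batch

# Synthetic records totalling well over 1 MB:
records = [{"metric": "cpu", "value": i, "pad": "x" * 100} for i in range(20000)]
batches = list(chunk_records(records))
assert all(len(json.dumps(b).encode()) <= LIMIT_BYTES for b in batches)
print(len(batches))  # each batch is now individually ingestible
```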
There is value in being able to view alerts that have been raised over time. Currently, the solution is designed to action alerts with an email or Webhook. In addition to this, the solution should record each triggered alert as a log entry that can be displayed in a dashboard, as well as viewed in a historic list.
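As a rough shape for the idea: alongside the existing email/Webhook actions, each trigger writes a row to an alert log that a dashboard can query. SQLite and the column names here are purely illustrative; the real store would presumably be Data Explorer or Log Analytics.

```python
# Hedged sketch of an alert-occurrence log; SQLite stands in for the
# real store (e.g. Data Explorer) purely for illustration.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE alert_log (
    raised_at REAL, alert_name TEXT, severity TEXT, detail TEXT)""")

def record_alert(name: str, severity: str, detail: str) -> None:
    """Store the occurrence alongside firing the email/Webhook action."""
    db.execute("INSERT INTO alert_log VALUES (?, ?, ?, ?)",
               (time.time(), name, severity, detail))

record_alert("cpu_high", "warning", "CPU over 90% for 5 minutes")
record_alert("cert_expiry", "critical", "TLS cert expires in 7 days")

# The historic list, newest first -- the view a dashboard would query.
history = db.execute(
    "SELECT alert_name, severity FROM alert_log ORDER BY raised_at DESC"
).fetchall()
print(history)
```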
A Generic Service/Framework to retrieve Support Activities (Managed Service Activities) and show the Reports and Graphs
A generic framework/service should be developed (if one does not already exist) to retrieve all Support Tickets and other Managed Service activities performed via Ivanti (using its API) and send them to Analytics. There should also be a generic/templated dashboard and report in which the values and customer logos change dynamically (not sure if this is possible in Power BI).
As we're capturing information on the health of systems it seems like a good value add would be to allow those services to be "failed over" using that information.
Consider Azure Traffic Manager; it has some crossover with what we're doing. Could its External Endpoints monitoring do the job? I.e. we host an endpoint that Azure Traffic Manager calls into to check the health of the underlying service. Or do we develop some additional functionality that responds to DNS requests and do the failover ourselves?
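The first option only needs us to expose an HTTP endpoint that returns 200 while the underlying service is healthy, since Traffic Manager's endpoint probing keys off the response status. A minimal sketch, where `check_backend()` is a placeholder for whatever health data we already capture, and the port is arbitrary:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

def check_backend() -> bool:
    """Placeholder: consult the real health data we capture for the service."""
    return True

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health" and check_backend():
            status, body = 200, b"OK"           # probe treats 200 as healthy
        else:
            status, body = 503, b"UNAVAILABLE"  # non-200 marks the endpoint degraded
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep probe traffic out of stderr
        pass

# Local smoke test on an ephemeral port; in practice this would listen on
# the fixed port Traffic Manager's external endpoint probe is pointed at.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
resp = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_address[1]}/health")
print(resp.status)
server.shutdown()
```

The DNS-based alternative would mean implementing failover logic ourselves, which is considerably more work than hosting a probe target like this.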
Gordon should have direct integration with the ServiceDesk tool (i.e. Ivanti) to log Incidents that can then be correlated into Problems.
These can then be reported on.
This will be covered with the new project.
This is to satisfy the ability for customers' own ops monitoring/audit reporting tools to be utilised, even whilst we maintain our own.
I envision using Event Hubs as the location for collating and distributing this - including into our own Log Analytics instances.
https://splunkbase.splunk.com/app/3534/ is a sample of an application that a customer may use.
Potentially, we may want multiple hubs inside an Event Hubs instance so as to allow customers to access only the information that should be exposed to them.
UNIFYMonitor has functionality for client requests for collation and data extraction through an API. This should satisfy the requirement in the meantime.
It would be good if AMS had its own scheduler so we didn't need to trigger it from Windows Task Scheduler. It could then run as a persistent Windows service, or potentially as a WebJob in Azure.
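The core of such a scheduler can be small. A hedged sketch using the standard-library `sched` module; `run_checks` stands in for whatever AMS run Task Scheduler currently triggers, and the interval and run count are shortened purely for demonstration:

```python
import sched
import time

runs = []

def run_checks():
    """Placeholder for the AMS run Task Scheduler currently triggers."""
    runs.append(time.monotonic())

def schedule_repeating(scheduler, interval_s, job, runs_left):
    """Run the job, then re-queue it -- a real service would loop until
    shutdown rather than counting down."""
    job()
    if runs_left > 1:
        scheduler.enter(interval_s, 1, schedule_repeating,
                        (scheduler, interval_s, job, runs_left - 1))

scheduler = sched.scheduler(time.monotonic, time.sleep)
schedule_repeating(scheduler, 0.1, run_checks, 3)  # short interval for demo
scheduler.run()  # blocks; host inside a Windows service or Azure WebJob
print(len(runs))
```

Hosting this loop as the body of a Windows service (or an Azure WebJob marked continuous) would remove the Task Scheduler dependency entirely.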
We must identify what systems we will configure alerts/dashboards to work with. Adding systems will require custom scripts, new queries and new reporting graphics. A base stack of configurable systems can be offered that can be modified to fit individual clients needs. Any new proposal for a system must be considered in terms of its re-usability unless its development can be funded directly by a client.
From experience, other than application failure, one of the things that most often goes wrong is expired certificates (TLS and SAML signing certificates) and OAuth/OIDC secrets.
It would be good if the AMS reported on the expiry health of these certificates and shared secrets, with alerts raised when the time remaining until expiry falls below a configurable threshold.
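For the TLS case, the check itself is straightforward with the standard library. A hedged sketch: `days_until_expiry` fetches a live endpoint's certificate (the hostname would come from AMS configuration), and `should_alert` applies a configurable lead time; the 30-day window is an arbitrary example, and the demo exercises only the threshold logic so it needs no network access.

```python
import datetime
import socket
import ssl

ALERT_WINDOW = datetime.timedelta(days=30)  # configurable lead time

def days_until_expiry(host: str, port: int = 443) -> datetime.timedelta:
    """Fetch the endpoint's TLS certificate and return time to expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_after = datetime.datetime.strptime(
        cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return not_after - datetime.datetime.utcnow()

def should_alert(remaining: datetime.timedelta) -> bool:
    """Raise an alert once expiry is inside the configured window."""
    return remaining <= ALERT_WINDOW

# Threshold logic demonstrated with synthetic values (no network needed):
alert_soon = should_alert(datetime.timedelta(days=10))
alert_later = should_alert(datetime.timedelta(days=90))
print(alert_soon, alert_later)
```

SAML signing certificates and OAuth/OIDC secrets would need their own retrieval paths (metadata documents, app registration APIs), but the same threshold check applies.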
Thanks for the suggestion, Shane. I've added this to the backlog.