
LogicMonitor Data Publisher Service

Note: This feature is in beta. For more details, contact your LogicMonitor Customer Success Manager.


LogicMonitor Data Publisher is an integrated service that extracts and sends real-time datasource metrics to a third-party destination. You might want to send collected data to other data stores, or export it for analysis in your preferred analytics tool. To enable customers to further analyse their data, LogicMonitor has introduced the LogicMonitor Data Publisher service.

Currently, the LogicMonitor Data Publisher service supports integration with Kafka. LogicMonitor Data Publisher integrates a Kafka Producer into the collector, where collected metrics are published to a Kafka topic. The collector sends the metrics to both the third-party destination and the LogicMonitor portal. The raw data from datasources, along with metadata, is converted to the OpenTelemetry Protocol (OTLP) format and pushed to the customer-hosted Kafka setup.

Typically, you can send the collected data to other data stores for the following reasons:

  • Export data to create a data lake
  • Export data from LogicMonitor to analytics platforms such as Power BI and Tableau
  • Directly access data without being constrained by API rate limits
  • Export metrics data to populate a capacity data repository
  • Extract metrics using integration tools such as Kafka, Prometheus, Grafana, and S3

Requirements to use LogicMonitor Data Publisher

  • Kafka-clients version 3.8.1
  • Collector version EA 35.300 or later
  • The LogicMonitor Data Publisher service must be enabled (enable.collector.publisher=true) in the agent.conf settings
  • Strong network connectivity between the collector and Kafka
  • Kafka broker hosted URLs
  • Kafka topic name
  • Kafka must be able to read and convert metrics data using the metrics.proto version v1.0.0

Considerations to use LogicMonitor Data Publisher

  • Data is published from collector to a single Kafka topic
  • Customers own the Kafka infrastructure
  • Customer must create the Kafka topic
  • Kafka configuration is part of agent.conf settings
  • Data is published to Kafka topic in the OTLP format
  • The shared data has the necessary metadata: device, datasource, and instance information
  • Customer must write the consumer to consume the data

LogicMonitor Data Publisher Service Functioning

Once you integrate Kafka with the collector, the LogicMonitor Data Publisher service automatically shares the desired data. The key steps are as follows:

  1. Enable and configure Kafka publisher in the agent.conf settings. For details, see the Kafka Property Configurations section. 
  2. Restart the collector to start the LogicMonitor Data Publisher module.
  3. The LogicMonitor Data Publisher collects and translates the metric data into standard OTLP format.
  4. Kafka sends the formatted data to the Kafka topic.
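As an illustration of step 1, a minimal agent.conf fragment for enabling the publisher might look like the following. The broker URLs and topic name are placeholder values; replace them with your own.

```
# Enable the LogicMonitor Data Publisher service
enable.collector.publisher=true
# Comma-separated Kafka bootstrap servers (placeholder values)
kafka.broker.server.urls=kafka1.example.com:9092,kafka2.example.com:9092
# Topic that you created on your Kafka cluster
kafka.topic.name=lm-metrics
```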

LogicMonitor Data Publisher Authentication

By default, LogicMonitor Data Publisher sends data in the noAuth mode (that is, in plain text). To enable the Auth mode, perform the following steps:

  1. Add the default kafka.producer.truststore.jks and kafka.producer.keystore.jks certificates to the publisherCerts directory in the location where the collector is installed. Instead of the default certificates, you can also add truststore and keystore certificates with different names.
  2. In the agent.conf settings, set the following properties:
    • agent.publisher.enable.auth
    • kafka.ssl.truststore.password
    • kafka.ssl.keystore.password
    • kafka.ssl.key.password
    • kafka.ssl.truststore.name
    • kafka.ssl.keystore.name

      Note: If you add truststore and keystore certificates with different names, specify the names in the kafka.ssl.truststore.name and kafka.ssl.keystore.name properties in the agent.conf settings. For details, see the Kafka Property Configurations section.
  3. Restart the collector. LogicMonitor Data Publisher switches to the Auth (SSL) mode. 
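Under these assumptions (custom certificate names and placeholder passwords), the Auth-related agent.conf entries might look like the following sketch:

```
# Switch the publisher from noAuth (plain text) to Auth (SSL) mode
agent.publisher.enable.auth=true
# Custom certificate names placed in the publisherCerts directory
kafka.ssl.truststore.name=my.truststore.jks
kafka.ssl.keystore.name=my.keystore.jks
# Sensitive values; encrypted in agent.conf after the collector restarts
kafka.ssl.truststore.password=<truststore-password>
kafka.ssl.keystore.password=<keystore-password>
kafka.ssl.key.password=<key-password>
```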

Kafka Property Configurations

In the agent.conf settings, you can configure the following Kafka properties.

  • enable.collector.publisher: (Mandatory) Set to true to enable the LogicMonitor Data Publisher service. By default, the value is false.
  • kafka.broker.server.urls: (Mandatory) A comma-separated list of host:port pairs used to establish the initial connection with the Kafka cluster. Example: host1:port1,host2:port2, and so on.
  • kafka.topic.name: (Mandatory) The Kafka topic to which data is published.
  • agent.publisher.enable.auth: By default, the Kafka publisher sends data in the noAuth mode (that is, in plain text). To enable the Auth mode, set the property to true. Once enabled, the Kafka publisher switches to the SSL mode.
  • kafka.ssl.truststore.name: The Kafka producer truststore name. You must add the corresponding certificate to the publisherCerts folder in the Agent root directory. The default value is kafka.producer.truststore.jks.
  • kafka.ssl.truststore.password: The password for the truststore specified in kafka.ssl.truststore.name. The value of this sensitive property is encrypted in the agent.conf settings.
  • kafka.ssl.keystore.name: The Kafka producer keystore name. You must add the corresponding certificate to the publisherCerts folder in the Agent root directory. The default value is kafka.producer.keystore.jks.
  • kafka.ssl.keystore.password: The password for the keystore specified in kafka.ssl.keystore.name. The value of this sensitive property is encrypted in the agent.conf settings.
  • kafka.ssl.key.password: The Kafka ssl.key.password. The value of this sensitive property is encrypted in the agent.conf settings.
  • kafka.linger.ms: Equivalent to the Kafka Producer ProducerConfig.LINGER_MS_CONFIG. By default, the value is 5000 ms.
  • kafka.batch.size: Equivalent to the Kafka Producer ProducerConfig.BATCH_SIZE_CONFIG. By default, the value is 50 KB.
  • kafka.max.in.flight.requests.per.connection: Equivalent to the Kafka Producer ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION. By default, the value is 1, which helps maintain the order of data.
  • kafka.enable.idempotence: Equivalent to the Kafka Producer ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG. By default, the value is true.
  • kafka.acks: Equivalent to the Kafka Producer ProducerConfig.ACKS_CONFIG. By default, the value is all.
  • kafka.retries: Equivalent to the Kafka Producer ProducerConfig.RETRIES_CONFIG. By default, the value is 1.
  • kafka.max.block.ms: Equivalent to the Kafka Producer ProducerConfig.MAX_BLOCK_MS_CONFIG. By default, the value is 3000 ms.
  • kafka.compression.type: Equivalent to the Kafka Producer ProducerConfig.COMPRESSION_TYPE_CONFIG. By default, the value is snappy.
  • enable.kafka.key.value.data: Kafka Producer can send data in a key-value format. To prevent excessive disk usage on the Kafka broker, the Kafka publisher sends only the message, without a key. To send data in the key-value format, set enable.kafka.key.value.data=true.
    Note: The key is a string in the format HostName$DataSourceName$InstanceName.
  • kafka.send.data.in.String: To send data in a string-serialized format, set kafka.send.data.in.String=true. By default, the data is sent in the ByteArray format.
  • collector.publisher.device.props: Enables the Data Publisher to send device properties under the "resource" section of the metrics data. Sensitive properties such as snmp.community and wmi.pass are not sent. By default, the limit is 5 device properties, and the maximum is 10 device properties.

Data Model

The LogicMonitor Data Publisher publishes the metrics data in the OTLP formatted JSON. OTLP is a standard protocol for transmitting telemetry data in observability and monitoring systems. Metrics data in OTLP consists of one or more time series, where each time series represents a set of related data points over time. 

The following is an example of OTLP metric data in JSON format for an instance of the LogicMonitor_Collector_ThreadCPUUsage datasource.

{
    "resourceMetrics": [
        {
            "resource": {
                "attributes": [
                    {
                        "key": "hostName",
                        "value": {
                            "stringValue": "127.0.0.1"
                        }
                    },
                    {
                        "key": "hostId",
                        "value": {
                            "stringValue": "1017594"
                        }
                    },
                    {
                        "key": "devicePropKey",
                        "value": {
                            "stringValue": "devicePropValue"
                        }
                    }
                ]
            },
            "scopeMetrics": [
                {
                    "scope": {
                        "name": "LogicMonitor_Collector_ThreadCPUUsage",
                        "attributes": [
                            {
                                "key": "collector",
                                "value": {
                                    "stringValue": "jmx"
                                }
                            },
                            {
                                "key": "epoch",
                                "value": {
                                    "stringValue": "1715263558360"
                                }
                            },
                            {
                                "key": "datasourceId",
                                "value": {
                                    "stringValue": "128265135"
                                }
                            },
                            {
                                "key": "datasourceInstanceId",
                                "value": {
                                    "stringValue": "367542931"
                                }
                            }
                        ]
                    },
                    "metrics": [
                        {
                            "name": "CpuUsage",
                            "sum": {
                                "dataPoints": [
                                    {
                                        "startTimeUnixNano": "1715263558360000000",
                                        "timeUnixNano": "1715263558360000000",
                                        "asDouble": 0,
                                        "attributes": [
                                            {
                                                "key": "dataSourceInstanceName",
                                                "value": {
                                                    "stringValue": "LogicMonitor_Collector_ThreadCPUUsage-netscan-propsdetection"
                                                }
                                            },
                                            {
                                                "key": "datapointid",
                                                "value": {
                                                    "stringValue": "197642"
                                                }
                                            },
                                            {
                                                "key": "wildValue",
                                                "value": {
                                                    "stringValue": "netscan-propsdetection"
                                                }
                                            },
                                            {
                                                "key": "wildAlias",
                                                "value": {
                                                    "stringValue": "netscan-propsdetection"
                                                }
                                            }
                                        ]
                                    }
                                ],
                                "aggregationTemporality": "AGGREGATION_TEMPORALITY_DELTA",
                                "isMonotonic": true
                            }
                        },
                        {
                            "name": "ProcessorCount",
                            "gauge": {
                                "dataPoints": [
                                    {
                                        "startTimeUnixNano": "1715263558360000000",
                                        "timeUnixNano": "1715263558360000000",
                                        "asDouble": 10,
                                        "attributes": [
                                            {
                                                "key": "dataSourceInstanceName",
                                                "value": {
                                                    "stringValue": "LogicMonitor_Collector_ThreadCPUUsage-netscan-propsdetection"
                                                }
                                            },
                                            {
                                                "key": "datapointid",
                                                "value": {
                                                    "stringValue": "197643"
                                                }
                                            },
                                            {
                                                "key": "wildValue",
                                                "value": {
                                                    "stringValue": "netscan-propsdetection"
                                                }
                                            },
                                            {
                                                "key": "wildAlias",
                                                "value": {
                                                    "stringValue": "netscan-propsdetection"
                                                }
                                            }
                                        ]
                                    }
                                ]
                            }
                        },
                        {
                            "name": "RunnableThreadCnt",
                            "gauge": {
                                "dataPoints": [
                                    {
                                        "startTimeUnixNano": "1715263558360000000",
                                        "timeUnixNano": "1715263558360000000",
                                        "asDouble": 0,
                                        "attributes": [
                                            {
                                                "key": "dataSourceInstanceName",
                                                "value": {
                                                    "stringValue": "LogicMonitor_Collector_ThreadCPUUsage-netscan-propsdetection"
                                                }
                                            },
                                            {
                                                "key": "datapointid",
                                                "value": {
                                                    "stringValue": "197644"
                                                }
                                            },
                                            {
                                                "key": "wildValue",
                                                "value": {
                                                    "stringValue": "netscan-propsdetection"
                                                }
                                            },
                                            {
                                                "key": "wildAlias",
                                                "value": {
                                                    "stringValue": "netscan-propsdetection"
                                                }
                                            }
                                        ]
                                    }
                                ]
                            }
                        },
                        {
                            "name": "ThreadCnt",
                            "gauge": {
                                "dataPoints": [
                                    {
                                        "startTimeUnixNano": "1715263558360000000",
                                        "timeUnixNano": "1715263558360000000",
                                        "asDouble": 0,
                                        "attributes": [
                                            {
                                                "key": "dataSourceInstanceName",
                                                "value": {
                                                    "stringValue": "LogicMonitor_Collector_ThreadCPUUsage-netscan-propsdetection"
                                                }
                                            },
                                            {
                                                "key": "datapointid",
                                                "value": {
                                                    "stringValue": "197645"
                                                }
                                            },
                                            {
                                                "key": "wildValue",
                                                "value": {
                                                    "stringValue": "netscan-propsdetection"
                                                }
                                            },
                                            {
                                                "key": "wildAlias",
                                                "value": {
                                                    "stringValue": "netscan-propsdetection"
                                                }
                                            }
                                        ]
                                    }
                                ]
                            }
                        }
                    ]
                }
            ]
        }
    ]
}

The resourceMetrics consists of the following:

  • Resource: The metadata of the device from which metrics are collected.
  • ScopeMetrics: Contains scope and metrics.
    • Scope: The metadata of the datasource and instances for which metrics are collected.
    • Metrics: The actual datapoints of the datasource instances retrieved from the device.

Note: Raw data has two types of datapoints: normal and complex. LogicMonitor Data Publisher can send only normal datapoints in metrics data.

The OTLP convertor is a gRPC service in LogicMonitor Data Publisher that implements protobuf (based on metrics.proto v1.0.0) to convert collector metric data to the OTLP JSON format.
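As a sketch of the consumer-side work (which, per the considerations above, you must write yourself), the following Python function shows how a consumed OTLP JSON payload could be flattened into simple (host, datasource, datapoint, value) rows. The payload shape follows the example above; the function name and row layout are illustrative, not part of the product.

```python
import json

def flatten_otlp(payload: bytes):
    """Flatten an OTLP-formatted JSON payload into simple metric rows."""
    doc = json.loads(payload)
    rows = []
    for rm in doc.get("resourceMetrics", []):
        # Resource attributes carry device metadata (hostName, hostId, ...)
        res = {a["key"]: a["value"]["stringValue"]
               for a in rm["resource"]["attributes"]}
        for sm in rm.get("scopeMetrics", []):
            scope_name = sm["scope"]["name"]  # datasource name
            for metric in sm.get("metrics", []):
                # A metric holds its dataPoints under "gauge" or "sum"
                body = metric.get("gauge") or metric.get("sum") or {}
                for dp in body.get("dataPoints", []):
                    rows.append({
                        "host": res.get("hostName"),
                        "datasource": scope_name,
                        "datapoint": metric["name"],
                        "value": dp.get("asDouble"),
                    })
    return rows
```

A real consumer would call a function like this on each message value fetched from the Kafka topic (after string deserialization, if kafka.send.data.in.String is enabled).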

Metadata Details

The collected data is added to Kafka, S3, or any other destination in the JSON format. The JSON file contains data collected for a single poll along with the following metadata:

  • Host name or Device name 
  • DataSource name
  • Instance name 
  • Polling interval
  • Epoch details
  • DataPoint name

LogicMonitor Data Publisher Performance Monitoring

The LogicMonitor Data Publisher datasource monitors and provides real-time performance metrics for the LogicMonitor Data Publisher service. The following table lists all the datapoints it tracks.

  • CountOfDataEnqueued: Count of data enqueued for publishing.
  • CountOfDataDequeued: Count of data dequeued for publishing.
  • SizeOfBigQueue: Size of the queue in which data persists.
  • KafkaRequestCount: Number of Kafka requests.
  • CountofSuccessfulRequestsToKafka: Number of successful requests to Kafka.
  • CountOfRequestsFailedDuetoAuthError: Count of requests failed due to auth errors, if Auth is enabled.
  • CountOfRequestsFailedDuetoNetworkErrors: Count of requests failed due to network errors.
  • CountofRequestsfailedDueToKafkaError: Number of messages failed due to Kafka errors.
  • TimeTakenforDequeueAndConversion: Time taken for dequeuing data from the queue and converting it to the OTLP format.
  • SizeOfDataPublishedinBytes: Size (in bytes) of the data published by the Kafka client.

Note: If the connection with the Kafka broker fails, LogicMonitor Data Publisher can store data for up to 30 minutes.

Kafka Setup Recommendations

On average, a single record is approximately 25 KB. The amount of data that is sent is calculated based on the following four factors:

  • Number of collectors with LogicMonitor Data Publisher service enabled
  • Number of devices
  • Count of datasource instances 
  • Polling period

Example

Assumption: A collector has the following monitoring setup.

  • Single record size: 25 KB
  • Number of devices: 10
  • Number of datasources: 10
  • Number of datasource instances per device (considering 5 instances per datasource): 5 instances × 10 datasources = 50 datasource instances
  • Total instances for 10 devices: 50 datasource instances × 10 devices = 500
  • Average polling interval: 5 minutes

If each instance represents a single record, the average size of data published per polling interval is calculated as follows:
500 instances × 25 KB/instance = 12500 KB per 5 minutes

Thus, the LogicMonitor Data Publisher service publishes data in this monitoring setup at the following rates:

  • Per 5 minutes: 12500 KB
  • Per 1 minute: 2500 KB
  • Per 1 second: 41.67 KB
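The arithmetic above can be reproduced with a short helper. The values mirror the assumed setup (500 instances, 25 KB per record, 5-minute polling); the function name is illustrative.

```python
def publish_rate_kb(instances: int, record_kb: float,
                    poll_interval_s: float, window_s: float) -> float:
    """Average KB of data published over `window_s` seconds, assuming one
    `record_kb`-sized record per instance per polling interval."""
    per_poll_kb = instances * record_kb          # data produced by one poll
    return per_poll_kb * window_s / poll_interval_s

# Assumed setup from the example: 500 instances, 25 KB per record,
# 5-minute (300 s) polling interval.
print(publish_rate_kb(500, 25, 300, 300))           # 12500.0 KB per 5 minutes
print(publish_rate_kb(500, 25, 300, 60))            # 2500.0 KB per minute
print(round(publish_rate_kb(500, 25, 300, 1), 2))   # 41.67 KB per second
```

The same helper lets you re-run the estimate for your own instance counts and polling intervals when sizing the Kafka cluster.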

Kafka Cluster

It is recommended that the Kafka cluster configuration is similar to an m5.2xlarge instance of AWS EC2. The configuration details are given below.

Hardware Settings

  • CPU Cores: 8
  • Memory (RAM): 32 GB

Kafka Cluster Settings

The recommended values for a Kafka cluster are as follows:

  • Number of brokers in a cluster: 3. Multiple brokers help avoid data loss; for example, if one broker is down, the other brokers in the cluster continue to serve the data.
  • Replication factor: 3. This matches the broker count; the same topic is created on all brokers, and the data sent to the topic is replicated across them.
  • Retention period: 6 hours. Indicates the duration for which data stays in the Kafka broker; data is purged after the retention period is over. Although the appropriate value depends on the consumer, it is recommended that you set this configuration for effective memory usage.
  • Partition limit per broker: 2000. In Kafka topics, data is stored in partitions. This is the maximum partition limit for each broker.
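Assuming a self-managed Apache Kafka cluster, a topic matching these recommendations could be created with the standard kafka-topics.sh tool. The bootstrap server, topic name, and partition count below are placeholders; the retention of 6 hours is expressed as 21600000 ms.

```
kafka-topics.sh --create \
  --bootstrap-server kafka1.example.com:9092 \
  --topic lm-metrics \
  --partitions 6 \
  --replication-factor 3 \
  --config retention.ms=21600000
```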

Network Settings

Throughput Calculation

To estimate the total throughput of the Kafka cluster, consider the rate at which data is produced and consumed. Based on this, you can determine the required network bandwidth between the collector and the Kafka cluster.

Replication Factor

Based on the recommended replication factor, ensure that the network bandwidth accommodates the replication traffic between brokers.

Producer and Consumer Configurations 

To reduce network overhead, tune Kafka producers and consumers to batch messages. For details, see the Kafka Property Configurations section.

Security Settings 

The LogicMonitor Data Publisher supports both the plain text (noAuth) and SSL (Auth) modes. However, to strengthen security, it is recommended that you use the SSL (Auth) mode. The security-related configuration properties are as follows:

  • agent.publisher.enable.auth 
  • kafka.ssl.truststore.name
  • kafka.ssl.truststore.password
  • kafka.ssl.keystore.name
  • kafka.ssl.keystore.password
  • kafka.ssl.key.password

For details, see the Kafka Property Configurations section. 
