Datadog

Port's Datadog integration allows you to model Datadog resources in Port and ingest data into them.

Overview

This integration allows you to:

  • Map and organize your desired Datadog resources and their metadata in Port (see supported resources below).
  • Watch for Datadog object changes (create/update/delete) in real-time, and automatically apply the changes to your software catalog.

Supported Resources

Setup

Choose one of the following installation methods:

Using this installation option means that the integration will be hosted by Port, with a customizable resync interval to ingest data into Port.

Beta feature

The Hosted by Port option is currently in beta, and is still undergoing final testing before its official release.

Should you encounter any bugs or functionality issues, please let us know so we can rectify them as soon as possible.
Your help is greatly appreciated! ⭐

Live event support

Currently, live events are not supported for integrations hosted by Port.
Resyncs will be performed periodically every 1 hour by default (can be configured differently after installation), or manually triggered by you via Port's UI.

Therefore, real-time events (including GitOps) will not be ingested into Port immediately.
Support for live events is in progress and will be added in the near future.

Installation

To install, follow these steps:

  1. Go to the Data sources page of your portal.

  2. Click on the + Data source button in the top-right corner.

  3. Click on the relevant integration in the list.

  4. Under Select your installation method, choose Hosted by Port.

  5. Configure the integration settings and application settings as you wish (see below for details).

Application settings

Every integration hosted by Port has the following customizable application settings, which are configurable after installation:

  • Resync interval: The frequency at which Port will ingest data from the integration. There are various options available, ranging from every 1 hour to once a day.

  • Send raw data examples: A boolean toggle (enabled by default). If enabled, raw data examples will be sent from the integration to Port. These examples are used when testing your mapping configuration; they allow you to run your jq expressions against real data and see the results.

Integration settings

Every integration has its own tool-specific settings, under the Integration settings section.
Each of these settings has an ⓘ icon next to it, which you can hover over to see a description of the setting.

Port secrets

Some integration settings require sensitive pieces of data, such as tokens.
For these settings, Port secrets will be used, ensuring that your sensitive data is encrypted and secure.

When changing such a setting, you will be prompted to choose an existing secret or create a new one:



Port source IP addresses

When using this installation method, Port will make outbound calls to your 3rd-party applications from static IP addresses.
You may need to add these addresses to your allowlist, in order to allow Port to interact with the integrated service:

54.73.167.226  
63.33.143.237
54.76.185.219

Configuration

Port integrations use a YAML mapping block to ingest data from the third-party API into Port.

The mapping makes use of the JQ JSON processor to select, modify, concatenate, transform and perform other operations on existing fields and values from the integration API.
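For instance, a mapping for Datadog monitors could look like the following sketch. The blueprint identifier and property names here are illustrative and should match the blueprints in your own catalog:

```yaml
resources:
  # Each "kind" corresponds to a Datadog resource type supported by the integration.
  - kind: monitor
    selector:
      query: "true" # a jq boolean expression; only matching objects are ingested
    port:
      entity:
        mappings:
          # jq expressions evaluated against each object returned by the Datadog API
          identifier: .id | tostring
          title: .name
          blueprint: '"datadogMonitor"' # illustrative blueprint identifier
          properties:
            tags: .tags
```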

Examples

To view and test the integration's mapping against examples of the third-party API responses, use the jq playground in your data sources page. Find the integration in the list of data sources and click on it to open the playground.

Additional examples of blueprints and the relevant integration configurations can be found on the Datadog examples page.

Relevant Guides

For relevant guides and examples, see the guides section.

Alternative installation via webhook

While the Ocean integration described above is the recommended installation method, you may prefer to use a webhook to ingest alerts and monitor data from Datadog. If so, use the following instructions:

Note that when using this method, data will be ingested into Port only when the webhook is triggered.

Webhook installation (click to expand)

Port configuration

Create the following blueprint definitions:

Datadog microservice blueprint
{
  "identifier": "microservice",
  "title": "Microservice",
  "icon": "Service",
  "schema": {
    "properties": {
      "description": {
        "title": "Description",
        "type": "string"
      }
    },
    "required": []
  },
  "mirrorProperties": {},
  "calculationProperties": {},
  "relations": {}
}
Datadog alert/monitor blueprint
{
  "identifier": "datadogAlert",
  "description": "This blueprint represents a Datadog monitor/alert in our software catalog",
  "title": "Datadog Alert",
  "icon": "Datadog",
  "schema": {
    "properties": {
      "url": {
        "type": "string",
        "format": "url",
        "title": "Event URL"
      },
      "message": {
        "type": "string",
        "title": "Details"
      },
      "eventType": {
        "type": "string",
        "title": "Event Type"
      },
      "priority": {
        "type": "string",
        "title": "Metric Priority"
      },
      "creator": {
        "type": "string",
        "title": "Creator"
      },
      "alertMetric": {
        "type": "string",
        "title": "Alert Metric"
      },
      "alertType": {
        "type": "string",
        "title": "Alert Type",
        "enum": ["error", "warning", "success", "info"]
      },
      "tags": {
        "type": "array",
        "items": {
          "type": "string"
        },
        "title": "Tags"
      }
    },
    "required": []
  },
  "mirrorProperties": {},
  "calculationProperties": {},
  "relations": {
    "microservice": {
      "title": "Services",
      "target": "microservice",
      "required": false,
      "many": false
    }
  }
}

Create the following webhook configuration using Port UI:

Datadog webhook configuration
  1. Basic details tab - fill the following details:

    1. Title: Datadog Alert Mapper
    2. Identifier: datadog_alert_mapper
    3. Description: A webhook configuration for alerts/monitors events from Datadog
    4. Icon: Datadog
  2. Integration configuration tab - fill the following JQ mapping:

    [
      {
        "blueprint": "datadogAlert",
        "entity": {
          "identifier": ".body.alert_id | tostring",
          "title": ".body.title",
          "properties": {
            "url": ".body.event_url",
            "message": ".body.message",
            "eventType": ".body.event_type",
            "priority": ".body.priority",
            "creator": ".body.creator",
            "alertMetric": ".body.alert_metric",
            "alertType": ".body.alert_type",
            "tags": ".body.tags | split(\", \")"
          },
          "relations": {
            "microservice": ".body.service"
          }
        }
      }
    ]
  3. Click Save at the bottom of the page.

note

The webhook configuration's relation mapping will function properly only when the identifiers of the Port microservice entities match the names of the services or hosts in your Datadog account.

Create a webhook in Datadog

  1. Log in to Datadog with your credentials.
  2. Click on Integrations at the left sidebar of the page.
  3. Search for Webhooks in the search box and select it.
  4. Go to the Configuration tab and follow the installation instructions.
  5. Click on New.
  6. Input the following details:
    1. Name - use a meaningful name such as Port_Webhook.

    2. URL - enter the value of the url key you received after creating the webhook configuration.

    3. Payload - When an alert is triggered on your monitors, this payload will be sent to the webhook URL. You can enter this JSON placeholder in the textbox:

      {
        "id": "$ID",
        "message": "$TEXT_ONLY_MSG",
        "priority": "$PRIORITY",
        "last_updated": "$LAST_UPDATED",
        "event_type": "$EVENT_TYPE",
        "event_url": "$LINK",
        "service": "$HOSTNAME",
        "creator": "$USER",
        "title": "$EVENT_TITLE",
        "date": "$DATE",
        "org_id": "$ORG_ID",
        "org_name": "$ORG_NAME",
        "alert_id": "$ALERT_ID",
        "alert_metric": "$ALERT_METRIC",
        "alert_status": "$ALERT_STATUS",
        "alert_title": "$ALERT_TITLE",
        "alert_type": "$ALERT_TYPE",
        "tags": "$TAGS"
      }
    4. Custom Headers - configure any custom HTTP header to be added to the webhook event. The format for the header should be in JSON.

  7. Click Save at the bottom of the page.
tip

To view the different payloads and structure of the events in Datadog webhooks, see Datadog's webhooks integration documentation.

Done! Any problem detected on your Datadog instance will trigger a webhook event. Port will parse the events according to the mapping and update the catalog entities accordingly.

Let's Test It

This section includes sample response data from Datadog, as well as the entity that will be created from the webhook event based on the webhook configuration provided in the previous section.

Payload

Here is an example of the payload structure from Datadog:

Webhook response data (Click to expand)
{
  "id": "1234567890",
  "message": "This is a test message",
  "priority": "normal",
  "last_updated": "2022-01-01T00:00:00+00:00",
  "event_type": "triggered",
  "event_url": "https://app.datadoghq.com/event/jump_to?event_id=1234567890",
  "service": "my-service",
  "creator": "rudy",
  "title": "[Triggered] [Memory Alert]",
  "date": "1406662672000",
  "org_id": "123456",
  "org_name": "my-org",
  "alert_id": "1234567890",
  "alert_metric": "system.load.1",
  "alert_status": "system.load.1 over host:my-host was > 0 at least once during the last 1m",
  "alert_title": "[Triggered on {host:ip-012345}] Host is Down",
  "alert_type": "error",
  "tags": "monitor, name:myService, role:computing-node"
}

Mapping Result

The combination of the sample payload and the webhook configuration generates the following Port entity:

Alert entity in Port (Click to expand)
{
  "identifier": "1234567890",
  "title": "[Triggered] [Memory Alert]",
  "blueprint": "datadogAlert",
  "team": [],
  "icon": "Datadog",
  "properties": {
    "url": "https://app.datadoghq.com/event/jump_to?event_id=1234567890",
    "message": "This is a test message",
    "eventType": "triggered",
    "priority": "normal",
    "creator": "rudy",
    "alertMetric": "system.load.1",
    "alertType": "error",
    "tags": ["monitor", "name:myService", "role:computing-node"]
  },
  "relations": {
    "microservice": "my-service"
  },
  "createdAt": "2024-02-06T09:30:57.924Z",
  "createdBy": "hBx3VFZjqgLPEoQLp7POx5XaoB0cgsxW",
  "updatedAt": "2024-02-06T11:49:20.881Z",
  "updatedBy": "hBx3VFZjqgLPEoQLp7POx5XaoB0cgsxW"
}
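To see how the payload becomes this entity, the jq mapping above can be approximated in plain Python. This is a sketch for illustration only; in practice the transformation is performed by Port's jq processor against the webhook configuration:

```python
def map_webhook_payload(body: dict) -> dict:
    """Build a Port entity from a Datadog webhook payload,
    mirroring the jq mapping in the webhook configuration."""
    return {
        "identifier": str(body["alert_id"]),  # .body.alert_id | tostring
        "title": body["title"],               # .body.title
        "blueprint": "datadogAlert",
        "properties": {
            "url": body["event_url"],
            "message": body["message"],
            "eventType": body["event_type"],
            "priority": body["priority"],
            "creator": body["creator"],
            "alertMetric": body["alert_metric"],
            "alertType": body["alert_type"],
            "tags": body["tags"].split(", "),  # .body.tags | split(", ")
        },
        "relations": {
            "microservice": body["service"],   # .body.service
        },
    }

# Applying it to the sample payload shown above:
sample = {
    "alert_id": "1234567890",
    "title": "[Triggered] [Memory Alert]",
    "event_url": "https://app.datadoghq.com/event/jump_to?event_id=1234567890",
    "message": "This is a test message",
    "event_type": "triggered",
    "priority": "normal",
    "creator": "rudy",
    "alert_metric": "system.load.1",
    "alert_type": "error",
    "tags": "monitor, name:myService, role:computing-node",
    "service": "my-service",
}
entity = map_webhook_payload(sample)
```

Note how `split(", ")` turns the comma-separated `$TAGS` string into the array expected by the `tags` property of the blueprint.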

Ingest service level objectives (SLOs)

This guide will walk you through the steps to ingest Datadog SLOs into Port. By following these steps, you will be able to create a blueprint for a microservice entity in Port, representing a service in your Datadog account. Furthermore, you will establish a relation between this service and the datadogSLO blueprint, allowing the ingestion of all defined SLOs from your Datadog account.

The provided example demonstrates how to pull data from Datadog's REST API at scheduled intervals using GitLab Pipelines and report the data to Port.
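A minimal sketch of such a scheduled job is shown below. It uses only the standard library; the environment variable names, blueprint shape, and property names are assumptions for illustration, while the endpoint (`GET /api/v1/slo`) and the `DD-API-KEY`/`DD-APPLICATION-KEY` headers are Datadog's documented authentication scheme:

```python
import json
import os
import urllib.request

DATADOG_API_URL = "https://api.datadoghq.com"  # adjust for EU or other Datadog sites


def build_slo_request(api_key: str, app_key: str) -> urllib.request.Request:
    """Prepare an authenticated request for Datadog's 'get all SLOs' endpoint."""
    return urllib.request.Request(
        f"{DATADOG_API_URL}/api/v1/slo",
        headers={
            "DD-API-KEY": api_key,
            "DD-APPLICATION-KEY": app_key,
        },
    )


def slo_to_entity(slo: dict) -> dict:
    """Convert one SLO object from the API response into a Port entity body.
    The property names here are illustrative and must match your blueprint."""
    return {
        "identifier": slo["id"],
        "title": slo["name"],
        "properties": {
            "description": slo.get("description"),
            "sloType": slo.get("type"),
        },
    }


if __name__ == "__main__" and "DD_API_KEY" in os.environ:
    # Hypothetical variable names; in GitLab these would be CI/CD variables.
    req = build_slo_request(os.environ["DD_API_KEY"], os.environ["DD_APP_KEY"])
    with urllib.request.urlopen(req) as resp:
        slos = json.load(resp)["data"]
    entities = [slo_to_entity(s) for s in slos]
    # Each entity would then be upserted to Port via its REST API.
    print(json.dumps(entities, indent=2))
```

In a GitLab pipeline, this script would run on a schedule, with the keys supplied as masked CI/CD variables.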

Ingest service dependency from your APM

In this example, you will create a service blueprint that ingests all services and their related dependencies in your Datadog APM using the REST API. You will then add a shell script to create new entities in Port every time GitLab CI is triggered by a schedule.
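Conceptually, the service-dependency response maps each service to the services it calls. The sketch below shows how such a response could be turned into Port entity bodies with a many-relation; the response shape, blueprint, and relation name (`dependsOn`) are assumptions for illustration:

```python
def dependencies_to_entities(dep_map: dict) -> list:
    """Turn a {service: {"calls": [downstream services]}} map into entity
    bodies for a hypothetical 'service' blueprint with a 'dependsOn' relation."""
    entities = []
    for service, info in dep_map.items():
        entities.append({
            "identifier": service,
            "title": service,
            "relations": {
                # many-relation pointing at the services this one calls
                "dependsOn": info.get("calls", []),
            },
        })
    return entities


# A toy response resembling an APM dependency map:
sample_response = {
    "checkout": {"calls": ["payments", "inventory"]},
    "payments": {"calls": []},
}
entities = dependencies_to_entities(sample_response)
```

As the note in the webhook section explains for hosts, the relation will only resolve if the downstream service names match the identifiers of the entities in Port.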

Ingest service catalog

In this example, you will create a datadogServiceCatalog blueprint that ingests all service catalogs from your Datadog account. You will then add a Python script that makes API calls to the Datadog REST API and fetches the data for your account.