Datadog’s App Builder is a game-changer, extending the platform beyond observability to empower users to build custom applications. The possibilities are vast, from monitoring solutions to remediation tools and even resource management. However, organizations with on-premise workloads, or those needing actions beyond standard cloud APIs, face a challenge: securely connecting Datadog’s SaaS environment to internal infrastructure.
Enter Datadog’s Private Action Runner. This tool bridges the gap, enabling App Builder to interact seamlessly with resources that are inaccessible from the public network. Think of it as Datadog Synthetics’ private locations, but with a twist: in App Builder mode, the runner acts as a proxy, securely relaying requests initiated by the user’s browser within a private network.
Here’s the crucial point: the runner only communicates outbound to Datadog for authentication and enrollment, ensuring your sensitive data remains within your environment. To get the most out of the Private Action Runner in App Builder mode, you must assign it a DNS hostname and manage SSL termination. While the runner supports both Docker and Kubernetes, I recommend Kubernetes for its scalability and manageability. Let’s deploy it on EKS.
Setting Up Your Private Action Runner
The initial setup within Datadog is straightforward. Navigate to the new Private Action Runner page, give your runner a descriptive name, select the operating mode (App Builder, Workflow Automation or both), input the DNS hostname and specify the actions it is authorized to perform. For example, I chose actions related to GitLab, as my self-hosted GitLab instance is a key internal service.
Datadog will then generate a Docker command that creates the URN and private key for your runner’s configuration. These, along with any credentials needed to access your internal infrastructure, are added to the private-action-runner chart’s values.yaml file.
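For illustration, the resulting values file might look roughly like the sketch below. Treat every key name here as an assumption: the actual schema (runner config keys, credential layout) depends on the chart version, and the URN, private key and GitLab URL shown are placeholders, not real values.

```yaml
# Illustrative sketch only: key names vary by chart version; consult the
# private-action-runner Helm chart's documented values schema.
runner:
  config:
    # Placeholder URN and key, as produced by the enrollment Docker command
    urn: "urn:dd:apps:on-prem-runner:us1:1234:my-runner"
    privateKey: "<private key from the enrollment command>"
  credentials:
    # Hypothetical credential block for a self-hosted GitLab instance
    gitlab:
      baseUrl: "https://gitlab.internal.example.com"
      token: "<GitLab access token>"
```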
My Approach to Kubernetes Deployment on EKS
There are multiple deployment strategies, but I prefer leveraging Kubernetes resources for streamlined management. My setup utilizes an Ingress resource to deploy an EKS-managed application load balancer (ALB) via the AWS Load Balancer Controller. This is combined with ExternalDNS to automate DNS record management for the runner hostname. This approach centralizes resource interaction within Kubernetes, eliminating the need for separate Terraform or custom scripts.
Here’s a sample Ingress resource configuration that handles SSL termination and Route53 record creation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: runner-ingress
  namespace: runner
  annotations:
    alb.ingress.kubernetes.io/target-type: "ip"
    alb.ingress.kubernetes.io/healthcheck-protocol: "HTTP"
    alb.ingress.kubernetes.io/healthcheck-path: "/liveness"
    alb.ingress.kubernetes.io/healthcheck-port: "9016"
    external-dns.alpha.kubernetes.io/hostname: test-runner.lab.rapdev.io
spec:
  ingressClassName: alb
  rules:
    - host: test-runner.lab.rapdev.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: private-action-runner-runner-service
                port:
                  number: 9016
To orchestrate the deployment of the private-action-runner, aws-load-balancer-controller, external-dns and the Ingress resource, I used a Helmfile. Before running the Helm release, ensure you have configured the necessary service accounts and identity and access management (IAM) policies for the load balancer controller and ExternalDNS, as detailed in their respective setup instructions.
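As a sketch, a Helmfile tying these releases together could look like the following. The values file names are hypothetical, and you should pin chart versions and verify repository URLs and chart names (in particular the Datadog private-action-runner chart) against the upstream documentation before use.

```yaml
# helmfile.yaml: illustrative sketch; adjust repositories, versions
# and values file paths to match your environment.
repositories:
  - name: eks
    url: https://aws.github.io/eks-charts
  - name: external-dns
    url: https://kubernetes-sigs.github.io/external-dns
  - name: datadog
    url: https://helm.datadoghq.com

releases:
  # ALB controller that reconciles the Ingress resource into an ALB
  - name: aws-load-balancer-controller
    namespace: kube-system
    chart: eks/aws-load-balancer-controller
    values:
      - alb-controller-values.yaml     # hypothetical values file
  # ExternalDNS manages the Route53 record for the runner hostname
  - name: external-dns
    namespace: kube-system
    chart: external-dns/external-dns
    values:
      - external-dns-values.yaml       # hypothetical values file
  # The runner itself, configured with the URN and private key
  - name: private-action-runner
    namespace: runner
    chart: datadog/private-action-runner
    values:
      - private-action-runner-values.yaml
```

The Ingress resource itself can be applied alongside these releases (for example, via a small raw-manifest chart or kubectl), keeping the whole stack declared in one place.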
Tackling CORS Challenges
You may encounter cross-origin resource sharing (CORS) errors while testing the connection to your runner within the Datadog application or when executing actions. These errors arise from the browser’s security mechanism that limits web applications from accessing resources across different domains.
In my case, the ALB wasn’t passing the required headers back to the browser in the runner’s response to the preflight request.
To resolve this, configure your ALB’s listener using the ingressClassParams resource in your aws-load-balancer-controller Helm chart values file.
Here’s the configuration that resolved my CORS errors for both the connection test and GitLab actions:
ingressClassParams:
  create: true
  name: alb
  spec:
    scheme: "internet-facing"
    certificateArn: [ ******* ]
    ipAddressType: ipv4
    listeners:
      - port: 443
        protocol: HTTPS
        listenerAttributes:
          - key: routing.http.response.access_control_allow_origin.header_value
            value: "*"
          - key: routing.http.response.access_control_allow_headers.header_value
            value: x-web-ui-version, x-csrf-token, x-datadog-apps-on-prem-runner-access-token, x-datadog-apps-on-prem-runner-access-token-id, content-type
Final Thoughts
Despite the minor CORS hiccup, setting up the Private Action Runner is an intuitive process. Looking ahead, I would like to see improved error messaging in the connection test and private actions to pinpoint potential causes of browser-related failures. The ability to pull secrets from a secret manager or environment variables would also be a welcome addition.
With the Private Action Runner, you can unlock the full potential of Datadog’s App Builder and seamlessly integrate your internal infrastructure.