
[Bug]: Cannot create an alert for a metric collected by python.d/haproxy collector #1117

@cuu508


Bug description

I'm monitoring haproxy using the python.d/haproxy plugin. The metrics show up in Netdata Cloud:

[Screenshot: haproxy metrics displayed in Netdata Cloud]
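For context, the agent-side setup is a plain python.d job. A rough sketch of the kind of job definition I have (the job name and stats URL below are placeholders, not my exact values):

```yaml
# /etc/netdata/python.d/haproxy.conf
# (edited via `sudo ./edit-config python.d/haproxy.conf`)
# Placeholder job -- the job name and the stats endpoint are illustrative.
via_url:
  url: 'http://127.0.0.1:7000/haproxy_stats;csv;norefresh'

# Alternatively, the module can read from the haproxy admin socket:
# via_socket:
#   socket: '/var/run/haproxy/admin.sock'
```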

I'd like to create an alert for this metric, so that I get notified when the number of active connections crosses a defined threshold.
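I'm aware that an alert like this can also be defined directly on the agent in a health.d file, and something along the lines of the sketch below is roughly what I'd expect the UI to produce on my behalf (the chart ID `haproxy_f.scur`, the lookup, and the thresholds are illustrative guesses, not verified against the charts the python.d collector actually creates):

```
# /etc/netdata/health.d/haproxy.conf
# (edited via `sudo ./edit-config health.d/haproxy.conf`)
# Illustrative only: chart ID, lookup and thresholds are guesses.
 template: haproxy_active_connections
       on: haproxy_f.scur
   lookup: average -1m unaligned
    units: connections
    every: 10s
     warn: $this > 100
     crit: $this > 200
     info: number of active haproxy connections
```

But the point of the Cloud UI flow is presumably to avoid hand-editing these files, so I tried the documented route instead.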

The Netdata docs say I should navigate to the graph, click the alert icon, click "Add Alert", set the thresholds, and submit to the nodes. I did that. Afterwards, a failed job configuration appeared in Manage Space > Configurations:

[Screenshot: a failed job configuration listed in Manage Space > Configurations]

If I click on the pencil button, it looks like it's a configuration for the go.d/haproxy collector:

[Screenshot: the configuration editor showing go.d/haproxy collector settings]

I'm thoroughly confused. Where did this job configuration come from? I do not have /etc/netdata/go.d/haproxy configured on the agent nodes. Was this job auto-created by Netdata Cloud and somehow pushed down to the agents? I'm not sure whether Netdata Cloud is confusing the go.d/haproxy and python.d/haproxy modules, or whether the documentation for setting up an alert is wrong.

Expected behavior

I have not set up custom alerts before, so I am not sure what the expected end result would be. I assume the newly defined alert would be listed somewhere. I would not expect a stray new collector configuration to appear.

Steps to reproduce

  1. Set up a Netdata agent with a python.d/haproxy configuration
  2. Connect it to Netdata Cloud
  3. In Netdata Cloud, try to create a custom alert for one of the haproxy metrics

Screenshots

No response

Error Logs

No response

Desktop

OS: Ubuntu 24.04
Browser: Firefox
Browser Version: 144

Additional context

No response
