incident.io Alert Dispatcher GitHub Action


Automatically create incident.io alerts from your GitHub workflows. Get notified about failures, deployment issues, and critical events - all integrated with your incident response process.

⚠️ Beta Release This action is currently in beta (v0.x.x). The API is functional and well-tested, but may change before the v1.0.0 release. We welcome feedback and bug reports!

When to Use This Action

Integrate GitHub events with incident.io for centralized incident management and alerting.

Quick Start

  1. Get your incident.io API key
  2. Create an alert source in your incident.io dashboard and note its ID
  3. Add secrets to your repository: Settings → Secrets and variables → Actions:
    • INCIDENT_IO_API_KEY - Your API key
    • INCIDENT_IO_ALERT_SOURCE_ID - Your alert source ID
  4. Add this workflow to .github/workflows/alert-on-failure.yml:
name: Alert on Failure

on:
  workflow_run:
    workflows: ["CI", "Deploy"] # Monitor these workflows
    types: [completed]

jobs:
  alert:
    if: ${{ github.event.workflow_run.conclusion == 'failure' }}
    runs-on: ubuntu-latest
    steps:
      - uses: incident-io/github-action@v0
        with:
          api-key: ${{ secrets.INCIDENT_IO_API_KEY }}
          alert-source-id: ${{ secrets.INCIDENT_IO_ALERT_SOURCE_ID }}
          alert-title:
            "${{ github.event.workflow_run.name }} failed in ${{ github.repository }}"
          alert-description: |
            Workflow **${{ github.event.workflow_run.name }}** failed on branch `${{ github.event.workflow_run.head_branch }}`.

            - Triggered by: ${{ github.event.workflow_run.actor.login }}
            - Run: #${{ github.event.workflow_run.run_number }}
            - Commit: ${{ github.event.workflow_run.head_sha }}

            [View Run](${{ github.event.workflow_run.html_url }})
          severity: "error"

Failed workflows will now create alerts in incident.io.

Use Cases

This action supports a wide range of incident management scenarios:

Production Deployment Failures

Monitor production deployments and create critical alerts when they fail. Includes automatic resolution when deployments succeed.

Key features: Critical severity, deduplication, automatic resolution, rollback guidance

View sample workflow →
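
Below is a minimal sketch of this pattern (the linked sample workflow is more complete). It assumes your deployment tooling emits deployment_status events and uses only the inputs documented under Configuration Reference:

name: Alert on Deployment Failure

on:
  deployment_status:

jobs:
  alert:
    if: ${{ github.event.deployment_status.state == 'failure' }}
    runs-on: ubuntu-latest
    steps:
      - uses: incident-io/github-action@v0
        with:
          api-key: ${{ secrets.INCIDENT_IO_API_KEY }}
          alert-source-id: ${{ secrets.INCIDENT_IO_ALERT_SOURCE_ID }}
          alert-title: "Deployment to ${{ github.event.deployment.environment }} failed"
          severity: "critical"
          deduplication-key: "deploy-${{ github.event.deployment.environment }}-${{ github.sha }}"
          source-url: "${{ github.event.deployment_status.target_url }}"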

Critical Pull Request Notifications

Alert when PRs are labeled as critical, hotfix, urgent, or security to ensure timely reviews.

Key features: Label-based triggering, automatic resolution on PR close, SLA tracking

View sample workflow →
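
A minimal sketch of the label-based trigger, assuming the label names above (the linked sample also resolves the alert when the PR closes):

name: Alert on Critical PR

on:
  pull_request:
    types: [labeled]

jobs:
  alert:
    if: contains(fromJSON('["critical", "hotfix", "urgent", "security"]'), github.event.label.name)
    runs-on: ubuntu-latest
    steps:
      - uses: incident-io/github-action@v0
        with:
          api-key: ${{ secrets.INCIDENT_IO_API_KEY }}
          alert-source-id: ${{ secrets.INCIDENT_IO_ALERT_SOURCE_ID }}
          alert-title: "PR #${{ github.event.pull_request.number }} labeled '${{ github.event.label.name }}'"
          alert-description: "Review needed: ${{ github.event.pull_request.html_url }}"
          severity: "high"
          deduplication-key: "pr-${{ github.repository }}-${{ github.event.pull_request.number }}"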

CI Pipeline Health Monitoring

Track CI failures across your organization with automatic alerting and resolution.

Key features: Workflow monitoring, deduplication by branch, automatic resolution on success

View sample workflow →

Custom Business Logic

Integrate custom scripts, data quality checks, or scheduled health monitoring with incident.io.

Key features: Scheduled or manual triggers, dynamic content from script outputs, custom metadata

View sample workflow →
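
For illustration only, a scheduled health-check sketch; the script path and custom-field keys are hypothetical placeholders:

name: Nightly Data Quality Check

on:
  schedule:
    - cron: "0 6 * * *" # daily at 06:00 UTC
  workflow_dispatch:

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run data quality checks
        id: checks
        run: ./scripts/check-data-quality.sh # hypothetical script
        continue-on-error: true

      - name: Alert on failed checks
        if: steps.checks.outcome == 'failure'
        uses: incident-io/github-action@v0
        with:
          api-key: ${{ secrets.INCIDENT_IO_API_KEY }}
          alert-source-id: ${{ secrets.INCIDENT_IO_ALERT_SOURCE_ID }}
          alert-title: "Nightly data quality check failed"
          severity: "medium"
          custom-fields: |
            {
              "check_type": "data-quality",
              "triggered_by": "${{ github.event_name }}"
            }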

More examples: See the samples/ directory for complete, tested workflow examples you can copy and customize.

How Event Detection Works

The action automatically detects which type of GitHub event triggered it and handles it accordingly. You don't need to specify a trigger type - the action routes events to the appropriate handler based on the event name.

Event Handlers

Specialized handlers for: workflow_run, deployment_status, pull_request

All other events use the generic handler, suitable for custom logic and manual triggers.
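
For example, a manually triggered workflow needs no extra configuration; the generic handler picks it up (values here are illustrative):

on:
  workflow_dispatch:

jobs:
  alert:
    runs-on: ubuntu-latest
    steps:
      - uses: incident-io/github-action@v0
        with:
          api-key: ${{ secrets.INCIDENT_IO_API_KEY }}
          alert-source-id: ${{ secrets.INCIDENT_IO_ALERT_SOURCE_ID }}
          alert-title: "Manually raised alert from ${{ github.repository }}"
          severity: "info"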

Configuration Reference

Inputs

  • api-key (required) - incident.io API key (get one here)
  • alert-source-id (required) - Alert source configuration ID from incident.io (create an alert source in your incident.io dashboard)
  • alert-title (required) - Alert title (use GitHub expressions for dynamic values)
  • alert-description (optional, default "") - Alert description with details (use GitHub expressions and Markdown)
  • severity (optional, default info) - Severity level using your incident.io severity name (e.g., "critical", "high", "SEV1")
  • custom-fields (optional, default {}) - JSON object with metadata for catalog mapping, routing, and filtering (see Custom Fields)
  • deduplication-key (optional, default "") - Unique key to prevent duplicate alerts (use GitHub expressions for dynamic values)
  • alert-status (optional, default firing) - Alert status: firing or resolved. Use resolved to clear a previously fired alert (see Auto-Resolution)
  • source-url (optional, default "") - URL to associate with the alert (e.g., link to logs, dashboard, or related resource)
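
As a consolidated reference, here is a sketch of a single step exercising every input; the scenario, metadata keys, and URL are illustrative placeholders, not required values:

- uses: incident-io/github-action@v0
  with:
    api-key: ${{ secrets.INCIDENT_IO_API_KEY }}
    alert-source-id: ${{ secrets.INCIDENT_IO_ALERT_SOURCE_ID }}
    alert-title: "Nightly backup job failed in ${{ github.repository }}"
    alert-description: "See the linked run logs for details."
    severity: "high"
    custom-fields: |
      {
        "service": "backups",
        "environment": "production"
      }
    deduplication-key: "backup-${{ github.repository }}-${{ github.ref }}"
    alert-status: "firing"
    source-url: "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"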

Outputs

  • alert-id - Unique ID of the created alert in incident.io
  • alert-url - Direct URL to view the alert in the incident.io dashboard

Reference outputs from later steps via the step id:

- uses: incident-io/github-action@v0
  id: alert
- run: echo ${{ steps.alert.outputs.alert-id }}

Using GitHub Expressions

Use ${{ }} syntax for dynamic values. See GitHub's context documentation for available variables.

Advanced Features

Severity Levels

Use your organization's incident.io severity levels directly in the severity input:

severity: "critical" # or "SEV1", "high", etc.

Finding your severity levels:

  1. Go to incident.io dashboard
  2. Settings → Severities
  3. Use the severity name or slug from your configuration

Common examples: "critical", "high", "medium", "low", or custom values like "SEV1", "SEV2", etc.

Deduplication

Prevent multiple alerts for the same issue:

deduplication-key:
  "${{ github.repository }}-${{ github.workflow }}-${{ github.ref }}"

Common patterns:

  • Per-workflow: "${{ github.workflow }}-${{ github.ref }}"
  • Per-deployment: "deploy-${{ github.event.deployment.environment }}-${{ github.sha }}"

Automatically Resolving Alerts

Use the alert-status input to automatically clear alerts when the problem condition is resolved. This is particularly useful for:

  • Deployment rollbacks - Clear deployment failure alerts after a successful rollback
  • Test suites - Clear test failure alerts when tests pass again
  • CI/CD pipelines - Resolve build failure alerts after successful rebuilds

Example: Deployment Rollback Auto-Resolution

name: Deploy with Rollback

on:
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to production
        id: deploy
        run: ./deploy.sh
        continue-on-error: true

      # Fire alert on deployment failure
      - name: Alert on deployment failure
        if: steps.deploy.outcome == 'failure'
        uses: incident-io/github-action@v0
        id: deploy-alert
        with:
          api-key: ${{ secrets.INCIDENT_IO_API_KEY }}
          alert-source-id: ${{ secrets.INCIDENT_IO_ALERT_SOURCE_ID }}
          alert-title: "Production deployment failed"
          severity: "critical"
          deduplication-key: "deploy-production-${{ github.run_id }}"
          alert-status: "firing"

      # Rollback on failure
      - name: Rollback deployment
        if: steps.deploy.outcome == 'failure'
        id: rollback
        run: ./rollback.sh

      # Resolve alert after successful rollback
      - name: Resolve alert after rollback
        if: steps.deploy.outcome == 'failure' && steps.rollback.outcome == 'success'
        uses: incident-io/github-action@v0
        with:
          api-key: ${{ secrets.INCIDENT_IO_API_KEY }}
          alert-source-id: ${{ secrets.INCIDENT_IO_ALERT_SOURCE_ID }}
          alert-title: "Deployment rolled back successfully"
          alert-description:
            "Production deployment was rolled back to the previous version"
          severity: "info"
          deduplication-key: "deploy-production-${{ github.run_id }}"
          alert-status: "resolved"

Custom Fields (Metadata)

Send structured metadata to incident.io as key-value pairs. This is critical for:

  • Catalog mapping - Link alerts to services, teams, and resources in your incident.io catalog
  • Workflow routing - Route alerts to the right team based on service, environment, or severity
  • Dashboard filtering - Query and filter incidents by metadata fields
  • Runbook automation - Trigger automated responses based on metadata values

Example: Service Catalog Mapping

custom-fields: |
  {
    "service": "payment-api",
    "team": "payments-team",
    "environment": "production",
    "region": "us-east-1",
    "version": "v2.3.1",
    "severity_level": "SEV1"
  }

Example: Multi-Service System

custom-fields: |
  {
    "affected_services": ["api-gateway", "auth-service", "payment-processor"],
    "incident_category": "infrastructure",
    "customer_impact": "high",
    "estimated_affected_users": 15000
  }

Example: Deployment Tracking

custom-fields: |
  {
    "deployment_id": "${{ github.run_id }}",
    "deployed_by": "${{ github.actor }}",
    "deployment_environment": "production",
    "release_version": "${{ github.ref_name }}",
    "rollback_available": true
  }

These fields appear in incident.io and enable:

  • Automatic routing to owning teams via catalog lookups
  • Service dependency mapping for impact analysis
  • Custom dashboards and reporting
  • Integration with external tools (PagerDuty, Slack, etc.)

Troubleshooting

Alerts not appearing in incident.io

Check:

  1. The API key is correct and has permission to create alerts
  2. The action output logs for error messages

Common causes:

  • API key stored in wrong secret name
  • Network issues (check GitHub Actions logs)

Too many duplicate alerts

Solution: Add deduplication:

deduplication-key:
  "${{ github.repository }}-${{ github.workflow }}-${{ github.ref }}"

Rate limiting

incident.io has rate limits. The action automatically retries with exponential backoff. If you consistently hit limits:

  1. Increase deduplication (reduce unique alerts)
  2. Contact incident.io support to increase limits

Best Practices

1. Use Severity Appropriately

Use your organization's incident.io severity levels. For example:

  • critical - Production down, data loss
  • high - Feature broken, deployment failed
  • medium - Flaky tests, degraded performance
  • low - Successful deployments, informational events

Or use your org's custom levels like SEV1, SEV2, etc.

2. Implement Deduplication

Always use deduplication keys to prevent alert storms:

deduplication-key:
  "${{ github.repository }}-${{ github.workflow }}-${{ github.ref }}"

3. Provide Context

Include actionable information in descriptions:

alert-description: |
  **What happened:** Deployment to production failed
  **Impact:** API is unavailable
  **Next steps:** Check deployment logs at ${{ github.event.deployment_status.target_url }}
  **Runbook:** https://wiki.company.com/runbooks/deployment-failure

Versioning

This action follows Semantic Versioning. We use Git tags to mark releases, with three types of version references available:

Version Tags

  • Major version (e.g., @v0): Automatically tracks the latest stable release within the major version
  • Minor version (e.g., @v0.1): Tracks the latest patch release within the minor version
  • Specific version (e.g., @v0.1.0): Pins to an exact release version

Recommended Usage

For most users (recommended):

uses: incident-io/github-action@v0

This tracks the latest v0.x.x release. You'll automatically get new features and bugfixes, but no breaking changes.

For conservative users:

uses: incident-io/github-action@v0.1

This tracks only patch releases (v0.1.x). You'll get bugfixes but not new features.

For maximum stability:

uses: incident-io/github-action@v0.1.0

This pins to the exact version. You won't get any updates unless you manually change the version.

Beta Status

Currently, this action is in beta (v0.x.x). During the beta period:

  • Breaking changes may occur in minor version bumps (v0.1.0 → v0.2.0)
  • New features are added in minor version bumps
  • Bugfixes are released as patch versions (v0.1.0 → v0.1.1)

Once we reach v1.0.0, we'll follow strict semantic versioning where breaking changes only occur in major version bumps.

Release Notes

See CHANGELOG.md for detailed release notes and Releases for all published versions.

Development

Want to contribute? See CONTRIBUTING.md for setup instructions and guidelines.

For architecture details, see AGENTS.md.

Quick Commands

npm install         # Install dependencies
npm test            # Run tests (109 tests)
npm run lint        # Check code quality
npm run bundle      # Build for distribution
npm run all         # Run full validation pipeline

Support

License

MIT
