
Refactor SLO workload to use environment variables #802

Open
polRk wants to merge 2 commits into main from slo-v2
Conversation


@polRk polRk commented Apr 2, 2026

  • Migrate from CLI subcommands (table-run, topic-run) to environment-based configuration (WORKLOAD_NAME, YDB_ENDPOINT, etc.)
  • Simplify argument parsing: remove subcommand structure, add env var injection
  • Update metrics collection to always be enabled (remove otlp_endpoint checks)
  • Replace quantile-estimator with hdrhistogram
  • Update workflow matrix from include to sdk naming
  • Upgrade ydb-slo-action from commit hash to v2 tag
  • Add pyrightconfig.json for type checking
  • Update Dockerfile comments to reflect new env-var-based interface
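The env-var-driven configuration described in the bullets above could be resolved roughly like this (a minimal sketch — the helper name and fallback defaults are illustrative, not the PR's actual code):

```python
import os

def load_env_config():
    """Resolve workload settings from environment variables (illustrative defaults)."""
    return {
        "workload_name": os.environ.get("WORKLOAD_NAME", "table"),
        "endpoint": os.environ.get("YDB_ENDPOINT", "grpc://localhost:2136"),
        "database": os.environ.get("YDB_DATABASE", "/local"),
    }
```

A runner entrypoint would then pick the workload from `load_env_config()["workload_name"]` instead of a CLI subcommand.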

Pull request type

Please check the type of change your PR introduces:

  • Bugfix
  • Feature
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no api changes)
  • Build related changes
  • Documentation content changes
  • Other (please describe):

What is the current behavior?

Issue Number: N/A

What is the new behavior?

Other information

Copilot AI review requested due to automatic review settings April 2, 2026 10:59
@polRk polRk added the SLO label Apr 2, 2026

Copilot AI left a comment


Pull request overview

This PR refactors the Python SLO workload runner under tests/slo/ to use environment-variable-driven configuration (workload selection, YDB connection, duration, OTLP exporter settings) instead of CLI subcommands, and updates metrics and CI workflows accordingly.

Changes:

  • Replace subcommand-based orchestration with a single “run all” flow selected via WORKLOAD_NAME / env vars.
  • Rework metrics initialization to use standard OpenTelemetry env vars and switch latency quantiles to HDR histogram-based calculation.
  • Update SLO GitHub workflows and Docker image interface to match the new env-var-based runner.

Reviewed changes

Copilot reviewed 15 out of 15 changed files in this pull request and generated 4 comments.

| File | Description |
| --- | --- |
| tests/slo/src/runners/topic_runner.py | Update metrics factory call to new signature. |
| tests/slo/src/runners/table_runner.py | Update metrics factory call to new signature. |
| tests/slo/src/root_runner.py | Replace subcommand runner with run_all() flow selected via WORKLOAD_NAME. |
| tests/slo/src/pyrightconfig.json | Add Pyright configuration for the SLO runner code. |
| tests/slo/src/options.py | Simplify CLI flags; inject connection/duration from environment variables. |
| tests/slo/src/jobs/table_jobs.py | Switch to WORKLOAD_NAME constant. |
| tests/slo/src/jobs/base.py | Always start metrics sender job (no CLI OTLP gating). |
| tests/slo/src/jobs/async_topic_jobs.py | Always start async metrics sender task (no CLI OTLP gating). |
| tests/slo/src/core/metrics.py | Move to standard OTel env vars and implement HDR-based latency quantiles. |
| tests/slo/src/main.py | Switch entrypoint to call run_all(). |
| tests/slo/requirements.txt | Replace quantile-estimator with hdrhistogram. |
| tests/slo/Dockerfile | Update documentation/comments and provide a default CMD for tuning flags. |
| pyproject.toml | Configure tool.ty extra paths for SLO sources. |
| .github/workflows/slo.yml | Refactor workload matrix and switch to ydb-slo-action v2 init/report. |
| .github/workflows/slo-report.yml | Switch report action reference to v2 and remove label-removal job. |
Comments suppressed due to low confidence (1)

.github/workflows/slo.yml:53

  • concurrency.group references matrix.workload, but the matrix is now defined under matrix.sdk (with name/command). This will make the workflow fail to evaluate at runtime. Update the concurrency group expression to use the new matrix key (e.g., matrix.sdk.name).

```yaml
concurrency:
  group: slo-${{ github.ref }}-${{ matrix.workload }}
  cancel-in-progress: true
```
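The fix this comment suggests would look roughly like the following (assuming the new matrix entry is `matrix.sdk` with a `name` field, as the review note states):

```yaml
concurrency:
  group: slo-${{ github.ref }}-${{ matrix.sdk.name }}
  cancel-in-progress: true
```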


Comment thread tests/slo/src/core/metrics.py (Outdated)

```python
# Metrics job calls push() with the cadence of --report-period.
# force_flush() makes the exporter send immediately.
with self._hdr_lock:
    snapshot = {k: v for k, v in self._hdr.items()}
```

Copilot AI Apr 2, 2026


In push(), the code copies the _hdr dict under the lock, but then reads percentiles from the same HdrHistogram instances after releasing the lock. Since stop() records into these histograms under the same lock, this creates a data race (HdrHistogram isn't thread-safe) and can lead to incorrect values or runtime errors. Keep the lock held while computing percentiles, or clone/snapshot each histogram under the lock before reading it.

Suggested change:

```diff
-snapshot = {k: v for k, v in self._hdr.items()}
+# Clone histograms under the lock to avoid concurrent mutation while reading.
+snapshot = {k: v.copy() for k, v in self._hdr.items()}
```

Comment on lines 1 to 10

```python
import asyncio
import ydb
import ydb.aio
import logging
from typing import Dict

from runners.topic_runner import TopicRunner
from core.metrics import WORKLOAD_NAME
from runners.table_runner import TableRunner
from runners.base import BaseRunner

logger = logging.getLogger(__name__)


class SLORunner:
    def __init__(self):
        self.runners: Dict[str, type(BaseRunner)] = {}

    def register_runner(self, prefix: str, runner_cls: type(BaseRunner)):
        self.runners[prefix] = runner_cls

    def run_command(self, args):
        subcommand_parts = args.subcommand.split("-", 1)
        if len(subcommand_parts) < 2:
            raise ValueError(f"Invalid subcommand format: {args.subcommand}. Expected 'prefix-command'")
```
Copilot AI Apr 2, 2026


asyncio and ydb.aio are imported but not used in this module. Flake8 will flag these as unused imports (F401). Remove them, or use the async driver path if it’s still intended here.

Comment thread tests/slo/requirements.txt Outdated
Comment thread tests/slo/src/jobs/async_topic_jobs.py

@vgvoleg vgvoleg left a comment


In this PR you've deleted async jobs, which is out of scope; you've reorganized arguments, which is also out of scope; you've removed the ability to run jobs without the monitoring job; and you've added configs for tools we are not using.
Please keep the PR as simple as needed to solve your task.

@polRk polRk requested a review from vgvoleg April 13, 2026 10:04
@vgvoleg vgvoleg added the SLO label Apr 22, 2026
@github-actions

🌋 SLO Test Results

🔴 2 workload(s) tested — 1 workload(s) exceeded failure thresholds

Commit: 360bfde · View run

| Workload | Thresholds | Duration | Report |
| --- | --- | --- | --- |
| sync-query | 🟢 OK | 10m 9s | 📄 Report |
| sync-table | 🔴 Failure | 10m 1s | 📄 Report |

Threshold violations:

sync-table:

  • read_retry_attempts: Regression 50.0% >= critical 50%
  • write_retry_attempts: Regression 33.3% >= warning 20%

Generated by ydb-slo-action

Comment thread tests/slo/src/pyrightconfig.json Outdated
```
@@ -0,0 +1,4 @@
{
```
Local tool configs should not be pushed to the main repo.

Comment thread pyproject.toml Outdated
```
@@ -1,3 +1,9 @@
[tool.ty.environment]
```
Local tool configs should not be pushed to the main repo.

Comment thread AGENTS.md Outdated
---

```diff
-## Topic Chaos Testing (SLO)
+## SLO Testing
```
Since we don't have a CI job for topics, and they still have lots of issues during disconnects, this article was created for local topic development only. There's no need to make this doc cover table workloads as well, since that may lead to unnecessarily increased token consumption. Please revert it to "Topic Chaos Testing" and update only the code snippets.

Comment thread tests/slo/src/core/metrics.py
```python
if not getattr(self.args, "otlp_endpoint", None):
    return []

def _run_metric_job(self) -> list[Any]:
```

The old _run_metric_job had an early return: `if not getattr(self.args, "otlp_endpoint", None): return []`. This guard was removed in both the sync (base.py) and async (async_topic_jobs.py) versions — a background thread/task is now always started even when metrics are disabled.
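A sketch of the removed guard, restored (the function signature and the `push_fn` callable are illustrative, not the repo's actual code):

```python
import threading

def run_metric_job(args, push_fn):
    # Early return: when no OTLP endpoint is configured, metrics are
    # disabled and no background sender thread should be started.
    if not getattr(args, "otlp_endpoint", None):
        return []

    thread = threading.Thread(target=push_fn, daemon=True)
    thread.start()
    return [thread]
```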

Comment thread tests/slo/src/jobs/async_topic_jobs.py
Comment thread tests/slo/src/root_runner.py (Outdated)

```python
try:
    logger.info("[%s][async] Creating resources", workload_name)
    runner.create(args)
```

_run_all_async calls runner.create() and runner.cleanup() synchronously, but with a ydb.aio.Driver whose topic_client has async def create_topic/drop_topic/etc. The coroutines are never awaited — topic is never actually created, so the workload fails immediately.
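The fix implied by this comment is to await the driver's coroutines; a minimal sketch with illustrative runner/method names:

```python
import asyncio

async def run_all_async(runner, args):
    # With ydb.aio, create()/run()/cleanup() return coroutines and must be
    # awaited; calling them synchronously would leave the topic uncreated.
    await runner.create(args)
    try:
        await runner.run(args)
    finally:
        await runner.cleanup(args)
```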

Comment thread tests/slo/src/options.py (Outdated)

```python
args = parser.parse_args()

# Aliases used by topic runner
```

args.path, args.consumer, args.partitions_count are aliased from args.topic_path, args.topic_consumer, args.topic_partitions at the bottom of parse_options() because the runner/job code was never updated to use the new names. Two names for the same field — the aliases should be removed and the runner code updated instead.

Comment thread tests/slo/src/options.py

```python
def make_table_create_parser(subparsers):
    table_create_parser = subparsers.add_parser("table-create", help="Create tables and fill with initial content")
    add_common_options(table_create_parser)

def parse_options():
```
The old subcommand structure meant each subcommand only accepted its own relevant arguments — passing a table arg to a topic run was a parser error. Now all arguments are always accepted regardless of --workload-name, so e.g. --workload-name topic --min-partitions-count 42 silently ignores the table parameter. Mistakes won't be caught at parse time.
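One way to restore parse-time checking with a flat parser is to validate cross-workload flags after parsing (the flag names mirror the ones mentioned in the comment; the validation itself is an illustrative sketch, not the PR's code):

```python
import argparse

def parse_options(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--workload-name", choices=["table", "topic"], required=True)
    parser.add_argument("--min-partitions-count", type=int)
    args = parser.parse_args(argv)
    # Reject table-only flags for the topic workload instead of silently ignoring them.
    if args.workload_name == "topic" and args.min_partitions_count is not None:
        parser.error("--min-partitions-count applies only to the table workload")
    return args
```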

Comment thread tests/slo/slo_runner.sh

Copilot AI left a comment


Pull request overview

Copilot reviewed 19 out of 19 changed files in this pull request and generated 7 comments.



Comment on lines +270 to +286

```python
def create_metrics(args) -> BaseMetrics:
    """
    Factory used by SLO runners.

    Uses args.otlp_endpoint, args.workload_name, args.workload_ref from parsed CLI arguments.
    If the endpoint is empty, returns a no-op DummyMetrics.
    """
    endpoint = (args.otlp_endpoint or "").strip()

    if not endpoint:
        logger.info("OTLP endpoint not configured — metrics disabled")
        return DummyMetrics()

    environ.setdefault("OTEL_EXPORTER_OTLP_PROTOCOL", "http/protobuf")
    environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = endpoint

    logger.info("Creating OTLP metrics exporter (endpoint: %s)", endpoint)
```
Comment on lines +61 to +63

```python
driver.stop(timeout=args.shutdown_time)
```
Comment thread tests/slo/src/options.py
Comment on lines +61 to 72

```python
parser.add_argument(
    "endpoint",
    nargs="?",
    default=os.environ.get("YDB_ENDPOINT", "grpc://localhost:2136"),
    help="YDB endpoint (default: $YDB_ENDPOINT)",
)
parser.add_argument(
    "db",
    nargs="?",
    default=os.environ.get("YDB_DATABASE", "/local"),
    help="YDB database (default: $YDB_DATABASE)",
)
```
Comment thread tests/slo/src/options.py

```python
default="http://localhost:9090/api/v1/otlp/v1/metrics",
type=str,
help="Full Prometheus OTLP metrics endpoint (e.g. http://ydb-prometheus:9090/api/v1/otlp/v1/metrics). Empty to disable.",
```

Every flag supports a fallback chain: CLI arg > environment variable > hardcoded default.
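That fallback chain can be sketched as follows (the helper name and env var are illustrative):

```python
import os

def resolve(cli_value, env_name, default):
    """Resolution order: CLI arg > environment variable > hardcoded default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_name, default)
```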
Comment thread .github/workflows/slo.yml
```yaml
- if: always()
  name: Upload logs
  uses: actions/upload-artifact@v4
```

```yaml
workload_baseline_image: ydb-app-current
```
Comment thread tests/slo/Dockerfile
Comment on lines +19 to +35

```dockerfile
FROM python:3.11-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

RUN apt-get update && apt-get install -y --no-install-recommends gcc libc6-dev && rm -rf /var/lib/apt/lists/*

WORKDIR /src

# 1. YDB SDK
COPY setup.py pyproject.toml README.md requirements.txt ./
COPY ydb/ ydb/
RUN pip install --no-cache-dir .

# 2. SLO deps
COPY tests/slo/requirements.txt tests/slo/requirements.txt
RUN pip install --no-cache-dir -r tests/slo/requirements.txt
```
Comment on lines +170 to +185

```python
# Latency gauges (fed from HDR histograms via push())
self._latency_p50 = self._meter.create_gauge(
    name="sdk_operation_latency_p50_seconds",
    unit="s",
    description="P50 operation latency in seconds.",
)
self._latency_p95 = self._meter.create_gauge(
    name="sdk_operation_latency_p95_seconds",
    unit="s",
    description="P95 operation latency in seconds.",
)
self._latency_p99 = self._meter.create_gauge(
    name="sdk_operation_latency_p99_seconds",
    unit="s",
    description="P99 operation latency in seconds.",
)
```