test: add e2e tests for TLS security profile watcher #1055
IshwarKanse wants to merge 2 commits into rhobs:main from
Conversation
Warning: Rate limit exceeded
Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 5 minutes and 35 seconds. After the wait time has elapsed, a review can be triggered again; spacing out commits helps avoid hitting the rate limit. CodeRabbit enforces hourly rate limits for each developer per organization; paid plans have higher rate limits than the trial, open-source, and free plans, and further reviews are re-allowed after a brief timeout. Please see the FAQ for further information.
ℹ️ Review info: Configuration used: Repository YAML (base), Organization UI (inherited); Review profile: CHILL; Plan: Pro Plus
📝 Walkthrough
A new end-to-end test file was added to validate the observability operator's TLS profile watcher on OpenShift. The test skips non-OpenShift clusters, captures and clears the APIServer …
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
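The walkthrough notes that the test skips non-OpenShift clusters. A minimal sketch of one common way to do that in a Go e2e test, probing for the config.openshift.io API group; the helper name and the detection approach are assumptions, not necessarily what the PR's test does:

```go
package e2e

import (
	"testing"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// skipIfNotOpenShift is a hypothetical helper: it skips the test when the
// cluster does not expose the config.openshift.io API group.
func skipIfNotOpenShift(t *testing.T, cfg *rest.Config) {
	t.Helper()
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		t.Fatalf("building discovery client: %v", err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		t.Fatalf("listing API groups: %v", err)
	}
	for _, g := range groups.Groups {
		if g.Name == "config.openshift.io" {
			return // OpenShift detected; run the test
		}
	}
	t.Skip("skipping: config.openshift.io API not available (not an OpenShift cluster)")
}
```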
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: IshwarKanse
The full list of commands accepted by this bot can be found here.
Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Hi @IshwarKanse. Thanks for your PR. I'm waiting for a rhobs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Regular contributors should join the org to skip this step. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here.
Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed from db6befd to 3d0a9e2 (Compare)
Add end-to-end tests that verify the operator correctly watches the APIServer CR for TLS security profile changes and restarts accordingly.

Test scenarios:
- Operator is running and healthy with the default (nil/Intermediate) TLS profile
- Operator restarts when the TLS profile changes to Old
- Operator restarts when the TLS profile changes to a Custom profile
- Operator does NOT restart when a non-TLS field (annotation) is modified

The test file is prefixed with zz_ to ensure it runs last in the e2e suite, since modifying the APIServer TLS profile triggers a MachineConfigPool rollout which can be disruptive to other tests.

Assisted by Claude Code.
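For context, the "changes to Old" scenario boils down to updating the cluster-scoped APIServer resource. A minimal sketch using the openshift/client-go config clientset; the function name and surrounding setup are assumptions rather than the PR's actual test code:

```go
package e2e

import (
	"context"
	"testing"

	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// setTLSProfileOld is a hypothetical helper: it switches the cluster-wide
// APIServer TLS security profile to Old, which the operator's TLS profile
// watcher should react to by restarting the operator.
func setTLSProfileOld(ctx context.Context, t *testing.T, cfg *rest.Config) {
	t.Helper()
	client, err := configclient.NewForConfig(cfg)
	if err != nil {
		t.Fatalf("building config client: %v", err)
	}
	apiServer, err := client.ConfigV1().APIServers().Get(ctx, "cluster", metav1.GetOptions{})
	if err != nil {
		t.Fatalf("getting APIServer/cluster: %v", err)
	}
	apiServer.Spec.TLSSecurityProfile = &configv1.TLSSecurityProfile{
		Type: configv1.TLSProfileOldType,
		Old:  &configv1.OldTLSProfile{},
	}
	if _, err := client.ConfigV1().APIServers().Update(ctx, apiServer, metav1.UpdateOptions{}); err != nil {
		t.Fatalf("updating APIServer/cluster: %v", err)
	}
}
```

A real test would also capture the original profile before the change and restore it during cleanup.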
Force-pushed from 3d0a9e2 to 8bcce30 (Compare)
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (1)
test/e2e/zz_tls_profile_test.go (1)
397-405: Consider filtering by container name for consistency. Similar to the previous restart-detection loop, this checks uptime across all containers. For consistency with `getOperatorContainerRestartCount`, consider filtering to `operatorContainerName`.

♻️ Suggested improvement

```diff
 for _, cs := range pod.Status.ContainerStatuses {
+	if cs.Name != operatorContainerName {
+		continue
+	}
 	if cs.State.Running != nil {
 		uptime := time.Since(cs.State.Running.StartedAt.Time)
 		if uptime < 15*time.Second {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/zz_tls_profile_test.go` around lines 397 - 405, The uptime check iterates all pod.Status.ContainerStatuses and may include non-operator containers; update the loop to filter by the operatorContainerName (same selector used in getOperatorContainerRestartCount) so you only inspect ContainerStatus entries where cs.Name == operatorContainerName before checking cs.State.Running and StartedAt; this ensures consistency with the restart-detection logic and avoids false waits due to other containers' short uptimes.
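Filled out, the suggested uptime check might look roughly like the sketch below; the pod variable, operatorContainerName, and the helper wrapper are assumptions based on the review comment, not the file's actual code:

```go
package e2e

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// operatorStableForAtLeast15s is a hypothetical helper illustrating the
// suggested filter: only the operator container's uptime is inspected.
func operatorStableForAtLeast15s(pod *corev1.Pod, operatorContainerName string) bool {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name != operatorContainerName {
			continue // ignore sidecars and other containers in the pod
		}
		if cs.State.Running != nil && time.Since(cs.State.Running.StartedAt.Time) < 15*time.Second {
			return false // operator container started too recently; not stable yet
		}
	}
	return true
}
```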
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/e2e/zz_tls_profile_test.go`:
- Around line 361-366: The restart-detection loop iterates all
p.Status.ContainerStatuses and can pick up non-operator containers; update the
loop in the test (the block iterating p.Status.ContainerStatuses) to only
consider the operator by checking cs.Name (or the container name field used)
equals operatorContainerName before comparing RestartCount to baselineRestarts
so only the operator container restarts trigger the true return; you may also
explicitly skip InitContainerStatuses if relevant.
---
Nitpick comments:
In `@test/e2e/zz_tls_profile_test.go`:
- Around line 397-405: The uptime check iterates all
pod.Status.ContainerStatuses and may include non-operator containers; update the
loop to filter by the operatorContainerName (same selector used in
getOperatorContainerRestartCount) so you only inspect ContainerStatus entries
where cs.Name == operatorContainerName before checking cs.State.Running and
StartedAt; this ensures consistency with the restart-detection logic and avoids
false waits due to other containers' short uptimes.
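For the inline comment above (the restart-detection loop around lines 361-366), the filtered check could look roughly like this; names such as p, baselineRestarts, and operatorContainerName come from the comment and are otherwise assumptions:

```go
package e2e

import corev1 "k8s.io/api/core/v1"

// operatorRestarted is a hypothetical helper: it reports a restart only when
// the operator container's restart count exceeds the recorded baseline, so
// restarts of other containers in the pod do not trigger a false positive.
func operatorRestarted(p *corev1.Pod, operatorContainerName string, baselineRestarts int32) bool {
	for _, cs := range p.Status.ContainerStatuses {
		if cs.Name != operatorContainerName {
			continue // only the operator container counts
		}
		if cs.RestartCount > baselineRestarts {
			return true
		}
	}
	return false
}
```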
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: 159c305b-31a0-48e3-8161-9b1252ed052c
📒 Files selected for processing (1)
test/e2e/zz_tls_profile_test.go
Force-pushed from d87050d to cbd97cd (Compare)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/e2e/zz_tls_profile_test.go`:
- Around line 190-207: The poll silently passes when pods are replaced or when
lookup errors are swallowed; change the stability check to (1) capture the
operator pod UID alongside the restart count before the wait (introduce or
modify operatorContainerRestartCount to return (int, string, error) or add a
getOperatorPodUID helper and store initialPodUID), and (2) in the
wait.PollUntilContextTimeout lambda return an error when
operatorContainerRestartCount/getOperatorPodUID fails (do not swallow persistent
errors) and treat podUID != initialPodUID OR currentRestarts > initialRestarts
as a restart (return true, fmt.Errorf(...)); keep using
wait.Interrupted/assert.NilError as final assertions.
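A minimal sketch of the stability check the comment suggests; the restartCount callback stands in for the modified operatorContainerRestartCount/getOperatorPodUID helpers mentioned above, and the polling intervals are assumptions:

```go
package e2e

import (
	"context"
	"fmt"
	"testing"
	"time"

	"gotest.tools/v3/assert"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
)

// assertOperatorStaysStable fails if the operator pod is replaced (UID change)
// or its container restart count grows during the observation window.
func assertOperatorStaysStable(ctx context.Context, t *testing.T,
	restartCount func(context.Context) (int32, types.UID, error)) {
	t.Helper()

	initialRestarts, initialPodUID, err := restartCount(ctx)
	assert.NilError(t, err)

	err = wait.PollUntilContextTimeout(ctx, 5*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			currentRestarts, podUID, err := restartCount(ctx)
			if err != nil {
				return false, err // surface lookup errors instead of swallowing them
			}
			if podUID != initialPodUID || currentRestarts > initialRestarts {
				return true, fmt.Errorf("operator restarted: pod UID or restart count changed")
			}
			return false, nil // still stable; keep polling until the timeout
		})

	// Timing out (wait.Interrupted) is the success case: no restart was observed.
	assert.Assert(t, wait.Interrupted(err), "expected operator to remain stable, got: %v", err)
}
```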
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: 51e16b9d-ffa7-41d0-a59f-0bd2014906b8
📒 Files selected for processing (1)
test/e2e/zz_tls_profile_test.go
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from cbd97cd to f1a72c6 (Compare)
Summary
- End-to-end tests verify that the operator restarts when the APIServer CR's tlsSecurityProfile changes and remains stable when non-TLS fields change
- The test file is prefixed with zz_ so it runs last in the suite, since modifying the APIServer TLS profile triggers a MachineConfigPool rollout

Test scenarios
- The default (nil/Intermediate) TLS profile leaves the operator running and healthy
- Changing the profile to Old triggers an operator container restart; the operator recovers to ready
- A Custom profile with specific ciphers triggers a restart; the operator recovers
- Modifying a non-TLS field (an annotation) does not trigger a restart

Design decisions
- waitForStableRestartCount: waits for the restart count to remain unchanged for 30s AND the container to have been running for 15s+ before considering the operator stable; this prevents race conditions between test cleanup restarts and subsequent tests (see the sketch after this description)
- An annotation is used for the non-TLS change: spec.audit.profile triggers disruptive MachineConfigPool rollouts, while annotations trigger the watcher reconcile without cluster impact

Test plan
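A minimal sketch of the waitForStableRestartCount behaviour described under Design decisions; the helper signature, the status callback, and the intervals other than the 30s/15s windows are assumptions rather than the PR's exact code:

```go
package e2e

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForStableRestartCount (sketch): the operator is considered stable once
// its restart count has not changed for 30s and the container has been
// running for at least 15s.
func waitForStableRestartCount(ctx context.Context,
	operatorStatus func(context.Context) (corev1.ContainerStatus, error)) error {

	var lastCount int32 = -1
	stableSince := time.Now()

	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			cs, err := operatorStatus(ctx)
			if err != nil {
				return false, nil // pod may be restarting; keep polling
			}
			if cs.RestartCount != lastCount {
				lastCount = cs.RestartCount
				stableSince = time.Now() // count changed; restart the 30s window
				return false, nil
			}
			if time.Since(stableSince) < 30*time.Second {
				return false, nil // unchanged, but not yet for a full 30s
			}
			if cs.State.Running == nil || time.Since(cs.State.Running.StartedAt.Time) < 15*time.Second {
				return false, nil // container not running long enough to be trusted
			}
			return true, nil // operator considered stable
		})
}
```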