Fix data race in scheduling post start hook by enj · Pull Request #132692 · kubernetes/kubernetes

Fix data race in scheduling post start hook #132692

Status: Open. Wants to merge 1 commit into base: master.
Conversation

@enj (Member) commented Jul 3, 2025

/kind bug

Release note: NONE
==================
WARNING: DATA RACE
Read at 0x00010a07f370 by goroutine 70716:
  k8s.io/apimachinery/pkg/apis/meta/v1.(*TypeMeta).GroupVersionKind()
      k8s.io/apimachinery/pkg/apis/meta/v1/meta.go:126 +0x40
  k8s.io/apimachinery/pkg/runtime.WithVersionEncoder.Encode()
      k8s.io/apimachinery/pkg/runtime/helper.go:231 +0x13c
  k8s.io/apimachinery/pkg/runtime.(*WithVersionEncoder).Encode()
      <autogenerated>:1 +0x94
  k8s.io/apimachinery/pkg/runtime.Encode()
      k8s.io/apimachinery/pkg/runtime/codec.go:49 +0x90
  k8s.io/client-go/rest.(*Request).Body()
      k8s.io/client-go/rest/request.go:530 +0x584
  k8s.io/client-go/gentype.(*Client[go.shape.*uint8]).Create()
      k8s.io/client-go/gentype/type.go:212 +0x244
  k8s.io/client-go/kubernetes/typed/scheduling/v1.(*priorityClasses).Create()
      <autogenerated>:1 +0xf8
  k8s.io/kubernetes/pkg/registry/scheduling/rest.(*RESTStorageProvider).PostStartHook.RESTStorageProvider.PostStartHook.AddSystemPriorityClasses.func1.1()
      k8s.io/kubernetes/pkg/registry/scheduling/rest/storage_scheduling.go:93 +0x218
  k8s.io/apimachinery/pkg/util/wait.Poll.ConditionFunc.WithContext.func1()
      k8s.io/apimachinery/pkg/util/wait/wait.go:113 +0x30
  k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext()
      k8s.io/apimachinery/pkg/util/wait/wait.go:159 +0x94
  k8s.io/apimachinery/pkg/util/wait.waitForWithContext()
      k8s.io/apimachinery/pkg/util/wait/wait.go:212 +0x124
  k8s.io/apimachinery/pkg/util/wait.poll()
      k8s.io/apimachinery/pkg/util/wait/poll.go:260 +0xc0
  k8s.io/apimachinery/pkg/util/wait.PollWithContext()
      k8s.io/apimachinery/pkg/util/wait/poll.go:85 +0x84
  k8s.io/apimachinery/pkg/util/wait.Poll()
      k8s.io/apimachinery/pkg/util/wait/poll.go:66 +0x70
  k8s.io/kubernetes/pkg/registry/scheduling/rest.(*RESTStorageProvider).PostStartHook.RESTStorageProvider.PostStartHook.AddSystemPriorityClasses.func1()
      k8s.io/kubernetes/pkg/registry/scheduling/rest/storage_scheduling.go:82 +0x94
  k8s.io/apiserver/pkg/server.runPostStartHook.func1()
      k8s.io/apiserver/pkg/server/hooks.go:200 +0x7c
  k8s.io/apiserver/pkg/server.runPostStartHook()
      k8s.io/apiserver/pkg/server/hooks.go:201 +0x84
  k8s.io/apiserver/pkg/server.(*GenericAPIServer).RunPostStartHooks.gowrap2()
      k8s.io/apiserver/pkg/server/hooks.go:167 +0x98

Previous write at 0x00010a07f370 by goroutine 70757:
  k8s.io/apimachinery/pkg/apis/meta/v1.(*TypeMeta).SetGroupVersionKind()
      k8s.io/apimachinery/pkg/apis/meta/v1/meta.go:121 +0x108
  k8s.io/apimachinery/pkg/runtime.WithVersionEncoder.Encode()
      k8s.io/apimachinery/pkg/runtime/helper.go:242 +0x260
  k8s.io/apimachinery/pkg/runtime.(*WithVersionEncoder).Encode()
      <autogenerated>:1 +0x94
  k8s.io/apimachinery/pkg/runtime.Encode()
      k8s.io/apimachinery/pkg/runtime/codec.go:49 +0x90
  k8s.io/client-go/rest.(*Request).Body()
      k8s.io/client-go/rest/request.go:530 +0x584
  k8s.io/client-go/gentype.(*Client[go.shape.*uint8]).Create()
      k8s.io/client-go/gentype/type.go:212 +0x244
  k8s.io/client-go/kubernetes/typed/scheduling/v1.(*priorityClasses).Create()
      <autogenerated>:1 +0xf8
  k8s.io/kubernetes/pkg/registry/scheduling/rest.(*RESTStorageProvider).PostStartHook.RESTStorageProvider.PostStartHook.AddSystemPriorityClasses.func1.1()
      k8s.io/kubernetes/pkg/registry/scheduling/rest/storage_scheduling.go:93 +0x218
  k8s.io/apimachinery/pkg/util/wait.Poll.ConditionFunc.WithContext.func1()
      k8s.io/apimachinery/pkg/util/wait/wait.go:113 +0x30
  k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext()
      k8s.io/apimachinery/pkg/util/wait/wait.go:159 +0x94
  k8s.io/apimachinery/pkg/util/wait.waitForWithContext()
      k8s.io/apimachinery/pkg/util/wait/wait.go:212 +0x124
  k8s.io/apimachinery/pkg/util/wait.poll()
      k8s.io/apimachinery/pkg/util/wait/poll.go:260 +0xc0
  k8s.io/apimachinery/pkg/util/wait.PollWithContext()
      k8s.io/apimachinery/pkg/util/wait/poll.go:85 +0x84
  k8s.io/apimachinery/pkg/util/wait.Poll()
      k8s.io/apimachinery/pkg/util/wait/poll.go:66 +0x70
  k8s.io/kubernetes/pkg/registry/scheduling/rest.(*RESTStorageProvider).PostStartHook.RESTStorageProvider.PostStartHook.AddSystemPriorityClasses.func1()
      k8s.io/kubernetes/pkg/registry/scheduling/rest/storage_scheduling.go:82 +0x94
  k8s.io/apiserver/pkg/server.runPostStartHook.func1()
      k8s.io/apiserver/pkg/server/hooks.go:200 +0x7c
  k8s.io/apiserver/pkg/server.runPostStartHook()
      k8s.io/apiserver/pkg/server/hooks.go:201 +0x84
  k8s.io/apiserver/pkg/server.(*GenericAPIServer).RunPostStartHooks.gowrap2()
      k8s.io/apiserver/pkg/server/hooks.go:167 +0x98

Goroutine 70716 (running) created at:
  k8s.io/apiserver/pkg/server.(*GenericAPIServer).RunPostStartHooks()
      k8s.io/apiserver/pkg/server/hooks.go:167 +0x10c
  k8s.io/apiserver/pkg/server.preparedGenericAPIServer.NonBlockingRunWithContext()
      k8s.io/apiserver/pkg/server/genericapiserver.go:764 +0x198
  k8s.io/apiserver/pkg/server.preparedGenericAPIServer.RunWithContext()
      k8s.io/apiserver/pkg/server/genericapiserver.go:602 +0x718
  k8s.io/kube-aggregator/pkg/apiserver.preparedAPIAggregator.Run()
      k8s.io/kube-aggregator/pkg/apiserver/apiserver.go:504 +0x130
  k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServer.func3()
      k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:442 +0xb0

Goroutine 70757 (running) created at:
  k8s.io/apiserver/pkg/server.(*GenericAPIServer).RunPostStartHooks()
      k8s.io/apiserver/pkg/server/hooks.go:167 +0x10c
  k8s.io/apiserver/pkg/server.preparedGenericAPIServer.NonBlockingRunWithContext()
      k8s.io/apiserver/pkg/server/genericapiserver.go:764 +0x198
  k8s.io/apiserver/pkg/server.preparedGenericAPIServer.RunWithContext()
      k8s.io/apiserver/pkg/server/genericapiserver.go:602 +0x718
  k8s.io/kube-aggregator/pkg/apiserver.preparedAPIAggregator.Run()
      k8s.io/kube-aggregator/pkg/apiserver/apiserver.go:504 +0x130
  k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServer.func3()
      k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:442 +0xb0
==================
==================
WARNING: DATA RACE
Read at 0x00010a07f360 by goroutine 70716:
  k8s.io/apimachinery/pkg/apis/meta/v1.(*TypeMeta).GroupVersionKind()
      k8s.io/apimachinery/pkg/apis/meta/v1/meta.go:126 +0x58
  k8s.io/apimachinery/pkg/runtime.WithVersionEncoder.Encode()
      k8s.io/apimachinery/pkg/runtime/helper.go:231 +0x13c
  k8s.io/apimachinery/pkg/runtime.(*WithVersionEncoder).Encode()
      <autogenerated>:1 +0x94
  k8s.io/apimachinery/pkg/runtime.Encode()
      k8s.io/apimachinery/pkg/runtime/codec.go:49 +0x90
  k8s.io/client-go/rest.(*Request).Body()
      k8s.io/client-go/rest/request.go:530 +0x584
  k8s.io/client-go/gentype.(*Client[go.shape.*uint8]).Create()
      k8s.io/client-go/gentype/type.go:212 +0x244
  k8s.io/client-go/kubernetes/typed/scheduling/v1.(*priorityClasses).Create()
      <autogenerated>:1 +0xf8
  k8s.io/kubernetes/pkg/registry/scheduling/rest.(*RESTStorageProvider).PostStartHook.RESTStorageProvider.PostStartHook.AddSystemPriorityClasses.func1.1()
      k8s.io/kubernetes/pkg/registry/scheduling/rest/storage_scheduling.go:93 +0x218
  k8s.io/apimachinery/pkg/util/wait.Poll.ConditionFunc.WithContext.func1()
      k8s.io/apimachinery/pkg/util/wait/wait.go:113 +0x30
  k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext()
      k8s.io/apimachinery/pkg/util/wait/wait.go:159 +0x94
  k8s.io/apimachinery/pkg/util/wait.waitForWithContext()
      k8s.io/apimachinery/pkg/util/wait/wait.go:212 +0x124
  k8s.io/apimachinery/pkg/util/wait.poll()
      k8s.io/apimachinery/pkg/util/wait/poll.go:260 +0xc0
  k8s.io/apimachinery/pkg/util/wait.PollWithContext()
      k8s.io/apimachinery/pkg/util/wait/poll.go:85 +0x84
  k8s.io/apimachinery/pkg/util/wait.Poll()
      k8s.io/apimachinery/pkg/util/wait/poll.go:66 +0x70
  k8s.io/kubernetes/pkg/registry/scheduling/rest.(*RESTStorageProvider).PostStartHook.RESTStorageProvider.PostStartHook.AddSystemPriorityClasses.func1()
      k8s.io/kubernetes/pkg/registry/scheduling/rest/storage_scheduling.go:82 +0x94
  k8s.io/apiserver/pkg/server.runPostStartHook.func1()
      k8s.io/apiserver/pkg/server/hooks.go:200 +0x7c
  k8s.io/apiserver/pkg/server.runPostStartHook()
      k8s.io/apiserver/pkg/server/hooks.go:201 +0x84
  k8s.io/apiserver/pkg/server.(*GenericAPIServer).RunPostStartHooks.gowrap2()
      k8s.io/apiserver/pkg/server/hooks.go:167 +0x98

Previous write at 0x00010a07f360 by goroutine 70757:
  k8s.io/apimachinery/pkg/apis/meta/v1.(*TypeMeta).SetGroupVersionKind()
      k8s.io/apimachinery/pkg/apis/meta/v1/meta.go:121 +0x144
  k8s.io/apimachinery/pkg/runtime.WithVersionEncoder.Encode()
      k8s.io/apimachinery/pkg/runtime/helper.go:242 +0x260
  k8s.io/apimachinery/pkg/runtime.(*WithVersionEncoder).Encode()
      <autogenerated>:1 +0x94
  k8s.io/apimachinery/pkg/runtime.Encode()
      k8s.io/apimachinery/pkg/runtime/codec.go:49 +0x90
  k8s.io/client-go/rest.(*Request).Body()
      k8s.io/client-go/rest/request.go:530 +0x584
  k8s.io/client-go/gentype.(*Client[go.shape.*uint8]).Create()
      k8s.io/client-go/gentype/type.go:212 +0x244
  k8s.io/client-go/kubernetes/typed/scheduling/v1.(*priorityClasses).Create()
      <autogenerated>:1 +0xf8
  k8s.io/kubernetes/pkg/registry/scheduling/rest.(*RESTStorageProvider).PostStartHook.RESTStorageProvider.PostStartHook.AddSystemPriorityClasses.func1.1()
      k8s.io/kubernetes/pkg/registry/scheduling/rest/storage_scheduling.go:93 +0x218
  k8s.io/apimachinery/pkg/util/wait.Poll.ConditionFunc.WithContext.func1()
      k8s.io/apimachinery/pkg/util/wait/wait.go:113 +0x30
  k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext()
      k8s.io/apimachinery/pkg/util/wait/wait.go:159 +0x94
  k8s.io/apimachinery/pkg/util/wait.waitForWithContext()
      k8s.io/apimachinery/pkg/util/wait/wait.go:212 +0x124
  k8s.io/apimachinery/pkg/util/wait.poll()
      k8s.io/apimachinery/pkg/util/wait/poll.go:260 +0xc0
  k8s.io/apimachinery/pkg/util/wait.PollWithContext()
      k8s.io/apimachinery/pkg/util/wait/poll.go:85 +0x84
  k8s.io/apimachinery/pkg/util/wait.Poll()
      k8s.io/apimachinery/pkg/util/wait/poll.go:66 +0x70
  k8s.io/kubernetes/pkg/registry/scheduling/rest.(*RESTStorageProvider).PostStartHook.RESTStorageProvider.PostStartHook.AddSystemPriorityClasses.func1()
      k8s.io/kubernetes/pkg/registry/scheduling/rest/storage_scheduling.go:82 +0x94
  k8s.io/apiserver/pkg/server.runPostStartHook.func1()
      k8s.io/apiserver/pkg/server/hooks.go:200 +0x7c
  k8s.io/apiserver/pkg/server.runPostStartHook()
      k8s.io/apiserver/pkg/server/hooks.go:201 +0x84
  k8s.io/apiserver/pkg/server.(*GenericAPIServer).RunPostStartHooks.gowrap2()
      k8s.io/apiserver/pkg/server/hooks.go:167 +0x98

Goroutine 70716 (running) created at:
  k8s.io/apiserver/pkg/server.(*GenericAPIServer).RunPostStartHooks()
      k8s.io/apiserver/pkg/server/hooks.go:167 +0x10c
  k8s.io/apiserver/pkg/server.preparedGenericAPIServer.NonBlockingRunWithContext()
      k8s.io/apiserver/pkg/server/genericapiserver.go:764 +0x198
  k8s.io/apiserver/pkg/server.preparedGenericAPIServer.RunWithContext()
      k8s.io/apiserver/pkg/server/genericapiserver.go:602 +0x718
  k8s.io/kube-aggregator/pkg/apiserver.preparedAPIAggregator.Run()
      k8s.io/kube-aggregator/pkg/apiserver/apiserver.go:504 +0x130
  k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServer.func3()
      k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:442 +0xb0

Goroutine 70757 (running) created at:
  k8s.io/apiserver/pkg/server.(*GenericAPIServer).RunPostStartHooks()
      k8s.io/apiserver/pkg/server/hooks.go:167 +0x10c
  k8s.io/apiserver/pkg/server.preparedGenericAPIServer.NonBlockingRunWithContext()
      k8s.io/apiserver/pkg/server/genericapiserver.go:764 +0x198
  k8s.io/apiserver/pkg/server.preparedGenericAPIServer.RunWithContext()
      k8s.io/apiserver/pkg/server/genericapiserver.go:602 +0x718
  k8s.io/kube-aggregator/pkg/apiserver.preparedAPIAggregator.Run()
      k8s.io/kube-aggregator/pkg/apiserver/apiserver.go:504 +0x130
  k8s.io/kubernetes/cmd/kube-apiserver/app/testing.StartTestServer.func3()
      k8s.io/kubernetes/cmd/kube-apiserver/app/testing/testserver.go:442 +0xb0
==================

Signed-off-by: Monis Khan <mok@microsoft.com>
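Both race reports above show the same pattern: the scheduling post-start hook runs once per API server (the test server starts both a kube-apiserver and an aggregator), and both goroutines call `Create` with the *same* package-level `PriorityClass`. Encoding the request body goes through `runtime.WithVersionEncoder.Encode`, which calls `SetGroupVersionKind` on the object's `TypeMeta` (the write) and `GroupVersionKind` (the read). A minimal, self-contained sketch of that pattern, using simplified hypothetical stand-ins for the real apimachinery types:

```go
package main

import (
	"fmt"
	"sync"
)

// Simplified stand-ins for the real k8s.io/apimachinery types.
type TypeMeta struct{ Kind, APIVersion string }

type PriorityClass struct {
	TypeMeta
	Name string
}

// encode mimics runtime.WithVersionEncoder.Encode: it writes the GVK into
// the object (SetGroupVersionKind, the "previous write" in the report) and
// then reads it back (GroupVersionKind, the "read"). Two goroutines doing
// this to the SAME object is exactly the reported race.
func encode(obj *PriorityClass) string {
	obj.Kind, obj.APIVersion = "PriorityClass", "scheduling.k8s.io/v1"
	return fmt.Sprintf("%s/%s %s", obj.APIVersion, obj.Kind, obj.Name)
}

// A package-level global, like the system priority classes the post-start
// hook shares across the two servers started by StartTestServer.
var systemCritical = &PriorityClass{Name: "system-cluster-critical"}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = encode(systemCritical) // both goroutines mutate the shared global
		}()
	}
	wg.Wait()
	fmt.Println("done") // run with `go run -race` to see a similar report
}
```

Run with the race detector enabled (`go run -race`) to reproduce a report of the same shape as the one in the PR description.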
@k8s-ci-robot added labels release-note-none, kind/bug, size/XS, cncf-cla: yes, do-not-merge/needs-sig, needs-triage, needs-priority (Jul 3, 2025)
@k8s-ci-robot requested review from dom4ha and macsko (Jul 3, 2025)
@k8s-ci-robot added sig/scheduling and removed do-not-merge/needs-sig (Jul 3, 2025)
@k8s-ci-robot added triage/accepted and removed needs-triage (Jul 3, 2025)
@aramase (Member) left a comment:

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 3, 2025
@k8s-ci-robot (Contributor):

LGTM label has been added.

Git tree hash: fa62dff3371f4c4448414a01693e223cb38842a7

@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: aramase, enj
Once this PR has been reviewed and has the lgtm label, please ask for approval from macsko. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot (Contributor):

@enj: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-kubernetes-e2e-gce
Commit: fe633cd
Required: true
Rerun command: /test pull-kubernetes-e2e-gce

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@@ -90,7 +90,8 @@ func AddSystemPriorityClasses() genericapiserver.PostStartHookFunc {
_, err := schedClientSet.PriorityClasses().Get(context.TODO(), pc.Name, metav1.GetOptions{})
Review comment (Member):
What about pc.Name? Would it also cause a data race?

Reply (Member):

pc.Name is fine; for the root cause, see issue #82497.

@macsko (Member) commented Jul 3, 2025

@enj Please follow the PR description guidelines. In particular, the "What this PR does / why we need it" section is required. Please also mention how the race was detected (in CI or by hand?) and how it can be reproduced.

@@ -90,7 +90,8 @@ func AddSystemPriorityClasses() genericapiserver.PostStartHookFunc {
_, err := schedClientSet.PriorityClasses().Get(context.TODO(), pc.Name, metav1.GetOptions{})
if err != nil {
if apierrors.IsNotFound(err) {
_, err := schedClientSet.PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{})
// create can mutate its input so we deep copy pc here since it is a global
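The fix follows directly from the comment above: pass a deep copy to `Create` so the encoder's write lands on a private object instead of the shared global. A minimal sketch using the same simplified, hypothetical stand-in types (the real code would call the generated `DeepCopy()` on the metav1 type):

```go
package main

import "fmt"

// Simplified, hypothetical stand-ins for the real apimachinery types.
type TypeMeta struct{ Kind, APIVersion string }

type PriorityClass struct {
	TypeMeta
	Name string
}

// DeepCopy mirrors the generated DeepCopy on API types: the copy shares no
// mutable state with the original (no pointer fields here, so a value copy
// is sufficient).
func (pc *PriorityClass) DeepCopy() *PriorityClass {
	out := *pc
	return &out
}

// create mimics schedClientSet.PriorityClasses().Create: encoding the
// request body writes the GVK into the argument's TypeMeta.
func create(pc *PriorityClass) {
	pc.Kind, pc.APIVersion = "PriorityClass", "scheduling.k8s.io/v1"
}

var global = &PriorityClass{Name: "system-node-critical"}

func main() {
	// Racy version: create(global) would write into the shared global.
	// Fixed version: the write lands on a throwaway copy instead.
	create(global.DeepCopy())
	fmt.Printf("global unchanged: %+v\n", global.TypeMeta)
}
```

Because each caller now mutates its own copy, the "previous write" half of the race disappears and the shared global stays read-only after initialization.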
@likakuli (Member) commented Jul 3, 2025

6 participants
