[
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "Sorry, we do not accept changes directly against this repository. Please see\nCONTRIBUTING.md for information on where and how to contribute instead.\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing guidelines\n\nDo not open pull requests directly against this repository, they will be ignored. Instead, please open pull requests against [kubernetes/kubernetes](https://git.k8s.io/kubernetes/).  Please follow the same [contributing guide](https://git.k8s.io/kubernetes/CONTRIBUTING.md) you would follow for any other pull request made to kubernetes/kubernetes.\n\nThis repository is published from [kubernetes/kubernetes/staging/src/k8s.io/kube-scheduler](https://git.k8s.io/kubernetes/staging/src/k8s.io/kube-scheduler) by the [kubernetes publishing-bot](https://git.k8s.io/publishing-bot).\n\nPlease see [Staging Directory and Publishing](https://git.k8s.io/community/contributors/devel/sig-architecture/staging.md) for more information\n"
  },
  {
    "path": "LICENSE",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "OWNERS",
    "content": "# See the OWNERS docs at https://go.k8s.io/owners\n\napprovers:\n  - sig-scheduling-maintainers\n  - sttts\n  - luxas\nreviewers:\n  - sig-scheduling\n  - luxas\n  - sttts\nlabels:\n  - sig/scheduling\n"
  },
  {
    "path": "README.md",
    "content": "> ⚠️ **This is an automatically published [staged repository](https://git.k8s.io/kubernetes/staging#external-repository-staging-area) for Kubernetes**.   \n> Contributions, including issues and pull requests, should be made to the main Kubernetes repository: [https://github.com/kubernetes/kubernetes](https://github.com/kubernetes/kubernetes).  \n> This repository is read-only for importing, and not used for direct contributions.  \n> See [CONTRIBUTING.md](./CONTRIBUTING.md) for more details.\n\n# kube-scheduler\n\nImplements [KEP 115 - Moving ComponentConfig API types to staging repos](https://git.k8s.io/enhancements/keps/sig-cluster-lifecycle/wgs/115-componentconfig#kube-scheduler-changes)\n\nThis repo provides external, versioned ComponentConfig API types for configuring the kube-scheduler.\nThese external types can easily be vendored and used by any third-party tool writing Kubernetes\nComponentConfig objects.\n\n## Compatibility\n\nHEAD of this repo will match HEAD of k8s.io/apiserver, k8s.io/apimachinery, and k8s.io/client-go.\n\n## Where does it come from?\n\nThis repo is synced from https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/kube-scheduler.\nCode changes are made in that location, merged into `k8s.io/kubernetes` and later synced here by a bot.\n\n"
  },
  {
    "path": "SECURITY_CONTACTS",
    "content": "# Defined below are the security contacts for this repo.\n#\n# They are the contact point for the Product Security Committee to reach out\n# to for triaging and handling of incoming issues.\n#\n# The below names agree to abide by the\n# [Embargo Policy](https://git.k8s.io/security/private-distributors-list.md#embargo-policy)\n# and will be removed and replaced if they violate that agreement.\n#\n# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE\n# INSTRUCTIONS AT https://kubernetes.io/security/\n\ncjcullen\njoelsmith\nliggitt\nphilips\ntallclair\n"
  },
  {
    "path": "code-of-conduct.md",
    "content": "# Kubernetes Community Code of Conduct\n\nPlease refer to our [Kubernetes Community Code of Conduct](https://git.k8s.io/community/code-of-conduct.md)\n"
  },
  {
    "path": "config/OWNERS",
    "content": "# See the OWNERS docs at https://go.k8s.io/owners\n\n# Disable inheritance as this is an api owners file\noptions:\n  no_parent_owners: true\napprovers:\n  - api-approvers\n  - sig-scheduling-api-approvers\nreviewers:\n  - api-reviewers\nlabels:\n  - kind/api-change\n"
  },
  {
    "path": "config/v1/doc.go",
    "content": "/*\nCopyright 2022 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// +k8s:deepcopy-gen=package\n// +k8s:openapi-gen=true\n// +k8s:openapi-model-package=io.k8s.kube-scheduler.config.v1\n\n// +groupName=kubescheduler.config.k8s.io\n\npackage v1\n"
  },
  {
    "path": "config/v1/register.go",
    "content": "/*\nCopyright 2022 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1\n\nimport (\n\t\"k8s.io/apimachinery/pkg/runtime\"\n\t\"k8s.io/apimachinery/pkg/runtime/schema\"\n)\n\n// GroupName is the group name used in this package\nconst GroupName = \"kubescheduler.config.k8s.io\"\n\n// SchemeGroupVersion is group version used to register these objects\nvar SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: \"v1\"}\n\nvar (\n\t// SchemeBuilder is the scheme builder with scheme init functions to run for this API package\n\tSchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)\n\t// AddToScheme is a global function that registers this API group & version to a scheme\n\tAddToScheme = SchemeBuilder.AddToScheme\n)\n\n// addKnownTypes registers known types to the given scheme\nfunc addKnownTypes(scheme *runtime.Scheme) error {\n\tscheme.AddKnownTypes(SchemeGroupVersion,\n\t\t&KubeSchedulerConfiguration{},\n\t\t&DefaultPreemptionArgs{},\n\t\t&InterPodAffinityArgs{},\n\t\t&NodeResourcesBalancedAllocationArgs{},\n\t\t&NodeResourcesFitArgs{},\n\t\t&PodTopologySpreadArgs{},\n\t\t&VolumeBindingArgs{},\n\t\t&NodeAffinityArgs{},\n\t\t&DynamicResourcesArgs{},\n\t)\n\treturn nil\n}\n"
  },
  {
    "path": "config/v1/types.go",
    "content": "/*\nCopyright 2022 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"time\"\n\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n\tcomponentbaseconfigv1alpha1 \"k8s.io/component-base/config/v1alpha1\"\n\t\"sigs.k8s.io/yaml\"\n)\n\nconst (\n\t// SchedulerDefaultLockObjectNamespace defines default scheduler lock object namespace (\"kube-system\")\n\tSchedulerDefaultLockObjectNamespace string = metav1.NamespaceSystem\n\n\t// SchedulerDefaultLockObjectName defines default scheduler lock object name (\"kube-scheduler\")\n\tSchedulerDefaultLockObjectName = \"kube-scheduler\"\n\n\t// SchedulerDefaultProviderName defines the default provider names\n\tSchedulerDefaultProviderName = \"DefaultProvider\"\n)\n\n// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n\n// KubeSchedulerConfiguration configures a scheduler\ntype KubeSchedulerConfiguration struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\n\t// Parallelism defines the amount of parallelism in algorithms for scheduling a Pods. Must be greater than 0. 
Defaults to 16\n\tParallelism *int32 `json:\"parallelism,omitempty\"`\n\n\t// LeaderElection defines the configuration of leader election client.\n\tLeaderElection componentbaseconfigv1alpha1.LeaderElectionConfiguration `json:\"leaderElection\"`\n\n\t// ClientConnection specifies the kubeconfig file and client connection\n\t// settings for the proxy server to use when communicating with the apiserver.\n\tClientConnection componentbaseconfigv1alpha1.ClientConnectionConfiguration `json:\"clientConnection\"`\n\n\t// DebuggingConfiguration holds configuration for Debugging related features\n\t// TODO: We might wanna make this a substruct like Debugging componentbaseconfigv1alpha1.DebuggingConfiguration\n\tcomponentbaseconfigv1alpha1.DebuggingConfiguration `json:\",inline\"`\n\n\t// PercentageOfNodesToScore is the percentage of all nodes that once found feasible\n\t// for running a pod, the scheduler stops its search for more feasible nodes in\n\t// the cluster. This helps improve scheduler's performance. Scheduler always tries to find\n\t// at least \"minFeasibleNodesToFind\" feasible nodes no matter what the value of this flag is.\n\t// Example: if the cluster size is 500 nodes and the value of this flag is 30,\n\t// then scheduler stops finding further feasible nodes once it finds 150 feasible ones.\n\t// When the value is 0, default percentage (5%--50% based on the size of the cluster) of the\n\t// nodes will be scored. It is overridden by profile level PercentageOfNodesToScore.\n\tPercentageOfNodesToScore *int32 `json:\"percentageOfNodesToScore,omitempty\"`\n\n\t// PodInitialBackoffSeconds is the initial backoff for unschedulable pods.\n\t// If specified, it must be greater than 0. If this value is null, the default value (1s)\n\t// will be used.\n\tPodInitialBackoffSeconds *int64 `json:\"podInitialBackoffSeconds,omitempty\"`\n\n\t// PodMaxBackoffSeconds is the max backoff for unschedulable pods.\n\t// If specified, it must be greater than podInitialBackoffSeconds. 
If this value is null,\n\t// the default value (10s) will be used.\n\tPodMaxBackoffSeconds *int64 `json:\"podMaxBackoffSeconds,omitempty\"`\n\n\t// Profiles are scheduling profiles that kube-scheduler supports. Pods can\n\t// choose to be scheduled under a particular profile by setting its associated\n\t// scheduler name. Pods that don't specify any scheduler name are scheduled\n\t// with the \"default-scheduler\" profile, if present here.\n\t// +listType=map\n\t// +listMapKey=schedulerName\n\tProfiles []KubeSchedulerProfile `json:\"profiles,omitempty\"`\n\n\t// Extenders are the list of scheduler extenders, each holding the values of how to communicate\n\t// with the extender. These extenders are shared by all scheduler profiles.\n\t// +listType=set\n\tExtenders []Extender `json:\"extenders,omitempty\"`\n\n\t// DelayCacheUntilActive specifies when to start caching. If this is true and leader election is enabled,\n\t// the scheduler will wait to fill informer caches until it is the leader. 
Doing so will result in slower\n\t// failover with the benefit of lower memory overhead while waiting to become the leader.\n\t// Defaults to false.\n\tDelayCacheUntilActive bool `json:\"delayCacheUntilActive,omitempty\"`\n}\n\n// DecodeNestedObjects decodes plugin args for known types.\nfunc (c *KubeSchedulerConfiguration) DecodeNestedObjects(d runtime.Decoder) error {\n\tvar strictDecodingErrs []error\n\tfor i := range c.Profiles {\n\t\tprof := &c.Profiles[i]\n\t\tfor j := range prof.PluginConfig {\n\t\t\terr := prof.PluginConfig[j].decodeNestedObjects(d)\n\t\t\tif err != nil {\n\t\t\t\tdecodingErr := fmt.Errorf(\"decoding .profiles[%d].pluginConfig[%d]: %w\", i, j, err)\n\t\t\t\tif runtime.IsStrictDecodingError(err) {\n\t\t\t\t\tstrictDecodingErrs = append(strictDecodingErrs, decodingErr)\n\t\t\t\t} else {\n\t\t\t\t\treturn decodingErr\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tif len(strictDecodingErrs) > 0 {\n\t\treturn runtime.NewStrictDecodingError(strictDecodingErrs)\n\t}\n\treturn nil\n}\n\n// EncodeNestedObjects encodes plugin args.\nfunc (c *KubeSchedulerConfiguration) EncodeNestedObjects(e runtime.Encoder) error {\n\tfor i := range c.Profiles {\n\t\tprof := &c.Profiles[i]\n\t\tfor j := range prof.PluginConfig {\n\t\t\terr := prof.PluginConfig[j].encodeNestedObjects(e)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"encoding .profiles[%d].pluginConfig[%d]: %w\", i, j, err)\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\n// KubeSchedulerProfile is a scheduling profile.\ntype KubeSchedulerProfile struct {\n\t// SchedulerName is the name of the scheduler associated with this profile.\n\t// If SchedulerName matches the pod's \"spec.schedulerName\", then the pod\n\t// is scheduled with this profile.\n\tSchedulerName *string `json:\"schedulerName,omitempty\"`\n\n\t// PercentageOfNodesToScore is the percentage of all nodes that once found feasible\n\t// for running a pod, the scheduler stops its search for more feasible nodes in\n\t// the cluster. 
This helps improve scheduler's performance. Scheduler always tries to find\n\t// at least \"minFeasibleNodesToFind\" feasible nodes no matter what the value of this flag is.\n\t// Example: if the cluster size is 500 nodes and the value of this flag is 30,\n\t// then scheduler stops finding further feasible nodes once it finds 150 feasible ones.\n\t// When the value is 0, default percentage (5%--50% based on the size of the cluster) of the\n\t// nodes will be scored. It will override global PercentageOfNodesToScore. If it is empty,\n\t// global PercentageOfNodesToScore will be used.\n\tPercentageOfNodesToScore *int32 `json:\"percentageOfNodesToScore,omitempty\"`\n\n\t// Plugins specify the set of plugins that should be enabled or disabled.\n\t// Enabled plugins are the ones that should be enabled in addition to the\n\t// default plugins. Disabled plugins are any of the default plugins that\n\t// should be disabled.\n\t// When no enabled or disabled plugin is specified for an extension point,\n\t// default plugins for that extension point will be used if there is any.\n\t// If a QueueSort plugin is specified, the same QueueSort Plugin and\n\t// PluginConfig must be specified for all profiles.\n\tPlugins *Plugins `json:\"plugins,omitempty\"`\n\n\t// PluginConfig is an optional set of custom plugin arguments for each plugin.\n\t// Omitting config args for a plugin is equivalent to using the default config\n\t// for that plugin.\n\t// +listType=map\n\t// +listMapKey=name\n\tPluginConfig []PluginConfig `json:\"pluginConfig,omitempty\"`\n}\n\n// Plugins include multiple extension points. When specified, the list of plugins for\n// a particular extension point are the only ones enabled. If an extension point is\n// omitted from the config, then the default set of plugins is used for that extension point.\n// Enabled plugins are called in the order specified here, after default plugins. 
If they need to\n// be invoked before default plugins, default plugins must be disabled and re-enabled here in the desired order.\ntype Plugins struct {\n\t// PreEnqueue is a list of plugins that should be invoked before adding pods to the scheduling queue.\n\tPreEnqueue PluginSet `json:\"preEnqueue,omitempty\"`\n\n\t// QueueSort is a list of plugins that should be invoked when sorting pods in the scheduling queue.\n\tQueueSort PluginSet `json:\"queueSort,omitempty\"`\n\n\t// PreFilter is a list of plugins that should be invoked at \"PreFilter\" extension point of the scheduling framework.\n\tPreFilter PluginSet `json:\"preFilter,omitempty\"`\n\n\t// Filter is a list of plugins that should be invoked when filtering out nodes that cannot run the Pod.\n\tFilter PluginSet `json:\"filter,omitempty\"`\n\n\t// PostFilter is a list of plugins that are invoked after the filtering phase, but only when no feasible nodes were found for the pod.\n\tPostFilter PluginSet `json:\"postFilter,omitempty\"`\n\n\t// PreScore is a list of plugins that are invoked before scoring.\n\tPreScore PluginSet `json:\"preScore,omitempty\"`\n\n\t// Score is a list of plugins that should be invoked when ranking nodes that have passed the filtering phase.\n\tScore PluginSet `json:\"score,omitempty\"`\n\n\t// Reserve is a list of plugins invoked when reserving/unreserving resources\n\t// after a node is assigned to run the pod.\n\tReserve PluginSet `json:\"reserve,omitempty\"`\n\n\t// Permit is a list of plugins that control binding of a Pod. These plugins can prevent or delay binding of a Pod.\n\tPermit PluginSet `json:\"permit,omitempty\"`\n\n\t// PreBind is a list of plugins that should be invoked before a pod is bound.\n\tPreBind PluginSet `json:\"preBind,omitempty\"`\n\n\t// Bind is a list of plugins that should be invoked at \"Bind\" extension point of the scheduling framework.\n\t// The scheduler calls these plugins in order. 
Scheduler skips the rest of these plugins as soon as one returns success.\n\tBind PluginSet `json:\"bind,omitempty\"`\n\n\t// PostBind is a list of plugins that should be invoked after a pod is successfully bound.\n\tPostBind PluginSet `json:\"postBind,omitempty\"`\n\n\t// MultiPoint is a simplified config section to enable plugins for all valid extension points.\n\t// Plugins enabled through MultiPoint will automatically register for every individual extension\n\t// point the plugin has implemented. Disabling a plugin through MultiPoint disables that behavior.\n\t// The same is true for disabling \"*\" through MultiPoint (no default plugins will be automatically registered).\n\t// Plugins can still be disabled through their individual extension points.\n\t//\n\t// In terms of precedence, plugin config follows this basic hierarchy\n\t//   1. Specific extension points\n\t//   2. Explicitly configured MultiPoint plugins\n\t//   3. The set of default plugins, as MultiPoint plugins\n\t// This implies that a higher precedence plugin will run first and overwrite any settings within MultiPoint.\n\t// Explicitly user-configured plugins also take a higher precedence over default plugins.\n\t// Within this hierarchy, an Enabled setting takes precedence over Disabled. For example, if a plugin is\n\t// set in both `multiPoint.Enabled` and `multiPoint.Disabled`, the plugin will be enabled. Similarly,\n\t// including `multiPoint.Disabled = '*'` and `multiPoint.Enabled = pluginA` will still register that specific\n\t// plugin through MultiPoint. 
This follows the same behavior as all other extension point configurations.\n\tMultiPoint PluginSet `json:\"multiPoint,omitempty\"`\n\n\t// PlacementGenerate is a list of plugins that should be invoked during pod group scheduling cycle when determining placements for a pod group.\n\tPlacementGenerate PluginSet `json:\"placementGenerate,omitempty\"`\n\n\t// PlacementScore is a list of plugins that should be invoked during workload scheduling cycle when ranking pod group assignments.\n\tPlacementScore PluginSet `json:\"placementScore,omitempty\"`\n}\n\n// PluginSet specifies enabled and disabled plugins for an extension point.\n// If an array is empty, missing, or nil, default plugins at that extension point will be used.\ntype PluginSet struct {\n\t// Enabled specifies plugins that should be enabled in addition to default plugins.\n\t// If the default plugin is also configured in the scheduler config file, the weight of plugin will\n\t// be overridden accordingly.\n\t// These are called after default plugins and in the same order specified here.\n\t// +listType=atomic\n\tEnabled []Plugin `json:\"enabled,omitempty\"`\n\t// Disabled specifies default plugins that should be disabled.\n\t// When all default plugins need to be disabled, an array containing only one \"*\" should be provided.\n\t// +listType=map\n\t// +listMapKey=name\n\tDisabled []Plugin `json:\"disabled,omitempty\"`\n}\n\n// Plugin specifies a plugin name and its weight when applicable. Weight is used only for Score and PlacementScore plugins.\ntype Plugin struct {\n\t// Name defines the name of plugin\n\tName string `json:\"name\"`\n\t// Weight defines the weight of plugin, only used for Score and PlacementScore plugins.\n\tWeight *int32 `json:\"weight,omitempty\"`\n}\n\n// PluginConfig specifies arguments that should be passed to a plugin at the time of initialization.\n// A plugin that is invoked at multiple extension points is initialized once. 
Args can have arbitrary structure.\n// It is up to the plugin to process these Args.\ntype PluginConfig struct {\n\t// Name defines the name of plugin being configured\n\tName string `json:\"name\"`\n\t// Args defines the arguments passed to the plugins at the time of initialization. Args can have arbitrary structure.\n\tArgs runtime.RawExtension `json:\"args,omitempty\"`\n}\n\nfunc (c *PluginConfig) decodeNestedObjects(d runtime.Decoder) error {\n\tgvk := SchemeGroupVersion.WithKind(c.Name + \"Args\")\n\t// dry-run to detect and skip out-of-tree plugin args.\n\tif _, _, err := d.Decode(nil, &gvk, nil); runtime.IsNotRegisteredError(err) {\n\t\treturn nil\n\t}\n\n\tvar strictDecodingErr error\n\tobj, parsedGvk, err := d.Decode(c.Args.Raw, &gvk, nil)\n\tif err != nil {\n\t\tdecodingArgsErr := fmt.Errorf(\"decoding args for plugin %s: %w\", c.Name, err)\n\t\tif obj != nil && runtime.IsStrictDecodingError(err) {\n\t\t\tstrictDecodingErr = runtime.NewStrictDecodingError([]error{decodingArgsErr})\n\t\t} else {\n\t\t\treturn decodingArgsErr\n\t\t}\n\t}\n\tif parsedGvk.GroupKind() != gvk.GroupKind() {\n\t\treturn fmt.Errorf(\"args for plugin %s were not of type %s, got %s\", c.Name, gvk.GroupKind(), parsedGvk.GroupKind())\n\t}\n\tc.Args.Object = obj\n\treturn strictDecodingErr\n}\n\nfunc (c *PluginConfig) encodeNestedObjects(e runtime.Encoder) error {\n\tif c.Args.Object == nil {\n\t\treturn nil\n\t}\n\tvar buf bytes.Buffer\n\terr := e.Encode(c.Args.Object, &buf)\n\tif err != nil {\n\t\treturn err\n\t}\n\t// The <e> encoder might be a YAML encoder, but the parent encoder expects\n\t// JSON output, so we convert YAML back to JSON.\n\t// This is a no-op if <e> produces JSON.\n\tjson, err := yaml.YAMLToJSON(buf.Bytes())\n\tif err != nil {\n\t\treturn err\n\t}\n\tc.Args.Raw = json\n\treturn nil\n}\n\n// Extender holds the parameters used to communicate with the extender. 
If a verb is unspecified/empty,\n// it is assumed that the extender chose not to provide that extension.\ntype Extender struct {\n\t// URLPrefix at which the extender is available\n\tURLPrefix string `json:\"urlPrefix\"`\n\t// Verb for the filter call, empty if not supported. This verb is appended to the URLPrefix when issuing the filter call to extender.\n\tFilterVerb string `json:\"filterVerb,omitempty\"`\n\t// Verb for the preempt call, empty if not supported. This verb is appended to the URLPrefix when issuing the preempt call to extender.\n\tPreemptVerb string `json:\"preemptVerb,omitempty\"`\n\t// Verb for the prioritize call, empty if not supported. This verb is appended to the URLPrefix when issuing the prioritize call to extender.\n\tPrioritizeVerb string `json:\"prioritizeVerb,omitempty\"`\n\t// The numeric multiplier for the node scores that the prioritize call generates.\n\t// The weight should be a positive integer\n\tWeight int64 `json:\"weight,omitempty\"`\n\t// Verb for the bind call, empty if not supported. This verb is appended to the URLPrefix when issuing the bind call to extender.\n\t// If this method is implemented by the extender, it is the extender's responsibility to bind the pod to apiserver. Only one extender\n\t// can implement this function.\n\tBindVerb string `json:\"bindVerb,omitempty\"`\n\t// EnableHTTPS specifies whether https should be used to communicate with the extender\n\tEnableHTTPS bool `json:\"enableHTTPS,omitempty\"`\n\t// TLSConfig specifies the transport layer security config\n\tTLSConfig *ExtenderTLSConfig `json:\"tlsConfig,omitempty\"`\n\t// HTTPTimeout specifies the timeout duration for a call to the extender. Filter timeout fails the scheduling of the pod. 
Prioritize\n\t// timeout is ignored, k8s/other extenders priorities are used to select the node.\n\tHTTPTimeout metav1.Duration `json:\"httpTimeout,omitempty\"`\n\t// NodeCacheCapable specifies that the extender is capable of caching node information,\n\t// so the scheduler should only send minimal information about the eligible nodes\n\t// assuming that the extender already cached full details of all nodes in the cluster\n\tNodeCacheCapable bool `json:\"nodeCacheCapable,omitempty\"`\n\t// ManagedResources is a list of extended resources that are managed by\n\t// this extender.\n\t// - A pod will be sent to the extender on the Filter, Prioritize and Bind\n\t//   (if the extender is the binder) phases iff the pod requests at least\n\t//   one of the extended resources in this list. If empty or unspecified,\n\t//   all pods will be sent to this extender.\n\t// - If IgnoredByScheduler is set to true for a resource, kube-scheduler\n\t//   will skip checking the resource in predicates.\n\t// +optional\n\t// +listType=atomic\n\tManagedResources []ExtenderManagedResource `json:\"managedResources,omitempty\"`\n\t// Ignorable specifies if the extender is ignorable, i.e. scheduling should not\n\t// fail when the extender returns an error or is not reachable.\n\tIgnorable bool `json:\"ignorable,omitempty\"`\n}\n\n// ExtenderManagedResource describes the arguments of extended resources\n// managed by an extender.\ntype ExtenderManagedResource struct {\n\t// Name is the extended resource name.\n\tName string `json:\"name\"`\n\t// IgnoredByScheduler indicates whether kube-scheduler should ignore this\n\t// resource when applying predicates.\n\tIgnoredByScheduler bool `json:\"ignoredByScheduler,omitempty\"`\n}\n\n// ExtenderTLSConfig contains settings to enable TLS with extender\ntype ExtenderTLSConfig struct {\n\t// Server should be accessed without verifying the TLS certificate. 
For testing only.\n\tInsecure bool `json:\"insecure,omitempty\"`\n\t// ServerName is passed to the server for SNI and is used in the client to check server\n\t// certificates against. If ServerName is empty, the hostname used to contact the\n\t// server is used.\n\tServerName string `json:\"serverName,omitempty\"`\n\n\t// Server requires TLS client certificate authentication\n\tCertFile string `json:\"certFile,omitempty\"`\n\t// Server requires TLS client certificate authentication\n\tKeyFile string `json:\"keyFile,omitempty\"`\n\t// Trusted root certificates for server\n\tCAFile string `json:\"caFile,omitempty\"`\n\n\t// CertData holds PEM-encoded bytes (typically read from a client certificate file).\n\t// CertData takes precedence over CertFile\n\t// +listType=atomic\n\tCertData []byte `json:\"certData,omitempty\"`\n\t// KeyData holds PEM-encoded bytes (typically read from a client certificate key file).\n\t// KeyData takes precedence over KeyFile\n\t// +listType=atomic\n\tKeyData []byte `json:\"keyData,omitempty\"`\n\t// CAData holds PEM-encoded bytes (typically read from a root certificates bundle).\n\t// CAData takes precedence over CAFile\n\t// +listType=atomic\n\tCAData []byte `json:\"caData,omitempty\"`\n}\n\n// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n\n// DynamicResourcesArgs holds arguments used to configure the DynamicResources plugin.\ntype DynamicResourcesArgs struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\n\t// FilterTimeout limits the amount of time that the filter operation may\n\t// take per node to search for devices that can be allocated to schedule\n\t// a pod to that node.\n\t//\n\t// In typical scenarios, this operation should complete in 10 to 200\n\t// milliseconds, but could also be longer depending on the number of\n\t// requests per ResourceClaim, number of ResourceClaims, number of\n\t// published devices in ResourceSlices, and the complexity of the\n\t// requests. 
Other checks besides CEL evaluation also take time (usage\n\t// checks, match attributes, etc.).\n\t//\n\t// Therefore the scheduler plugin applies this timeout. If the timeout\n\t// is reached, the Pod is considered unschedulable for the node.\n\t// If filtering succeeds for some other node(s), those are picked instead.\n\t// If filtering fails for all of them, the Pod is placed in the\n\t// unschedulable queue. It will get checked again if changes in\n\t// e.g. ResourceSlices or ResourceClaims indicate that\n\t// another scheduling attempt might succeed. If this fails repeatedly,\n\t// exponential backoff slows down future attempts.\n\t//\n\t// The default is 10 seconds.\n\t// This is sufficient to prevent worst-case scenarios while not impacting normal\n\t// usage of DRA. However, slow filtering can slow down Pod scheduling\n\t// also for Pods not using DRA. Administrators can reduce the timeout\n\t// after checking the\n\t// `scheduler_framework_extension_point_duration_seconds` metrics.\n\t//\n\t// Setting it to zero completely disables the timeout.\n\tFilterTimeout *metav1.Duration `json:\"filterTimeout\"`\n\n\t// BindingTimeout limits how long the PreBind extension point may wait for\n\t// ResourceClaim device BindingConditions to become satisfied when such\n\t// conditions are present. While waiting, the scheduler periodically checks\n\t// device status. If the timeout elapses before all required conditions are\n\t// true (or any bindingFailureConditions become true), the allocation is\n\t// cleared and the Pod re-enters the scheduling queue. 
Note that the same or other node may be\n\t// chosen if feasible; otherwise the Pod is placed in the unschedulable queue and\n\t// retried based on cluster changes and backoff.\n\t//\n\t// Defaults & feature gates:\n\t//   - Defaults to 10 minutes when the DRADeviceBindingConditions feature gate is enabled.\n\t//   - Has effect only when BOTH DRADeviceBindingConditions and\n\t//     DRAResourceClaimDeviceStatus are enabled; otherwise omit this field.\n\t//   - When DRADeviceBindingConditions is disabled, setting this field is considered an error.\n\t//\n\t// Valid values:\n\t//   - >=1s (non-zero). No upper bound is enforced.\n\t//\n\t// Tuning guidance:\n\t//   - Lower values reduce time-to-retry when devices aren’t ready but can\n\t//     increase churn if drivers typically need longer to report readiness.\n\t//   - Review scheduler latency metrics (e.g. PreBind duration in\n\t//     `scheduler_framework_extension_point_duration_seconds`) and driver\n\t//     readiness behavior before tightening this timeout.\n\tBindingTimeout *metav1.Duration `json:\"bindingTimeout,omitempty\"`\n}\n\nconst DynamicResourcesFilterTimeoutDefault = 10 * time.Second\nconst DynamicResourcesBindingTimeoutDefault = 600 * time.Second\n"
  },
  {
    "path": "config/v1/types_pluginargs.go",
    "content": "/*\nCopyright 2022 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1\n\nimport (\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\n// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n\n// DefaultPreemptionArgs holds arguments used to configure the\n// DefaultPreemption plugin.\ntype DefaultPreemptionArgs struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\n\t// MinCandidateNodesPercentage is the minimum number of candidates to\n\t// shortlist when dry running preemption as a percentage of number of nodes.\n\t// Must be in the range [0, 100]. Defaults to 10% of the cluster size if\n\t// unspecified.\n\tMinCandidateNodesPercentage *int32 `json:\"minCandidateNodesPercentage,omitempty\"`\n\t// MinCandidateNodesAbsolute is the absolute minimum number of candidates to\n\t// shortlist. The likely number of candidates enumerated for dry running\n\t// preemption is given by the formula:\n\t// numCandidates = max(numNodes * minCandidateNodesPercentage, minCandidateNodesAbsolute)\n\t// We say \"likely\" because there are other factors such as PDB violations\n\t// that play a role in the number of candidates shortlisted. Must be at least\n\t// 0 nodes. 
Defaults to 100 nodes if unspecified.\n\tMinCandidateNodesAbsolute *int32 `json:\"minCandidateNodesAbsolute,omitempty\"`\n}\n\n// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n\n// InterPodAffinityArgs holds arguments used to configure the InterPodAffinity plugin.\ntype InterPodAffinityArgs struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\n\t// HardPodAffinityWeight is the scoring weight for existing pods with a\n\t// matching hard affinity to the incoming pod.\n\tHardPodAffinityWeight *int32 `json:\"hardPodAffinityWeight,omitempty\"`\n\n\t// IgnorePreferredTermsOfExistingPods configures the scheduler to ignore existing pods' preferred affinity\n\t// rules when scoring candidate nodes, unless the incoming pod has inter-pod affinities.\n\tIgnorePreferredTermsOfExistingPods bool `json:\"ignorePreferredTermsOfExistingPods\"`\n}\n\n// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n\n// NodeResourcesFitArgs holds arguments used to configure the NodeResourcesFit plugin.\ntype NodeResourcesFitArgs struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\n\t// IgnoredResources is the list of resources that NodeResources fit filter\n\t// should ignore. This doesn't apply to scoring.\n\t// +listType=atomic\n\tIgnoredResources []string `json:\"ignoredResources,omitempty\"`\n\t// IgnoredResourceGroups defines the list of resource groups that NodeResources fit filter should ignore.\n\t// e.g. if group is [\"example.com\"], it will ignore all resource names that begin\n\t// with \"example.com\", such as \"example.com/aaa\" and \"example.com/bbb\".\n\t// A resource group name can't contain '/'. 
This doesn't apply to scoring.\n\t// +listType=atomic\n\tIgnoredResourceGroups []string `json:\"ignoredResourceGroups,omitempty\"`\n\n\t// ScoringStrategy selects the node resource scoring strategy.\n\t// The default strategy is LeastAllocated with an equal \"cpu\" and \"memory\" weight.\n\tScoringStrategy *ScoringStrategy `json:\"scoringStrategy,omitempty\"`\n}\n\n// PodTopologySpreadConstraintsDefaulting defines how to set default constraints\n// for the PodTopologySpread plugin.\ntype PodTopologySpreadConstraintsDefaulting string\n\nconst (\n\t// SystemDefaulting instructs to use the kubernetes defined default.\n\tSystemDefaulting PodTopologySpreadConstraintsDefaulting = \"System\"\n\t// ListDefaulting instructs to use the config provided default.\n\tListDefaulting PodTopologySpreadConstraintsDefaulting = \"List\"\n)\n\n// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n\n// PodTopologySpreadArgs holds arguments used to configure the PodTopologySpread plugin.\ntype PodTopologySpreadArgs struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\n\t// DefaultConstraints defines topology spread constraints to be applied to\n\t// Pods that don't define any in `pod.spec.topologySpreadConstraints`.\n\t// `.defaultConstraints[*].labelSelectors` must be empty, as they are\n\t// deduced from the Pod's membership to Services, ReplicationControllers,\n\t// ReplicaSets or StatefulSets.\n\t// When not empty, .defaultingType must be \"List\".\n\t// +optional\n\t// +listType=atomic\n\tDefaultConstraints []corev1.TopologySpreadConstraint `json:\"defaultConstraints,omitempty\"`\n\n\t// DefaultingType determines how .defaultConstraints are deduced. 
Can be one\n\t// of \"System\" or \"List\".\n\t//\n\t// - \"System\": Use kubernetes defined constraints that spread Pods among\n\t//   Nodes and Zones.\n\t// - \"List\": Use constraints defined in .defaultConstraints.\n\t//\n\t// Defaults to \"System\".\n\t// +optional\n\tDefaultingType PodTopologySpreadConstraintsDefaulting `json:\"defaultingType,omitempty\"`\n}\n\n// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n\n// NodeResourcesBalancedAllocationArgs holds arguments used to configure NodeResourcesBalancedAllocation plugin.\ntype NodeResourcesBalancedAllocationArgs struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\n\t// Resources to be managed, the default is \"cpu\" and \"memory\" if not specified.\n\t// +listType=map\n\t// +listMapKey=name\n\tResources []ResourceSpec `json:\"resources,omitempty\"`\n}\n\n// UtilizationShapePoint represents single point of priority function shape.\ntype UtilizationShapePoint struct {\n\t// Utilization (x axis). Valid values are 0 to 100. Fully utilized node maps to 100.\n\tUtilization int32 `json:\"utilization\"`\n\t// Score assigned to given utilization (y axis). Valid values are 0 to 10.\n\tScore int32 `json:\"score\"`\n}\n\n// ResourceSpec represents a single resource.\ntype ResourceSpec struct {\n\t// Name of the resource.\n\tName string `json:\"name\"`\n\t// Weight of the resource.\n\tWeight int64 `json:\"weight,omitempty\"`\n}\n\n// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n\n// VolumeBindingArgs holds arguments used to configure the VolumeBinding plugin.\ntype VolumeBindingArgs struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\n\t// BindTimeoutSeconds is the timeout in seconds in volume binding operation.\n\t// Value must be non-negative integer. 
The value zero indicates no waiting.\n\t// If this value is nil, the default value (600) will be used.\n\tBindTimeoutSeconds *int64 `json:\"bindTimeoutSeconds,omitempty\"`\n\n\t// Shape specifies the points defining the score function shape, which is\n\t// used to score nodes based on the utilization of provisioned PVs.\n\t// The utilization is calculated by dividing the total requested\n\t// storage of the pod by the total capacity of feasible PVs on each node.\n\t// Each point contains utilization (ranges from 0 to 100) and its\n\t// associated score (ranges from 0 to 10). You can tune the priority by\n\t// specifying different scores for different utilization numbers.\n\t// The default shape points are:\n\t// 1) 10 for 0 utilization\n\t// 2) 0 for 100 utilization\n\t// All points must be sorted in increasing order by utilization.\n\t// +featureGate=StorageCapacityScoring\n\t// +optional\n\t// +listType=atomic\n\tShape []UtilizationShapePoint `json:\"shape,omitempty\"`\n}\n\n// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object\n\n// NodeAffinityArgs holds arguments to configure the NodeAffinity plugin.\ntype NodeAffinityArgs struct {\n\tmetav1.TypeMeta `json:\",inline\"`\n\n\t// AddedAffinity is applied to all Pods additionally to the NodeAffinity\n\t// specified in the PodSpec. That is, Nodes need to satisfy AddedAffinity\n\t// AND .spec.NodeAffinity. 
AddedAffinity is empty by default (all Nodes\n\t// match).\n\t// When AddedAffinity is used, some Pods with affinity requirements that match\n\t// a specific Node (such as Daemonset Pods) might remain unschedulable.\n\t// +optional\n\tAddedAffinity *corev1.NodeAffinity `json:\"addedAffinity,omitempty\"`\n}\n\n// ScoringStrategyType the type of scoring strategy used in NodeResourcesFit plugin.\ntype ScoringStrategyType string\n\nconst (\n\t// LeastAllocated strategy prioritizes nodes with least allocated resources.\n\tLeastAllocated ScoringStrategyType = \"LeastAllocated\"\n\t// MostAllocated strategy prioritizes nodes with most allocated resources.\n\tMostAllocated ScoringStrategyType = \"MostAllocated\"\n\t// RequestedToCapacityRatio strategy allows specifying a custom shape function\n\t// to score nodes based on the request to capacity ratio.\n\tRequestedToCapacityRatio ScoringStrategyType = \"RequestedToCapacityRatio\"\n)\n\n// ScoringStrategy define ScoringStrategyType for node resource plugin\ntype ScoringStrategy struct {\n\t// Type selects which strategy to run.\n\tType ScoringStrategyType `json:\"type,omitempty\"`\n\n\t// Resources to consider when scoring.\n\t// The default resource set includes \"cpu\" and \"memory\" with an equal weight.\n\t// Allowed weights go from 1 to 100.\n\t// Weight defaults to 1 if not specified or explicitly set to 0.\n\t// +listType=map\n\t// +listMapKey=name\n\tResources []ResourceSpec `json:\"resources,omitempty\"`\n\n\t// Arguments specific to RequestedToCapacityRatio strategy.\n\tRequestedToCapacityRatio *RequestedToCapacityRatioParam `json:\"requestedToCapacityRatio,omitempty\"`\n}\n\n// RequestedToCapacityRatioParam define RequestedToCapacityRatio parameters\ntype RequestedToCapacityRatioParam struct {\n\t// Shape is a list of points defining the scoring function shape.\n\t// +listType=atomic\n\tShape []UtilizationShapePoint `json:\"shape,omitempty\"`\n}\n"
  },
  {
    "path": "config/v1/zz_generated.deepcopy.go",
    "content": "//go:build !ignore_autogenerated\n// +build !ignore_autogenerated\n\n/*\nCopyright The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Code generated by deepcopy-gen. DO NOT EDIT.\n\npackage v1\n\nimport (\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n)\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *DefaultPreemptionArgs) DeepCopyInto(out *DefaultPreemptionArgs) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tif in.MinCandidateNodesPercentage != nil {\n\t\tin, out := &in.MinCandidateNodesPercentage, &out.MinCandidateNodesPercentage\n\t\t*out = new(int32)\n\t\t**out = **in\n\t}\n\tif in.MinCandidateNodesAbsolute != nil {\n\t\tin, out := &in.MinCandidateNodesAbsolute, &out.MinCandidateNodesAbsolute\n\t\t*out = new(int32)\n\t\t**out = **in\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DefaultPreemptionArgs.\nfunc (in *DefaultPreemptionArgs) DeepCopy() *DefaultPreemptionArgs {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(DefaultPreemptionArgs)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *DefaultPreemptionArgs) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil 
{\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *DynamicResourcesArgs) DeepCopyInto(out *DynamicResourcesArgs) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tif in.FilterTimeout != nil {\n\t\tin, out := &in.FilterTimeout, &out.FilterTimeout\n\t\t*out = new(metav1.Duration)\n\t\t**out = **in\n\t}\n\tif in.BindingTimeout != nil {\n\t\tin, out := &in.BindingTimeout, &out.BindingTimeout\n\t\t*out = new(metav1.Duration)\n\t\t**out = **in\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DynamicResourcesArgs.\nfunc (in *DynamicResourcesArgs) DeepCopy() *DynamicResourcesArgs {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(DynamicResourcesArgs)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *DynamicResourcesArgs) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *Extender) DeepCopyInto(out *Extender) {\n\t*out = *in\n\tif in.TLSConfig != nil {\n\t\tin, out := &in.TLSConfig, &out.TLSConfig\n\t\t*out = new(ExtenderTLSConfig)\n\t\t(*in).DeepCopyInto(*out)\n\t}\n\tout.HTTPTimeout = in.HTTPTimeout\n\tif in.ManagedResources != nil {\n\t\tin, out := &in.ManagedResources, &out.ManagedResources\n\t\t*out = make([]ExtenderManagedResource, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Extender.\nfunc (in *Extender) DeepCopy() *Extender {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(Extender)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ExtenderManagedResource) DeepCopyInto(out *ExtenderManagedResource) {\n\t*out = *in\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtenderManagedResource.\nfunc (in *ExtenderManagedResource) DeepCopy() *ExtenderManagedResource {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ExtenderManagedResource)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ExtenderTLSConfig) DeepCopyInto(out *ExtenderTLSConfig) {\n\t*out = *in\n\tif in.CertData != nil {\n\t\tin, out := &in.CertData, &out.CertData\n\t\t*out = make([]byte, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tif in.KeyData != nil {\n\t\tin, out := &in.KeyData, &out.KeyData\n\t\t*out = make([]byte, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tif in.CAData != nil {\n\t\tin, out := &in.CAData, &out.CAData\n\t\t*out = make([]byte, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtenderTLSConfig.\nfunc (in *ExtenderTLSConfig) DeepCopy() *ExtenderTLSConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ExtenderTLSConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *InterPodAffinityArgs) DeepCopyInto(out *InterPodAffinityArgs) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tif in.HardPodAffinityWeight != nil {\n\t\tin, out := &in.HardPodAffinityWeight, &out.HardPodAffinityWeight\n\t\t*out = new(int32)\n\t\t**out = **in\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new InterPodAffinityArgs.\nfunc (in *InterPodAffinityArgs) DeepCopy() *InterPodAffinityArgs {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(InterPodAffinityArgs)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *InterPodAffinityArgs) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *KubeSchedulerConfiguration) DeepCopyInto(out *KubeSchedulerConfiguration) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tif in.Parallelism != nil {\n\t\tin, out := &in.Parallelism, &out.Parallelism\n\t\t*out = new(int32)\n\t\t**out = **in\n\t}\n\tin.LeaderElection.DeepCopyInto(&out.LeaderElection)\n\tout.ClientConnection = in.ClientConnection\n\tin.DebuggingConfiguration.DeepCopyInto(&out.DebuggingConfiguration)\n\tif in.PercentageOfNodesToScore != nil {\n\t\tin, out := &in.PercentageOfNodesToScore, &out.PercentageOfNodesToScore\n\t\t*out = new(int32)\n\t\t**out = **in\n\t}\n\tif in.PodInitialBackoffSeconds != nil {\n\t\tin, out := &in.PodInitialBackoffSeconds, &out.PodInitialBackoffSeconds\n\t\t*out = new(int64)\n\t\t**out = **in\n\t}\n\tif in.PodMaxBackoffSeconds != nil {\n\t\tin, out := &in.PodMaxBackoffSeconds, &out.PodMaxBackoffSeconds\n\t\t*out = new(int64)\n\t\t**out = **in\n\t}\n\tif in.Profiles != nil {\n\t\tin, out := &in.Profiles, &out.Profiles\n\t\t*out = make([]KubeSchedulerProfile, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n\tif in.Extenders != nil {\n\t\tin, out := &in.Extenders, &out.Extenders\n\t\t*out = make([]Extender, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeSchedulerConfiguration.\nfunc (in *KubeSchedulerConfiguration) DeepCopy() *KubeSchedulerConfiguration {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(KubeSchedulerConfiguration)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *KubeSchedulerConfiguration) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, 
writing into out. in must be non-nil.\nfunc (in *KubeSchedulerProfile) DeepCopyInto(out *KubeSchedulerProfile) {\n\t*out = *in\n\tif in.SchedulerName != nil {\n\t\tin, out := &in.SchedulerName, &out.SchedulerName\n\t\t*out = new(string)\n\t\t**out = **in\n\t}\n\tif in.PercentageOfNodesToScore != nil {\n\t\tin, out := &in.PercentageOfNodesToScore, &out.PercentageOfNodesToScore\n\t\t*out = new(int32)\n\t\t**out = **in\n\t}\n\tif in.Plugins != nil {\n\t\tin, out := &in.Plugins, &out.Plugins\n\t\t*out = new(Plugins)\n\t\t(*in).DeepCopyInto(*out)\n\t}\n\tif in.PluginConfig != nil {\n\t\tin, out := &in.PluginConfig, &out.PluginConfig\n\t\t*out = make([]PluginConfig, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeSchedulerProfile.\nfunc (in *KubeSchedulerProfile) DeepCopy() *KubeSchedulerProfile {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(KubeSchedulerProfile)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *NodeAffinityArgs) DeepCopyInto(out *NodeAffinityArgs) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tif in.AddedAffinity != nil {\n\t\tin, out := &in.AddedAffinity, &out.AddedAffinity\n\t\t*out = new(corev1.NodeAffinity)\n\t\t(*in).DeepCopyInto(*out)\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeAffinityArgs.\nfunc (in *NodeAffinityArgs) DeepCopy() *NodeAffinityArgs {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(NodeAffinityArgs)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *NodeAffinityArgs) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *NodeResourcesBalancedAllocationArgs) DeepCopyInto(out *NodeResourcesBalancedAllocationArgs) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tif in.Resources != nil {\n\t\tin, out := &in.Resources, &out.Resources\n\t\t*out = make([]ResourceSpec, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeResourcesBalancedAllocationArgs.\nfunc (in *NodeResourcesBalancedAllocationArgs) DeepCopy() *NodeResourcesBalancedAllocationArgs {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(NodeResourcesBalancedAllocationArgs)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *NodeResourcesBalancedAllocationArgs) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *NodeResourcesFitArgs) DeepCopyInto(out *NodeResourcesFitArgs) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tif in.IgnoredResources != nil {\n\t\tin, out := &in.IgnoredResources, &out.IgnoredResources\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tif in.IgnoredResourceGroups != nil {\n\t\tin, out := &in.IgnoredResourceGroups, &out.IgnoredResourceGroups\n\t\t*out = make([]string, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tif in.ScoringStrategy != nil {\n\t\tin, out := &in.ScoringStrategy, &out.ScoringStrategy\n\t\t*out = new(ScoringStrategy)\n\t\t(*in).DeepCopyInto(*out)\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeResourcesFitArgs.\nfunc (in *NodeResourcesFitArgs) DeepCopy() *NodeResourcesFitArgs {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(NodeResourcesFitArgs)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *NodeResourcesFitArgs) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *Plugin) DeepCopyInto(out *Plugin) {\n\t*out = *in\n\tif in.Weight != nil {\n\t\tin, out := &in.Weight, &out.Weight\n\t\t*out = new(int32)\n\t\t**out = **in\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Plugin.\nfunc (in *Plugin) DeepCopy() *Plugin {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(Plugin)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *PluginConfig) DeepCopyInto(out *PluginConfig) {\n\t*out = *in\n\tin.Args.DeepCopyInto(&out.Args)\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PluginConfig.\nfunc (in *PluginConfig) DeepCopy() *PluginConfig {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(PluginConfig)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *PluginSet) DeepCopyInto(out *PluginSet) {\n\t*out = *in\n\tif in.Enabled != nil {\n\t\tin, out := &in.Enabled, &out.Enabled\n\t\t*out = make([]Plugin, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n\tif in.Disabled != nil {\n\t\tin, out := &in.Disabled, &out.Disabled\n\t\t*out = make([]Plugin, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PluginSet.\nfunc (in *PluginSet) DeepCopy() *PluginSet {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(PluginSet)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *Plugins) DeepCopyInto(out *Plugins) {\n\t*out = *in\n\tin.PreEnqueue.DeepCopyInto(&out.PreEnqueue)\n\tin.QueueSort.DeepCopyInto(&out.QueueSort)\n\tin.PreFilter.DeepCopyInto(&out.PreFilter)\n\tin.Filter.DeepCopyInto(&out.Filter)\n\tin.PostFilter.DeepCopyInto(&out.PostFilter)\n\tin.PreScore.DeepCopyInto(&out.PreScore)\n\tin.Score.DeepCopyInto(&out.Score)\n\tin.Reserve.DeepCopyInto(&out.Reserve)\n\tin.Permit.DeepCopyInto(&out.Permit)\n\tin.PreBind.DeepCopyInto(&out.PreBind)\n\tin.Bind.DeepCopyInto(&out.Bind)\n\tin.PostBind.DeepCopyInto(&out.PostBind)\n\tin.MultiPoint.DeepCopyInto(&out.MultiPoint)\n\tin.PlacementGenerate.DeepCopyInto(&out.PlacementGenerate)\n\tin.PlacementScore.DeepCopyInto(&out.PlacementScore)\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Plugins.\nfunc (in *Plugins) DeepCopy() *Plugins {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(Plugins)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *PodTopologySpreadArgs) DeepCopyInto(out *PodTopologySpreadArgs) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tif in.DefaultConstraints != nil {\n\t\tin, out := &in.DefaultConstraints, &out.DefaultConstraints\n\t\t*out = make([]corev1.TopologySpreadConstraint, len(*in))\n\t\tfor i := range *in {\n\t\t\t(*in)[i].DeepCopyInto(&(*out)[i])\n\t\t}\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodTopologySpreadArgs.\nfunc (in *PodTopologySpreadArgs) DeepCopy() *PodTopologySpreadArgs {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(PodTopologySpreadArgs)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *PodTopologySpreadArgs) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *RequestedToCapacityRatioParam) DeepCopyInto(out *RequestedToCapacityRatioParam) {\n\t*out = *in\n\tif in.Shape != nil {\n\t\tin, out := &in.Shape, &out.Shape\n\t\t*out = make([]UtilizationShapePoint, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RequestedToCapacityRatioParam.\nfunc (in *RequestedToCapacityRatioParam) DeepCopy() *RequestedToCapacityRatioParam {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(RequestedToCapacityRatioParam)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ResourceSpec) DeepCopyInto(out *ResourceSpec) {\n\t*out = *in\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceSpec.\nfunc (in *ResourceSpec) DeepCopy() *ResourceSpec {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ResourceSpec)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ScoringStrategy) DeepCopyInto(out *ScoringStrategy) {\n\t*out = *in\n\tif in.Resources != nil {\n\t\tin, out := &in.Resources, &out.Resources\n\t\t*out = make([]ResourceSpec, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\tif in.RequestedToCapacityRatio != nil {\n\t\tin, out := &in.RequestedToCapacityRatio, &out.RequestedToCapacityRatio\n\t\t*out = new(RequestedToCapacityRatioParam)\n\t\t(*in).DeepCopyInto(*out)\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScoringStrategy.\nfunc (in *ScoringStrategy) DeepCopy() *ScoringStrategy {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ScoringStrategy)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *UtilizationShapePoint) DeepCopyInto(out *UtilizationShapePoint) {\n\t*out = *in\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new UtilizationShapePoint.\nfunc (in *UtilizationShapePoint) DeepCopy() *UtilizationShapePoint {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(UtilizationShapePoint)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *VolumeBindingArgs) DeepCopyInto(out *VolumeBindingArgs) {\n\t*out = *in\n\tout.TypeMeta = in.TypeMeta\n\tif in.BindTimeoutSeconds != nil {\n\t\tin, out := &in.BindTimeoutSeconds, &out.BindTimeoutSeconds\n\t\t*out = new(int64)\n\t\t**out = **in\n\t}\n\tif in.Shape != nil {\n\t\tin, out := &in.Shape, &out.Shape\n\t\t*out = make([]UtilizationShapePoint, len(*in))\n\t\tcopy(*out, *in)\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VolumeBindingArgs.\nfunc (in *VolumeBindingArgs) DeepCopy() *VolumeBindingArgs {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(VolumeBindingArgs)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.\nfunc (in *VolumeBindingArgs) DeepCopyObject() runtime.Object {\n\tif c := in.DeepCopy(); c != nil {\n\t\treturn c\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "config/v1/zz_generated.model_name.go",
    "content": "//go:build !ignore_autogenerated\n// +build !ignore_autogenerated\n\n/*\nCopyright The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Code generated by openapi-gen. DO NOT EDIT.\n\npackage v1\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in DefaultPreemptionArgs) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.DefaultPreemptionArgs\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in DynamicResourcesArgs) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.DynamicResourcesArgs\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in Extender) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.Extender\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in ExtenderManagedResource) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.ExtenderManagedResource\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in ExtenderTLSConfig) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.ExtenderTLSConfig\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in InterPodAffinityArgs) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.InterPodAffinityArgs\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in 
KubeSchedulerConfiguration) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.KubeSchedulerConfiguration\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in KubeSchedulerProfile) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.KubeSchedulerProfile\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in NodeAffinityArgs) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.NodeAffinityArgs\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in NodeResourcesBalancedAllocationArgs) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.NodeResourcesBalancedAllocationArgs\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in NodeResourcesFitArgs) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.NodeResourcesFitArgs\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in Plugin) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.Plugin\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in PluginConfig) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.PluginConfig\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in PluginSet) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.PluginSet\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in Plugins) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.Plugins\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in PodTopologySpreadArgs) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.PodTopologySpreadArgs\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in RequestedToCapacityRatioParam) OpenAPIModelName() string 
{\n\treturn \"io.k8s.kube-scheduler.config.v1.RequestedToCapacityRatioParam\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in ResourceSpec) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.ResourceSpec\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in ScoringStrategy) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.ScoringStrategy\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in UtilizationShapePoint) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.UtilizationShapePoint\"\n}\n\n// OpenAPIModelName returns the OpenAPI model name for this type.\nfunc (in VolumeBindingArgs) OpenAPIModelName() string {\n\treturn \"io.k8s.kube-scheduler.config.v1.VolumeBindingArgs\"\n}\n"
  },
  {
    "path": "doc.go",
    "content": "/*\nCopyright 2021 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage kubescheduler\n"
  },
  {
    "path": "extender/OWNERS",
    "content": "# See the OWNERS docs at https://go.k8s.io/owners\n\n# Disable inheritance as this is an api owners file\noptions:\n  no_parent_owners: true\napprovers:\n  - api-approvers\nreviewers:\n  - api-reviewers\n  - sig-scheduling\nlabels:\n  - kind/api-change\n  - sig/scheduling\n"
  },
  {
    "path": "extender/v1/doc.go",
    "content": "/*\nCopyright 2019 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// +k8s:deepcopy-gen=package\n\n// Package v1 contains scheduler API objects.\npackage v1\n"
  },
  {
    "path": "extender/v1/types.go",
    "content": "/*\nCopyright 2019 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1\n\nimport (\n\tv1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/types\"\n)\n\nconst (\n\t// MinExtenderPriority defines the min priority value for extender.\n\tMinExtenderPriority int64 = 0\n\n\t// MaxExtenderPriority defines the max priority value for extender.\n\tMaxExtenderPriority int64 = 10\n)\n\n// ExtenderPreemptionResult represents the result returned by preemption phase of extender.\ntype ExtenderPreemptionResult struct {\n\tNodeNameToMetaVictims map[string]*MetaVictims\n}\n\n// ExtenderPreemptionArgs represents the arguments needed by the extender to preempt pods on nodes.\ntype ExtenderPreemptionArgs struct {\n\t// Pod being scheduled\n\tPod *v1.Pod\n\t// Victims map generated by scheduler preemption phase\n\t// Only set NodeNameToMetaVictims if Extender.NodeCacheCapable == true. 
Otherwise, only set NodeNameToVictims.\n\tNodeNameToVictims     map[string]*Victims\n\tNodeNameToMetaVictims map[string]*MetaVictims\n}\n\n// Victims represents:\n//\n//\tpods:  a group of pods expected to be preempted.\n//\tnumPDBViolations: the count of violations of PodDisruptionBudget\ntype Victims struct {\n\tPods             []*v1.Pod\n\tNumPDBViolations int64\n}\n\n// MetaPod represents an identifier for a v1.Pod\ntype MetaPod struct {\n\tUID string\n}\n\n// MetaVictims represents:\n//\n//\tpods:  a group of pods expected to be preempted.\n//\t  Only Pod identifiers will be sent and users are expected to get the v1.Pod in their own way.\n//\tnumPDBViolations: the count of violations of PodDisruptionBudget\ntype MetaVictims struct {\n\tPods             []*MetaPod\n\tNumPDBViolations int64\n}\n\n// ExtenderArgs represents the arguments needed by the extender to filter/prioritize\n// nodes for a pod.\ntype ExtenderArgs struct {\n\t// Pod being scheduled\n\tPod *v1.Pod\n\t// List of candidate nodes where the pod can be scheduled; to be populated\n\t// only if Extender.NodeCacheCapable == false\n\tNodes *v1.NodeList\n\t// List of candidate node names where the pod can be scheduled; to be\n\t// populated only if Extender.NodeCacheCapable == true\n\tNodeNames *[]string\n}\n\n// FailedNodesMap represents the filtered out nodes, with node names and failure messages\ntype FailedNodesMap map[string]string\n\n// ExtenderFilterResult represents the results of a filter call to an extender\ntype ExtenderFilterResult struct {\n\t// Filtered set of nodes where the pod can be scheduled; to be populated\n\t// only if Extender.NodeCacheCapable == false\n\tNodes *v1.NodeList\n\t// Filtered set of nodes where the pod can be scheduled; to be populated\n\t// only if Extender.NodeCacheCapable == true\n\tNodeNames *[]string\n\t// Filtered out nodes where the pod can't be scheduled and the failure messages\n\tFailedNodes FailedNodesMap\n\t// Filtered out nodes where the pod can't be scheduled 
and preemption would\n\t// not change anything. The value is the failure message, the same as in FailedNodes.\n\t// Nodes specified here take precedence over FailedNodes.\n\tFailedAndUnresolvableNodes FailedNodesMap\n\t// Error message indicating failure\n\tError string\n}\n\n// ExtenderBindingArgs represents the arguments to an extender for binding a pod to a node.\ntype ExtenderBindingArgs struct {\n\t// PodName is the name of the pod being bound\n\tPodName string\n\t// PodNamespace is the namespace of the pod being bound\n\tPodNamespace string\n\t// PodUID is the UID of the pod being bound\n\tPodUID types.UID\n\t// Node selected by the scheduler\n\tNode string\n}\n\n// ExtenderBindingResult represents the result of binding a pod to a node from an extender.\ntype ExtenderBindingResult struct {\n\t// Error message indicating failure\n\tError string\n}\n\n// HostPriority represents the priority of scheduling to a particular host; a higher priority is better.\ntype HostPriority struct {\n\t// Name of the host\n\tHost string\n\t// Score associated with the host\n\tScore int64\n}\n\n// HostPriorityList declares a []HostPriority type.\ntype HostPriorityList []HostPriority\n"
  },
  {
    "path": "extender/v1/types_test.go",
    "content": "/*\nCopyright 2020 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage v1\n\nimport (\n\t\"encoding/json\"\n\t\"reflect\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/google/go-cmp/cmp\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/types\"\n)\n\n// TestCompatibility verifies that the types in extender/v1 can be successfully encoded to json and decoded back, even when lowercased,\n// since these types were written around JSON tags and we need to enforce consistency on them now.\n// @TODO(88634): v2 of these types should be defined with proper JSON tags to enforce field casing to a single approach\nfunc TestCompatibility(t *testing.T) {\n\ttestcases := []struct {\n\t\temptyObj   interface{}\n\t\tobj        interface{}\n\t\texpectJSON string\n\t}{\n\t\t{\n\t\t\temptyObj: &ExtenderPreemptionResult{},\n\t\t\tobj: &ExtenderPreemptionResult{\n\t\t\t\tNodeNameToMetaVictims: map[string]*MetaVictims{\"foo\": {Pods: []*MetaPod{{UID: \"myuid\"}}, NumPDBViolations: 1}},\n\t\t\t},\n\t\t\texpectJSON: `{\"NodeNameToMetaVictims\":{\"foo\":{\"Pods\":[{\"UID\":\"myuid\"}],\"NumPDBViolations\":1}}}`,\n\t\t},\n\t\t{\n\t\t\temptyObj: &ExtenderPreemptionArgs{},\n\t\t\tobj: &ExtenderPreemptionArgs{\n\t\t\t\tPod:                   &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: \"podname\"}},\n\t\t\t\tNodeNameToVictims:     map[string]*Victims{\"foo\": {Pods: 
[]*corev1.Pod{{ObjectMeta: metav1.ObjectMeta{Name: \"podname\"}}}, NumPDBViolations: 1}},\n\t\t\t\tNodeNameToMetaVictims: map[string]*MetaVictims{\"foo\": {Pods: []*MetaPod{{UID: \"myuid\"}}, NumPDBViolations: 1}},\n\t\t\t},\n\t\t\texpectJSON: `{\"Pod\":{\"metadata\":{\"name\":\"podname\"},\"spec\":{\"containers\":null},\"status\":{}},\"NodeNameToVictims\":{\"foo\":{\"Pods\":[{\"metadata\":{\"name\":\"podname\"},\"spec\":{\"containers\":null},\"status\":{}}],\"NumPDBViolations\":1}},\"NodeNameToMetaVictims\":{\"foo\":{\"Pods\":[{\"UID\":\"myuid\"}],\"NumPDBViolations\":1}}}`,\n\t\t},\n\t\t{\n\t\t\temptyObj: &ExtenderArgs{},\n\t\t\tobj: &ExtenderArgs{\n\t\t\t\tPod:       &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: \"podname\"}},\n\t\t\t\tNodes:     &corev1.NodeList{Items: []corev1.Node{{ObjectMeta: metav1.ObjectMeta{Name: \"nodename\"}}}},\n\t\t\t\tNodeNames: &[]string{\"node1\"},\n\t\t\t},\n\t\t\texpectJSON: `{\"Pod\":{\"metadata\":{\"name\":\"podname\"},\"spec\":{\"containers\":null},\"status\":{}},\"Nodes\":{\"metadata\":{},\"items\":[{\"metadata\":{\"name\":\"nodename\"},\"spec\":{},\"status\":{\"daemonEndpoints\":{\"kubeletEndpoint\":{\"Port\":0}},\"nodeInfo\":{\"machineID\":\"\",\"systemUUID\":\"\",\"bootID\":\"\",\"kernelVersion\":\"\",\"osImage\":\"\",\"containerRuntimeVersion\":\"\",\"kubeletVersion\":\"\",\"kubeProxyVersion\":\"\",\"operatingSystem\":\"\",\"architecture\":\"\"}}}]},\"NodeNames\":[\"node1\"]}`,\n\t\t},\n\t\t{\n\t\t\temptyObj: &ExtenderFilterResult{},\n\t\t\tobj: &ExtenderFilterResult{\n\t\t\t\tNodes:                      &corev1.NodeList{Items: []corev1.Node{{ObjectMeta: metav1.ObjectMeta{Name: \"nodename\"}}}},\n\t\t\t\tNodeNames:                  &[]string{\"node1\"},\n\t\t\t\tFailedNodes:                FailedNodesMap{\"foo\": \"bar\"},\n\t\t\t\tFailedAndUnresolvableNodes: FailedNodesMap{\"baz\": \"qux\"},\n\t\t\t\tError:                      \"myerror\",\n\t\t\t},\n\t\t\texpectJSON: 
`{\"Nodes\":{\"metadata\":{},\"items\":[{\"metadata\":{\"name\":\"nodename\"},\"spec\":{},\"status\":{\"daemonEndpoints\":{\"kubeletEndpoint\":{\"Port\":0}},\"nodeInfo\":{\"machineID\":\"\",\"systemUUID\":\"\",\"bootID\":\"\",\"kernelVersion\":\"\",\"osImage\":\"\",\"containerRuntimeVersion\":\"\",\"kubeletVersion\":\"\",\"kubeProxyVersion\":\"\",\"operatingSystem\":\"\",\"architecture\":\"\"}}}]},\"NodeNames\":[\"node1\"],\"FailedNodes\":{\"foo\":\"bar\"},\"FailedAndUnresolvableNodes\":{\"baz\":\"qux\"},\"Error\":\"myerror\"}`,\n\t\t},\n\t\t{\n\t\t\temptyObj: &ExtenderBindingArgs{},\n\t\t\tobj: &ExtenderBindingArgs{\n\t\t\t\tPodName:      \"mypodname\",\n\t\t\t\tPodNamespace: \"mypodnamespace\",\n\t\t\t\tPodUID:       types.UID(\"mypoduid\"),\n\t\t\t\tNode:         \"mynode\",\n\t\t\t},\n\t\t\texpectJSON: `{\"PodName\":\"mypodname\",\"PodNamespace\":\"mypodnamespace\",\"PodUID\":\"mypoduid\",\"Node\":\"mynode\"}`,\n\t\t},\n\t\t{\n\t\t\temptyObj:   &ExtenderBindingResult{},\n\t\t\tobj:        &ExtenderBindingResult{Error: \"myerror\"},\n\t\t\texpectJSON: `{\"Error\":\"myerror\"}`,\n\t\t},\n\t\t{\n\t\t\temptyObj:   &HostPriority{},\n\t\t\tobj:        &HostPriority{Host: \"myhost\", Score: 1},\n\t\t\texpectJSON: `{\"Host\":\"myhost\",\"Score\":1}`,\n\t\t},\n\t}\n\n\tfor _, tc := range testcases {\n\t\tt.Run(reflect.TypeOf(tc.obj).String(), func(t *testing.T) {\n\t\t\tdata, err := json.Marshal(tc.obj)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif string(data) != tc.expectJSON {\n\t\t\t\tt.Fatalf(\"expected %s, got %s\", tc.expectJSON, string(data))\n\t\t\t}\n\t\t\tif err := json.Unmarshal([]byte(strings.ToLower(string(data))), tc.emptyObj); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(tc.emptyObj, tc.obj) {\n\t\t\t\tt.Fatalf(\"round-tripped case-insensitive diff: %s\", cmp.Diff(tc.obj, tc.emptyObj))\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "extender/v1/zz_generated.deepcopy.go",
    "content": "//go:build !ignore_autogenerated\n// +build !ignore_autogenerated\n\n/*\nCopyright The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Code generated by deepcopy-gen. DO NOT EDIT.\n\npackage v1\n\nimport (\n\tcorev1 \"k8s.io/api/core/v1\"\n)\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ExtenderArgs) DeepCopyInto(out *ExtenderArgs) {\n\t*out = *in\n\tif in.Pod != nil {\n\t\tin, out := &in.Pod, &out.Pod\n\t\t*out = new(corev1.Pod)\n\t\t(*in).DeepCopyInto(*out)\n\t}\n\tif in.Nodes != nil {\n\t\tin, out := &in.Nodes, &out.Nodes\n\t\t*out = new(corev1.NodeList)\n\t\t(*in).DeepCopyInto(*out)\n\t}\n\tif in.NodeNames != nil {\n\t\tin, out := &in.NodeNames, &out.NodeNames\n\t\t*out = new([]string)\n\t\tif **in != nil {\n\t\t\tin, out := *in, *out\n\t\t\t*out = make([]string, len(*in))\n\t\t\tcopy(*out, *in)\n\t\t}\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtenderArgs.\nfunc (in *ExtenderArgs) DeepCopy() *ExtenderArgs {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ExtenderArgs)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ExtenderBindingArgs) DeepCopyInto(out *ExtenderBindingArgs) {\n\t*out = *in\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtenderBindingArgs.\nfunc (in *ExtenderBindingArgs) DeepCopy() *ExtenderBindingArgs {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ExtenderBindingArgs)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *ExtenderBindingResult) DeepCopyInto(out *ExtenderBindingResult) {\n\t*out = *in\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtenderBindingResult.\nfunc (in *ExtenderBindingResult) DeepCopy() *ExtenderBindingResult {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ExtenderBindingResult)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ExtenderFilterResult) DeepCopyInto(out *ExtenderFilterResult) {\n\t*out = *in\n\tif in.Nodes != nil {\n\t\tin, out := &in.Nodes, &out.Nodes\n\t\t*out = new(corev1.NodeList)\n\t\t(*in).DeepCopyInto(*out)\n\t}\n\tif in.NodeNames != nil {\n\t\tin, out := &in.NodeNames, &out.NodeNames\n\t\t*out = new([]string)\n\t\tif **in != nil {\n\t\t\tin, out := *in, *out\n\t\t\t*out = make([]string, len(*in))\n\t\t\tcopy(*out, *in)\n\t\t}\n\t}\n\tif in.FailedNodes != nil {\n\t\tin, out := &in.FailedNodes, &out.FailedNodes\n\t\t*out = make(FailedNodesMap, len(*in))\n\t\tfor key, val := range *in {\n\t\t\t(*out)[key] = val\n\t\t}\n\t}\n\tif in.FailedAndUnresolvableNodes != nil {\n\t\tin, out := &in.FailedAndUnresolvableNodes, &out.FailedAndUnresolvableNodes\n\t\t*out = make(FailedNodesMap, len(*in))\n\t\tfor key, val := range *in {\n\t\t\t(*out)[key] = val\n\t\t}\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtenderFilterResult.\nfunc (in *ExtenderFilterResult) DeepCopy() *ExtenderFilterResult {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ExtenderFilterResult)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ExtenderPreemptionArgs) DeepCopyInto(out *ExtenderPreemptionArgs) {\n\t*out = *in\n\tif in.Pod != nil {\n\t\tin, out := &in.Pod, &out.Pod\n\t\t*out = new(corev1.Pod)\n\t\t(*in).DeepCopyInto(*out)\n\t}\n\tif in.NodeNameToVictims != nil {\n\t\tin, out := &in.NodeNameToVictims, &out.NodeNameToVictims\n\t\t*out = make(map[string]*Victims, len(*in))\n\t\tfor key, val := range *in {\n\t\t\tvar outVal *Victims\n\t\t\tif val == nil {\n\t\t\t\t(*out)[key] = nil\n\t\t\t} else {\n\t\t\t\tin, out := &val, &outVal\n\t\t\t\t*out = new(Victims)\n\t\t\t\t(*in).DeepCopyInto(*out)\n\t\t\t}\n\t\t\t(*out)[key] = outVal\n\t\t}\n\t}\n\tif in.NodeNameToMetaVictims != nil {\n\t\tin, out := &in.NodeNameToMetaVictims, &out.NodeNameToMetaVictims\n\t\t*out = make(map[string]*MetaVictims, len(*in))\n\t\tfor key, val := range *in {\n\t\t\tvar outVal *MetaVictims\n\t\t\tif val == nil {\n\t\t\t\t(*out)[key] = nil\n\t\t\t} else {\n\t\t\t\tin, out := &val, &outVal\n\t\t\t\t*out = new(MetaVictims)\n\t\t\t\t(*in).DeepCopyInto(*out)\n\t\t\t}\n\t\t\t(*out)[key] = outVal\n\t\t}\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtenderPreemptionArgs.\nfunc (in *ExtenderPreemptionArgs) DeepCopy() *ExtenderPreemptionArgs {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ExtenderPreemptionArgs)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *ExtenderPreemptionResult) DeepCopyInto(out *ExtenderPreemptionResult) {\n\t*out = *in\n\tif in.NodeNameToMetaVictims != nil {\n\t\tin, out := &in.NodeNameToMetaVictims, &out.NodeNameToMetaVictims\n\t\t*out = make(map[string]*MetaVictims, len(*in))\n\t\tfor key, val := range *in {\n\t\t\tvar outVal *MetaVictims\n\t\t\tif val == nil {\n\t\t\t\t(*out)[key] = nil\n\t\t\t} else {\n\t\t\t\tin, out := &val, &outVal\n\t\t\t\t*out = new(MetaVictims)\n\t\t\t\t(*in).DeepCopyInto(*out)\n\t\t\t}\n\t\t\t(*out)[key] = outVal\n\t\t}\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtenderPreemptionResult.\nfunc (in *ExtenderPreemptionResult) DeepCopy() *ExtenderPreemptionResult {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(ExtenderPreemptionResult)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in FailedNodesMap) DeepCopyInto(out *FailedNodesMap) {\n\t{\n\t\tin := &in\n\t\t*out = make(FailedNodesMap, len(*in))\n\t\tfor key, val := range *in {\n\t\t\t(*out)[key] = val\n\t\t}\n\t\treturn\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FailedNodesMap.\nfunc (in FailedNodesMap) DeepCopy() FailedNodesMap {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(FailedNodesMap)\n\tin.DeepCopyInto(out)\n\treturn *out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *HostPriority) DeepCopyInto(out *HostPriority) {\n\t*out = *in\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HostPriority.\nfunc (in *HostPriority) DeepCopy() *HostPriority {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(HostPriority)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in HostPriorityList) DeepCopyInto(out *HostPriorityList) {\n\t{\n\t\tin := &in\n\t\t*out = make(HostPriorityList, len(*in))\n\t\tcopy(*out, *in)\n\t\treturn\n\t}\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HostPriorityList.\nfunc (in HostPriorityList) DeepCopy() HostPriorityList {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(HostPriorityList)\n\tin.DeepCopyInto(out)\n\treturn *out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *MetaPod) DeepCopyInto(out *MetaPod) {\n\t*out = *in\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MetaPod.\nfunc (in *MetaPod) DeepCopy() *MetaPod {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(MetaPod)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil.\nfunc (in *MetaVictims) DeepCopyInto(out *MetaVictims) {\n\t*out = *in\n\tif in.Pods != nil {\n\t\tin, out := &in.Pods, &out.Pods\n\t\t*out = make([]*MetaPod, len(*in))\n\t\tfor i := range *in {\n\t\t\tif (*in)[i] != nil {\n\t\t\t\tin, out := &(*in)[i], &(*out)[i]\n\t\t\t\t*out = new(MetaPod)\n\t\t\t\t**out = **in\n\t\t\t}\n\t\t}\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MetaVictims.\nfunc (in *MetaVictims) DeepCopy() *MetaVictims {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(MetaVictims)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n\n// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.\nfunc (in *Victims) DeepCopyInto(out *Victims) {\n\t*out = *in\n\tif in.Pods != nil {\n\t\tin, out := &in.Pods, &out.Pods\n\t\t*out = make([]*corev1.Pod, len(*in))\n\t\tfor i := range *in {\n\t\t\tif (*in)[i] != nil {\n\t\t\t\tin, out := &(*in)[i], &(*out)[i]\n\t\t\t\t*out = new(corev1.Pod)\n\t\t\t\t(*in).DeepCopyInto(*out)\n\t\t\t}\n\t\t}\n\t}\n\treturn\n}\n\n// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Victims.\nfunc (in *Victims) DeepCopy() *Victims {\n\tif in == nil {\n\t\treturn nil\n\t}\n\tout := new(Victims)\n\tin.DeepCopyInto(out)\n\treturn out\n}\n"
  },
  {
    "path": "framework/api_calls.go",
    "content": "/*\nCopyright 2025 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage framework\n\nimport (\n\t\"context\"\n\n\tv1 \"k8s.io/api/core/v1\"\n)\n\n// APICacher defines methods that send API calls through the scheduler's cache\n// before they are executed asynchronously by the APIDispatcher.\n// This ensures the scheduler's internal state is updated optimistically,\n// reflecting the intended outcome of the call.\n// This methods should be used only if the SchedulerAsyncAPICalls feature gate is enabled.\ntype APICacher interface {\n\t// PatchPodStatus sends a patch request for a Pod's status.\n\t// The patch could be first applied to the cached Pod object and then the API call is executed asynchronously.\n\t// It returns a channel that can be used to wait for the call's completion.\n\tPatchPodStatus(pod *v1.Pod, condition *v1.PodCondition, nominatingInfo *NominatingInfo) (<-chan error, error)\n\n\t// BindPod sends a binding request. 
The binding could be first applied to the cached Pod object\n\t// and then the API call is executed asynchronously.\n\t// It returns a channel that can be used to wait for the call's completion.\n\tBindPod(binding *v1.Binding) (<-chan error, error)\n\n\t// WaitOnFinish blocks until the result of an API call is sent to the given onFinish channel\n\t// (returned by methods BindPod or PreemptPod).\n\t//\n\t// It returns the error received from the channel.\n\t// It also returns nil if the call was skipped or overwritten,\n\t// as these are considered successful lifecycle outcomes.\n\t// Reading the onFinish channel directly can be used to access these results.\n\tWaitOnFinish(ctx context.Context, onFinish <-chan error) error\n}\n\n// APICallImplementations defines constructors for each APICall that is used by the scheduler internally.\ntype APICallImplementations[T, K APICall] struct {\n\t// PodStatusPatch is a constructor used to create an APICall object for a pod status patch.\n\tPodStatusPatch func(pod *v1.Pod, condition *v1.PodCondition, nominatingInfo *NominatingInfo) T\n\t// PodBinding is a constructor used to create an APICall object for a pod binding.\n\tPodBinding func(binding *v1.Binding) K\n}\n"
  },
  {
    "path": "framework/api_dispatcher.go",
    "content": "/*\nCopyright 2025 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage framework\n\nimport (\n\t\"context\"\n\t\"errors\"\n\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/types\"\n\tclientset \"k8s.io/client-go/kubernetes\"\n)\n\nvar (\n\t// ErrCallSkipped is returned by APIDispatcher.Add or sent to the OnFinish channel when the call is skipped and will not be executed.\n\tErrCallSkipped = errors.New(\"call skipped\")\n\t// ErrCallOverwritten is sent to the OnFinish channel when an enqueued call is overwritten by a newer or more relevant call.\n\tErrCallOverwritten = errors.New(\"call overwritten\")\n)\n\n// IsUnexpectedError returns true if the given error is not nil and is not one of the expected\n// dispatcher lifecycle errors (ErrCallSkipped, ErrCallOverwritten). 
This can be used to\n// filter for errors that may require logging or special handling.\nfunc IsUnexpectedError(err error) bool {\n\treturn err != nil && !errors.Is(err, ErrCallSkipped) && !errors.Is(err, ErrCallOverwritten)\n}\n\n// APICallType defines a call type name that governs how the dispatcher handles multiple pending calls for the same object.\n//\n// The type determines if two calls for the same object are mergeable\n// or if one should overwrite the other based on relevance:\n//   - Calls with the same type should be merged.\n//   - When calls have different types, their scores in APICallRelevances are used to determine precedence.\n//\n// Each APICall implementation should have a unique type within a given dispatcher.\ntype APICallType string\n\n// APICallRelevances maps all possible APICallTypes to a relevance value.\n// A more relevant API call should overwrite a less relevant one for the same object.\n// Types of the same relevance should only be defined for different object types.\ntype APICallRelevances map[APICallType]int\n\n// APICall defines the interface for an API call that can be processed by an APIDispatcher.\ntype APICall interface {\n\t// CallType returns the type of the API call.\n\t// See the APICallType and APICallRelevances comments on how to define the APICallType.\n\tCallType() APICallType\n\t// UID returns the UID of the object this call refers to.\n\t// This is used to identify and potentially merge or skip calls for the same object.\n\tUID() types.UID\n\t// Execute performs the actual API call.\n\tExecute(ctx context.Context, client clientset.Interface) error\n\t// Merge merges the state of an older call for the same object into the current (receiver) call.\n\t// The receiver should incorporate all necessary information from oldCall, as oldCall will be discarded.\n\t// After this method is called, IsNoOp() should be checked to see if the call can be skipped.\n\tMerge(oldCall APICall) error\n\t// Sync synchronizes the state of this 
call with the given object.\n\t// It may apply changes to the object or store information from the object needed for later execution.\n\t// The implementation should return a copy of the object if it is modified.\n\t// After this method is called, IsNoOp() should be checked to see if the call can be skipped.\n\tSync(obj metav1.Object) (metav1.Object, error)\n\t// IsNoOp returns true if the call represents a no-operation and should be skipped by the dispatcher.\n\t// A call may be a no-op from its creation or become one after a Merge or Sync.\n\tIsNoOp() bool\n}\n\n// APICallOptions defines options for an API call.\ntype APICallOptions struct {\n\t// OnFinish is an optional channel to receive the final result of a call's lifecycle.\n\t//\n\t// The result is sent in a non-blocking way. If this channel is unbuffered and has no\n\t// ready receiver, the result will be dropped.\n\t//\n\t// Note that receiving an error does not guarantee the API call itself was executed.\n\t// For instance, an ErrCallOverwritten or ErrCallSkipped error may be sent.\n\t//\n\t// To opt out of receiving a result, leave this channel nil.\n\tOnFinish chan<- error\n}\n\n// APIDispatcher defines the interface for a dispatcher that queues and asynchronously executes API calls.\ntype APIDispatcher interface {\n\t// Add adds an API call to the dispatcher's queue. It returns an error if the call is not enqueued\n\t// (e.g., if it's skipped). The caller should handle ErrCallSkipped if returned.\n\tAdd(incomingAPICall APICall, opts APICallOptions) error\n\t// SyncObject performs a two-way synchronization between the given object\n\t// and a pending API call held within the dispatcher.\n\t// This can be called by the scheduler's event handlers on object updates\n\t// to enrich the cached state and the call.\n\t//\n\t// If a call for the object exists in the dispatcher, this method:\n\t// 1. Applies the call's pending changes to the object, providing an optimistic preview of its state.\n\t// 2. 
Allows the call to update its own internal state from the object,\n\t//    ensuring it has the most recent data before its eventual execution.\n\t//\n\t// It returns the modified object. If no call is pending for the object,\n\t// the original object is returned unmodified.\n\tSyncObject(obj metav1.Object) (metav1.Object, error)\n}\n"
  },
  {
    "path": "framework/cycle_state.go",
    "content": "/*\nCopyright 2025 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage framework\n\nimport (\n\t\"errors\"\n\n\t\"k8s.io/apimachinery/pkg/util/sets\"\n)\n\nvar (\n\t// ErrNotFound is the not found error message.\n\tErrNotFound = errors.New(\"not found\")\n)\n\n// StateData is a generic type for arbitrary data stored in CycleState.\ntype StateData interface {\n\t// Clone is an interface to make a copy of StateData. For performance reasons,\n\t// clone should make shallow copies for members (e.g., slices or maps) that are not\n\t// impacted by PreFilter's optional AddPod/RemovePod methods.\n\tClone() StateData\n}\n\n// StateKey is the type of keys stored in CycleState.\ntype StateKey string\n\n// CycleState provides a mechanism for plugins to store and retrieve arbitrary data.\n// StateData stored by one plugin can be read, altered, or deleted by another plugin.\n// CycleState does not provide any data protection, as all plugins are assumed to be\n// trusted.\ntype CycleState interface {\n\t// ShouldRecordPluginMetrics returns whether metrics.PluginExecutionDuration metrics\n\t// should be recorded.\n\t// This function is mostly for the scheduling framework runtime, plugins usually don't have to use it.\n\tShouldRecordPluginMetrics() bool\n\t// GetSkipFilterPlugins returns plugins that will be skipped in the Filter extension point.\n\t// This function is mostly for the scheduling framework runtime, plugins usually don't have to use 
it.\n\tGetSkipFilterPlugins() sets.Set[string]\n\t// SetSkipFilterPlugins sets plugins that should be skipped in the Filter extension point.\n\t// This function is mostly for the scheduling framework runtime, plugins usually don't have to use it.\n\tSetSkipFilterPlugins(plugins sets.Set[string])\n\t// GetSkipScorePlugins returns plugins that will be skipped in the Score extension point.\n\t// This function is mostly for the scheduling framework runtime, plugins usually don't have to use it.\n\tGetSkipScorePlugins() sets.Set[string]\n\t// SetSkipScorePlugins sets plugins that should be skipped in the Score extension point.\n\t// This function is mostly for the scheduling framework runtime, plugins usually don't have to use it.\n\tSetSkipScorePlugins(plugins sets.Set[string])\n\t// GetSkipPreBindPlugins returns plugins that will be skipped in the PreBind extension point.\n\t// This function is mostly for the scheduling framework runtime, plugins usually don't have to use it.\n\tGetSkipPreBindPlugins() sets.Set[string]\n\t// SetSkipPreBindPlugins sets plugins that should be skipped in the PreBind extension point.\n\t// This function is mostly for the scheduling framework runtime, plugins usually don't have to use it.\n\tSetSkipPreBindPlugins(plugins sets.Set[string])\n\t// GetParallelPreBindPlugins returns plugins that can be run in parallel with other plugins\n\t// in the PreBind extension point.\n\t// This function is mostly for the scheduling framework runtime, plugins usually don't have to use it.\n\tGetParallelPreBindPlugins() sets.Set[string]\n\t// SetParallelPreBindPlugins sets plugins that can be run in parallel with other plugins\n\t// in the PreBind extension point.\n\t// This function is mostly for the scheduling framework runtime, plugins usually don't have to use it.\n\tSetParallelPreBindPlugins(plugins sets.Set[string])\n\t// ShouldSkipAllPostFilterPlugins returns whether all plugins should be skipped in the PostFilter extension point.\n\t// This 
function is mostly for the scheduling framework runtime, plugins usually don't have to use it.\n\tShouldSkipAllPostFilterPlugins() bool\n\n\t// Read retrieves data with the given \"key\" from CycleState. If the key is not\n\t// present, ErrNotFound is returned.\n\t//\n\t// See CycleState for notes on concurrency.\n\tRead(key StateKey) (StateData, error)\n\t// Write stores the given \"val\" in CycleState with the given \"key\".\n\t//\n\t// See CycleState for notes on concurrency.\n\tWrite(key StateKey, val StateData)\n\t// Delete deletes data with the given key from CycleState.\n\t//\n\t// See CycleState for notes on concurrency.\n\tDelete(key StateKey)\n\t// Clone creates a copy of CycleState and returns its pointer. Clone returns\n\t// nil if the context being cloned is nil.\n\tClone() CycleState\n\t// IsPodGroupSchedulingCycle returns true if this cycle is a pod group scheduling cycle.\n\t// If set to false, it means that the pod referencing this CycleState either passed the pod group cycle\n\t// or doesn't belong to any pod group.\n\t// This field can only be set to true when the GenericWorkload feature flag is enabled.\n\tIsPodGroupSchedulingCycle() bool\n\t// GetPodGroupSchedulingCycle gets the cycle state of the PodGroup for a Pod.\n\t// This should only be used when the GenericWorkload feature flag is enabled.\n\tGetPodGroupSchedulingCycle() PodGroupCycleState\n\t// SetPodGroupSchedulingCycle sets the cycle state of the PodGroup for a Pod.\n\t// This should only be used when the GenericWorkload feature flag is enabled.\n\tSetPodGroupSchedulingCycle(PodGroupCycleState)\n}\n\n// PodGroupCycleState provides a mechanism for plugins that operate on pod groups to store and retrieve arbitrary data.\n// StateData stored by one plugin can be read, altered, or deleted by another plugin that operates on a pod group.\n// PodGroupCycleState does not provide any data protection, as all plugins are assumed to be\n// trusted.\ntype PodGroupCycleState interface {\n\t// 
ShouldRecordPluginMetrics returns whether metrics.PluginExecutionDuration metrics\n\t// should be recorded.\n\t// This function is mostly for the scheduling framework runtime, plugins usually don't have to use it.\n\tShouldRecordPluginMetrics() bool\n\t// Read retrieves data with the given \"key\" from PodGroupCycleState. If the key is not\n\t// present, ErrNotFound is returned.\n\t//\n\t// See PodGroupCycleState for notes on concurrency.\n\tRead(key StateKey) (StateData, error)\n\t// Write stores the given \"val\" in PodGroupCycleState with the given \"key\".\n\t//\n\t// See PodGroupCycleState for notes on concurrency.\n\tWrite(key StateKey, val StateData)\n\t// Delete deletes data with the given key from PodGroupCycleState.\n\t//\n\t// See PodGroupCycleState for notes on concurrency.\n\tDelete(key StateKey)\n}\n"
  },
  {
    "path": "framework/extender.go",
    "content": "/*\nCopyright 2020 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage framework\n\nimport (\n\tv1 \"k8s.io/api/core/v1\"\n\textenderv1 \"k8s.io/kube-scheduler/extender/v1\"\n)\n\n// Extender is an interface for external processes to influence scheduling\n// decisions made by Kubernetes. This is typically needed for resources not directly\n// managed by Kubernetes.\ntype Extender interface {\n\t// Name returns a unique name that identifies the extender.\n\tName() string\n\n\t// Filter based on extender-implemented predicate functions. The filtered list is\n\t// expected to be a subset of the supplied list.\n\t// The failedNodes and failedAndUnresolvableNodes optionally contains the list\n\t// of failed nodes and failure reasons, except nodes in the latter are\n\t// unresolvable.\n\tFilter(pod *v1.Pod, nodes []NodeInfo) (filteredNodes []NodeInfo, failedNodesMap extenderv1.FailedNodesMap, failedAndUnresolvable extenderv1.FailedNodesMap, err error)\n\n\t// Prioritize based on extender-implemented priority functions. The returned scores & weight\n\t// are used to compute the weighted score for an extender. The weighted scores are added to\n\t// the scores computed by Kubernetes scheduler. 
The total scores are used to do the host selection.\n\tPrioritize(pod *v1.Pod, nodes []NodeInfo) (hostPriorities *extenderv1.HostPriorityList, weight int64, err error)\n\n\t// Bind delegates the action of binding a pod to a node to the extender.\n\tBind(binding *v1.Binding) error\n\n\t// IsBinder returns whether this extender is configured for the Bind method.\n\tIsBinder() bool\n\n\t// IsInterested returns true if at least one extended resource requested by\n\t// this pod is managed by this extender.\n\tIsInterested(pod *v1.Pod) bool\n\n\t// IsPrioritizer returns whether this extender is configured for the Prioritize method.\n\tIsPrioritizer() bool\n\n\t// IsFilter returns whether this extender is configured for the Filter method.\n\tIsFilter() bool\n\n\t// ProcessPreemption returns nodes with their victim pods processed by the extender based on\n\t// given:\n\t//   1. Pod to schedule\n\t//   2. Candidate nodes and victim pods (nodeNameToVictims) generated by previous scheduling process.\n\t// The possible changes made by the extender may include:\n\t//   1. Subset of given candidate nodes after preemption phase of extender.\n\t//   2. A different set of victim pods for every given candidate node after preemption phase of extender.\n\tProcessPreemption(\n\t\tpod *v1.Pod,\n\t\tnodeNameToVictims map[string]*extenderv1.Victims,\n\t\tnodeInfos NodeInfoLister,\n\t) (map[string]*extenderv1.Victims, error)\n\n\t// SupportsPreemption returns whether the scheduler extender supports preemption.\n\tSupportsPreemption() bool\n\n\t// IsIgnorable returns true to indicate that scheduling should not fail when this extender\n\t// is unavailable. This gives the scheduler the ability to fail fast and tolerate non-critical extenders.\n\t// Both Filter and Bind actions are supported.\n\tIsIgnorable() bool\n}\n"
  },
  {
    "path": "framework/interface.go",
    "content": "/*\nCopyright 2025 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// This file defines the scheduling framework plugin interfaces.\n\npackage framework\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"math\"\n\t\"slices\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/google/go-cmp/cmp\"         //nolint:depguard\n\t\"github.com/google/go-cmp/cmp/cmpopts\" //nolint:depguard\n\tv1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/types\"\n\t\"k8s.io/apimachinery/pkg/util/sets\"\n\t\"k8s.io/client-go/informers\"\n\tclientset \"k8s.io/client-go/kubernetes\"\n\trestclient \"k8s.io/client-go/rest\"\n\t\"k8s.io/client-go/tools/events\"\n\t\"k8s.io/client-go/util/workqueue\"\n\t\"k8s.io/klog/v2\"\n)\n\n// Code is the Status code/type which is returned from plugins.\ntype Code int\n\n// These are predefined codes used in a Status.\n// Note: when you add a new status, you have to add it in `codes` slice below.\nconst (\n\t// Success means that plugin ran correctly and found pod schedulable.\n\t// NOTE: A nil status is also considered as \"Success\".\n\tSuccess Code = iota\n\t// Error is one of the failures, used for internal plugin errors, unexpected input, etc.\n\t// Plugin shouldn't return this code for expected failures, like Unschedulable.\n\t// Since it's the unexpected failure, the scheduling queue registers the pod without unschedulable plugins.\n\t// Meaning, the Pod will be requeued to activeQ/backoffQ soon.\n\tError\n\t// Unschedulable 
is one of the failures, used when a plugin finds a pod unschedulable.\n\t// If it's returned from PreFilter or Filter, the scheduler might attempt to\n\t// run other postFilter plugins like preemption to get this pod scheduled.\n\t// Use UnschedulableAndUnresolvable to make the scheduler skip other postFilter plugins.\n\t// The accompanying status message should explain why the pod is unschedulable.\n\t//\n\t// We regard the backoff as a penalty of wasting the scheduling cycle.\n\t// When the scheduling queue requeues Pods that were rejected with Unschedulable in the last scheduling cycle,\n\t// the Pod goes through backoff.\n\tUnschedulable\n\t// UnschedulableAndUnresolvable is used when a plugin finds a pod unschedulable and\n\t// other postFilter plugins like preemption would not change anything.\n\t// See the comment on PostFilter interface for more details about how PostFilter should handle this status.\n\t// Plugins should return Unschedulable if it is possible that the pod can get scheduled\n\t// after running other postFilter plugins.\n\t// The accompanying status message should explain why the pod is unschedulable.\n\t//\n\t// We regard the backoff as a penalty of wasting the scheduling cycle.\n\t// When the scheduling queue requeues Pods that were rejected with UnschedulableAndUnresolvable in the last scheduling cycle,\n\t// the Pod goes through backoff.\n\tUnschedulableAndUnresolvable\n\t// Wait is used when a Permit plugin finds that a pod's scheduling should wait.\n\tWait\n\t// Skip is used in the following scenarios:\n\t// - when a Bind plugin chooses to skip binding.\n\t// - when a PreFilter plugin returns Skip so that coupled Filter plugin/PreFilterExtensions() will be skipped.\n\t// - when a PreScore plugin returns Skip so that coupled Score plugin will be skipped.\n\tSkip\n\t// Pending means that the scheduling process is finished successfully,\n\t// but the plugin wants to stop the scheduling cycle/binding cycle here.\n\t//\n\t// For example, if your plugin 
has to notify an external component of the scheduling result,\n\t// and wait for it to complete something **before** binding.\n\t// It's different from returning Unschedulable/UnschedulableAndUnresolvable,\n\t// because in this case, the scheduler decides where the Pod can go successfully,\n\t// but we need to wait for the external component to do something based on that scheduling result.\n\t//\n\t// We regard the backoff as a penalty of wasting the scheduling cycle.\n\t// In the case of returning Pending, we cannot say the scheduling cycle is wasted\n\t// because the scheduling result is used to move the Pod's scheduling forward,\n\t// even though that particular scheduling cycle failed.\n\t// So, Pods rejected for such reasons don't need to suffer a penalty (backoff).\n\t// When the scheduling queue requeues Pods that were rejected with Pending in the last scheduling cycle,\n\t// the Pod goes to activeQ directly, ignoring backoff.\n\tPending\n)\n\n// This list should be exactly the same as the codes iota defined above in the same order.\nvar codes = []string{\"Success\", \"Error\", \"Unschedulable\", \"UnschedulableAndUnresolvable\", \"Wait\", \"Skip\", \"Pending\"}\n\nfunc (c Code) String() string {\n\treturn codes[c]\n}\n\n// Status indicates the result of running a plugin. 
It consists of a code, a\n// message, (optionally) an error, and the name of the plugin that caused the failure.\n// When the status code is not Success, the reasons should explain why.\n// And, when code is Success, all the other fields should be empty.\n// NOTE: A nil Status is also considered as Success.\ntype Status struct {\n\tcode    Code\n\treasons []string\n\terr     error\n\t// plugin is an optional field that records the name of the plugin that caused this status.\n\t// It's set by the framework when code is Unschedulable, UnschedulableAndUnresolvable or Pending.\n\tplugin string\n}\n\nfunc (s *Status) WithError(err error) *Status {\n\ts.err = err\n\treturn s\n}\n\n// Code returns code of the Status.\nfunc (s *Status) Code() Code {\n\tif s == nil {\n\t\treturn Success\n\t}\n\treturn s.code\n}\n\n// Message returns a concatenated message on reasons of the Status.\nfunc (s *Status) Message() string {\n\tif s == nil {\n\t\treturn \"\"\n\t}\n\treturn strings.Join(s.Reasons(), \", \")\n}\n\n// SetPlugin sets the given plugin name to s.plugin.\nfunc (s *Status) SetPlugin(plugin string) {\n\ts.plugin = plugin\n}\n\n// WithPlugin sets the given plugin name to s.plugin,\n// and returns the given status object.\nfunc (s *Status) WithPlugin(plugin string) *Status {\n\ts.SetPlugin(plugin)\n\treturn s\n}\n\n// Plugin returns the plugin name which caused this status.\nfunc (s *Status) Plugin() string {\n\treturn s.plugin\n}\n\n// Reasons returns reasons of the Status.\nfunc (s *Status) Reasons() []string {\n\tif s.err != nil {\n\t\treturn append([]string{s.err.Error()}, s.reasons...)\n\t}\n\treturn s.reasons\n}\n\n// AppendReason appends given reason to the Status.\nfunc (s *Status) AppendReason(reason string) {\n\ts.reasons = append(s.reasons, reason)\n}\n\n// IsSuccess returns true if and only if \"Status\" is nil or Code is \"Success\".\nfunc (s *Status) IsSuccess() bool {\n\treturn s.Code() == Success\n}\n\n// IsWait returns true if and only if \"Status\" is non-nil and its Code is \"Wait\".\nfunc 
(s *Status) IsWait() bool {\n\treturn s.Code() == Wait\n}\n\n// IsSkip returns true if and only if \"Status\" is non-nil and its Code is \"Skip\".\nfunc (s *Status) IsSkip() bool {\n\treturn s.Code() == Skip\n}\n\n// IsRejected returns true if \"Status\" indicates a rejection, i.e., its Code is Unschedulable, UnschedulableAndUnresolvable, or Pending.\nfunc (s *Status) IsRejected() bool {\n\tcode := s.Code()\n\treturn code == Unschedulable || code == UnschedulableAndUnresolvable || code == Pending\n}\n\n// IsError returns true if and only if \"Status\" is non-nil and its Code is \"Error\".\nfunc (s *Status) IsError() bool {\n\treturn s.Code() == Error\n}\n\n// AsError returns nil if the status is a success, a wait or a skip; otherwise returns an \"error\" object\n// with a concatenated message on reasons of the Status.\nfunc (s *Status) AsError() error {\n\tif s.IsSuccess() || s.IsWait() || s.IsSkip() {\n\t\treturn nil\n\t}\n\tif s.err != nil {\n\t\treturn s.err\n\t}\n\treturn errors.New(s.Message())\n}\n\n// Equal checks equality of two statuses. 
This is useful for testing with\n// cmp.Equal.\nfunc (s *Status) Equal(x *Status) bool {\n\tif s == nil || x == nil {\n\t\treturn s.IsSuccess() && x.IsSuccess()\n\t}\n\tif s.code != x.code {\n\t\treturn false\n\t}\n\tif !cmp.Equal(s.err, x.err, cmpopts.EquateErrors()) {\n\t\treturn false\n\t}\n\tif !cmp.Equal(s.reasons, x.reasons) {\n\t\treturn false\n\t}\n\treturn cmp.Equal(s.plugin, x.plugin)\n}\n\nfunc (s *Status) String() string {\n\treturn s.Message()\n}\n\n// Clone clones the entire Status and returns a copy.\nfunc (s *Status) Clone() *Status {\n\treturn &Status{\n\t\tcode:    s.code,\n\t\treasons: slices.Clone(s.reasons),\n\t\terr:     s.err,\n\t\tplugin:  s.plugin,\n\t}\n}\n\n// NewStatus makes a Status out of the given arguments and returns its pointer.\nfunc NewStatus(code Code, reasons ...string) *Status {\n\ts := &Status{\n\t\tcode:    code,\n\t\treasons: reasons,\n\t}\n\treturn s\n}\n\n// AsStatus wraps an error in a Status.\nfunc AsStatus(err error) *Status {\n\tif err == nil {\n\t\treturn nil\n\t}\n\treturn &Status{\n\t\tcode: Error,\n\t\terr:  err,\n\t}\n}\n\n// NodeToStatusReader is a read-only interface of NodeToStatus passed to each PostFilter plugin.\ntype NodeToStatusReader interface {\n\t// Get returns the status for given nodeName.\n\t// If the node is not in the map, the AbsentNodesStatus is returned.\n\tGet(nodeName string) *Status\n\t// NodesForStatusCode returns a list of NodeInfos for the nodes that have a given status code.\n\t// It returns the NodeInfos for all matching nodes denoted by AbsentNodesStatus as well.\n\tNodesForStatusCode(nodeLister NodeInfoLister, code Code) ([]NodeInfo, error)\n}\n\n// NodeScoreList declares a list of nodes and their scores.\ntype NodeScoreList []NodeScore\n\n// NodeScore is a struct with node name and score.\ntype NodeScore struct {\n\tName  string\n\tScore int64\n}\n\n// NodePluginScores is a struct with node name and scores for that node.\ntype NodePluginScores struct {\n\t// Name is node 
name.\n\tName string\n\t// Scores is scores from plugins and extenders.\n\tScores []PluginScore\n\t// TotalScore is the total score in Scores.\n\tTotalScore int64\n\t// Randomizer is used to provide randomness\n\t// when randomizing nodes within a common score.\n\tRandomizer int\n}\n\n// PluginScore is a struct with plugin/extender name and score.\ntype PluginScore struct {\n\t// Name is the name of plugin or extender.\n\tName  string\n\tScore int64\n}\n\n// PlacementPluginScores stores scores for a given placement.\ntype PlacementPluginScores struct {\n\t// Placement is the placement info that can be used to identify a specific placement.\n\tPlacement *Placement\n\t// Scores is scores from plugins and extenders.\n\tScores []PluginScore\n\t// TotalScore is the total score in Scores.\n\tTotalScore int64\n\t// Randomizer is used to provide randomness\n\t// when randomizing placements within a common score.\n\tRandomizer int\n}\n\nconst (\n\t// MaxNodeScore is the maximum score a Score plugin is expected to return.\n\t//\n\t// Deprecated: use MaxScore instead.\n\tMaxNodeScore int64 = MaxScore\n\n\t// MinNodeScore is the minimum score a Score plugin is expected to return.\n\t//\n\t// Deprecated: use MinScore instead.\n\tMinNodeScore int64 = MinScore\n\n\t// MaxScore is the maximum score a Score or PlacementScore plugin is expected to return.\n\tMaxScore int64 = 100\n\n\t// MinScore is the minimum score a Score or PlacementScore plugin is expected to return.\n\tMinScore int64 = 0\n\n\t// MaxTotalScore is the maximum total score.\n\tMaxTotalScore int64 = math.MaxInt64\n)\n\ntype NominatingMode int\n\nconst (\n\tModeNoop NominatingMode = iota\n\tModeOverride\n)\n\ntype NominatingInfo struct {\n\tNominatedNodeName string\n\tNominatingMode    NominatingMode\n}\n\nfunc (ni *NominatingInfo) Mode() NominatingMode {\n\tif ni == nil {\n\t\treturn ModeNoop\n\t}\n\treturn ni.NominatingMode\n}\n\n// WaitingPod represents a pod currently waiting in the permit phase.\ntype WaitingPod 
interface {\n\t// GetPod returns a reference to the waiting pod.\n\tGetPod() *v1.Pod\n\t// GetPendingPlugins returns the names of the pending Permit plugins.\n\tGetPendingPlugins() []string\n\t// Allow declares that the waiting pod is allowed to be scheduled by the plugin named as \"pluginName\".\n\t// If this is the last remaining plugin to allow, then a success signal is delivered\n\t// to unblock the pod.\n\tAllow(pluginName string)\n\t// Reject declares the waiting pod unschedulable.\n\tReject(pluginName, msg string) bool\n\t// Preempt preempts the waiting pod. Compared to Reject, it does not mark the pod as unschedulable,\n\t// allowing it to be rescheduled.\n\tPreempt(pluginName, msg string) bool\n}\n\n// PodInPreBind represents a pod currently in preBind phase.\ntype PodInPreBind interface {\n\t// CancelPod cancels the context attached to a goroutine running the binding cycle of this pod\n\t// if the pod is not marked as prebound.\n\t// Returns true if the cancel was successfully run.\n\tCancelPod(reason string) bool\n\n\t// MarkPrebound marks the pod as prebound, making it impossible to cancel the context of the binding cycle\n\t// via PodInPreBind.\n\t// Returns false if the context was already canceled.\n\tMarkPrebound() bool\n}\n\n// PreFilterResult wraps needed info for scheduler framework to act upon PreFilter phase.\ntype PreFilterResult struct {\n\t// The set of nodes that should be considered downstream; if nil then\n\t// all nodes are eligible.\n\tNodeNames sets.Set[string]\n}\n\nfunc (p *PreFilterResult) AllNodes() bool {\n\treturn p == nil || p.NodeNames == nil\n}\n\nfunc (p *PreFilterResult) Merge(in *PreFilterResult) *PreFilterResult {\n\tif p.AllNodes() && in.AllNodes() {\n\t\treturn nil\n\t}\n\n\tr := PreFilterResult{}\n\tif p.AllNodes() {\n\t\tr.NodeNames = in.NodeNames.Clone()\n\t\treturn &r\n\t}\n\tif in.AllNodes() {\n\t\tr.NodeNames = p.NodeNames.Clone()\n\t\treturn &r\n\t}\n\n\tr.NodeNames = p.NodeNames.Intersection(in.NodeNames)\n\treturn &r\n}\n\n// 
PostFilterResult wraps needed info for scheduler framework to act upon PostFilter phase.\ntype PostFilterResult struct {\n\t*NominatingInfo\n}\n\n// PreBindPreFlightResult wraps needed info for scheduler framework to act upon PreBindPreFlight phase.\ntype PreBindPreFlightResult struct {\n\t// AllowParallel indicates whether this plugin's PreBind method can be run\n\t// in parallel with other plugins during PreBind phase.\n\t// The scheduler groups consecutive plugins that return AllowParallel: true\n\t// and runs them in parallel.\n\t// A plugin that returns AllowParallel: false breaks the parallel group\n\t// and runs sequentially.\n\t// Note: skipped plugins are effectively ignored, but if a skipped plugin returns\n\t// AllowParallel: false, it still breaks the parallel group of adjacent plugins.\n\tAllowParallel bool\n}\n\n// Plugin is the parent type for all the scheduling framework plugins.\ntype Plugin interface {\n\tName() string\n}\n\n// PreEnqueuePlugin is an interface that must be implemented by \"PreEnqueue\" plugins.\n// These plugins are called prior to adding Pods to activeQ or backoffQ.\n// Note: a PreEnqueue plugin is expected to be lightweight and efficient, so it's not expected to\n// involve expensive calls like accessing external endpoints; otherwise it'd block other\n// Pods' enqueuing in event handlers.\ntype PreEnqueuePlugin interface {\n\tPlugin\n\t// PreEnqueue is called prior to adding Pods to activeQ or backoffQ.\n\tPreEnqueue(ctx context.Context, p *v1.Pod) *Status\n}\n\n// LessFunc is the function used to sort pod infos.\ntype LessFunc func(podInfo1, podInfo2 QueuedPodInfo) bool\n\n// QueueSortPlugin is an interface that must be implemented by \"QueueSort\" plugins.\n// These plugins are used to sort pods in the scheduling queue. 
Only one queue sort\n// plugin may be enabled at a time.\ntype QueueSortPlugin interface {\n\tPlugin\n\t// Less is used to sort pods in the scheduling queue.\n\tLess(QueuedPodInfo, QueuedPodInfo) bool\n}\n\n// EnqueueExtensions is an optional interface that plugins can implement to efficiently\n// move unschedulable Pods in internal scheduling queues.\n// In the scheduler, Pods can be unschedulable by PreEnqueue, PreFilter, Filter, Reserve, and Permit plugins,\n// and Pods rejected by these plugins are requeued based on this extension point.\n// Failures from other extension points are regarded as temporary errors (e.g., network failure),\n// and the scheduler requeues Pods without this extension point - it always requeues Pods to activeQ after backoff.\n// This is because such temporary errors cannot be resolved by specific cluster events,\n// and we have no choice but to keep retrying scheduling until the failure is resolved.\n//\n// Plugins that make a pod unschedulable (PreEnqueue, PreFilter, Filter, Reserve, and Permit plugins) must implement this interface,\n// otherwise the default implementation will be used, which is less efficient in requeueing Pods rejected by the plugin.\n//\n// Also, if EventsToRegister returns an empty list, that means the Pods failed by the plugin are not requeued by any events,\n// which doesn't make sense in most cases (very likely misuse)\n// since the pods rejected by the plugin could be stuck in the unschedulable pod pool forever.\n//\n// If plugins at extension points other than the above implement this interface, it is just ignored.\ntype EnqueueExtensions interface {\n\tPlugin\n\t// EventsToRegister returns a series of possible events that may make a Pod\n\t// failed by this plugin schedulable. 
Each event has a callback function that\n\t// filters out events to reduce useless retries of the Pod's scheduling.\n\t// The events will be registered when instantiating the internal scheduling queue,\n\t// and leveraged to build event handlers dynamically.\n\t// When it returns an error, the scheduler fails to start.\n\t// Note: the returned list needs to be determined at startup,\n\t// and the scheduler only evaluates it once during startup.\n\t// Do not change the result at runtime, for example, based on the cluster's state etc.\n\t//\n\t// An appropriate implementation of this function will make a Pod's re-scheduling accurate and performant.\n\tEventsToRegister(context.Context) ([]ClusterEventWithHint, error)\n}\n\n// PreFilterExtensions is an interface that is included in plugins that allow specifying\n// callbacks to make incremental updates to their supposedly pre-calculated\n// state.\ntype PreFilterExtensions interface {\n\t// AddPod is called by the framework while trying to evaluate the impact\n\t// of adding podToAdd to the node while scheduling podToSchedule.\n\tAddPod(ctx context.Context, state CycleState, podToSchedule *v1.Pod, podInfoToAdd PodInfo, nodeInfo NodeInfo) *Status\n\t// RemovePod is called by the framework while trying to evaluate the impact\n\t// of removing podToRemove from the node while scheduling podToSchedule.\n\tRemovePod(ctx context.Context, state CycleState, podToSchedule *v1.Pod, podInfoToRemove PodInfo, nodeInfo NodeInfo) *Status\n}\n\n// PreFilterPlugin is an interface that must be implemented by \"PreFilter\" plugins.\n// These plugins are called at the beginning of the scheduling cycle. Plugins that implement PreFilterPlugin should\n// also implement SignPlugin to enable batching optimizations.\ntype PreFilterPlugin interface {\n\tPlugin\n\t// PreFilter is called at the beginning of the scheduling cycle. All PreFilter\n\t// plugins must return success or the pod will be rejected. 
PreFilter could optionally\n\t// return a PreFilterResult to influence which nodes to evaluate downstream. This is useful\n\t// for cases where it is possible to determine the subset of nodes to process in O(1) time.\n\t// When PreFilterResult filters out some Nodes, the framework considers Nodes that are filtered out as getting \"UnschedulableAndUnresolvable\",\n\t// i.e., those Nodes will be excluded from the preemption candidates.\n\t//\n\t// When it returns Skip status, returned PreFilterResult and other fields in status are just ignored,\n\t// and the coupled Filter plugin/PreFilterExtensions() will be skipped in this scheduling cycle.\n\tPreFilter(ctx context.Context, state CycleState, p *v1.Pod, nodes []NodeInfo) (*PreFilterResult, *Status)\n\t// PreFilterExtensions returns a PreFilterExtensions interface if the plugin implements one,\n\t// or nil if it does not. A Pre-filter plugin can provide extensions to incrementally\n\t// modify its pre-processed info. The framework guarantees that the extensions\n\t// AddPod/RemovePod will only be called after PreFilter, possibly on a cloned\n\t// CycleState, and may call those functions more than once before calling\n\t// Filter again on a specific node.\n\tPreFilterExtensions() PreFilterExtensions\n}\n\n// FilterPlugin is an interface for Filter plugins. These plugins are called at the\n// filter extension point for filtering out hosts that cannot run a pod.\n// This concept used to be called 'predicate' in the original scheduler.\n// These plugins should return \"Success\", \"Unschedulable\" or \"Error\" in Status.code.\n// However, the scheduler accepts other valid codes as well.\n// Anything other than \"Success\" will lead to exclusion of the given host from\n// running the pod. 
Plugins that implement FilterPlugin should\n// also implement SignPlugin to enable batching optimizations.\ntype FilterPlugin interface {\n\tPlugin\n\t// Filter is called by the scheduling framework.\n\t// All FilterPlugins should return \"Success\" to declare that\n\t// the given node fits the pod. If Filter doesn't return \"Success\",\n\t// it will return \"Unschedulable\", \"UnschedulableAndUnresolvable\" or \"Error\".\n\t//\n\t// \"Error\" aborts pod scheduling and puts the pod into the backoff queue.\n\t//\n\t// For the node being evaluated, Filter plugins should look at the passed\n\t// nodeInfo reference for this particular node's information (e.g., pods\n\t// considered to be running on the node) instead of looking it up in the\n\t// NodeInfoSnapshot because we don't guarantee that they will be the same.\n\t// For example, during preemption, we may pass a copy of the original\n\t// nodeInfo object that has some pods removed from it to evaluate the\n\t// possibility of preempting them to schedule the target pod.\n\t//\n\t// Plugins are encouraged to check the context for cancellation.\n\t// Once canceled, they should return as soon as possible with\n\t// an UnschedulableAndUnresolvable status that includes the\n\t// `context.Cause(ctx)` error explanation. For example, the\n\t// context gets canceled when a sufficient number of suitable\n\t// nodes have been found and searching for more isn't necessary\n\t// anymore.\n\tFilter(ctx context.Context, state CycleState, pod *v1.Pod, nodeInfo NodeInfo) *Status\n}\n\n// PostFilterPlugin is an interface for \"PostFilter\" plugins. 
These plugins are called\n// after a pod cannot be scheduled.\ntype PostFilterPlugin interface {\n\tPlugin\n\t// PostFilter is called by the scheduling framework\n\t// when the scheduling cycle failed at PreFilter or Filter by Unschedulable or UnschedulableAndUnresolvable.\n\t// NodeToStatusReader has statuses that each Node got in PreFilter or Filter phase.\n\t//\n\t// If you're implementing a custom preemption with PostFilter, ignoring Nodes with UnschedulableAndUnresolvable is the responsibility of your plugin,\n\t// meaning NodeToStatusReader could have Nodes with UnschedulableAndUnresolvable\n\t// and the scheduling framework does call PostFilter plugins even when all Nodes in NodeToStatusReader are UnschedulableAndUnresolvable.\n\t//\n\t// A PostFilter plugin should return one of the following statuses:\n\t// - Unschedulable: the plugin gets executed successfully but the pod cannot be made schedulable.\n\t// - Success: the plugin gets executed successfully and the pod can be made schedulable.\n\t// - Error: the plugin aborts due to some internal error.\n\t//\n\t// Informational plugins should be configured ahead of other ones, and always return Unschedulable status.\n\t// Optionally, a non-nil PostFilterResult may be returned along with a Success status. For example,\n\t// a preemption plugin may choose to return nominatedNodeName, so that the framework can reuse that to update the\n\t// preemptor pod's .status.nominatedNodeName field.\n\tPostFilter(ctx context.Context, state CycleState, pod *v1.Pod, filteredNodeStatusMap NodeToStatusReader) (*PostFilterResult, *Status)\n}\n\n// PreScorePlugin is an interface for \"PreScore\" plugins. PreScore is an\n// informational extension point. Plugins will be called with a list of nodes\n// that passed the filtering phase. A plugin may use this data to update internal\n// state or to generate logs/metrics. 
Plugins that implement PreScorePlugin should\n// also implement SignPlugin to enable batching optimizations.\ntype PreScorePlugin interface {\n\tPlugin\n\t// PreScore is called by the scheduling framework after a list of nodes\n\t// passed the filtering phase. All prescore plugins must return success or\n\t// the pod will be rejected.\n\t// When it returns Skip status, other fields in status are just ignored,\n\t// and the coupled Score plugin will be skipped in this scheduling cycle.\n\tPreScore(ctx context.Context, state CycleState, pod *v1.Pod, nodes []NodeInfo) *Status\n}\n\n// ScoreExtensions is an interface for Score extended functionality.\ntype ScoreExtensions interface {\n\t// NormalizeScore is called for all node scores produced by the same plugin's \"Score\"\n\t// method. A successful run of NormalizeScore will update the scores list and return\n\t// a success status.\n\tNormalizeScore(ctx context.Context, state CycleState, p *v1.Pod, scores NodeScoreList) *Status\n}\n\n// ScorePlugin is an interface that must be implemented by \"Score\" plugins to rank\n// nodes that passed the filtering phase. Plugins that implement ScorePlugin should\n// also implement SignPlugin to enable batching optimizations.\ntype ScorePlugin interface {\n\tPlugin\n\t// Score is called on each filtered node. It must return success and an integer\n\t// indicating the rank of the node. All scoring plugins must return success or\n\t// the pod will be rejected.\n\tScore(ctx context.Context, state CycleState, p *v1.Pod, nodeInfo NodeInfo) (int64, *Status)\n\n\t// ScoreExtensions returns a ScoreExtensions interface if it implements one, or nil if it does not.\n\tScoreExtensions() ScoreExtensions\n}\n\n// ReservePlugin is an interface for plugins with Reserve and Unreserve\n// methods. These are meant to update the state of the plugin. This concept\n// used to be called 'assume' in the original scheduler. These plugins should\n// return only Success or Error in Status.code. 
However, the scheduler accepts\n// other valid codes as well. Anything other than Success will lead to\n// rejection of the pod.\ntype ReservePlugin interface {\n\tPlugin\n\t// Reserve is called by the scheduling framework when the scheduler cache is\n\t// updated. If this method returns a failed Status, the scheduler will call\n\t// the Unreserve method for all enabled ReservePlugins.\n\tReserve(ctx context.Context, state CycleState, p *v1.Pod, nodeName string) *Status\n\t// Unreserve is called by the scheduling framework when a reserved pod was\n\t// rejected, an error occurred during reservation of subsequent plugins, or\n\t// in a later phase. The Unreserve method implementation must be idempotent\n\t// and may be called by the scheduler even if the corresponding Reserve\n\t// method for the same plugin was not called.\n\tUnreserve(ctx context.Context, state CycleState, p *v1.Pod, nodeName string)\n}\n\n// PreBindPlugin is an interface that must be implemented by \"PreBind\" plugins.\n// These plugins are called before a pod is bound.\ntype PreBindPlugin interface {\n\tPlugin\n\t// PreBindPreFlight is called before PreBind, and the plugin is supposed to return two values:\n\t// - PreBindPreFlightResult (nil is valid, and means results with the zero values on all fields).\n\t// - Success, Skip, or Error status.\n\t// If it returns Success, it means this PreBind plugin will handle this pod.\n\t// If it returns Skip, it means this PreBind plugin has nothing to do with the pod, and PreBind will be skipped.\n\t// This function should be lightweight, and shouldn't do any actual operation, e.g., creating a volume etc.\n\tPreBindPreFlight(ctx context.Context, state CycleState, p *v1.Pod, nodeName string) (*PreBindPreFlightResult, *Status)\n\n\t// PreBind is called before binding a pod. 
All prebind plugins must return\n\t// success or the pod will be rejected and won't be sent for binding.\n\tPreBind(ctx context.Context, state CycleState, p *v1.Pod, nodeName string) *Status\n}\n\n// PostBindPlugin is an interface that must be implemented by \"PostBind\" plugins.\n// These plugins are called after a pod is successfully bound to a node.\ntype PostBindPlugin interface {\n\tPlugin\n\t// PostBind is called after a pod is successfully bound. These plugins are\n\t// informational. A common application of this extension point is for cleaning\n\t// up. If a plugin needs to clean up its state after a pod is scheduled and\n\t// bound, PostBind is the extension point that it should register.\n\tPostBind(ctx context.Context, state CycleState, p *v1.Pod, nodeName string)\n}\n\n// PermitPlugin is an interface that must be implemented by \"Permit\" plugins.\n// These plugins are called before a pod is bound to a node.\ntype PermitPlugin interface {\n\tPlugin\n\t// Permit is called before binding a pod (and before prebind plugins). Permit\n\t// plugins are used to prevent or delay the binding of a Pod. A permit plugin\n\t// must return success or wait with timeout duration, or the pod will be rejected.\n\t// The pod will also be rejected if the wait times out or the pod is rejected while\n\t// waiting. Note that if the plugin returns \"wait\", the framework will wait only\n\t// after running the remaining plugins given that no other plugin rejects the pod.\n\tPermit(ctx context.Context, state CycleState, p *v1.Pod, nodeName string) (*Status, time.Duration)\n}\n\n// BindPlugin is an interface that must be implemented by \"Bind\" plugins. Bind\n// plugins are used to bind a pod to a Node.\ntype BindPlugin interface {\n\tPlugin\n\t// Bind plugins will not be called until all pre-bind plugins have completed. Each\n\t// bind plugin is called in the configured order. A bind plugin may choose whether\n\t// or not to handle the given Pod. 
If a bind plugin chooses to handle a Pod, the\n\t// remaining bind plugins are skipped. When a bind plugin does not handle a pod,\n\t// it must return Skip in its Status code. If a bind plugin returns an Error, the\n\t// pod is rejected and will not be bound.\n\tBind(ctx context.Context, state CycleState, p *v1.Pod, nodeName string) *Status\n}\n\n// A portion of a pod signature. The sign fragments from all plugins are combined\n// together to create a unified signature.\ntype SignFragment struct {\n\t// Pod signature fragments are identified by a key, i.e., fragments with the same key\n\t// should contain the same value for the same pod. Plugin authors can return the same SignFragment\n\t// from multiple plugins; the framework just ignores duplicates. This allows plugins to share signers easily.\n\t//\n\t// Fragment names can be found in k8s.io/kube-scheduler/framework/signers.go. New fragment names for\n\t// in-tree plugins should be added there, and custom plugins can also use them.\n\t// Simple SignFragments which return a field should have names that follow the field name they return.\n\t// So a SignFragment that returns NodeName would have the key:\n\t//\n\t//\t\"v1.Pod.Spec.NodeName\"\n\t//\n\t// If a SignFragment does some processing on the resource, its name should include the path to the base of the state it uses,\n\t// and then suffix this with a descriptive function name followed by (). So, for example, a SignFragment that only returns\n\t// Ephemeral volumes might use the key:\n\t//\n\t//\t\"v1.Pod.Volumes.EphemeralVolumes()\"\n\tKey string\n\n\t// The value of a SignFragment must be a json-marshallable object. 
Remember that we need to compare these across pods, so\n\t// plugins should ensure that lists where order doesn't matter are sorted, for example.\n\tValue any\n}\n\n// The signature for a given pod after all of the results from plugins are consolidated.\ntype PodSignature []byte\n\n// SignPlugin is an interface that should be implemented by plugins that either filter or score\n// pods to enable batching and gang scheduling optimizations.\n// Each plugin returns SignFragments used to build a single signature per pod entering the scheduling cycle,\n// and the scheduler uses signatures to determine whether it can use a cached scheduling result, or needs to\n// recompute the prioritized nodes.\n//\n// If an enabled plugin that does Scoring, Prescoring, Filtering or Prefiltering does not implement this interface, we will turn off batching for all pods.\ntype SignPlugin interface {\n\tPlugin\n\t// SignPod returns SignFragments for use in batching. This is called at every scheduling cycle\n\t// and is used to construct a signature builder used for pods (KEP-5598).\n\t//\n\t// The sign fragments from all the plugins are combined to create a single signature. Only one fragment\n\t// with a given key will be included.\n\t//\n\t// Status means:\n\t//   - Success: the signer can sign the pod, accompanied by a set of signature fragments to be included.\n\t//   - Unschedulable: the signer refuses to sign the pod, meaning the pod is not eligible for opportunistic batching optimization.\n\t//   - Error: the signer hits something _unexpected_ and cannot build sign(s). 
The framework runtime just\n\t//     proceeds with scheduling this pod without the opportunistic batching optimization, as with Unschedulable,\n\t//     but also reports the error to the logs.\n\tSignPod(ctx context.Context, pod *v1.Pod) ([]SignFragment, *Status)\n}\n\n// GeneratePlacementsResult represents the result of the PlacementGeneratePlugin.\ntype GeneratePlacementsResult struct {\n\t// Placements is the set of placements that the plugin wants to partition the resources into.\n\t// The partitions can overlap.\n\t//\n\t// To represent no valid partitions, set the array to nil or empty.\n\tPlacements []*Placement\n}\n\n// PlacementGeneratePlugin is an interface for plugins that generate candidate Placements.\ntype PlacementGeneratePlugin interface {\n\tPlugin\n\n\t// GeneratePlacements generates a list of potential Placements for the given PodGroup within the parent placement.\n\t// Each Placement represents a candidate set of resources, e.g., nodes matching a selector.\n\tGeneratePlacements(ctx context.Context, state PodGroupCycleState, podGroup PodGroupInfo, parentPlacement *Placement) (*GeneratePlacementsResult, *Status)\n}\n\n// PlacementScore stores the result of a placement score plugin to be later used for normalization.\ntype PlacementScore struct {\n\t// Placement is the placement for which the score was computed.\n\tPlacement *Placement\n\t// Score is the score for a given placement, which is used to rank the placements and pick the best one.\n\tScore int64\n}\n\n// PlacementScoreExtensions is an interface for PlacementScore extended functionality.\ntype PlacementScoreExtensions interface {\n\t// NormalizePlacementScore is called for all placement scores produced by the same plugin's \"ScorePlacement\"\n\t// method. 
A successful run of NormalizePlacementScore will update the scores list and return\n\t// a success status.\n\tNormalizePlacementScore(ctx context.Context, state PodGroupCycleState, podGroup PodGroupInfo, placementScores []PlacementScore) *Status\n}\n\n// PlacementScorePlugin is an interface for plugins that score feasible Placements.\ntype PlacementScorePlugin interface {\n\tPlugin\n\n\t// ScorePlacement calculates a score for a given Placement.\n\t// This function is called only for Placements that have been deemed feasible for a sufficient number of pods in the PodGroup scheduling cycle.\n\t// The PodGroupAssignments indicates the node assigned to each pod within this Placement.\n\t// The returned score is an int64, with higher scores generally indicating more preferable Placements.\n\t// Plugins can implement various scoring strategies, such as bin packing to minimize resource fragmentation.\n\tScorePlacement(ctx context.Context, state PodGroupCycleState, podGroup PodGroupInfo, placement *PodGroupAssignments) (int64, *Status)\n\n\t// PlacementScoreExtensions returns a PlacementScoreExtensions interface if it implements one, or nil if it does not.\n\tPlacementScoreExtensions() PlacementScoreExtensions\n}\n\n// Handle provides data and some tools that plugins can use. It is\n// passed to the plugin factories at the time of plugin initialization. Plugins\n// must store and use this handle to call framework functions.\ntype Handle interface {\n\t// PodNominator abstracts operations to maintain nominated Pods.\n\tPodNominator\n\t// PluginsRunner abstracts operations to run some plugins.\n\tPluginsRunner\n\t// PodActivator abstracts operations in the scheduling queue.\n\tPodActivator\n\t// SnapshotSharedLister returns listers from the latest NodeInfo Snapshot. 
The snapshot\n\t// is taken at the beginning of a scheduling cycle and remains unchanged until\n\t// a pod finishes the \"Permit\" point.\n\t//\n\t// It should be used only during the scheduling cycle:\n\t// - There is no guarantee that the information remains unchanged in the binding phase of scheduling.\n\t//   So, plugins shouldn't use it in the binding cycle (pre-bind/bind/post-bind/un-reserve plugin)\n\t//   otherwise, a concurrent read/write error might occur.\n\t// - There is no guarantee that the information is always up-to-date.\n\t//   So, plugins shouldn't use it in QueueingHint and PreEnqueue\n\t//   otherwise, they might make a decision based on stale information.\n\t//\n\t// Instead, they should use the resources obtained from Informers created by SharedInformerFactory().\n\tSnapshotSharedLister() SharedLister\n\n\t// IterateOverWaitingPods acquires a read lock and iterates over the WaitingPods map.\n\tIterateOverWaitingPods(callback func(WaitingPod))\n\n\t// GetWaitingPod returns a waiting pod given its UID.\n\tGetWaitingPod(uid types.UID) WaitingPod\n\n\t// RejectWaitingPod rejects a waiting pod given its UID.\n\t// The return value indicates if the pod is waiting or not.\n\tRejectWaitingPod(uid types.UID) bool\n\n\t// AddPodInPreBind adds a pod to the pods in preBind list.\n\tAddPodInPreBind(uid types.UID, cancel context.CancelCauseFunc)\n\n\t// GetPodInPreBind returns a pod that is in the binding cycle but before it is bound given its UID.\n\tGetPodInPreBind(uid types.UID) PodInPreBind\n\n\t// RemovePodInPreBind removes a pod from the pods in preBind list.\n\tRemovePodInPreBind(uid types.UID)\n\n\t// ClientSet returns a kubernetes clientSet.\n\tClientSet() clientset.Interface\n\n\t// KubeConfig returns the raw kube config.\n\tKubeConfig() *restclient.Config\n\n\t// EventRecorder returns an event recorder.\n\tEventRecorder() events.EventRecorderLogger\n\n\tSharedInformerFactory() informers.SharedInformerFactory\n\n\t// SharedDRAManager can be used to 
obtain DRA objects, and track modifications to them in-memory - mainly by the DRA plugin.\n\t// A non-default implementation can be plugged into the framework to simulate the state of DRA objects.\n\tSharedDRAManager() SharedDRAManager\n\n\t// SharedCSIManager can be used to obtain CSINode objects, and track changes to them in-memory.\n\t// A non-default implementation can be plugged into the framework to simulate the state of CSINode objects.\n\tSharedCSIManager() CSIManager\n\n\t// RunFilterPluginsWithNominatedPods runs the set of configured filter plugins for nominated pod on the given node.\n\tRunFilterPluginsWithNominatedPods(ctx context.Context, state CycleState, pod *v1.Pod, info NodeInfo) *Status\n\n\t// Extenders returns registered scheduler extenders.\n\tExtenders() []Extender\n\n\t// Parallelizer returns a parallelizer holding parallelism for the scheduler.\n\tParallelizer() Parallelizer\n\n\t// APIDispatcher returns an APIDispatcher that can be used to dispatch API calls directly.\n\t// This is non-nil if the SchedulerAsyncAPICalls feature gate is enabled.\n\tAPIDispatcher() APIDispatcher\n\n\t// APICacher returns an APICacher that coordinates API calls with the scheduler's internal cache.\n\t// Use this to ensure the scheduler's view of the cluster remains consistent.\n\t// This is non-nil if the SchedulerAsyncAPICalls feature gate is enabled.\n\tAPICacher() APICacher\n\n\t// ProfileName returns the profile name associated with this profile.\n\tProfileName() string\n\n\t// PodGroupManager provides an interface for runtime information about pod groups from scheduler's cache.\n\tPodGroupManager() PodGroupManager\n\n\t// SignPod creates a PodSignature for a pod.\n\tSignPod(ctx context.Context, pod *v1.Pod) PodSignature\n}\n\n// Parallelizer helps run scheduling operations in parallel chunks where possible, to improve performance and CPU utilization.\ntype Parallelizer interface {\n\t// Until executes the given func doWorkPiece in parallel chunks, if applicable. 
The maximum number of chunks is the pieces parameter.\n\tUntil(ctx context.Context, pieces int, doWorkPiece workqueue.DoWorkPieceFunc, operation string)\n}\n\n// PodActivator abstracts operations in the scheduling queue.\ntype PodActivator interface {\n\t// Activate moves the given pods to activeQ.\n\t// If a pod isn't found in unschedulablePods or backoffQ and it's in-flight,\n\t// the wildcard event is registered so that the pod will be requeued when it comes back.\n\t// But, if a pod isn't found in unschedulablePods or backoffQ and it's not in-flight (i.e., completely unknown pod),\n\t// Activate would ignore the pod.\n\tActivate(logger klog.Logger, pods map[string]*v1.Pod)\n}\n\n// PodNominator abstracts operations to maintain nominated Pods.\ntype PodNominator interface {\n\t// AddNominatedPod adds the given pod to the nominator or\n\t// updates it if it already exists.\n\tAddNominatedPod(logger klog.Logger, pod PodInfo, nominatingInfo *NominatingInfo)\n\t// DeleteNominatedPodIfExists deletes nominatedPod from internal cache. It's a no-op if it doesn't exist.\n\tDeleteNominatedPodIfExists(pod *v1.Pod)\n\t// UpdateNominatedPod updates the <oldPod> with <newPod>.\n\tUpdateNominatedPod(logger klog.Logger, oldPod *v1.Pod, newPodInfo PodInfo)\n\t// NominatedPodsForNode returns nominatedPods on the given node.\n\tNominatedPodsForNode(nodeName string) []PodInfo\n}\n\n// PluginsRunner abstracts operations to run some plugins.\n// This is used by preemption PostFilter plugins when evaluating the feasibility of\n// scheduling the pod on nodes when certain running pods get evicted.\ntype PluginsRunner interface {\n\t// RunPreScorePlugins runs the set of configured PreScore plugins. 
If any\n\t// of these plugins returns any status other than \"Success\", the given pod is rejected.\n\tRunPreScorePlugins(context.Context, CycleState, *v1.Pod, []NodeInfo) *Status\n\t// RunScorePlugins runs the set of configured scoring plugins.\n\t// It returns a list that stores scores from each plugin and total score for each Node.\n\t// It also returns *Status, which is set to non-success if any of the plugins returns\n\t// a non-success status.\n\tRunScorePlugins(context.Context, CycleState, *v1.Pod, []NodeInfo) ([]NodePluginScores, *Status)\n\t// RunFilterPlugins runs the set of configured Filter plugins for pod on\n\t// the given node. Note that for the node being evaluated, the passed nodeInfo\n\t// reference could be different from the one in NodeInfoSnapshot map (e.g., pods\n\t// considered to be running on the node could be different). For example, during\n\t// preemption, we may pass a copy of the original nodeInfo object that has some pods\n\t// removed from it to evaluate the possibility of preempting them to\n\t// schedule the target pod.\n\tRunFilterPlugins(context.Context, CycleState, *v1.Pod, NodeInfo) *Status\n\t// RunPreFilterExtensionAddPod calls the AddPod interface for the set of configured\n\t// PreFilter plugins. It returns directly if any of the plugins return any\n\t// status other than Success.\n\tRunPreFilterExtensionAddPod(ctx context.Context, state CycleState, podToSchedule *v1.Pod, podInfoToAdd PodInfo, nodeInfo NodeInfo) *Status\n\t// RunPreFilterExtensionRemovePod calls the RemovePod interface for the set of configured\n\t// PreFilter plugins. It returns directly if any of the plugins return any\n\t// status other than Success.\n\tRunPreFilterExtensionRemovePod(ctx context.Context, state CycleState, podToSchedule *v1.Pod, podInfoToRemove PodInfo, nodeInfo NodeInfo) *Status\n}\n"
  },
  {
    "path": "framework/interface_test.go",
    "content": "/*\nCopyright 2019 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage framework\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"testing\"\n\n\t\"github.com/google/go-cmp/cmp\"\n\n\t\"k8s.io/apimachinery/pkg/util/sets\"\n)\n\nvar errorStatus = NewStatus(Error, \"internal error\")\nvar statusWithErr = AsStatus(errors.New(\"internal error\"))\n\nfunc TestStatus(t *testing.T) {\n\ttests := []struct {\n\t\tname              string\n\t\tstatus            *Status\n\t\texpectedCode      Code\n\t\texpectedMessage   string\n\t\texpectedIsSuccess bool\n\t\texpectedIsWait    bool\n\t\texpectedIsSkip    bool\n\t\texpectedAsError   error\n\t}{\n\t\t{\n\t\t\tname:              \"success status\",\n\t\t\tstatus:            NewStatus(Success, \"\"),\n\t\t\texpectedCode:      Success,\n\t\t\texpectedMessage:   \"\",\n\t\t\texpectedIsSuccess: true,\n\t\t\texpectedIsWait:    false,\n\t\t\texpectedIsSkip:    false,\n\t\t\texpectedAsError:   nil,\n\t\t},\n\t\t{\n\t\t\tname:              \"wait status\",\n\t\t\tstatus:            NewStatus(Wait, \"\"),\n\t\t\texpectedCode:      Wait,\n\t\t\texpectedMessage:   \"\",\n\t\t\texpectedIsSuccess: false,\n\t\t\texpectedIsWait:    true,\n\t\t\texpectedIsSkip:    false,\n\t\t\texpectedAsError:   nil,\n\t\t},\n\t\t{\n\t\t\tname:              \"error status\",\n\t\t\tstatus:            NewStatus(Error, \"unknown error\"),\n\t\t\texpectedCode:      Error,\n\t\t\texpectedMessage:   \"unknown 
error\",\n\t\t\texpectedIsSuccess: false,\n\t\t\texpectedIsWait:    false,\n\t\t\texpectedIsSkip:    false,\n\t\t\texpectedAsError:   errors.New(\"unknown error\"),\n\t\t},\n\t\t{\n\t\t\tname:              \"skip status\",\n\t\t\tstatus:            NewStatus(Skip, \"\"),\n\t\t\texpectedCode:      Skip,\n\t\t\texpectedMessage:   \"\",\n\t\t\texpectedIsSuccess: false,\n\t\t\texpectedIsWait:    false,\n\t\t\texpectedIsSkip:    true,\n\t\t\texpectedAsError:   nil,\n\t\t},\n\t\t{\n\t\t\tname:              \"nil status\",\n\t\t\tstatus:            nil,\n\t\t\texpectedCode:      Success,\n\t\t\texpectedMessage:   \"\",\n\t\t\texpectedIsSuccess: true,\n\t\t\texpectedIsWait:    false,\n\t\t\texpectedIsSkip:    false,\n\t\t\texpectedAsError:   nil,\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tif test.status.Code() != test.expectedCode {\n\t\t\t\tt.Errorf(\"status.Code() returns %v, but want %v\", test.status.Code(), test.expectedCode)\n\t\t\t}\n\n\t\t\tif test.status.Message() != test.expectedMessage {\n\t\t\t\tt.Errorf(\"status.Message() returns %v, but want %v\", test.status.Message(), test.expectedMessage)\n\t\t\t}\n\n\t\t\tif test.status.IsSuccess() != test.expectedIsSuccess {\n\t\t\t\tt.Errorf(\"status.IsSuccess() returns %v, but want %v\", test.status.IsSuccess(), test.expectedIsSuccess)\n\t\t\t}\n\n\t\t\tif test.status.IsWait() != test.expectedIsWait {\n\t\t\t\tt.Errorf(\"status.IsWait() returns %v, but want %v\", test.status.IsWait(), test.expectedIsWait)\n\t\t\t}\n\n\t\t\tif test.status.IsSkip() != test.expectedIsSkip {\n\t\t\t\tt.Errorf(\"status.IsSkip() returns %v, but want %v\", test.status.IsSkip(), test.expectedIsSkip)\n\t\t\t}\n\n\t\t\tif errors.Is(test.status.AsError(), test.expectedAsError) {\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tif test.status.AsError().Error() != test.expectedAsError.Error() {\n\t\t\t\tt.Errorf(\"status.AsError() returns %v, but want %v\", test.status.AsError(), test.expectedAsError)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc 
TestPreFilterResultMerge(t *testing.T) {\n\ttests := map[string]struct {\n\t\treceiver *PreFilterResult\n\t\tin       *PreFilterResult\n\t\twant     *PreFilterResult\n\t}{\n\t\t\"all nil\": {},\n\t\t\"nil receiver empty input\": {\n\t\t\tin:   &PreFilterResult{NodeNames: sets.New[string]()},\n\t\t\twant: &PreFilterResult{NodeNames: sets.New[string]()},\n\t\t},\n\t\t\"empty receiver nil input\": {\n\t\t\treceiver: &PreFilterResult{NodeNames: sets.New[string]()},\n\t\t\twant:     &PreFilterResult{NodeNames: sets.New[string]()},\n\t\t},\n\t\t\"empty receiver empty input\": {\n\t\t\treceiver: &PreFilterResult{NodeNames: sets.New[string]()},\n\t\t\tin:       &PreFilterResult{NodeNames: sets.New[string]()},\n\t\t\twant:     &PreFilterResult{NodeNames: sets.New[string]()},\n\t\t},\n\t\t\"nil receiver populated input\": {\n\t\t\tin:   &PreFilterResult{NodeNames: sets.New(\"node1\")},\n\t\t\twant: &PreFilterResult{NodeNames: sets.New(\"node1\")},\n\t\t},\n\t\t\"empty receiver populated input\": {\n\t\t\treceiver: &PreFilterResult{NodeNames: sets.New[string]()},\n\t\t\tin:       &PreFilterResult{NodeNames: sets.New(\"node1\")},\n\t\t\twant:     &PreFilterResult{NodeNames: sets.New[string]()},\n\t\t},\n\n\t\t\"populated receiver nil input\": {\n\t\t\treceiver: &PreFilterResult{NodeNames: sets.New(\"node1\")},\n\t\t\twant:     &PreFilterResult{NodeNames: sets.New(\"node1\")},\n\t\t},\n\t\t\"populated receiver empty input\": {\n\t\t\treceiver: &PreFilterResult{NodeNames: sets.New(\"node1\")},\n\t\t\tin:       &PreFilterResult{NodeNames: sets.New[string]()},\n\t\t\twant:     &PreFilterResult{NodeNames: sets.New[string]()},\n\t\t},\n\t\t\"populated receiver and input\": {\n\t\t\treceiver: &PreFilterResult{NodeNames: sets.New(\"node1\", \"node2\")},\n\t\t\tin:       &PreFilterResult{NodeNames: sets.New(\"node2\", \"node3\")},\n\t\t\twant:     &PreFilterResult{NodeNames: sets.New(\"node2\")},\n\t\t},\n\t}\n\tfor name, test := range tests {\n\t\tt.Run(name, func(t *testing.T) 
{\n\t\t\tgot := test.receiver.Merge(test.in)\n\t\t\tif diff := cmp.Diff(test.want, got); diff != \"\" {\n\t\t\t\tt.Errorf(\"unexpected diff (-want, +got):\\n%s\", diff)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestIsStatusEqual(t *testing.T) {\n\ttests := []struct {\n\t\tname string\n\t\tx, y *Status\n\t\twant bool\n\t}{\n\t\t{\n\t\t\tname: \"two nil should be equal\",\n\t\t\tx:    nil,\n\t\t\ty:    nil,\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"nil should be equal to success status\",\n\t\t\tx:    nil,\n\t\t\ty:    NewStatus(Success),\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"nil should not be equal with status except success\",\n\t\t\tx:    nil,\n\t\t\ty:    NewStatus(Error, \"internal error\"),\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"one status should be equal to itself\",\n\t\t\tx:    errorStatus,\n\t\t\ty:    errorStatus,\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"same type statuses without reasons should be equal\",\n\t\t\tx:    NewStatus(Success),\n\t\t\ty:    NewStatus(Success),\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"statuses with same message should be equal\",\n\t\t\tx:    NewStatus(Unschedulable, \"unschedulable\"),\n\t\t\ty:    NewStatus(Unschedulable, \"unschedulable\"),\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"error statuses with same message should be equal\",\n\t\t\tx:    NewStatus(Error, \"error\"),\n\t\t\ty:    NewStatus(Error, \"error\"),\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"statuses with different reasons should not be equal\",\n\t\t\tx:    NewStatus(Unschedulable, \"unschedulable\"),\n\t\t\ty:    NewStatus(Unschedulable, \"unschedulable\", \"injected filter status\"),\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"statuses with different codes should not be equal\",\n\t\t\tx:    NewStatus(Error, \"internal error\"),\n\t\t\ty:    NewStatus(Unschedulable, \"internal error\"),\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"wrap error status should be equal with original one\",\n\t\t\tx:    
statusWithErr,\n\t\t\ty:    AsStatus(fmt.Errorf(\"error: %w\", statusWithErr.AsError())),\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"statuses with different errors that have the same message shouldn't be equal\",\n\t\t\tx:    AsStatus(errors.New(\"error\")),\n\t\t\ty:    AsStatus(errors.New(\"error\")),\n\t\t\twant: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := tt.x.Equal(tt.y); got != tt.want {\n\t\t\t\tt.Errorf(\"Status.Equal() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "framework/listers.go",
    "content": "/*\nCopyright 2019 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage framework\n\nimport (\n\tv1 \"k8s.io/api/core/v1\"\n\tresourceapi \"k8s.io/api/resource/v1\"\n\tschedulingapi \"k8s.io/api/scheduling/v1alpha2\"\n\tstoragev1 \"k8s.io/api/storage/v1\"\n\t\"k8s.io/apimachinery/pkg/types\"\n\t\"k8s.io/apimachinery/pkg/util/sets\"\n\t\"k8s.io/dynamic-resource-allocation/structured\"\n)\n\n// NodeInfoLister interface represents anything that can list/get NodeInfo objects from node name.\ntype NodeInfoLister interface {\n\t// List returns the list of NodeInfos.\n\tList() ([]NodeInfo, error)\n\t// HavePodsWithAffinityList returns the list of NodeInfos of nodes with pods with affinity terms.\n\tHavePodsWithAffinityList() ([]NodeInfo, error)\n\t// HavePodsWithRequiredAntiAffinityList returns the list of NodeInfos of nodes with pods with required anti-affinity terms.\n\tHavePodsWithRequiredAntiAffinityList() ([]NodeInfo, error)\n\t// Get returns the NodeInfo of the given node name.\n\tGet(nodeName string) (NodeInfo, error)\n}\n\n// StorageInfoLister interface represents anything that handles storage-related operations and resources.\ntype StorageInfoLister interface {\n\t// IsPVCUsedByPods returns true/false on whether the PVC is used by one or more scheduled pods,\n\t// keyed in the format \"namespace/name\".\n\tIsPVCUsedByPods(key string) bool\n}\n\n// SharedLister groups scheduler-specific listers.\ntype SharedLister interface 
{\n\tNodeInfos() NodeInfoLister\n\tStorageInfos() StorageInfoLister\n\tPodGroupStates() PodGroupStateLister\n}\n\n// PodGroupStateLister provides read access to pod group states.\ntype PodGroupStateLister interface {\n\t// Get returns the PodGroupState of the given pod group.\n\tGet(namespace string, podGroupName string) (PodGroupState, error)\n}\n\n// CSINodeLister can be used to obtain CSINode objects.\ntype CSINodeLister interface {\n\t// List returns a list of all CSINodes.\n\tList() ([]*storagev1.CSINode, error)\n\t// Get returns the CSINode with the given name.\n\tGet(name string) (*storagev1.CSINode, error)\n}\n\n// ResourceSliceLister can be used to obtain ResourceSlices.\ntype ResourceSliceLister interface {\n\t// ListWithDeviceTaintRules returns a list of all ResourceSlices with DeviceTaintRules applied\n\t// if the DRADeviceTaints feature is enabled, otherwise without them.\n\t//\n\t// k8s.io/dynamic-resource-allocation/resourceslice/tracker provides an implementation\n\t// of the necessary logic. That tracker can be instantiated as a replacement for\n\t// a normal ResourceSlice informer and provides a ListPatchedResourceSlices method.\n\tListWithDeviceTaintRules() ([]*resourceapi.ResourceSlice, error)\n}\n\n// DeviceClassLister can be used to obtain DeviceClasses.\ntype DeviceClassLister interface {\n\t// List returns a list of all DeviceClasses.\n\tList() ([]*resourceapi.DeviceClass, error)\n\t// Get returns the DeviceClass with the given className.\n\tGet(className string) (*resourceapi.DeviceClass, error)\n}\n\n// ResourceClaimTracker can be used to obtain ResourceClaims, and track changes to ResourceClaims in-memory.\n//\n// If the claims are meant to be allocated in the API during the binding phase (when used by scheduler), the tracker helps avoid\n// race conditions between scheduling and binding phases (as well as between the binding phase and the informer cache update).\n//\n// If the binding phase is not run (e.g. 
when used by Cluster Autoscaler which only runs the scheduling phase, and simulates binding in-memory),\n// the tracker allows the framework user to obtain the claim allocations produced by the DRA plugin, and persist them outside of the API (e.g. in-memory).\ntype ResourceClaimTracker interface {\n\t// List lists ResourceClaims. The result is guaranteed to immediately include any changes made via AssumeClaimAfterAPICall(),\n\t// and SignalClaimPendingAllocation().\n\tList() ([]*resourceapi.ResourceClaim, error)\n\t// Get works like List(), but for a single claim.\n\tGet(namespace, claimName string) (*resourceapi.ResourceClaim, error)\n\t// ListAllAllocatedDevices lists all allocated Devices from allocated ResourceClaims. The result is guaranteed to immediately include\n\t// any changes made via AssumeClaimAfterAPICall(), and SignalClaimPendingAllocation().\n\tListAllAllocatedDevices() (sets.Set[structured.DeviceID], error)\n\t// GatherAllocatedState gathers information about allocated devices from allocated ResourceClaims. The result is guaranteed to immediately include\n\t// any changes made via AssumeClaimAfterAPICall(), and SignalClaimPendingAllocation().\n\tGatherAllocatedState() (*structured.AllocatedState, error)\n\n\t// SignalClaimPendingAllocation signals to the tracker that the given ResourceClaim will be allocated via an API call in the\n\t// binding phase, therefore the given ResourceClaim must be non-nil and have a non-nil Status.Allocation.\n\t// If the claim already has a pending allocation, then the allocation becomes shared. 
The same number of SignalClaimPendingAllocation() callers\n\t// for a given claimUID is expected to eventually call MaybeRemoveClaimPendingAllocation() for that claimUID.\n\t// This change is immediately reflected in the result of List() and the other accessors.\n\tSignalClaimPendingAllocation(claimUID types.UID, allocatedClaim *resourceapi.ResourceClaim) error\n\t// GetPendingAllocation returns the pending allocation for the given claim during the binding phase, or nil if there is none.\n\t// It can be used to avoid race conditions in subsequent scheduling phases.\n\tGetPendingAllocation(claimUID types.UID) *resourceapi.AllocationResult\n\t// MaybeRemoveClaimPendingAllocation might remove the pending allocation for the given ResourceClaim from the tracker if any was signaled via\n\t// SignalClaimPendingAllocation(). When `forceRemove` is true, it always removes the pending allocation. Otherwise, it removes the pending\n\t// allocation only when no other pods are still using that pending allocation (from SignalClaimPendingAllocation and AcquirePendingAllocation).\n\t// Returns whether there was a pending allocation and it was removed.\n\t// List() and the other accessors immediately stop reflecting the pending allocation in the results when the pending allocation is removed.\n\tMaybeRemoveClaimPendingAllocation(claimUID types.UID, forceRemove bool) (deleted bool)\n\n\t// AssumeClaimAfterAPICall signals to the tracker that an API call modifying the given ResourceClaim was made in the binding phase, and the\n\t// changes should be reflected in informers very soon. 
This change is immediately reflected in the result of List() and the other accessors.\n\t// This mechanism can be used to avoid race conditions between the informer update and subsequent scheduling phases.\n\tAssumeClaimAfterAPICall(claim *resourceapi.ResourceClaim) error\n\t// AssumedClaimRestore signals to the tracker that something went wrong with the API call modifying the given ResourceClaim, and\n\t// the changes won't be reflected in informers after all. List() and the other accessors immediately stop reflecting the assumed change,\n\t// and go back to the informer version.\n\tAssumedClaimRestore(namespace, claimName string)\n}\n\n// DeviceClassResolver resolves device class names from extended resource names.\ntype DeviceClassResolver interface {\n\t// GetDeviceClass returns the device class for the given extended resource name.\n\t// Returns nil if no mapping exists for the resource name or\n\t// the DRAExtendedResource feature is disabled.\n\tGetDeviceClass(resourceName v1.ResourceName) *resourceapi.DeviceClass\n}\n\n// PodGroupLister can be used to obtain PodGroups.\ntype PodGroupLister interface {\n\t// Get returns the PodGroup with the given podGroupName.\n\tGet(namespace, podGroupName string) (*schedulingapi.PodGroup, error)\n}\n\n// SharedDRAManager can be used to obtain DRA objects, and track modifications to them in-memory - mainly by the DRA plugin.\n// The plugin's default implementation obtains the objects from the API. A different implementation can be\n// plugged into the framework in order to simulate the state of DRA objects. 
For example, Cluster Autoscaler\n// can use this to provide the correct DRA object state to the DRA plugin when simulating scheduling changes in-memory.\ntype SharedDRAManager interface {\n\tResourceClaims() ResourceClaimTracker\n\tResourceSlices() ResourceSliceLister\n\tDeviceClasses() DeviceClassLister\n\tDeviceClassResolver() DeviceClassResolver\n\tPodGroups() PodGroupLister\n}\n\n// CSIManager can be used to obtain CSINode objects, and track changes to CSINode objects in-memory.\n// The plugin's default implementation obtains the objects from the API. A different implementation can be\n// plugged into the framework in order to simulate the state of CSINode objects. For example, Cluster Autoscaler\n// can use this to provide the correct CSINode object state to the CSINode plugin when simulating scheduling changes in-memory.\ntype CSIManager interface {\n\tCSINodes() CSINodeLister\n}\n\n// PodGroupManager provides an interface for runtime information about pod groups in the scheduler cache.\ntype PodGroupManager interface {\n\t// PodGroupStates returns the PodGroupStateLister.\n\tPodGroupStates() PodGroupStateLister\n}\n\n// PodGroupState provides an interface to view the state of a single pod group.\ntype PodGroupState interface {\n\t// AllPods returns the UIDs of all pods known to the scheduler for this group.\n\tAllPods() sets.Set[types.UID]\n\t// AllPodsCount returns the number of all pods known to the scheduler for this group.\n\tAllPodsCount() int\n\t// UnscheduledPods returns all pods that are unscheduled for this group,\n\t// i.e., are neither assumed nor assigned.\n\t// The returned map type corresponds to the argument of the PodActivator.Activate method.\n\tUnscheduledPods() map[string]*v1.Pod\n\t// AssumedPods returns the UIDs of all pods for this group in the \"assumed\" state,\n\t// i.e., passed the Reserve gate.\n\tAssumedPods() sets.Set[types.UID]\n\t// AssignedPods returns the UIDs of all pods already assigned (bound) for this 
group.\n\tAssignedPods() sets.Set[types.UID]\n\t// ScheduledPods returns the pods that are either assumed or assigned for this pod group.\n\tScheduledPods() []*v1.Pod\n\t// ScheduledPodsCount returns the number of pods for this group that are either assumed or assigned.\n\tScheduledPodsCount() int\n}\n"
  },
  {
    "path": "framework/signers.go",
    "content": "/*\nCopyright 2025 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage framework\n\nimport (\n\t\"encoding/json\"\n\t\"slices\"\n\t\"sort\"\n\n\tv1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/util/sets\"\n)\n\n// This file contains the names and implementations of various functions used\n// to compute pod signatures for use in batching and other scheduling optimizations.\n// See the definition of BatchablePlugin for more details.\n\n// Signer names\nconst (\n\tDynamicResourcesSignerName = \"v1.Pod.Spec.DynamicResources\"\n\tImageNamesSignerName       = \"v1.Pod.Spec.CanonicalImageNames()\"\n\tLabelsSignerName           = \"v1.Pod.Labels\"\n\tNodeNameSignerName         = \"v1.Pod.Spec.NodeName\"\n\tNodeAffinitySignerName     = \"v1.Pod.Spec.Affinity.NodeAffinity\"\n\tNodeSelectorSignerName     = \"v1.Pod.Spec.Affinity.NodeSelector\"\n\tHostPortsSignerName        = \"v1.Pod.Spec.HostPorts()\"\n\tResourcesSignerName        = \"v1.Pod.Spec.ContainerRequestsAndOverheads()\"\n\tSchedulerNameSignerName    = \"v1.Pod.Spec.SchedulerName\"\n\tTolerationsSignerName      = \"v1.Pod.Spec.Tolerations\"\n\tVolumesSignerName          = \"v1.Pod.Spec.Volumes.NonSyntheticSources()\"\n\tFeaturesSignerName         = \"v1.Pod.Spec.RequiredFeatures()\"\n)\n\n// Common signers. 
These are either generic or shared across plugins.\n\nfunc HostPortsSigner(pod *v1.Pod) any {\n\tportSet := sets.New[int32]()\n\tcontainers := []v1.Container{}\n\tcontainers = append(containers, pod.Spec.Containers...)\n\tcontainers = append(containers, pod.Spec.InitContainers...)\n\tfor _, container := range containers {\n\t\tfor _, port := range container.Ports {\n\t\t\tif port.HostPort != 0 {\n\t\t\t\tportSet.Insert(port.HostPort)\n\t\t\t}\n\t\t}\n\t}\n\tports := portSet.UnsortedList()\n\tslices.Sort(ports)\n\treturn ports\n}\n\nfunc NodeSelectorRequirementsSigner(reqs []v1.NodeSelectorRequirement) ([]string, error) {\n\tret := make([]string, len(reqs))\n\tfor i, req := range reqs {\n\t\tt := req.DeepCopy()\n\t\tslices.Sort(t.Values)\n\t\tv, err := json.Marshal(t)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tret[i] = string(v)\n\t}\n\tslices.Sort(ret)\n\treturn ret, nil\n}\n\ntype nodeSelTermSignResult struct {\n\tMatchExpressions []string\n\tMatchFields      []string\n}\n\nfunc NodeSelectorTermSigner(t *v1.NodeSelectorTerm) (nodeSelTermSignResult, error) {\n\texp, err := NodeSelectorRequirementsSigner(t.MatchExpressions)\n\tif err != nil {\n\t\treturn nodeSelTermSignResult{}, err\n\t}\n\tfld, err := NodeSelectorRequirementsSigner(t.MatchFields)\n\tif err != nil {\n\t\treturn nodeSelTermSignResult{}, err\n\t}\n\treturn nodeSelTermSignResult{\n\t\tMatchExpressions: exp,\n\t\tMatchFields:      fld,\n\t}, nil\n}\n\ntype prefSchedTermSignResult struct {\n\tWeight     int32\n\tPreference nodeSelTermSignResult\n}\n\nfunc PreferredSchedulingTermSigner(terms []v1.PreferredSchedulingTerm) ([]string, error) {\n\tnewTerms := make([]string, len(terms))\n\tfor i, t := range terms {\n\t\tpref, err := NodeSelectorTermSigner(&t.Preference)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\ttermStr, err := json.Marshal(prefSchedTermSignResult{\n\t\t\tWeight:     t.Weight,\n\t\t\tPreference: pref,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn nil, 
err\n\t\t}\n\t\tnewTerms[i] = string(termStr)\n\t}\n\tslices.Sort(newTerms)\n\treturn newTerms, nil\n}\n\nfunc NodeSelectorTermsSigner(terms []v1.NodeSelectorTerm) ([]string, error) {\n\treq := make([]string, len(terms))\n\tfor i, t := range terms {\n\t\tnst, err := NodeSelectorTermSigner(&t)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\ttStr, err := json.Marshal(nst)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treq[i] = string(tStr)\n\t}\n\tslices.Sort(req)\n\n\treturn req, nil\n}\n\ntype nodeAffinitySignerResult struct {\n\tRequired  []string\n\tPreferred []string\n}\n\nfunc NodeAffinitySigner(pod *v1.Pod) (any, error) {\n\tif pod.Spec.Affinity != nil {\n\t\tif pod.Spec.Affinity.NodeAffinity != nil {\n\t\t\tn := pod.Spec.Affinity.NodeAffinity\n\t\t\tpref := []string{}\n\t\t\tvar err error\n\t\t\tif n.PreferredDuringSchedulingIgnoredDuringExecution != nil {\n\t\t\t\tpref, err = PreferredSchedulingTermSigner(n.PreferredDuringSchedulingIgnoredDuringExecution)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treq := []string{}\n\t\t\tif n.RequiredDuringSchedulingIgnoredDuringExecution != nil {\n\t\t\t\treq, err = NodeSelectorTermsSigner(n.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn nodeAffinitySignerResult{\n\t\t\t\tRequired:  req,\n\t\t\t\tPreferred: pref,\n\t\t\t}, nil\n\t\t}\n\t}\n\treturn nil, nil\n}\n\nfunc TolerationsSigner(pod *v1.Pod) any {\n\tret := []v1.Toleration{}\n\tret = append(ret, pod.Spec.Tolerations...)\n\tsort.Slice(ret, func(i, j int) bool {\n\t\treturn ret[i].Key < ret[j].Key || (ret[i].Key == ret[j].Key && ret[i].Value < ret[j].Value)\n\t})\n\treturn ret\n}\n\n// We special case volumes because config and secret volumes don't\n// impact scheduling but are very specific to individual pods. 
If we\n// don't exclude them no pods will have matching signatures.\nfunc VolumesSigner(pod *v1.Pod) any {\n\tret := []string{}\n\tfor _, vol := range pod.Spec.Volumes {\n\t\tif vol.VolumeSource.ConfigMap == nil && vol.VolumeSource.Secret == nil {\n\t\t\tvolStr, err := json.Marshal(vol.VolumeSource)\n\t\t\tif err != nil {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tret = append(ret, string(volStr))\n\t\t}\n\t}\n\tslices.Sort(ret)\n\treturn ret\n}\n"
  },
  {
    "path": "framework/signers_test.go",
    "content": "/*\nCopyright 2025 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage framework\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"reflect\"\n\t\"testing\"\n\n\t\"github.com/google/go-cmp/cmp\"\n\tv1 \"k8s.io/api/core/v1\"\n)\n\nfunc TestHostPortsSigner(t *testing.T) {\n\ttests := []struct {\n\t\tname string\n\t\tpod  *v1.Pod\n\t\twant []int32\n\t}{\n\t\t{\n\t\t\tname: \"no containers\",\n\t\t\tpod:  &v1.Pod{},\n\t\t\twant: []int32{},\n\t\t},\n\t\t{\n\t\t\tname: \"containers without host ports\",\n\t\t\tpod: &v1.Pod{\n\t\t\t\tSpec: v1.PodSpec{\n\t\t\t\t\tContainers: []v1.Container{\n\t\t\t\t\t\t{Ports: []v1.ContainerPort{{ContainerPort: 80}}},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: []int32{},\n\t\t},\n\t\t{\n\t\t\tname: \"single container with host port\",\n\t\t\tpod: &v1.Pod{\n\t\t\t\tSpec: v1.PodSpec{\n\t\t\t\t\tContainers: []v1.Container{\n\t\t\t\t\t\t{Ports: []v1.ContainerPort{{HostPort: 8080}}},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: []int32{8080},\n\t\t},\n\t\t{\n\t\t\tname: \"multiple containers, unsorted host ports, duplicates\",\n\t\t\tpod: &v1.Pod{\n\t\t\t\tSpec: v1.PodSpec{\n\t\t\t\t\tInitContainers: []v1.Container{\n\t\t\t\t\t\t{Ports: []v1.ContainerPort{{HostPort: 9090}}},\n\t\t\t\t\t},\n\t\t\t\t\tContainers: []v1.Container{\n\t\t\t\t\t\t{Ports: []v1.ContainerPort{{HostPort: 80}}},\n\t\t\t\t\t\t{Ports: []v1.ContainerPort{{HostPort: 443}, {HostPort: 80}}}, // duplicate 
80\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: []int32{80, 443, 9090},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot := HostPortsSigner(tt.pod)\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"HostPortsSigner() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestTolerationsSigner(t *testing.T) {\n\ttests := []struct {\n\t\tname string\n\t\tpod  *v1.Pod\n\t\twant []v1.Toleration\n\t}{\n\t\t{\n\t\t\tname: \"no tolerations\",\n\t\t\tpod:  &v1.Pod{},\n\t\t\twant: []v1.Toleration{},\n\t\t},\n\t\t{\n\t\t\tname: \"single toleration\",\n\t\t\tpod: &v1.Pod{\n\t\t\t\tSpec: v1.PodSpec{\n\t\t\t\t\tTolerations: []v1.Toleration{\n\t\t\t\t\t\t{Key: \"key1\", Value: \"value1\", Effect: v1.TaintEffectNoSchedule},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: []v1.Toleration{\n\t\t\t\t{Key: \"key1\", Value: \"value1\", Effect: v1.TaintEffectNoSchedule},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"multiple tolerations, unsorted\",\n\t\t\tpod: &v1.Pod{\n\t\t\t\tSpec: v1.PodSpec{\n\t\t\t\t\tTolerations: []v1.Toleration{\n\t\t\t\t\t\t{Key: \"b\", Value: \"2\"},\n\t\t\t\t\t\t{Key: \"a\", Value: \"1\"},\n\t\t\t\t\t\t{Key: \"b\", Value: \"1\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: []v1.Toleration{\n\t\t\t\t{Key: \"a\", Value: \"1\"},\n\t\t\t\t{Key: \"b\", Value: \"1\"},\n\t\t\t\t{Key: \"b\", Value: \"2\"},\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot := TolerationsSigner(tt.pod)\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"TolerationsSigner() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestVolumesSigner(t *testing.T) {\n\thostPath := v1.VolumeSource{HostPath: &v1.HostPathVolumeSource{Path: \"/tmp\"}}\n\temptyDir := v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}}\n\tconfigMap := v1.VolumeSource{ConfigMap: &v1.ConfigMapVolumeSource{LocalObjectReference: v1.LocalObjectReference{Name: 
\"cm\"}}}\n\tsecret := v1.VolumeSource{Secret: &v1.SecretVolumeSource{SecretName: \"secret\"}}\n\n\tmarshal := func(vs v1.VolumeSource) string {\n\t\tb, _ := json.Marshal(vs)\n\t\treturn string(b)\n\t}\n\n\ttests := []struct {\n\t\tname string\n\t\tpod  *v1.Pod\n\t\twant []string\n\t}{\n\t\t{\n\t\t\tname: \"no volumes\",\n\t\t\tpod:  &v1.Pod{},\n\t\t\twant: []string{},\n\t\t},\n\t\t{\n\t\t\tname: \"only ignored volumes (ConfigMap, Secret)\",\n\t\t\tpod: &v1.Pod{\n\t\t\t\tSpec: v1.PodSpec{\n\t\t\t\t\tVolumes: []v1.Volume{\n\t\t\t\t\t\t{Name: \"v1\", VolumeSource: configMap},\n\t\t\t\t\t\t{Name: \"v2\", VolumeSource: secret},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: []string{},\n\t\t},\n\t\t{\n\t\t\tname: \"mixed volumes, should be filtered and sorted\",\n\t\t\tpod: &v1.Pod{\n\t\t\t\tSpec: v1.PodSpec{\n\t\t\t\t\tVolumes: []v1.Volume{\n\t\t\t\t\t\t{Name: \"v1\", VolumeSource: hostPath},  // e.g. {\"hostPath\":{\"path\":\"/tmp\"}}\n\t\t\t\t\t\t{Name: \"v2\", VolumeSource: configMap}, // ignored\n\t\t\t\t\t\t{Name: \"v3\", VolumeSource: emptyDir},  // e.g. 
{\"emptyDir\":{}}\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\t// Expected sort order depends on the exact JSON string.\n\t\t\t// {\"emptyDir\":{}} comes before {\"hostPath\":...} alphabetically.\n\t\t\twant: []string{marshal(emptyDir), marshal(hostPath)},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot := VolumesSigner(tt.pod)\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"VolumesSigner() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestNodeAffinitySigner(t *testing.T) {\n\ttable := []struct {\n\t\tname        string\n\t\tinput       *v1.Pod\n\t\texpected    any\n\t\texpectedErr error\n\t}{\n\t\t{\n\t\t\tname:        \"nil affinity\",\n\t\t\tinput:       &v1.Pod{},\n\t\t\texpected:    nil,\n\t\t\texpectedErr: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"empty affinity\",\n\t\t\tinput: &v1.Pod{\n\t\t\t\tSpec: v1.PodSpec{\n\t\t\t\t\tAffinity: &v1.Affinity{\n\t\t\t\t\t\tNodeAffinity: &v1.NodeAffinity{\n\t\t\t\t\t\t\tRequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpected:    nodeAffinitySignerResult{Required: []string{}, Preferred: []string{}},\n\t\t\texpectedErr: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"affinity unsorted\",\n\t\t\tinput: &v1.Pod{\n\t\t\t\tSpec: v1.PodSpec{\n\t\t\t\t\tAffinity: &v1.Affinity{\n\t\t\t\t\t\tNodeAffinity: &v1.NodeAffinity{\n\t\t\t\t\t\t\tRequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{\n\t\t\t\t\t\t\t\tNodeSelectorTerms: []v1.NodeSelectorTerm{\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tMatchExpressions: []v1.NodeSelectorRequirement{\n\t\t\t\t\t\t\t\t\t\t\t{Key: \"kk3\", Operator: v1.NodeSelectorOpIn, Values: []string{\"v3\", \"kv4\"}},\n\t\t\t\t\t\t\t\t\t\t\t{Key: \"kk2\", Operator: v1.NodeSelectorOpIn, Values: []string{\"kv1\", \"v2\"}},\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\tMatchFields: []v1.NodeSelectorRequirement{\n\t\t\t\t\t\t\t\t\t\t\t{Key: \"kk1\", Operator: 
v1.NodeSelectorOpIn, Values: []string{\"kv3\", \"v4\"}},\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\tMatchExpressions: []v1.NodeSelectorRequirement{\n\t\t\t\t\t\t\t\t\t\t\t{Key: \"k2\", Operator: v1.NodeSelectorOpIn, Values: []string{\"v1\", \"v2\"}},\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\tMatchFields: []v1.NodeSelectorRequirement{\n\t\t\t\t\t\t\t\t\t\t\t{Key: \"k1\", Operator: v1.NodeSelectorOpIn, Values: []string{\"v3\", \"v4\"}},\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\tPreferredDuringSchedulingIgnoredDuringExecution: []v1.PreferredSchedulingTerm{\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tWeight: 3,\n\t\t\t\t\t\t\t\t\tPreference: v1.NodeSelectorTerm{\n\t\t\t\t\t\t\t\t\t\tMatchExpressions: []v1.NodeSelectorRequirement{\n\t\t\t\t\t\t\t\t\t\t\t{Key: \"ppk2\", Operator: v1.NodeSelectorOpIn, Values: []string{\"ppv1\", \"v2\"}},\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\tMatchFields: []v1.NodeSelectorRequirement{\n\t\t\t\t\t\t\t\t\t\t\t{Key: \"ppk1\", Operator: v1.NodeSelectorOpIn, Values: []string{\"ppv3\", \"v4\"}},\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tWeight: 1,\n\t\t\t\t\t\t\t\t\tPreference: v1.NodeSelectorTerm{\n\t\t\t\t\t\t\t\t\t\tMatchExpressions: []v1.NodeSelectorRequirement{\n\t\t\t\t\t\t\t\t\t\t\t{Key: \"pk2\", Operator: v1.NodeSelectorOpIn, Values: []string{\"pv1\", \"v2\"}},\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\tMatchFields: []v1.NodeSelectorRequirement{\n\t\t\t\t\t\t\t\t\t\t\t{Key: \"pk1\", Operator: v1.NodeSelectorOpIn, Values: []string{\"pv3\", \"v4\"}},\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpected: nodeAffinitySignerResult{\n\t\t\t\tRequired: 
[]string{\n\t\t\t\t\t`{\"MatchExpressions\":[\"{\\\"key\\\":\\\"k2\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"v1\\\",\\\"v2\\\"]}\"],\"MatchFields\":[\"{\\\"key\\\":\\\"k1\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"v3\\\",\\\"v4\\\"]}\"]}`,\n\t\t\t\t\t`{\"MatchExpressions\":[\"{\\\"key\\\":\\\"kk2\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kv1\\\",\\\"v2\\\"]}\",\"{\\\"key\\\":\\\"kk3\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kv4\\\",\\\"v3\\\"]}\"],\"MatchFields\":[\"{\\\"key\\\":\\\"kk1\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"kv3\\\",\\\"v4\\\"]}\"]}`,\n\t\t\t\t},\n\t\t\t\tPreferred: []string{\n\t\t\t\t\t`{\"Weight\":1,\"Preference\":{\"MatchExpressions\":[\"{\\\"key\\\":\\\"pk2\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"pv1\\\",\\\"v2\\\"]}\"],\"MatchFields\":[\"{\\\"key\\\":\\\"pk1\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"pv3\\\",\\\"v4\\\"]}\"]}}`,\n\t\t\t\t\t`{\"Weight\":3,\"Preference\":{\"MatchExpressions\":[\"{\\\"key\\\":\\\"ppk2\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"ppv1\\\",\\\"v2\\\"]}\"],\"MatchFields\":[\"{\\\"key\\\":\\\"ppk1\\\",\\\"operator\\\":\\\"In\\\",\\\"values\\\":[\\\"ppv3\\\",\\\"v4\\\"]}\"]}}`,\n\t\t\t\t},\n\t\t\t},\n\t\t\texpectedErr: nil,\n\t\t},\n\t}\n\n\tfor _, tt := range table {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tres, err := NodeAffinitySigner(tt.input)\n\t\t\tif !errors.Is(err, tt.expectedErr) {\n\t\t\t\tt.Fatalf(\"unexpected error %v, expected %v\", err, tt.expectedErr)\n\t\t\t}\n\t\t\tif diff := cmp.Diff(res, tt.expected); diff != \"\" {\n\t\t\t\tt.Fatalf(\"unexpected result %s\", diff)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "framework/types.go",
    "content": "/*\nCopyright 2025 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage framework\n\nimport (\n\t\"fmt\"\n\t\"time\"\n\n\tv1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\t\"k8s.io/apimachinery/pkg/types\"\n\t\"k8s.io/apimachinery/pkg/util/sets\"\n\tndf \"k8s.io/component-helpers/nodedeclaredfeatures\"\n\t\"k8s.io/klog/v2\"\n)\n\n// ActionType is an integer to represent one type of resource change.\n// Different ActionTypes can be bit-wised to compose new semantics.\ntype ActionType int64\n\n// Constants for ActionTypes.\n// CAUTION for contributors: When you add a new ActionType, you must update the following:\n// - The list of basicActionTypes, podActionTypes, and nodeActionTypes at k/k/pkg/scheduler/framework/types.go\n// - String() method.\nconst (\n\tAdd ActionType = 1 << iota\n\tDelete\n\n\t// UpdateNodeXYZ is only applicable for Node events.\n\t// If you use UpdateNodeXYZ,\n\t// your plugin's QueueingHint is only executed for the specific sub-Update event.\n\t// It's better to narrow down the scope of the event by using them instead of just using Update event\n\t// for better performance in requeueing.\n\tUpdateNodeAllocatable\n\tUpdateNodeLabel\n\t// UpdateNodeTaint is an update for node's taints or node.Spec.Unschedulable.\n\tUpdateNodeTaint\n\tUpdateNodeCondition\n\tUpdateNodeAnnotation\n\t// UpdateNodeDeclaredFeature is an update for node's declared 
features.\n\tUpdateNodeDeclaredFeature\n\n\t// UpdatePodXYZ is only applicable for Pod events.\n\t// If you use UpdatePodXYZ,\n\t// your plugin's QueueingHint is only executed for the specific sub-Update event.\n\t// It's better to narrow down the scope of the event by using them instead of the Update event\n\t// for better performance in requeueing.\n\tUpdatePodLabel\n\t// UpdatePodScaleDown is an update for pod's scale down (i.e., any resource request is reduced).\n\tUpdatePodScaleDown\n\t// UpdatePodToleration is an addition for pod's tolerations.\n\t// (Due to API validation, we can add, but cannot modify or remove tolerations.)\n\tUpdatePodToleration\n\t// UpdatePodSchedulingGatesEliminated is an update for pod's scheduling gates, which eliminates all scheduling gates in the Pod.\n\tUpdatePodSchedulingGatesEliminated\n\t// UpdatePodGeneratedResourceClaim is an update of the list of ResourceClaims generated for the pod.\n\t// Depends on the DynamicResourceAllocation feature gate.\n\tUpdatePodGeneratedResourceClaim\n\n\tAll ActionType = 1<<iota - 1\n\n\t// Use the general Update type if you don't know or care about the specific sub-Update type to use.\n\tUpdate = UpdateNodeAllocatable | UpdateNodeLabel | UpdateNodeTaint | UpdateNodeCondition | UpdateNodeAnnotation | UpdateNodeDeclaredFeature | UpdatePodLabel | UpdatePodScaleDown | UpdatePodToleration | UpdatePodSchedulingGatesEliminated | UpdatePodGeneratedResourceClaim\n\n\t// None is a special ActionType that is only used internally.\n\tNone ActionType = 0\n)\n\nfunc (a ActionType) String() string {\n\tswitch a {\n\tcase Add:\n\t\treturn \"Add\"\n\tcase Delete:\n\t\treturn \"Delete\"\n\tcase UpdateNodeAllocatable:\n\t\treturn \"UpdateNodeAllocatable\"\n\tcase UpdateNodeLabel:\n\t\treturn \"UpdateNodeLabel\"\n\tcase UpdateNodeTaint:\n\t\treturn \"UpdateNodeTaint\"\n\tcase UpdateNodeCondition:\n\t\treturn \"UpdateNodeCondition\"\n\tcase UpdateNodeAnnotation:\n\t\treturn \"UpdateNodeAnnotation\"\n\tcase 
UpdateNodeDeclaredFeature:\n\t\treturn \"UpdateNodeDeclaredFeature\"\n\tcase UpdatePodLabel:\n\t\treturn \"UpdatePodLabel\"\n\tcase UpdatePodScaleDown:\n\t\treturn \"UpdatePodScaleDown\"\n\tcase UpdatePodToleration:\n\t\treturn \"UpdatePodToleration\"\n\tcase UpdatePodSchedulingGatesEliminated:\n\t\treturn \"UpdatePodSchedulingGatesEliminated\"\n\tcase UpdatePodGeneratedResourceClaim:\n\t\treturn \"UpdatePodGeneratedResourceClaim\"\n\tcase All:\n\t\treturn \"All\"\n\tcase Update:\n\t\treturn \"Update\"\n\t}\n\n\t// Shouldn't reach here.\n\treturn \"\"\n}\n\n// EventResource is basically short for group/version/kind, which can uniquely represent a particular API resource.\ntype EventResource string\n\n// Constants for GVKs.\n//\n// CAUTION for contributors: When you add a new EventResource, you must register a new one to allResources at k/k/pkg/scheduler/framework/types.go\n//\n// Note:\n// - UpdatePodXYZ or UpdateNodeXYZ: triggered by updating particular parts of a Pod or a Node, e.g. updatePodLabel.\n// Using specific events rather than general ones (updatePodLabel vs update) makes the requeueing process more efficient\n// and consumes less memory, as fewer events will be cached at the scheduler.\nconst (\n\t// There are a couple of notes about how the scheduler notifies the events of Pods:\n\t// - Add: add events could be triggered by either a newly created Pod or an existing Pod that is scheduled to a Node.\n\t// - Delete: delete events could be triggered by:\n\t//           - a Pod that is deleted\n\t//           - a Pod that was assumed, but gets un-assumed due to some errors in the binding cycle.\n\t//           - an existing Pod that was unscheduled but gets scheduled to a Node.\n\t//\n\t// Note that the Pod event type includes the events for the unscheduled Pod itself.\n\t// i.e., when unscheduled Pods are updated, the scheduling queue checks with Pod/Update QueueingHint(s) whether the update may make the pods schedulable,\n\t// and requeues them to 
activeQ/backoffQ when at least one QueueingHint(s) returns Queue.\n\t// Plugins **have to** implement a QueueingHint for the Pod/Update event\n\t// if the rejection from them could be resolved by updating unscheduled Pods themselves.\n\t// Example: Pods that require excessive resources may be rejected by the noderesources plugin;\n\t// if such an unscheduled pod is updated to require fewer resources,\n\t// the previous rejection from the noderesources plugin can be resolved.\n\t// This plugin would implement a QueueingHint for the Pod/Update event\n\t// that returns Queue when such resource changes are made in unscheduled Pods.\n\t//\n\t// There is one general pod resource: Pod, which contains three specific pod resources: AssignedPod, UnscheduledPod, and TargetPod.\n\t// Plugins can and are expected to register for specific pod events for better performance.\n\tPod EventResource = \"Pod\"\n\t// AssignedPod resource is associated with the cluster event that gets triggered when a scheduled pod is updated.\n\tAssignedPod EventResource = \"AssignedPod\"\n\t// UnscheduledPod resource is associated with the cluster event that gets triggered when an unscheduled pod is updated, other than the target pod.\n\tUnscheduledPod EventResource = \"UnscheduledPod\"\n\t// TargetPod resource is associated with the cluster event that gets triggered when an unscheduled pod itself is updated.\n\tTargetPod EventResource = \"TargetPod\"\n\n\tNode                  EventResource = \"Node\"\n\tPersistentVolume      EventResource = \"PersistentVolume\"\n\tPersistentVolumeClaim EventResource = \"PersistentVolumeClaim\"\n\tCSINode               EventResource = \"storage.k8s.io/CSINode\"\n\tCSIDriver             EventResource = \"storage.k8s.io/CSIDriver\"\n\tVolumeAttachment      EventResource = \"storage.k8s.io/VolumeAttachment\"\n\tCSIStorageCapacity    EventResource = \"storage.k8s.io/CSIStorageCapacity\"\n\tStorageClass          EventResource = \"storage.k8s.io/StorageClass\"\n\tResourceClaim         
EventResource = \"resource.k8s.io/ResourceClaim\"\n\tResourceSlice         EventResource = \"resource.k8s.io/ResourceSlice\"\n\tDeviceClass           EventResource = \"resource.k8s.io/DeviceClass\"\n\tPodGroup              EventResource = \"scheduling.k8s.io/PodGroup\"\n\n\t// WildCard is a special EventResource to match all resources.\n\t// e.g., if you register `{Resource: \"*\", ActionType: All}` in EventsToRegister,\n\t// all coming clusterEvents will be admitted. Be careful when registering it: it will\n\t// increase the computing pressure in requeueing unless you really need it.\n\t//\n\t// Meanwhile, if the coming clusterEvent is a wildcard one, all pods\n\t// will be moved from the unschedulablePod pool to activeQ/backoffQ forcibly.\n\tWildCard EventResource = \"*\"\n)\n\ntype ClusterEventWithHint struct {\n\tEvent ClusterEvent\n\t// QueueingHintFn is executed for the Pod rejected by this plugin when the above Event happens,\n\t// and filters out events to reduce useless retries of the Pod's scheduling.\n\t// It's an optional field. 
If not set,\n\t// the scheduling of Pods will always be retried with backoff when this Event happens.\n\t// (the same as Queue)\n\tQueueingHintFn QueueingHintFn\n}\n\n// QueueingHintFn returns a hint that signals whether the event can make a Pod,\n// which was rejected by this plugin in a past scheduling cycle, schedulable or not.\n// It's called before a Pod gets moved from unschedulableQ to backoffQ or activeQ.\n// If it returns an error, the caller will treat the returned QueueingHint as `Queue` regardless of what is returned here, so that\n// we can prevent the Pod from being stuck in the unschedulable pod pool.\n//\n// - `pod`: the Pod to be enqueued, which was rejected by this plugin in the past.\n// - `oldObj` `newObj`: the object involved in that event.\n//   - For example, if the given event is \"Node deleted\", the `oldObj` will be that deleted Node.\n//   - `oldObj` is nil if the event is an add event.\n//   - `newObj` is nil if the event is a delete event.\ntype QueueingHintFn func(logger klog.Logger, pod *v1.Pod, oldObj, newObj interface{}) (QueueingHint, error)\n\ntype QueueingHint int\n\nconst (\n\t// QueueSkip implies that the cluster event has no impact on\n\t// scheduling of the pod.\n\tQueueSkip QueueingHint = iota\n\n\t// Queue implies that the Pod may be schedulable by the event.\n\tQueue\n)\n\nfunc (s QueueingHint) String() string {\n\tswitch s {\n\tcase QueueSkip:\n\t\treturn \"QueueSkip\"\n\tcase Queue:\n\t\treturn \"Queue\"\n\t}\n\treturn \"\"\n}\n\n// ClusterEvent abstracts how a system resource's state gets changed.\n// Resource represents the standard API resources such as Pod, Node, etc.\n// ActionType denotes the specific change such as Add, Update or Delete.\ntype ClusterEvent struct {\n\tResource   EventResource\n\tActionType ActionType\n\n\t// CustomLabel describes this cluster event.\n\t// It's an optional field to control Label(), which is used in logging and metrics.\n\t// Normally, it's not necessary to set this field; only used for special 
events like UnschedulableTimeout.\n\tCustomLabel string\n}\n\n// Label is used for logging and metrics.\nfunc (ce ClusterEvent) Label() string {\n\tif ce.CustomLabel != \"\" {\n\t\treturn ce.CustomLabel\n\t}\n\n\treturn fmt.Sprintf(\"%v%v\", ce.Resource, ce.ActionType)\n}\n\n// NodeInfo is node level aggregated information.\ntype NodeInfo interface {\n\t// Node returns overall information about this node.\n\tNode() *v1.Node\n\t// GetPods returns Pods running on the node.\n\tGetPods() []PodInfo\n\t// GetPodsWithAffinity returns the subset of pods with affinity.\n\tGetPodsWithAffinity() []PodInfo\n\t// GetPodsWithRequiredAntiAffinity returns the subset of pods with required anti-affinity.\n\tGetPodsWithRequiredAntiAffinity() []PodInfo\n\t// GetUsedPorts returns the ports allocated on the node.\n\tGetUsedPorts() HostPortInfo\n\t// GetRequested returns total requested resources of all pods on this node. This includes assumed\n\t// pods, which the scheduler has sent for binding, but may not be scheduled yet.\n\tGetRequested() Resource\n\t// GetNonZeroRequested returns total requested resources of all pods on this node with a minimum value\n\t// applied to each container's CPU and memory requests. This does not reflect\n\t// the actual resource requests for this node, but is used to avoid scheduling\n\t// many zero-request pods onto one node.\n\tGetNonZeroRequested() Resource\n\t// We store allocatedResources (which is Node.Status.Allocatable.*) explicitly\n\t// as int64, to avoid conversions and accessing the map.\n\tGetAllocatable() Resource\n\t// GetImageStates returns the entry of an image if and only if this image is on the node. 
The entry can be used for\n\t// checking an image's existence and advanced usage (e.g., image locality scheduling policy) based on the image\n\t// state information.\n\tGetImageStates() map[string]*ImageStateSummary\n\t// GetPVCRefCounts returns a mapping of PVC names to the number of pods on the node using them.\n\t// Keys are in the format \"namespace/name\".\n\tGetPVCRefCounts() map[string]int\n\t// Whenever NodeInfo changes, generation is bumped.\n\t// This is used to avoid cloning it if the object didn't change.\n\tGetGeneration() int64\n\t// GetNodeDeclaredFeatures returns the declared feature set of the node.\n\tGetNodeDeclaredFeatures() ndf.FeatureSet\n\t// Snapshot returns a copy of this node, except that ImageStates is copied without the Nodes field.\n\tSnapshot() NodeInfo\n\t// String returns a human-readable representation of this NodeInfo.\n\tString() string\n\t// GetNodeAllocatableDRAClaimState returns the node allocatable DRA claim allocation states on this node.\n\tGetNodeAllocatableDRAClaimState() map[types.NamespacedName]*NodeAllocatableDRAClaimState\n\n\t// AddPodInfo adds pod information to this NodeInfo.\n\t// Consider using this instead of AddPod if a PodInfo is already computed.\n\tAddPodInfo(podInfo PodInfo)\n\t// RemovePod subtracts pod information from this NodeInfo.\n\tRemovePod(logger klog.Logger, pod *v1.Pod) error\n\t// SetNode sets the overall node information.\n\tSetNode(node *v1.Node)\n}\n\n// QueuedPodInfo is a Pod wrapper with additional information related to\n// the pod's status in the scheduling queue, such as the timestamp when\n// it's added to the queue.\ntype QueuedPodInfo interface {\n\t// GetPodInfo returns the PodInfo object wrapped by this QueuedPodInfo instance.\n\tGetPodInfo() PodInfo\n\t// GetTimestamp returns the time the pod was added to the scheduling queue.\n\tGetTimestamp() time.Time\n\t// GetAttempts returns the number of schedule attempts made before the pod was successfully scheduled.\n\t// It's used to record the # 
attempts metric.\n\tGetAttempts() int\n\t// GetBackoffExpiration returns the time when the Pod will complete its backoff.\n\t// If the SchedulerPopFromBackoffQ feature is enabled, the value is aligned to the backoff ordering window.\n\t// Then, two Pods with the same BackoffExpiration (time bucket) are ordered by priority and eventually the timestamp,\n\t// to make sure popping from the backoffQ considers priority of pods that are close to the expiration time.\n\tGetBackoffExpiration() time.Time\n\t// GetUnschedulableCount returns the total number of scheduling attempts in which this Pod was found unschedulable.\n\t// Basically it equals Attempts, but when the Pod fails with the Error status (e.g., a network error),\n\t// this count won't be incremented.\n\t// It's used to calculate the backoff time this Pod must wait before retrying.\n\tGetUnschedulableCount() int\n\t// GetConsecutiveErrorsCount returns the number of consecutive Error statuses that this Pod has received.\n\t// This count is reset when the Pod gets a status other than Error.\n\t//\n\t// If the error status is returned (e.g., kube-apiserver is unstable), we don't want to immediately retry the Pod and hence need a backoff retry mechanism\n\t// because that might put more burden on the kube-apiserver.\n\t// But, we don't want to calculate the backoff time in the same way as for the normal unschedulable reason\n\t// since the purpose is different; the backoff for an unschedulable status etc. is a penalty for wasting scheduling cycles,\n\t// whereas the backoff for the error status is for the protection of the kube-apiserver.\n\t// That's why we need to distinguish ConsecutiveErrorsCount for the error status and UnschedulableCount for the unschedulable status.\n\t// See https://github.com/kubernetes/kubernetes/issues/128744 for the discussion.\n\tGetConsecutiveErrorsCount() int\n\t// GetInitialAttemptTimestamp returns the time when the pod is added to the queue for the first time. 
The pod may be added\n\t// back to the queue multiple times before it's successfully scheduled.\n\t// It shouldn't be updated once initialized. It's used to record the e2e scheduling\n\t// latency for a pod.\n\tGetInitialAttemptTimestamp() *time.Time\n\t// GetUnschedulablePlugins returns the names of the plugins that the Pod failed with Unschedulable or UnschedulableAndUnresolvable status\n\t// at specific extension points: PreFilter, Filter, Reserve, or Permit (WaitOnPermit).\n\t// If Pods are rejected at other extension points,\n\t// they're assumed to be unexpected errors (e.g., temporary network issue, plugin implementation issue, etc)\n\t// and retried soon after a backoff period.\n\t// That is because such failures could be solved regardless of incoming cluster events (registered in EventsToRegister).\n\tGetUnschedulablePlugins() sets.Set[string]\n\t// GetPendingPlugins returns the names of the plugins that the Pod failed with Pending status.\n\tGetPendingPlugins() sets.Set[string]\n\t// GetGatingPlugin returns the name of the plugin that gated the Pod at PreEnqueue.\n\tGetGatingPlugin() string\n\t// GetGatingPluginEvents returns the events registered by the plugin that gated the Pod at PreEnqueue.\n\t// It's cached to avoid re-computing which event(s) might ungate the Pod.\n\tGetGatingPluginEvents() []ClusterEvent\n}\n\n// PodInfo is a wrapper around a Pod with additional pre-computed information to\n// accelerate processing. 
This information is typically immutable (e.g., pre-processed\n// inter-pod affinity selectors).\ntype PodInfo interface {\n\t// GetPod returns the wrapped Pod.\n\tGetPod() *v1.Pod\n\t// GetRequiredAffinityTerms returns the precomputed affinity terms.\n\tGetRequiredAffinityTerms() []AffinityTerm\n\t// GetRequiredAntiAffinityTerms returns the precomputed anti-affinity terms.\n\tGetRequiredAntiAffinityTerms() []AffinityTerm\n\t// GetPreferredAffinityTerms returns the precomputed affinity terms with weights.\n\tGetPreferredAffinityTerms() []WeightedAffinityTerm\n\t// GetPreferredAntiAffinityTerms returns the precomputed anti-affinity terms with weights.\n\tGetPreferredAntiAffinityTerms() []WeightedAffinityTerm\n\t// CalculateResource is only intended to be used by NodeInfo.\n\tCalculateResource() PodResource\n}\n\n// PodResource contains the result of CalculateResource and is intended to be used only internally.\ntype PodResource struct {\n\tResource Resource\n\tNon0CPU  int64\n\tNon0Mem  int64\n}\n\n// AffinityTerm is a processed version of v1.PodAffinityTerm.\ntype AffinityTerm struct {\n\tNamespaces        sets.Set[string]\n\tSelector          labels.Selector\n\tTopologyKey       string\n\tNamespaceSelector labels.Selector\n}\n\n// Matches returns true if the pod matches the label selector and namespaces or namespace selector.\nfunc (at *AffinityTerm) Matches(pod *v1.Pod, nsLabels labels.Set) bool {\n\tif at.Namespaces.Has(pod.Namespace) || at.NamespaceSelector.Matches(nsLabels) {\n\t\treturn at.Selector.Matches(labels.Set(pod.Labels))\n\t}\n\treturn false\n}\n\n// WeightedAffinityTerm is a \"processed\" representation of v1.WeightedAffinityTerm.\ntype WeightedAffinityTerm struct {\n\tAffinityTerm\n\tWeight int32\n}\n\n// GetAffinityTerms receives a Pod and affinity terms and returns the namespaces and\n// selectors of the terms.\nfunc GetAffinityTerms(pod *v1.Pod, v1Terms []v1.PodAffinityTerm) ([]AffinityTerm, error) {\n\tif v1Terms == nil 
{\n\t\treturn nil, nil\n\t}\n\n\tvar terms []AffinityTerm\n\tfor i := range v1Terms {\n\t\tt, err := newAffinityTerm(pod, &v1Terms[i])\n\t\tif err != nil {\n\t\t\t// We get here if the label selector failed to process\n\t\t\treturn nil, err\n\t\t}\n\t\tterms = append(terms, *t)\n\t}\n\treturn terms, nil\n}\n\nfunc newAffinityTerm(pod *v1.Pod, term *v1.PodAffinityTerm) (*AffinityTerm, error) {\n\tselector, err := metav1.LabelSelectorAsSelector(term.LabelSelector)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tnamespaces := getNamespacesFromPodAffinityTerm(pod, term)\n\tnsSelector, err := metav1.LabelSelectorAsSelector(term.NamespaceSelector)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &AffinityTerm{Namespaces: namespaces, Selector: selector, TopologyKey: term.TopologyKey, NamespaceSelector: nsSelector}, nil\n}\n\n// getNamespacesFromPodAffinityTerm returns a set of names according to the namespaces indicated in podAffinityTerm.\n// If namespaces is empty it considers the given pod's namespace.\nfunc getNamespacesFromPodAffinityTerm(pod *v1.Pod, podAffinityTerm *v1.PodAffinityTerm) sets.Set[string] {\n\tnames := sets.Set[string]{}\n\tif len(podAffinityTerm.Namespaces) == 0 && podAffinityTerm.NamespaceSelector == nil {\n\t\tnames.Insert(pod.Namespace)\n\t} else {\n\t\tnames.Insert(podAffinityTerm.Namespaces...)\n\t}\n\treturn names\n}\n\n// GetPodAffinityTerms returns the list of PodAffinityTerms specified in the PodAffinity.RequiredDuringSchedulingIgnoredDuringExecution field.\nfunc GetPodAffinityTerms(affinity *v1.Affinity) (terms []v1.PodAffinityTerm) {\n\tif affinity != nil && affinity.PodAffinity != nil {\n\t\tif len(affinity.PodAffinity.RequiredDuringSchedulingIgnoredDuringExecution) != 0 {\n\t\t\tterms = affinity.PodAffinity.RequiredDuringSchedulingIgnoredDuringExecution\n\t\t}\n\t\t// TODO: Uncomment this block when implement RequiredDuringSchedulingRequiredDuringExecution.\n\t\t// if 
len(affinity.PodAffinity.RequiredDuringSchedulingRequiredDuringExecution) != 0 {\n\t\t//\tterms = append(terms, affinity.PodAffinity.RequiredDuringSchedulingRequiredDuringExecution...)\n\t\t// }\n\t}\n\treturn terms\n}\n\n// GetWeightedAffinityTerms returns affinity terms with weights, namespaces and selectors of the terms.\nfunc GetWeightedAffinityTerms(pod *v1.Pod, v1Terms []v1.WeightedPodAffinityTerm) ([]WeightedAffinityTerm, error) {\n\tif v1Terms == nil {\n\t\treturn nil, nil\n\t}\n\n\tvar terms []WeightedAffinityTerm\n\tfor i := range v1Terms {\n\t\tt, err := newAffinityTerm(pod, &v1Terms[i].PodAffinityTerm)\n\t\tif err != nil {\n\t\t\t// We get here if the label selector failed to process\n\t\t\treturn nil, err\n\t\t}\n\t\tterms = append(terms, WeightedAffinityTerm{AffinityTerm: *t, Weight: v1Terms[i].Weight})\n\t}\n\treturn terms, nil\n}\n\n// GetPodAntiAffinityTerms returns the list of PodAffinityTerms specified in the PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution field.\nfunc GetPodAntiAffinityTerms(affinity *v1.Affinity) (terms []v1.PodAffinityTerm) {\n\tif affinity != nil && affinity.PodAntiAffinity != nil {\n\t\tif len(affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution) != 0 {\n\t\t\tterms = affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution\n\t\t}\n\t\t// TODO: Uncomment this block when implementing RequiredDuringSchedulingRequiredDuringExecution.\n\t\t// if len(affinity.PodAntiAffinity.RequiredDuringSchedulingRequiredDuringExecution) != 0 {\n\t\t//\tterms = append(terms, affinity.PodAntiAffinity.RequiredDuringSchedulingRequiredDuringExecution...)\n\t\t// }\n\t}\n\treturn terms\n}\n\n// Resource is a collection of compute resources.\ntype Resource interface {\n\tGetMilliCPU() int64\n\tGetMemory() int64\n\tGetEphemeralStorage() int64\n\t// We return AllowedPodNumber (which is Node.Status.Allocatable.Pods().Value())\n\t// explicitly as int, to avoid conversions and improve 
performance.\n\tGetAllowedPodNumber() int\n\t// GetScalarResources returns a map from resource names to their scalar values.\n\tGetScalarResources() map[v1.ResourceName]int64\n\t// SetMaxResource compares with ResourceList and takes max value for each Resource.\n\tSetMaxResource(rl v1.ResourceList)\n}\n\n// ImageStateSummary provides summarized information about the state of an image.\ntype ImageStateSummary struct {\n\t// Size of the image\n\tSize int64\n\t// Used to track how many nodes have this image; it is computed from the Nodes field below\n\t// during the execution of Snapshot.\n\tNumNodes int\n\t// A set of node names for nodes having this image present. This field is used for\n\t// keeping track of the nodes during update/add/remove events.\n\tNodes sets.Set[string]\n}\n\n// Snapshot returns a copy of the ImageStateSummary without the Nodes field.\nfunc (iss *ImageStateSummary) Snapshot() *ImageStateSummary {\n\treturn &ImageStateSummary{\n\t\tSize:     iss.Size,\n\t\tNumNodes: iss.Nodes.Len(),\n\t}\n}\n\n// DefaultBindAllHostIP defines the default IP address used to bind to all hosts.\nconst DefaultBindAllHostIP = \"0.0.0.0\"\n\n// ProtocolPort represents a protocol port pair, e.g. 
tcp:80.\ntype ProtocolPort struct {\n\tProtocol string\n\tPort     int32\n}\n\n// NewProtocolPort creates a ProtocolPort instance.\nfunc NewProtocolPort(protocol string, port int32) *ProtocolPort {\n\tpp := &ProtocolPort{\n\t\tProtocol: protocol,\n\t\tPort:     port,\n\t}\n\n\tif len(pp.Protocol) == 0 {\n\t\tpp.Protocol = string(v1.ProtocolTCP)\n\t}\n\n\treturn pp\n}\n\n// HostPortInfo stores a mapping from ip to a set of ProtocolPorts\ntype HostPortInfo map[string]map[ProtocolPort]struct{}\n\n// Add adds (ip, protocol, port) to HostPortInfo\nfunc (h HostPortInfo) Add(ip, protocol string, port int32) {\n\tif port <= 0 {\n\t\treturn\n\t}\n\n\th.sanitize(&ip, &protocol)\n\n\tpp := NewProtocolPort(protocol, port)\n\tif _, ok := h[ip]; !ok {\n\t\th[ip] = map[ProtocolPort]struct{}{\n\t\t\t*pp: {},\n\t\t}\n\t\treturn\n\t}\n\n\th[ip][*pp] = struct{}{}\n}\n\n// Remove removes (ip, protocol, port) from HostPortInfo\nfunc (h HostPortInfo) Remove(ip, protocol string, port int32) {\n\tif port <= 0 {\n\t\treturn\n\t}\n\n\th.sanitize(&ip, &protocol)\n\n\tpp := NewProtocolPort(protocol, port)\n\tif m, ok := h[ip]; ok {\n\t\tdelete(m, *pp)\n\t\tif len(h[ip]) == 0 {\n\t\t\tdelete(h, ip)\n\t\t}\n\t}\n}\n\n// Len returns the total number of (ip, protocol, port) tuples in HostPortInfo\nfunc (h HostPortInfo) Len() int {\n\tlength := 0\n\tfor _, m := range h {\n\t\tlength += len(m)\n\t}\n\treturn length\n}\n\n// CheckConflict checks if the input (ip, protocol, port) conflicts with the existing\n// ones in HostPortInfo.\nfunc (h HostPortInfo) CheckConflict(ip, protocol string, port int32) bool {\n\tif port <= 0 {\n\t\treturn false\n\t}\n\n\th.sanitize(&ip, &protocol)\n\n\tpp := NewProtocolPort(protocol, port)\n\n\t// If ip is 0.0.0.0, check every IP's (protocol, port) pairs\n\tif ip == DefaultBindAllHostIP {\n\t\tfor _, m := range h {\n\t\t\tif _, ok := m[*pp]; ok {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t\treturn false\n\t}\n\n\t// If ip isn't 0.0.0.0, only check IP and 0.0.0.0's (protocol, 
port) pairs\n\tfor _, key := range []string{DefaultBindAllHostIP, ip} {\n\t\tif m, ok := h[key]; ok {\n\t\t\tif _, ok2 := m[*pp]; ok2 {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\n\treturn false\n}\n\n// sanitize the parameters\nfunc (h HostPortInfo) sanitize(ip, protocol *string) {\n\tif len(*ip) == 0 {\n\t\t*ip = DefaultBindAllHostIP\n\t}\n\tif len(*protocol) == 0 {\n\t\t*protocol = string(v1.ProtocolTCP)\n\t}\n}\n\n// PodGroupInfo is a wrapper around the PodGroup API object together with a list of unscheduled pods that belong to the pod group.\n// Typically used as an input to pod group scheduling cycle plugins.\ntype PodGroupInfo interface {\n\t// GetUnscheduledPods returns pods that are currently being considered for scheduling.\n\t// The order of the pods is deterministic and based on signature, priority and timestamp.\n\t// This structure only contains the pods considered for scheduling in the pod group scheduling cycle.\n\tGetUnscheduledPods() []*v1.Pod\n\n\t// GetName returns the PodGroup name that is used to identify the pod group.\n\tGetName() string\n\t// GetNamespace returns the namespace the pod group belongs to.\n\tGetNamespace() string\n}\n\n// Placement determines the resources to be considered when scheduling a pod group.\n// The pod group scheduling cycle can check multiple placements and select the one that results\n// in the best pod assignments.\ntype Placement struct {\n\t// Name uniquely identifies the placement.\n\t// This is used for diagnostics and debuggability.\n\t// The choice of the name is up to the PlacementGeneratePlugin.\n\tName string\n\t// Nodes specifies the nodes that are valid for this placement.\n\t// The scheduler will try to schedule the pod group using only those nodes.\n\tNodes []NodeInfo\n}\n\n// ProposedAssignment associates a pod of a pod group with a proposed node assignment, determined in the pod group scheduling cycle.\ntype ProposedAssignment interface {\n\t// GetPod returns the pod that has the proposed node 
assignment.\n\tGetPod() *v1.Pod\n\t// GetNodeName returns the name of the proposed node for the pod.\n\tGetNodeName() string\n}\n\n// PodGroupAssignments holds the temporary assignments of pods in a pod group to nodes for a placement.\n// Can be used in the pod group scheduling cycle to determine the best placement for a pod group.\ntype PodGroupAssignments struct {\n\t*Placement\n\t// ProposedAssignments associates pods with proposed nodes that were determined for a given placement\n\t// during the pod group scheduling cycle.\n\t// The pods are guaranteed to also be present in the PodGroupInfo.\n\tProposedAssignments []ProposedAssignment\n}\n\n// NodeAllocatableDRAClaimState holds information about a node allocatable resource DRA claim's allocation on a node.\ntype NodeAllocatableDRAClaimState struct {\n\t// ConsumerPods is a set of UIDs of pods that are consuming the DRA claim on this node.\n\tConsumerPods sets.Set[types.UID]\n}\n\n// Snapshot returns a copy of NodeAllocatableDRAClaimState with ConsumerPods cloned.\nfunc (s *NodeAllocatableDRAClaimState) Snapshot() *NodeAllocatableDRAClaimState {\n\tif s == nil {\n\t\treturn nil\n\t}\n\treturn &NodeAllocatableDRAClaimState{\n\t\tConsumerPods: s.ConsumerPods.Clone(),\n\t}\n}\n"
  },
  {
    "path": "framework/types_test.go",
    "content": "/*\nCopyright 2025 The Kubernetes Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage framework\n\nimport (\n\t\"testing\"\n\n\t\"github.com/google/go-cmp/cmp\"\n\n\tv1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/util/sets\"\n)\n\ntype hostPortInfoParam struct {\n\tprotocol, ip string\n\tport         int32\n}\n\nfunc TestHostPortInfo_AddRemove(t *testing.T) {\n\ttests := []struct {\n\t\tdesc    string\n\t\tadded   []hostPortInfoParam\n\t\tremoved []hostPortInfoParam\n\t\tlength  int\n\t}{\n\t\t{\n\t\t\tdesc: \"normal add case\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 79},\n\t\t\t\t{\"UDP\", \"127.0.0.1\", 80},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 81},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 82},\n\t\t\t\t// this might not make sense in a real case, but the struct doesn't forbid it.\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 79},\n\t\t\t\t{\"UDP\", \"0.0.0.0\", 80},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 81},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 82},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 0},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", -1},\n\t\t\t},\n\t\t\tlength: 8,\n\t\t},\n\t\t{\n\t\t\tdesc: \"empty ip and protocol add should work\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"\", \"127.0.0.1\", 79},\n\t\t\t\t{\"UDP\", \"127.0.0.1\", 80},\n\t\t\t\t{\"\", \"127.0.0.1\", 81},\n\t\t\t\t{\"\", \"127.0.0.1\", 82},\n\t\t\t\t{\"\", \"\", 79},\n\t\t\t\t{\"UDP\", \"\", 80},\n\t\t\t\t{\"\", \"\", 
81},\n\t\t\t\t{\"\", \"\", 82},\n\t\t\t\t{\"\", \"\", 0},\n\t\t\t\t{\"\", \"\", -1},\n\t\t\t},\n\t\t\tlength: 8,\n\t\t},\n\t\t{\n\t\t\tdesc: \"normal remove case\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 79},\n\t\t\t\t{\"UDP\", \"127.0.0.1\", 80},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 81},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 82},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 79},\n\t\t\t\t{\"UDP\", \"0.0.0.0\", 80},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 81},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 82},\n\t\t\t},\n\t\t\tremoved: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 79},\n\t\t\t\t{\"UDP\", \"127.0.0.1\", 80},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 81},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 82},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 79},\n\t\t\t\t{\"UDP\", \"0.0.0.0\", 80},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 81},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 82},\n\t\t\t},\n\t\t\tlength: 0,\n\t\t},\n\t\t{\n\t\t\tdesc: \"empty ip and protocol remove should work\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 79},\n\t\t\t\t{\"UDP\", \"127.0.0.1\", 80},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 81},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 82},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 79},\n\t\t\t\t{\"UDP\", \"0.0.0.0\", 80},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 81},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 82},\n\t\t\t},\n\t\t\tremoved: []hostPortInfoParam{\n\t\t\t\t{\"\", \"127.0.0.1\", 79},\n\t\t\t\t{\"\", \"127.0.0.1\", 81},\n\t\t\t\t{\"\", \"127.0.0.1\", 82},\n\t\t\t\t{\"UDP\", \"127.0.0.1\", 80},\n\t\t\t\t{\"\", \"\", 79},\n\t\t\t\t{\"\", \"\", 81},\n\t\t\t\t{\"\", \"\", 82},\n\t\t\t\t{\"UDP\", \"\", 80},\n\t\t\t},\n\t\t\tlength: 0,\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.desc, func(t *testing.T) {\n\t\t\thp := make(HostPortInfo)\n\t\t\tfor _, param := range test.added {\n\t\t\t\thp.Add(param.ip, param.protocol, param.port)\n\t\t\t}\n\t\t\tfor _, param := range test.removed {\n\t\t\t\thp.Remove(param.ip, param.protocol, param.port)\n\t\t\t}\n\t\t\tif hp.Len() != test.length 
{\n\t\t\t\tt.Errorf(\"%v failed: expect length %d; got %d\", test.desc, test.length, hp.Len())\n\t\t\t\tt.Error(hp)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestHostPortInfo_Check(t *testing.T) {\n\ttests := []struct {\n\t\tdesc   string\n\t\tadded  []hostPortInfoParam\n\t\tcheck  hostPortInfoParam\n\t\texpect bool\n\t}{\n\t\t{\n\t\t\tdesc: \"empty check should check 0.0.0.0 and TCP\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 80},\n\t\t\t},\n\t\t\tcheck:  hostPortInfoParam{\"\", \"\", 81},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tdesc: \"empty check should check 0.0.0.0 and TCP (conflicted)\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 80},\n\t\t\t},\n\t\t\tcheck:  hostPortInfoParam{\"\", \"\", 80},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tdesc: \"empty port check should pass\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 80},\n\t\t\t},\n\t\t\tcheck:  hostPortInfoParam{\"\", \"\", 0},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tdesc: \"0.0.0.0 should check all registered IPs\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 80},\n\t\t\t},\n\t\t\tcheck:  hostPortInfoParam{\"TCP\", \"0.0.0.0\", 80},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tdesc: \"0.0.0.0 with different protocol should be allowed\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"UDP\", \"127.0.0.1\", 80},\n\t\t\t},\n\t\t\tcheck:  hostPortInfoParam{\"TCP\", \"0.0.0.0\", 80},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tdesc: \"0.0.0.0 with different port should be allowed\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 79},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 81},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 82},\n\t\t\t},\n\t\t\tcheck:  hostPortInfoParam{\"TCP\", \"0.0.0.0\", 80},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tdesc: \"normal ip should check all registered 0.0.0.0\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 80},\n\t\t\t},\n\t\t\tcheck:  
hostPortInfoParam{\"TCP\", \"127.0.0.1\", 80},\n\t\t\texpect: true,\n\t\t},\n\t\t{\n\t\t\tdesc: \"normal ip with different port/protocol should be allowed (0.0.0.0)\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 79},\n\t\t\t\t{\"UDP\", \"0.0.0.0\", 80},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 81},\n\t\t\t\t{\"TCP\", \"0.0.0.0\", 82},\n\t\t\t},\n\t\t\tcheck:  hostPortInfoParam{\"TCP\", \"127.0.0.1\", 80},\n\t\t\texpect: false,\n\t\t},\n\t\t{\n\t\t\tdesc: \"normal ip with different port/protocol should be allowed\",\n\t\t\tadded: []hostPortInfoParam{\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 79},\n\t\t\t\t{\"UDP\", \"127.0.0.1\", 80},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 81},\n\t\t\t\t{\"TCP\", \"127.0.0.1\", 82},\n\t\t\t},\n\t\t\tcheck:  hostPortInfoParam{\"TCP\", \"127.0.0.1\", 80},\n\t\t\texpect: false,\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.desc, func(t *testing.T) {\n\t\t\thp := make(HostPortInfo)\n\t\t\tfor _, param := range test.added {\n\t\t\t\thp.Add(param.ip, param.protocol, param.port)\n\t\t\t}\n\t\t\tif hp.CheckConflict(test.check.ip, test.check.protocol, test.check.port) != test.expect {\n\t\t\t\tt.Errorf(\"expected %t; got %t\", test.expect, !test.expect)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestGetNamespacesFromPodAffinityTerm(t *testing.T) {\n\ttests := []struct {\n\t\tname string\n\t\tterm *v1.PodAffinityTerm\n\t\twant sets.Set[string]\n\t}{\n\t\t{\n\t\t\tname: \"podAffinityTerm_namespace_empty\",\n\t\t\tterm: &v1.PodAffinityTerm{},\n\t\t\twant: sets.Set[string]{metav1.NamespaceDefault: sets.Empty{}},\n\t\t},\n\t\t{\n\t\t\tname: \"podAffinityTerm_namespace_not_empty\",\n\t\t\tterm: &v1.PodAffinityTerm{\n\t\t\t\tNamespaces: []string{metav1.NamespacePublic, metav1.NamespaceSystem},\n\t\t\t},\n\t\t\twant: sets.New(metav1.NamespacePublic, metav1.NamespaceSystem),\n\t\t},\n\t\t{\n\t\t\tname: \"podAffinityTerm_namespace_selector_not_nil\",\n\t\t\tterm: &v1.PodAffinityTerm{\n\t\t\t\tNamespaceSelector: 
&metav1.LabelSelector{},\n\t\t\t},\n\t\t\twant: sets.Set[string]{},\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tgot := getNamespacesFromPodAffinityTerm(&v1.Pod{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\tName:      \"topologies_pod\",\n\t\t\t\t\tNamespace: metav1.NamespaceDefault,\n\t\t\t\t},\n\t\t\t}, test.term)\n\t\t\tif diff := cmp.Diff(test.want, got); diff != \"\" {\n\t\t\t\tt.Errorf(\"Unexpected namespaces (-want, +got):\\n%s\", diff)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "go.mod",
    "content": "// This is a generated file. Do not edit directly.\n\nmodule k8s.io/kube-scheduler\n\ngo 1.26.0\n\ngodebug default=go1.26\n\nrequire (\n\tgithub.com/google/go-cmp v0.7.0\n\tk8s.io/api v0.0.0-20260506204515-74f8152a4388\n\tk8s.io/apimachinery v0.0.0-20260506204125-679298e8cb0f\n\tk8s.io/client-go v0.0.0-20260506205028-24705f39ff1a\n\tk8s.io/component-base v0.0.0-20260506210233-5f255b73349b\n\tk8s.io/component-helpers v0.0.0-20260506210426-85ee2c4ec30d\n\tk8s.io/dynamic-resource-allocation v0.0.0-20260506220151-a1ba28d29f24\n\tk8s.io/klog/v2 v2.140.0\n\tsigs.k8s.io/yaml v1.6.0\n)\n\nrequire (\n\tcel.dev/expr v0.25.1 // indirect\n\tgithub.com/antlr4-go/antlr/v4 v4.13.1 // indirect\n\tgithub.com/beorn7/perks v1.0.1 // indirect\n\tgithub.com/blang/semver/v4 v4.0.0 // indirect\n\tgithub.com/cespare/xxhash/v2 v2.3.0 // indirect\n\tgithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect\n\tgithub.com/emicklei/go-restful/v3 v3.13.0 // indirect\n\tgithub.com/fxamacker/cbor/v2 v2.9.1 // indirect\n\tgithub.com/go-logr/logr v1.4.3 // indirect\n\tgithub.com/go-openapi/jsonpointer v0.22.4 // indirect\n\tgithub.com/go-openapi/jsonreference v0.21.4 // indirect\n\tgithub.com/go-openapi/swag v0.25.4 // indirect\n\tgithub.com/go-openapi/swag/cmdutils v0.25.4 // indirect\n\tgithub.com/go-openapi/swag/conv v0.25.4 // indirect\n\tgithub.com/go-openapi/swag/fileutils v0.25.4 // indirect\n\tgithub.com/go-openapi/swag/jsonname v0.25.4 // indirect\n\tgithub.com/go-openapi/swag/jsonutils v0.25.4 // indirect\n\tgithub.com/go-openapi/swag/loading v0.25.4 // indirect\n\tgithub.com/go-openapi/swag/mangling v0.25.4 // indirect\n\tgithub.com/go-openapi/swag/netutils v0.25.4 // indirect\n\tgithub.com/go-openapi/swag/stringutils v0.25.4 // indirect\n\tgithub.com/go-openapi/swag/typeutils v0.25.4 // indirect\n\tgithub.com/go-openapi/swag/yamlutils v0.25.4 // indirect\n\tgithub.com/google/cel-go v0.27.0 // indirect\n\tgithub.com/google/gnostic-models v0.7.0 // 
indirect\n\tgithub.com/google/uuid v1.6.0 // indirect\n\tgithub.com/inconshreveable/mousetrap v1.1.0 // indirect\n\tgithub.com/json-iterator/go v1.1.12 // indirect\n\tgithub.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect\n\tgithub.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect\n\tgithub.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect\n\tgithub.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect\n\tgithub.com/prometheus/client_golang v1.23.2 // indirect\n\tgithub.com/prometheus/client_model v0.6.2 // indirect\n\tgithub.com/prometheus/common v0.67.5 // indirect\n\tgithub.com/prometheus/procfs v0.19.2 // indirect\n\tgithub.com/spf13/cobra v1.10.2 // indirect\n\tgithub.com/spf13/pflag v1.0.10 // indirect\n\tgithub.com/x448/float16 v0.8.4 // indirect\n\tgo.opentelemetry.io/otel v1.43.0 // indirect\n\tgo.opentelemetry.io/otel/trace v1.43.0 // indirect\n\tgo.yaml.in/yaml/v2 v2.4.4 // indirect\n\tgo.yaml.in/yaml/v3 v3.0.4 // indirect\n\tgolang.org/x/exp v0.0.0-20260410095643-746e56fc9e2f // indirect\n\tgolang.org/x/net v0.53.0 // indirect\n\tgolang.org/x/oauth2 v0.36.0 // indirect\n\tgolang.org/x/sync v0.20.0 // indirect\n\tgolang.org/x/sys v0.43.0 // indirect\n\tgolang.org/x/term v0.42.0 // indirect\n\tgolang.org/x/text v0.36.0 // indirect\n\tgolang.org/x/time v0.15.0 // indirect\n\tgoogle.golang.org/genproto/googleapis/api v0.0.0-20260414002931-afd174a4e478 // indirect\n\tgoogle.golang.org/genproto/googleapis/rpc v0.0.0-20260414002931-afd174a4e478 // indirect\n\tgoogle.golang.org/protobuf v1.36.12-0.20260120151049-f2248ac996af // indirect\n\tgopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect\n\tgopkg.in/inf.v0 v0.9.1 // indirect\n\tk8s.io/apiserver v0.0.0-20260506211241-4aa69dc1d36e // indirect\n\tk8s.io/kube-openapi v0.0.0-20260502001324-b7f5293f4787 // indirect\n\tk8s.io/utils v0.0.0-20260210185600-b8788abfbbc2 // indirect\n\tsigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // 
indirect\n\tsigs.k8s.io/randfill v1.0.0 // indirect\n\tsigs.k8s.io/structured-merge-diff/v6 v6.3.2 // indirect\n)\n"
  },
  {
    "path": "go.sum",
    "content": "cel.dev/expr v0.25.1 h1:1KrZg61W6TWSxuNZ37Xy49ps13NUovb66QLprthtwi4=\ncel.dev/expr v0.25.1/go.mod h1:hrXvqGP6G6gyx8UAHSHJ5RGk//1Oj5nXQ2NI02Nrsg4=\ngithub.com/antlr4-go/antlr/v4 v4.13.1 h1:SqQKkuVZ+zWkMMNkjy5FZe5mr5WURWnlpmOuzYWrPrQ=\ngithub.com/antlr4-go/antlr/v4 v4.13.1/go.mod h1:GKmUxMtwp6ZgGwZSva4eWPC5mS6vUAmOABFgjdkM7Nw=\ngithub.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=\ngithub.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=\ngithub.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=\ngithub.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=\ngithub.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=\ngithub.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=\ngithub.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=\ngithub.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=\ngithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/emicklei/go-restful/v3 v3.13.0 h1:C4Bl2xDndpU6nJ4bc1jXd+uTmYPVUwkD6bFY/oTyCes=\ngithub.com/emicklei/go-restful/v3 v3.13.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=\ngithub.com/fxamacker/cbor/v2 v2.9.1 h1:2rWm8B193Ll4VdjsJY28jxs70IdDsHRWgQYAI80+rMQ=\ngithub.com/fxamacker/cbor/v2 v2.9.1/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=\ngithub.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=\ngithub.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=\ngithub.com/go-openapi/jsonpointer v0.22.4 
h1:dZtK82WlNpVLDW2jlA1YCiVJFVqkED1MegOUy9kR5T4=\ngithub.com/go-openapi/jsonpointer v0.22.4/go.mod h1:elX9+UgznpFhgBuaMQ7iu4lvvX1nvNsesQ3oxmYTw80=\ngithub.com/go-openapi/jsonreference v0.21.4 h1:24qaE2y9bx/q3uRK/qN+TDwbok1NhbSmGjjySRCHtC8=\ngithub.com/go-openapi/jsonreference v0.21.4/go.mod h1:rIENPTjDbLpzQmQWCj5kKj3ZlmEh+EFVbz3RTUh30/4=\ngithub.com/go-openapi/swag v0.25.4 h1:OyUPUFYDPDBMkqyxOTkqDYFnrhuhi9NR6QVUvIochMU=\ngithub.com/go-openapi/swag v0.25.4/go.mod h1:zNfJ9WZABGHCFg2RnY0S4IOkAcVTzJ6z2Bi+Q4i6qFQ=\ngithub.com/go-openapi/swag/cmdutils v0.25.4 h1:8rYhB5n6WawR192/BfUu2iVlxqVR9aRgGJP6WaBoW+4=\ngithub.com/go-openapi/swag/cmdutils v0.25.4/go.mod h1:pdae/AFo6WxLl5L0rq87eRzVPm/XRHM3MoYgRMvG4A0=\ngithub.com/go-openapi/swag/conv v0.25.4 h1:/Dd7p0LZXczgUcC/Ikm1+YqVzkEeCc9LnOWjfkpkfe4=\ngithub.com/go-openapi/swag/conv v0.25.4/go.mod h1:3LXfie/lwoAv0NHoEuY1hjoFAYkvlqI/Bn5EQDD3PPU=\ngithub.com/go-openapi/swag/fileutils v0.25.4 h1:2oI0XNW5y6UWZTC7vAxC8hmsK/tOkWXHJQH4lKjqw+Y=\ngithub.com/go-openapi/swag/fileutils v0.25.4/go.mod h1:cdOT/PKbwcysVQ9Tpr0q20lQKH7MGhOEb6EwmHOirUk=\ngithub.com/go-openapi/swag/jsonname v0.25.4 h1:bZH0+MsS03MbnwBXYhuTttMOqk+5KcQ9869Vye1bNHI=\ngithub.com/go-openapi/swag/jsonname v0.25.4/go.mod h1:GPVEk9CWVhNvWhZgrnvRA6utbAltopbKwDu8mXNUMag=\ngithub.com/go-openapi/swag/jsonutils v0.25.4 h1:VSchfbGhD4UTf4vCdR2F4TLBdLwHyUDTd1/q4i+jGZA=\ngithub.com/go-openapi/swag/jsonutils v0.25.4/go.mod h1:7OYGXpvVFPn4PpaSdPHJBtF0iGnbEaTk8AvBkoWnaAY=\ngithub.com/go-openapi/swag/jsonutils/fixtures_test v0.25.4 h1:IACsSvBhiNJwlDix7wq39SS2Fh7lUOCJRmx/4SN4sVo=\ngithub.com/go-openapi/swag/jsonutils/fixtures_test v0.25.4/go.mod h1:Mt0Ost9l3cUzVv4OEZG+WSeoHwjWLnarzMePNDAOBiM=\ngithub.com/go-openapi/swag/loading v0.25.4 h1:jN4MvLj0X6yhCDduRsxDDw1aHe+ZWoLjW+9ZQWIKn2s=\ngithub.com/go-openapi/swag/loading v0.25.4/go.mod h1:rpUM1ZiyEP9+mNLIQUdMiD7dCETXvkkC30z53i+ftTE=\ngithub.com/go-openapi/swag/mangling v0.25.4 
h1:2b9kBJk9JvPgxr36V23FxJLdwBrpijI26Bx5JH4Hp48=\ngithub.com/go-openapi/swag/mangling v0.25.4/go.mod h1:6dxwu6QyORHpIIApsdZgb6wBk/DPU15MdyYj/ikn0Hg=\ngithub.com/go-openapi/swag/netutils v0.25.4 h1:Gqe6K71bGRb3ZQLusdI8p/y1KLgV4M/k+/HzVSqT8H0=\ngithub.com/go-openapi/swag/netutils v0.25.4/go.mod h1:m2W8dtdaoX7oj9rEttLyTeEFFEBvnAx9qHd5nJEBzYg=\ngithub.com/go-openapi/swag/stringutils v0.25.4 h1:O6dU1Rd8bej4HPA3/CLPciNBBDwZj9HiEpdVsb8B5A8=\ngithub.com/go-openapi/swag/stringutils v0.25.4/go.mod h1:GTsRvhJW5xM5gkgiFe0fV3PUlFm0dr8vki6/VSRaZK0=\ngithub.com/go-openapi/swag/typeutils v0.25.4 h1:1/fbZOUN472NTc39zpa+YGHn3jzHWhv42wAJSN91wRw=\ngithub.com/go-openapi/swag/typeutils v0.25.4/go.mod h1:Ou7g//Wx8tTLS9vG0UmzfCsjZjKhpjxayRKTHXf2pTE=\ngithub.com/go-openapi/swag/yamlutils v0.25.4 h1:6jdaeSItEUb7ioS9lFoCZ65Cne1/RZtPBZ9A56h92Sw=\ngithub.com/go-openapi/swag/yamlutils v0.25.4/go.mod h1:MNzq1ulQu+yd8Kl7wPOut/YHAAU/H6hL91fF+E2RFwc=\ngithub.com/go-openapi/testify/enable/yaml/v2 v2.0.2 h1:0+Y41Pz1NkbTHz8NngxTuAXxEodtNSI1WG1c/m5Akw4=\ngithub.com/go-openapi/testify/enable/yaml/v2 v2.0.2/go.mod h1:kme83333GCtJQHXQ8UKX3IBZu6z8T5Dvy5+CW3NLUUg=\ngithub.com/go-openapi/testify/v2 v2.0.2 h1:X999g3jeLcoY8qctY/c/Z8iBHTbwLz7R2WXd6Ub6wls=\ngithub.com/go-openapi/testify/v2 v2.0.2/go.mod h1:HCPmvFFnheKK2BuwSA0TbbdxJ3I16pjwMkYkP4Ywn54=\ngithub.com/google/cel-go v0.27.0 h1:e7ih85+4qVrBuqQWTW4FKSqZYokVuc3HnhH5keboFTo=\ngithub.com/google/cel-go v0.27.0/go.mod h1:tTJ11FWqnhw5KKpnWpvW9CJC3Y9GK4EIS0WXnBbebzw=\ngithub.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo=\ngithub.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ=\ngithub.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=\ngithub.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=\ngithub.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=\ngithub.com/google/uuid v1.6.0 
h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=\ngithub.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=\ngithub.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=\ngithub.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=\ngithub.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=\ngithub.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=\ngithub.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=\ngithub.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=\ngithub.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=\ngithub.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=\ngithub.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=\ngithub.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=\ngithub.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=\ngithub.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=\ngithub.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=\ngithub.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=\ngithub.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=\ngithub.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=\ngithub.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8=\ngithub.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=\ngithub.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 
h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=\ngithub.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=\ngithub.com/onsi/gomega v1.39.1 h1:1IJLAad4zjPn2PsnhH70V4DKRFlrCzGBNrNaru+Vf28=\ngithub.com/onsi/gomega v1.39.1/go.mod h1:hL6yVALoTOxeWudERyfppUcZXjMwIMLnuSfruD2lcfg=\ngithub.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=\ngithub.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=\ngithub.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=\ngithub.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=\ngithub.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=\ngithub.com/prometheus/common v0.67.5 h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTUGI4=\ngithub.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw=\ngithub.com/prometheus/procfs v0.19.2 h1:zUMhqEW66Ex7OXIiDkll3tl9a1ZdilUOd/F6ZXw4Vws=\ngithub.com/prometheus/procfs v0.19.2/go.mod h1:M0aotyiemPhBCM0z5w87kL22CxfcH05ZpYlu+b4J7mw=\ngithub.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=\ngithub.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=\ngithub.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=\ngithub.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=\ngithub.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=\ngithub.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=\ngithub.com/spf13/pflag v1.0.10 
h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=\ngithub.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=\ngithub.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=\ngithub.com/stretchr/objx v0.5.3 h1:jmXUvGomnU1o3W/V5h2VEradbpJDwGrzugQQvL0POH4=\ngithub.com/stretchr/objx v0.5.3/go.mod h1:rDQraq+vQZU7Fde9LOZLr8Tax6zZvy4kuNKF+QYS+U0=\ngithub.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=\ngithub.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=\ngithub.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=\ngithub.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=\ngithub.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=\ngo.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I=\ngo.opentelemetry.io/otel v1.43.0/go.mod h1:JuG+u74mvjvcm8vj8pI5XiHy1zDeoCS2LB1spIq7Ay0=\ngo.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A=\ngo.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0=\ngo.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=\ngo.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=\ngo.yaml.in/yaml/v2 v2.4.4 h1:tuyd0P+2Ont/d6e2rl3be67goVK4R6deVxCUX5vyPaQ=\ngo.yaml.in/yaml/v2 v2.4.4/go.mod h1:gMZqIpDtDqOfM0uNfy0SkpRhvUryYH0Z6wdMYcacYXQ=\ngo.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=\ngo.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=\ngolang.org/x/exp v0.0.0-20260410095643-746e56fc9e2f h1:W3F4c+6OLc6H2lb//N1q4WpJkhzJCK5J6kUi1NTVXfM=\ngolang.org/x/exp v0.0.0-20260410095643-746e56fc9e2f/go.mod h1:J1xhfL/vlindoeF/aINzNzt2Bket5bjo9sdOYzOsU80=\ngolang.org/x/net v0.53.0 h1:d+qAbo5L0orcWAr0a9JweQpjXF19LMXJE8Ey7hwOdUA=\ngolang.org/x/net v0.53.0/go.mod 
h1:JvMuJH7rrdiCfbeHoo3fCQU24Lf5JJwT9W3sJFulfgs=\ngolang.org/x/oauth2 v0.36.0 h1:peZ/1z27fi9hUOFCAZaHyrpWG5lwe0RJEEEeH0ThlIs=\ngolang.org/x/oauth2 v0.36.0/go.mod h1:YDBUJMTkDnJS+A4BP4eZBjCqtokkg1hODuPjwiGPO7Q=\ngolang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=\ngolang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=\ngolang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=\ngolang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=\ngolang.org/x/term v0.42.0 h1:UiKe+zDFmJobeJ5ggPwOshJIVt6/Ft0rcfrXZDLWAWY=\ngolang.org/x/term v0.42.0/go.mod h1:Dq/D+snpsbazcBG5+F9Q1n2rXV8Ma+71xEjTRufARgY=\ngolang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=\ngolang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164=\ngolang.org/x/time v0.15.0 h1:bbrp8t3bGUeFOx08pvsMYRTCVSMk89u4tKbNOZbp88U=\ngolang.org/x/time v0.15.0/go.mod h1:Y4YMaQmXwGQZoFaVFk4YpCt4FLQMYKZe9oeV/f4MSno=\ngoogle.golang.org/genproto/googleapis/api v0.0.0-20260414002931-afd174a4e478 h1:yQugLulqltosq0B/f8l4w9VryjV+N/5gcW0jQ3N8Qec=\ngoogle.golang.org/genproto/googleapis/api v0.0.0-20260414002931-afd174a4e478/go.mod h1:C6ADNqOxbgdUUeRTU+LCHDPB9ttAMCTff6auwCVa4uc=\ngoogle.golang.org/genproto/googleapis/rpc v0.0.0-20260414002931-afd174a4e478 h1:RmoJA1ujG+/lRGNfUnOMfhCy5EipVMyvUE+KNbPbTlw=\ngoogle.golang.org/genproto/googleapis/rpc v0.0.0-20260414002931-afd174a4e478/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=\ngoogle.golang.org/protobuf v1.36.12-0.20260120151049-f2248ac996af h1:+5/Sw3GsDNlEmu7TfklWKPdQ0Ykja5VEmq2i817+jbI=\ngoogle.golang.org/protobuf v1.36.12-0.20260120151049-f2248ac996af/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=\ngopkg.in/check.v1 
v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=\ngopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo=\ngopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=\ngopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=\ngopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=\ngopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=\ngopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\nk8s.io/api v0.0.0-20260506204515-74f8152a4388 h1:gy+NiFkFNPY3Ffk9NrodlEO7UfiIsnCYuH1GAusQrC8=\nk8s.io/api v0.0.0-20260506204515-74f8152a4388/go.mod h1:yYTMXzsjNiH+UQjxNYdM9Y8mPEp3ABXQlEHLK7foc3U=\nk8s.io/apimachinery v0.0.0-20260506204125-679298e8cb0f h1:rKddGTHDr/T6QJctEOwc2yV6UweRnUgtcjb0bDeughM=\nk8s.io/apimachinery v0.0.0-20260506204125-679298e8cb0f/go.mod h1:PHkLkx5z/hAHB+xyUMdZ14HcSzFDD54Z/5l9hYoT4OU=\nk8s.io/apiserver v0.0.0-20260506211241-4aa69dc1d36e h1:WWNwYFIi2Oj78Zd5HrSiT4u4745S8V3H4xTEPCm5bDo=\nk8s.io/apiserver v0.0.0-20260506211241-4aa69dc1d36e/go.mod h1:7RDJir6Qiqlo1HXihyYqhSl8bhGW2y38YwObmnEy/lc=\nk8s.io/client-go v0.0.0-20260506205028-24705f39ff1a h1:j570PbaQWtkGiOLdF6BbywLyq+F5yNv0MqxiapmVOqY=\nk8s.io/client-go v0.0.0-20260506205028-24705f39ff1a/go.mod h1:LEc9AYsnF1aeDyf2S7+Y/ZiDCap5ZSkZoaOoPFaTt9Q=\nk8s.io/component-base v0.0.0-20260506210233-5f255b73349b h1:i90Fvxn5awj/xHulI+kU1J/59DiRpFCMenFP7u1nJYk=\nk8s.io/component-base v0.0.0-20260506210233-5f255b73349b/go.mod h1:E0LcCidT6JoQw2DiYPCF1+9X21paPRTUk8t5jrJjvEk=\nk8s.io/component-helpers v0.0.0-20260506210426-85ee2c4ec30d h1:JD07Qc1QyjV1rBhU9gOXAUhkSxjdpM93jPHbYo53YIs=\nk8s.io/component-helpers v0.0.0-20260506210426-85ee2c4ec30d/go.mod h1:/kz5XYUf2o/t3oc+7dW/QKSBpU10MrO+xD/kvHHERlU=\nk8s.io/dynamic-resource-allocation v0.0.0-20260506220151-a1ba28d29f24 
h1:kO93oEn/AFBHTubvp55F+62rLw+rwTDaE4yKFVKw75U=\nk8s.io/dynamic-resource-allocation v0.0.0-20260506220151-a1ba28d29f24/go.mod h1:X6IGQUVO7AI6v+huHtGWtahx8IhehP0s0P3qKrxp3BE=\nk8s.io/klog/v2 v2.140.0 h1:Tf+J3AH7xnUzZyVVXhTgGhEKnFqye14aadWv7bzXdzc=\nk8s.io/klog/v2 v2.140.0/go.mod h1:o+/RWfJ6PwpnFn7OyAG3QnO47BFsymfEfrz6XyYSSp0=\nk8s.io/kube-openapi v0.0.0-20260502001324-b7f5293f4787 h1:kHv8PETbPIVHfqKBYwTNNSjqChf/7xn3JOS3re+NWs8=\nk8s.io/kube-openapi v0.0.0-20260502001324-b7f5293f4787/go.mod h1:Cyq7UE0QtGe+Zo+/6XFrxiS4Mq0tLyQEONkFzSkfp9o=\nk8s.io/utils v0.0.0-20260210185600-b8788abfbbc2 h1:AZYQSJemyQB5eRxqcPky+/7EdBj0xi3g0ZcxxJ7vbWU=\nk8s.io/utils v0.0.0-20260210185600-b8788abfbbc2/go.mod h1:xDxuJ0whA3d0I4mf/C4ppKHxXynQ+fxnkmQH0vTHnuk=\nsigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg=\nsigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=\nsigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=\nsigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=\nsigs.k8s.io/structured-merge-diff/v6 v6.3.2 h1:kwVWMx5yS1CrnFWA/2QHyRVJ8jM6dBA80uLmm0wJkk8=\nsigs.k8s.io/structured-merge-diff/v6 v6.3.2/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=\nsigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=\nsigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=\n"
  }
]