1 Create a pod using the data in pod.json
2 Create a pod based on the JSON passed into stdin
3 Edit the data in docker-registry.yaml in JSON then create the resource using the edited data
4 Create a cluster role named “pod-reader” that allows a user to perform “get”, “watch” and “list” on pods
5 Create a cluster role named “pod-reader” with ResourceName specified
6 Create a cluster role named “foo” with API Group specified
7 Create a cluster role named “foo” with SubResource specified
8 Create a cluster role named “foo” with NonResourceURL specified
9 Create a cluster role named “monitoring” with AggregationRule specified
10 Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role
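For reference, the tasks above typically map to commands like the following; the file name and binding name are illustrative, not taken from this document:

    kubectl create -f ./pod.json                                                          # task 1
    cat pod.json | kubectl create -f -                                                    # task 2
    kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods           # task 4
    kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1   # task 10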
11 Create a new config map named my-config based on folder bar
12 Create a new config map named my-config with specified keys instead of file basenames on disk
13 Create a new config map named my-config with key1=config1 and key2=config2
14 Create a new config map named my-config from the key=value pairs in the file
15 Create a new config map named my-config from an env file
16 Create a cron job
17 Create a cron job with a command
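Illustrative commands for the config map and cron job tasks above (paths, keys and the schedule are placeholders):

    kubectl create configmap my-config --from-file=path/to/bar                                     # task 11
    kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2     # task 13
    kubectl create configmap my-config --from-env-file=path/to/bar.env                             # task 15
    kubectl create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date                  # tasks 16-17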
18 Create a deployment named my-dep that runs the busybox image
19 Create a deployment with a command
20 Create a deployment named my-dep that runs the nginx image with 3 replicas
21 Create a deployment named my-dep that runs the busybox image and exposes port 5701
22 Create a single ingress called ‘simple’ that directs requests to foo.com/bar to svc svc1:8080 with a TLS secret “my-cert”
23 Create a catch-all ingress of “/path” pointing to service svc:port and Ingress Class as “otheringress”
24 Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2
25 Create an ingress with the same host and multiple paths
26 Create an ingress with multiple hosts and the pathType as Prefix
27 Create an ingress with TLS enabled using the default ingress certificate and different path types
28 Create an ingress with TLS enabled using a specific secret and pathType as Prefix
29 Create an ingress with a default backend
30 Create a job
31 Create a job with a command
32 Create a job from a cron job named “a-cronjob”
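Sketches of typical commands for the deployment, ingress and job tasks above (names and ports are placeholders):

    kubectl create deployment my-dep --image=busybox --port=5701                 # tasks 18, 21
    kubectl create deployment my-dep --image=nginx --replicas=3                  # task 20
    kubectl create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert"     # task 22
    kubectl create job my-job --image=busybox -- date                            # tasks 30-31
    kubectl create job test-job --from=cronjob/a-cronjob                         # task 32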
33 Create a new namespace named my-namespace
34 Create a pod disruption budget named my-pdb that will select all pods with the app=rails label and require at least one of them to be available at any point in time
35 Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label and require at least half of the pods selected to be available at any point in time
36 Create a priority class named high-priority
37 Create a priority class named default-priority that is considered the global default priority
38 Create a priority class named high-priority that cannot preempt pods with lower priority
39 Create a new resource quota named my-quota
40 Create a new resource quota named best-effort
41 Create a role named “pod-reader” that allows a user to perform “get”, “watch” and “list” on pods
42 Create a role named “pod-reader” with ResourceName specified
43 Create a role named “foo” with API Group specified
44 Create a role named “foo” with SubResource specified
45 Create a role binding for user1, user2, and group1 using the admin cluster role
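Representative commands for the tasks above (quota values, priority value and binding name are illustrative):

    kubectl create namespace my-namespace                                                  # task 33
    kubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1       # task 34
    kubectl create priorityclass high-priority --value=1000 --description="high priority"  # task 36
    kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2                            # task 39
    kubectl create role pod-reader --verb=get,list,watch --resource=pods                   # task 41
    kubectl create rolebinding admin-binding --clusterrole=admin --user=user1 --user=user2 --group=group1   # task 45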
46 If you don’t already have a .dockercfg file, you can create a dockercfg secret directly (see the example commands after this list)
47 Create a new secret named my-secret from ~/.docker/config.json
48 Create a new secret named my-secret with keys for each file in folder bar
49 Create a new secret named my-secret with specified keys instead of names on disk
50 Create a new secret named my-secret with key1=supersecret and key2=topsecret
51 Create a new secret named my-secret using a combination of a file and a literal
52 Create a new secret named my-secret from an env file
53 Create a new TLS secret named tls-secret with the given key pair
54 Create a new ClusterIP service named my-cs
55 Create a new ClusterIP service named my-cs (in headless mode)
56 Create a new ExternalName service named my-ns
57 Create a new LoadBalancer service named my-lbs
58 Create a new NodePort service named my-ns
59 Create a new service account named my-service-account
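Example commands for the secret and service tasks above; credentials, paths and ports are placeholders:

    kubectl create secret docker-registry my-secret --docker-username=user --docker-password=pass --docker-email=user@example.com   # task 46
    kubectl create secret generic my-secret --from-file=path/to/bar                                          # task 48
    kubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret    # task 50
    kubectl create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key                        # task 53
    kubectl create service clusterip my-cs --tcp=5678:8080                                                   # task 54
    kubectl create serviceaccount my-service-account                                                         # task 59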
60 List all pods in ps output format
61 List all pods in ps output format with more information (such as node name)
62 List a single replication controller with specified NAME in ps output format
63 List deployments in JSON output format, in the “v1” version of the “apps” API group
64 List a single pod in JSON output format
65 List a pod identified by type and name specified in “pod.yaml” in JSON output format
66 List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml
67 Return only the phase value of the specified pod
68 List resource information in custom columns
69 List all replication controllers and services together in ps output format
70 List one or more resources by their type and names
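Typical kubectl get invocations for the listing tasks above (directory and file names are illustrative):

    kubectl get pods                              # task 60
    kubectl get pods -o wide                      # task 61
    kubectl get deployments.v1.apps -o json       # task 63
    kubectl get -f pod.yaml -o json               # task 65
    kubectl get -k dir/                           # task 66
    kubectl get rc,services                       # task 69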
71 Start an nginx pod
72 Start a hazelcast pod and let the container expose port 5701
73 Start a hazelcast pod and set environment variables “DNS_DOMAIN=cluster” and “POD_NAMESPACE=default” in the container
74 Start a hazelcast pod and set labels “app=hazelcast” and “env=prod” in the container
75 Dry run; print the corresponding API objects without creating them
76 Start an nginx pod, but overload the spec with a partial set of values parsed from JSON
77 Start a busybox pod and keep it in the foreground, don’t restart it if it exits
78 Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command
79 Start the nginx pod using a different command and custom arguments
80 Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000
81 Create a service for a replication controller identified by type and name specified in “nginx-controller.yaml”, which serves on port 80 and connects to the containers on port 8000
82 Create a service for a pod valid-pod, which serves on port 444 with the name “frontend”
83 Create a second service based on the above service, exposing the container port 8443 as port 443 with the name “nginx-https”
84 Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named ‘video-stream’.
85 Create a service for a replicated nginx using a replica set, which serves on port 80 and connects to the containers on port 8000
86 Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000
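Illustrative kubectl run and kubectl expose commands for the tasks above (images, names and ports are placeholders):

    kubectl run nginx --image=nginx                                                                              # task 71
    kubectl run hazelcast --image=hazelcast/hazelcast --port=5701                                                # task 72
    kubectl run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default"   # task 73
    kubectl run nginx --image=nginx --dry-run=client -o yaml                                                     # task 75
    kubectl expose rc nginx --port=80 --target-port=8000                                                         # task 80
    kubectl expose pod valid-pod --port=444 --name=frontend                                                      # task 82
    kubectl expose deployment nginx --port=80 --target-port=8000                                                 # task 86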
87 Delete a pod using the type and name specified in pod.json
88 Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml
89 Delete a pod based on the type and name in the JSON passed into stdin
90 Delete pods and services with the same names “baz” and “foo”
91 Delete pods and services with label name=myLabel
92 Delete a pod with minimal delay
93 Force delete a pod on a dead node
94 Delete all pods
95 Apply the configuration in pod.json to a pod
96 Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml
97 Apply the JSON passed into stdin to a pod
98 Apply the configuration in manifest.yaml that matches label app=nginx and delete all the other resources that are not in the file and match label app=nginx (note: --prune is still in Alpha)
99 Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file
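Representative delete and apply commands for the tasks above (file and resource names are illustrative):

    kubectl delete -f ./pod.json                          # task 87
    kubectl delete pod,service baz foo                    # task 90
    kubectl delete pods,services -l name=myLabel          # task 91
    kubectl delete pods --all                             # task 94
    kubectl apply -f ./pod.json                           # task 95
    kubectl apply -k dir/                                 # task 96
    kubectl apply --prune -f manifest.yaml -l app=nginx   # task 98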
100 Edit the last-applied-configuration annotations by type/name in YAML
101 Edit the last-applied-configuration annotations by file in JSON
102 Set the last-applied-configuration of a resource to match the contents of a file
103 Execute set-last-applied against each configuration file in a directory
104 Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist
105 View the last-applied-configuration annotations by type/name in YAML
106 View the last-applied-configuration annotations by file in JSON
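A minimal sketch of the corresponding apply subcommands, assuming a deployment named nginx and an illustrative file path:

    kubectl apply edit-last-applied deployment/nginx        # task 100
    kubectl apply set-last-applied -f path/to/file.yaml     # task 102
    kubectl apply view-last-applied deployment/nginx        # task 105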
107 Update pod ‘foo’ with the annotation ‘description’ and the value ‘my frontend’. If the same annotation is set multiple times, only the last value will be applied
108 Update a pod identified by type and name in “pod.json”
109 Update pod ‘foo’ with the annotation ‘description’ and the value ‘my frontend running nginx’, overwriting any existing value
110 Update all pods in the namespace
111 Update pod ‘foo’ only if the resource is unchanged from version 1
112 Update pod ‘foo’ by removing an annotation named ‘description’ if it exists. This does not require the --overwrite flag
113 Auto scale a deployment “foo”, with the number of pods between 2 and 10; no target CPU utilization is specified, so a default autoscaling policy will be used
114 Auto scale a replication controller “foo”, with the number of pods between 1 and 5, target CPU utilization at 80%
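Illustrative annotate and autoscale commands for the tasks above:

    kubectl annotate pods foo description='my frontend'                              # task 107
    kubectl annotate --overwrite pods foo description='my frontend running nginx'    # task 109
    kubectl annotate pods foo description-                                           # task 112
    kubectl autoscale deployment foo --min=2 --max=10                                # task 113
    kubectl autoscale rc foo --max=5 --cpu-percent=80                                # task 114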
115 Create an interactive debugging session in pod mypod and immediately attach to it (requires the EphemeralContainers feature to be enabled in the cluster)
116 Create a debug container named debugger using a custom automated debugging image (requires the EphemeralContainers feature to be enabled in the cluster)
117 Create a copy of mypod adding a debug container and attach to it
118 Create a copy of mypod changing the command of mycontainer
119 Create a copy of mypod changing all container images to busybox
120 Create a copy of mypod adding a debug container and changing container images
121 Create an interactive debugging session on a node and immediately attach to it. The container will run in the host namespaces and the host’s filesystem will be mounted at /host
122 Diff resources included in pod.json
123 Diff file read from stdin
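Sketches of debug and diff commands for the tasks above; the image, copy name and node name are placeholders:

    kubectl debug mypod -it --image=busybox                        # task 115
    kubectl debug mypod -it --image=busybox --copy-to=my-debugger  # task 117
    kubectl debug node/mynode -it --image=busybox                  # task 121
    kubectl diff -f pod.json                                       # task 122
    cat service.yaml | kubectl diff -f -                           # task 123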
124 Edit the service named ‘docker-registry’
125 Use an alternative editor
126 Edit the job ‘myjob’ in JSON using the v1 API format
127 Edit the deployment ‘mydeployment’ in YAML and save the modified config in its annotation
128 Build the current working directory
129 Build some shared configuration directory
130 Build from GitHub
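Illustrative edit and kustomize commands for the tasks above (the editor and the shared directory path are placeholders):

    kubectl edit svc/docker-registry                       # task 124
    KUBE_EDITOR="nano" kubectl edit svc/docker-registry    # task 125
    kubectl kustomize .                                    # task 128
    kubectl kustomize /path/to/shared/config               # task 129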
131 Update pod ‘foo’ with the label ‘unhealthy’ and the value ‘true’
132 Update pod ‘foo’ with the label ‘status’ and the value ‘unhealthy’, overwriting any existing value
133 Update all pods in the namespace
134 Update a pod identified by the type and name in “pod.json”
135 Update pod ‘foo’ only if the resource is unchanged from version 1
136 Update pod ‘foo’ by removing a label named ‘bar’ if it exists. This does not require the --overwrite flag
137 Partially update a node using a strategic merge patch, specifying the patch as JSON
138 Partially update a node using a strategic merge patch, specifying the patch as YAML
139 Partially update a node identified by the type and name specified in “node.json” using a strategic merge patch
140 Update a container’s image; spec.containers[*].name is required because it’s a merge key
141 Update a container’s image using a JSON patch with positional arrays
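Example label and patch commands for the tasks above; the node name is illustrative:

    kubectl label pods foo unhealthy=true                                  # task 131
    kubectl label --overwrite pods foo status=unhealthy                    # task 132
    kubectl label pods foo bar-                                            # task 136
    kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'     # task 137
    kubectl patch -f node.json -p '{"spec":{"unschedulable":true}}'        # task 139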
142 Replace a pod using the data in pod.json
143 Replace a pod based on the JSON passed into stdin
144 Update a single-container pod’s image version (tag) to v4
145 Force replace, delete and then re-create the resource
146 Roll back to the previous deployment
147 Check the rollout status of a daemonset
148 View the rollout history of a deployment
149 View the details of daemonset revision 3
150 Mark the nginx deployment as paused. Any current state of the deployment will continue its function; new updates to the deployment will not have an effect as long as the deployment is paused
151 Restart a deployment
152 Restart a daemon set
153 Resume an already paused deployment
154 Watch the rollout status of a deployment
155 Roll back to the previous deployment
156 Roll back to daemonset revision 3
157 Roll back to the previous deployment with dry-run
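Representative replace and rollout commands for the tasks above (the deployment and daemonset names are illustrative):

    kubectl replace -f ./pod.json                 # task 142
    kubectl replace --force -f ./pod.json         # task 145
    kubectl rollout undo deployment/abc           # tasks 146, 155
    kubectl rollout status daemonset/foo          # task 147
    kubectl rollout history deployment/abc        # task 148
    kubectl rollout pause deployment/nginx        # task 150
    kubectl rollout restart deployment/abc        # task 151
    kubectl rollout resume deployment/nginx       # task 153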
158 Scale a replica set named ‘foo’ to 3
159 Scale a resource identified by type and name specified in “foo.yaml” to 3
160 If the deployment named mysql’s current size is 2, scale mysql to 3
161 Scale multiple replication controllers
162 Scale a stateful set named ‘web’ to 3
163 Update deployment ‘registry’ with a new environment variable
164 List the environment variables defined on deployment ‘sample-build’
165 List the environment variables defined on all pods
166 Output the modified deployment in YAML without altering the object on the server
167 Update all containers in all replication controllers in the project to have ENV=prod
168 Import environment from a secret
169 Import environment from a config map with a prefix
170 Import specific keys from a config map
171 Remove the environment variable ENV from container ‘c1’ in all deployment configs
172 Remove the environment variable ENV from a deployment definition on disk and update the deployment config on the server
173 Set some of the local shell environment into a deployment config on the server
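Illustrative scale and set env commands for the tasks above; secret and variable names are placeholders:

    kubectl scale --replicas=3 rs/foo                                   # task 158
    kubectl scale --replicas=3 -f foo.yaml                              # task 159
    kubectl scale --current-replicas=2 --replicas=3 deployment/mysql    # task 160
    kubectl scale --replicas=3 statefulset/web                          # task 162
    kubectl set env deployment/registry STORAGE_DIR=/local              # task 163
    kubectl set env deployment/sample-build --list                      # task 164
    kubectl set env --from=secret/mysecret deployment/myapp             # task 168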
174 Set a deployment’s nginx container image to ‘nginx:1.9.1’, and its busybox container image to ‘busybox’
175 Update all deployments’ and rc’s nginx container’s image to ‘nginx:1.9.1’
176 Update the image of all containers of daemonset abc to ‘nginx:1.9.1’
177 Print the result (in YAML format) of updating the nginx container image from a local file, without hitting the server
178 Set a deployment’s nginx container CPU limits to “200m” and memory to “512Mi”
179 Set the resource requests and limits for all containers in nginx
180 Remove the resource requests on containers in nginx
181 Print the result (in YAML format) of updating nginx container limits from a local file, without hitting the server
182 Set the labels and selector before creating a deployment/service pair
183 Set deployment nginx-deployment’s service account to serviceaccount1
184 Print the result (in YAML format) of updating the nginx deployment with the service account from a local file, without hitting the API server
185 Update a cluster role binding for serviceaccount1
186 Update a role binding for user1, user2, and group1
187 Print the result (in YAML format) of updating rolebinding subjects from a local file, without hitting the server
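Sketches of the kubectl set commands behind the tasks above; binding names and the namespace in the service account reference are illustrative:

    kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1              # task 174
    kubectl set image daemonset abc *=nginx:1.9.1                                      # task 176
    kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi     # task 178
    kubectl set serviceaccount deployment nginx-deployment serviceaccount1             # task 183
    kubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1   # task 185
    kubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1            # task 186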
188 Wait for the pod “busybox1” to contain the status condition of type “Ready”
189 The default value of the status condition is true; you can set it to false
190 Wait for the pod “busybox1” to be deleted, with a timeout of 60s, after having issued the “delete” command
191 Get output from running pod mypod; use the ‘kubectl.kubernetes.io/default-container’ annotation for selecting the container to be attached, or the first container in the pod will be chosen
192 Get output from ruby-container from pod mypod
193 Switch to raw terminal mode; sends stdin to ‘bash’ in ruby-container from pod mypod and sends stdout/stderr from ‘bash’ back to the client
194 Get output from the first pod of a replica set named nginx
195 Check to see if I can create pods in any namespace
196 Check to see if I can list deployments in my current namespace
197 Check to see if I can do everything in my current namespace (“*” means all)
198 Check to see if I can get the job named “bar” in namespace “foo”
199 Check to see if I can read pod logs
200 Check to see if I can access the URL /logs/
201 List all allowed actions in namespace “foo”
202 Reconcile RBAC resources from a file
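Illustrative wait, attach and auth commands for the tasks above (the RBAC file name is a placeholder):

    kubectl wait --for=condition=Ready pod/busybox1        # task 188
    kubectl wait --for=delete pod/busybox1 --timeout=60s   # task 190
    kubectl attach mypod                                   # task 191
    kubectl attach mypod -c ruby-container                 # task 192
    kubectl auth can-i create pods --all-namespaces        # task 195
    kubectl auth can-i get pods --subresource=log          # task 199
    kubectl auth can-i --list --namespace=foo              # task 201
    kubectl auth reconcile -f my-rbac-rules.yaml           # task 202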
203 Important note: ‘kubectl cp’ requires that the ‘tar’ binary is present in your container image; if ‘tar’ is not present, ‘kubectl cp’ will fail. For advanced use cases, such as symlinks, wildcard expansion or file mode preservation, consider using ‘kubectl exec’. Copy the local file /tmp/foo to /tmp/bar in a remote pod in a specific namespace
204 Copy /tmp/foo from a remote pod to /tmp/bar locally
205 Copy the local directory /tmp/foo_dir to /tmp/bar_dir in a remote pod in the default namespace
206 Copy the local file /tmp/foo to /tmp/bar in a remote pod in a specific container
207 Copy the local file /tmp/foo to /tmp/bar in a remote pod in a specific namespace
208 Copy /tmp/foo from a remote pod to /tmp/bar locally
209 Describe a node
210 Describe a pod
211 Describe a pod identified by type and name in “pod.json”
212 Describe all pods
213 Describe pods by label name=myLabel
214 Describe all pods managed by the ‘frontend’ replication controller (rc-created pods get the name of the rc as a prefix in the pod name)
215 Get output from running the ‘date’ command from pod mypod, using the first container by default
216 Get output from running the ‘date’ command in ruby-container from pod mypod
217 Switch to raw terminal mode; sends stdin to ‘bash’ in ruby-container from pod mypod and sends stdout/stderr from ‘bash’ back to the client
218 List the contents of /usr from the first container of pod mypod and sort by modification time. If the command you want to execute in the pod has any flags in common (e.g. -i), you must use two dashes (--) to separate your command’s flags/arguments. Also note, do not surround your command and its flags/arguments with quotes unless that is how you would execute it normally (i.e., do ls -t /usr, not “ls -t /usr”)
219 Get output from running the ‘date’ command from the first pod of the deployment mydeployment, using the first container by default
220 Get output from running the ‘date’ command from the first pod of the service myservice, using the first container by default
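Example cp, describe and exec commands for the tasks above; pod, namespace and node names are illustrative:

    kubectl cp /tmp/foo some-namespace/some-pod:/tmp/bar     # tasks 203, 207
    kubectl cp some-namespace/some-pod:/tmp/foo /tmp/bar     # task 204
    kubectl cp /tmp/foo_dir some-pod:/tmp/bar_dir            # task 205
    kubectl describe nodes my-node                           # task 209
    kubectl describe pods -l name=myLabel                    # task 213
    kubectl exec mypod -- date                               # task 215
    kubectl exec mypod -c ruby-container -- date             # task 216
    kubectl exec mypod -i -t -- ls -t /usr                   # task 218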
221 Return snapshot logs from pod nginx with only one container
222 Return snapshot logs from pod nginx with multiple containers
223 Return snapshot logs from all containers in pods defined by label app=nginx
224 Return snapshot of previous terminated ruby container logs from pod web-1
225 Begin streaming the logs of the ruby container in pod web-1
226 Begin streaming the logs from all containers in pods defined by label app=nginx
227 Display only the most recent 20 lines of output in pod nginx
228 Show all logs from pod nginx written in the last hour
229 Show logs from a kubelet with an expired serving certificate
230 Return snapshot logs from the first container of a job named hello
231 Return snapshot logs from container nginx-1 of a deployment named nginx
232 Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
233 Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment
234 Listen on port 8443 locally, forwarding to the targetPort of the service’s port named “https” in a pod selected by the service
235 Listen on port 8888 locally, forwarding to 5000 in the pod
236 Listen on port 8888 on all addresses, forwarding to 5000 in the pod
237 Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod
238 Listen on a random port locally, forwarding to 5000 in the pod
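Representative logs and port-forward commands for the tasks above (pod, deployment and service names are illustrative):

    kubectl logs nginx                                             # task 221
    kubectl logs -l app=nginx --all-containers=true                # task 223
    kubectl logs -p -c ruby web-1                                  # task 224
    kubectl logs -f -c ruby web-1                                  # task 225
    kubectl logs --tail=20 nginx                                   # task 227
    kubectl logs --since=1h nginx                                  # task 228
    kubectl port-forward pod/mypod 5000 6000                       # task 232
    kubectl port-forward deployment/mydeployment 5000 6000         # task 233
    kubectl port-forward service/myservice 8443:https              # task 234
    kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000     # task 236
    kubectl port-forward pod/mypod :5000                           # task 238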
239 To proxy all of the Kubernetes API and nothing else
240 To proxy only part of the Kubernetes API and also some static files. You can get pods info with ‘curl localhost:8001/api/v1/pods’
241 To proxy the entire Kubernetes API at a different root. You can get pods info with ‘curl localhost:8001/custom/api/v1/pods’
242 Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/
243 Run a proxy to the Kubernetes API server on an arbitrary local port. The chosen port for the server will be output to stdout
244 Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api. This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/
245 Show metrics for all nodes
246 Show metrics for a given node
247 Show metrics for all pods in the default namespace
248 Show metrics for all pods in the given namespace
249 Show metrics for a given pod and its containers
250 Show metrics for the pods defined by label name=myLabel
251 Print the supported API versions
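Illustrative proxy, top and api-versions commands for the tasks above (the pod name is a placeholder):

    kubectl proxy --api-prefix=/                          # task 239
    kubectl proxy --www=./local/www/ --port=8011          # task 242
    kubectl proxy --port=0                                # task 243
    kubectl proxy --api-prefix=/k8s-api                   # task 244
    kubectl top node                                      # task 245
    kubectl top pod                                       # task 247
    kubectl top pod my-pod --containers                   # task 249
    kubectl api-versions                                  # task 251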
252 Approve CSR ‘csr-sqgzp’
253 Deny CSR ‘csr-sqgzp’
254 Print the address of the control plane and cluster services
255 Dump the current cluster state to stdout
256 Dump the current cluster state to /path/to/cluster-state
257 Dump all namespaces to stdout
258 Dump a set of namespaces to /path/to/cluster-state
259 Mark node “foo” as unschedulable
260 Drain node “foo”, even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it
261 As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes
262 Update node ‘foo’ with a taint with key ‘dedicated’ and value ‘special-user’ and effect ‘NoSchedule’. If a taint with that key and effect already exists, its value is replaced as specified
263 Remove from node ‘foo’ the taint with key ‘dedicated’ and effect ‘NoSchedule’ if one exists
264 Remove from node ‘foo’ all the taints with key ‘dedicated’
265 Add a taint with key ‘dedicated’ on nodes having label mylabel=X
266 Add to node ‘foo’ a taint with key ‘bar’ and no value
267 Mark node “foo” as schedulable
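Typical commands for the certificate, cluster-info and node-maintenance tasks above:

    kubectl certificate approve csr-sqgzp                                    # task 252
    kubectl certificate deny csr-sqgzp                                       # task 253
    kubectl cluster-info                                                     # task 254
    kubectl cluster-info dump --output-directory=/path/to/cluster-state      # task 256
    kubectl cordon foo                                                       # task 259
    kubectl drain foo --force                                                # task 260
    kubectl drain foo --grace-period=900                                     # task 261
    kubectl taint nodes foo dedicated=special-user:NoSchedule                # task 262
    kubectl taint nodes foo dedicated:NoSchedule-                            # task 263
    kubectl uncordon foo                                                     # task 267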
268 Print the supported API resources
269 Print the supported API resources with more information
270 Print the supported API resources sorted by a column
271 Print the supported namespaced resources
272 Print the supported non-namespaced resources
273 Print the supported API resources with a specific APIGroup
274 Install bash completion on macOS using Homebrew (if running the Bash 3.2 included with macOS)
275 Install bash completion on Linux. If bash-completion is not installed on Linux, install the ‘bash-completion’ package via your distribution’s package manager, then load the kubectl completion code for bash into the current shell
276 Kubectl shell completion
277 Load the kubectl completion code for zsh[1] into the current shell
278 Set the kubectl completion code for zsh[1] to autoload on startup
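Illustrative api-resources and completion commands for the tasks above (the API group is a placeholder):

    kubectl api-resources -o wide                                  # task 269
    kubectl api-resources --namespaced=true                        # task 271
    kubectl api-resources --api-group=rbac.authorization.k8s.io    # task 273
    source <(kubectl completion bash)                              # task 275
    source <(kubectl completion zsh)                               # task 277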
279 Display the current-context
280 Delete the minikube cluster
281 Delete the context for the minikube cluster
282 Delete the minikube user
283 List the clusters that kubectl knows about
284 List all the contexts in your kubeconfig file
285 Describe one context in your kubeconfig file
286 List the users that kubectl knows about
287 Rename the context ‘old-name’ to ‘new-name’ in your kubeconfig file
288 Set the server field on the my-cluster cluster to https://1.2.3.4
289 Set the certificate-authority-data field on the my-cluster cluster
290 Set the cluster field in the my-context context to my-cluster
291 Set the client-key-data field in the cluster-admin user using the --set-raw-bytes option
292 Set only the server field on the e2e cluster entry without touching other values
293 Embed certificate authority data for the e2e cluster entry
294 Disable cert checking for the dev cluster entry
295 Set a custom TLS server name to use for validation for the e2e cluster entry
296 Set the user field on the gce context entry without touching other values
297 Set only the “client-key” field on the “cluster-admin” entry, without touching other values
298 Set basic auth for the “cluster-admin” entry
299 Embed client certificate data in the “cluster-admin” entry
300 Enable the Google Cloud Platform auth provider for the “cluster-admin” entry
301 Enable the OpenID Connect auth provider for the “cluster-admin” entry with additional args
302 Remove the “client-secret” config value for the OpenID Connect auth provider for the “cluster-admin” entry
303 Enable a new exec auth plugin for the “cluster-admin” entry
304 Define new exec auth plugin args for the “cluster-admin” entry
305 Create or update exec auth plugin environment variables for the “cluster-admin” entry
306 Remove exec auth plugin environment variables for the “cluster-admin” entry
307 Unset the current-context
308 Unset the namespace in the foo context
309 Use the context for the minikube cluster
310 Show merged kubeconfig settings
311 Show merged kubeconfig settings and raw certificate data
312 Get the password for the e2e user
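A few representative kubectl config invocations for the tasks above; cluster and context names are illustrative:

    kubectl config current-context                                   # task 279
    kubectl config get-clusters                                      # task 283
    kubectl config get-contexts                                      # task 284
    kubectl config rename-context old-name new-name                  # task 287
    kubectl config set-cluster my-cluster --server=https://1.2.3.4   # task 288
    kubectl config unset current-context                             # task 307
    kubectl config unset contexts.foo.namespace                      # task 308
    kubectl config use-context minikube                              # task 309
    kubectl config view                                              # task 310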
313 Get the documentation of the resource and its fields
314 Get the documentation of a specific field of a resource
315 Print flags inherited by all commands
316 Print the client and server versions for the current context
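Corresponding commands, using pods as an illustrative resource:

    kubectl explain pods                          # task 313
    kubectl explain pods.spec.containers          # task 314
    kubectl options                               # task 315
    kubectl version                               # task 316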