
If you are interested in finding out how `kubectl exec` works, then I hope you will find this post useful. We will look into how the command works by examining the relevant code in kubectl, the K8s API Server, the Kubelet, and the Container Runtime Interface (CRI) Docker API.
About This Command
The `kubectl exec` command is an invaluable tool for those of us who regularly work with containerized workloads on Kubernetes. It allows us to inspect and debug our applications by executing commands inside our containers.
Let’s use `kubectl` v1.15.0 to run an example:
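Something along these lines (the pod name `nginx` and the shell path are placeholders for whatever is running in your cluster):

```shell
# Run a one-off command inside the container:
kubectl exec nginx date

# Get an interactive shell: -i keeps stdin open, -t allocates a TTY.
kubectl exec -it nginx -- /bin/bash
```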
The first `exec` command runs a `date` command inside my Nginx container. The second `exec` command uses the `-i` and `-t` flags to get a shell to my Nginx container.
The CLI Code
Let’s repeat the command with increased log verbosity:
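The output looks roughly like the following abridged sketch (the API server host, query parameters, and surrounding log lines are illustrative placeholders, not a verbatim capture):

```shell
kubectl -v=8 exec nginx date
# ... (abridged)
# GET https://<api-server>/api/v1/namespaces/default/pods/nginx
# Response Status: 200 OK
# ...
# POST https://<api-server>/api/v1/namespaces/default/pods/nginx/exec?command=date&container=nginx&stderr=true&stdout=true
# Response Status: 101 Switching Protocols
```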
Notice that there are two HTTP requests:

- a `GET` request to fetch the pod
- a `POST` request to the `exec` subresource of the pod.

The server responds with a `101 Upgrade` response header, indicating to the client that it has switched to the SPDY protocol.
The API Server Code
Let’s examine the API Server’s code to see how it registers the `rest.ExecRest` handler to handle `/exec` subresource requests. This handler is used to determine the node endpoint to `exec` to.
One of the things that the API Server does when starting is to instruct its embedded `GenericAPIServer` to install the ‘legacy’ API:
During the API installation, an instance of the `LegacyRESTStorage` type is instantiated, which creates a `storage.PodStorage` instance:

This `storage.PodStorage` instance is then added to the `restStorageMap` map. Notice that in this map, the `pods/exec` path is mapped to the `podStorage`’s `rest.ExecRest` handler:
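The real map is much larger, but the idea can be sketched in a few lines of plain Go. The `Storage` interface and the two handler types below are simplified stand-ins, not the actual apiserver types:

```go
package main

import "fmt"

// Storage is a simplified stand-in for the apiserver's rest.Storage
// interface; the real one carries much more behavior.
type Storage interface{ Name() string }

type podRest struct{}  // stand-in for the main pods storage
type execRest struct{} // stand-in for rest.ExecRest

func (podRest) Name() string  { return "pods storage" }
func (execRest) Name() string { return "rest.ExecRest" }

// buildRestStorageMap mirrors the shape of the API Server's
// restStorageMap: each (sub)resource path maps to its handler.
func buildRestStorageMap() map[string]Storage {
	return map[string]Storage{
		"pods":      podRest{},
		"pods/exec": execRest{},
	}
}

func main() {
	m := buildRestStorageMap()
	// The pods/exec path resolves to the exec handler.
	fmt.Println(m["pods/exec"].Name())
}
```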
This map then becomes part of an `apiGroupInfo` instance, which gets added to the `GenericAPIServer`:
The `GoRestfulContainer` has a `ServeMux` that knows how to map incoming request URLs to the different handlers.
Let’s take a closer look at how the `rest.ExecRest` handler works. Its `Connect()` method calls the `pod.ExecLocation()` function to determine the `exec` subresource URL of a pod container:
The URL returned by the `pod.ExecLocation()` function is used by the API Server to determine which node to connect to.
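A rough sketch of the idea, with made-up names and a simplified path layout (the real function also deals with pod UIDs, the node's reported address and port, and validation):

```go
package main

import (
	"fmt"
	"net/url"
)

// execLocation mimics, in spirit, what pod.ExecLocation does: given the
// node a pod runs on, build the kubelet's exec subresource URL for one
// of its containers. Treat this as an illustrative sketch, not the real
// apiserver code.
func execLocation(nodeHost, namespace, pod, container string, command []string) *url.URL {
	q := url.Values{}
	for _, c := range command {
		q.Add("command", c)
	}
	return &url.URL{
		Scheme:   "https",
		Host:     nodeHost, // the node's kubelet endpoint
		Path:     fmt.Sprintf("/exec/%s/%s/%s", namespace, pod, container),
		RawQuery: q.Encode(),
	}
}

func main() {
	u := execLocation("10.0.0.5:10250", "default", "nginx", "nginx", []string{"date"})
	fmt.Println(u.String())
}
```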
Now let’s look at the Kubelet code.
The Kubelet Code
How does the Kubelet register its `exec` handler? What does its interaction with the Docker API look like?
The Kubelet initialization process is quite involved. The following two functions are most relevant to this post:
- `PreInitRuntimeService()`, which initializes the CRI using the `dockershim` package
- `RunKubelet()`, which registers the handlers and starts the server
Setting up the Handler
As the Kubelet is starting up, its `RunKubelet()` function calls the unexported `startKubelet()` function to start the `ListenAndServe()` method of the `kubelet.Kubelet` instance. This method then calls the `ListenAndServeKubeletServer()` function, which uses the `NewServer()` constructor to install the “debugging” handlers:
The `InstallDebuggingHandlers()` function registers the HTTP request patterns with the `getExec()` handler:
The `getExec()` handler calls the `GetExec()` method of the `s.host` instance:
The `s.host` is instantiated as an instance of the `kubelet.Kubelet` type. It has a nested reference to the `StreamingRuntime` interface, which is instantiated as a `kubeGenericRuntimeManager` instance. This runtime manager is the key component that interacts with the Docker API. It implements the `GetExec()` method:
This method invokes the `runtimeService.Exec()` method. Upon further investigation, we discover that the `runtimeService` is an interface defined in the CRI package. The `kuberuntime.kubeGenericRuntimeManager`’s `runtimeService` object was instantiated as a `kuberuntime.instrumentedRuntimeService` type, which implements the `runtimeService.Exec()` method:
Furthermore, the nested `service` object of this `instrumentedRuntimeService` instance is instantiated as an instance of the `remote.RemoteRuntimeService` type. This type owns an `Exec()` method:
This `Exec()` method issues a GRPC call to the `/runtime.v1alpha2.RuntimeService/Exec` endpoint to prepare a streaming endpoint, which will be used to execute commands in the container. (See the next subsection, Setting up the Docker shim, for more on setting up the Docker shim as a GRPC server.)
The GRPC server handles this by invoking the `RuntimeServiceServer.Exec()` method. This method is implemented by the `dockershim.dockerService` struct:
The `streamingServer` in line 10 is a `streaming.Server` interface. It is instantiated in the `dockershim.NewDockerService()` constructor:
Let’s look at the implementation of its `GetExec()` method:
This is where the streaming endpoint is built and returned to the GRPC client.
As seen above, the `restful.WebService` instance then routes pod `exec` requests to this endpoint.
Setting up the Docker shim
The `PreInitRuntimeService()` function creates and starts the Docker shim as a GRPC server. While instantiating an instance of the `dockershim.dockerService` type, its nested `streamingRuntime` instance is assigned a reference to an instance of `dockershim.NativeExecHandler`, which implements the `dockershim.ExecHandler` interface:
The `NativeExecHandler.ExecInContainer()` method is the key to executing commands in containers using Docker’s `exec` API:
Ultimately, this is where the Kubelet invokes the Docker `exec` API.
The final piece of the puzzle is to figure out how the `streamingServer` handles `exec` requests. To do that, we’ll need to find its `exec` handler. Let’s start with the `streaming.NewServer()` constructor. This is where the `/exec/{token}` path is bound to the `serveExec` handler:
All `exec` requests sent to the `dockershim.dockerService` instance will end up at the `streamingServer`, because the `dockerService.ServeHTTP()` method calls the `ServeHTTP()` method of the `streamingServer` instance.
The `serveExec` handler calls the `remoteCommand.ServeExec()` function. And what does this function do? It calls into the `Executor.ExecInContainer()` method, which we discussed earlier. Remember, the `ExecInContainer()` method is the one that knows how to talk to the Docker `exec` API:
Conclusion
In this post, we looked at how the `kubectl exec` command works by examining the code of kubectl, the K8s API Server, the Kubelet, and the CRI Docker API.
We didn’t cover the details of the Docker `exec` API, nor how `docker exec` works.
The kubectl CLI issued `GET` and `POST` requests to the K8s API Server. In response, the server sent a `101 Upgrade` header to the client, indicating the switch to the SPDY protocol.
The K8s API Server used the `storage.PodStorage` and `rest.ExecRest` types to provide the handler mapping and logic. The `rest.ExecRest` handler determined the node endpoint to `exec` to.
The Kubelet requested a streaming endpoint from the Docker shim and forwarded the `exec` requests to the Docker `exec` API.
Although this post focused only on the `exec` command, it’s worth noting that other commands like `attach`, `port-forward` and `logs` follow a similar implementation pattern.