I was seeing this rather character-dense yet information-sparse error from Docker:
Error response from daemon: manifest for graylog/graylog:latest not found: manifest unknown: manifest unknown
Yes, I was hacking around with Graylog in this specific instance.
As it turns out, Graylog doesn’t have a latest tag on Docker Hub, and Docker adds :latest to any image you attempt to pull without explicitly specifying a tag. What happens if there’s no :latest tag in the registry? You get the above error. Search your container registry and repository for the tags they actually publish, and pick the one that makes the most sense for you.
When switching to a Linode Kubernetes Engine (LKE) cluster context, any command such as kubectl get pods or kubectl cluster-info hangs for about a minute before ultimately showing the following error:
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
It’s super simple. Run kubectl config view and make sure that your authentication information is accurate. In my case the user token was wrong, since I had been bringing up and tearing down LKE clusters and forgot to change my token. The error could probably be a bit more verbose or otherwise narrow the context down a bit, but alas.
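For reference, here’s roughly where that token lives in a kubeconfig. This is an abbreviated sketch; the cluster name, user name, server address, and token are all placeholders, not what Linode actually generates.

```yaml
apiVersion: v1
kind: Config
clusters:
- name: lke-example
  cluster:
    server: https://my-cluster:443
contexts:
- name: lke-example-ctx
  context:
    cluster: lke-example
    user: lke-example-admin
current-context: lke-example-ctx
users:
- name: lke-example-admin
  user:
    # If this token is from a torn-down cluster, requests will fail
    token: <stale-token-goes-here>
```

If you’d rather not hand-edit the YAML, `kubectl config set-credentials lke-example-admin --token=<new-token>` updates the entry in place.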
The Long Story
Incidentally, I was running Windows 10 and using kubectl from PowerShell, but that doesn’t seem to be germane to the situation.
kubectl cluster-info --v=10 provided a ton of information. Note that --v is perhaps underdocumented (or was at one point).
What I found was that I was getting numerous Got a Retry-After 1s response for attempt 8 to https://my-cluster:443/api?timeout=32s responses until the whole request timed out. I checked my Linode control panel and the cluster was indeed up and running.
The whole thing smelled like some kind of auth issue to me, so I double-checked the kubectl config file that Linode offers in the UI (and via API), and noticed that the token didn’t match what I had in my .kube/config file. It was then that I remembered I had been tearing down and re-creating k8s clusters via Terraform and had forgotten to update my config file with the proper user token. Oh, the joys of late-night hacking.
Once I updated my config file, I was able to access Kubernetes.
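A quick way to catch this kind of drift is to compare the token kubectl is actually using against the one in the freshly downloaded kubeconfig. The downloaded filename below is a placeholder; if the two lines printed here differ, it’s time to update ~/.kube/config.

```shell
# Print the token line from both files side by side; grep prefixes
# each match with its filename, so a mismatch is easy to spot.
grep 'token:' lke-cluster-kubeconfig.yaml ~/.kube/config
```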