Bash Parameter Expansion: Variable Unset or Null ${YOU_BE_THE:-"Judge"}

I came across a lesser-used Bash idiom while setting up ZeroSSL TLS certificates. Specifically, it appears in ZeroSSL's wrapper script that installs their implementation of certbot. The idiom is seen here:
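The original line isn't reproduced here, so below is a sketch of the idiom, with a placeholder URL standing in for the real script location:

```shell
# Placeholder URL -- the real script points at ZeroSSL's certbot installer
CERTBOT_SCRIPT_LOCATION=${CERTBOT_SCRIPT_LOCATION-"https://example.com/zerossl-certbot-install.sh"}
```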


My interest was piqued by that one little dash in between CERTBOT_SCRIPT_LOCATION and "". To understand it, I’ll pull back and think about the whole line, component by component.


Let’s look at this line as if it were two sides of a seesaw, with the = sign as the fulcrum. The left half, CERTBOT_SCRIPT_LOCATION=, is simply a variable assignment. Whatever the right side of the = expands to will be stored in the variable CERTBOT_SCRIPT_LOCATION.

So far, so simple.

${ }

On the right side of the =, we have a dollar sign and a bunch of stuff within a pair of braces. Let’s ignore the content within the braces for now and examine the use of ${} as our next element.

The dollar sign character is interpreted by Bash to introduce a number of possible things, including command substitution, arithmetic expansion, accessing the value of a named variable, or parameter expansion.

Command substitution is triggered in Bash with $( and ends with a closing ). You could fill a variable with the return value of any command like this:

MY_VARIABLE=$( command )

Arithmetic expansion is triggered in Bash with $(( and ends with a matching )). Whatever is between the double parentheses is evaluated as an arithmetic expression.
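For example:

```shell
TOTAL=$(( 6 * 7 ))  # arithmetic expansion
echo "$TOTAL"       # prints 42
```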

Variable values are accessed when a $ is followed by a named variable. You’ve already seen one named variable in this article: CERTBOT_SCRIPT_LOCATION. However, it currently has no value. In fact, as you read this, we’re currently in the midst of figuring out what value is going to be assigned to that variable.

Parameter expansion is introduced in Bash with ${ and ends with a corresponding }. Any shell parameter found within the braces is expanded. There are a lot of arcane and esoteric expansion forms, but you’ve already been introduced to one type of shell parameter in this article: a variable. That’s right, shell variables are parameters. This brings us to the final piece of this puzzle.
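In its simplest form, parameter expansion just retrieves a variable’s value; the braces are handy for separating the parameter name from adjacent text:

```shell
FRUIT="apple"
echo "${FRUIT}"   # prints apple
echo "${FRUIT}s"  # prints apples -- without braces, Bash would look for a variable named FRUITs
```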


We know that CERTBOT_SCRIPT_LOCATION is a variable and thus a shell parameter, so Bash will attempt to expand it within the ${} construct. However, we’re pretty sure that it’s empty at this point. And what’s with the double-quoted string that contains a URL? And why is a dash separating them?! That lowly dash is the linchpin that holds all of this together.

Within a parameter expansion expression, the dash tests whether the parameter on its left is set. If it is set, the parameter is expanded to its value and whatever is on the right of the dash is discarded.

However, if the parameter on the left of the dash is not set, then the word on the right side of the dash is expanded (if it needs to be) and becomes the value of the whole expression. Note that the dash by itself doesn’t assign anything; in our line, the assignment happens only because the entire expansion sits on the right side of the = sign. Let’s take a look at our specimen:
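(The URL below is a placeholder for the real script location; only the expansion itself is shown here.)

```shell
# The right-hand side of the assignment, in isolation:
echo ${CERTBOT_SCRIPT_LOCATION-"https://example.com/zerossl-certbot-install.sh"}
```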


The above says, in plain language: “Does the variable CERTBOT_SCRIPT_LOCATION exist? If it does, return the variable’s value. If the variable doesn’t exist, return the double-quoted URL string instead.”

Putting it all Together

Whew! We’ve been through a lot, but there’s still a bit more to go. Let’s take a look at the whole line again, explain what’s happening, and then put it in context:
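Here is the whole line once more (with a placeholder URL again standing in for the real one):

```shell
CERTBOT_SCRIPT_LOCATION=${CERTBOT_SCRIPT_LOCATION-"https://example.com/zerossl-certbot-install.sh"}
```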


We’re creating a variable named CERTBOT_SCRIPT_LOCATION and assigning it the final value of the parameter expansion on the right side of the = sign.

Within that parameter expansion expression, we’re checking if CERTBOT_SCRIPT_LOCATION already exists. If it does, the expansion returns the value of that variable, which is then immediately assigned right back to the same variable. This looks a little weird, but it’s a Bash idiom that means “If CERTBOT_SCRIPT_LOCATION already exists, leave it alone.”

However, if the variable CERTBOT_SCRIPT_LOCATION does not exist, then create it and store the double-quoted URL string inside.

To put things into greater context, that variable is later used within a call to curl:
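The exact invocation isn’t reproduced here, but it’s along these lines (the flags and output filename are illustrative, not the script author’s exact command):

```shell
# Download whatever the variable points at
curl -s "$CERTBOT_SCRIPT_LOCATION" --output certbot-zerossl.sh
```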


The question you may now be asking is: “Why?!” Why not avoid the use of a seldom-used, single-character test that took so long to explain? Why not use curl and supply the URL directly? Without asking the script author, here are three reasons that I think the script was written this way:

Abstraction. We use variables for any information that has a reasonable chance of being changed. A URL can easily change, and if we assign it to a variable once, we can more easily change that value at a later date. We never need to worry about changing the URL in every spot that we used it.

Documentation. When you assign a value to a variable, you name the variable. In this case, our value is a URL. What exactly does that URL do? What is its purpose? When we assign the URL to a variable named CERTBOT_SCRIPT_LOCATION, now we have an explanation. Every time we use that variable it reminds us of what it’s doing.

Safety. The two reasons above explain the use of variables, but not that lone dash. I believe the dash idiom was chosen for safety. Maybe the script ran before, or perhaps something else set the variable earlier. We don’t need to repeat the work of setting it, and if it already holds a value, let’s not overwrite it.

Final Thoughts

I noticed that the script does not check CERTBOT_SCRIPT_LOCATION for a value that makes sense. What if it’s set, but has a number in it? Or a string that isn’t an HTTP URL? Those are more complex problems. How would you solve them?

In the title of this article, I used a slightly different Bash idiom: the use of :- rather than the lonesome -. If we look to Bash’s documentation, we find:

When not performing substring expansion, using the form described below (e.g., ‘:-’), Bash tests for a parameter that is unset or null. Omitting the colon results in a test only for a parameter that is unset.

The dash merely checks for the existence of the variable on its left. The colon-dash will additionally check if the variable exists but is null. If the value is null, then Bash assigns the value on the right to the variable. Ask yourself which logic makes the most sense for your own scripts.
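A quick demonstration of the difference:

```shell
unset VAR
echo "${VAR-fallback}"   # unset: prints fallback

VAR=""
echo "${VAR-fallback}"   # set but null: prints an empty line
echo "${VAR:-fallback}"  # set but null: prints fallback
```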

Do you have any scripts you’d like me to tear down? Any shell idioms that you’re not sure about? Comment below!

Solved: The connection to the server localhost:8080 was refused – did you specify the right host or port?

My Problem

When attempting to perform any kubectl command, you receive the error:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

I was not on the Kubernetes cluster nodes or master, and I did not need to initialize the cluster or move /etc/kubernetes/admin.conf.

My Solution

Your kubeconfig file is jacked up. No really, it is. It’s most likely because you attempted to add or remove clusters in a monolithic config file rather than using multiple config files and having them merged together into one running config.

Go back to basics and create the simplest possible kubeconfig file that works to access your cluster. If you’re having trouble with that, leave a comment below and perhaps we can step through the issue to find the specific bit of yaml that tripped you up.
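As a starting point, a bare-bones kubeconfig looks something like this (the server address, CA data, token, and names are all placeholders for your own cluster’s values):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://203.0.113.10:6443          # placeholder address
    certificate-authority-data: BASE64-CA-DATA # placeholder
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
users:
- name: my-user
  user:
    token: BEARER-TOKEN                        # placeholder
```

If kubectl get nodes works with a file like that, add your real contexts and users back in one at a time until the broken bit reveals itself.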

The Long Story

When hacking around on a kubectl config file, I ended up getting it into a state where any kubectl command responded with the error:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

When searching around on the internet, most of the solutions focus on creating a nonexistent config file, or initializing the Kubernetes cluster. However in my case, I was not on a cluster member itself, and I already had a config file. The problem was somewhere in the config file itself.

Interestingly, when attempting to use the same config file on a Windows machine, the error was slightly different:

error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

Well now that’s interesting. It’s complaining that there’s no master, which seems like it would be a root cause of kubectl attempting to connect to localhost for the control plane server.

What happened next was a painstaking comparison against known-good config files, which turned up the misconfiguration errors. It would be nice if some kind of default config linting took place and offered somewhat better errors.

After starting from basics and using the simplest possible kubeconfig file, and adding in more contexts and users, the monolithic file eventually worked correctly, and peace reigned in the land.

Solving ModuleNotFoundError: No module named ‘ansible’

My Problem

When running any ansible command, I see a stack trace similar to:

Traceback (most recent call last):
  File "/usr/local/bin/ansible", line 34, in <module>
    from ansible import context
ModuleNotFoundError: No module named 'ansible'

My Solution

pip install ansible or brew install ansible or yum install ansible or…

Somehow your Ansible Python modules were removed, but the Ansible scripts in your $PATH remained. Install Ansible’s Python package in whatever way makes the most sense for your platform and preferences, e.g. via pip directly, Homebrew, or your package manager of choice.
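One quick way to confirm this failure mode is to ask the interpreter directly whether the module is importable, independent of the wrapper script on your $PATH (python3 here is an assumption; use whichever interpreter the script’s shebang points at):

```shell
# Where does the entry-point script live?
which ansible

# Can that interpreter actually import the package the script depends on?
python3 -c "import ansible" 2>/dev/null && echo "module present" || echo "module missing"
```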

The Long Story

Let’s break the error down line by line:

File "/usr/local/bin/ansible", line 34, in <module>

Ansible is just a Python script, so let’s check out line 34:

31 import sys
32 import traceback
34 from ansible import context
35 from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
36 from ansible.module_utils._text import to_text

The second line of the stack trace shows that from ansible import context is just another module import within the larger Python application. With that clarified, this error may snap a bit more into focus:

ModuleNotFoundError: No module named 'ansible'

It’s just a Python application that can’t find a module. If there’s no module, let’s check with Python to see what packages it knows about:

$ pip list

Package    Version
---------- -------
gpg        1.14.0
pip        20.1.1
protobuf   3.13.0
setuptools 49.2.0
six        1.15.0
wheel      0.34.2

There’s no Ansible package listed. Wait, which version of Python did I just check?

$ which pip
pip: aliased to pip3

Let’s check pip2 just to make sure there’s no version weirdness going on:

$ pip2 list

Package      Version
------------ -------
altgraph     0.10.2
asn1crypto   0.24.0
bdist-mpkg   0.5.0
bonjour-py   0.3
boto         2.49.0
cffi         1.12.2
cryptography 2.6.1
enum34       1.1.6
future       0.17.1

Nope, no Ansible. Since I’m on a Mac, let’s check Brew just to see what comes back:

brew list ansible
Error: No such keg: /usr/local/Cellar/ansible

I’m not really sure what happened. I’ve got the Ansible scripts in my path, but I don’t have the python modules. I prefer to install Ansible via pip so I simply pip install ansible and everything was right with the world.

Docker: Error response from daemon: manifest not found: manifest unknown

I was seeing this rather character-dense and yet information-sparse error from Docker:

Error response from daemon: manifest for graylog/graylog:latest not found: manifest unknown: manifest unknown

Yes, I was hacking around with Graylog in this specific instance.

As it turns out, Graylog doesn’t have a latest tag on Dockerhub, and Docker will add :latest to any image that you attempt to pull without explicitly adding a tag.

What happens if there’s no :latest tag on the registry? You get the above error. Search your container registry and repo for what tags they use and find the one that makes most sense for you.
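In other words, specify a tag explicitly when the repo publishes no latest (the tag below is purely illustrative; check the repo’s tag list for a real one):

```shell
# Implicitly asks for graylog/graylog:latest, which doesn't exist -> manifest unknown
docker pull graylog/graylog

# Pull a concrete tag instead ("4.3" is illustrative)
docker pull graylog/graylog:4.3
```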

Solving Kubectl “Error from server (InternalError): an error on the server (“”) has prevented the request from succeeding”

My Problem

When switching to a Linode Kubernetes Engine (LKE) cluster context, any command such as kubectl get pods or kubectl cluster-info hangs for about a minute before ultimately showing the following error:

Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

My Solution

It’s super simple. Run kubectl config view and make sure that your authentication information is accurate. In my case the user token was wrong, since I had been bringing up and tearing down LKE clusters and forgot to change my token. The error could probably be a bit more verbose or otherwise narrow the context down a bit, but alas.

The Long Story

Incidentally, I was running Windows 10 and running kubectl from PowerShell, but that doesn’t seem to be germane to the situation.

Running kubectl cluster-info --v=10 provided a ton of information. Note that --v is perhaps underdocumented (or was at one point).

What I found was that I was getting numerous: Got a Retry-After 1s response for attempt 8 to https://my-cluster:443/api?timeout=32s until the whole request timed out. I checked my Linode control panel and the cluster was indeed up and running.

The whole thing smelled like some kind of auth issue to me, so I double checked the kubectl config file that Linode offers in the UI (and via API), and noticed that the tokens weren’t matching with what I had in my .kube/config file. It was then that I remembered I had been tearing down and re-creating k8s clusters via Terraform and had forgotten to update my config file with the proper user token. Oh the joys of late-night hacking.

Once I updated my config file, I was able to access kubernetes.

Solving Terraform: “No valid credential sources found for AWS Provider”

My Problem

Using Terraform v0.12 and attempting to use the AWS provider to init an S3 backend, I’m receiving the error:

Initializing the backend…

Error: No valid credential sources found for AWS Provider.
Please see for more information on providing credentials for the AWS Provider

I’m experimenting with providing static credentials in a .tf file (P.S. don’t do this in production) and I’ve verified that the AWS keys are correct.

My Solution

Preamble: The following is terrible, don’t do this. I’m writing this merely as an answer to something that was puzzling me.

Add access_key and secret_key to the Terraform backend block. E.g.:

terraform {
  backend "s3" {
    bucket         = "your-bucket"
    region         = "your-region"
    key            = "yourkey/terraform.tfstate"
    dynamodb_table = "your-lock-table"
    encrypt        = true
    access_key     = "DONT_PUT_KEYS_IN_YOUR.TF_FILES"
    secret_key     = "NO_REALLY_DONT"
  }
}

This would be in addition to the keys that you’ve placed in your provider block:

provider "aws" {
  region     = "us-east-1"
  access_key = "DONT_PUT_KEYS_IN_YOUR.TF_FILES"
  secret_key = "NO_REALLY_DONT"
}

The backend needs to be initialized before the provider plugin, so any keys in the provider block are not evaluated. The Terraform backend block needs to be provided with its own keys.

A better method would be to use environment variables, among other more secure options (including the use of shared_credentials_file and a profile, such as what Martin Hall references in the comments below). You can also provide a partial configuration and then pass variables in via the command line.
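For example, with a partial backend configuration the keys never appear in a .tf file at all: leave access_key and secret_key out of the backend block entirely and supply them at init time (this assumes your credentials are already exported in the environment):

```shell
# Supply backend-only settings on the command line instead of in source
terraform init \
  -backend-config="access_key=${AWS_ACCESS_KEY_ID}" \
  -backend-config="secret_key=${AWS_SECRET_ACCESS_KEY}"
```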

The Long Story

There are a number of ways to provide Terraform with AWS credentials. The worst option is to use static credentials provided in your .tf files, so naturally that’s what I’m experimenting with.

One way to provide credentials is through environmental variables, and when I tested that method out, it worked! I’ll make use of environmental variables in the future (promise), but I want to figure out why static credentials aren’t working because… because.

Another way to provide AWS credentials is via the good ol’ shared credentials file located at .aws/credentials. Again, this works in my scenario but I’m stumped as to why static credentials won’t.

(Side note: At this point in the story, this is the universe telling me just how bad it is to use static credentials, but my preferred decision making methodology is to ignore such urgings.)

Let’s debug this sucker by setting the environment variable TF_LOG to trace: export TF_LOG=trace

# terraform init
2020/05/21 06:26:58 [INFO] Terraform version: 0.12.25
2020/05/21 06:26:58 [INFO] Go runtime version: go1.12.13
2020/05/21 06:26:58 [INFO] CLI args: []string{"/usr/bin/terraform", "init"}
2020/05/21 06:26:58 [DEBUG] Attempting to open CLI config file: /root/.terraformrc
2020/05/21 06:26:58 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2020/05/21 06:26:58 [INFO] CLI command args: []string{"init"}

Initializing the backend…

2020/05/21 06:26:58 [TRACE] Meta.Backend: built configuration for "s3" backend with hash value 953412181
2020/05/21 06:26:58 [TRACE] Preserving existing state lineage "da125f8e-6c56-d65a-c30b-77978250065c"
2020/05/21 06:26:58 [TRACE] Preserving existing state lineage "da125f8e-6c56-d65a-c30b-77978250065c"
2020/05/21 06:26:58 [TRACE] Meta.Backend: working directory was previously initialized for "s3" backend
2020/05/21 06:26:58 [TRACE] Meta.Backend: using already-initialized, unchanged "s3" backend configuration
2020/05/21 06:26:58 [INFO] Setting AWS metadata API timeout to 100ms
2020/05/21 06:27:00 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2020/05/21 06:27:00 [INFO] Attempting to use session-derived credentials

Error: No valid credential sources found for AWS Provider.
Please see for more information on providing credentials for the AWS Provider

Huh, it’s as if the backend section is totally ignoring my provider credentials.

It was then that I realized that the backend block has its own variables for keys. Well that’s weird. Why would it need its own definition of my provider’s keys when I already have keys placed in the “aws” provider block? Unless… Terraform doesn’t look at that block.

Some further research confirms that when a terraform backend is init’d, it’s executed before just about anything else (naturally), and there’s no sharing of provider credentials from a provider block even if the backend resides in the provider (E.g. a backend that uses Amazon S3 will not look to the AWS provider block for credentials).

Once I placed my AWS keys in the terraform backend block (don’t do that), things worked.

Adding Simple base64 Decoding to Your Shell

I had a need to repeatedly decode some base64 strings quickly and easily. Easier than typing out openssl base64 -d -in -out, or even base64 --decode file.

The simplest solution that I found and prefer is a shell function with a here string. Crack open your preferred shell’s profile file. In my case, .zshrc. Make a shell function thusly:

decode() {
  base64 --decode <<<"$1"
}
Depending on your shell and any addons, you may need to echo an extra newline to make the decoded text appear on its own line and not have the next shell prompt append to the decoded text.
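Reload your profile (or open a new shell) and it works like so:

```shell
$ decode SGVsbG8=
Hello
```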

Solved: Getting Backblaze to Backup OneDrive Folders in Windows

My Problem

I use Microsoft Office 365 and OneDrive for my consulting work to keep my files synced between multiple devices and preserved from loss should I have my laptop stolen or otherwise destroyed. I use Backblaze as part of my strategy to back up the data and keep version history of my files. This can be a tiny bit tricky since Backblaze can’t back up the files if you have OneDrive “Files On-Demand” turned on. However, once you turn Files On-Demand off, Backblaze should be able to back up the files just like any other file on your hard drive. In theory.

In practice, I was unable to get one particular folder contained within OneDrive to back up to Backblaze. This was a considerable problem because that one particular folder was the main folder that I kept all of my business files in. It was essentially the only folder that I cared deeply about having backed up, and as luck would have it, it was the only folder that wasn’t showing up in my list of files that I could restore from Backblaze.

After considerable work with Backblaze support, we came to the final solution.

My Solution

Reparse points! Check to see if the directory that isn’t being backed up has the ReparsePoint attribute. There are a few ways to do this, but the most plain one that I used was:

> gci|select name,attributes -u

Name                                       Attributes
----                                       ----------
Important Work       Directory, Archive, ReparsePoint
GoProjects                                  Directory
More Work            Directory, Archive, ReparsePoint
Even more work       Directory, Archive, ReparsePoint

As it turns out, OneDrive apparently has a history of changing if and when it marks a directory with the ReparsePoint attribute. Here’s where I have to insert a giant disclaimer:

I don’t know if changing the ReparsePoint attribute manually out from under OneDrive will do anything nasty and prevent OneDrive from working as intended. I also do not know if OneDrive will silently add the ReparsePoint attribute to folders again, thus causing Backblaze backups to silently fail. I’ll be checking this over time, but you should check it for yourself as well.

However, note that changing a directory’s ReparsePoint attribute in this situation will not delete data.

As it turns out, most if not all of my directories under the one crucial directory were marked with the ReparsePoint attribute. My only choice was to recursively check each directory and remove the attribute. If you take such a scorched earth approach, this will very likely tamper with any junctions and/or mount points that you have in that tree of your filesystem, so beware of what that implies for your usage. For me, there were no known negative implications.

My solution was to mass change the troublesome directory with some PowerShell:

Get-ChildItem -recurse -force | Where-Object { $_.Attributes -match "ReparsePoint" } | foreach-object -Process {fsutil reparsepoint delete $_.fullname}

For more information, check out the help document for the fsutil tool. Keep in mind that while the verb delete is scary, it doesn’t actually delete any files or directories, rather it’s simply removing the reparsepoint attribute on the filesystem object.

After that, I forced a rescan of the files that Backblaze should back up (Windows instructions here, and then Mac instructions here). Suddenly thousands of new files were discovered and began uploading. After a little while, I checked for what files I could restore, and sure enough, the troublesome folder and seemingly all of its child items were in my available backup.

I’ll periodically check back on my filesystem to see if any directories were re-marked with ReparsePoint and make note of it here. If I was smart and diligent, I’d make a scheduled task to remove that attribute from the areas of my filesystem that I’m concerned with.

Workaround: “Unable to Change Virtual Machine Power State: Cannot Find a Valid Peer Process to Connect to”

My Problem

Attempting to start a virtual machine in VMware Workstation 15 Pro (15.0.3) on a RedHat based Linux workstation caused the following error: “Unable to Change Virtual Machine Power State: Cannot Find a Valid Peer Process to Connect to”

I was able to start other virtual machines in the VM library, however.

My Workaround

Note that this is simply a workaround. I don’t yet know the ultimate cause, but I’m documenting how I work around it until I or someone else can figure out the root of this problem.

First, check to see if the virtual machine is actually running, in spite of there being no visual indicators within VMware Workstation: vmrun list

You’ll probably see that the virtual machine is running. If you don’t, then this workaround isn’t likely to help you. Attempt to shut the running virtual machine down softly: vmrun stop /path/to/virtual_machine.vmx soft

After that, you should be able to start the machine again, until the next time it crashes for unknown reasons. More news as I discover it.

Dumping Grounds (Turn Back Now):

I’ll dump some of my notes here and they’ll be updated periodically as I find out more info about this issue. You’re completely safe to ignore everything past this point. Abandon all hope, ye who proceed.

I had recently upgraded from Fedora 29 to Fedora 30, and was experiencing some minor instability with my main workstation. I’m not sure if that was the ultimate cause of this issue, but I’m suspicious since I never had this issue until after the upgrade.

My first act was to go to the Help menu, select the “Support” menu and then “Collect Support Data…” I chose to collect data for the specific VM that was having this issue. This took quite a while, by my standards. About 20 minutes. It basically creates a giant zipped dump of pertinent files across your physical machine that pertain to VMware and that specific virtual machine. It’s not super easy to parse and know what to look for.

I searched through /var/log/vmware/ for any clues in any of the log files found therein. Grepping for all files that had the pertinent virtual machine’s name, and looking for surrounding context didn’t turn anything up.

I attempted to start the vmware-workstation-server service but that failed. I don’t think that’s the issue since the virtual machine isn’t a shared VM.

I tried vmrun list and saw that the Windows VM was actually listed as running. I stopped it soft: vmrun stop /path/to/my/virtual_machine.vmx soft and was then able to start the virtual machine. I’m not sure what’s causing the virtual machine to crash, nor what’s causing VMware Workstation Pro itself to crash, nor why, when I start it back up, it doesn’t appear to know that the VM it was previously working with is actually running.

Solved: “bad input file size” When Attempting to `setfont` to a New Console Font

My Problem

In a Linux distribution of one kind or another, when attempting to set a new console font in a TTY, you may receive the following error:

# setfont -32 ter-u32n.bdf
bad input file size

My Solution

First, if you’re coming to this blog post because you’re attempting to install larger Terminus fonts for your TTY, you probably just want to search your distribution’s package manager for Terminus, specifically the console fonts package:

$ yum search terminus
== Name Matched: terminus ==
terminus-fonts.noarch : Clean fixed width font
terminus-fonts-grub2.noarch : Clean fixed width font (grub2 version)
terminus-fonts-console.noarch : Clean fixed width font (console version)
$ yum install terminus-fonts-console

However, if you’re coming to this blog post for other reasons, then you’re probably attempting to setfont with a .bdf file, or just something that isn’t a .psf file. You most likely need to follow the instructions for your font, in my case Terminus, to convert the files into the proper .psf format. The Linux From Scratch project has a good quick primer on the topic that you can use to mine for search terms and further information.

With my specific font, what worked for me was:

$ sudo ./configure --psfdir=/usr/lib/kbd/consolefonts
$ sudo make -j8 psf
# Stuff happens here
$ sudo make install-psf

After that, I had the fonts installed into my /usr/lib/kbd/consolefonts directory and was able to setfont and further change my TTY font to my preferences.