Solving Terraform: “No valid credential sources found for AWS Provider”

My Problem

Using Terraform v0.12 and attempting to use the AWS provider to init an S3 backend, I’m receiving the error:

Initializing the backend…

Error: No valid credential sources found for AWS Provider.
Please see for more information on providing credentials for the AWS Provider

I’m experimenting with providing static credentials in a .tf file (P.S. don’t do this in production) and I’ve verified that the AWS keys are correct.

My Solution

Preamble: The following is terrible, don’t do this. I’m writing this merely as an answer to something that was puzzling me.

Add access_key and secret_key to the Terraform backend block. E.g.:

terraform {
  backend "s3" {
    bucket         = "your-bucket"
    region         = "your-region"
    key            = "yourkey/terraform.tfstate"
    dynamodb_table = "your-lock-table"
    encrypt        = true
    access_key     = "DONT_PUT_KEYS_IN_YOUR.TF_FILES"
    secret_key     = "NO_REALLY_DONT"
  }
}

This would be in addition to the keys that you’ve placed in your provider block:

provider "aws" {
  region     = "us-east-1"
  access_key = "DONT_PUT_KEYS_IN_YOUR.TF_FILES"
  secret_key = "NO_REALLY_DONT"
}

The backend is initialized before the provider plugin, so any keys in the provider block have not yet been evaluated at that point. The Terraform backend block needs to be provided with its own keys.

A better method would be to use environment variables, among other more secure options (including the use of shared_credentials_file and a profile, such as what Martin Hall references in the comments below). You can also provide a partial configuration and then pass variables in via the command line.
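As a sketch of the environment-variable route: the variable names below are the standard ones the AWS SDK (and thus Terraform's S3 backend) reads, and the key values are obviously placeholders. The init command is left commented out since it only makes sense inside a Terraform project directory.

```shell
# Sketch: supply backend credentials via environment variables instead of .tf files.
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are the standard AWS SDK variable names.
export AWS_ACCESS_KEY_ID="AKIA_EXAMPLE_ONLY"
export AWS_SECRET_ACCESS_KEY="EXAMPLE_ONLY"

# With a partial backend configuration, you could instead pass values at init time:
# terraform init -backend-config="access_key=..." -backend-config="secret_key=..."
```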

The Long Story

There are a number of ways to provide Terraform with AWS credentials. The worst option is to use static credentials provided in your .tf files, so naturally that’s what I’m experimenting with.

One way to provide credentials is through environment variables, and when I tested that method out, it worked! I’ll make use of environment variables in the future (promise), but I want to figure out why static credentials aren’t working because… because.

Another way to provide AWS credentials is via the good ol’ shared credentials file located at ~/.aws/credentials. Again, this works in my scenario, but I’m stumped as to why static credentials won’t.
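For reference, the shared credentials file is a small INI-style file. The sketch below writes an example to /tmp purely to illustrate the format (the real file lives at ~/.aws/credentials, and the profile name and key values here are placeholders):

```shell
# Sketch: what a shared credentials file generally looks like.
# "default" is the profile name; keys are placeholders.
cat > /tmp/example-aws-credentials <<'EOF'
[default]
aws_access_key_id = AKIA_EXAMPLE_ONLY
aws_secret_access_key = EXAMPLE_ONLY
EOF
cat /tmp/example-aws-credentials
```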

(Side note: At this point in the story, this is the universe telling me just how bad it is to use static credentials, but my preferred decision making methodology is to ignore such urgings.)

Let’s debug this sucker by setting the environment variable TF_LOG to trace: export TF_LOG=trace

# terraform init
2020/05/21 06:26:58 [INFO] Terraform version: 0.12.25
2020/05/21 06:26:58 [INFO] Go runtime version: go1.12.13
2020/05/21 06:26:58 [INFO] CLI args: []string{"/usr/bin/terraform", "init"}
2020/05/21 06:26:58 [DEBUG] Attempting to open CLI config file: /root/.terraformrc
2020/05/21 06:26:58 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2020/05/21 06:26:58 [INFO] CLI command args: []string{"init"}

Initializing the backend…

2020/05/21 06:26:58 [TRACE] Meta.Backend: built configuration for "s3" backend with hash value 953412181
2020/05/21 06:26:58 [TRACE] Preserving existing state lineage "da125f8e-6c56-d65a-c30b-77978250065c"
2020/05/21 06:26:58 [TRACE] Preserving existing state lineage "da125f8e-6c56-d65a-c30b-77978250065c"
2020/05/21 06:26:58 [TRACE] Meta.Backend: working directory was previously initialized for "s3" backend
2020/05/21 06:26:58 [TRACE] Meta.Backend: using already-initialized, unchanged "s3" backend configuration
2020/05/21 06:26:58 [INFO] Setting AWS metadata API timeout to 100ms
2020/05/21 06:27:00 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2020/05/21 06:27:00 [INFO] Attempting to use session-derived credentials

Error: No valid credential sources found for AWS Provider.
Please see for more information on providing credentials for the AWS Provider

Huh, it’s as if the backend section is totally ignoring my provider credentials.

It was then that I realized that the backend block has its own variables for keys. Well that’s weird. Why would it need its own definition of my provider’s keys when I already have keys placed in the “aws” provider block? Unless… Terraform doesn’t look at that block.

Some further research confirms that when a Terraform backend is initialized, it’s executed before just about anything else (naturally), and there’s no sharing of credentials from a provider block even when the backend belongs to the same platform as the provider (e.g. a backend that uses Amazon S3 will not look to the AWS provider block for credentials).

Once I placed my AWS keys in the terraform backend block (don’t do that), things worked.

Adding Simple base64 Decoding to Your Shell

I had a need to repeatedly decode some base64 strings quickly and easily. Easier than typing out openssl base64 -d with -in and -out files, or even base64 --decode file.

The simplest solution that I found and prefer is a shell function with a here string. Crack open your preferred shell’s profile file. In my case, .zshrc. Make a shell function thusly:

decode() {
  base64 --decode <<< "$1"
}

Depending on your shell and any addons, you may need to echo an extra newline to make the decoded text appear on its own line and not have the next shell prompt append to the decoded text.
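In that case, a variant that always terminates the decoded output with a newline might look like this (a sketch of the same function, with an extra echo appended):

```shell
# Sketch: same decoder, but always end the decoded output with a newline
decode() {
  base64 --decode <<< "$1"
  echo
}
```

For example, decode SGVsbG8= prints Hello followed by a newline, so the next prompt lands on its own line.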

Solving ‘UseKeychain’ Not Working for Password Protected SSH Key Logins on macOS

My Problem

Using macOS 10.15, attempting to automatically load a password protected SSH key into ssh-agent by using the SSH configuration option UseKeychain was not working. I had the SSH key’s password stored in the macOS Keychain, and if I manually ran ssh-add -K /path/to/private/key it would load the key without asking me to input a password, proving that the key’s password did exist in the keychain. However, when attempting to SSH to a host that required that private key, I was still being asked for the password to the private key. This is in direct opposition to the intended behavior of using UseKeychain in one’s SSH config file.

I did not want to put ssh-add -A or some variant of ssh-add in my .bashrc file, even though that would have “solved” the problem. The version of OpenSSH that macOS uses has a configuration option provided for just such a desire, and that option is UseKeychain.

My Solution

Of course, first make sure that your SSH config file (~/.ssh/config) includes the following, e.g. under a Host * heading:

Host *
  UseKeychain yes
  AddKeysToAgent yes

Then, when adding the key to Keychain with ssh-add -K, use the full filesystem path to the key file, not a relative path.

For example, do not use ssh-add -K ~/.ssh/private.key; rather, you should use ssh-add -K /Users/username/.ssh/private.key. You may also want to remove all relative-path entries for the same SSH key from Keychain.
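One way to guarantee an absolute path without typing the whole thing is to let the shell expand $HOME for you before ssh-add ever sees it. A sketch (the key filename is just an example, and the ssh-add line is commented out so you can adapt it to your own key first):

```shell
# Sketch: $HOME expands to an absolute path, unlike a literal "~" in some contexts
key="$HOME/.ssh/private.key"   # example filename; substitute your own
echo "$key"
# ssh-add -K "$key"
```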

The Long Story

Let’s sort out some terms and get a few things straight. If you’re going to use SSH with public / private keypairs, you have the option of password protecting the private key in the pair. This means that you cannot decrypt the private key to then authenticate your identity to other systems without being in possession of the password. Without a password on the private key, mere possession of the file is good enough to prove your identity, which isn’t really comfortably secure. Any person or software that has access to your filesystem could theoretically snatch that file and masquerade as you. A password protected private key is a type of two-factor authentication. You must have possession of the file and knowledge of the password.

This is great, except if you perform tasks that require the repetitive use of the private key. Such as shelling into a remote system (Ansible, anyone?) or using git. Just an hour or two of using git pull, git push, or git fetch will wear out your keyboard and have you reconsidering your career.

This is where ssh-agent comes in, which keeps track of your private SSH key identities and their passwords. You’d then use ssh-add to stick the identities into ssh-agent. A fuller discussion of these tools is beyond the scope of this post, but suffice it to say that in theory this should reduce your need for typing SSH key passwords to an absolute minimum per reboot.

Apple’s Keychain can store SSH key passwords securely, and Apple’s version of OpenSSH includes an option that you can include in your config file named UseKeychain. This will further reduce your need for typing passwords, even across reboots. It’s possible to virtually never need to type your SSH key’s password again. Except when it doesn’t work.

Some people, myself included, have had an issue where no amount of UseKeychain under any Host * or Host hostname heading in one’s SSH config file would seem to work. After a reboot, ssh-add -l shows no known identities, which is expected. However, ssh user@host should trigger UseKeychain to find the proper key file’s password in Keychain and simply allow one to log in to the remote system without providing the local private key’s password. After the first access to a host that needs your private key identity, ssh-add -l should show that the identity is now loaded even though you didn’t type the password.

Opening up the “Keychain Access” application on macOS and then searching for any login with the word ssh in it may reveal that the SSH key identities that are known are all referred to with relative paths. It’s common to see ~/.ssh/private.key or even ../.ssh/private.key depending on where you were in your filesystem when you realized you needed to add the key to Keychain. For some people, this appears to work. Not for me (and others) however.

You may want to try first adding your key with ssh-add -K using the full filesystem path to the key. Then you may want to remove any other references to the key file in Keychain Access that use relative paths.

Solved: Getting Backblaze to Backup OneDrive Folders in Windows

My Problem

I use Microsoft Office 365 and OneDrive for my consulting work to keep my files synced between multiple devices and preserved from loss should I have my laptop stolen or otherwise destroyed. I use Backblaze as part of my strategy to back up the data and keep version history of my files. This can be a tiny bit tricky since Backblaze can’t back up the files if you have OneDrive “Files On-Demand” turned on. However, once you turn Files On-Demand off, Backblaze should be able to back up the files just like any other file on your hard drive. In theory.

In practice, I was unable to get one particular folder contained within OneDrive to back up to Backblaze. This was a considerable problem because that one particular folder was the main folder that I kept all of my business files in. It was essentially the only folder that I cared deeply about having backed up, and as luck would have it, it was the only folder that wasn’t showing up in my list of files that I could restore from Backblaze.

After considerable work with Backblaze support, we came to the final solution.

My Solution

Reparse points! Check to see if the directory that isn’t being backed up has the ReparsePoint attribute. There are a few ways to do this, but the most plain one that I used was:

> gci | select Name, Attributes -Unique

Name                                       Attributes
----                                       ----------
Important Work       Directory, Archive, ReparsePoint
GoProjects                                  Directory
More Work            Directory, Archive, ReparsePoint
Even more work       Directory, Archive, ReparsePoint

As it turns out, OneDrive apparently has a history of changing if and when it marks a directory with the ReparsePoint attribute. Here’s where I have to insert a giant disclaimer:

I don’t know if changing the ReparsePoint attribute manually out from under OneDrive will do anything nasty and prevent OneDrive from working as intended. I also do not know if OneDrive will silently add the ReparsePoint attribute to folders again, thus causing Backblaze backups to silently fail. I’ll be checking this over time, but you should check it for yourself as well.

However, note that changing a directory’s ReparsePoint attribute in this situation will not delete data.

As it turns out, most if not all of my directories under the one crucial directory were marked with the ReparsePoint attribute. My only choice was to recursively check each directory and remove the attribute. If you take such a scorched earth approach, this will very likely tamper with any junctions and/or mount points that you have in that tree of your filesystem, so beware of what that implies for your usage. For me, there were no known negative implications.

My solution was to mass change the troublesome directory with some PowerShell:

Get-ChildItem -recurse -force | Where-Object { $_.Attributes -match "ReparsePoint" } | foreach-object -Process {fsutil reparsepoint delete $_.fullname}

For more information, check out the help document for the fsutil tool. Keep in mind that while the verb delete is scary, it doesn’t actually delete any files or directories, rather it’s simply removing the reparsepoint attribute on the filesystem object.

After that, I forced a rescan of the files that Backblaze should back up (Windows instructions here, and then Mac instructions here). Suddenly thousands of new files were discovered and began uploading. After a little while, I checked for what files I could restore, and sure enough, the troublesome folder and seemingly all of its child items were in my available backup.

I’ll periodically check back on my filesystem to see if any directories were re-marked with ReparsePoint and make note of it here. If I was smart and diligent, I’d make a scheduled task to remove that attribute from the areas of my filesystem that I’m concerned with.

Workaround: “Unable to Change Virtual Machine Power State: Cannot Find a Valid Peer Process to Connect to”

My Problem

Attempting to start a virtual machine in VMware Workstation 15 Pro (15.0.3) on a RedHat based Linux workstation caused the following error: “Unable to Change Virtual Machine Power State: Cannot Find a Valid Peer Process to Connect to”

I was able to start other virtual machines in the VM library, however.

My Workaround

Note that this is simply a workaround. I don’t yet know the ultimate cause, but I’m documenting how I work around the problem until I or someone else can figure it out.

First, check to see if the virtual machine is actually running, in spite of there being no visual indicators within VMware Workstation: vmrun list

You’ll probably see that the virtual machine is running. If you don’t, then this workaround isn’t likely to help you. Attempt to shut the running virtual machine down softly: vmrun stop /path/to/virtual_machine.vmx soft

After that, you should be able to start the machine again, until the next time it crashes for unknown reasons. More news as I discover it.

Dumping Grounds (Turn Back Now):

I’ll dump some of my notes here and they’ll be updated periodically as I find out more info about this issue. You’re completely safe to ignore everything past this point. Abandon all hope, ye who proceed.

I had recently upgraded from Fedora 29 to Fedora 30, and was experiencing some minor instability with my main workstation. I’m not sure if that was the ultimate cause of this issue, but I’m suspicious since I never had this issue until after the upgrade.

My first act was to go to the Help menu, select the “Support” menu and then “Collect Support Data…” I chose to collect data for the specific VM that was having this issue. This took quite a while, by my standards. About 20 minutes. It basically creates a giant zipped dump of pertinent files across your physical machine that pertain to VMware and that specific virtual machine. It’s not super easy to parse and know what to look for.

I searched through /var/log/vmware/ for any clues in any of the log files found therein. Grepping for all files that had the pertinent virtual machine’s name, and looking for surrounding context didn’t turn anything up.

I attempted to start the vmware-workstation-server service but that failed. I don’t think that’s the issue since the virtual machine isn’t a shared VM.

I tried vmrun list and saw that the Windows VM was actually listed as running. I stopped it soft: vmrun stop /path/to/my/virtual_machine.vmx soft and was then able to start the virtual machine. I’m not sure what’s causing the VM to crash, what’s causing the crash of VMware Workstation Pro itself, or why, when I start it back up, it doesn’t appear to know that the VM it was previously working with is actually running.

Solved: “bad input file size” When Attempting to `setfont` to a New Console Font

My Problem

In a Linux distribution of one kind or another, when attempting to set a new console font in a TTY, you may receive the following error:

# setfont -32 ter-u32n.bdf
bad input file size

My Solution

First, if you’re coming to this blog post because you’re attempting to install larger Terminus fonts for your TTY, you probably just want to search your distribution’s package manager for Terminus, specifically the console fonts package:

$ yum search terminus
== Name Matched: terminus ==
terminus-fonts.noarch : Clean fixed width font
terminus-fonts-grub2.noarch : Clean fixed width font (grub2 version)
terminus-fonts-console.noarch : Clean fixed width font (console version)
$ yum install terminus-fonts-console

However, if you’re coming to this blog post for other reasons, then you’re probably attempting to setfont with a .bdf file, or just something that isn’t a .psf file. You most likely need to follow the instructions for your font (in my case, Terminus) to convert the files into the proper .psf format. The Linux From Scratch project has a good quick primer on the topic that you can use to mine for search terms and further information.

With my specific font, what worked for me was:

$ sudo ./configure --psfdir=/usr/lib/kbd/consolefonts
$ sudo make -j8 psf
# Stuff happens here
$ sudo make install-psf

After that, I had the fonts installed into my /usr/lib/kbd/consolefonts directory and was able to setfont and further change my TTY font to my preferences.

Solved: Attempting to Install and Configure Wireguard Fails with “Unknown device type” and “FATAL: Module wireguard not found in directory”

My Problem

Attempting to install and use Wireguard (version 0.0.20190406-1) on Fedora release 29 is unsuccessful with a variety of symptoms. The first being:

ip link add dev wg0 type wireguard
Error: Unknown device type.

Attempting to get some info about the module with modprobe shows:

$ modprobe wireguard
modprobe: FATAL: Module wireguard not found in directory /lib/modules/5.0.4-2004

The dkms tool shows that the wireguard module is added:

$ dkms status
wireguard, 0.0.20190406: added

However, attempting to build it shows:

$ dkms build wireguard/0.0.20190406
Error! echo
Your kernel headers for kernel 5.0.4-200.fc29.x86_64 cannot be found at /lib/modules/5.0.4-200.fc29.x86_64/build or /lib/modules/5.0.4-200.fc29.x86_64/.

My Solution

Make sure that your running kernel and your kernel headers are the same version, or at least that the running version of the kernel is newer than your kernel headers.

For example, I’m running on a RedHat based system, and checked the following:

$ uname --kernel-release

But then the kernel headers were newer:

$ rpm -q kernel-headers

My solution was to yum update the kernel and reboot. I didn’t have to re-install the headers or the wireguard packages. Another possible solution would have been to manually install 5.0.4 kernel headers, but that would require uninstalling packages that marked 5.0.9 kernel headers as a dependency. I believe the cleaner solution is to simply update the kernel.
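As a quick sketch of that version check: compare the running kernel release against the installed headers package. The rpm query line is commented out in case you’re following along on a non-RPM system (--queryformat is standard rpm syntax):

```shell
# Sketch: compare the running kernel release against the installed kernel headers
running="$(uname -r)"
echo "running kernel: $running"
# On an RPM-based system, compare against the kernel-headers package version:
# rpm -q kernel-headers --queryformat '%{VERSION}-%{RELEASE}\n'
```

If the headers report a newer version than uname does, DKMS builds will fail exactly as shown above.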

The Long Story

First, I checked that I even had kernel headers installed in the first place:

$ rpm -q kernel-headers

Well that’s interesting, because:

$ uname --kernel-release

So I’m running kernel 5.0.4, but the kernel-headers package that I’m offered is for 5.0.9. I attempted to install the specific kernel header package by version:

yum install kernel-headers-5.0.4-200.fc29.x86_64
No match for argument: kernel-headers-5.0.4-200.fc29.x86_64

At this point, I had two viable options.

  1. I could update the running kernel, since 5.0.10-200.fc29 was released and waiting for me.
  2. I could go into Fedora’s build system, Koji, and pull out the specific kernel headers package that I needed to then install manually.

Choosing #2, however, would require me to uninstall the current 5.0.9 kernel headers, and anything that had it as a dependency. This includes things like binutils and gcc, among many others. I decided to update the system. A quick yum update and reboot later, and:

$ uname -or
5.0.10-200.fc29.x86_64 GNU/Linux

My only concern was that the headers that are in the official yum repo are 5.0.9; a minor version behind the new kernel:

rpm -q kernel-headers

Nevertheless, my fears were allayed with dkms:

$ dkms status
wireguard, 0.0.20190406, 5.0.10-200.fc29.x86_64, x86_64: installed

Previously, wireguard had only been added, but not successfully installed. I quickly tried to add a wireguard interface:

$ ip link add dev wg0 type wireguard
$ ip link show wg0
3: wg0: <POINTOPOINT,NOARP> mtu 1420 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/none


Solved: VMware Workstation 15 Fails to Compile Kernel Modules with “Failed to build vmmon” and “Failed to build vmnet.”

My Problem:

After updating Fedora 29, VMware Workstation Pro 15 needed to have some kernel modules compiled. However, attempting to install them earned me warning signs on the “Virtual Machine Monitor” and “Virtual Network Device” compilation process, and of course starting the services failed. Logs stated “Failed to build vmmon” and “Failed to build vmnet.”

My Solution:

Digest this SuperUser thread:

Ultimately, you need to clone this GitHub repo: Check out the proper branch that corresponds to your product and version (for example, I used the Workstation 15.0.3 branch). Run make and then make install within that repo, and finally you’ll want to create a symlink for

Using two separate answers in the above SuperUser thread, then modifying it for my own purposes, I came up with this:

VMWARE_VERSION=workstation-15.0.3 #This needs to be the actual name of the appropriate branch in mkubecek's GitHub repo for your purposes
TMP_FOLDER=/tmp/vmware-host-modules-build #Any scratch directory will do; this path is just an example
rm -fdr $TMP_FOLDER
mkdir -p $TMP_FOLDER
cd $TMP_FOLDER
git clone #Use `git branch -a` to find all available branches and find the one that's appropriate for you
cd $TMP_FOLDER/vmware-host-modules
git checkout $VMWARE_VERSION
make
sudo make install
sudo rm /usr/lib/vmware/lib/
sudo ln -s /lib/x86_64-linux-gnu/ /usr/lib/vmware/lib/
sudo systemctl restart vmware && vmware &

The Long Story:

After a massive cascade of updates on my Fedora 29 workstation that I had been delaying, VMware Workstation Pro 15.0.3 (build-12422535, and Kernal 5.0.3-200.fc29.x86_64 for whatever it’s worth) was unable to launch, instead requiring some kmods to be compiled and loaded:

“Before you can run VMware, several modules must be compiled and loaded into the running kernel.”

Dutifully clicking install earned me these lovely caution signs:

“Virtual Machine Monitor and Virtual Network Device have probably not done what you wanted.”

The services were unable to start, so I checked out the logs:

“Unable to start services. See log file /tmp/vmware-root/vmware-17464.log for details.”

Checking out the logs, I see a build command that failed:

2019-03-28T13:51:03.779-07:00| host-17464| I125: Invoking modinfo on "vmnet".
2019-03-28T13:51:03.781-07:00| host-17464| I125: "/sbin/modinfo" exited with status 256.
2019-03-28T13:51:03.833-07:00| host-17464| I125: Setting destination path for vmmon to "/lib/modules/5.0.3-200.fc29.x86_64/misc/vmmon.ko".
2019-03-28T13:51:03.833-07:00| host-17464| I125: Extracting the vmmon source from "/usr/lib/vmware/modules/source/vmmon.tar".
2019-03-28T13:51:03.839-07:00| host-17464| I125: Successfully extracted the vmmon source.
2019-03-28T13:51:03.839-07:00| host-17464| I125: Building module with command "/usr/bin/make -j12 -C /tmp/modconfig-PG76zy/vmmon-only auto-build HEADER_DIR=/lib/modules/5.0.3-200.fc29.x86_64/build/include CC=/usr/bin/gcc IS_GCC_3=no"
2019-03-28T13:51:05.179-07:00| host-17464| W115: Failed to build vmmon.  Failed to execute the build command.
2019-03-28T13:51:05.181-07:00| host-17464| I125: Setting destination path for vmnet to "/lib/modules/5.0.3-200.fc29.x86_64/misc/vmnet.ko".
2019-03-28T13:51:05.181-07:00| host-17464| I125: Extracting the vmnet source from "/usr/lib/vmware/modules/source/vmnet.tar".
2019-03-28T13:51:05.185-07:00| host-17464| I125: Successfully extracted the vmnet source.
2019-03-28T13:51:05.185-07:00| host-17464| I125: Building module with command "/usr/bin/make -j12 -C /tmp/modconfig-PG76zy/vmnet-only auto-build HEADER_DIR=/lib/modules/5.0.3-200.fc29.x86_64/build/include CC=/usr/bin/gcc IS_GCC_3=no"
2019-03-28T13:51:06.597-07:00| host-17464| W115: Failed to build vmnet.  Failed to execute the build command.

I see two other log files:

total 44
-rw-------. 1 root root 16669 Mar 28 13:51 vmware-17464.log
-rw-------. 1 root root 16555 Mar 28 13:50 vmware-apploader-17464.log
-rw-r-----. 1 root root  2792 Mar 28 13:51 vmware-authdlauncher-20116.log

The apploader log file didn’t appear to have anything of note in it. The authdlauncher… I wasn’t so sure:

2019-03-28T13:51:10.246-07:00| authdlauncher| I125: SOCKET 1 (12) creating new listening socket on port 902
2019-03-28T13:51:10.246-07:00| authdlauncher| W115: SOCKET Could not bind socket, error 98: Address already in use
2019-03-28T13:51:10.246-07:00| authdlauncher| I125: SOCKET Could not create IPv6 listener socket, error 11: Socket bind address already in use
2019-03-28T13:51:10.246-07:00| authdlauncher| I125: SOCKET 2 (12) creating new listening socket on port 902
2019-03-28T13:51:10.246-07:00| authdlauncher| W115: SOCKET Could not bind socket, error 98: Address already in use
2019-03-28T13:51:10.246-07:00| authdlauncher| I125: SOCKET Could not create IPv4 listener socket, error 11: Socket bind address already in use
2019-03-28T13:51:10.246-07:00| authdlauncher| I125: failed to listen on port 902, error 11: Resource temporarily unavailable... Exiting.

Hmm, indeed something is listening on port 902 that looks like VMware:

# sudo lsof -i -P -n | grep 902
vmware-au  1556    root   12u  IPv6  28200      0t0  TCP *:902 (LISTEN)
vmware-au  1556    root   13u  IPv4  28201      0t0  TCP *:902 (LISTEN)

I have my doubts that this is the ultimate problem, but I figured I’d HUP those suckers and see what happens. Of course, that didn’t do anything worthwhile. The processes listening on port 902 are left up and running after the failed installation, not idling there as the cause of the failure.

After tooling around with untarring the kmod and making it by hand, I saw enough errors to make me think there was a serious lack of… something in my install. This just didn’t seem right. As a result, I gave up and Googled, which brought me to this:

The short story to that long thread is that you should uninstall VMware Workstation, make sure that you have the proper prerequisite packages installed, then re-install VMware Workstation. In my case, that did absolutely nothing and I still had the same issue.

A bit more searching and I found this thread from 2017 that seems to have about a year of activity on it: Apparently this is a fairly common issue that doesn’t have a very elegant solution. Basically, VMware Workstation’s latest version doesn’t support the kernel that I updated to, and I had to patch it. I blindly followed repomon‘s Apr 4, 2018 7:16 PM post, even though it was for VMware Workstation 12. Of course, that didn’t work well because it ended up compiling for an older version of vmmon and vmnet than what I had previously.

Some more Googling brought me to this SuperUser post: Using that as inspiration, I realized that GitHub user mkubecek appears to be keeping up to date with the latest versions of VMware Workstation and Player products and creating appropriate patches to help work through this issue.

The specific script / solution I came to is described in the “My Solution” section above.

Solving Yakuake not Loading .bash_profile

My Problem:

I use Yakuake (pronounced “yaw-quake”) as a drop-down terminal to give me quick access to a shell when working on other things. However, even though my regular terminal (GNOME Terminal as of this blog post) is configured to run bash as a login shell, thus loading .bash_profile, Yakuake wasn’t. None of my aliases, custom shell functions, or anything else in .bash_profile was loading.

My Solution:

Edit your Yakuake profile to launch your shell (probably bash) as a login shell. E.g. /bin/bash -l

The Long Story:

For those unaware, Yakuake is a terminal emulator application, pronounced “yaw-quake”, that drops down from the top of your GUI in the style of the old console in the id game “Quake.”

Mashing a simple key combination causes the terminal to drop down over your existing windows. Its quick access, combined with the option of terminal transparency, allows me to hammer away on problems while reading documentation, or keeping a casual eye on other things that are going on in the background (e.g. watching and contributing to a Slack conversation while I’m pecking away in a shell).

The Yakuake drop down terminal in action, dropping down over existing windows and then being pulled back up out of the way.
Yakuake comes in handy when you need a quick shell while working on other things.

The main problem was that .bash_profile wasn’t being sourced, so my familiar aliases and shell functions weren’t being loaded. Normally this is a sign that your terminal emulator isn’t launching your shell as a login shell. For example, in GNOME Terminal, you need to edit your profile to “Run command as a login shell” to have .bash_profile consumed.

However, in my case, it was already selected. I was under the mistaken impression that Yakuake was a wrapper around GNOME Terminal. It is, in fact, not. Rather, it’s based on KDE Konsole, is its own terminal emulator, and thus has its own terminal profile settings.

You’ll want to go to the settings menu in Yakuake and select “Manage Profiles”

From there, you can select a profile to edit (probably the only one there). You’ll want to launch your shell (presumably bash) as a login shell. By default, the command invoked when Yakuake launches is /bin/bash. You’ll want to add the -l option to make sure it’s a login shell.
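To sanity-check the difference, bash keeps a login_shell shell option that you can query with shopt. A small sketch (--noprofile and --norc just keep your startup files from printing noise during the check):

```shell
# A login shell (-l) sets bash's login_shell option; a plain invocation doesn't
bash --noprofile -lc 'shopt -q login_shell && echo login'       # prints "login"
bash --norc -c 'shopt -q login_shell || echo non-login'         # prints "non-login"
```

Run the same check inside Yakuake after changing the profile to confirm the -l option took effect.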

For more information, you’ll want to check out your shell’s manual for information about invocation options. For bash, check this part of the man pages out:

Back to the Startup and Freelance Scene

2014 was a watershed year for me. I had been a full time freelancer since 2010, learning a wide variety of technologies as well as the intricacies of developing a consulting business.

However, I had an opportunity to get into an exciting Y-Combinator startup that I couldn’t pass up. In May of 2014, I joined MongoHQ, a database as a service company focusing on hosted MongoDB deployments. We later rebranded to Compose when we started hosting more than MongoDB. First with Elasticsearch, and then PostgreSQL, Redis, and more.

In 2015 we were acquired by IBM, and it was an amazing ride through the acquisition and integration process. We added more people, obtained new and loftier goals, and had a great time succeeding in a totally new environment.

After three years of working on the Compose product within IBM, I couldn’t ignore the pull back to the startup and freelance scene. As of Oct 15th, 2018, I’m back to the wilds of uncertainty and terror excitement. I’m taking up residence at a coworking space / startup incubator called Galvanize, specifically at their Phoenix campus.

I’m back to freelancing and startup work, and perhaps hunting a unicorn. Or at least a very pretty horse. If anyone is currently on the same path, I’d love to talk with you and see how things are going in 2018 and on into 2019. If anyone is in need of a consultant / contractor / freelancer who can poke a bit at AWS, Azure, MongoDB, Elasticsearch, Redis, PostgreSQL, JavaScript, Ruby, Go, Linux, and a litany of other trendy and cloudy technologies, reach out:

If anyone is specifically in the Phoenix, Arizona startup scene, stop by Galvanize and let’s grab some lunch and talk about sunburns and hiking. =)