Docker on Core Linux

Docker Running on Core Linux

Is it possible? … Yes? However, I didn’t write down some of the pre-requisites for the Core server (whoops, maybe one day I’ll redo it from scratch). But if you do manage to get the base binaries installed, this post should be helpful for all the caveats I faced along the way…

In my previous post I mentioned that Docker wouldn’t run unless it was a 64-bit machine, so I created a Core Linux 64-bit image and showed how to get the base OS up and running… but what about Docker itself?

Now I got this “working”, but I didn’t exactly write down all my steps (it took a long time to figure out). From looking at the VM’s history, it looks like I simply used the tc account to download and extract the base Docker binaries:

Now this doesn’t tell me which directory I was in when some of the relative paths were called, but I do know it was the tc account, so some safe assumptions can be made.
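
For reference, the static Docker binaries come as a single tarball from download.docker.com, so the download-and-extract would have looked roughly like this (the version number below is a placeholder, and busybox’s built-in wget may need the full wget.tcz extension for https):

cd /home/tc
wget https://download.docker.com/linux/static/stable/x86_64/docker-27.3.1.tgz
tar xzf docker-27.3.1.tgz
sudo cp docker/* /usr/local/bin/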

Reviewing my AI chat and the notes I took, and getting it running again after a reboot, it seems that after the “install” (copying the base files to the path shown in the image above, line 51) I also added “var/lib/docker” and “etc/docker” to the filetool.lst file, so they stay persisted after reboot. Strangely only /var/lib/docker is populated, but I can’t see how that’s the case from the history review; I was pretty positive the script itself failed to execute… I really should start from scratch or else this post will be a bit useless… but… F*** it….

The next issues seem to be tied to cgroups and certificates…

Fixing Cgroups Error

sudo mount -t tmpfs cgroup_root /sys/fs/cgroup/
sudo mkdir /sys/fs/cgroup/devices
sudo mount -t cgroup -o devices none /sys/fs/cgroup/devices

That should be it… but we need this to be persisted and auto run at boot time so we don’t have to do this every time…

sudo vi /opt/dockerd.sh
i
mount -t tmpfs cgroup_root /sys/fs/cgroup/
mkdir /sys/fs/cgroup/devices
mount -t cgroup -o devices none /sys/fs/cgroup/devices
ESC
:wq
sudo vi /opt/bootlocal.sh
*append with*
/opt/dockerd.sh
:wq
sudo chmod +x /opt/dockerd.sh
filetool.sh -b
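
If you’d rather not drive vi by hand, the same script and bootlocal.sh entry can be created non-interactively; a sketch assuming the same paths as above:

sudo sh -c 'cat > /opt/dockerd.sh << "EOF"
#!/bin/sh
mount -t tmpfs cgroup_root /sys/fs/cgroup/
mkdir -p /sys/fs/cgroup/devices
mount -t cgroup -o devices none /sys/fs/cgroup/devices
EOF'
sudo sh -c 'echo "/opt/dockerd.sh" >> /opt/bootlocal.sh'
sudo chmod +x /opt/dockerd.sh
filetool.sh -b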

The next issue was that dockerd would load, but pulling an image would just fail, complaining about certificates.

Fixing Certificate Error

I found the point in my notes, rambling with AI, where I figured it out…

“NO F***KIN WAY!!!!!!! https://stackoverflow.com/questions/75696690/how-to-resolve-tls-failed-to-verify-certificate-x509-certificate-signed-by-un I read this thread and the answer by Andrei Nicolae… which said just durr, copy the CA certs to /etc/ssl/certs. I was like, I bet docker is hard-coded to look there, which is why it was first suggested, but all other apps on Tiny Core Linux know to use /usr/local/etc/ssl/certs. So yeah, docker never was using the expected paths, like I suspected from the beginning, because we manually installed it on an OS it doesn’t support. So with this I did: sudo mkdir -p /etc/ssl/certs; sudo cp /usr/local/etc/ssl/certs/* /etc/ssl/certs; sudo pkill dockerd; sudo dockerd &; sudo docker pull hello-world… and guess what, it finally freaking worked”

But I realized that instead of copying them I could just make a symlink:

sudo mkdir /etc/ssl/
sudo ln -s /usr/local/etc/ssl/certs/ /etc/ssl/

I simply placed these lines in the /opt/dockerd.sh file I created earlier, rebooted, and verified that /etc/ssl/certs was populated with certs, and it was.

And finally…

Running Dockerd

sudo DOCKER_RAMDISK=true dockerd &

Pulling Image

sudo docker pull hello-world

Running Image

sudo docker run --rm hello-world

Yay, we actually ran a container from Core Linux… mind blown… I swear I had it all running at only 90MB of RAM, but checking now shows 116MB. Bah…

To get Docker to run at boot my final /opt/dockerd.sh looked like this:
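
I didn’t save a copy of the exact file, but reconstructing it from the steps above it would have been along these lines (a sketch, not a verbatim copy):

#!/bin/sh
# cgroup mounts dockerd needs
mount -t tmpfs cgroup_root /sys/fs/cgroup/
mkdir -p /sys/fs/cgroup/devices
mount -t cgroup -o devices none /sys/fs/cgroup/devices
# put the CA certs where docker is hard-coded to look
mkdir -p /etc/ssl
ln -s /usr/local/etc/ssl/certs/ /etc/ssl/
# start the daemon
DOCKER_RAMDISK=true dockerd &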

Installing CorePure64

Back Story

So in my previous post I shared how to set up a very small footprint Linux server called Core Linux: Installing Core Linux – Zewwy’s Info Tech Talks

but… when I tried getting Docker running on it, I was hit with an error: “Line 1: ELF: File not found”.

AI, after giving all the required commands to do a “manual install”, stated, “duuuuurrrrrrrrrrrrrr docker don’t give 32 bit binaries”, to which I replied: huh… I guess I installed 32-bit Core Linux… do they have 64-bit versions?

It gave me some dumb link to some dumb third party source.. the answer is yes.. here: Index of /16.x/x86_64/release/

So here we go again….

Installing CorePure64

Step 1) Download Install image CorePure64-16.0.iso

Step 2) Get 64-bit hardware, or create a VM that supports 64-bit. I have 64-bit hypervisors, so I will create a VM as I did in my first post.

This time: 2 CPU, 1 GB RAM, 60GB HDD (thin provisioned), VMware Paravirtual SCSI controller, EFI enabled with Secure Boot. Let’s see if this works out… No boot… Flip boot settings to BIOS mode… ISO boots… ah man, FFS, it’s x64-based but still relies on BIOS for booting… that sucks… oh well, moving on…

Booting and Installing Core Linux

Attach the ISO and boot. Core Linux boots automatically from the ISO:

For some reason the source doesn’t tell you what to do next. Type tc-install and the console says it doesn’t know what you are talking about:

AI Chat was kind enough to help me out here, and told me I had to run:

tce-load -wi tc-install

Which required an internet connection:

However, even after this, attempting to run it gave the same error… hmmm. Using the find command I found it, but it needs to be run as root, so:

sudo su
/tmp/tcloop/tc-install/usr/local/bin/tc-install.sh
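
(If you need to hunt down where tce-load mounted it, something like this will turn it up; the exact path may differ:)

find / -name "tc-install*" 2>/dev/null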

C for install from CDrom:

Let’s keep things frugal around here:

1 for the whole disk:

y, we want a bootloader (it’s extlinux btw, located at /mnt/sda1/boot/extlinux/extlinux.conf):

Press enter again to bypass “Install Extensions from..”

3 for ext4:

Like the install source guide says, add the boot options for HDD (opt=sda1 home=sda1 tce=sda1):

last chance… (Dooo it!) y:

Congrats… you installed TC-Linux:

Once rebooted, the partition layout and disk free output will look different. Before reboot, running from memory:

after reboot:

Cool, the install process was 100% the same as the 32-bit process…

But running uname -m, we see we are now 64-bit instead of 32-bit.
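
For example:

uname -m
# x86_64   (the 32-bit install reports i686 or similar)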

Changing TC Password

Step 1) Edit /opt/.filetool.lst (use vi as root)
– add etc/passwd and etc/shadow
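
(If you’d rather not open vi, appending the two entries from the shell does the same thing:)

sudo sh -c 'echo "etc/passwd" >> /opt/.filetool.lst'
sudo sh -c 'echo "etc/shadow" >> /opt/.filetool.lst'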

Step 2) run:

filetool.sh -b

Step 3) run

passwd tc

Step 4) run

filetool.sh -b

Now reboot. You may not notice that it applied due to the auto-login; however, if you type exit to get back to the actual login banner, then type in tc, you will be prompted for the password you just set. Now we can move on to the next step, which is to disable the auto-login.

Disable Auto-Login

Step 1) Run

sudo su
echo 'echo "booting" > /etc/sysconfig/noautologin' >> /opt/bootsync.sh

Step 2) Run

filetool.sh -b
reboot

K on to the next fun task… static IP…

Static IP Address

For some reason AI said I had to create a script that runs the manual steps… not sure if this is the proper way… I looked all over the Wiki: wiki:start – Tiny Core Linux Wiki and can’t find anything… I know this works, so we’ll just do it this way:

Step 1)  Run:

echo "ifconfig eth0 192.168.0.69 netmask 255.255.255.0 up" > /opt/eth0.sh
echo "route add default gw 192.168.0.1" >> /opt/eth0.sh
echo 'echo "nameserver 192.168.0.7" > /etc/resolv.conf' >> /opt/eth0.sh
chmod +x /opt/eth0.sh
echo "/opt/eth0.sh" >> /opt/bootlocal.sh
filetool.sh -b

Step 2) reboot to apply and verify.

It didn’t work, but if I add “sleep 2” just before /opt/eth0.sh in the bootlocal.sh file, then it works. Not the greatest, but I’ll take it for now.
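
So the tail of my /opt/bootlocal.sh ended up looking roughly like this:

sleep 2
/opt/eth0.sh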

Updates?

On Tiny Core Linux, updating is a bit different from traditional distros. Here’s how it works:

🔄 Updating Tiny Core Linux

Tiny Core doesn’t have a single tc-update command for the whole system, but it does have tools for updating extensions and managing packages:

🧰 Extension Update

Use the tce-update command to update installed extensions:

tce-update

This checks for newer versions of your installed .tcz extensions and updates them.

📦 Other Useful Commands

  • tce-load: Loads extensions from your /tce directory.
  • tce-ab: Opens the AppBrowser (if you have GUI).
  • tce-audit: Audits installed extensions.
  • tce-remove: Removes extensions.

🧱 Core System Update

To update the core system itself (like the kernel and initrd), you’ll need to manually download the latest bzImage and core.gz (or tinycore.gz) from the Tiny Core mirrors, then replace them in your boot partition.

Steps:

  1. Download the latest files.
  2. Mount your boot partition.
  3. Replace bzImage and core.gz.
  4. Update your bootloader config (e.g., GRUB or syslinux).
  5. Reboot.
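
A rough sketch of those steps for a CorePure64 install like the one above (the mirror URL assumes the 16.x x86_64 release, and note the 64-bit file names are vmlinuz64 and corepure64.gz rather than bzImage/core.gz):

cd /tmp
wget http://tinycorelinux.net/16.x/x86_64/release/distribution_files/vmlinuz64
wget http://tinycorelinux.net/16.x/x86_64/release/distribution_files/corepure64.gz
sudo mount /mnt/sda1   # skip if it's already mounted on an installed system
sudo cp vmlinuz64 corepure64.gz /mnt/sda1/boot/
# double-check /mnt/sda1/boot/extlinux/extlinux.conf still points at these file names, then reboot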

Upgrade vCenter from 7 to 8

Upgrading vCenter

Let me start by having a firm base: a working vCenter 7 with a properly connected Veeam server. Since my server is dead, I’m going to start from scratch.

Pre-req Step (Base vCenter 7 with Veeam)

Other Pre-reqs:
DNS server (or local host records)
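
(Going the local host records route just means an entry like this, with a hypothetical IP, on the machines involved:)

# /etc/hosts-style entry
192.168.0.50   vcenter.zewwy.ca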

Step 1) Install vCenter 7

20 minutes later, complete. Now, I made one mistake along the way (to replicate what I think happened the first go around), and that was to give the VM the name vCenter.zewwy.ca, while when configuring the network FQDN I gave it vcenter.zewwy.ca. After install, I was able to replicate the findings from my first go around, which I now know will cause a problem for the upgrade to vCenter 8. The install wizard has no issues with this and the install completes successfully. However, the hostname will now have an uppercase letter, while the PNID will be lowercase:

It’s this case mismatch that will cause the upgrade to have hiccups, which I’ll cover below. However, I need to add my hosts which were connected to my previous setup; let’s see if it’ll take my hosts…

Step 2) I followed my blog post to fix up my Veeam jobs again. 

Step 3) Upgrade to vCenter 8.

Stage 1 Error

Remember when I showed above the mistake I made by entering the FQDN without a capital when configuring the network in the vCenter 7 installer? Putting the lowercase name here produced the above error; to get past it I had to use the case-sensitive hostname with the capital.

Stage 2 Error

Now pick your host and you have another error to bypass at Stage 2.

I had another weird issue where, even though the VM deployed, it was not reachable over the network and the installer timed out. To resolve this I simply changed the VM network port group (VMPG), saved, then changed it back to the proper one and saved, and it was pingable. To get back to Stage 2, simply navigate to the VAMI web page on port 5480. When you get to the stage to connect to the source, you enter the details and get this error at the pre-upgrade checks:

Changing the vCenter source entry to a capital again, as we did before, will not work; the same error pops up, pointing to that blog post on how to change the FQDN. Since my FQDN already looked correct (with a capital), but the command showed the PNID was lowercase, instead of changing the FQDN to be lowercase and going through all the steps in that blog (there are a lot), I simply set the PNID to have a capital in it:

Get:

/usr/lib/vmware-vmafd/bin/vmafd-cli get-pnid --server-name localhost

Set:

/usr/lib/vmware-vmafd/bin/vmafd-cli set-pnid --server-name localhost --pnid vCenter.mydomain.com

Yay no more error:

Now pick your content to migrate, and at this point you should stop using vCenter for the duration of the migration.

At this point the pings dropped (roughly 12 with no response, about 1-2 minutes of downtime), then they came back up as the new vCenter took over the IP. At which point step 2 began.

I went to play a bit of Party Animals; it had timed out on the old VAMI IP, which the installer may have auto-switched over, but logging into the VAMI on the new server showed everything was green.

Logging in, everything looks good. Checking Veeam: yup, a rescan of vCenter worked without issue; a re-calculation of a VM on a backup job also works.

Success. This post doesn’t cover additional steps (applying new license keys, checking user migrations (vsphere.local), remote logging services or connections (rsyslog)); all of those you’d have to verify after completing these steps. Now to upgrade the ESXi hosts; both ESXi hosts remained on 7.0.3x versions for this post.