BELK Stack on Docker
(Part 2 – Docker Compose)

The Story Continues

Following on from the last post, today we cover Docker Compose, which allows for easier deployment of Docker images and configurations. As with my previous post, you may want to indulge in the same reading I did here.

Past those nice formalities, I find myself missing something… I’m not sure what it could be… oh yeah… dependencies!

Installing Docker-Compose

Can I use apt-get?

It would seem like it… but

it’s outdated… 😀

The other options are via pip, or the intended way: downloading the release binary from the project’s GitHub releases.

Working with Docker-Compose

  • docker-compose ps — lists all the services in a network. This is especially helpful when troubleshooting a service, as it will give you the container ID so you can then run docker exec -it <ID> bash to enter the container and debug as needed.
  • docker-compose build — generates any needed images from custom Dockerfiles. It will not pull images from Docker Hub, only generate custom images.
  • docker-compose up — brings up the network for the services to run in
  • docker-compose stop — stops the network and saves the state of all the services
  • docker-compose start — restarts the services and brings them back up with the state they had when they were stopped
  • docker-compose down — burns the entire Docker network with fire. The network and all the services contained within are totally destroyed.
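To see how these fit together, here’s a hedged example of a typical session from a project directory (the flags shown are standard; -d runs the services in the background):

docker-compose up -d        # create the network and start all services in the background
docker-compose ps           # list the services and their container IDs
docker-compose stop         # stop the services but save their state
docker-compose start        # bring them back up where they left off
docker-compose down         # burn it all down: remove the containers and the network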

How to Docker-Compose?

The last big question is how to write a docker-compose.yml, and it’s actually very easy: it follows a standard formula.

Here is a template of what any docker-compose.yml will look like.

Sample Docker Compose Template

version: "2"
services:
  <name_of_service>:
    build: <path_to_dockerfile>
    # OR
    image: <name_of_image:version>
    environment:
      - "ConfVar=value"
      - "homeDir=/home/dir"
    ports:
      - "[HostPort]:[ContainerPort]"
      - "80:80"
    volumes:
      - /path/container/will/use

Every docker-compose file starts with a version declaration. If you’re writing a Docker Swarm stack file it will need version: "3", but for a single-host docker-compose.yml, version: "2" is all you need.
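As a small preview of part 3, here’s a minimal sketch of a single-service compose file using the official Elasticsearch image; the tag, port, and volume path are illustrative choices of mine, not necessarily the exact configuration I’ll end up using:

version: "2"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    environment:
      - "discovery.type=single-node"
    ports:
      - "9200:9200"
    volumes:
      - /srv/esdata:/usr/share/elasticsearch/data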

See here for more on the use of volumes

I’m gonna keep this post short and put the examples from these first two posts to use in part 3, where I set up and configure the first container in the BELK Stack: Elasticsearch.

See you all at part 3! 😀

BELK Stack on Docker
(Part 1 – Docker)

The Story

This time our goal is to set up a SIEM (Security Information & Event Management) system which will gather data via the BELK Stack (Beats, Elasticsearch, Logstash and Kibana). This is going to take (I’m assuming, as I’ve just started) about 4-5 separate blog posts to get this off the ground.

It has taken me a couple of weeks of smashing my head into a wall simply due to my own ignorance, so in this blog series I’m going to cover, more step-by-step, exactly what needs to be done for my particular setup. There are many ways you can configure services these days, which still includes bare metal. If I so chose I could run Docker on a bare metal Ubuntu server, or even a bare metal Windows server, but in this case I’m going to install Docker on an Ubuntu server which itself happens to be a VM (Virtual Machine).

Now with that in mind, here’s some basic reading you probably should do before continuing on. Before we go on, let’s be clear on one thing: Docker itself doesn’t run on magic, or fluffy rainbow clouds. As I mentioned in the paragraph above, it runs on some system, whether that’s bare metal or some VM of some kind [think IaaS (Infrastructure as a Service)]; in this blog it will be an Ubuntu VM. The specs of this machine should suffice for the applications and workloads that are going to be created on it.

Dockerfile Commands

Below are the commands that will be used 90% of the time when you’re writing Dockerfiles, and what they mean.

FROM — this initializes a new build stage and sets the Base Image for subsequent instructions. As such, a valid Dockerfile must start with a FROM instruction.

RUN — will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.

ENV — sets the environment variable <key> to the value <value>. This value will be in the environment for all subsequent instructions in the build stage and can be replaced inline in many as well.

EXPOSE — informs Docker that the container listens on the specified network ports at runtime. You can specify whether the port listens on TCP or UDP; the default is TCP if the protocol is not specified. This makes it possible for the host and the outside world to access the isolated Docker container.

VOLUME — creates a mount point with the specified name and marks it as holding externally mounted volumes from the native host or other containers.

You do not have to use every command. In fact, I am going to demonstrate a Dockerfile using only FROM, MAINTAINER, and RUN.
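That said, for reference, here’s a hedged sketch of a Dockerfile exercising the other instructions too; the base image tag, port, and paths are just illustrative choices of mine:

FROM ubuntu:18.04
ENV NGINX_CONF_DIR /etc/nginx
RUN apt-get update && apt-get install -y nginx
EXPOSE 80/tcp
VOLUME /var/log/nginx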

Images vs. Containers

The terms Docker image and Docker container are sometimes used interchangeably, but they shouldn’t be; they mean two different things.
Docker images are executable packages that include everything needed to run an application — the code, a runtime, libraries, environment variables, and configuration files.
Docker containers are a runtime instance of an image — what the image becomes in memory when executed (that is, an image with state, or a user process).

Examples of Docker containers. Each one comes from a specific Docker image.
In short, Docker images hold the snapshot of the Dockerfile, and the Docker container is a running implementation of a Docker image based on the instructions contained within that image.

This is true; however, this image is a bit misleading as it’s missing the versioning, which will become apparent a bit later on in this blog post.

Docker Engine Commands

Once the Dockerfile has been written the Docker image can be built and the Docker container can be run. All of this is taken care of by the Docker Engine that I covered briefly earlier.

A user can interact with the Docker Engine through the Docker CLI, which talks to the Docker REST API, which talks to the long-running Docker daemon process (the heart of the Docker Engine). Here’s an illustration below.

The CLI uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI as well.

Here are the commands you’ll be running from the command line the vast majority of the time you’re using individual Dockerfiles.

  • docker build — builds an image from a Dockerfile
  • docker images — displays all Docker images on that machine
  • docker run — starts a container and runs any commands in that container
    • there are multiple options that go along with docker run, including:
      • -p — allows you to specify ports on the host and in the Docker container
      • -it — opens up an interactive terminal after the container starts running
      • -v — bind mount a volume to the container
      • -e — set environment variables
      • -d — starts the container in detached mode (it runs as a background process)
  • docker rmi — removes one or more images
  • docker rm — removes one or more containers
  • docker kill — kills one or more running containers
  • docker ps — displays a list of running containers
  • docker tag — tags the image with an alias that can be referenced later (good for versioning)
  • docker login — login to Docker registry
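To tie those together, here’s a hedged example of a full round trip; the image name, tag, and ports are made up for illustration:

docker build -t my-nginx:1.0 .                      # build an image from the Dockerfile in the current directory
docker images                                       # confirm the image exists
docker run -d -p 8080:80 --name web my-nginx:1.0    # run it detached, mapping host port 8080 to container port 80
docker ps                                           # confirm the container is running
docker kill web                                     # kill the running container
docker rm web                                       # remove the stopped container
docker rmi my-nginx:1.0                             # remove the image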

A big thank you to Paige Niedringhaus for her contributions. As you can see, most of this theory content was a direct copy-paste, though not all of it, just the basic relevant parts (kept here in case the source material ever goes down).

Now that we got the theory out of the way, let’s get down to the practical fun!

Installing Docker

https://docs.docker.com/install/linux/docker-ce/ubuntu/

Uninstall old versions

Older versions of Docker were called docker, docker.io, or docker-engine. If these are installed, uninstall them:

$ sudo apt-get remove docker docker-engine docker.io containerd runc

It’s OK if apt-get reports that none of these packages are installed.

Installing Dependencies

Ain’t got ’em? Let’s grab the dependencies and move on…

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

apt-key fingerprint 0EBFCD88

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

Install Docker Engine – Community

Update the apt package index.

sudo apt-get update

Install the latest version of Docker Engine – Community and containerd, or go to the next step to install a specific version:

sudo apt-get install docker-ce docker-ce-cli containerd.io

Got multiple Docker repositories?

If you have multiple Docker repositories enabled, installing or updating without specifying a version in the apt-get install or apt-get update command always installs the highest possible version, which may not be appropriate for your stability needs.

To install a specific version of Docker Engine – Community, list the available versions in the repo, then select and install:

List the versions available in your repo:

apt-cache madison docker-ce
docker-ce | 5:18.09.1~3-0~ubuntu-xenial | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
docker-ce | 5:18.09.0~3-0~ubuntu-xenial | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
docker-ce | 18.06.1~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
docker-ce | 18.06.0~ce~3-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
...
Install a specific version using the version string from the second column, for example, 5:18.09.1~3-0~ubuntu-xenial:

sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io

Verify that Docker Engine – Community is installed correctly by running the hello-world image.

sudo docker run hello-world

Woooo, what a lot of fun… Just note one thing here…

Executing the Docker Command Without Sudo (Optional)

By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker’s installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you’ll get output like this:

docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

sudo usermod -aG docker ${USER}

To apply the new group membership, log out of the server and back in, or type the following:

su - ${USER}

You will be prompted to enter your user’s password to continue.

Confirm that your user is now added to the docker group by typing:

id -nG

If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using:

sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

Let’s explore the docker command next. Thanks, Brian!

Creating your Dockerfile

The first thing we’re going to do is create a new directory to work within; so open a terminal window and issue the following commands as root…

mkdir /dockerfiles
chown dadocker:docker /dockerfiles

Change into that newly created directory with the command

cd /dockerfiles

Now we create our Dockerfile with the command nano Dockerfile and add the following contents:

FROM ubuntu:latest
MAINTAINER NAME EMAIL

RUN apt-get update && apt-get -y upgrade && apt-get install -y nginx

Where NAME is the name to be used as the maintainer and EMAIL is the maintainer’s email address.

Save and close that file. (In my case I called it dockerfile, with a lowercase d.)

Building the Image

Now we build an image from our Dockerfile. This is run with the command (by a user in the docker group):

docker build -t "NAME:Dockerfile" .

Where NAME is the name of the image to be built.

In this case, . simply represents the current directory as the build context; otherwise, specify the path to the directory you want to use.
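One caveat from my lowercase naming above: docker build looks for a file named Dockerfile (capital D) in the build context by default, so with a file named dockerfile you have to point the build at it explicitly with -f, something like:

docker build -f dockerfile -t "NAME:Dockerfile" .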

Listing Images

docker images

Deleting Images

docker rmi image:tag

Running Images (Creating Containers)

docker run image

Well poop, after running and stopping a container I was unable to delete the images… Internets to the rescue! Since a force delete seemed a rather harsh way to do it.

By default docker ps will only show running containers. You can show the stopped ones using docker ps --all.

You can then remove the container first with docker rm <CONTAINER_ID>

If you want to remove all of the containers, stopped or not, you can achieve this from a bash prompt with

$ docker rm $(docker ps --all -q)

The -q switch returns only the IDs

yay it worked!
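And if you ever want to purge all images the same way (my own addition, use with care, as it removes every image on the machine):

docker rmi $(docker images -q)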

Summary

Most of the time you won’t be directly installing Docker or building your own images, but if you do, you at least now know the basics. These will become important in the future blog posts. I hope this helps with the basic understanding.

In the next blog post I’ll cover Docker Compose, which will allow us to spin up multiple images into a single working stack of containers, which will be the basis of our ELK stack. 🙂

Even More PowerShell Fun

The Story

It’s another day, and we all know what that means… yes, another blog post, and even more PowerShell! Can you feel all the power!?!?!

This time it came down to the storage size of my Exchange server’s C:\, which turns out to be due to logs. Logs are great, and best practice is to only clear them if you have a backup copy. It’s often the case that logs can be truncated after a backup via VSS by many backup solutions; in my case I could, and probably should, get that validated with Veeam (as I can’t seem to get it working ‘out of the box’ at the moment). So instead I wanted to know what was “usually” done server side, even if someone was not implementing a backup solution.

Source: https://social.technet.microsoft.com/wiki/contents/articles/31117.exchange-201320162019-logging-clear-out-the-log-files.aspx

Neat, but the script is just alright; good for them, it does what they want, and that’s running it as a scheduled task. Not my goal, but a great source and starting point… let’s have some fun and give this script some roids, much like my last one… I’ll give this a home on GitHub.

Things learned…

  1. Working with the Registry
  2. Determining if Elevated (This is great and I may have a solution to the conundrum in my previous PowerShell post)
  3. Getting a Number, and validating it
  4. Validating Objects by Type
  5. Getting Folder Sizes

Check out my script for all the fun coding bits. I’m a bit tired now as it’s getting late, so less blogging, more coding. 🙂
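For the curious, here’s a minimal sketch of two of those pieces, checking elevation and getting a folder size; the Exchange logging path is just an illustrative example, see the actual script on GitHub for the real thing:

# Determine if the current session is elevated
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)
$isAdmin   = $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)

# Get the size of a folder in MB (path is hypothetical)
$folder = 'C:\Program Files\Microsoft\Exchange Server\V15\Logging'
$sizeMB = (Get-ChildItem -Path $folder -Recurse -File -ErrorAction SilentlyContinue |
    Measure-Object -Property Length -Sum).Sum / 1MB
"Elevated: $isAdmin; $folder is {0:N2} MB" -f $sizeMB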

ErrorAction Stop Not Stopping Script

Quick Educational note (Source)

$ETLLogKey2 = 'HKLM:\SOFTWARE\Microsoft\Search Foundation for Exchange\Diagnostics'
try{Get-ItemProperty -Path $ETLLogKey2 -ErrorAction Stop}
catch{Write-Host "No Key"}
Write-Host "This should not hit"

Produces:

Well poop… The catch block was triggered but the script did not stop…

Oddly, changing to throw, which is ugly, does make the script stop…

$ETLLogKey2 = 'HKLM:\SOFTWARE\Microsoft\Search Foundation for Exchange\Diagnostics'
try{Get-ItemProperty -Path $ETLLogKey2 -ErrorAction Stop}
catch{throw "No Key"}
Write-Host "This should not hit"

Nice, it worked this time, but it’s ugly…

Write-Error is just as ugly, but doesn’t stop the script?

$ETLLogKey2 = 'HKLM:\SOFTWARE\Microsoft\Search Foundation for Exchange\Diagnostics'
try{Get-ItemProperty -Path $ETLLogKey2 -ErrorAction Stop}
catch{Write-Error "No Key"}
Write-Host "This should not hit"

Produces:

Yet if I follow Sage’s answer in the source and set a script variable for the stop action, it then works???!?!

$ErrorActionPreference = [System.Management.Automation.ActionPreference]::Stop
$ETLLogKey2 = 'HKLM:\SOFTWARE\Microsoft\Search Foundation for Exchange\Diagnostics'
try{Get-ItemProperty -Path $ETLLogKey2}
catch{Write-Error "No Key"}
Write-Host "This should not hit"

Those are really weird results, but all still ugly… So it seems that even though -ErrorAction Stop causes a non-terminating error to be treated as a terminating error, what you do in the catch block determines whether there’s a break/exit event. In my case, to have things look nice and actually stop the script, I have to do the following.

$ETLLogKey2 = 'HKLM:\SOFTWARE\Microsoft\Search Foundation for Exchange\Diagnostics'
try{Get-ItemProperty -Path $ETLLogKey2 -ErrorAction Stop}
catch{Write-Host "No Key";break}
Write-Host "This should not hit"

Which finally produced the output I wanted. (I could have also used exit in place of break)

Finally!

Outlook and the Cache Mode

The Story

Wouldn’t be another post without another day of annoyances… yup, just another day. So recently it’s been reported that when users are working remotely their Outlook decides not to open…

Now, since they are remote, I have noticed this only happens if the Outlook client is configured in “Online Mode” instead of “Exchange Cached Mode”; see this MS Docs article for more information on the different modes and when to utilize each.

Originally I configured Online Mode, as most users are locally available on the work network, so this was not a major problem. Also, when using Cached Mode not all emails show up in Outlook right away, especially if using folders; for items that were not cached there generally shows a link, “Click here to view more on Microsoft Exchange” (depending on the cache time slider this may vary; I chose 12 months). See this support article if you can’t see the link with cached mode enabled.

I used to have an issue with this setting; users reported items wouldn’t load even after clicking the link. However, I haven’t seen this be an issue anymore, so I recommend enabling Outlook Cached Mode unless you fall under the other points in MS’s doc linked above.

Finding Users That Are Not Using Cache Mode

So I was now on a new mission… “How do I know who’s not using Cached Mode in Outlook?” And this is where the rabbit hole began…

First result, same question

Craig Hart’s reply…

His response makes it seem rather easy until…

wtf is this…? Deprecated?!?! C’mon, but there’s an alternative now that’s better, right? No… Thanks MS… seriously… thanks…

So this older post goes over alternatives, not because they didn’t have access to the cmdlet above, but simply because they didn’t trust the results… huh…

Problem is, these seem to be reg keys used by older Outlook, and newer Outlook seems to use alternative keys… This sure is fun! However, thanks to others that blog and code in their spare time as well, in this case Jose Espitia, much like in my last blog post this is a great start. But, usual me, I don’t like expecting anyone to change the source code, and in this case you have to provide a file with a computer list and an output path, and it’s hard coded… OK, you know what that means! Yeah, I usually would create a new GitHub repo, but first…

Much like the older post mentioned they choose to target end user directly to get the most “accurate data”. This however assumes four things:

  1. That Exchange admins are workstation admins, and have elevated rights on all machines.
  2. That all firewalls and permissions are configured to allow remote querying.
  3. That the users are in fact online at the time the query is made.
  4. That the RemoteRegistry service is started and running on the end machines.

In my case it was not; it was disabled on all machines, and I didn’t feel it was beneficial enough to introduce a risk for the simple sake of determining a user’s connection mode in Outlook.

So I decided to go back to Option A from the old original post, parsing all the RPC access logs… using this as an alternative reference.

Well, I gave it a try, and looking through all the logs there was no reference to “Classic”, so I guess that’s no longer a valid option either.

Looks like I’m stuck on this one. I can’t seem to find a valid way to find this information out server side with MS’s removal of the Get-LogonStatistics cmdlet, and attempting to query all end users has too many restrictions/hoops that I do not wish to implement. In this case I have to simply go around to all users’ machines, or wait for them to complain when they work remotely.

Thanks Microsoft, I really love what you’re doing for SysAdmins. Taking away Mark and putting him on as CTO of Azure, so all our beautiful tools from Sysinternals are now just the way they are, and new tools? Don’t need ’em, right, just buy your cloud subscriptions, and who needs SysAdmins… 😛

Anyway… that’s it for today. Sorry, no advanced scripts from this post; just use Jose’s script if you wish to query end users’ machines. Just ensure you have the RemoteRegistry service running on all end users’ machines, and that the firewall isn’t blocking it either. Hoops I have no interest in jumping through.

Cheers!

WMIC Fun!

I’ve blogged about WMI before, more for setting up dedicated accounts for monitoring purposes.

Today we are going to have some fun with WMIC, the command line interface for simple and quick query data.

I got these ideas after reading this source blog… and I was curious at what level these worked (admin or not).

Using WMI

Most WMIC commands are issued in the following format:

wmic [Object Class] [Action] [Parameters]

For example, you can collect a list of groups or users on the local system and domain using the following commands:

wmic group list brief
wmic useraccount get name,sid

Yup, SIDs are no secret, and you can pretty much query the whole domain if there’s been no hardening done. I haven’t tested this on a hardened domain, but out of the box all users’ login names and SIDs are open for any standard user to query.

You can also perform the same data collection over the network, without ever logging into the remote machine, provided you have some administrative credentials that the remote system will accept.

The same command issued against a remote system in another domain looks like this:

wmic /user:"FOREIGN_DOMAIN\Admin" /password:"Password" /node:192.168.33.25 group list brief

I can’t test this in my lab as I don’t have an alternative domain to play with (yet), but let’s see if I can query a member server using a standard domain account:

wmic /node:subca.zewwy.ca group list brief

Nope; well, that’s good…

Processes
WMIC can collect a list of the currently running processes similar to what you’d see in “Task Manager” using the following command:

wmic process list
wmic process get name

Note that some of the WMIC built-ins can also be used in “brief” mode to display a less verbose output. The process built-in is one of these, so you could collect more refined output using the command:

wmic process list brief

Yup, those all work, even as standard user.

Some examples

Start an Application

wmic process call create "calc.exe"

Yeah… that worked…

I decided to see if I could somehow exploit these to get elevated rights; so far no dice… but I did find this randomly while searching for a possible way…

Sure enough, if you create a batch file containing start cmd.exe /k "net use" and name it net use.bat, it will go into an endless loop. Mhmm, interesting, and the easiest way to do a Denial of Service attack.

Anyway, moving on…

System Information and Settings

You can collect a listing of the environment variables (including the PATH) with this command (works as standard user):

wmic environment list

OS/System Report HTML Formatted

wmic /output:c:\os.html os get /format:hform

This error was literally because my standard account didn’t have access to C:\temp, because I created the folder using my admin account at some earlier point in time.

Products/Programs Installed Report HTML Formatted

wmic /output:c:\product.html product get /format:hform

Turn on Remote Desktop Remotely

Wmic /node:"servername" /user:"user@domain" /password: "password" RDToggle where ServerName="server name" call SetAllowTSConnections 1

Get Server Drive Space Usage Remotely (any /node commands generally require elevated permissions; a standard user fails at these)

WMIC /Node:%%A LogicalDisk Where DriveType="3" Get DeviceID,FileSystem,FreeSpace,Size /Format:csv | MORE /E +2 >> SRVSPACE.CSV

(The %%A here comes from a batch for loop iterating over node names.)

Get PC Serial Number (works as standard user)

wmic bios get serialnumber

Get PC Product Number (works as standard user)

wmic baseboard get product

Find stuff that starts on boot (works as standard user)

wmic STARTUP GET Caption, Command, User

Reboot or Shutdown (works as standard user)

wmic os get buildnumber
wmic os where buildnumber="2600" call reboot

Get Startup List (works as standard user)

wmic startup list full

Information About Harddrives (works as standard user)

wmic logicaldisk where drivetype=3 get name, freespace, systemname, filesystem, size, volumeserialnumber

Information about OS (works as standard user)

wmic os get bootdevice, buildnumber, caption, freespaceinpagingfiles, installdate, name, systemdrive, windowsdirectory /format:htable > c:\osinfo.htm

User and Groups

Local user and group information can be obtained using these commands:

wmic useraccount list
wmic group list
wmic sysaccount list

For domain controllers, this should provide a listing of all user accounts and groups in the domain. The “sysaccount” version provides you with the built-in and other system accounts, which is useful for spotting any extra accounts that may have been added by rootkits.

Identify any local system accounts that are enabled (guest, etc.)

wmic USERACCOUNT WHERE "Disabled=0 AND LocalAccount=1" GET Name

Number of Logons Per USERID

wmic netlogin where (name like "%skodo") get numberoflogons

Get Domain Names And When Account PWD set to Expire

WMIC UserAccount GET name,PasswordExpires /Value

Patch Management

Need to know if there are any missing patches on the system? WMIC can help you find out with this command:

wmic qfe list

The QFE here stands for “Quick Fix Engineering”.
The results also include the dates of install should that be needed from an auditing standpoint.
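If you need to check for one particular patch, you can filter on the hotfix ID (the KB number below is just a hypothetical example):

wmic qfe where "HotFixID='KB4480960'" get HotFixID,InstalledOn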

Shares

Enumeration of all of the local shares can be collected using the command:

wmic share list

The result will also include hidden shares (named with a $ at the end).

Find user-created shares (usually not hidden)

wmic SHARE WHERE "NOT Name LIKE '%$'" GET Name, Path

So far all of these work as a standard user, but that doesn’t mean anything.

Networking

Use the following command to extract a list of network adapters and IP address information:

wmic nicconfig list

Get Mac Address:

wmic nic get macaddress

Update static IP address:

wmic nicconfig get description, index
wmic nicconfig where index=9 call enablestatic("192.168.16.4"), ("255.255.255.0")

Yup, you’ve got to be an admin for that one.

Change network gateway:

wmic nicconfig where index=9 call setgateways("192.168.16.4", "192.168.16.5"),(1,2)

Enable DHCP:

wmic nicconfig where index=9 call enabledhcp

Get List of IP Interfaces

wmic nicconfig where IPEnabled='true'

Services

WMIC can list all of the installed services and their configurations using this command:

wmic service list

The output will include the full command used for starting the service and its verbose description.

Other examples

Service Management

wmic service where caption="DHCP Client" call changestartmode "Disabled"

Look at services that are set to start automatically

wmic SERVICE WHERE StartMode="Auto" GET Name, State

Services Report on a Remote Machine HTML Formatted:

wmic /output:c:\services.htm /node:server1 service list full /format:htable

Get Startmode of Services

Wmic service get caption, name, startmode, state

Change Start Mode of Service:

wmic service where (name like "Fax" OR name like "Alerter") CALL ChangeStartMode Disabled

Get Running Services Information

Wmic service where (state="running") get caption, name, startmode, state

Another interesting feature of WMIC is its ability to record the run-time command executed and runtime configuration all in one XML file. A recorded session might look something like this:

wmic /record:users_list.xml useraccount list

Of course, since WMIC wasn’t designed as a recording device, there are some caveats to using the XML. First, you can only use XML output; there are no other formats defined.

Event logs

Obtain a Certain Kind of Event from Eventlog

wmic ntevent where (message like "%logon%") list brief

Clear the Eventlog

wmic nteventlog where (description like "%secevent%") call cleareventlog

Retrieve list of warning and error events not from system or security logs

WMIC NTEVENT WHERE "EventType < 3 AND LogFile != 'System' AND LogFile != 'Security'" GET LogFile, SourceName, EventType, Message, TimeGenerated /FORMAT:"htable.xsl":"datatype = number":"sortby = EventType" > c:\appevent.htm

Thanks Andrea!

Requesting, Signing, and Applying internal PKI certificates on VCSA 6.7

The Story

Everyone loves a good story. Well, today it begins with something I’ve wanted to do for a while but haven’t got around to. I remember adjusting the certificates on vCenter 5.5, and it caused a lot of grief. Now, it may have been my ignorance; it also may have been due to poor documentation and guides, who knows. With VMware now going full Linux (Photon OS) for vCenter deployments (much more lightweight), it’s still nice to see a green icon in your web browser when you navigate the nice new HTML5-based management interface. Funny that the guide I followed, even after applying their own certificate, still had a “not secure” notification in their browser.

This might be because he didn’t install his root CA certs into the computer’s trusted CA store on the machine he was navigating the web interface from. However, I’m still going to thank RAJESH RADHAKRISHNAN for his post on VMArena; it helped. I will cover some alternatives, however.

Not often I do this but I’m lazy and don’t feel like paraphrasing…

VCSA Certificate Overview

Before starting the procedure, just a quick intro to managing vSphere certificates. vSphere certificates can be managed in two different modes:

VMCA Default Certificates

VMCA provides all the certificates for vCenter Server and ESXi hosts in the virtual infrastructure, and it can manage the certificate lifecycle for vCenter Server and ESXi hosts. Using the VMCA default certificates is the simplest method, with the least overhead.

VMCA Default Certificates with External SSL Certificates (Hybrid Mode)

This method replaces the Platform Services Controller and vCenter Server Appliance SSL certificates, and allows VMCA to manage certificates for solution users and ESXi hosts. For high-security-conscious deployments, you can replace the ESXi host SSL certificates as well. This method is simple: VMCA manages the internal certificates, and you get the benefit of using your corporate-approved SSL certificates, which are trusted by your browsers.

Here we are discussing the hybrid mode; this is VMware’s recommended deployment model for certificates, as it provides a good level of security. In this model only the Machine SSL certificate is signed by the CA and replaced on the vCenter Server, while the solution user and ESXi host certificates are distributed by the VMCA.

I guess back then I did the whole thing, whereas today I’m just going to change the cert that handles the web interface, which is all I really care about in this case.

Requirements

  • Working PKI based on Active directory Certificate Server.
  • Certificate Server should have a valid Template for vSphere environment
    Note: he uses a custom template he creates; I simply use the Web Server template built into ADCS.
  • vCenter Server Appliance with root Access

Requesting the Certificate

Now, requesting the certificate requires shell access. I recommend enabling SSH for ease of copying data (and running commands) to and from the VCSA.

To do this, log into the physical console of the VCSA; in my case it’s a VM, so I opened up the console from the VCSA web interface. Press F2 to log in.

Enable both SSH and BASH Shell

OK, now we can SSH into the host to make life easier (I used PuTTY):

Run

 /usr/lib/vmware-vmca/bin/certificate-manager

and select the operation option 1

Specify the following options:

  • Output directory path: the path where the private key and the request will be generated
  • Country: your country, in two letters
  • Name: the FQDN of your vCSA
  • Organization: an organization name
  • OrgUnit: the name of your unit
  • State: your state or province
  • Locality: your city
  • IPAddress: the vCSA IP address
  • Email: your e-mail address
  • Hostname: the FQDN of your vCSA
  • VMCA Name: the FQDN where your VMCA is located, usually the vCSA FQDN

Once the private key and the request are generated, select option 2 to exit.

Next we have to export the request and key from that location.

There are several options for how to complete this. Option 1 is how our source did it…

Option 1 (WinSCP)

We’ll be using WinSCP for this operation.

To perform the export we need an additional permission on the VCSA: changing root’s default shell to bash so WinSCP can connect. Type the following command to do so:

chsh -s /bin/bash root

Once connected to the vCSA from the WinSCP tool, navigate to the path you specified in the request and download the vmca_issued_csr.csr file.

Option 2 (cat)

Simply cat the CSR file and use the mouse to highlight the contents, then paste it into the ADCS request textbox field.

Signing The Request

Now you simply navigate to your signing certificate authority’s web interface. Usually you hope that the PKI admin has secured this with TLS and is not just using HTTP like our source, but instead uses https://FQDN/certsrv or just https://hostname/certsrv.

Now we want to request a certificate, an advanced certificate request…

Now simply submit, and from the next page select the Base 64 encoded option and download the certificate and certificate chain.

Note: you have to export the chain certificate to a .cer extension; by default it will be PKCS#7.

Open the chain file with a right click or double click, navigate to the certificate -> right click -> All Tasks -> Export, and save it as filename.cer.

Now that we have our signed certificate and chains, let’s get to importing them back into the VCSA.

Importing the Certificates

Again there are two options here:

Option 1 (WinSCP)

Again, we’ll be using WinSCP for this operation.

To perform the import we need the same additional permission on the VCSA as before:

chsh -s /bin/bash root

Once connected to the vCSA from the WinSCP tool, navigate to the path you used for the request and upload the certnew.cer file, along with any chain CA certs.

Option 2 (cat)

Simply open the CER file in Notepad and use the mouse to highlight the contents, then paste it into a file on the VCSA over the PuTTY session.

E.G

vim /tmp/certnew.cer

Press i for insert mode. Right click to paste. Press ESC to change modes, then :wq to save.

Run

 /usr/lib/vmware-vmca/bin/certificate-manager

and select the operation option 1

Enter administrator credentials and enter option number 2.

Add the exported certificate and generated key paths from the previous steps, and press Y to confirm the change:

Custom certificate for machine SSL: Path to the chain of certificate (srv.cer here)
Valid custom key for machine SSL: Path to the .key file generated earlier.
Signing certificate of the machine SSL certificate: Path to the certificate of the Root CA (root.cer , generated base64 encoded certificate).

Piss, what did I miss…

That doesn’t mean shit to me… “PC Load Letter, wtf does that mean!?”

Googling, the answer was rather clear! Thanks, DigiCert!

Since I have an intermediate CA, and I was supplying either the intermediate or the offline root cert alone, it would fail… I needed them both in one file. So I opened each .cer and pasted the contents into one file, “signedca.cer”.
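If you’d rather do it on the VCSA itself, the same thing can be done with cat; the file names here are my own labels, not anything the certificate manager requires:

cat intermediate.cer root.cer > /tmp/signedca.cer    # intermediate first, then the root, both Base64 (PEM) encoded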

Now, this did take a while, hanging mostly around 70% and 85%, but then it did complete!

Checking out the web interface…

Look at that green lock, and I even see the IP listed in the SAN… mhm, does that mean…

Awwww yeah!!! Even navigating to the VCSA by IP, it’s still secure! Woop!

Conclusion

Changing the certificate in vCenter 6.7 is much more flexible and easier using the hybrid approach, and I say thumbs up. 😀 Thanks VMware.

Ohhh yea! Make sure you update your inventory hosts in your backup software with the new certificate, else you may get errors attempting backup and restore operations, as I did with Veeam. It was super easy to fix: just revalidate the host under the inventory area by going through the wizard for host configuration.

NTFS Permissions and the Oddities

NTFS Permissions

What is NTFS?

NTFS is a high-performance and self-healing file system proprietary to Windows NT, 2000, XP, Vista, Windows 7, Windows 8, and Windows 10 desktop systems, and is commonly used on Windows Server 2016, 2012, 2008, 2003, 2000 & NT Server. The NTFS file system supports file-level security, transactions, encryption, compression, auditing and much more. It also supports large volumes and powerful storage solutions such as RAID/LDM. The most important features of NTFS are data integrity (transaction journal), the ability to encrypt files and folders to protect your sensitive data, and great flexibility in data handling.

Cool, now that we got that out of the way: file systems require access controls, and believe it or not, that’s controlled using lists called Access Control Lists (ACLs). Huh, who would of thunk it. ACLs either allow or deny permissions on the files and folders in the file system.

So far nothing odd or crazy here… There can come times when a user has multiple permissions on a resource from alternative sources, e.g. explicit vs. inherited; precedence then determines whether the action is allowed or denied.

A little more intricate, but still nothing odd here. However, good reference material. Up next, another tidbit required to understand the oddities I will discuss.

File Explorer (explorer.exe)

If you’re an in-depth sysadmin you may know that by default (Windows 7+) you cannot run File Explorer (explorer.exe) as an admin, or elevated. References one and two. Now, in the second one there is a workaround, but I have not tested this, though I probably will for my next blog post. For now, the main thing to know is that you can’t run explorer elevated by default.

Next!

User Account Control (UAC)

So, again talking Windows 7 onward here, Microsoft made NTFS more secure by having the OS utilize User Account Control for when elevated rights are required. For we all follow best practice and use separate admin and standard accounts, right? To keep it short, it’s the lil pop-up asking “Are you sure you want to run this?” if you have the ability to run elevated, or a credential pop-up dialog if you do not.

You can view the “Tasks that trigger a UAC prompt” section of the wiki to get an idea of when. (Pretty much anytime you require a system-level event.)

However, I’m going to bring attention to this specific one:

Viewing or changing another user’s folders and files

Oddity #1

This brings up our first oddity. If I were to ask you the following question:

You are logged on as an admin on a workstation. You open File Explorer and navigate to a folder on which you have neither explicit nor inherited permissions. When you double click this folder you are presented with a UAC prompt; what does clicking “Continue” do?

A) Clicking Continue causes UAC to temporarily run explorer elevated and navigate into the folder.

B) Clicking Continue will take the currently logged-on user’s Security Identifier (SID) and append it to the folder’s ACL.

Now, if you are following along closely, we already discussed that A) isn’t even a viable option, which means the answer is none other than B…

 

Yup, marvel at it… dirty ACLs everywhere. Now, do note I had to break inheritance from the parent folder in order to restrict normal access, which makes sense when you’re navigating folders in File Explorer as an admin already. But this information is still good to know if you do come across this when you are working in an elevated user session.

Also note: IF the folder’s owner is SYSTEM or TrustedInstaller, clicking Continue will not work and you’ll get an error, because this action will not take ownership of a folder, only grant access, and without the rights to grant those permissions it will still fail; even though there’s nothing stopping you from using takeown or File Explorer to actually grant your account ownership.

Oddity #2

This is the one I really wanted to cover in this blog post. You may have noticed that I stated I broke inheritance; this is generally not best practice and should usually be done only as a last resort when it comes to permission management. However, it does come around as a solution to access control when it really needs to be super granular.

I had created a TechNet post asking how to restore volume ACLs, to which no good answers came about. So what I ended up doing was simply adding a new disk to a VM and checking out its permissions.

Now, if you look closely you’ll notice 3 lines specifying specific access rights for the group “Users”. On a workstation these permissions make perfect sense: a user has the right to read and execute files (needed just to use the system), create folders of which they are the owners (what good is a workstation if you can’t organize your work), and create files and write data (what good’s a workstation if you can’t save your work).

However, you might think: bah, this will be a server (I’ll harden it so standard users can’t have interactive logon), so along with the traversal bypass granted by default, users should have access to only the specific folders on which they are explicitly granted rights, and by default will not have any access rights inherited.

Removing Users still leaves the Administrators group with Full Control rights, and you are a member of that group by domain inheritance, so all is good, right? Sounds gravy until… you realize that as soon as you removed the “Users” entries from the ACLs, your admin account’s inherited access rights were revoked.

Inside the disk was a folder, “Test”, as you can see by its inherited ACLs:

Now this is where it gets weird. It would be safe to assume that my domain admin account, which I’m logged in as, is part of the built-in Administrators group… as demonstrated by this drawing here:

Which is also proven by the fact that I can run CMD and other applications elevated via the UAC prompt, and I simply click Yes instead of getting a credential box.

Now, wouldn’t it be safe to assume that since Administrators have Full Control on the folder in question (clearly shown above), we should be able to traverse the folder, right? It’s a basic operation for someone with “Full Control”… and… awwww, would you look at that? Just look at it! Look at it!

It’s a big ol’ UAC prompt. Now why would we get that if we have inherited permission? We already know what it’s going to do… that is, grant my account’s SID permissions, but why? I have inherited Full Control through Administrators, don’t I? And sure enough, clicking Continue…

Well, that’s super weird. I’ll skip past a lot of my trial-and-error tasks and make the claim: it literally comes down to one ACL entry that magically makes inheritance work like it’s supposed to…

Believe it or not, that’s it… that’s the magical ACL entry on a folder that will make File Explorer actually adhere to inherited permissions. Literally… granting S-1-5-32-545 (Users) “List folder / read data” permission on the folder, and now as an admin I can traverse the folder without a UAC prompt, and without explicit permissions…

Oddity #3

So I’m like, alright, I’m liking this, I’m learning new things, things are getting weird… and I like weird, so I decided, like, YO! Let’s create some folders and see how things play out when I dickery-do with those nasty little ACLs, you know what I mean?

 

This stuff’s too clean, you know what I mean: all nicely inherited, a user owner. Nah, let’s change things up on this one. SYSTEM, you got ownership. And you know what… all regular users… yer gone. You know what that means… inheritance, who needs that. This is security, deeerrrrr…

Awww yeah, and sure enough, trying to traverse the folder gives a UAC prompt and grants my account explicit permissions; there go those clean ACLs.

Answer to the Whole Thing

Turns out I was thinking about this all day at work; I couldn’t get it. It honestly felt like somehow all access rights were being granted by the “Users” group only… as if… they are… using the lowest common denominator… like it can’t… run elevated! DOH!

The answer has been staring me in the face the whole freaking time!

I already stated, “If you’re an in-depth sysadmin you may know that by default (Windows 7+) you cannot run File Explorer (explorer.exe) as an admin, or elevated.”

I’m expecting to do tasks via explorer through an account I have inheritance from, BUT the group I’m expecting to grant me the rights is an elevated-rights group, “Administrators”… like, DOH!

So the easy fix is to create any random security group in the domain, add users accordingly into that group, and grant that group Full Control over the folder, sub-folders and files (even make the group the owner of said folders and subfolders). Then, sure enough, everything works as expected.
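As a minimal sketch of that fix from an elevated prompt (the group name "File Share Admins" and the D:\DATA path are my examples, not required names):

icacls "D:\DATA" /grant "File Share Admins:(OI)(CI)F"    # full control, inherited by subfolders and files
icacls "D:\DATA" /setowner "File Share Admins" /T        # make the group the owner, recursively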

For Example

I added my admin account into this group. Then, on the file server: leave the D:\ disk permissions in place, and create a folder in which other folders can be created and shared accordingly; in this case, teehee, let’s call it DATA.

Sure enough, no surprise it looks like this…

Everything as it should be: I created the folder, my account’s the owner, I have inherited Full Control because I am the owner, and all other permissions have been granted by the base disk, besides the one permission which was configured at the disk level to be “this folder only”, so all is good.

Now I did some quick searching on how to restrict access without breaking inheritance, and overall most responses were “even though it’s best practice not to break inheritance, alternative means of access control via denies are even dirtier”.

So here we go: let’s break the inheritance from the disk and remove all Users access. As we discovered, we will initially get UAC prompts if we try to navigate it with our admin account after this, so let’s not do that just yet. It’s now like this (we granted the group above ownership):

Now, since I am a member of this group (I just added my account), I’m going to log off and back on to ensure my group memberships update properly for my Kerberos tickets (TGT, baby) to work.

whoami /groups

I’m so glad I did this, cause my MMC snap-in did not save the changes and I was not in this group after my first re-logon; sure enough, it worked after I fixed it… 🙂

Now, if I navigate the folder I should not get a UAC prompt, cause my request to traverse the folder will be granted via File Share Admins, which is not an elevated SID request, and I’ll be able to create files and folders without interruptions… let’s try…

And there it is: no UAC prompt, all creation options available, and no Users in the folder’s ACLs! Future admins will need to be added to this group, however; if an admin (domain admin or otherwise) attempts to log in and navigate this folder, they will get a UAC prompt and their SID will be auto-appended to all folders, subfolders and files! Let me show you…

Welcome DeadUserAdmin! He’s been granted domain admin rights only, and decided to check out the file server…

As shown in the diagram: the group permissions, and those inherited simply by being a domain admin, such as local admin. Below are the permissions of a file before this domain admin attempts to navigate the folders…

Now, as we learnt, when this admin double clicks the DATA folder, explorer can’t run elevated and can’t grant traverse access via this account’s nested permissions under the Administrators group, and the UAC prompt that appears is granting that SID direct access… let’s follow:

There it is! And sure enough…

Yup every folder, and every file now has this SID in it, and when the user no longer works at the company…

SIDE ERROR: deleting the user’s profile failed due to the path length. (To fix, navigate a couple of folders in, cut a folder, go to the user profile root folder, and paste, to shorten the overall path name.)

So anyway after the user leaves the company and his account gets deleted…

Yay, a whole entire folder/file structure with raw SIDs as principals, cause AD can’t resolve them anymore; they have been deleted. So how does an admin now fix DeadUserAdmin’s undesired effects?

Navigate to the root DATA folder properties, Security tab, advanced settings, and remove the SID…

Be careful with the checkbox at the bottom (Replace all child permissions); use this with caution, as it can do some damage if other folders down the line have broken inheritance and specific permissions. In this case all folders and files inherit from this base DATA folder, and thus…

All get removed. If there are other folders with broken inheritance, then an audit is required of all folders, their resources, their purposes, and who’s supposed to have access.

Another option is to nest Domain Admins into File Share Admins; then it all works well too.

I hope this blog post has helped someone.

Email Scamming

The Story

Everyone loves a good story, ehhhhhhhhhhhh.

Anyway, I was sitting around playing a new puzzle game I picked up, The Talos Principle, enjoying it very much, when my phone goes off: just another email. Looking at the subject did have me intrigued (while also instantly alerting me that it’s a scam). Now, I plan to cover this blog post in two parts: one in which I cover the basics of catching “red flags” and how to spot these types of emails for the basic user, and two, a more technically in-depth part for those that happen to be admins of some kind. Let’s begin.

The Email in Question

Now looking right at this it may not scream out at you, but I’ll point them all out.

First Red Flag

First off, the subject: the first thing anyone sees when they get an email, and in this case it’s designed to grab attention. “Order of a Premium Account”? What, I didn’t order any premium account. So the inclination is to open the email to find out more. Most of the time this is a safe move to make, but I’m sure hackers could make it in at this point if it was an APT (Advanced Persistent Threat) and they really wanted to target you; in this case, not likely. This in itself isn’t a red flag, as many legit emails can be of high importance and the sender could use alerting terms to ensure action is taken when time is of the essence. However, it’s still a tactic used by the perpetrator.

Second Red Flag

So what does the body tell us? In this case it is a clear and definitive “red flag”: vague, and requesting the user to open an attachment for more details. This is the biggest red flag; the body should contain enough information for the recipient to understand exactly why an attachment would be there.

Third Red Flag

Now, mixing the two together we get another “red flag”: the subject was for a premium account for a “Diamond Shop App”, whatever that is. I suppose many apps have separate account creation, and thus this isn’t exactly alarming; however, if it was from the Apple Store, I’m assuming the email would follow Apple’s template (which this doesn’t), considering the attachment is labeled “Apple Invoice.doc”. I also don’t use the Apple Store, so for me it was an easy red flag.

Fourth Red Flag

Grammar: “Are you sure to cancel this order, please see attachment for more details. thanks you”. A question ending in a period, followed by a “thanks you” with an s and no capital, and the subject was for an account creation… need I say more?

What now?

OK, so it’s pretty obvious there are some shenanigans goin’ on here. If you’re an end user, this is a good time to send the email (as an attachment) to your IT department. It is important to send the email itself as an attachment to retain the email headers (discussed later in this post) so admins can analyze the original sender details.

Technical Stuff

Now we’re going to get technical, so if you are not a technical person your education session is done; else, keep reading.

Initial Analyses

Yeah you guessed it; VirusTotal.

Well, nyet….

Nothing… OK, let’s analyze the headers quick with MxToolbox

Here we can see it was sent from the domain “retail-payment.com”. They also masked their list of targets by BCCing them all (shady) and pointed the main To address at noreply@apple.com or device@apple.com, which are probably non-existent addresses for Apple, making it look more legit while not actually letting Apple know. What about this sending domain?

Sad, another zero-day domain registration. I was expecting GoDaddy, to be honest; I was rather disappointed to see Wix supporting such rubbish.

What’s next? Joe Sandbox!

At this point it’s clear the file and email are brand-new attempts and not caught by VirusTotal, so what is it attempting to accomplish? I signed up to Joe Sandbox to find out, then submitted the file; I was impressed with the results!

Results…

I’m not sure why an older OS with older Office came back clean, but newer ones showed some results, and when I opened the report I was like, HA!

Neat, looks like the doc had links to some websites, and yeah… the sandbox went there! 😀

Would ya look at that! It looks like the Apple login page; thankfully the URL doesn’t match Apple’s at all, which should be another duh red flag.

OK, who registered that domain?

I have no clue who that registrar is, nor do I know how they’ve managed to keep it alive since the 2000s hosting malicious phishing sites. Sad…

Conclusion

Don’t open up stupid emails, and report them to your admins whenever possible. 😀

Using OpenSSL to convert PKCS12 to PEM

Found from here

openssl pkcs12 -in path.p12 -out newfile.crt.pem -clcerts -nokeys
openssl pkcs12 -in path.p12 -out newfile.key.pem -nocerts -nodes

After that you have:

  • certificate in newfile.crt.pem
  • private key in newfile.key.pem

To put the certificate and key in the same file use the following

openssl pkcs12 -in path.p12 -out newfile.pem

If you need to input the PKCS#12 password directly from the command line (e.g. a script), just add -passin pass:${PASSWORD}:

openssl pkcs12 -in path.p12 -out newfile.crt.pem -clcerts -nokeys -passin 'pass:P@s5w0rD'
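As a sanity check (my own habit, not from the source, and assuming an RSA key), you can confirm the cert and key actually match by comparing their modulus hashes; the two MD5 sums should be identical:

openssl x509 -noout -modulus -in newfile.crt.pem | openssl md5
openssl rsa -noout -modulus -in newfile.key.pem | openssl md5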

Thanks KMX